I've said for a couple of years that high-end automotive companies are going to have to do what the high-end mobile companies did and build their own application processors. It is the only way for them to get differentiation and build a product that does exactly what they need. Well, at Tesla's recent "Autonomy Day", they announced that they had done just that. In fact, Elon Musk even said that it has been shipping in some models for a couple of months already, replacing an NVIDIA-based system in the older vehicles (which will be upgraded for people who bought the right options).
One criticism I've seen leveled at Tesla is "how can a car manufacturer build a chip?" But I think that misunderstands Tesla. They are mostly an electronic system and software company. In fact, if there's one part of their business they've struggled with, it's the car manufacturing thing. Their big advantage is total focus on electric power trains and computerized driving. They have no internal combustion engine expertise to protect, no financially sunk engine plants, no brand name associated with engines that they have to move away from.
I wasn't there, it was for financial analysts mostly. But they live-streamed and recorded the whole thing and I think it is well worth a watch. One thing I found surprising was just how much information they revealed about the chip in terms of performance, power, how long it took to design, and more. It is in Samsung's 14nm process and is manufactured in their Austin fab. They call it the FSD Computer, where FSD stands for Full Self-Driving.
The video, embedded below, is nearly four hours long. If you want the chip stuff, then skip to 1:10:00. I'm just going to give a bit of color about the chip stuff, but the next section about how they train the neural networks, and the one after that about their software development, are both interesting, too.
Pete Bannon, the head of Autopilot Hardware, has a background at P.A. Semi and joined Apple when it acquired the whole company to kick-start a world-class IC design team. He was lead on the iPhone 6 design.
The goals were to design a chip focused entirely on Tesla's needs for automated driving. They wanted a lower chip cost, so they could add redundancy (see the picture at the start of this post—those two chips are identical redundant FSD Computers). They needed a batch size of 1 (that's a neural network thing: the chip can start processing an image as soon as it starts to arrive, without having to build up a big batch of data first). Power under 100W. At least 50 tera operations per second (TOPS). And, obviously, safety and security.
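To see why batch size 1 matters for a car, here is a minimal sketch of the latency cost of batching (the frame rate is a hypothetical illustrative figure, not Tesla's):

```python
def batch_fill_delay(batch_size: int, frame_interval_s: float) -> float:
    """Time the first frame sits idle waiting for the batch to fill
    before any compute can start."""
    return (batch_size - 1) * frame_interval_s

# Hypothetical 30fps camera, purely for illustration.
interval = 1 / 30
print(batch_fill_delay(1, interval))  # batch size 1: 0.0s, process immediately
print(batch_fill_delay(8, interval))  # batch size 8: ~0.23s added before any compute
```

At highway speeds, a quarter of a second of added latency is several meters of travel, which is why a datacenter-style large batch is a non-starter here.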
Here it is. It is not insanely huge by modern standards: 6 billion transistors, 250 million gates (I'm not sure whether that means 250 million logic gates plus 6 billion memory transistors, or whether about a billion of those transistors make up the 250 million gates, but you can see the order of magnitude). One challenge is that they had to be able to retrofit cars and fit it into the space where the old unit went, between the back of the glovebox and the firewall. That was another reason the power had to be low, since it is a thermally challenging environment.
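As a rough sanity check on how those two numbers could fit together (the four-transistors-per-gate figure is a generic CMOS rule of thumb, not something Tesla stated):

```python
total_transistors = 6e9     # quoted chip total
gates = 250e6               # quoted gate count
transistors_per_gate = 4    # rough rule of thumb (a NAND2 is 4 transistors)

logic_transistors = gates * transistors_per_gate
other = total_transistors - logic_transistors

print(logic_transistors / 1e9)  # 1.0 billion transistors in logic
print(other / 1e9)              # ~5 billion left over, plausibly mostly SRAM
```

So the second reading (the gates are a subset of the 6 billion) is at least arithmetically plausible.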
The neural network accelerator does 8-bit multiplies with 32-bit adds for accumulation, which seems to be the new normal in edge devices that don't need full 32-bit floating point. It has a 96×96 MAC array. There are two on each chip (plus another two on the redundant chip) and together they achieve 72 TOPS. The FSD Computer runs at 72W, with the neural network accelerators accounting for 15W of that, so they made their 100W goal.
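Those numbers hang together: counting each MAC as two operations (a multiply and an add), and assuming an accelerator clock of around 2GHz (my assumption, not stated in this post), the arithmetic lands close to the quoted figure:

```python
macs = 96 * 96        # MAC units in one accelerator array
ops_per_mac = 2       # each MAC counts as a multiply plus an add
clock_hz = 2e9        # assumed 2GHz accelerator clock
npus_per_chip = 2     # two accelerators per chip

tops = macs * ops_per_mac * clock_hz * npus_per_chip / 1e12
print(f"{tops:.1f} TOPS per chip")  # 73.7 TOPS, close to the quoted 72
```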
They claim it is much faster than the NVIDIA chip, although I always take these comparisons with a grain of salt, since often they compare a brand-new design against the competition's previous generation in an older process node. But I'm sure that it does what they want much better than retargeting a general-purpose GPU would. Also, they claim 144 TOPS in the comparison, but that includes the second, redundant chip, so while it is technically true, only half of that is actually available for processing the cameras for driving.
If you want more details about the chip (lots more pictures), their neural network architecture, and more...well, there's a lot. The chip stuff is about 30 minutes long, well worth your time. There's a Q&A but, again, these are financial analysts covering automotive, so they're a bit bewildered by semiconductors. One asks whether they are manufacturing the chip themselves or contracting it out, as if Tesla might have built a 14nm fab. Another analyst asks if they are worried about getting enough supply. Somehow, Tesla's volumes of around 2,000 cars per week aren't going to make much of a dent in what I believe is the largest fab in the US, with a volume in the 50,000 wafer starts per month range. And it's not even a big chip (250mm²).
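To put rough numbers on the supply question (the edge-loss and yield figures here are my own back-of-the-envelope guesses; the two chips per car come from the redundant design):

```python
import math

# Dies per wafer, ignoring edge loss (overestimates slightly).
die_area_mm2 = 250
wafer_radius_mm = 150  # 300mm wafer
gross_dies = math.floor(math.pi * wafer_radius_mm**2 / die_area_mm2)

# Tesla's demand, from the figures in the post.
cars_per_week = 2000
chips_per_car = 2      # two redundant FSD chips per car
chips_per_month = cars_per_week * chips_per_car * 52 / 12

yield_fraction = 0.8   # assumed, purely illustrative
good_dies_per_wafer = gross_dies * yield_fraction
wafers_per_month = chips_per_month / good_dies_per_wafer

print(round(wafers_per_month))             # ~77 wafers per month
print(f"{wafers_per_month / 50_000:.2%}")  # ~0.15% of a 50,000 wafer/month fab
```

Under any reasonable yield assumption, Tesla's demand is a rounding error for a fab that size.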
There is a second design, started about a year ago. They didn't say much about it, other than that it is not a chiplet-based design. Oh, and they still have no plans to use lidar.
Sign up for Sunday Brunch, the weekly Breakfast Bytes email.