At the recent TSMC OIP Symposium, Cadence's Tom Wong presented Sensor Fusion and ADAS SoC Designs in TSMC 16FFC and N7. These are the "compact" 16nm process and the mainline 7nm process, the two processes TSMC selected for additional characterization and manufacturing tracking to support the automotive end-markets.
There are four big drivers in automotive electronics:
In this post, I am going to use the term EV for electric vehicles. In China, these are called NEV, for new energy vehicle. Hybrids, whether chargeable or not, are also in there with similar requirements (along with the need for an internal combustion engine, which is not today's topic). I'm also going to assume you know your autonomous driving levels, and the tiered nature of the traditional automotive supply chain. If not, then see my post Automotive Industry Basics.
For the people attending OIP, who are mostly from the semiconductor industry of course, the big deal is that more SoCs are used in cars. Not just more semiconductors, which historically were low-complexity designs in mature processes, but SoCs in 16nm and 7nm. The old automotive electronics were mostly about body, powertrain, and chassis; in EVs, these functions are being replaced by electric powertrains. In all vehicles, there is growth in the infotainment and digital cockpit, along with all the ADAS/autonomous capabilities with their stringent safety requirements.
The sensor requirements for level 2 and 3 (and 2+) ADAS are camera and radar (and ultrasound for parking). For levels 3 to 5, they are camera, radar, and lidar. Today, lidar is still at a price point where each unit costs thousands of dollars, clearly too expensive for commercial deployment. One big trend in sensors is the increasing performance and resolution of radar. The big question is whether radar is getting better faster than lidar is getting cheaper, and thus whether lidar will ever be deployed in mainstream vehicles.
Today's sensor capabilities, in the above table, are good for distance measurement, traffic signs, lane detection, segmentation, and mapping. A couple of critical points: only cameras can "see" traffic lights, and only radar can cut through rain and fog. So you will always need cameras and radar, even if you have lidar and ultrasound (which is only short range).
All radar is not created equal, and there is a big difference between current radar and the capabilities of next-generation radar. Short-range radar can replace ultrasound. Medium-range radar can detect cars alongside, for lane-change and blind-spot detection. Long-range radar can detect cars in front, and their speed and direction, for adaptive cruise control and, eventually, higher levels of autonomous driving. The pictures below show radar today on the left (poor), next-generation radar in the center, and lidar on the right. As I said near the start of this post, the big question is whether radar is getting better fast enough to make lidar unnecessary.
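One reason radar is improving so quickly is that range resolution scales directly with chirp bandwidth: for an FMCW radar, the resolution is c/(2B). Here is a small sketch of that relationship; the two bandwidth values are illustrative assumptions, loosely representing a narrowband long-range chirp versus a wideband chirp in the 76-81 GHz automotive band.

```python
C = 3.0e8  # speed of light, m/s

def range_resolution_m(bandwidth_hz):
    """FMCW range resolution: smallest separation at which two
    targets can be distinguished in range, c / (2 * B)."""
    return C / (2 * bandwidth_hz)

# Illustrative chirp bandwidths (not from any specific product):
print(range_resolution_m(0.5e9))  # 500 MHz chirp -> 0.30 m
print(range_resolution_m(4.0e9))  # 4 GHz chirp   -> 0.0375 m
```

A wider chirp thus resolves targets a few centimeters apart rather than a few tens of centimeters apart, which is why next-generation radar point clouds start to look more lidar-like.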
Another big change is the move toward having more centralized sensor fusion. Instead of each sensor having its own signal processing and then things like object recognition, the basic signal processing is done in a central unit, and then the perception and decision process is also centralized. This requires much higher bandwidth in-car networks. It is still debatable just how much processing should be done at the sensor, the tradeoff being duplicating sensor processing at each sensor versus network bandwidth and reliability requirements.
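The network-bandwidth side of that tradeoff is easy to see with rough arithmetic. The numbers below are illustrative assumptions (one 1080p, 12-bit, 30 fps camera versus a hypothetical 100-object list at 30 Hz), not figures from the presentation:

```python
def mbps(bits_per_second):
    """Convert bits/s to megabits/s."""
    return bits_per_second / 1e6

# Raw sensor stream shipped to a central unit:
# 1920x1080 pixels, 12 bits per pixel, 30 frames per second (assumed).
raw_camera_bps = 1920 * 1080 * 12 * 30

# Object list after local processing at the sensor:
# 100 objects, 32 bytes each, refreshed at 30 Hz (assumed).
object_list_bps = 100 * 32 * 8 * 30

print(f"raw stream:  {mbps(raw_camera_bps):8.1f} Mb/s")   # ~746.5 Mb/s
print(f"object list: {mbps(object_list_bps):8.3f} Mb/s")  # ~0.768 Mb/s
```

Under these assumptions, centralizing raw data costs roughly three orders of magnitude more network bandwidth per sensor than shipping processed object lists, which is why the in-car network has to get so much faster to support centralized fusion.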
Obviously, as ADAS and autonomous driving improve their capabilities and become mainstream even in cheap vehicles, the sensor market will grow tremendously. The table below shows the explosive growth that is forecast, more than tripling between 2020 and 2030. I have to point out that I think this is extremely speculative, since we don't really have a consensus on what level of autonomy we will get to in a decade. I've gone from being optimistic that my next car would be truly self-driving, to not expecting to see level 5 cars in my lifetime. Cars have steering wheels and always will.
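For a sense of scale, tripling over a decade corresponds to a compound annual growth rate of about 11.6%, which is a quick one-liner to check:

```python
# Implied CAGR for a market that triples over ten years:
# growth_multiple ** (1 / years) - 1
growth_multiple = 3.0
years = 10
cagr = growth_multiple ** (1 / years) - 1
print(f"{cagr:.1%}")  # ~11.6%
```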
Cadence has a broad portfolio of automotive IP in both 16FFC and N7, as shown in the table (green columns = today; rightmost columns = future). Some additional notes:
The other part of Cadence's automotive IP product line is the specialized Tensilica processors: the Vision series (for...er...vision); the Fusion and ConnX series for radar, lidar, and ultrasound; and the DNA 100 for deep learning and neural network inference.
For more about these processors, see my posts:
Sign up for Sunday Brunch, the weekly Breakfast Bytes email.