Paul McLellan


Sensor Fusion and ADAS in TSMC Automotive Processes

11 Oct 2019 • 4 minute read

At the recent TSMC OIP Symposium, Cadence's Tom Wong presented Sensor Fusion and ADAS SoC Designs in TSMC 16FFC and N7. These two processes are the "compact" 16nm process and the mainline 7nm process, the two that TSMC has selected for additional characterization and manufacturing tracking to support the automotive end-markets.

There are four big drivers in automotive electronics:

  • 5G and DSRC (for V2X and cloud communication)
  • Increasingly autonomous driving (broad level 2+ deployment)
  • Vehicle electrification (2M vehicles in China, 1M in the US, battery cost approaching $100/kWh)
  • Smart mobility and ride-sharing (long-term reliability)

In this post, I am going to use the term EV for electric vehicles. In China, these are called NEVs, for new energy vehicles. Hybrids, whether plug-in or not, are also in this category with similar requirements (along with the need for an internal combustion engine, which is not today's topic). I'm also going to assume you know your autonomous driving levels, and the tiered nature of the traditional automotive supply chain. If not, see my post Automotive Industry Basics.

For the people attending OIP, who are mostly from the semiconductor industry of course, the big deal is that more SoCs are used in cars. Not just more semiconductors, which historically were low-complexity designs in mature processes, but SoCs in 16nm and 7nm. The old electronics were largely about body, powertrain, and chassis, and in EVs these are being displaced by electric powertrains. In all vehicles, there is growth in infotainment and the digital cockpit, along with all the ADAS/autonomous capabilities and their stringent safety requirements.

The sensor requirements for level 2 and 3 (and 2+) ADAS are camera and radar (and ultrasound for parking). For levels 3 to 5, they are camera, radar, and lidar. Today, lidar is still at a price point where each unit costs thousands of dollars, clearly too expensive for commercial deployment. One big trend in sensors is the increasing performance and resolution of radar. The big question is whether radar is getting better faster than lidar is getting cheaper, and thus whether lidar will ever be deployed in mainstream vehicles.
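
To make that mapping concrete, here is a minimal Python sketch; the level groupings and suite contents just restate the paragraph above and are not any formal standard:

```python
# A minimal sketch of the sensor suites per ADAS level, restating the
# paragraph above; not an OEM or regulatory specification.
SENSOR_SUITES = {
    "level 2/2+": {"camera", "radar", "ultrasound"},   # ultrasound for parking
    "level 3":    {"camera", "radar", "lidar", "ultrasound"},
    "level 4/5":  {"camera", "radar", "lidar", "ultrasound"},
}

def added_sensors(from_level: str, to_level: str) -> set:
    """Sensors that must be added when moving up a level."""
    return SENSOR_SUITES[to_level] - SENSOR_SUITES[from_level]

print(added_sensors("level 2/2+", "level 3"))  # {'lidar'}
```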

Today's sensor capabilities (summarized in a table in the presentation) are good for distance measurement, traffic signs, lane detection, segmentation, and mapping. A couple of critical points: only cameras can "see" traffic lights, and only radar can cut through rain and fog. So you will always need cameras and radar, even if you also have lidar and ultrasound (which is only short range).

All radar is not created equal, and there is a big difference between current radar and the capabilities of next-generation radar. Short-range radar can replace ultrasound. Medium-range radar can detect cars alongside, for lane-change and blind-spot detection. Long-range radar can detect the cars in front, and their speed and direction, for adaptive cruise control and, eventually, higher levels of autonomous driving. The pictures in the presentation showed today's radar (poor) alongside next-generation radar and lidar. As I said near the start of this post, the big question is whether radar is getting better fast enough to make lidar unnecessary.
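
As a toy illustration of those tiers, here is a small classifier; the range boundaries (30 m / 100 m / 250 m) are my own rough figures for typical short/medium/long-range automotive radar, not numbers from the post:

```python
# A toy classifier for the radar tiers described above. The boundary
# values are my own rough assumptions, not from the presentation.
def radar_tier(target_range_m: float) -> str:
    if target_range_m <= 30:
        return "short-range: parking, can replace ultrasound"
    if target_range_m <= 100:
        return "medium-range: lane-change and blind-spot detection"
    if target_range_m <= 250:
        return "long-range: adaptive cruise control"
    return "beyond typical automotive radar"

for r in (5, 60, 180, 400):
    print(f"{r:>3} m -> {radar_tier(r)}")
```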

Another big change is the move toward more centralized sensor fusion. Instead of each sensor having its own signal processing followed by things like object recognition, the basic signal processing is done in a central unit, and the perception and decision-making are centralized too. This requires much higher-bandwidth in-car networks. It is still debatable just how much processing should be done at the sensor: the tradeoff is duplicating processing hardware at every sensor versus the network bandwidth and reliability required to ship raw sensor data to a central unit.
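
To see why the network bandwidth goes up so much, here is a back-of-envelope sketch in Python. Every figure in it (camera resolution, frame rate, bit depth, object-list size, camera count) is an illustrative assumption of mine, not a number from the presentation:

```python
# Back-of-envelope comparison of in-car network load for the two fusion
# architectures. All figures below are illustrative assumptions.
RAW_CAMERA_BPS  = 1920 * 1080 * 30 * 12   # 1080p30, 12-bit raw ~ 747 Mb/s
OBJECT_LIST_BPS = 100 * 64 * 8 * 30       # 100 objects x 64 bytes, 30 Hz ~ 1.5 Mb/s
NUM_CAMERAS = 8

centralized = NUM_CAMERAS * RAW_CAMERA_BPS    # ship raw pixels to a central unit
distributed = NUM_CAMERAS * OBJECT_LIST_BPS   # process at the sensor, ship objects

print(f"centralized fusion: {centralized / 1e9:.1f} Gb/s")  # ~6.0 Gb/s
print(f"distributed fusion: {distributed / 1e6:.1f} Mb/s")  # ~12.3 Mb/s
```

Even with these modest numbers, streaming raw camera data centrally needs several gigabits per second, while shipping per-sensor object lists needs only megabits; that is the bandwidth-versus-duplication tradeoff in a nutshell.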

Obviously, as ADAS and autonomous driving improve their capabilities and become mainstream even in cheap vehicles, the sensor market will grow tremendously. The forecast table in the presentation shows explosive growth, more than tripling between 2020 and 2030. I have to point out that I think this is extremely speculative, since we don't really have a consensus on what level of autonomy we will reach in a decade. I've gone from being optimistic that my next car would be truly self-driving, to not expecting to see level 5 cars in my lifetime. Cars have steering wheels and always will.

Cadence Automotive IP

Cadence has a broad portfolio of automotive IP in both 16FFC and N7, summarized in a table in the presentation (green columns are available today; the rightmost columns are future). Some additional notes:

  • Infotainment can stay on 16FFC
  • Advanced ADAS is moving to N7
  • Memory interfaces are LPDDR4 at 4266 MT/s, migrating to LPDDR4X and then LPDDR5 (a quick bandwidth sketch follows this list)
  • Ethernet is migrating from 1G to 10G, with TSN (time-sensitive networking) required
  • MIPI over 1-4m links can use D-PHY; a new 15m automotive A-PHY (reaching anywhere in the vehicle) is in development
  • Don't forget reliability and functional safety: ISO 26262 compliance at ASIL-B through ASIL-D, plus AEC-Q100 grades 1 and 2
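
For a sense of scale behind those memory and network notes, here is a quick peak-bandwidth sketch; the 32-bit LPDDR channel width is my assumption, since real SoCs vary in channel count and width:

```python
# Peak-bandwidth arithmetic for the interfaces listed above.
# The 32-bit channel width is an illustrative assumption.
def lpddr_peak_gb_per_s(mt_per_s: int, bus_bits: int = 32) -> float:
    """Peak DRAM bandwidth in GB/s: transfers/s times bytes per transfer."""
    return mt_per_s * (bus_bits / 8) / 1000

print(f"LPDDR4-4266, 32-bit: {lpddr_peak_gb_per_s(4266):.1f} GB/s")  # ~17.1 GB/s
print(f"1G Ethernet:  {1 / 8:.3f} GB/s")   # 0.125 GB/s
print(f"10G Ethernet: {10 / 8:.2f} GB/s")  # 1.25 GB/s
```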

The other part of Cadence's IP product line for automotive is the specialized Tensilica processors: the Vision series (for...er...vision); the Fusion and ConnX series for radar, lidar, and ultrasound; and the DNA 100 for deep learning and neural network inference.

For more about these processors, see my posts:

  • Vision Q7 DSP: Real-Time Vision and AI at the Edge
  • The New Tensilica Fusion G6 DSP
  • Tensilica ConnX B20 for 5G, and Automotive Radar/Lidar
  • The New Tensilica DNA 100 Deep Neural-Network Accelerator

Summary

  • SoCs for levels 2-3 are becoming mainstream
  • Levels 3 and 4 will require new thinking on the evolution of autonomous driving (technical, market, and human-behavior factors)
  • 16FFC will continue to be mainstream for many years
  • 7nm will be needed for perception AI and for acceleration
  • IP will continue to evolve, but the table stakes will not change much in the next few years

 

Sign up for Sunday Brunch, the weekly Breakfast Bytes email.