Paul McLellan

Automotive at Linley: Intelligent Vehicles and Intelligent Intersections

31 Oct 2016 • 5 minute read

[Image: objects in mirror are closer than they appear]

The whole afternoon of the second day of the Linley Processor Conference was dedicated to automotive. To show how important this has become, what used to be the Mobile Processor Conference is no more; in 2017 there will be an Autonomous Processor Conference, which will inevitably be focused on automotive (although there can be autonomous drones and other things too).

Mike Demler, Linley

Linley's Mike Demler kicked things off with a review of where we are. The National Highway Traffic Safety Administration (NHTSA) and the Society of Automotive Engineers (SAE) have defined different levels of autonomy.

[Image: levels of autonomy]

The scale goes from 0, manual control, up to 4 (or 5 for SAE), fully autonomous under all conditions. I'm not sure where the early 19th century's "horse that could take you home from the bar" falls on this scale. Products like Tesla's Autopilot are level 2, and Google's test vehicles are level 3. SAE level 4 is fully autonomous but may require human control under some conditions (at night or in snow, for example); a rough mapping of the levels is sketched in code after the list below. When you consider that 12 years ago the furthest an autonomous vehicle could get in the DARPA Grand Challenge was 8 miles (with no other traffic), things are advancing faster than I think anyone predicted. Some recent announcements (and, of course, announcing something is not the same as delivering it):

  • Ford targets full autonomy for 2021 for ride-sharing (not sure why ride-sharing is different from other types of driving)
  • GM and Lyft to launch self-driving taxis in 2019
  • Audi expected to launch level 3 model in 2018 (see my post Piloted Driving: Audi's View)
  • Tesla plans self-driving network with massive fleet learning
  • First self-driving Uber prototypes have rolled out in Pittsburgh
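
As a quick reference for the levels discussed above, here is a minimal sketch in Python; the names and one-line descriptions are my own paraphrase for illustration, not the normative SAE J3016 wording:

```python
from enum import IntEnum

class SAELevel(IntEnum):
    """SAE driving automation levels, paraphrased for illustration only."""
    NO_AUTOMATION = 0           # human driver does everything
    DRIVER_ASSISTANCE = 1       # system assists with steering OR speed
    PARTIAL_AUTOMATION = 2      # steering AND speed assist; driver must monitor (e.g., Autopilot)
    CONDITIONAL_AUTOMATION = 3  # system monitors the road; driver must take over when asked
    HIGH_AUTOMATION = 4         # fully autonomous within a limited domain (e.g., not at night or in snow)
    FULL_AUTOMATION = 5         # fully autonomous under all conditions

def driver_must_monitor(level: SAELevel) -> bool:
    # Up to level 2, the human driver remains responsible for watching the road.
    return level <= SAELevel.PARTIAL_AUTOMATION
```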

Autonomous vehicles create two major challenges. The automotive ecosystem knows lots about cars [citation needed], but little about designing semiconductors in advanced nodes. The semiconductor ecosystem (and various system companies) knows lots about designing semiconductors in advanced nodes, but nothing about automotive reliability and functional safety requirements. However, autonomous driving requires a lot of compute power and high-bandwidth networking, meaning that there is no escape from designing in advanced processes and working out how to give them the reliability that automotive requires. It is perhaps the most extreme market of all, requiring extended temperature ranges, 20-year lifetimes, high performance, and low(ish) power.

The biggest challenge is enabling vehicles to see. There are four ways to do this:

  • Cameras: medium range, up to 150 m; poor in rain and fog; need IR for nighttime
  • Ultrasonic: short range, under 10 m; mainly used for parking, etc.
  • Radar: long range, around 200 m; works in rain and fog; unaffected by darkness
  • Lidar: medium range, around 100 m; affected by rain and fog, but unaffected by darkness

Combining inputs from all of these is known as sensor fusion. There are many tasks to be done, from recognizing traffic signs, pedestrians, lane markings, roadway edges, and other vehicles to reading traffic lights. For now, the assumption is that the vehicles will get no help from the infrastructure. Even if a decision is made to, for example, make all traffic lights broadcast their status wirelessly to vehicles, it will take many years to convert them all. "How many traffic lights are there in the US?" sounds like one of those apocryphal Microsoft interview questions. Googling it, the answer appears to be about 311,000 (as of 2014).
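
To make "sensor fusion" slightly more concrete, here is a minimal sketch (hypothetical class names and confidence weights, not any vendor's actual stack) that merges range estimates for one tracked object by weighting each sensor according to how much it can be trusted in the current conditions:

```python
from dataclasses import dataclass

@dataclass
class Detection:
    sensor: str        # "camera", "radar", "lidar", or "ultrasonic"
    range_m: float     # estimated distance to the object in meters
    confidence: float  # 0.0 - 1.0, assumed reliability of this sensor right now

def fuse_range(detections: list[Detection]) -> float:
    """Confidence-weighted average of range estimates for one object.

    A production stack would use Kalman filters, track association, etc.;
    this only illustrates the idea of combining heterogeneous sensors.
    """
    total_weight = sum(d.confidence for d in detections)
    if total_weight == 0:
        raise ValueError("no usable detections")
    return sum(d.range_m * d.confidence for d in detections) / total_weight

# Example: in fog, radar is trusted more and the camera less.
print(fuse_range([
    Detection("camera", 95.0, 0.3),
    Detection("radar", 102.0, 0.9),
    Detection("lidar", 98.0, 0.5),
]))  # -> roughly 100 m
```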

Another key enabling technology that is advancing fast is neural networks, sometimes called deep learning. While training can be done in the cloud, autonomous vehicles must be autonomous not only in the sense of requiring no driver, but also in not requiring continuous low-latency network connectivity to the cloud. The algorithms need to run in the vehicle.
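
As a sketch of that split between cloud training and in-vehicle inference (the file name, layer names, and task here are hypothetical), the vehicle only needs the frozen weights and a forward pass that runs locally:

```python
import numpy as np

# Weights are trained offline in the data center and shipped with the vehicle;
# classification then runs entirely on the in-vehicle processor, with no cloud
# round trip. "traffic_sign_classifier_weights.npz" is a made-up file name.
weights = np.load("traffic_sign_classifier_weights.npz")

def relu(x: np.ndarray) -> np.ndarray:
    return np.maximum(x, 0.0)

def classify_sign(image_features: np.ndarray) -> int:
    """Forward pass of a tiny two-layer network; returns the predicted class index."""
    hidden = relu(image_features @ weights["w1"] + weights["b1"])
    logits = hidden @ weights["w2"] + weights["b2"]
    return int(np.argmax(logits))
```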

Automotive is a slow-growing market, with a CAGR of just 3% and roughly 100 million vehicles produced per year. But there is massive growth in ADAS, so the slow growth of the market itself is misleading; the growth in automotive electronics is several times bigger. In fact, the vehicle market could shrink dramatically as vehicles become truly autonomous, if many of us decide that we don't need to own our own car any more than we need to own our own airplane (a handful of billionaires excepted). The two big ADAS features driving adoption are AEB (automatic emergency braking) and LKA (lane keeping assist), which are likely to become mandatory for a high safety rating. In fact, AEB is likely to be mandated in the US by 2022. Watch it work in the real world (15 seconds):

Geoff Waters, NXP

Mike was followed by Geoff Waters of NXP. Now that Freescale is part of NXP, they are the leader in automotive semiconductors (assuming Qualcomm hasn't acquired them, about which there are rumors as I type this). His talk was titled Processing for Intelligent Transportation Systems Infrastructure.

[Image: intelligent vehicles and intersections]

He started by pointing out that NXP doesn't just care about the V, the Vehicle, but also the I, the Intelligent Intersection.

He discussed ITS-G5 in Europe and WAVE in the US, the dedicated short-range communication (DSRC) standards that are an important part of both vehicle-to-vehicle (V2V) and vehicle-to-infrastructure (V2I) communication. DSRC has dedicated spectrum from 5.850 GHz to 5.925 GHz, a range of up to 2 km, latency of less than 50 ms, and security in the form of messages signed using a public key infrastructure. He showed a proof-of-concept system for intelligent traffic control.
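
As a rough illustration of that signed-message idea, here is a minimal sketch using an elliptic-curve key pair with the Python cryptography package; the real IEEE 1609.2 certificate and message formats are considerably more involved, and the payload below is made up:

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

# In a real deployment the key pair would be enrolled in the DSRC PKI and
# carried in a certificate; here we just generate one for illustration.
private_key = ec.generate_private_key(ec.SECP256R1())
public_key = private_key.public_key()

message = b"signal phase GREEN, intersection 42"  # hypothetical V2I payload
signature = private_key.sign(message, ec.ECDSA(hashes.SHA256()))

# A receiving vehicle verifies the signature before trusting the message.
try:
    public_key.verify(signature, message, ec.ECDSA(hashes.SHA256()))
    print("message accepted")
except InvalidSignature:
    print("message rejected")
```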


The initial functionality of the proof of concept would be to detect crossings (vehicles entering and leaving the intersection, pedestrians crossing), count vehicle and pedestrian crossings, send statistics to the cloud, detect vehicles queuing for the lights, and sum the queues. Subsequently, the same infrastructure would be used to communicate with the vehicles, taking it to the next stage.
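
A minimal sketch of what those tallies might look like in software (hypothetical types and field names, not NXP's implementation):

```python
from collections import Counter
from dataclasses import dataclass, field

@dataclass
class ApproachStats:
    """Tallies for one approach to the intersection."""
    crossings: Counter = field(default_factory=Counter)  # counts keyed by "vehicle" / "pedestrian"
    queue_length: int = 0                                 # vehicles currently waiting at the light

    def record_crossing(self, kind: str) -> None:
        self.crossings[kind] += 1

def total_queued(approaches: dict[str, ApproachStats]) -> int:
    """Sum the queues across all approaches, e.g., for the report sent to the cloud."""
    return sum(a.queue_length for a in approaches.values())

# Example with two approaches:
north = ApproachStats(queue_length=4)
north.record_crossing("vehicle")
north.record_crossing("pedestrian")
south = ApproachStats(queue_length=7)
print(total_queued({"north": north, "south": south}))  # 11
```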

[Image: intelligent intersection]

During the Q&A, Geoff was asked about trucks. He said that he thought we were close to having a model like cargo ships, where we have pilots. When you see a big cargo ship sailing in through the Golden Gate, it is a local San Francisco pilot, not the crew that sailed it across the Pacific, who is in charge. In the same way, the trucks would drive autonomously, or in convoys, on the freeways across the US or Europe. When they reached the destination, either the driver would take over or a local would come and take over, drive the truck through city streets and back it into the loading dock. He called it the harbor pilot model.

Learn more about Tensilica's processors for imaging, computer vision and convolutional neural networks. Get further details on Cadence's automotive Ethernet family. Explore Cadence's products for automotive functional safety (ISO 26262).
