Paul McLellan


Automotive Sensors: Cameras, Lidar, Radar, Thermal

14 Dec 2018 • 9 minute read

Yesterday I wrote a sort of overview of the Cadence Automotive Summit that took place in November, in the post Automotive Summit: The Road to an Autonomous Future. Today, the focus is on a key part of automated driving, namely sensors.

One of the first non-Cadence presentations of the day was by Manju Hegde, the CEO of Uhnder. It turned out to be a blessing that his company is still mostly in stealth mode, since it meant that his presentation had none of the "my company's solution is the best" aspect. Instead, he gave a wonderful overview of sensors, focused on what he calls the core sensors: camera, radar, and lidar. He did say that he was doing radar, but admitted that when he was raising money, most VCs said to him, "Why aren't you doing lidar? Radar is a solved problem."

Requirements

Manju had a top ten list (actually eleven) of critical aspects of a sensor:

  1. Range. Sufficient to accommodate the vehicle speed.
  2. Field of view (FoV): sufficiently wide to encompass the whole scene. Some of this is driven by regulation for automatic emergency braking: AEB 2018. But the 2020 and 2022 requirements are more stringent and require avoiding collisions at intersections, which calls for a wider FoV.
  3. Angle. Angle detection and resolution sufficient to detect relevant features. One issue is that human beings are relatively weak targets for radar (hey, so are fiberglass sailing boats, which is why they carry a special reflector at the top of the mast; I propose special hats).
  4. Velocity. Measure the speed of and resolve moving objects. Radar can do this, but most lidar can't do it directly (they can do it indirectly by measuring the difference between frames over time).
  5. Classification. Radar does this poorly. The US military has radar signatures of every enemy tank and plane, and uses machine learning. It is difficult to identify object type (tree, pedestrian, car, etc.) at range.
  6. Color. It's less important than you might think; colorblind people drive perfectly well. But it is very important for traffic lights in particular. I had a colorblind friend who told me he can't tell if a light is red or green until he is close enough to see if it is the top light or the bottom light, which means that at night he has to assume all lights are red until he is close.
  7. Processing overhead. There are large amounts of signal processing and image classification involved in getting from raw sensor data to "it's a child" or "it's a fire-hydrant".
  8. Operation. The big challenge is full functionality in all lighting (day and night) and in #9 which is...
  9. Adverse weather: Rain, fog, snow. Even humans have trouble when it gets bad.
  10. Interference. There are two types: environmental, and interference from other sensors. A camera gets dazzled more easily than an eye. With radar and lidar there are multiple units on the vehicle. The signal a radar sends out falls off with the usual square of the distance, and the reflection that comes back falls off with the square of the distance again (from the target), for a total of a fourth power. So a car coming towards you is flooding you with direct radar that falls off as 1/R², while you are detecting your own echo at 1/R⁴, so it is easy to get "dazzled" (see the sketch after this list). Then add that you might have 4-6 radars on each vehicle.
  11. Cost. The predictions of when self-driving cars are going to be available depend partially on cost. Google/Waymo can run a few cars in Phoenix and not care about cost, but low sensor cost is critical for volume production. Eventually this needs to be technology that we can put in a $20,000 car.
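
To make the fourth-power point concrete, here is a minimal back-of-the-envelope sketch in Python. The 100m spacing and the decision to ignore transmit power, antenna gains, and radar cross-section (treated as a common constant that cancels out of the ratio) are illustrative assumptions, not numbers from the talk.

```python
# Back-of-the-envelope: mutual radar interference vs. your own echo.
# Assumptions (illustrative, not from the talk): both radars transmit the same
# power, and antenna gains / radar cross-section are folded into a common
# constant that cancels out of the ratio.
import math

def echo_power(r_m: float) -> float:
    """Own reflected echo: out and back, so received power falls off as 1/R^4."""
    return 1.0 / r_m**4

def direct_interference_power(r_m: float) -> float:
    """Direct signal from an oncoming car's radar: one way, so 1/R^2."""
    return 1.0 / r_m**2

r = 100.0  # metres between the two cars (illustrative)
ratio = direct_interference_power(r) / echo_power(r)  # = R^2
print(f"At {r:.0f} m, the direct interference is ~{ratio:.0f}x "
      f"({10 * math.log10(ratio):.0f} dB) stronger than your own echo.")
```

Under these simplifying assumptions the interference-to-echo ratio grows as R², so the "dazzling" problem gets worse, not better, at long range.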

Sensors

The three sensor types are vision (cameras), radar, and lidar.

First, vision. The CMOS image sensors (CIS) needed for automotive are different from consumer electronics (which is mostly focused on making images Instagram-good). Automotive needs high dynamic range, improved low-light sensitivity through larger pixel sizes, lower resolution, faster response time, and operation at higher and lower temperatures (smartphones live in our pockets, cars live in Minnesota and Arizona). Obviously, one big disadvantage is that, like humans, at night cameras are limited to the vehicle's headlight illumination. Cleaning is an issue: the lens needs to be kept clear of muck.

Next, lidar. There are a huge number of different types of lidar (even more than the number of ways to capitalize lidar). For automotive use, the current prototype phase relies on mechanical scanning, but the focus in the longer term is on solid-state lidar, either using MEMS mirrors or an optical phased array. One subtle issue is that the lowest cost comes from near-infrared light at 850-940nm wavelengths, but this is limited by eye-safety requirements and solar background. The higher-performance option is 1550nm, which has 5 orders of magnitude (500,000X, not 5X, although a later speaker said "just" 120X) more allowed energy and 10X less solar background, but is expensive. Lidar's big plus is great angular resolution, but it has problems with very non-reflective objects and fog/rain/snow. Cleaning is an issue here too.
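
Since lidar, like radar, is fundamentally a time-of-flight measurement, the basic ranging arithmetic is worth seeing once. This is a generic sketch of pulsed time-of-flight ranging, not any particular vendor's design, and the example distances are illustrative.

```python
# Generic pulsed time-of-flight ranging: range = (speed of light * round-trip time) / 2.
# Example distances are illustrative, not from the presentation.
C = 299_792_458.0  # speed of light, m/s

def round_trip_time(distance_m: float) -> float:
    """Round-trip time for a pulse reflecting off a target at distance_m."""
    return 2.0 * distance_m / C

def range_from_round_trip(t_s: float) -> float:
    """Convert a measured round-trip time back into a one-way range in metres."""
    return C * t_s / 2.0

for d in (10, 50, 200):  # metres
    t = round_trip_time(d)
    print(f"{d:4d} m target -> round trip {t * 1e9:7.1f} ns "
          f"-> recovered range {range_from_round_trip(t):6.1f} m")
```

The timescale is the point: each metre of range corresponds to only about 6.7 ns of round-trip time.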

Radar. This is current radar; Uhnder is doing next-generation radar, about which Manju gave some sneak information later. All current radar uses analog FMCW (frequency modulated continuous wave): transmit a chirp, mix it with the reflected wave that comes back, downshift to baseband, and apply a low-pass filter. One issue is that you have multiple radar units on a vehicle but only one can be transmitting at a time, so it is necessary to do time-division multiplexing (basically, rotating around each radar one at a time).
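
To make the chirp-and-mix description a bit more concrete, here is a minimal numpy sketch of FMCW ranging with a single stationary target. The bandwidth, chirp duration, target range, and sample rate are all illustrative assumptions; this is the textbook idea, not Uhnder's (or anyone else's) implementation.

```python
# Minimal FMCW range estimate for one stationary target (textbook idea only).
# All parameters below are illustrative assumptions.
import numpy as np

c  = 3e8          # speed of light, m/s
B  = 1e9          # chirp bandwidth, Hz
Tc = 50e-6        # chirp duration, s
S  = B / Tc       # chirp slope, Hz/s
fs = 20e6         # baseband (post-mixer) sample rate, Hz
R_true = 50.0     # target range, m

t   = np.arange(0, Tc, 1 / fs)
tau = 2 * R_true / c                      # round-trip delay

# Mixing the transmitted chirp with its delayed echo leaves a constant "beat"
# tone at f_beat = S * tau (a constant phase term is ignored here).
beat = np.exp(2j * np.pi * S * tau * t)

# Estimate the beat frequency with an FFT, then convert back to range.
spectrum = np.abs(np.fft.fft(beat))
freqs    = np.fft.fftfreq(len(t), 1 / fs)
f_beat   = abs(freqs[np.argmax(spectrum)])
R_est    = c * f_beat / (2 * S)

print(f"beat frequency ~ {f_beat / 1e6:.2f} MHz, estimated range ~ {R_est:.1f} m")
```

It also hints at why time-division multiplexing is needed: a second chirp arriving during the same sweep mixes down to its own beat tones, which can masquerade as targets or raise the noise floor.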

Strengths and Weaknesses

We can put each sensor type onto a spider diagram as above. Radar on the left, lidar in the middle, vision on the right.

The reason that these are all important is that if we overlay them, the strengths of one make up for the weaknesses of the others. Radar can't tell if a traffic light is red or green, but vision can. Vision can't see well in fog, but radar can. And so on. But there are still substantial gaps.
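
To visualize the overlay idea without the original spider diagrams, here is a tiny Python sketch. The 0-5 scores are invented purely for illustration (they are not the values from the summit slides); the point is only that taking the best sensor per attribute covers most axes, while an axis where every sensor scores low remains a gap.

```python
# Illustrative only: rough 0-5 scores per attribute, invented for this sketch
# (NOT the values from the summit's spider diagrams). The "overlay" column
# simply takes the best sensor per attribute, mirroring the point that one
# sensor's strength can cover another's weakness.
ratings = {
    #                camera  radar  lidar
    "range":           (3,     5,     4),
    "angular res.":    (4,     2,     5),
    "velocity":        (1,     5,     2),
    "classification":  (5,     1,     2),
    "color":           (5,     0,     0),
    "darkness":        (2,     5,     5),
    "bad weather":     (2,     5,     1),
    "cost":            (5,     4,     1),
}

sensors = ("camera", "radar", "lidar")
print(f"{'attribute':<16}" + "".join(f"{s:>8}" for s in sensors) + f"{'overlay':>9}")
for attr, scores in ratings.items():
    print(f"{attr:<16}" + "".join(f"{v:>8}" for v in scores) + f"{max(scores):>9}")
```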

Next-Generation Radar

What if we transformed the radar sensor? Today's radars were all designed for detecting large targets. Lots of companies are working on this, and, needless to say, it is what Uhnder is doing.

What if we added more range, more resolution (both vertical and horizontal), very sensitive velocity detection, a better bright-to-dark target ratio, and more interference resilience? That would transform radar roughly as in the spider diagrams above.

 More importantly, you fill the gaps. In fact, they are so well filled in that lidar is of marginal value. There will still need to be lidar on the front for fast driving to detect things like a piece of wood in the road that is not radar reflective and can't be detected by a camera in time, but you will need a lot fewer lidars.

Of course, Manju is still in stealth mode. But the attraction of what Uhnder is doing (if it works, and it works better than the competition, yadda, yadda) is clear: next-generation radar is a great solution, and from a cost point of view, adding it to a vehicle can be "paid for" by removing many of the lidar sensors. His image of the future (green is Uhnder radar, of course) is:

Thermal

Chuck Gershman of Owl Autonomous Imaging added a fourth sensor type to the mix: thermal. I won't cover everything he said; a lot of it was a similar analysis of the strengths and weaknesses of the various sensor types. But the big advantage of thermal is that it can detect living objects, it works the same in day and night, and it has all-weather operation.

Lidar alone is not good enough since it sees but doesn't understand. Plus lidar range is seriously reduced in bad weather. 905nm lidar, in particular, goes blind in fog and rain. Owl has a focal plane array (FPA) 2-color detector that they have delivered to the Air Force.

As Chuck said, wrapping up:

Unlike in The Sixth Sense, we see live people.

The Wall Street Journal on Sensors

As it happens, The Wall Street Journal just ran a piece on automotive sensors. I say "a piece" but it was more of a weird animated graphic (may require subscription). It has some odd misunderstandings in it, or at least an odd way of putting things:

Cameras aren’t as effective at capturing the environment during low visibility conditions, including after sundown. Lidar and radar are unaffected by darkness, however, because they collect information about the environment from electromagnetic wavelengths higher and lower than that of visible light.

Well, that's true. Except for the "because". It's true about eyes too. We normally address that by putting high-intensity visible-wavelength light generators on the front of our vehicles and seeing what comes back. These are known as headlights. Lidar and radar are unaffected by darkness not because of the wavelengths used, but because they already do something similar: putting out pulses of electromagnetic radiation and then seeing what comes back.

Another quote:

Driving on uneven surfaces can compromise the calibration of lidar and cause excessive wear on the ball-bearings that stabilize the sensor atop the car. The more often a vehicle encounters these conditions, the more frequently the lidar sensor will need to be replaced.

I assume this is true. But those big spinning lidars on vehicle roofs are not seriously proposed as a solution for commercial volume production; they cost more than the rest of the car. These are experimental platforms. Lidar will require solid-state solutions for economically viable commercial production, as discussed above, and solid-state lidar does not have ball bearings and all the rest.

The WSJ's final sentence:

And however flawed humans may be, they’re still the best drivers on the road.

And who could be against motherhood, or apple pie? These "best drivers on the road" killed 40,000 people in 2017 in the US alone (1.25 million globally in 2013, the most recent number I can find). For more details, see my post In Other News, 100 People Were Killed by Cars Driven by People, written when the first fatality caused by an autonomous vehicle was reported. In October, Waymo (Google) reported that their cars had reached 10 million miles of driving. That sounds like a lot, but actually, despite the appalling numbers I just mentioned, humans really are pretty good drivers, with only 1.18 fatalities per 100 million miles, so we don't have enough data one way or the other to say whether humans are the best drivers on the road. And, having had two teenage drivers in my family, I can say the bar can be pretty low. But to be fair to my kids, I don't think it was so much that they were 16 as that it takes a couple of years to build up automatic reflexes; I'm sure we've all had the experience of finding our foot on the brake pedal before we consciously realized why. New drivers don't do that.
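
The statistics point is just arithmetic, and worth making explicit. Here is a quick sketch using only the numbers quoted above (the human fatality rate and Waymo's 10 million miles):

```python
# Using only the figures quoted above: human drivers average about 1.18
# fatalities per 100 million miles, and Waymo had logged about 10 million miles.
human_rate_per_mile = 1.18 / 100e6
waymo_miles = 10e6

expected_fatalities_at_human_rate = human_rate_per_mile * waymo_miles
print(f"Expected fatalities over {waymo_miles / 1e6:.0f}M miles at the human rate: "
      f"{expected_fatalities_at_human_rate:.3f}")
# ~0.118: far less than one event, which is why 10 million miles is not yet
# enough data to say whether the autonomous fleet is safer or less safe.
```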

Learn More

I'll repeat the links from yesterday to my posts this year on automotive:

  • Overview: Automobil Elektronik Kongress 2018
  • Cadence's Automotive Solutions: CDNDrive Automotive Solutions: the Front Wheels and Rear Wheels
  • China: Trends, Technologies, and Regulations in China's Auto Market
  • Safety: CDNDrive: ISO 26262...Chapter 11 and The Safest Train Is One that Never Leaves the Station

 

Sign up for Sunday Brunch, the weekly Breakfast Bytes email.