Paul McLellan
Tags: Automotive, deep learning, tesla, ADAS, neural networks

Driving Dangerously

8 Apr 2019 • 5 minute read

I've written a few times before about the fragility of neural networks, for example in last year's post Fooling Neural Networks. There is an unstated assumption underlying the training of neural networks that the environment is benign, and there is no adversary trying to fool the neural network. Microsoft started on very limited systems like the original PC, and also assumed a benign environment. Excel had macros, and you could do all sorts of things...like read entries in the contact address book...or send an email...all from within a macro that ran automatically when the spreadsheet was opened. What could possibly go wrong?

In the post I linked to above, I showed the stop signs pictured above, with little bits of white and black tape added. The neural network that the researchers were testing says that these signs are 45mph speed limit signs. An error like this is pretty significant because it is obvious to a human that these are stop signs with bits of tape on them; it's not a borderline case like mixing up a 35mph speed limit sign with a 55mph one in foggy conditions, the sort of error that humans make.

Most of the work on producing these types of errors assumes full access to all the weights in the neural network, so the attacker can reverse-engineer what it takes to fool them. The speed limit signs are actually a harder target than the static images you've probably seen, where all sorts of images look like ostriches to the neural network. Road signs move across the camera's field of view as the vehicle drives, which gives the network a lot of additional information, so it would seem harder to fool with just a few bits of tape.
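To make the "full access to the weights" point concrete, here is a minimal sketch of the standard white-box recipe, the fast gradient sign method (FGSM). The classifier and the input photo are stand-ins (a stock torchvision ResNet and a hypothetical sign.jpg), not the networks or images from the sign experiments; the point is only that knowing the weights lets an attacker compute the perturbation directly from the gradient.

```python
# Minimal FGSM sketch: with white-box access we can take the gradient of the
# loss with respect to the *input pixels* and nudge every pixel in the
# direction that increases it. Model and image are illustrative stand-ins.
import torch
import torch.nn.functional as F
from PIL import Image
from torchvision import models, transforms

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),        # normalization omitted to keep the sketch short
])

img = preprocess(Image.open("sign.jpg")).unsqueeze(0)   # hypothetical photo
img.requires_grad_(True)

logits = model(img)
label = logits.argmax(dim=1)              # whatever the model currently predicts
loss = F.cross_entropy(logits, label)
loss.backward()                           # gradient w.r.t. the input pixels

epsilon = 8 / 255                         # small per-pixel perturbation budget
adversarial = (img + epsilon * img.grad.sign()).clamp(0, 1).detach()

with torch.no_grad():
    print("before:", model(img).argmax(dim=1).item(),
          "after: ", model(adversarial).argmax(dim=1).item())
```

A physical attack like the taped stop signs takes a lot more work than this single step, but the underlying idea, following the gradient of the network's own loss, is the same.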

Tesla Autopilot

At a recent Black Hat security conference, Tencent Keen Security Lab presented some work under the title Experimental Security Research of Tesla Autopilot. There are actually three sections to the work. The first is breaking into the system and gaining root access to the main Autopilot processor (without cheating and connecting anything to the vehicle wiring). However, that's very similar to almost any vulnerability on servers and PCs. The architecture of the system is shown in the block diagram above: two NVIDIA Tegra processors; the LB (which stands for "lizard brain"), an Infineon chip; and another GPU.

More interestingly, they decided to see if they could fool the vision systems on the car and get it to do things it shouldn't. This was a Tesla Model S 75.

They started with something simple: turning on the windshield wipers. Tesla has three cameras behind the mirror: main, narrow, and fisheye (ringed in red in the image). As well as using it for driving, Tesla uses the fisheye camera to see how much water is on the windshield, and then turns the wipers on at one of two speeds. They started with the traditional way of fooling vision networks, adding noise, and found that they could make it seem as though there was water on a dry windshield. They did this by changing the feed from the camera to add the noise. But when they tried to do something similar with a pattern placed on the windshield itself, they couldn't get pixel-level alignment.

What turned out to work better was to display the noise on an iPad or on a screen on the back of the car in front (and taxis in some countries, such as China, have a lot of advertising displays on the rear of the vehicle). The images above, when displayed in front of the vehicle, would cause their Tesla to turn its wipers on.
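Getting a pattern to keep fooling the network after it has been shown on a screen and re-photographed by the car's camera is the hard part. A common approach in the adversarial-examples literature (not necessarily the exact procedure Keen Lab used) is to optimize the pattern over random brightness and blur changes so that it stays adversarial on average under physical-world distortions. A rough sketch, with rain_detector as a hypothetical stand-in for the wiper-triggering network:

```python
# "Expectation over transformation" sketch: optimize a displayed pattern so it
# still drives the (hypothetical) rain_detector's score up after the kinds of
# distortions a screen + camera would introduce.
import torch
import torch.nn.functional as F

def random_physical_transform(x):
    """Crude stand-in for display and camera effects: brightness jitter, mild blur."""
    x = x * (0.7 + 0.6 * torch.rand(1))                # lighting change
    if torch.rand(1).item() < 0.5:
        x = F.avg_pool2d(x, 3, stride=1, padding=1)    # mild blur
    return x.clamp(0, 1)

def attack_display_pattern(rain_detector, steps=500, lr=0.01):
    # The pattern to be shown on a screen in front of the car; start from grey.
    pattern = torch.full((1, 3, 224, 224), 0.5, requires_grad=True)
    opt = torch.optim.Adam([pattern], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = 0.0
        # Average the objective over several random transforms per step.
        for _ in range(8):
            score = rain_detector(random_physical_transform(pattern))
            loss = loss - score.mean()                 # push the "rain" score up
        loss.backward()
        opt.step()
        with torch.no_grad():
            pattern.clamp_(0, 1)                       # keep it displayable
    return pattern.detach()
```

The same idea explains why an advertising display on the back of a taxi is such a convenient delivery mechanism: the attacker controls exactly what the camera sees, frame after frame, without touching the target car at all.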

Okay, it's not the end of the world if your wipers get turned on by something on the car in front. But those cameras are also used for lane centering. They wondered if they could do something similar to make the vehicle swerve into oncoming traffic—obviously life-threatening.

They tried all sorts of approaches, such as making a line disappear, which they could do successfully, although it required extensive modification to the line patterns on the road. But I'll skip to the end of the story. Instead of trying to make lane markings become invisible to the Autopilot systems, they switched to trying to add something that the software would consider a lane marking, as in the above picture. The car is meant to go straight on, following the blue arrow. But by adding three carefully placed dots on the road surface, the car would think that the lane was turning sharp left and follow the green line into oncoming traffic. The final picture shows the view from the car, with the dots on the road (which you can't really see, to be honest) ringed in red.

The Tesla followed the fake lane across the road.
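The striking part is that only a handful of unobtrusive marks were needed. One plausible way to find such a placement with white-box access, purely as a hypothetical sketch rather than Keen Lab's actual procedure, is a greedy search over candidate dot positions, keeping whichever placement bends the predicted lane furthest toward the target side. Here lane_model is an assumed network that returns a single lateral-offset estimate; a real lane-detection stack exposes something richer.

```python
# Hypothetical greedy search for a few road "dots" that pull a lane-detection
# model's predicted lane to the left. Not Keen Lab's method; an illustration of
# how white-box access turns this into a straightforward search problem.
import torch

def paste_dots(road_image, positions, size=6, intensity=1.0):
    """Paint small bright square dots onto a copy of the road image tensor."""
    img = road_image.clone()
    for (y, x) in positions:
        img[..., y:y + size, x:x + size] = intensity
    return img

def best_dot_positions(lane_model, road_image, candidates, num_dots=3):
    """Greedily pick num_dots positions that most shift the predicted lane left.

    lane_model is assumed to return a scalar lateral offset, with negative
    values meaning "lane bends left" (a made-up convention for this sketch).
    """
    chosen = []
    for _ in range(num_dots):
        best_pos, best_shift = None, 0.0
        for pos in candidates:
            with torch.no_grad():
                offset = lane_model(paste_dots(road_image, chosen + [pos]))
            shift = float(-offset)                     # bigger = further left
            if shift > best_shift:
                best_pos, best_shift = pos, shift
        if best_pos is not None:
            chosen.append(best_pos)
    return chosen
```

However the placement is actually found, the crucial property is the one the demonstration shows: the marks are nearly invisible to a human driver but decisive for the network.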

Conclusion

Computer vision systems can be fooled, and not just in perfect laboratory conditions with a neural network controlled by the researchers. This was done in the real world with real production neural networks in a shipping product. I doubt that this type of attack is specific to Tesla; it's just that Tesla is the obvious "test vehicle". It seems to be a problem to which all neural nets are vulnerable at the current state of the art.

I'll just quote the last couple of sentences of the paper:

We analyzed APE’s vision system in deep through static reverse engineering and dynamic debugging. Based on the research results, we did some experimental tests in the physical world and successfully made Tesla APE behave abnormally in our attack scenarios.

This proves that with some physical environment decorations, we can interfere or to some extent control the vehicle without connecting to the vehicle physically or remotely. We hope that the potential product defects exposed by these tests can be paid attention to by the manufacturers, and improve the stability and reliability of their consumer-facing automotive products.

More Details

This is probably more detail than you want, but here is the original paper. It is a 40-page pdf.

 

Sign up for Sunday Brunch, the weekly Breakfast Bytes email.