Autonomous vehicles are coming. According to the U.S. Department of Transportation, about 37,000 people died in car accidents in the United States in 2018. Safe, fully autonomous vehicles could drastically reduce that number; the trick is figuring out how to make an autonomous vehicle safe. Internet-enabled systems in cars are more common than ever, and their use is unlikely to slow or stop. While they provide many conveniences to a driver, they also represent another attack surface that a criminal could use to compromise a vehicle while it's on the road.
So, what's being done to combat this? Green Hills Software is on the case, and Max Hinson explained the landscape of security in automotive systems in a presentation in the Cadence Theater at DAC 2019. Green Hills has software embedded in most parts of a car, and all the major OEMs use its technology. The challenge they've taken on is far from simple: between the sheer complexity of modern automotive computer systems, safety requirements like the ISO 26262 standard, and the cost to develop and deploy software, they have their work cut out for them. It's the complexity of the systems that presents the biggest challenge, though. The autonomous cars of the future have dynamic behaviors and cognitive networks, require safety certification to at least ASIL D (the most stringent level under ISO 26262), and need the kind of cybersecurity you'd expect on a critical computer system to protect their internet-enabled components. All of this comes with a caveat: with current verification capabilities, it isn't possible to exercise every test case for an autonomous system. Full coverage would mean trillions of test cases, as the back-of-the-envelope sketch below suggests, and not even the strongest emulation platforms can cover that today.
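To see how quickly the numbers run away, here's a minimal sketch in Python. Every scenario dimension and count below is an illustrative assumption of ours, not a figure from the presentation:

```python
from math import prod

# Hypothetical driving-scenario dimensions and how many discrete values each
# might take. These counts are illustrative assumptions, chosen only to show
# how multiplicative combinations explode.
scenario_dimensions = {
    "weather": 10,
    "lighting": 20,
    "road_type": 20,
    "traffic_density": 50,
    "pedestrian_behavior": 100,
    "sensor_degradation": 50,
    "vehicle_speed_band": 30,
    "signage_variant": 40,
}

# Exhaustive coverage means testing every combination of every dimension.
total_cases = prod(scenario_dimensions.values())
print(f"{total_cases:,} combinations to cover")  # 1,200,000,000,000
```

Even with these modest per-dimension counts, exhaustive coverage already lands in the trillions, and each additional dimension multiplies the total again.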
With regular cars, you could test with crash-test dummies, ramming the car into walls at high speed in a lab and studying the results. Today, though, that won't cut it. Testing like that doesn't reveal whether a car has side-channel vulnerabilities in its infotainment system, or whether it can tell the difference between a stop sign and a yield sign. While driving might seem simple enough to those of us who have been doing it for a long time, to a computer the sheer number of variables is astounding. A person can easily filter what's important from what's not, but a machine learning system has to learn all of that from scratch. Green Hills Software posits that it would take nine billion miles of driving for a machine learning system of today's caliber to reach an average driver's level, and for an autonomous car, "average" isn't good enough. It has to be perfect.
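To put nine billion miles in perspective, here's a quick back-of-the-envelope calculation. The fleet size, speed, and utilization figures are assumptions we picked for illustration, not numbers from the talk:

```python
# How long would it take a test fleet to log nine billion miles?
TARGET_MILES = 9_000_000_000

# Illustrative assumptions about the fleet, not data from Green Hills Software.
fleet_size = 1_000        # test vehicles on the road
avg_speed_mph = 30        # blended urban/highway average
hours_per_day = 20        # near-continuous operation

miles_per_day = fleet_size * avg_speed_mph * hours_per_day  # 600,000
years = TARGET_MILES / miles_per_day / 365
print(f"About {years:.0f} years of round-the-clock driving")  # ~41 years
```

Even a thousand cars driving almost nonstop would need roughly four decades, which is why simulation and emulation matter so much here.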
A certifier for autonomous vehicles has a herculean task, then. And if that doesn't sound hard enough, consider this: modern machine-vision systems can be fooled by the "single-pixel hack." Say you have a stop sign, and a system designed to recognize that object as a stop sign. Change one pixel of the image to a different color at random, then check whether the system still recognizes the stop sign. A human, who knows that a stop sign is octagonal, red, and has "STOP" written in white block letters, would still recognize a stop sign that's half blue and bent out of shape; we can also use context clues, like a sign at an intersection with a white line on the pavement in front of our vehicle, to conclude that we should probably stop. We can do this because we process the factors that identify a stop sign "softly": it's okay if it's not quite right, because we know what it's supposed to be. Having a computer do the same is much more difficult. What if the stop sign has graffiti on it? Will the system still recognize it? How big an aberration does it take before the system no longer acknowledges the mostly red, mostly octagonal object that might once have had "STOP" written on it? To us, a stop sign is a stop sign even with one pixel changed, but change the right pixel and the computer might disagree.
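For the curious, here's what a crude single-pixel attack search might look like in Python. The `classify` function is a hypothetical stand-in for a trained sign classifier; this is a sketch of the general technique, not the specific attack discussed in the talk:

```python
import numpy as np

def classify(image: np.ndarray) -> str:
    """Hypothetical stand-in for a trained machine-vision model."""
    raise NotImplementedError

def single_pixel_attack(image: np.ndarray, true_label: str, trials: int = 10_000):
    """Randomly recolor one pixel at a time, hunting for a misclassification."""
    rng = np.random.default_rng(seed=0)
    height, width, _ = image.shape
    for _ in range(trials):
        candidate = image.copy()
        y, x = rng.integers(height), rng.integers(width)
        candidate[y, x] = rng.integers(0, 256, size=3)  # random RGB value
        if classify(candidate) != true_label:
            # One recolored pixel was enough to flip the model's answer.
            return (y, x), candidate
    return None  # no fooling pixel found within this trial budget
```

Published single-pixel attacks use smarter search strategies (differential evolution, for example) rather than random trials, but the unsettling result is the same: a perturbation a human would never notice can change the model's answer entirely.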
The National Institute of Standards and Technology (NIST) tracks vulnerabilities along these lines in all sorts of systems; according to its National Vulnerability Database, a major vulnerability is found in Linux every three days. And despite all our efforts to promote security, this isn't a battle we're winning right now: the number of vulnerabilities keeps increasing.
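The National Vulnerability Database is publicly queryable, so you can check figures like this yourself. Here's a sketch using the NVD's REST API; the keyword and date range are arbitrary examples, and parameter details should be confirmed against the current API documentation:

```python
import requests

# Query NIST's National Vulnerability Database (API v2) for Linux kernel CVEs
# published in a given window. The keyword and dates are example values.
NVD_URL = "https://services.nvd.nist.gov/rest/json/cves/2.0"
params = {
    "keywordSearch": "linux kernel",
    "pubStartDate": "2019-01-01T00:00:00.000",
    "pubEndDate": "2019-03-31T23:59:59.999",  # NVD caps date ranges at 120 days
}

resp = requests.get(NVD_URL, params=params, timeout=30)
resp.raise_for_status()
print(resp.json()["totalResults"], "matching CVEs published in the quarter")
```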
That's the problem side of the story. So what does Green Hills Software propose we do about it? Read part 2 now.