The Internet of Things (IoT) could be a big number... 20 billion things... 50 billion things... or Arm's favorite number all week at TechCon, a trillion things. Or it could be a lot closer to zero if people don't trust the things in the first place.
We don't usually think of them as part of IoT, but once people start to distrust security, that distrust can extend to smartphones and to autonomous cars. It is only a slight exaggeration to say that all devices contain Arm processors, with over 100B shipped and another 100B expected in the next four years. Just as vaccination doesn't need to reach 100% of the population to be effective, if Arm can get the industry to move on security, then almost all devices will be secure. That matters to all of us, since insecure devices do not just affect their owners: they can be conscripted into botnets that take down other sites, as in the Mirai attack (see my post Video Cameras: No Service for You) and its even more powerful and potentially dangerous successor, Reaper, which has already infected over 1M IoT devices, although it seems to be lying dormant for now.
At Arm TechCon, Simon Segars spent over half his keynote talking about security (see my post from Wednesday Simon Segars: It's the Security, Stupid). Then he sat down with Mary Aiken for a discussion. She is a cyberpsychologist (I bet you didn't know that was a thing) and also an academic advisor to the European Cyber Crime Centre (EC3) at Europol. Later that day, Jessica Barker gave a fascinating presentation on social engineering, which is basically attacking security through human weaknesses rather than technical weaknesses. See my post, Social Engineering, yesterday for details of her presentation.
Arm launched two major security initiatives during the week of TechCon. First, on the day before TechCon started, they announced PSA, the Platform Security Architecture. This is something that they are putting into the public domain and will be releasing open-source code under a very permissive license. I guess, in the same way as Google figures it will get a good share of the additional search if internet access is improved, Arm figures it will sell more processors if IoT security encourages the growth of the segment. See my post Putting the Bad Guys in an Arm Lock covering the PSA announcement.
Then, when we entered the hall for Simon's opening keynote on the second day of TechCon, there was a nicely printed Security Manifesto on every seat. You can download your own copy in PDF form from the Arm website at the Securing a Trillion Devices page.
The manifesto contains three sections. I will focus on the manifesto itself, rather than the technology vision.

The manifesto itself is just six bullets long.
Simon's position is that we, the semiconductor ecosystem and the technology industry more broadly, need to embrace our responsibilities under what he calls the "Digital Social Contract." We need to endeavor to protect users no matter what. Simon's piece discusses the Mirai botnet that I mentioned above (although the latest incarnation, Reaper, is scarier still). Mirai focused its attack on Dyn, a DNS provider, and resulted in sites like Netflix becoming inaccessible. As a result, Dyn lost nearly 15,000 customers, despite the security weakness being not in Dyn's systems (nor in sites like Netflix that went down) but in seemingly innocuous things like home Wi-Fi routers and video security cameras.
Security is hard. It is unrealistic to expect every design group building a video security camera to understand it, especially as it doesn't affect their business. The customers who bought the cameras that Mirai exploited wouldn't even have noticed, since their cameras continued to work normally. On the other hand, Mirai didn't try to recruit iPhones, because they are secure, and Apple has teams of security experts who can respond if a vulnerability comes to light.
What Arm is proposing is to make secure-by-design technologies readily available, so that security doesn't affect time to market and design teams without their own security experts can still deliver secure products. We already do this with one of the nuts and bolts of security, encryption. Nobody writes their own implementation of encryption algorithms like AES or RSA; they use libraries, so any programmer can implement an encrypted channel without becoming an expert on the underlying algorithms. In the same way, IoT devices need to be deployed with chips that are "born secure", can self-heal or quarantine themselves, and can be treated if necessary.
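To make the "use libraries, don't roll your own" point concrete, here is a minimal sketch using only Python's standard library. Since the standard library ships no cipher, this shows the sibling primitive of message authentication rather than encryption; the key and messages are made up for illustration, and a real product would provision keys securely rather than generating them inline.

```python
import hashlib
import hmac
import secrets

# Illustrative shared secret; in practice this would be provisioned
# securely (e.g. at manufacture) rather than generated here.
key = secrets.token_bytes(32)

def sign(message: bytes) -> bytes:
    # Delegate to the standard library's vetted HMAC implementation
    # instead of hand-rolling a MAC construction.
    return hmac.new(key, message, hashlib.sha256).digest()

def verify(message: bytes, tag: bytes) -> bool:
    # compare_digest avoids the timing side channel that a naive
    # == comparison would introduce.
    return hmac.compare_digest(sign(message), tag)

tag = sign(b"turn camera on")
print(verify(b"turn camera on", tag))   # genuine message accepted: True
print(verify(b"join my botnet", tag))   # tampered message rejected: False
```

The programmer never touches the internals of SHA-256 or the HMAC construction; the library supplies the expertise, which is exactly the model Arm is proposing for whole-device security.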
We are used to our smartphones updating their apps and operating system; IoT devices will need to do the same. Part of the problem with the Mirai botnet is that there is no way to patch all those video security cameras, despite now knowing that they are insecure. Even if there were a way for a user to update the camera, and even if the camera manufacturer could send every user an email telling them to update, I doubt that more than 20% of cameras would get their firmware updated.
Security is inevitably a fast-moving world where the hackers can pivot faster than the product makers. So "born secure" is never enough; there needs to be a secure way to keep code up to date in the face of new threats. Although standards might be important, standards bodies and governments move notoriously slowly. It has been only 10 years since the iPhone was announced, and only 13 years since autonomous vehicles could not manage even 10 miles (with no traffic around). Standards take that sort of time from inception to ratification. Who knows what standards we will need in another ten years' time?
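A secure update channel boils down to one rule: never apply firmware you cannot authenticate. The sketch below is a hypothetical illustration, not any vendor's actual mechanism; real devices use asymmetric signatures (e.g. ECDSA) with only a public key on the device, but an HMAC shared secret keeps this self-contained, and the function names are invented.

```python
import hashlib
import hmac

# Stand-in for a verification key provisioned at manufacture.
# (Illustrative only; a real design would not store a shared secret
# on the device, it would store a public key.)
DEVICE_KEY = b"provisioned-at-manufacture"

def is_authentic(firmware_image: bytes, signature: bytes) -> bool:
    expected = hmac.new(DEVICE_KEY, firmware_image, hashlib.sha256).digest()
    return hmac.compare_digest(expected, signature)

def apply_update(firmware_image: bytes, signature: bytes) -> None:
    # Refuse anything that fails verification, before touching flash.
    if not is_authentic(firmware_image, signature):
        raise ValueError("rejecting unsigned or tampered firmware")
    # ...write image to flash and reboot (omitted)...

good = b"v2.0 firmware blob"
sig = hmac.new(DEVICE_KEY, good, hashlib.sha256).digest()
apply_update(good, sig)                    # accepted silently
try:
    apply_update(b"malicious blob", sig)   # rejected
except ValueError as err:
    print(err)
```

The point is that the update path itself becomes an attack surface unless verification happens on the device, which is why "a secure way to keep code up to date" is a design requirement and not an afterthought.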
Just in time for the security emphasis of this year's Arm TechCon, Krack arrived. Krack is a security attack on Wi-Fi connected devices; the name comes from Key Reinstallation AttaCK.
Krack exploits a vulnerability in Wi-Fi's WPA2 protocol, used by... well... everything. It makes it possible, although difficult, for attackers to eavesdrop on data passing between devices, such as your smartphone and your Wi-Fi router. The vulnerability is in the protocol itself, not in specific implementations, and so it affects all Wi-Fi networks. This also means that the only solution is to update your firmware. If you have a device that updates its operating system or firmware automatically, then this has probably already happened by the time you read this (in particular, iPhones have been updated, unless you manually turned updates off).
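The heart of Krack is nonce reuse: tricking a device into reinstalling an already-used key resets its nonce counter, so the same keystream encrypts more than one message. This toy sketch (an invented SHA-256-based keystream, nothing like WPA2's actual AES-CCMP cipher) shows why that is fatal: XORing two ciphertexts produced with the same key and nonce cancels the keystream and leaks the XOR of the plaintexts, with no need to recover the key.

```python
import hashlib

def keystream(key: bytes, nonce: int, length: int) -> bytes:
    # Toy keystream for illustration only: hash of key || nonce,
    # truncated to the message length.
    return hashlib.sha256(key + nonce.to_bytes(8, "big")).digest()[:length]

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

key = b"pairwise-session-key"
p1 = b"ATTACK AT DAWN"
p2 = b"SECRET PIN1234"

# A key reinstallation resets the nonce, so both messages end up
# encrypted with nonce 0 and therefore the same keystream.
c1 = xor(p1, keystream(key, 0, len(p1)))
c2 = xor(p2, keystream(key, 0, len(p2)))

# The attacker never learns the key, yet c1 XOR c2 == p1 XOR p2:
# with a guessed fragment of one plaintext, the other leaks.
assert xor(c1, c2) == xor(p1, p2)
```

This is also why the fix has to be a protocol-level patch delivered as a firmware update: the cipher itself is fine, but the handshake must stop allowing the nonce to be rewound.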
For more details about how Krack works, watch this video (6+ minutes):
I had already received an email from Netgear, who manufactured my Wi-Fi router, with instructions to update my firmware. In fact, to give them credit, I received this email before I'd even heard of Krack, and I did what I normally do with emails like that: I ignored it. Once I heard about Krack a couple of days later, I realized what it was patching, so I stopped ignoring the message and updated my firmware. This required me to log into my router, which required a password I'd not used for a couple of years. Supposedly, I could recover my password using the serial number of the router (on the base), but that didn't work. I discovered that the default username and password were "admin" and "password"; however, I'd been smart enough to change the default password. Luckily, I guessed what I would probably have changed it to, and the firmware update went smoothly. Somehow, I don't think the archetypal "man in the street" is going to understand the intricacies of updating firmware or have the competence to do it (unless that "street" happens to be University Avenue or Castro Street).
There is a full description (very technical) of Krack on a special website created by Mathy Vanhoef, who originally discovered the vulnerability. For a less technical version, see Wired's piece Why the Krack Mess Will Take Decades to Clean Up.
But the bottom line is: update your firmware. Now.
Sign up for Sunday Brunch, the weekly Breakfast Bytes email.