
Paul McLellan

Black Hat: Glitching Microcontrollers

15 Aug 2019 • 5 minute read

As a paid-up member of the semiconductor community, the most interesting presentation I saw at Black Hat was about glitching microcontrollers. One way that chips end up vulnerable to security issues is that all our chip knowledge tells us certain attacks are impossible, or at least that the threat is wildly exaggerated. You might be able to tell something about a chip by monitoring its power supply, but surely you aren't going to get anything really valuable out. That was how I thought the first time I heard about this approach, known as differential power analysis (DPA). Then, a few years ago at EDPS, I watched security researchers actually read out the AES encryption keys from a commercial chip by doing just this. When Dave Patterson, the father of RISC, first heard about the Spectre vulnerability in high-performance processors with speculative execution, his immediate guess at the leakage rate was "a few bits per century." That was an obvious exaggeration for effect, but he had no idea how far off it was: the actual number is more like 1MB per second. Both of these vulnerabilities, DPA and Spectre, were discovered by Paul Kocher (among others, in the case of Spectre). As he put it in a presentation I attended, "this sort of thing should not be discovered by me in my spare time, since I was bored having quit my job."

For details on DPA, see my post EDPS Cyber Security Workshop: "Anything Beats Attacking the Crypto Directly". For my post about Paul Kocher's presentation, see Paul Kocher: Differential Power Analysis and Spectre.

Glitching Chips

So the presentation, by Thomas Roth and Josh Datko, co-founders of Keylabs, was titled Chip.Fail—Glitching the Silicon of the Connected World. They said that their motivation for the work was that "this is easier than you think". The attack is also mostly not dependent on having just the right chip, which makes it the worst sort of vulnerability: BOBE, or break-once-break-everywhere. As a result, the secure chips in "secure" devices might not be that secure.

They pointed out that professionals say "fault injection" but hackers like themselves just say "glitching". This covers things like changing the clock frequency, or cutting the power for a very short time. Their focus was on voltage glitching. Think of something like: trigger on boot, wait for the boot loader to complete, then change the power supply voltage briefly during the firmware validation check. As they said:

Flash reads take a lot of power, so if you insert a glitch just at "is the firmware valid, in which case boot" then we can boot a compromised program.
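To make that concrete, here is a minimal sketch of my own (not the presenters' code) of why this works: firmware validity typically comes down to a single conditional branch, so a glitch that corrupts that one comparison lets unsigned firmware boot.

```python
import hashlib

# Digest of the "official" firmware image (toy example).
TRUSTED_DIGEST = hashlib.sha256(b"official firmware").digest()

def boot(firmware: bytes, glitched: bool = False) -> str:
    """Toy boot loader: one conditional guards the whole chain of trust."""
    valid = hashlib.sha256(firmware).digest() == TRUSTED_DIGEST
    if glitched:
        # Model a voltage glitch landing exactly on the comparison:
        # the flag that reaches the branch is corrupted.
        valid = not valid
    return "booted" if valid else "halted"

print(boot(b"official firmware"))             # normal, valid boot
print(boot(b"evil firmware"))                 # rejected without a glitch
print(boot(b"evil firmware", glitched=True))  # glitch bypasses the check
```

The point of the sketch is that the cryptography itself is never attacked; the single pass/fail decision derived from it is.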

The first step is prepping the device. Chips often run on multiple voltages (e.g., I/O at 3.3V, CPU core at 0.7–1.2V, WiFi at 1.3V). Often there is a block diagram that shows how this is done with a single external power supply and on-chip voltage regulators. To keep the power clean, the internal supplies are often brought out of the chip so that decoupling capacitors can be added. The first step is to remove those capacitors. But the same pin that exists to give the microcontroller nice clean power can also be used to power the core directly and glitch it. As they said:

that capacitor pin bypasses the regulator and gives direct access to the CPU core. It is often even called something like VDDcore.

Typically the chip won't run without the decoupling capacitors, so the solution is to use a cheap programmable external power supply with open-source firmware to supply VDDcore directly, and then glitch that supply.

They built their own glitcher, a very small board, with an Artix FPGA ($70 board) with USB and UART, plus a DPS3003 programmable power supply. The advantage of an FPGA is super-precise timing, at the level of a single clock cycle of a 100MHz microcontroller.

Using their programmable glitcher, they could try different pulse delays and lengths.

The key to glitching is trying a million different delays

Often 2,000 attempts were all that was necessary to successfully glitch the chip, without requiring any expensive equipment. They decided to test the approach on "real" devices. Their criteria were that the chips should be commonly used in IoT devices, modern rather than outdated, from a selection of different vendors, and available on a development board.
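The sweep itself is conceptually simple. Here is a hedged Python sketch of mine (their actual glitcher is an FPGA design) of searching the delay/width space against a simulated target that is only vulnerable in one narrow window — the window and parameter ranges below are made up for illustration:

```python
import itertools

# Hypothetical vulnerable window (nanoseconds). In a real attack this is
# unknown, which is exactly why the brute-force search is needed.
GOOD_DELAY = range(5_200, 5_210)   # glitch must land in this delay window
GOOD_WIDTH = range(40, 60)         # and have roughly this pulse width

def try_glitch(delay_ns: int, width_ns: int) -> bool:
    """Stand-in for firing one glitch at the target and checking the result."""
    return delay_ns in GOOD_DELAY and width_ns in GOOD_WIDTH

def sweep(delays, widths):
    """Brute-force the parameter space; return the first working pair."""
    for attempt, (d, w) in enumerate(itertools.product(delays, widths), 1):
        if try_glitch(d, w):
            return attempt, (d, w)
    return None  # exhausted the space without success

attempt, params = sweep(range(5_000, 5_500, 2), range(20, 100, 5))
print(f"success on attempt {attempt} with delay/width {params}")
```

With these (invented) numbers the search succeeds after a couple of thousand attempts, which is the order of magnitude the presenters reported.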

All the chips they picked were successfully glitched, and I see no reason to assume that other microcontrollers are not vulnerable to this approach. This is not a hall of shame; these were just the first four chips they picked. The chips they selected were:

  • Nordic Semiconductor nRF52840
  • Espressif Systems ESP32
  • Microchip SAM L11
  • STMicroelectronics STM32F2

They were all on normal development boards (although with a few capacitors removed).

Results

The Nordic chip required removing six capacitors and attaching a jumper wire. They were successful after 1.5 hours. Then they got it "stable", meaning it would glitch successfully about once in every hundred attempts.

They thought the Espressif chip would be a challenge, since it runs at 240MHz compared to their cheap 100MHz FPGA. This chip actually contains a Cadence Tensilica processor, although none of what is being done here really depends on the processor itself. They first got success after three hours, and it was "stable" at about 10,000 attempts, meaning they could glitch it successfully in roughly that many tries.

Next was the Microchip SAM L11. It is so secure that it has won security awards, and it has brownout detection (detection of a low power supply). But it took only five minutes to glitch, and they could bypass the secure reference boot loader. A disclosure is in progress and they hope to release details soon; that means they have told Microchip and won't release details of the vulnerability for a period, to allow fixes to be implemented. Bypassing the secure boot loader means that you can load anything and have complete control.

The STM32F2 has an Arm Cortex-M3 processor inside, and this chip is often used in bitcoin/crypto wallets (it is in reference designs for Trezor). It has readout protection, but they managed to dump the boot ROM. Eventually, they could insert a glitch at just the right moment in the boot ROM to enter the debugger (which obviously "should not happen" in a production device). They have worked with Trezor to mitigate this, and so have published the details.

Defense

So how do you defend against this kind of thing?

  • First, brownout detection does not equal glitch detection.
  • Test your designs on development kits by attempting this sort of attack yourself.
  • Write glitch-resistant code (I am a PhD computer scientist and I have no idea how you would do this in practice).
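To give a flavor of what glitch-resistant code can look like, here is an illustrative sketch of my own (nothing from the talk, and shown in Python rather than embedded C for brevity): critical checks are performed redundantly, and status is carried in non-trivial constants rather than a boolean, so a single injected fault cannot flip the decision.

```python
import secrets

# Non-trivial constants instead of True/False: no single bit flip can
# turn FAIL into PASS. (A common fault-injection countermeasure.)
PASS, FAIL = 0x3CA5965A, 0xC35A69A5

def check_firmware(digest: bytes, expected: bytes) -> int:
    """Redundant comparison: both independent checks must agree to PASS."""
    first = secrets.compare_digest(digest, expected)
    second = secrets.compare_digest(digest, expected)  # deliberate re-check
    if first and second:
        return PASS
    return FAIL

def boot(digest: bytes, expected: bytes) -> str:
    status = check_firmware(digest, expected)
    # Compare against the full PASS constant, never just "if status:".
    return "booted" if status == PASS else "halted"
```

Other countermeasures in the same spirit include inserting random delays before sensitive comparisons (so an attacker cannot time the glitch) and hardware-level redundancy such as lockstep cores; none of these is a complete defense on its own.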

The conclusion:

All chips we looked at were trivially glitchable. It can be done on the cheap. Don't forget that just because a chip is glitchable, that does not equal an exploit. But it is high-risk if you can get a “stable glitch”. 

 

Sign up for Sunday Brunch, the weekly Breakfast Bytes email.