Paul McLellan
19 Nov 2019

Verifying Processor Security, Part 2

This is the second post about Eli Singerman's keynote at the recent Jasper User Group; the first was Formally Verifying Processor Security. In the last couple of years, high-performance processors (not just Intel's) have been shown to be vulnerable to various side-channel attacks. These typically rely on speculative execution to run code that should not be run, and then, even though the speculative work is abandoned, some of its data leaks. The first two such attacks, named Spectre and Meltdown, were announced to the public at the start of last year.

Eli is working on using formal approaches to prove that a future processor is immune to side-channel attacks, and also on analyzing the effects of mitigations on current processors (going back seven years). It is not just the hardware that needs to be analyzed, but also core firmware (in the processor itself) and other firmware (in devices like cameras and accelerators).

One area of concern is all the registers that can be altered by compromised firmware. As Eli said:

Control registers make up a big surface of attack. If you can get to them, you can do bad things, even like burn up the CPU.

As examples of the type of properties that he is looking to prove, he listed:

Confidentiality and integrity of secure data across domains, such as leakage of, and access to, secure data across different operating modes. This goes in both directions: we want neither unauthorized leakage nor alteration of data by an insecure agent. One challenge is to have modes that are independent of the operating system (since Intel doesn't control the operating system) and to guarantee there is no way to read or alter data across the domains.
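As a toy illustration of the two directions of the property, here is a Python sketch (all names — `Domain`, `MemoryCell`, `read`, `write` — are hypothetical, not Intel's actual model) that exhaustively checks, over a tiny state space, that a non-secure agent can neither read nor alter secure data:

```python
# Hypothetical toy model of cross-domain access checks.
from enum import Enum

class Domain(Enum):
    SECURE = 0
    NORMAL = 1

class MemoryCell:
    def __init__(self, domain, value=0):
        self.domain = domain
        self.value = value

def read(cell, requester):
    """Confidentiality: a non-secure agent must not read secure data."""
    if cell.domain is Domain.SECURE and requester is not Domain.SECURE:
        raise PermissionError("leak blocked")
    return cell.value

def write(cell, requester, value):
    """Integrity: a non-secure agent must not alter secure data."""
    if cell.domain is Domain.SECURE and requester is not Domain.SECURE:
        raise PermissionError("tamper blocked")
    cell.value = value

# Exhaustive check over this tiny state space -- what a model checker
# does over the full reachable state space of the real design.
for requester in Domain:
    cell = MemoryCell(Domain.SECURE, value=42)
    if requester is not Domain.SECURE:
        try:
            read(cell, requester)
            raise AssertionError("confidentiality violated")
        except PermissionError:
            pass
        try:
            write(cell, requester, 0)
            raise AssertionError("integrity violated")
        except PermissionError:
            pass
```

The point of the sketch is only the shape of the property: the formal tool must prove both the read direction and the write direction for every reachable state, not just the ones a simulation happens to visit.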

Memory ranges, such as software/firmware allocation of system memory for devices. The previous day, Arm had talked about verification of memory ranges, but said it was not worried about the software since it has another verification approach for that. Eli sees it differently:

I am worried about the software. We need to guarantee the control registers are set correctly, and ranges don’t overlap. I’m worried about hardware, and firmware, and the combination.
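The range property itself is simple to state. A minimal Python sketch (the `(base, limit)` representation and function names are illustrative assumptions) of the non-overlap check on a concrete register configuration:

```python
# Hypothetical sketch: device memory windows configured via control
# registers, represented as half-open [base, limit) ranges.
def overlaps(a, b):
    """Two half-open ranges overlap iff each starts before the other ends."""
    return a[0] < b[1] and b[0] < a[1]

def ranges_disjoint(ranges):
    """Pairwise check of one concrete configuration. A formal tool would
    instead prove this for ALL register values reachable through the
    hardware/firmware boot flow, not just one configuration."""
    return all(not overlaps(ranges[i], ranges[j])
               for i in range(len(ranges))
               for j in range(i + 1, len(ranges)))

good = [(0x0000, 0x1000), (0x1000, 0x2000)]  # back-to-back windows: OK
bad  = [(0x0000, 0x1800), (0x1000, 0x2000)]  # second window overlaps: bug
```

The hard part Eli is pointing at is not the predicate but the quantification: proving it holds for every value the combination of hardware and firmware can actually write into those registers.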

Critical data structures containing only legal values, and index overflow/underflow.

Critical data structures must only contain legal values. A lot of this is defined by firmware, which writes the values, and I want to formally verify this. We are trying to protect and defend against known and future attacks. We are trying to secure our microarchitecture.

Control Register (CR) protection, such as during the boot flow. Hardware protection mechanisms need to be independent of the software. For example, for some registers there are only a few places where the firmware can alter them. They want to keep them in ROM so they cannot be altered, but that still leaves verifying who can access which register at what time. A lot of critical registers are write-once: they get initialized with a value at boot time and can never be changed afterwards. That can be enforced with hardware.
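The write-once discipline can be sketched in a few lines of Python (the class and its behavior are a hypothetical model, not the actual hardware mechanism): the register latches its first, boot-time write and rejects every later one, so the property to prove is that no sequence of subsequent writes can change the observed value.

```python
# Hypothetical model of a write-once (lock-on-first-write) control register.
class WriteOnceRegister:
    def __init__(self):
        self._value = None
        self._locked = False

    def write(self, value):
        """First write latches the value and locks the register."""
        if self._locked:
            raise PermissionError("register locked after boot-time write")
        self._value = value
        self._locked = True

    def read(self):
        return self._value
```

In the real design, a formal tool would prove the analogous hardware invariant: once the lock bit is set, the stored value is stable under every possible bus transaction.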

There are control registers that are open, and we write them in the BIOS and in the common BIOS for the community. OEMs are restricted in what they can alter; some registers they should not even know exist. But it is not that easy to differentiate in a BIOS between what you can do and what you should not. In a boot flow, a lot of these values are being written, and it is critical they are correct, or you can get control of the entire machine and do whatever you want. We have seen, in overall system configuration where we have to load and authenticate the various firmware, that this is one of the riskier processes we have. It gives exposure to many things.

The correctness of mitigations and solutions to different types of attacks, such as avoidance of stale data in speculative execution (in side-channel attacks like Spectre).

If we fetch data from memory and then don't use that data, we still have access to it for long enough. The solution for some of this is to guarantee that whenever a micro-operation gets into a memory cluster and takes a fault, we write back only zero. We can formally verify that the solution is sound in all cases.
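The shape of that mitigation and its proof obligation can be sketched in Python (the `writeback` function is an illustrative stand-in for the pipeline's writeback stage, not Intel's actual logic): on a fault, forward zero instead of the stale fetched value, and the claim to prove is universal over all faulting accesses.

```python
# Hypothetical model of the zero-on-fault mitigation: the value the
# pipeline forwards to dependent micro-ops after a memory access.
def writeback(fetched_value, faulted):
    """Forward zero on a fault so stale data never reaches speculation."""
    return 0 if faulted else fetched_value

# The formal claim is universal: for EVERY faulting access, the forwarded
# value is zero. Brute force over a small value space mimics that here.
assert all(writeback(v, True) == 0 for v in range(256))
```

A model checker discharges the same claim symbolically, over all values and all pipeline states, which is what "sound in all cases" means.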

Formal Solutions

Some existing formal solutions that they are trying to use are:

Symbolic simulation for hardware and software/firmware. This is a technology needed for other approaches below.

Software/firmware concolic test generation for improved coverage in critical areas.

Concolic is a cross between concrete and symbolic execution, used to overcome the capacity barrier in reaching execution paths. It is starting to succeed. We are trying to improve coverage, and since coverage is measurable, it is not too difficult to convince management.
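A heavily simplified Python sketch of the idea (all names are hypothetical; a real concolic engine records symbolic path conditions and asks an SMT solver for an input that flips a branch — here a brute-force search over a tiny domain stands in for the solver):

```python
# Toy program under test: two nested branches give three feasible paths.
def program(x):
    path = []
    if x > 10:
        path.append("x>10")
        if x % 2 == 0:
            path.append("even")
        else:
            path.append("odd")
    else:
        path.append("x<=10")
    return tuple(path)

def concolic_explore(seed, domain=range(0, 32)):
    """Start from one concrete seed, then hunt for inputs that drive the
    program down paths not yet seen (solver stand-in: brute force)."""
    seen_paths = {program(seed)}
    inputs = [seed]
    for candidate in domain:  # a real tool would get these from an SMT solver
        p = program(candidate)
        if p not in seen_paths:
            seen_paths.add(p)
            inputs.append(candidate)
    return inputs, seen_paths
```

The payoff Eli describes is exactly this loop at firmware scale: each generated input is a concrete test that provably covers a new path, so the coverage gain is directly measurable.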

Software/firmware model checking. Eli admitted that in hardware we've made a lot of progress over the last decade or two, but software has progressed more slowly, which is surprising in some ways since formal work originally started with proving software correct, and hardware came later.

Hardware model checking (e.g., with Formal Property Verification, FPV), also used for verifying mitigations for some of the existing side-channel attacks. "This is what you all know," Eli told the attendees.

Information flow analysis, such as taint propagation. Tainting is where you mark values in some way, propagate the marks through the machine, see where they reach, and make sure they don't get anywhere they should not.
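The propagation rule is simple to sketch in Python (the `Tainted` wrapper and `sink` function are illustrative assumptions, not any tool's actual API): any operation involving a tainted value yields a tainted result, and the check fires if marked data ever reaches a place it should not.

```python
# Hypothetical sketch of taint propagation: values carry a taint bit that
# survives computation, and a "sink" models an untrusted destination.
class Tainted:
    def __init__(self, value, tainted=False):
        self.value = value
        self.tainted = tainted

    def __add__(self, other):
        # Result is tainted if either operand is tainted.
        if isinstance(other, Tainted):
            return Tainted(self.value + other.value,
                           self.tainted or other.tainted)
        return Tainted(self.value + other, self.tainted)

def sink(v):
    """Untrusted destination: marked data must never arrive here."""
    if isinstance(v, Tainted) and v.tainted:
        raise RuntimeError("tainted value reached an untrusted sink")
    return v
```

In hardware information-flow analysis the same rule is applied per signal per cycle, and the tool checks that no tainted bit ever reaches an architecturally visible output it shouldn't.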

Challenges

Eli wrapped up with what he sees as future challenges, and a call to action.

  • We need a security spec language that ideally serves both formal and non-formal methods, analogous to SystemVerilog Assertions.
  • We need to formalize hardware/firmware interfaces in a scalable, automatable manner (polling, interrupts, side-effects...).
  • We need ease of use and assimilation into existing non-formal validation work flows, so we can avoid costly duplication.
  • Cross-layer analysis.
  • And, as always, capacity and scalability.

Security verification is a critical and growing domain that needs thorough verification methodologies and solutions, and formal can play a role here. But it will require a lot of effort from both academia and the EDA industry to make this a reality.

 

Sign up for Sunday Brunch, the weekly Breakfast Bytes email.

Tags: Jasper User Group, formal, Jasper, JasperGold, Formal verification