Welcome back to this account of the IP Security Panel at the Accellera Luncheon at DAC 2019. The last installment covered who was seated on the panel and the moderator's questions; now it's time to talk about what the audience had to ask.
The first question related to that morning's keynote speaker, and that speaker's one criterion for IP developers with regard to security: do they have a plan for the post-hack environment? How do you regain control of a hacked device? The questioner asked: is this a good strategy?
Intel's Brent Sherman seemed to think so, going so far as to call it the "holy grail." He rephrased the question as "how do I un-compromise my design if it's been compromised?" Being able to "cure" a security breach quickly is a good middle ground between current security measures and a mythical future-proof design that closes every possible security hole and anticipates every attack. The panel agreed it was a good idea but a distant prospect. Right now, security measures mostly protect the critical components under the designer's direct control; there's very little in place to stop an attacker from getting inside the design through acquired IP or later in the design chain. On the question of how much effort is worth devoting to keeping attackers out of a design entirely, the panel agreed it's a matter of economics: a lawn sprinkler warrants far less investment in overall system security than a missile defense system, because the economic impact of a successful attack is so much smaller.
Next, an audience member responded to ADI's Lei Poo and her statement that we need to create a "culture of security," asking: how does one make a "culture of security" fun, as Lei put it? Lei replied that it's a challenge, but that getting people engaged is the key. Most verification engineers know a little about security architecture, but few know it in depth. To create this "culture of security," verification engineers have to see the big picture: they have to understand how security permeates every stage of the design life cycle. Engineers generally find new challenges fun. By presenting security as a new problem to solve, the hope is that they will tackle it with the same vigor as any other intellectual challenge, rather than treating it as a chore to tick off on the way to tapeout.
Another question from the audience: how do we ensure that we don't need to reinvent security with every new design? There has to be a methodology so that security isn't a monumental undertaking every time. Tortuga Logic's Andrew Dauman said that the automation technology is emerging, but that arriving at a security methodology is the end goal. Threats are still too diverse, and there are too many gaps in designers' understanding of security requirements. A piecemeal methodology is possible, but nothing yet gives users who require tight security an off-the-shelf methodology that would generate added confidence. Lei suggested a security assurance process: what can we reuse from previous security design cycles? What can we learn from each iteration? When IP, systems, or components are reused across designs, some of the security information can be reused as well.
What should system integrators expect from IP providers in terms of security? Interfaces can only show so much, after all. Brent said this is one of the issues the Accellera IP Security Assurance Working Group is addressing. For example, having secret test modes that end consumers never see is fine, but invoked at the wrong time, they can cause breaches. Those test modes aren't exposed in interfaces (hopefully), but they're often required for development. Someone integrating third-party IP needs to know about all of the test modes so they can adequately plan around their existence. Lei expanded on this, saying that integrators should also get to see the testbench used to verify third-party IP. Andrew added that people who buy pre-packaged secure IP generally do so because they lack the manpower, knowledge, or time to create their own, which makes the items Brent and Lei mentioned all the more important. The panel also stressed that it should be possible to confirm the IP was not tampered with in transit.
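That last point, confirming a delivered IP package hasn't been tampered with in transit, can be as simple as checking a cryptographic digest published by the provider. A minimal sketch in Python (the file path and expected digest here are hypothetical, not from any panelist's flow):

```python
import hashlib

def sha256_of(path: str) -> str:
    """Compute the SHA-256 digest of a file, reading in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_ip_package(path: str, expected_digest: str) -> bool:
    """Return True only if the delivered IP package matches the
    digest the provider published over a trusted channel."""
    return sha256_of(path) == expected_digest
```

In practice a digital signature from the provider is stronger than a bare digest, since the digest itself has to arrive untampered for the check to mean anything.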
Following that was a brief discussion about the relevance of the verification testbench to security. Verification shows that an IP does what it's supposed to do, but security cares about whether an IP is doing what it's not supposed to do. "Negative" verification was proposed; this would include fault injection, where an IP is subjected to all sorts of stimuli to try to make it misbehave. The panel again stressed the importance of scaling security requirements to the task at hand: don't break the bank securing your microwave, but be prepared to invest heavily where the economic impact of a breach is greater.
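To illustrate the fault-injection idea, here's a toy software model (hypothetical, not any panelist's tool) of an 8-bit register protected by a parity bit. The negative test flips bits the design never would on its own and checks that the protection actually catches the corruption:

```python
def parity(word: int) -> int:
    """Even parity bit over an 8-bit word."""
    return bin(word & 0xFF).count("1") % 2

class ParityRegister:
    """Toy model of an 8-bit register protected by a parity bit."""
    def __init__(self, value: int):
        self.value = value & 0xFF
        self.parity = parity(self.value)

    def check(self) -> bool:
        """Detect corruption of the stored value via the parity bit."""
        return parity(self.value) == self.parity

def inject_bit_flip(reg: ParityRegister, bit: int) -> None:
    """Fault injection: flip one bit of the stored value without
    updating the parity bit, as a transient fault would."""
    reg.value ^= 1 << bit

# Negative testing: every single-bit fault should be detected.
for bit in range(8):
    reg = ParityRegister(0b10110010)
    inject_bit_flip(reg, bit)
    assert not reg.check()  # single-bit faults are always caught
```

Injecting two flips into the same register defeats the parity check entirely, which is exactly the kind of gap this style of testing is meant to expose.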
The last question was about the differences and similarities between security and functional safety, recalling the earlier point about ensuring that a system doesn't do the wrong things in the presence of certain stimuli. Andrew said the two cannot be separated; while not the same, they are closely related. He mentioned the Chrysler Jeep security breach from a few years back: a security problem that turned into a functional safety problem. Brent summed up the relationship this way: "If it's not secure, it's not safe; if it's not safe, it's not secure." Security can piggyback off the ISO 26262 standard, but there's still a way to go. Again the ever-present economic specter arose: cars are a target right now not because of the incentive to disrupt one car, but because of the larger impact on consumer confidence. What happens if the incentive to hack a car rises faster than the investment needed to close the holes? This led to the panel's closing statement: there is no end point in security. Threats will always exist, so new defenses must always rise to meet them. We were left with a question: are the tools themselves secure? Who secures the tools? What tools are used to secure the tools?
Down the rabbit hole we go.
Security is not a solved issue, and it likely never will be. As technology improves, attacks will become harder and harder to disrupt, so defense and detection technologies, while neither the flashiest nor the most directly revenue-generating, must rise to meet the occasion. Is the solution an economic incentive to prioritize the security of new IP, or a "culture of security" that leads engineers to focus on security on their own? In all likelihood, the answer is some of both, but as of right now, it's still unclear.
This panel will likely return next year, so keep an eye out as DAC 2020 rolls around. To say that security is a hot topic is underselling it: it's a necessary and pertinent topic that concerns the safety of devices we use every day. As more research comes out of academia and industry, new marketable technologies to help alleviate these security concerns will surely arise, and next year's panel will go into even more depth addressing them.
Join us again next year for another meeting of security’s top heroes!