From the hacking of VTech electronic learning devices to backdoors in Juniper firewall equipment and the massive Takata airbag recall, 2015 was, unfortunately, a headline year for safety and security failures. Such high-profile product failures also put the vulnerability of electronic components in the spotlight.
Concerned consumers and businesses will continue to place a premium on safety and dependability, which, in turn, will accelerate the development of electronics-centric safety solutions. For design engineers, this momentum represents a call for more rigor in design and verification methodologies.
“You want to make sure the design works as intended, that the device works before you ship it,” notes Steve Carlson, product management group director at Cadence. “You want it to fail as gracefully as possible when it does.”
With safety compliance, various industries have applied a fair amount of rigor to their design processes, much of it aimed at providing traceability at each step. Traceability allows manufacturers to pinpoint the root cause of problems more accurately. For example, the automotive industry is guided by ISO 26262, which addresses the functional safety of electrical and electronic systems in series-production passenger cars. The standard covers many aspects of safety-related automotive development, including the qualification of tools used in the development process. Considering the prevalence of electronics in most major automotive components—and the frequency of vehicle recalls—having such standards is a step in the right direction. But it's still not a panacea.
According to Carlson, we’re now moving into an era where design engineers are focusing on the level of verification achieved via metric-driven verification. They’re seeking answers to questions like: What constitutes a well-tested design? How well did this design process cover the litany of tests outlined at the start?
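Those coverage questions can be made concrete. The sketch below (in Python, with hypothetical bin names standing in for a real verification plan) shows the bookkeeping at the heart of metric-driven verification: enumerate the conditions the test plan requires, record which ones the tests actually exercise, and report the fraction covered.

```python
# Minimal sketch of functional-coverage bookkeeping, the core idea behind
# metric-driven verification. Bin names are illustrative, not from any
# real verification plan.

class CoverageModel:
    def __init__(self, bins):
        # Each bin is a named condition the test plan says must be exercised.
        self.hits = {name: 0 for name in bins}

    def sample(self, name):
        # Called whenever the testbench observes a condition of interest.
        if name in self.hits:
            self.hits[name] += 1

    def report(self):
        # Fraction of the plan that has been exercised at least once.
        covered = sum(1 for count in self.hits.values() if count > 0)
        return covered / len(self.hits)

plan = CoverageModel(["reset", "overflow", "back_to_back_writes", "error_inject"])
for event in ["reset", "overflow", "reset"]:  # events observed in simulation
    plan.sample(event)

print(f"coverage: {plan.report():.0%}")  # 2 of 4 bins hit -> 50%
```

In a real flow this role is played by SystemVerilog covergroups and a coverage database, but the answer to "what constitutes a well-tested design?" takes the same form: a measured fraction of an explicit plan, not a gut feeling.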
From a design architecture standpoint, says Carlson, there's plenty of opportunity to improve on the idea of "failing gracefully." We can now test more thoroughly than ever before, he noted, and we're getting better at understanding resiliency to failure and at testing for unexpected behaviors in the presence of faults. And as we learn more about how designs behave, we should be able to build stronger security into the SoC architecture itself.
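One classic architectural pattern for failing gracefully is redundancy with voting. The toy sketch below (Python standing in for hardware, with a deliberately faulty replica as the assumption) illustrates triple modular redundancy: three copies compute the same result, and a majority vote masks any single failing copy.

```python
# Sketch of one "fail gracefully" pattern: triple modular redundancy (TMR).
# Three replicas compute the result; a majority vote masks a single fault.

def vote(a, b, c):
    # Majority vote: any two replicas that agree outvote the third.
    if a == b or a == c:
        return a
    return b  # b == c here (a lone disagreement from replica a)

good = lambda x: x * 2
faulty = lambda x: x * 2 + 1  # models a replica with an injected error

result = vote(good(21), faulty(21), good(21))
print(result)  # 42: the faulty replica is outvoted and masked
```

Real TMR logic lives in silicon, not software, but the design trade-off is the same one Carlson describes: the system degrades predictably under a single fault instead of failing outright.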
Often overlooked, however, is the associated reliability analysis that can spotlight likely points of failure. “It’s an art and experience kind of thing,” said Carlson. “Cadence continues to develop a platform for this multi-domain analysis, providing an ability to look at performance, power, thermal, reliability—all of these things together. Dividing and conquering has been a very powerful approach, but that compartmentalization leaves blind spots.”
Investing in a rigorous metric-driven methodology, in which the chip and the software shipped with it are co-verified early on, is critical to ensuring electronic design safety. Also important: analyzing different failure scenarios (for example, through fault injection), tapping the exhaustiveness of methods like formal analysis, and using tools for rapid prototyping and coverage-driven test development.
There’s always the risk that aggressive market-window pressures could detract from the focus on safety and security. But having a toolset and methodology that can automate some aspects of design verification can go a long way in fostering the customer loyalty that reliable, well-designed products can bring.