There’s no better way to demonstrate the value of a verification tool than to find a killer bug in the customer’s design. When the verification engineer looks up from the screen and says, “Wow – we might never have found that!” we EDA vendors are equally happy. Finding a bug is a great way to conclude a technical evaluation with a prospective customer or drive wider usage of a tool at a current customer. If the bug is one that might have caused a chip turn, so much the better.
Of course, events are rarely quite that simple. One of the things I’ve seen again and again during my 16 years of verification applications and marketing is that many engineers are reluctant to admit that our tools found a true bug. Design errors that are found pre-silicon are dubbed “issues” or perhaps “problems” if they’re serious enough. You know the cliché about bugs that are missed and end up in the chip: these become “features” in the data sheet. Even if an error is deemed a bug, there’s still the question of whether it might have been found eventually by some other tool.
Whatever the term used, the goal for verification vendors is to help our customers find as many bugs as possible, as early in the development cycle as possible. These may include logic errors in the RTL design, timing errors, mistakes in analog and mixed-signal circuits, or lower-level implementation bugs. In this post we focus on "functional" bugs, which for the most part means cases in which the RTL implementation of the digital portion of the design does not match the intended functionality as expressed in the design specification.
In past blog posts, I have given some real-world examples of functional bugs that I encountered during my years in design and applications engineering. Most of these were found pre-silicon but one, alas, slipped through and contributed to a chip turn. Some bugs simply can’t be classified as features without compromising the end product. If the chip wouldn’t accomplish the tasks for which it was designed, or match up to competitive devices, these bugs must be fixed despite the time and cost involved.
This same statement is true for another category of bugs: performance shortfalls. At the MTV workshop covered in my most recent blog post, Amol Bhinge from NXP asked that standardization activities encompass performance verification as well. This reminded me of a point that I make frequently: a performance bug is a functional bug. After all, not meeting the performance goals in a project specification can render a chip uncompetitive and unsaleable as surely as breaking some part of the intended functionality.
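To make the point concrete, here is a minimal sketch of how a performance target can be checked exactly like any other functional requirement. The threshold, function names, and numbers are illustrative assumptions for this post, not taken from any real project specification or tool:

```python
# Hypothetical sketch: treating a performance shortfall as a functional
# failure. The spec target and names below are assumptions for illustration.

SPEC_MIN_THROUGHPUT = 0.90  # assumed spec: >= 0.90 transactions per cycle

def check_throughput(transactions_done: int, cycles_elapsed: int) -> bool:
    """Fail the check if measured throughput misses the spec target,
    just as a mismatched data value would fail a functional check."""
    throughput = transactions_done / cycles_elapsed
    return throughput >= SPEC_MIN_THROUGHPUT

# A run that completes 850 transactions in 1000 cycles produces correct
# data, yet it is still a bug against a 0.90 transactions/cycle spec.
assert check_throughput(950, 1000) is True
assert check_throughput(850, 1000) is False
```

The point of folding the performance target into the pass/fail check is that a shortfall surfaces in regression results the same way a data mismatch would, rather than being discovered late in performance sign-off.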
Another result of using EDA tools to verify a chip is finding “issues” or “problems” not in the design itself, but rather in verification files and models. Just as designers make mistakes when writing RTL, verification engineers make mistakes in assertions, testbench code, test suites, reference models, embedded software, automation scripts, and so on. Any verification failure is fundamentally a disagreement between the design and some sort of model. Our tools find many problems with verification code, and fixing these is an essential part of completing the design verification phase of a project.
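A small sketch can show what such a disagreement looks like when the bug lives in the verification code rather than the design. Everything here is hypothetical and simplified, with the design under test (DUT) and reference model reduced to plain functions:

```python
# Hypothetical sketch: a scoreboard mismatch caused by a buggy reference
# model, not by the design. All names are illustrative, not from any tool.

def dut_saturating_add(a: int, b: int, width: int = 8) -> int:
    """Model of the design under test: unsigned saturating adder (correct)."""
    max_val = (1 << width) - 1
    return min(a + b, max_val)

def ref_saturating_add(a: int, b: int, width: int = 8) -> int:
    """Reference model with a bug: it wraps where the spec says saturate."""
    return (a + b) & ((1 << width) - 1)  # verification bug, not a design bug

def scoreboard(stimulus):
    """Return the (a, b) pairs where DUT and reference model disagree."""
    return [(a, b) for a, b in stimulus
            if dut_saturating_add(a, b) != ref_saturating_add(a, b)]

mismatches = scoreboard([(10, 20), (200, 100), (255, 255)])
# Mismatches appear only where the sum overflows 8 bits; debugging them
# reveals that the reference model, not the design, violates the spec.
```

The failure report looks identical either way; only debugging reveals which side of the comparison is wrong, which is why fixing verification code is an unavoidable part of closing out a project.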
In my admittedly biased view, we vendors should be given credit for finding a genuine “bug” in many of these cases. Debugging the verification environment is part of the process, and if we can find as many problems as possible, as early as possible, we provide a real benefit to the project even if no killer design bugs are found. As you would expect, this is always a lively topic when discussing evaluation results or assessing the value a verification tool provided on an actual chip project.
Finally, I will note that portable stimulus and software-driven verification are excellent at finding all the categories of functional bugs I’ve discussed in this post. Future posts will discuss how we can stress-test a design thoroughly enough to get realistic performance metrics and give examples of the types of bugs we have found in both designs and verification environments. As always, your comments and thoughts are most welcome.
The truth is out there...sometimes it's in a blog.