At CDNLive in Bengaluru last week, Michal Siwiński gave a technology update on verification to everyone. Well, almost everyone: the PCB folks had already gone off to another room for their own update. The first thing he pointed out is that verification is becoming more varied, depending on the application.
"It would be a lot easier if there was just one big circle in the middle, so we just need one methodology," he said. Different types of systems obviously require different IP and subsystems, but they also require subtly different verification approaches.
One big issue is that software and verification make up around 80% of the cost of an SoC, yet only 20% of the spend on design automation goes to verification. Customers are actually underspending on automation and overspending on people. They should spend more money on tools and less on headcount, running shorter projects with smaller teams.
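As a back-of-the-envelope illustration of that trade-off, consider a simple cost model. All of the numbers below are hypothetical, invented purely to show the shape of the argument; only the general tools-versus-people framing comes from the talk:

```python
# Back-of-the-envelope model of the tools-vs-people trade-off.
# All figures are hypothetical illustrations, not numbers from the talk.

def project_cost(headcount, months, cost_per_eng_month, tool_spend):
    """Total project cost = labor (headcount x schedule) + tool spend."""
    return headcount * months * cost_per_eng_month + tool_spend

# Baseline: long project, modest tool spend.
baseline = project_cost(headcount=50, months=18,
                        cost_per_eng_month=15_000, tool_spend=2_000_000)

# Alternative: double the tool spend, and assume better automation
# cuts both the schedule and the headcount.
tooled_up = project_cost(headcount=40, months=12,
                         cost_per_eng_month=15_000, tool_spend=4_000_000)

print(baseline)   # 15_500_000
print(tooled_up)  # 11_200_000
```

Even with double the tool budget, the shorter, smaller project comes out cheaper, because labor dominates total cost.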
The above picture shows the entire verification portfolio on a single slide. The foundation is the four best-in-class engines that underlie everything: JasperGold for formal verification, Incisive for simulation, Palladium for emulation, and Protium for FPGA prototyping.
The next important factor is to make them all, as much as possible, work the same way: read the same RTL, read the same vectors (except for formal, obviously), use the same definition of code coverage, and so on.
Cutting the other way across the diagram are the flow-driven engines such as VIP (verification IP), vManager for metrics, Indago for debug, and Perspec for software-driven test. These should also "work the same" in the sense that they can work with any of the underlying technologies (and in some cases with more than one at the same time).
I won't spend any time on the basic capabilities; they are well known. I will focus on what is recently new or, in some cases, roadmap items that have not yet been completed.
JasperGold: New are SuperLint, clock-domain crossing (CDC), and Safety. On the roadmap are covergroup support for FPV/COV/UNR, JasperGold COV integration, and merged metrics between formal and simulation.
Incisive: The big new thing is the addition of RocketSim. For roughly the last ten years, it has become increasingly hard to improve simulation speed by more than incremental amounts. RocketSim is the beginning of a new era, with speedups of 2-6X for RTL, 5-10X for gate-level, and 10-30X for gate-level DFT. Currently, Incisive and RocketSim communicate through an API, but work is going on to integrate them more tightly for further increases in performance.
Palladium: The Palladium Z1 enterprise-class emulator was announced last November. It is doing very well in the marketplace, with a 250% year-over-year increase in installed emulation capacity. What makes it attractive is not just its raw performance but the number of ways in which it can be used, the number of simultaneous users, and (for an emulator) the low total cost of ownership.
Protium: FPGA prototyping has always been hampered by bring-up time, which has historically been measured in months. The Protium flow uses the front end of the emulation flow, and emulation has already been optimized to work like simulation. So what used to take three to four months now takes a couple of weeks.
The next opportunity is to build bridges between the engines so that they can "cooperate". The above diagram shows some of this. For example, the proof engines that underlie formal can be used to do unreachability analysis for simulation coverage. Another is the capability to hot-swap a design between emulation and simulation so that you can, for example, use emulation to get quickly to an area of interest and then switch to simulation to debug a problem.
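To make the unreachability example concrete, here is a minimal sketch of the idea. The function name, bin names, and numbers are all invented for illustration (this is not Cadence's actual API): bins that the formal engine has proven unreachable are removed from the coverage denominator, so simulation coverage reflects only what can actually be hit:

```python
# Hypothetical sketch: folding formal unreachability results into a
# simulation code-coverage figure. All names and data are invented.

def coverage(hit_bins, all_bins, unreachable_bins=frozenset()):
    """Coverage over reachable bins only: hit / (all - proven unreachable)."""
    reachable = set(all_bins) - set(unreachable_bins)
    return len(set(hit_bins) & reachable) / len(reachable)

all_bins = {"b0", "b1", "b2", "b3", "b4"}
hit = {"b0", "b1", "b2"}   # bins covered in simulation
unreachable = {"b4"}       # proven unreachable by the formal proof engines

print(coverage(hit, all_bins))               # 0.6  (3 of 5 bins)
print(coverage(hit, all_bins, unreachable))  # 0.75 (3 of 4 reachable bins)
```

The payoff is that engineers stop chasing coverage holes that no stimulus could ever close.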
I won't talk about Perspec much here since it is the topic of tomorrow's Breakfast Bytes post. But it provides software-driven testing with portable stimulus. In fact, voting takes place on Wednesday on the Accellera Portable Stimulus Standard, largely created by Cadence.
The digital world is not an island and something like 80% of chips are mixed signal to some extent. So there is ongoing work on mixed-signal verification, such as getting Virtuoso Verified metrics into vManager (work in progress).
ADAS and autonomous vehicles, along with the ISO 26262 standard, have driven(!) a huge increase in the importance of safety and reliability, with their long hardware/software test cycle and a focus on functional safety. One challenge is that historically automotive has used old processes that have had years of characterization and qualification. The compute requirements of ADAS mean that it needs to use more advanced process nodes, but these processes are inherently less well characterized (especially with respect to aging) and are fundamentally less reliable. There is a lot of potential for extending JasperGold into formal safety analysis.
So we are entering a new era in verification where markets like consumer and IoT require a huge amount of power verification and mixed signal. In the middle, servers and networking build the biggest chips and software loads, needing the most powerful engines. And to the right, with automotive and mil-aero, it is all about safety and compliance.
Next: A Perspective on Perspec
Previous: 5G, Coming Soon to a Phone Near You