In a Jan. 10 announcement of new Silicon Realization verification capabilities, Cadence has promised to "extend metric-driven verification from digital simulation to formal model checking." Here's some more detailed information about what that means and how it can help design and verification engineers meet their verification goals.
Many IC design teams use metric-driven verification (MDV) for simulation. The concept is simple: you capture your verification intent in an executable verification plan (vPlan), and as verification progresses you track coverage metrics, most notably code coverage and functional coverage. The plan can also track assertions, checks, and time-based data points. This makes it possible to determine when high-quality verification closure has been achieved, answering the toughest of all verification questions, "am I done yet?"
The four stages of MDV - plan, execute, measure, and react - are depicted in the diagram below. A recent Cadence whitepaper provides more information about MDV.
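To make the vPlan idea concrete, here is a rough sketch in Python. This is not Cadence tooling or the vPlan format -- all class and coverage-point names are hypothetical -- just a minimal model of tracking coverage goals and asking "am I done yet?":

```python
# Conceptual sketch of metric-driven verification (hypothetical names,
# not the actual vPlan format): track coverage goals and report closure.

class VPlan:
    def __init__(self, goals):
        # goals: mapping of coverage-point name -> required hit count
        self.goals = dict(goals)
        self.hits = {name: 0 for name in goals}

    def record_hit(self, name, count=1):
        # Called as results come in from a verification engine.
        if name in self.hits:
            self.hits[name] += count

    def coverage(self):
        # Fraction of coverage points that have met their goal.
        met = sum(1 for n, goal in self.goals.items() if self.hits[n] >= goal)
        return met / len(self.goals)

    def done(self):
        # "Am I done yet?" -- closure when every goal is met.
        return self.coverage() == 1.0

plan = VPlan({"fifo_full": 1, "fifo_empty": 1, "overflow_err": 1})
plan.record_hit("fifo_full")
plan.record_hit("fifo_empty")
print(plan.coverage())  # two of three goals met
print(plan.done())
```

The "react" stage of MDV is then a matter of directing new tests (or, as described below, formal engines) at the goals still unmet.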
Speaking Different Languages
To date, however, MDV has generally been limited to simulation. Separate teams typically tackle formal verification, and if they achieve a partial or complete proof on a block, this may not be reflected in the verification plan. The end result may be a lot of unnecessary or redundant simulation. Avoiding that extra work is a key motivation for extending metric-driven verification to formal model checking.
In a recent Q&A interview Alok Jain, who directs Cadence's R&D efforts in formal verification, noted that formal coverage metrics and simulation coverage metrics are very different. With formal coverage, you are trying to understand how good your constraints are or how complete your proof is. The "metrics" used for these purposes are often fairly crude or ad hoc, and have seen limited use.
In the simulation world, code coverage and functional coverage are widely used. There is no clear connection to formal coverage. Alok noted that "formal and simulation speak two different languages as they describe different aspects of the verification intent." One goal of the new Cadence verification announcement is to get formal and simulation technologies to speak the same language.
Speaking One Language
How can this be done? The new Cadence verification solution uses formal analysis in the Incisive Enterprise Verifier to generate simulation metrics. Formal analysis generates a witness trace for coverage points in the design. A witness trace is a sequence of values that ensures a coverage point can be hit. This witness trace can then be replayed in the Incisive Enterprise Simulator. This approach generates stimulus that can activate coverage points, and determines whether coverage points are reachable.
"So to put it another way," Alok said, "Incisive Enterprise Verifier has become a smart testbench generator. You generate testbenches that can hit all your functional coverage points. You replay these testbenches in the simulator and get simulation-oriented metrics."
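The flow described above can be sketched in a few lines of Python. This is a conceptual model only -- the function names and trace format are invented for illustration, not the IEV/IES interfaces. A formal engine either returns a witness trace (an input sequence that reaches a coverage point) or proves the point unreachable; replaying the trace in a simulator hits the point and contributes simulation coverage:

```python
# Sketch of witness-trace generation and replay (hypothetical interfaces).

def formal_witness(point):
    # Stand-in for a formal engine: returns a list of input vectors that
    # drive the design to the coverage point, or None if the point is
    # proven unreachable.
    traces = {
        "fifo_full": [{"push": 1, "pop": 0}] * 4,  # e.g. fill a depth-4 FIFO
        "overflow":  None,                         # proven unreachable
    }
    return traces.get(point)

def replay_in_simulator(trace, sim_step):
    # Drive each vector of the witness trace into a simulator step function.
    for vector in trace:
        sim_step(vector)

reachability = {}
for point in ["fifo_full", "overflow"]:
    trace = formal_witness(point)
    if trace is None:
        # Unreachable points can be excluded from coverage goals entirely.
        reachability[point] = "unreachable"
    else:
        replay_in_simulator(trace, sim_step=lambda vector: None)
        reachability[point] = "reachable"

print(reachability)
```

Note the two distinct payoffs: reachable points get stimulus for free, and unreachable points can be pruned from the plan instead of being chased with more simulation.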
With this connection, the verification plan can add two elements: the reachability of coverage points, and the proofs provided by formal analysis. As Alok noted, simulation alone can't exhaustively prove an assertion -- it can only track whether it passes or fails. Formal analysis can provide a complete proof by ensuring that there is no possible counter-example.
In support of the new capabilities, the vPlan provided by Incisive Enterprise Manager can track coverage point hits from both simulation and formal engines, and has added a column that shows whether coverage points are reachable.
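As a rough illustration of what merging the two engines into one plan looks like (hypothetical data and names, not the Incisive Enterprise Manager format), coverage hits from simulation and formal can be combined per point, annotated with the reachability column, and accompanied by proof status for assertions:

```python
# Sketch (hypothetical names) of a plan view merging a simulation engine
# and a formal engine, with a "reachable" column and formal proof status.

sim_results    = {"fifo_full": "hit", "fifo_empty": "hit"}
formal_results = {"fifo_full": "hit", "overflow": "unreachable"}
proofs         = {"no_data_loss": "proven"}  # exhaustive proof, not just "never failed"

points = sorted(set(sim_results) | set(formal_results))
for p in points:
    reachable = formal_results.get(p) != "unreachable"
    hit = sim_results.get(p) == "hit" or formal_results.get(p) == "hit"
    print(f"{p:12} reachable={str(reachable):5} hit={hit}")

for assertion, status in proofs.items():
    print(f"assertion {assertion}: {status}")
```

The key difference from a simulation-only plan is the last loop: a "proven" status is something no amount of passing simulation cycles can provide.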
Another Step Toward Silicon Realization
So what does all this have to do with Silicon Realization? Recall that the EDA360 vision of Silicon Realization calls for holistic solutions based on three requirements -- unified intent, abstraction, and convergence. "Intent" applies to verification as well as design, and metric-driven verification provides a way to represent verification intent and then monitor progress towards it. Now that capability has been extended into the formal realm. The end result will be greater convergence on that perennial question, "am I done yet?"
I assumed that a module being verified with formal would not have a dynamic simulation TB. That's where my thought process was going in one direction only.
Thanks much for the insight.
Yes, the playback of a trace from IEV on IES-XL does not require a separate testbench. In a sense, IEV automatically generates a testbench. One of the big benefits of this approach is that it lets you find bugs early in the design process, even before the testbench is ready.
In some specific situations, where the blocks are "formal friendly", you could even completely avoid writing a block-level testbench! Instead, you could close on this block with just IEV-based coverage and limit testbench writing to the higher (cluster or SoC) levels.
In other situations, you will still need to write a block-level testbench. However, in this case you can take the IEV-based coverage and merge it with testbench coverage, and you will still use the MDV solution. The added benefit is that you are also taking a coverage contribution from the formal environment (which you wrote to find bugs early in the design process), and thus avoiding some unnecessary or redundant simulation.
Thanks much for this interesting entry.
Assuming that playback of a trace from IEV on IES doesn't need a separate testbench, I am wondering whether the above technique avoids 'unnecessary or redundant simulation' or avoids 'rewriting a testbench for dynamic simulation to collect coverage'. Given a TB, I can always limit the number of dynamic simulations based on their contribution to coverage using eManager. Since MDV involves running coverage every now and then, the remaining redundancy will be minimal. Your comments?
Gaurav Jalan (whatisverification.blogspot.com)