Venkatramanan, CTO of CVC, set the stage for the discussion by describing the
verification challenges faced by his customers, the role that random tests play,
and the limitations of this approach. Then each of the other three panelists
gave a short presentation with his perspective on the problem and possible
solutions. I began by presenting the following diagram, which represents a
simplified view of the evolution of the testbench:
Coverage-driven verification (CDV) is the dominant advanced technique in use today and is well
supported by the new Universal Verification Methodology (UVM) from Accellera. However, achieving
100% coverage by this method alone can be quite difficult; coverage tends to
increase dramatically early in the project but flatten out asymptotically
toward the end. I said that, from my perspective, there were three major
approaches beyond CDV to try to close any coverage gaps. The simplest, and most
established, is further automation of the CDV process by trying different
random seed values, biasing the pseudo-random distribution of inputs, varying
input constraints, or using "soft" constraints as guidance rather than firm requirements. This
approach is well understood, but it is not very effective and lacks any feedback loop.
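To make that concrete, here is a minimal SystemVerilog sketch of the kind of biasing and soft constraints I mean; the packet class and the particular weights are illustrative assumptions rather than anything from a real testbench:

    class packet;
      rand bit [7:0] len;
      rand bit       is_error;

      // Bias the pseudo-random distribution: mostly short packets,
      // with an occasional long one.
      constraint len_c { len dist { [1:16] := 8, [17:255] := 1 }; }

      // A "soft" constraint acts as guidance rather than a firm rule,
      // so an individual test can override it to force error packets.
      constraint err_c { soft is_error == 0; }
    endclass

Re-running such a test with different seeds, or adjusting the weights, simply re-samples values under these constraints; nothing feeds the coverage results back into the stimulus, which is why coverage closure by this route alone tends to stall.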
In contrast, I said that I had heard positive reports about the effectiveness of the "overlay" approaches, such as the graph-based technique
used by Breker Systems to traverse the full coverage space more predictably. This
requires a non-trivial effort to specify the intended behavior using a graphical
format or language of some kind beyond the existing testbench constraints. Breker
founder Adnan Hamid presented their approach later in the panel, stating that
their customers found their descriptive format well suited for system-level verification.
My presentation focused on the middle approach in the above diagram, in which formal
technologies are used to target the coverage gaps. Many coverage metrics,
including code coverage, SystemVerilog cover properties and PSL cover
properties, can be directly used as targets for formal analysis. This presents
an alternative way to achieve coverage goals, and has the unique benefit that
formal can also detect unreachable coverage. This is important to know since
otherwise constrained-random simulation can run forever trying to reach
coverage that can never be hit with any stimulus. The only
downside of using formal is that it tends to be more effective on
sub-blocks of the design than on a full-chip DUT.
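As a simple illustration (my own example, not one shown on the panel), a SystemVerilog cover property like the one below can be handed to formal as a target; the tool either produces a trace that reaches it or proves it unreachable. The FIFO signal names are assumptions made for the sketch:

    module fifo_cover_targets (input logic clk, rst_n, full, wr_en);
      // Corner case worth reaching: a write attempted while the
      // FIFO is already full.
      cover property (@(posedge clk) disable iff (!rst_n) full && wr_en);
    endmodule

In practice a small module like this is usually attached to the DUT with a SystemVerilog bind, so that simulation and formal see exactly the same cover targets.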
Users of Cadence Incisive Enterprise Verifier (IEV) get several novel benefits in
this flow. First, IEV uses a combination of formal and simulation-based
techniques and so can hit coverage, especially in a large DUT, that would be
hard for a pure formal tool. Second, IEV can run simulation stimulus with all
passive elements from the testbench included. This allows IEV to hit additional
coverage, such as SystemVerilog testbench coverage, that cannot be directly targeted by
formal. Finally, all the metrics gathered by IEV are combined with the
results of constrained-random simulation so the verification engineer has a
single, unified view of status.
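For readers less familiar with the testbench side, the fragment below is the kind of SystemVerilog testbench coverage being referred to; it lives in a passive monitor, so a pure formal tool cannot target it directly, but a simulation run that keeps the passive testbench in place can still hit it. The transaction fields are illustrative assumptions:

    class pkt_coverage;
      bit [7:0] len;
      bit       is_error;

      covergroup pkt_cg;
        len_cp : coverpoint len { bins short_pkt = {[1:16]};
                                  bins long_pkt  = {[17:255]}; }
        err_cp : coverpoint is_error;
        len_x_err : cross len_cp, err_cp;  // e.g. long error packets
      endgroup

      function new();
        pkt_cg = new();
      endfunction

      // Called by a passive monitor for every observed packet.
      function void sample_pkt(bit [7:0] l, bit e);
        len = l;
        is_error = e;
        pkt_cg.sample();
      endfunction
    endclass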
However, as I pointed out on the panel, there is a gap in this approach.
Constrained-random stimulus is generated using testbench constraints, while
formal analysis requires formal constraints, which are derived from assertions defining
the legal inputs for the DUT. Formal tools cannot read most testbench
constraints, so a user who doesn't already use assertions will have to write
at least enough of them that formal considers only legal input values and
sequences.
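To give a feel for what "enough of them" might look like, here is a minimal sketch of an input assumption; the handshake signals and the protocol rule are my own illustrative assumptions, not from any particular DUT:

    module dut_input_assumptions (input logic clk, rst_n, req, ack,
                                  input logic [31:0] data);
      // Keep formal analysis on legal stimulus: once req is raised it
      // stays high, with data held stable, until ack is seen.
      assume property (@(posedge clk) disable iff (!rst_n)
                       req && !ack |=> req && $stable(data));
    endmodule

A handful of assumptions like this, typically attached to the DUT with bind, is often all that is needed for formal to restrict itself to legal input values and sequences.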
I then suggested that the next panelist, NextOp CEO Yunshan Zhu, might have a
suggestion on how to close this gap. Yunshan presented NextOp's approach: by
generating assertions and coverage from an existing simulation testbench,
NextOp offers a unique link between the worlds of simulation and formal. The
proposed combined solution from Cadence and NextOp has the potential to address
coverage gaps without requiring users to learn any new languages or develop any
new models. Yunshan and I presented this proposal to the DVCon BoF audience,
and it generated some good discussion.
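Purely as a hypothetical illustration of what assertion synthesis might produce (this is not actual NextOp output), a property mined from simulation traces could capture a relationship the testbench never checked explicitly:

    module mined_properties (input logic clk, rst_n, req, gnt);
      // Hypothetical synthesized property: in every simulation observed,
      // a request was granted within one to three cycles.
      assert property (@(posedge clk) disable iff (!rst_n)
                       req |-> ##[1:3] gnt);
    endmodule

Properties inferred this way, from behavior the existing constrained-random environment already exercises, could then serve as the assertions that the formal flow described above needs, without any new languages or hand-written models.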
I'm presenting the proposal here in the hope that you will think about it, ask
questions, make comments, and generate some ongoing discussion. I'm listening!
The truth is out there...sometimes
it's in a blog.
For case 1, the stimulus is still good, so NextOp can generate high-quality assertions and cover properties. In addition, the cover properties may point out coverage holes that guide the user to add test vectors.
For case 2, either (a) NextOp assertions from neighboring blocks may trigger and expose the bug (sometimes an assertion firing points to an RTL bug in a neighboring module), or (b) NextOp may generate a property that "does not make sense" to the designer, and in reviewing the properties the bug is exposed.
We've seen many real-life examples of case 1, case 2a, and case 2b.
Hope this helps.
Yuan Lu, NextOp
Thanks for your excellent question. NextOp does not claim that its assertion synthesis can generate all assertions or catch all bugs. It is certainly possible that its assertions could miss a bug.
However, the presence or absence of checkers in the simulation testbench does not matter. NextOp's tool reads only the design and the simulation results, so as long as the simulation does a good job of exercising the design, high-quality assertions will be generated. In particular, it's quite likely that NextOp will generate some assertions that do not have any corresponding checkers in the testbench.
Thank you for the blogs on functional verification.
I have a basic question on the generation of SVA and coverage using a tool.
My concern is that assertion synthesis may miss a hidden bug in the design, since it only reads the RTL and the simulation database to generate the SVA.
What if there is a bug in the RTL and there were NO checkers associated with it in the constrained-random environment?
That would mean the SVA is generated by the tool from faulty RTL and from CRV simulations that are missing the relevant checker.
Please clarify.