As outlined in a prior post, new advances in formal and multi-engine technology (like Incisive Enterprise Verifier, or "IEV") enable users to do complete verification of design IP using only assertions (i.e., no testbench required!) -- especially for blocks of around 1 million flops or fewer. Given this premise, it's natural to ask: "OK, but how does formal and multi-engine assertion-based verification (ABV) fit into the coverage- and metric-driven workflow that I am (A) familiar with, and (B) know is effective in measuring my progress?" In the following series of blog posts, I'll answer these important questions. To spare you some suspense: conceptually, the coverage-driven verification terms and methodologies you are familiar with when writing testbenches and/or constrained-random stimulus in e or SystemVerilog -- terms like "constraints," "code coverage," and "functional coverage" -- have essentially the same meaning in the formal-centric ABV world.
Taking a step back, coverage collection -- whether you are collecting code coverage, functional coverage, assertion coverage, or all of them simultaneously -- is a continuous process. Furthermore, at different phases of verification you typically ask different questions, depending on what insights you need from the coverage. Long story short: in a formal verification context, it all boils down to three primary questions:
We will address the first question below -- the other two will be addressed in subsequent posts.
How good are my formal constraints?
First, consider the three-segment partition of the verification phases illustrated in the figure below.
Figure 1: Phases of verification and coverage collection
Working from the top of the block diagram down, the first phase, "constraints development and design exploration," is quite simply the development of constraints on the universe of possible stimulus. As noted above, this is conceptually similar to a constrained-random stimulus setup in dynamic simulation. However, in this process the user describes the valid behavior of inputs using Property Specification Language (PSL) or SystemVerilog Assertions (SVA) properties. As you might expect, if the inputs you want to drive follow popular standard protocols (OCP, AHB, etc.), then standard verification IP components are likely available to give you a running start (Cadence offers multiple assertion-based verification components for this purpose).
Once your PSL/SVA properties, a.k.a. "constraints," are written, you apply the formal tool to generate traces from these constraints and start running them on the design. Again, this is similar to running tests in simulation -- it's just that in this case the formal engines do the work instead of an event-driven simulator.
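To make the analogy concrete, here is a toy Python sketch of the idea -- purely illustrative, not the IEV flow or any Cadence API. A "constraint" is modeled as a predicate over input traces, and an exhaustive "engine" enumerates every legal trace up to a given depth. The req/ack handshake rule is a hypothetical example of the kind of input behavior you would normally express in PSL/SVA.

```python
from itertools import product

# Toy analog of constraint-driven trace generation (not a real formal engine):
# a "constraint" is a predicate over an input trace, and the "engine"
# exhaustively enumerates every trace of a given depth that satisfies it.

def constraint(trace):
    """Hypothetical input rule: once req is asserted, it must stay
    asserted until ack arrives (a common handshake constraint)."""
    pending = False
    for req, ack in trace:
        if pending and not req and not ack:
            return False  # req dropped before ack: illegal stimulus
        pending = (pending or bool(req)) and not ack
    return True

def legal_traces(depth):
    """Exhaustively enumerate all legal (req, ack) traces of `depth` cycles."""
    for trace in product(product([0, 1], repeat=2), repeat=depth):
        if constraint(trace):
            yield trace

print(len(list(legal_traces(2))))  # 15 of the 16 possible 2-cycle traces are legal
```

Exhaustive enumeration only works for toy state spaces, of course; real formal engines reason symbolically rather than enumerating traces one by one.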
Once engaged in the process, you will inevitably need to correct and refine your constraints as design problems/bugs become apparent. In particular, once you are up to speed it's clearly desirable to assess exactly how good or bad your constraints are. Specifically:
1 - Are constraints hiding some legal behavior ("over constraining")?
2 - Are constraints allowing some illegal behavior ("under constraining")?
Let's address these questions one-by-one.
Key Question 1: Are constraints hiding some legal behavior (Over Constraining)?
This question is answered by reachable coverage collection. In this process, covers (or "coverage points," whichever term you prefer) are automatically generated by the tool, which then generates a "witness trace" from your constraints (a machine-generated "directed test," if you prefer) to see whether all lines of the RTL design can be reached and all states of its finite-state machines are traversable. These covers are called dead-code checks (for lines of RTL) and FSM-state checks (for states of RTL finite state machines) in IEV. The best part -- and the advantage of formal over dynamic simulation -- is that because formal algorithms are mathematically exhaustive, you can conclusively fail a dead-code check. For example, formal can tell you with mathematical certainty that no legal stimulus could ever hit a given line of RTL. Fortunately, the tools enable the user to debug such scenarios and fix this "over constraining." (In the next blog post, we will provide details on debugging "over constraining.") The aforementioned FSM-state checks are executed and reviewed in the same way.
If all dead-code checks and FSM-state checks pass, then we know the constraints allow sufficient traces to be generated. However, the constraints could still be too "loose" and allow some illegal traces to be generated. This issue is addressed by the following question.
Key Question 2: Are constraints allowing some illegal behavior (Under Constraining)?
This question is answered during the "checks development and bug hunting" phase. In short, if your constraints allow some illegal behavior, then your property checks will fail. In this case, modify the offending constraints to fix the issue. Fixes to constraints and fixes to DUT bugs are treated the same way during this phase. Of course, your fixes could go too far and create an over-constraining situation as described above, so it is recommended that you perform reachable coverage collection on a regular basis. To rephrase: both the "constraints development and design exploration" and "checks development and bug hunting" phases participate in reachable coverage collection.
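Continuing the toy Python analogy (illustrative only, not the IEV tool, with a hypothetical one-grant DUT and checker): if we exhaustively run a checker property over every trace a constraint allows, a too-loose constraint shows up as constraint-legal traces on which the check fails, and refining the constraint makes those failures disappear.

```python
from itertools import product

# Toy sketch of the checks phase: exhaustively run a checker property over
# every input trace the constraint allows. A too-loose constraint admits
# traces that fail the check; refining the constraint removes them.

def dut_output(trace):
    """Hypothetical tiny DUT: a counter that accumulates grant pulses."""
    count, out = 0, []
    for grant in trace:
        count += grant
        out.append(count)
    return out

def check(trace):
    """Property check: the DUT's counter must never exceed 1."""
    return max(dut_output(trace)) <= 1

def loose_constraint(trace):
    return True                # under-constrained: any input trace is allowed

def tight_constraint(trace):
    return sum(trace) <= 1     # refined: at most one grant pulse per trace

def failing_traces(constraint, depth=3):
    """All constraint-legal traces of `depth` cycles on which the check fails."""
    return [t for t in product([0, 1], repeat=depth)
            if constraint(t) and not check(t)]

print(len(failing_traces(loose_constraint)))  # 4 failing traces: too loose
print(len(failing_traces(tight_constraint)))  # 0: the refinement fixed it
```

Note how fixing the constraint here is symmetric with fixing a DUT bug, as the post describes -- either change removes the failing traces; the coverage question is only which side was wrong.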
In summary, reachable coverage collection is done using automatically generated covers for dead-code checks and FSM-state checks. Formal, simulation, and mixed engines all participate in finding traces for these coverage checks, ensuring that the constraints allow sufficient legal traces to be generated. Conversely, the tool also uses the user's checks to confirm that no illegal traces can be generated from the constraints.
Coming up in Part II of this series, I'll address the next logical question in ABV coverage: "How good is my formal verification proof?" Until then, happy verifying!
Vinaya Singh, Architect, Cadence R&D
On Twitter: http://twitter.com/teamverify, @teamverify