Top 5 Stumbling Blocks In FPV Adoption

15 Oct 2008 • 5 minute read

My first post set the context for this blog. It also telegraphed my intention to set down a few of the reasons long-time simulation users, specifically verification engineers, initially struggle to apply formal property verification (FPV). Here is my Top-5 list, in no particular order.

1. Procedural Versus Declarative Expression Of Intent

Simulation languages like Verilog, SystemC, e and the rest are procedural languages (blurring the lesser differences between OOP, AOP and plain procedural code). Simulation testbenches that model the verification environment are typically written as procedural code, though sophisticated environments are increasingly shifting to declarative idioms through constraint specifications.

FPV environments, on the other hand, are almost completely declarative (PSL, SVA and OVL), whether the properties are assertions that state something about the design's behavior or constraints that model the design's environment.
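The contrast is easiest to see side by side. The sketch below uses a hypothetical req/ack handshake (the signal names clk, req, ack and flush are illustrative, not taken from any particular design): first a procedural checker as a simulation testbench might write it, then the same intent stated declaratively in SVA.

```systemverilog
// Procedural style: an explicit checking process, as a simulation
// testbench might code it. It tracks one request at a time.
always @(posedge clk) begin
  if (req) begin
    int n = 0;
    while (!ack && n < 8) begin
      @(posedge clk);
      n++;
    end
    if (!ack)
      $error("ack did not arrive within 8 cycles of req");
  end
end

// Declarative style: the same intent as an SVA property. The tool
// spawns a fresh evaluation thread per request, so overlapping
// requests are handled without any extra bookkeeping.
property req_gets_ack;
  @(posedge clk) req |-> ##[1:8] ack;
endproperty
a_req_ack: assert property (req_gets_ack);

// A constraint modeling the environment reads the same, only assumed:
m_no_flush_during_req: assume property (@(posedge clk) req |-> !flush);
```

Note that the property syntax is identical for assertions and constraints; only the keyword (assert versus assume) determines whether a line is a proof obligation on the design or a bound on its environment.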

I have found that, much like the procedural versus declarative debates in the AI community, the debate between FPV and simulation evangelists (the former in shorter supply) is ultimately irrelevant. I think there is increasing consensus that each has strengths that are complementary in most cases. The point remains that the two are distinct ways of thinking about verification problems and knowledge of one (idioms and patterns) does not always translate to the other.

2. If Not Simulation, Then What?

If only I had a penny for every discussion I have had about whether formal analysis is basically an under-the-hood simulation run within the set of constraints. While such constraint-based simulation is certainly possible, FPV does not use simulation to converge assertions to proofs. Instead, it relies on clever representations and algorithms to obtain mathematical proofs of the assertions about the design.

Considering that only a handful of people can talk intelligently about the algorithms used in FPV, this watered-down answer does not satisfy an engineer in the trenches. This is in contrast to the simulation world, where most engineers understand the mechanics of event-driven simulation and there is no notion of failing to converge to a desired result.

Verification engineers are paid to be suspicious, and they are properly suspicious of anything presented to them as black magic. Where I have encountered this reaction, I like to point to that poster child of formal methods: equivalence checking. Formal tools use algorithmic techniques to analyze each state transition only once, and can examine many state transitions simultaneously. Contrast this with simulation, which repeats the same state transitions many times over. It is in this sense that formal analysis increases coverage per unit time.

As in simulation, there is feedback from the analysis that allows the fine-tuning of the verification environment. Only, this feedback is different from that in simulation. It is a matter of training and application to get comfortable with any new technology and FPV is no exception.

3. The Capacity Issue

Simulation users make an implicit assumption about their environments - that they will scale with their design as long as they are able to crank the requisite number of simulation cycles. This is typically done by throwing more machines at the problem, or even relying on hardware accelerators for larger designs.

This is typically not true for FPV. Complexity in formal analysis is exponential in design size. Despite rapid strides in improved performance and capacity of formal engines, commercial FPV tools do not scale like simulation.

New users can get discouraged with FPV as a result. In almost every instance where I have seen this happen, the problem has been an imbalance between the chosen design and the user's experience. Choosing a target design (or function within a design) commensurate with the experience of the verification engineer is a critical component of successful FPV application. Which leads us to...

4. Inadequate Planning

Like simulation, FPV requires thorough planning to focus the verification effort. Unfortunately, it is common to encounter situations where FPV is applied without a set of clear goals. Perhaps this has to do with the economy of expression in property specification languages, which allows easy setup of an FPV environment. Be that as it may, experience shows that an application of FPV successful enough to justify the investment is predictable only in the context of a verification plan.

A plan should address the following questions, among others:

  • Can I split the design into functions and sub-functions?
  • Does the design have functions that are amenable to FPV?
  • What functions of the design do I intend to formally verify?
  • For each function, what are the assertions and constraints I require?
  • What functions in the design will not be verified with FPV, if any?
  • Am I looking for proofs, bounded proofs, bug-hunting or a mix of the three?
Note how the emphasis in the planning is not on describing coverage goals and test scenarios. Instead it is a hierarchical plan that decomposes the larger design problem into tractable sub-functions. Verification planning is an important topic in FPV and something that I plan to take up in future posts. I will only note that almost all FPV tools espouse a methodology that directly addresses planning.
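As a concrete, entirely hypothetical example, a plan entry for a small synchronous FIFO sub-function might answer the last three questions as follows (rst_n, wr_en, rd_en, full and empty are illustrative signal names, not from any real design):

```systemverilog
// Constraints: model the environment assumed for this sub-function.
m_no_read_when_empty: assume property (@(posedge clk) disable iff (!rst_n)
                                       empty |-> !rd_en);
m_no_write_when_full: assume property (@(posedge clk) disable iff (!rst_n)
                                       full |-> !wr_en);

// Assertions: what FPV is asked to prove about the flag logic
// (target: full proofs).
a_flags_exclusive:   assert property (@(posedge clk) disable iff (!rst_n)
                                      !(full && empty));
a_empty_after_reset: assert property (@(posedge clk)
                                      $rose(rst_n) |-> empty);
```

The data path of the same FIFO might be left to bounded proofs or bug-hunting, answering the last question in the list with a mix.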

5. Verification Closure

Most simulation users are familiar with metrics such as line, expression, toggle and functional coverage. Unfortunately, no comparable set of widely used and understood metrics exists for FPV. This is an area of active research, and there are several promising ideas. But the truth is that coverage metrics as traditionally understood by the simulation-savvy engineer do not translate very well to the FPV world.

Lacking a universally accepted coverage-closure metric, FPV application tends to be tracked in terms of properties that have converged to proofs. Sometimes a property does not converge to a proof. This can be frustrating to the verification engineer because there is no clear way to quantify the associated effort in project planning. In this sense, FPV almost becomes the kind of hit-or-miss process that is so (correctly) abhorred by verification managers.

I think current notions of coverage, in the context of formal methods, can be considered in two dimensions:
  • Coverage metrics for pure formal analysis: some form of structural coverage, or metrics for assertion completeness
  • Ways for formal methods to contribute to simulation coverage
It is not a stretch of the imagination to foresee a standard, technology-independent notion of verification coverage emerging in the next few years.
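On the second dimension, cover properties are one concrete bridge today. The sketch below reuses the hypothetical FIFO signal names from earlier sections: if formal finds a trace reaching the corner case, that trace can seed a directed simulation; if it proves the cover unreachable, the point can be waived from the simulation coverage goal.

```systemverilog
// A hard-to-hit corner: a read from a full FIFO followed immediately
// by a write. Formal either produces a reaching trace or proves the
// sequence unreachable under the constraints.
c_full_turnaround: cover property (@(posedge clk) disable iff (!rst_n)
                                   full ##1 rd_en ##1 wr_en);
```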

In conclusion, the above is my Top-5 list of stumbling blocks in FPV adoption. There is a lot to be said about each of them and my intention is to delve further into each in future posts.

Please feel free to post your Top-5 in the comments section. I have done this exercise with those new and experienced in FPV and am almost always surprised by some of the things that get mentioned.

© 2025 Cadence Design Systems, Inc. All Rights Reserved.
