2009 is the tenth year that I've spent at least a portion of my time responsible for formal analysis products. As per my profile, my two most recent employers before Cadence were 0-In (now part of Mentor) and Synopsys. Between these jobs I consulted for eight other EDA companies, four of which offered formal products.
You would think that I've learned a few things during the past decade, and I believe that I have. Here are ten lessons I learned along the way, one for each year:
I learned the first nine things from my direct experience with customers and clients prior to arriving at Cadence three years ago. IFV was already a well-established product at that point, so when I joined I realized that Cadence had also learned these important lessons, even if some of its competitors had not.
Of course, IFV has continued to evolve since then, and I'm proud to have been a part of its ever-growing success. Now we've introduced Incisive Enterprise Verifier (IEV), offering novel links between simulation and formal analysis. We also have automatic assertions, high-quality assertion VIP products, coverage links, and the methodology and support for wide deployment.
The other day I sat in a Sales review at which I heard that just one Cadence customer, at just one of its sites, is actively using more than 200 IFV and IEV licenses. I could only dream of this sort of deployment in my past involvement with formal. I guess that we’ve all learned a few things in the past ten years!
The truth is out there...sometimes it's in a blog.
Mostly I see designers starting with partial constraints and adding more as needed to eliminate negatives (counter-examples) that are due to under-constraining rather than to actual design bugs. Many designers are happy to accept whatever assertion proofs they get along the way; others will continue to add constraints to try to get 100% proofs. Either way, it's important that the block-level input constraints be run as checking assertions in all higher-level simulation testbenches to try to catch any over-constraining.
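To make this dual use concrete, here is a minimal SVA sketch (the module, signal, and instance names are hypothetical, not from any particular design): the same legal-input property is assumed during block-level formal analysis and asserted as a checker in higher-level simulation.

```systemverilog
// The same property serves double duty: an assumption (constraint)
// for block-level formal, and a checker in any higher-level
// simulation testbench. Signal names are made up for illustration.
module input_constraints (input logic clk, rst_n, req, gnt);
  // Legal-input property: a request is never retracted before grant.
  property p_req_held;
    @(posedge clk) disable iff (!rst_n) req && !gnt |=> req;
  endproperty

`ifdef FORMAL_BLOCK_LEVEL
  // Block-level formal: constrain the input space.
  assume_req_held: assume property (p_req_held);
`else
  // Higher-level simulation: check for over-constraining --
  // a failure here means the real environment violates an
  // assumption that the block-level proofs relied on.
  assert_req_held: assert property (p_req_held);
`endif
endmodule

// Attach the checker to the design instance without editing its RTL
// (module and instance names here are hypothetical):
// bind my_block input_constraints u_chk (.clk(clk), .rst_n(rst_n),
//                                        .req(req), .gnt(gnt));
```

The `ifdef` is just one way to flip between the two roles; the key point is that one property source feeds both tools, so any over-constraint shows up as a checker failure in simulation.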
Hi Tom: Thanks for the answer; it makes good sense. One comment is that the quality and completeness of the assertions on the design inputs will affect verification closure. If the assertions are incorrect or incomplete, then you would get false positives (over-constrained inputs) or false negatives (under-constrained inputs). In your experience, do block-level designers spend the effort to develop assertions at the block inputs to verify their assumptions, particularly early in the design cycle?
The ideal way to deploy formal early in the project is for each designer to specify assertions as he or she is writing the RTL for the design. Assertions on the design inputs capture his or her assumptions about the legal input space (environment). Assertions internal to the design and on its outputs capture the intended behavior of the design. Once these assertions are ready, the designer can run IFV or IEV even if no simulation testbench is available. Does this answer your questions?
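As an illustration, here is a minimal SVA sketch of what such designer-written properties might look like for a small FIFO block (the signal names are hypothetical, not from any specific design):

```systemverilog
// Hypothetical FIFO block: the designer captures environment
// assumptions on the inputs and design intent on the outputs.
module fifo_props (
  input logic clk, rst_n,
  input logic push, pop, full, empty
);
  // Input assumption: the environment never pushes into a full FIFO.
  assume_no_push_full: assume property (
    @(posedge clk) disable iff (!rst_n) full |-> !push);

  // Input assumption: the environment never pops an empty FIFO.
  assume_no_pop_empty: assume property (
    @(posedge clk) disable iff (!rst_n) empty |-> !pop);

  // Output assertion: the FIFO is never full and empty at once.
  assert_not_full_and_empty: assert property (
    @(posedge clk) disable iff (!rst_n) !(full && empty));
endmodule
```

With properties like these written alongside the RTL, a formal tool can start analyzing the block before any simulation testbench exists.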
Tom - thanks for a decade of strong contribution to formal, and a big impact on design and EDA. I wonder about your first statement (1). Typically some type of testbench is required to accurately analyze the feasible input space (environment). How does IFV help in the early design phase? It would seem that planning and effort to develop assertions and environmental assumptions should be part of the design process. When would users actually be running formal analysis?