What kind of metrics are Formal Analysis users collecting? Below are some metrics I am collecting, or have considered collecting, for our designs.

• The module that was verified
• The version of the module that was verified, and all associated files used in the run
• The HDL parameters that were used in the verification
• The properties that were verified
  o Time it took for each property
  o Engines used for each property (and the time spent in each engine)
  o Explored depth for properties not proved
  o Trigger: pass/fail, depth
  o Status: pass/fail, depth
• Percentage of state elements touched by proved properties

Oftentimes metrics are not kept, and we lose track of what was verified, what is left to be verified, and how best to run the tool on future modules.
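As a sketch of what keeping such a record might look like, here is a minimal Python schema for one formal run. All names, statuses, and numbers below are illustrative assumptions, not the output format of any particular tool:

```python
from dataclasses import dataclass, field

@dataclass
class PropertyResult:
    # Per-property metrics (hypothetical field names, for illustration only)
    name: str
    status: str                 # "proved", "failed", or "inconclusive"
    runtime_s: float            # total wall-clock time for this property
    engine_times: dict = field(default_factory=dict)  # engine name -> seconds
    explored_depth: int = 0     # bound reached when the property was not proved

@dataclass
class RunMetrics:
    module: str
    version: str
    hdl_parameters: dict
    source_files: list
    properties: list
    touched_state_elements: int = 0   # state elements touched by proved properties
    total_state_elements: int = 0

    def proved_state_coverage(self) -> float:
        # Percentage of state elements touched by proved properties
        if self.total_state_elements == 0:
            return 0.0
        return 100.0 * self.touched_state_elements / self.total_state_elements

# Example record for one (made-up) run
run = RunMetrics(
    module="fifo_ctrl",
    version="rev 1.4",
    hdl_parameters={"DEPTH": 16, "WIDTH": 8},
    source_files=["fifo_ctrl.v", "fifo_props.sv"],
    properties=[
        PropertyResult("no_overflow", "proved", 12.3, {"engine0": 12.3}),
        PropertyResult("no_underflow", "inconclusive", 600.0,
                       {"engine0": 300.0, "engine1": 300.0},
                       explored_depth=42),
    ],
    touched_state_elements=37,
    total_state_elements=50,
)
print(run.proved_state_coverage())  # -> 74.0
```

Persisting such records (e.g. as JSON, one file per run) is one cheap way to keep the history the post above says usually gets lost.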
Hi,

You almost left nothing for others :-) My 2 cents: I like to use cover properties in a formal engine to look for tough corner cases that I couldn't hit in simulation. In that context, here are a few:

* How many cover properties there are
* How many got covered, and whether I can write out HDL/HDVL tests from the formal tool for easy re-run/regression
* Integration of formal + coverage analysis (similar to what TransEDA seems to promote; not sure how well it works in reality)

During our PSL book work, we explored a few more interesting cases, such as "latency analysis", but the tools were too immature then to handle a "battery of PSL code". It will be nice to get some recent experience on that front.

Regards,
Ajeetha, CVC
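The first two bullets above amount to simple bookkeeping over cover-property results. A minimal sketch in Python, where the property names and statuses are made up and the three-way status split ("covered" / "uncovered" / "unreachable") is an assumption about what a formal tool might report:

```python
# Hypothetical cover-property results from one formal run
cover_results = {
    "cov_fifo_full": "covered",
    "cov_fifo_empty_after_full": "covered",
    "cov_simultaneous_push_pop": "uncovered",
    "cov_almost_full_boundary": "unreachable",
}

covered = [p for p, s in cover_results.items() if s == "covered"]
uncovered = [p for p, s in cover_results.items() if s == "uncovered"]
unreachable = [p for p, s in cover_results.items() if s == "unreachable"]

print(f"{len(covered)}/{len(cover_results)} cover properties hit")

# Uncovered-but-reachable points are candidates for directed simulation
# follow-up (or for exporting a witness trace as a regression test);
# unreachable points may indicate dead code or over-constraining.
for p in uncovered:
    print("follow up in simulation:", p)
```

If the tool can dump a witness trace for each covered point, the same loop is a natural place to emit the re-runnable HDL/HDVL tests the second bullet asks about.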