
Author

Anika Sunda

Community Member

Tags: compression, throughput, machine learning, Hard to Hit Bin, Coverage Closure, Regression, simulation

Quest for Bugs – The Constrained-Random Predicament

14 Jun 2022 • 2 minute read

Functional verification for complex SoCs is a resource-limited 'quest' to find as many bugs as possible before tape-out or release. It can be a long, difficult, and costly search. The search space is practically infinite, and 100% exhaustive verification is an unrealistic, intractable problem. The goal is to deliver the highest possible quality in the shortest possible time, at the lowest possible cost.

Within this quest, the biggest challenge is answering the question "Am I done yet?", because the consequences of missing critical bugs can be catastrophic. Complexity continuously increases, and the functional verification challenge gets progressively harder. How do you find most of the bugs? How do you find all the critical bugs?

Over recent decades, constrained-random verification methodologies have become the norm. We acknowledge that it is impossible to identify all possible testing scenarios, so we increase the probability of hitting unknown scenarios through sheer volume of testing. If we run enough cycles, we will eventually hit them… probably, we hope! But who has infinite time? With metric-driven verification (MDV), we use coverage (and bug rate) as the signal for knowing when we are done, and constrained-random testbenches to reach into the state space. However, this strategy eventually leads to a saturation point where we are no longer finding new bugs. At that point we may still have a decent bug rate, yet waste many cycles repeating the same stimulus over and over. In a nutshell, we are draining both resources and time.
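The saturation effect described above can be illustrated with a toy model (my own sketch, not any Cadence tool): treat the coverage model as a set of bins and each constrained-random cycle as a uniform draw from them, then watch how many *new* bins each batch of cycles discovers.

```python
import random

def run_regression(num_bins: int, cycles: int, seed: int = 0):
    """Toy model of coverage closure under pure random stimulus: each
    cycle samples one bin uniformly; track new bins hit per batch."""
    rng = random.Random(seed)
    hit: set[int] = set()
    new_per_batch = []
    batch = cycles // 10
    for _ in range(10):
        before = len(hit)
        for _ in range(batch):
            hit.add(rng.randrange(num_bins))
        new_per_batch.append(len(hit) - before)
    return hit, new_per_batch

hit, new_per_batch = run_regression(num_bins=1000, cycles=5000)
# Early batches discover many new bins; later batches mostly re-hit
# old ones -- the saturation point where extra cycles stop buying coverage.
print(new_per_batch)
```

Real constrained-random stimulus is far from uniform, but the shape is the same: the marginal value of each additional cycle falls off sharply as coverage saturates.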

Everyone wants to run "good" verification cycles: meaningful cycles that make progress in your verification process, either by finding new bugs or by increasing coverage. Machine learning (ML) can help you deliver the product on time and at the right quality level.

The Cadence Xcelium Machine Learning Solution uses proprietary ML technology to help customers reduce regression times by optimizing the regression suite. The technology learns the design's behavior and guides the Xcelium randomization kernel to achieve the same coverage with fewer simulation regression cycles, while hunting bugs by stressing specific points of interest. It fits seamlessly into Xcelium constrained-random verification technologies and flows and is an important new tool for verification engineers to use as part of the overall shift-left solution. Users can apply the Xcelium ML solution very early in their design and verification cycles, even before any functional coverage has been written, helping DV engineers gain confidence and expose latent/cousin bugs. Stay tuned to learn more about how Xcelium ML can help you increase the hit count of rare bins and do early bug hunting in our upcoming blogs.
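To build intuition for "same coverage with fewer cycles", here is a deliberately simple, generic sketch of regression-suite optimization: greedy set-cover over per-test coverage data. The test names and bin sets are invented for illustration, and this is emphatically *not* the proprietary Xcelium ML algorithm, which works on the randomization kernel itself.

```python
def optimize_regression(test_coverage: dict) -> list:
    """Greedy set-cover: repeatedly pick the test that adds the most
    not-yet-covered bins, then drop tests that add nothing new."""
    covered: set = set()
    order = []
    remaining = dict(test_coverage)
    while remaining:
        name, bins = max(remaining.items(),
                         key=lambda kv: len(kv[1] - covered))
        if not bins - covered:
            break  # no remaining test adds new coverage
        covered |= bins
        order.append(name)
        del remaining[name]
    return order

# Hypothetical per-test coverage from a prior regression run.
suite = {
    "t1": {1, 2, 3},
    "t2": {3, 4},
    "t3": {1, 2},   # fully subsumed by t1, so it gets dropped
    "t4": {5},
}
print(optimize_regression(suite))  # → ['t1', 't2', 't4']
```

Even this naive heuristic prunes redundant runs while preserving total coverage, which is the general payoff any regression-optimization flow is after.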


© 2023 Cadence Design Systems, Inc. All Rights Reserved.
