Paul McLellan

Machine Learning in JasperGold

15 Oct 2019 • 4 minute read

When I was in Tel Aviv for CDNLive Israel, I sat down with Ziyad Hanna to discuss what Cadence calls "Smart Formal Technologies": using deep learning techniques to improve formal verification with the JasperGold Formal Verification Platform.

I liken some aspects of EDA to mathematical differentiation: there are rules, you turn the crank to apply the rules, and you get an answer. Others are more like mathematical integration: there are dozens of approaches you might try, and the correct one is...whichever one works. If the answer you come up with, however you got there, differentiates to what you started with, then it is the correct solution. One method I still remember, despite not having integrated anything for about forty years, is that the integral of a product is the first times the integral of the second, minus the integral of the integral of the second times the derivative of the first. Formal verification is like integration in that it is trying to prove some property, and provided one of the approaches succeeds, it doesn't matter that the others failed. That doesn't mean it doesn't matter which approaches you try: you can waste a lot of time making poor choices, and correspondingly save a lot of time by making smart choices. That's where the first part of Smart Formal comes in.
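That half-remembered rule is integration by parts, which in symbols is the identity below (with f as "the first" and g as "the second"):

```latex
\int f(x)\,g(x)\,dx \;=\; f(x)\int g(x)\,dx \;-\; \int \left(\int g(x)\,dx\right) f'(x)\,dx
```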

The JasperGold platform contains lots of engines, with multiple ways to control them. Over the years, people have invented techniques to make the search space tractable. However, users have lots of things to check for, and multiple classifications of properties. Of course, Cadence's R&D engineers are adding more all the time, and application engineers are working out the best ways to use them.

Over time, we've tried to codify this process into four steps. Together, they amount to proof automation by optimization and learning.

Developer Optimization and Learning

The developers tried to figure out the best approaches, with predefined proof strategies and heuristics. But then we go to customer #1 and it is great, and then to customer #2 and it doesn't converge. So we try to improve it with better strategies.

Expert User Learning and Reuse

Each time we put out a new release, we want to allow our customers to do what our engineers can do. So we delivered the JasperGold Expert System to customers. Each customer has its own formal experts and can capture what is specific to them. So rules from experts (if you have such-and-such a waveform, you can apply this rule, and so on) enable better productivity. Now there are over 300 rules covering compilation, debug, and complexity management. This has nothing to do with machine learning; it is simply adding the user to the loop.
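To give a flavor of what such a rule looks like, here is a purely hypothetical sketch in Python. The rule names, conditions, and actions are made up for illustration; this is not the JasperGold Expert System API.

```python
# Hypothetical sketch of an expert rule: a predicate on what the tool has
# observed so far, plus a recommended action. Not the real JasperGold API.
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class Rule:
    name: str
    applies: Callable[[Dict], bool]   # condition on the current proof context
    action: str                        # recommendation to the user or tool

rules: List[Rule] = [
    Rule(
        name="deep-counterexample-hint",
        applies=lambda ctx: ctx.get("trace_depth", 0) > 1000,
        action="switch to a bounded proof and add waypoint assertions",
    ),
    Rule(
        name="large-memory-hint",
        applies=lambda ctx: ctx.get("memory_bits", 0) > 1_000_000,
        action="abstract the memory before attempting a full proof",
    ),
]

def advise(ctx: Dict) -> List[str]:
    """Return the recommendations whose conditions match the current context."""
    return [r.action for r in rules if r.applies(ctx)]

print(advise({"trace_depth": 1500, "memory_bits": 2_000_000}))
```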

Online Reinforcement Learning

On-the-fly exploration and on-the-fly exploitation. This is the JasperGold Expert System with the user in the loop. We can train, then use the ProofGrid, where the tool gets a list of properties, knows the engines, and starts to learn the patterns of properties and engines. Then the machine learning predictors (weights) are used to make JasperGold more adaptive. Ziyad said it's a bit like having a strategy for Vegas: you have two machines, and you need to decide which machines to try and for how long, how much you want to invest, when to switch, and how to invest your $100 to maximize your return. With formal, you may have a budget of 50 cores and two hours, and you want to maximize the number of properties proved. There is also on-the-fly learning, where you can train on a subset of, say, 100 or 1,000 properties, and then extrapolate to the rest of the properties.
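Ziyad's Vegas analogy is the classic multi-armed bandit problem. As a rough illustration of the idea only (a toy epsilon-greedy sketch, with made-up engine names and success rates, not JasperGold's actual algorithm), the tool balances trying engines it knows little about against spending the remaining budget on engines that have been paying off:

```python
# Toy epsilon-greedy bandit: allocate a fixed budget of engine runs across
# several formal engines, favoring the engines that have proved the most
# properties so far. A simplified illustration, not JasperGold's algorithm.
import random

engines = ["engine_A", "engine_B", "engine_C"]
proofs = {e: 0 for e in engines}   # properties proved by each engine so far
tries  = {e: 0 for e in engines}   # runs attempted on each engine so far

def run_engine(engine: str) -> bool:
    """Stand-in for launching an engine on a property; True if it proves it."""
    success_rate = {"engine_A": 0.2, "engine_B": 0.5, "engine_C": 0.35}[engine]
    return random.random() < success_rate

budget = 200     # e.g. total engine runs affordable in 2 hours on 50 cores
epsilon = 0.1    # fraction of the budget spent exploring

for _ in range(budget):
    if random.random() < epsilon or all(t == 0 for t in tries.values()):
        choice = random.choice(engines)                                    # explore
    else:
        choice = max(engines, key=lambda e: proofs[e] / max(tries[e], 1))  # exploit
    tries[choice] += 1
    proofs[choice] += run_engine(choice)

print(proofs, tries)
```

Presumably the real tool measures payoff in a much richer way, per property class and per engine configuration, but the exploration-versus-exploitation trade-off is the core of it.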

Here is a little more detail on how it works. There are tens of advisors, each brought in for a different aspect such as convergence or configuration. The composer swaps advisors in and out to achieve the best results.
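One way to picture the composer (again, a hypothetical sketch rather than the actual architecture) is as a scheduler that scores each advisor's relevance to the current state of the proof and keeps only the top few active:

```python
# Hypothetical sketch of a composer managing a pool of advisors: each advisor
# reports a relevance score for the current proof state, and the composer
# keeps only the highest-scoring ones active.
from typing import Callable, Dict, List, Tuple

Advisor = Tuple[str, Callable[[Dict], float]]  # (name, relevance-scoring function)

advisors: List[Advisor] = [
    ("convergence",   lambda s: 1.0 if s["undecided_properties"] > 10 else 0.2),
    ("configuration", lambda s: 0.8 if s["engines_idle"] else 0.1),
    ("complexity",    lambda s: 0.9 if s["max_trace_depth"] > 500 else 0.3),
]

def compose(state: Dict, slots: int = 2) -> List[str]:
    """Pick the advisors most relevant to the current proof state."""
    ranked = sorted(advisors, key=lambda a: a[1](state), reverse=True)
    return [name for name, _ in ranked[:slots]]

print(compose({"undecided_properties": 25,
               "engines_idle": False,
               "max_trace_depth": 800}))
```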

Offline Reinforcement Learning

This uses a large dataset for training, and then the machine learning weights are used to make predictions in the future. Today, the investment is all inside the tool, to improve the algorithms under the hood. Basically, the training picks some number of properties, runs them on default engines, keeps some, kills some, and changes the property order, all to maximize throughput, measured as properties proven per unit time.
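As a simplified picture of how offline training like this could be wired up (a sketch with assumed feature names and a generic gradient-boosting model, not Cadence's actual pipeline), past runs become labeled examples of which engine proved which kind of property, and the fitted model then ranks engines for new properties:

```python
# Simplified sketch of offline learning for engine selection: fit a model on
# logged (property features, engine, outcome) records, then use it to rank
# engines for new properties. Feature names are illustrative assumptions.
from sklearn.ensemble import GradientBoostingClassifier
import numpy as np

# Each logged row: [property size, cone-of-influence depth, engine id]
X_train = np.array([
    [120, 8,  0],
    [950, 40, 0],
    [950, 40, 1],
    [300, 15, 2],
    # ...in practice, thousands of rows harvested from past regressions
])
y_train = np.array([1, 0, 1, 1])   # 1 = proved within budget, 0 = not

model = GradientBoostingClassifier().fit(X_train, y_train)

def rank_engines(prop_features, n_engines=3):
    """Order engines by predicted probability of proving this property."""
    candidates = np.array([list(prop_features) + [e] for e in range(n_engines)])
    scores = model.predict_proba(candidates)[:, 1]
    return np.argsort(-scores)

print(rank_engines([500, 20]))
```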

In training, there is a supervised inference phase. Speedups are 2X to 10X.

The dataset is over 500 designs, each run for four hours with a maximum trace depth of 1200. For now, this training is on Cadence data, but in the future JasperGold will allow companies or design groups to use their own data for training. This has two advantages: accuracy and confidentiality. A company's own data is more likely to be good for training than the more generic data Cadence has access to. Plus, you won't need to provide your data to Cadence; the entire process is private to the company or design group.

Jasper User Group 2019

Attend the world's biggest gathering of commercial formal verification users: this year's Jasper User Group conference will be on the Cadence San Jose Campus on November 6 and 7. Ziyad's Technology Update will be the last presentation on the second day (so you have to stay to the end!). In the past, this presentation was too sensitive to be published in full along with the rest of the two days of presentations, so probably the only way to find out what he will say this year is to be there. Registration is now open.

 

Sign up for Sunday Brunch, the weekly Breakfast Bytes email.