
Paul McLellan

The 2019 Jasper User Group

11 Nov 2019 • 4 minute read

Last week was the Jasper User Group meeting, the biggest annual gathering of formal verification engineers. This was the 13th meeting. Ziyad recalled the first, held at the TechMart, which about 15 people attended. This year was the sixth since Cadence acquired Jasper. I have actually been going to these meetings for longer than that, since Jasper was a SemiWiki sponsor assigned to me. By then the meetings were in that hotel in Cupertino that keeps changing its name. (In a weird coincidence, my son-in-law did a stint as executive chef there when he was between jobs.)

Obviously, at any user group meeting, there are people attending from companies that compete with each other. But formal verification has a different feel to it. It seems more like the elite of formal verification evangelizing formal to unconvinced verification engineers and managers. Some of the presentations had considerable technical depth, of course, but often there was a lot of discussion about presenting the results of formal to management to show success.

The uses of formal seem to fall into three main buckets:

  • Apps: Use of apps by non-experts to check things like connectivity, register connections, clock domain crossing, and so on
  • Bug hunting: Use of the full power of the tool to drill down into areas looking for bugs. Especially attractive are areas that simulation might have missed. Formal is famous for finding corner cases that nobody had thought of before
  • Signoff: Use of the full power of the tool to prove that the block is correct, as an alternative or as an addition to using simulation
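The key difference from simulation in all three buckets is exhaustiveness: formal explores every state and input combination rather than sampling some of them. Jasper's engines are of course far more sophisticated, but the core idea can be sketched in a few lines. Here is a hypothetical toy design of my own (not from any presentation): a 4-bit saturating counter with a deliberate corner-case bug that random simulation could easily miss but an exhaustive check cannot.

```python
# Hypothetical toy example: a 4-bit counter meant to saturate at 15,
# with a deliberate bug -- counting up from 15 wraps to 0.
def step(count, up):
    if up:
        return (count + 1) & 0xF          # bug: wraps 15 -> 0
    return count - 1 if count > 0 else 0  # correctly saturates at 0

# The spec: the count must never decrease when counting up,
# and never increase when counting down.
def spec_ok(count, up):
    nxt = step(count, up)
    return nxt >= count if up else nxt <= count

# Formal-style exhaustive check over every state/input pair.
# A simulation sampling random inputs might never hit the one bad case.
failures = [(c, up) for c in range(16) for up in (True, False)
            if not spec_ok(c, up)]
print(failures)   # [(15, True)] -- the corner case: wrap at saturation
```

With only 32 state/input pairs, brute force suffices; real formal tools use techniques like BDDs and SAT solving to get the same exhaustive guarantee on astronomically larger state spaces.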

Tom Melham

 Tom Melham of Oxford University gave the first-day keynote on Verification Challenges of Deep Neural Networks. But since he has been involved in formal verification since the beginning, he started with a personal history.

He started working in the area in the early 1980s, at the inception of the field. In 1985, he was working on deductive theorem proving. "We didn't have the technology we have now...the field was just getting its start." The first conference was held that same year. "People in industry thought we were mad. We lived on hope and energy."

The picture shows the first design verified, an associative memory. That long printout is one proof that the chip is correct.

Then 1997-2010 were the golden boom years when formal took off and there was actual deployment in industry of automatic model checking. It was an amazing achievement compared to the early days, with major algorithmic improvements, methodology improvements, standardization of specification languages, and so on. Tom was involved to some extent since he was on the Jasper technology advisory board (TAB).

Then, after 2010, Tom got interested in moving up to hardware/software coverification, and then from 2017 verification of machine learning algorithms.

As he put it:

The next new frontier is machine learning. You have to be living in a cave not to notice it is trendy and students want to work in it. So we are looking at verification of systems that may contain machine learning as a component.

One big challenge is that neural networks are not robust. Tom showed the above adversarial example. The picture on the left is identified correctly as a panda. The noise in the middle is actually not really noise; it is carefully constructed. When a small amount of it (less than one percent) is added to the image, the result is the image on the right, which is indistinguishable from the original by a human. But the network now identifies it as a gibbon. I have written about this before, and there is more detail and more examples in my post Fooling Neural Networks.
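The panda perturbation was built with a gradient-based attack (fast gradient sign method, or FGSM): nudge every pixel a tiny amount in whichever direction moves the classifier's score toward a wrong answer. A minimal sketch of the idea, using a made-up linear classifier rather than a real network (all weights and inputs here are invented for illustration):

```python
# FGSM-style attack on a toy linear classifier (illustration only --
# not the actual panda/gibbon model, whose weights are not public here).

def score(w, b, x):
    """Linear classifier: positive score = class A, negative = class B."""
    return sum(wi * xi for wi, xi in zip(w, x)) + b

def fgsm_perturb(w, x, eps):
    """Nudge each input element by eps against the gradient's sign.
    For a linear model the gradient of the score w.r.t. x is just w."""
    return [xi - eps * (1 if wi > 0 else -1) for wi, xi in zip(w, x)]

w = [0.5, -0.25, 0.75, 0.1]       # assumed classifier weights
b = 0.0
x = [0.2, 0.4, 0.1, 0.3]          # "image" correctly classified as class A

adv = fgsm_perturb(w, x, eps=0.1) # small, uniform per-element change
print(score(w, b, x) > 0)         # True  -- original: class A
print(score(w, b, adv) > 0)       # False -- adversarial: class B
```

A real network's gradient is computed by backpropagation rather than read off the weights, but the mechanism is the same: many tiny, coordinated per-pixel changes add up to a large change in the output.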

There has been tremendous growth in the study of adversarial examples. This graph shows the number of papers on the topic by year, growing exponentially.

Tom has been working in this area, trying to create adversarial results from scratch, rather than just tweaking existing images (as with the panda example), using generative adversarial networks (GANs). He wants to create images that:

  • Fool the target classifier
  • Are not perturbations of existing images
  • Are mistaken by humans for real data

He is using the MNIST database of handwritten digits, which is the "hello world" of neural network image processing. Each digit is 28x28 pixels. There are 60,000 digits for training and a separate set of 10,000 digits for testing. The above picture shows some results from his GAN. The rows show the digits' real labels, but the classifier is fooled into picking the label shown at the top of each column.

More

I will cover content from the meeting over the next couple of weeks.

 

Sign up for Sunday Brunch, the weekly Breakfast Bytes email.