Paul McLellan
JasperGold: Stepping up to RTL Signoff

16 May 2017 • 7 minute read

When I was on my last tour of duty at Cadence in the early 2000s, we had a late-afternoon seminar series. One of the speakers was Edsger Dijkstra. He was a legend in computer science, a pioneer of many important aspects of programming. Perhaps he is most famous for a 1968 letter called Go To Statement Considered Harmful, written in an era when all programming languages used goto statements of some sort to transfer control. He was also famous for a series of numbered short communications universally known as EWDs (his initials). After the seminar, I was one of the people invited to go to dinner with him. It was a bit like a mathematician going to dinner with Erdős. As it turned out, he passed away a few months later, in August 2002.

His most famous quote must be:

Program testing can be used to show the presence of bugs, but never their absence!

Formal Verification

In semiconductor design, we have a similar problem: simulation can be used to find problems in the RTL, but no amount of simulation can be exhaustive (except in trivially simple cases), so simulation cannot prove the absence of problems in RTL. Unlike in programming, where formal approaches have been disappointing, we have formal verification as well as simulation. In some areas, it suffers from the same problem as program proving: there is no specification that is any simpler than the RTL, or the specification is poorly defined ("the facial image must be recognizable").

At the highest level, formal verification gives you one of three answers to the question of whether a certain property is true. An analogy that I like to use is deciding whether you own a particular book. There are three possible answers:

  • You search the bookcases and the boxes in your garage and you find the book: For sure you own it.
  • You search every single place it could possibly be and you don't find the book: For sure you don't own it.
  • You run out of time before you get through all the bookcases and boxes. You might own it, or you might not. And you really have not done much to clarify it, since it may well be in the last box at the back of the garage.

In a similar way, formal verification has three answers to whether a particular property always holds (a toy sketch of the three outcomes follows the list):

  • No. It shows you a counter-example, some vectors that break the property. In other words, it found the "book", the counter-example.
  • Yes. In which case, it is proven, like a mathematical proof. It searched all the boxes. There is no need to run simulation just to check, or to make sure some obscure corner case is covered. In fact, formal verification is wonderful at covering the obscure corner cases that, for simulation, you or a verification engineer would need to come up with by hand.
  • Unproven. There are a number of reasons for this, but normally the tool doesn't have enough power to prove (or disprove) the property in the time allowed. It doesn't usually mean anything about whether the property is likely to be true or not. It searched some of the bookcases and some of the boxes, but there are more—perhaps many more—to search.
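
To make the three outcomes concrete, here is a minimal sketch in Python of an explicit-state reachability check over a toy model. This is not how JasperGold works (real formal engines operate on RTL and use far more sophisticated techniques); the function check_invariant and the modulo-8 counter model are invented purely for illustration.

    from collections import deque

    def check_invariant(initial, next_states, holds, max_states=10_000):
        # Breadth-first search over the reachable state space.
        # Returns ("counterexample", trace) if a violating state is reachable,
        # ("proven", None) if every reachable state satisfies the property,
        # or ("unproven", None) if the state budget runs out first.
        seen = {initial}
        queue = deque([(initial, [initial])])
        while queue:
            state, trace = queue.popleft()
            if not holds(state):
                return "counterexample", trace   # found the "book"
            for nxt in next_states(state):
                if nxt not in seen:
                    if len(seen) >= max_states:
                        return "unproven", None  # ran out of boxes to search
                    seen.add(nxt)
                    queue.append((nxt, trace + [nxt]))
        return "proven", None                    # searched every box

    # Toy model: a modulo-8 counter that steps by one each cycle.
    step = lambda s: [(s + 1) % 8]
    print(check_invariant(0, step, lambda s: s != 5))  # ('counterexample', [0, 1, 2, 3, 4, 5])
    print(check_invariant(0, step, lambda s: s < 8))   # ('proven', None)

Rerun the falsifiable check with max_states=3 and the search gives up before it ever reaches state 5, returning "unproven": exactly the third answer, with nothing learned about whether the property holds.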

History

In the early days of formal verification, the tools were not—let's face it—all that good. In limited domains, they could give results. When they did, those results were good. But far too often the tools just didn't have the power to do what was needed and would run out of time or, often, memory. Common structures like multipliers were notoriously hard to handle. Of course, over the last couple of decades, we had some turns of the Moore's Law handle and computers got a lot faster, especially in the first of those two decades. However, the nice thing about formal verification is that, like proving things in mathematics, if one approach doesn't work, then you can try another. If that works, you are done, and the fact that the first approach was a dead end is irrelevant. As more and more proof techniques were added, the tools got more powerful. When I was at Compass Design Automation, we had an early formal verification product, VFormal, which was... let's just say too early.

Meanwhile, around the same time, Tempus Fugit was founded by Vigyan Singhal and Joe Higgens. It would eventually become Jasper Design Automation and, even more eventually, get acquired by Cadence. But since the product name, JasperGold, is so well-known, the "Jasper" name lives on. The technology has gone from academically interesting to a real workhorse of a tool. Formal verification won't replace simulation completely anytime soon—if ever—but here is something true: if you can prove something with formal verification, then there is no point in attacking that problem with simulation. Leave simulation for the other stuff.

One of the most visible drivers of formal verification was the Intel Pentium floating-point bug. Simulation had missed the fact that one of the lookup tables used for floating-point divide was missing entries. It cost Intel a write-off of $475M. Internally, as Bob Bentley recounted at a Jasper User Group meeting in 2012, Intel management said: "never let this happen again." Soon they started proving that their floating-point units were correct using a mixture of internally developed technology and commercially available tools. Remember, if any tool proves a property is true, it doesn't matter if another tool cannot.

Another high-visibility success, discussed at a different Jasper User Group, was ARM using formal verification to prove that their cache-coherent interconnect was coherent under all circumstances. JasperGold showed that it wasn't: in a certain weird set of conditions, there was an escape in which the whole cache could deadlock. As I said above, formal verification is great for finding corner cases, partially because it doesn't treat them any differently from the common cases.

RTL Signoff

A few years ago, Jasper split JasperGold up into a number of separate Apps that can be used by different people with different levels of expertise. Some of the highest-value formal verification successes are actually not the deep, geeky corner cases, but simply catching registers hooked up wrong, and simple things like that. It is analogous to software, where most bugs are simple off-by-one errors on loops or other d'oh mistakes.

There has always been an expectation that design would move up to the next level from RTL, maybe C-based design. Some of that has, indeed, happened. Cadence has a high-level synthesis product called Stratus, for example. But the biggest driver of productivity has been the move to IP-based design. Some IP is acquired (hey, Cadence has lots) and some is internally developed, so the groups assembling the IP into SoCs are not the groups designing the IP blocks. This makes verification at the RTL level very important, for two reasons. First, the netlist will be different each time the IP is instantiated, so verification at the netlist level needs to be repeated each time. But the main reason is that the groups developing the IP want to be able to do the verification at the level at which they are doing the design, and pass off pre-verified IP to the SoC groups.

Today at CDNLive Europe, Cadence announced the latest expansion of the family, with the introduction of Superlint and clock domain crossing (CDC) Apps, which are important formal technologies that address RTL signoff requirements.

A lot of signoff checks were previously done at the netlist level, but the same IP implemented on different designs has a different netlist. Traditional static lint tools and CDC tools have not been effective in ensuring that the RTL code is of the highest quality.
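
As a rough illustration of the kind of structural rule a CDC App enforces, here is a hypothetical sketch in Python that flags any flop fed directly from another clock domain unless its output is re-registered in the destination domain (the first stage of a two-flop synchronizer). The netlist encoding and the rule itself are simplified inventions for illustration; real CDC analysis is considerably more involved.

    # Invented netlist encoding: flop name -> (clock domain, driving flop or None).
    flops = {
        "tx_data":  ("clk_a", None),
        "sync_ff1": ("clk_b", "tx_data"),   # stage 1 of a two-flop synchronizer
        "sync_ff2": ("clk_b", "sync_ff1"),  # stage 2, same domain: safe
        "rx_bad":   ("clk_b", "tx_data"),   # raw crossing: should be flagged
    }

    def cdc_violations(flops):
        issues = []
        for name, (clk, driver) in flops.items():
            if driver is None or flops[driver][0] == clk:
                continue  # primary input or same-domain path: no crossing
            # Tolerate the crossing only if some flop in this clock domain
            # re-registers this flop's output (a crude synchronizer test).
            resynced = any(drv == name and c == clk
                           for c, drv in flops.values())
            if not resynced:
                issues.append(f"{name}: unsynchronized {flops[driver][0]} -> "
                              f"{clk} crossing from {driver}")
        return issues

    print(cdc_violations(flops))  # flags only rx_bad

Running it reports only rx_bad, the raw crossing; the path through sync_ff1 and sync_ff2 passes because the first stage is re-registered in the destination domain.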

The result of using these new tools is a reduction of up to 80% in late-stage RTL changes and a month cut from IP development time compared to existing solutions. Intelligent debugging reduces "violation noise" whack-a-mole, one of the most pressing RTL signoff challenges, in which each time one problem is fixed it causes some sort of knock-on effect (similar things happen in static timing, where it is all too easy to cause a new timing violation by fixing an existing one).

Hobson Bullman of ARM gave the keynote at last year's DVCon Europe. As you can read in more detail in my post DVCon Europe: What You Missed, he is GM of ARM's Technology Services Group (TSG). They have been using the Superlint App for more than a year and have had success improving RTL signoff and shortening time to market. "We've reduced late-stage RTL changes, which enables the team to save additional time when we get to the functional verification stage," he said, summing it up.

If you are reading this and are attending CDNLive Munich, I'm sure we will be showing the product in the Designer Expo. It will also be covered in the technology update on the IP and SoC Verification track.