There is an Accellera working group that is developing a portable stimulus standard. Like any Accellera group, it contains vendors and users so that both communities are represented. The working group is chaired by Faris Khundakjie of Intel, the vice-chair is Tom Fitzpatrick of Mentor, and the secretary is Cadence's Tom Anderson.
I think that this will become an important part of taking verification up to the system level, in the same way that OVM was an important piece of moving to IP-based design. Good standards, like good cooking, take time, and it is unclear when Accellera will be ready with the first version. Everyone is optimistic for an initial release in the first half of 2017, but the official schedule has to come from Accellera.
The first thing to understand is that, despite the name, the portable stimulus standard (from now on, PSS) is not a standard for portable stimulus. I am not making this up, as Dave Barry used to say. It is a portable representation of test intent. It is at a high enough level that lower level stimulus can be generated for a wide range of purposes, although the actual vectors generated will not be the same from vendor to vendor, or purpose to purpose. The right way to think about this is that it is like RTL, which captures the design at a particular level. The netlist will be different for 28nm compared to an FPGA. Or 10nm. The netlist will be different depending on the synthesis tool used. And the version of the synthesis tool. And the switches. But there is a sense in which that RTL is a "portable netlist representation" although we don't call it that.
The motivation for creating the standard was the usual one: several vendors were already creating tools that operated at this level. The Cadence one is called Perspec. Without standardization, these tools would all operate differently and a lot of effort would be duplicated. Plus, the different tools would only be interoperable through cumbersome translators. By having a standard, the goal is to make everything more efficient and to raise automation to a new level.
There was also an explicit wish to avoid the problems of formal verification, which spent a decade in the wilderness where you needed a PhD in formal verification to use it. The PSS approach should be usable by anyone who can write UVM (in the hardware world) or C++ (in the software world). At the recent JUG meeting, Erik Seligman said that his book on formal verification contains no Greek letters. I don't think it was an explicit goal, but the same philosophy is going into PSS: no Greek letters.
There is also an explicit aim to push further than just pre-silicon verification and out to post-silicon bringup. Nobody really expects to simply bring up Linux or Android on the first day that they get silicon back from the fab (although a story that we had to keep secret at the time was that Microsoft booted 64-bit Windows NT on the first day they got 64-bit silicon from AMD, having ported it using Virtutech's virtual platform technology). However, they need simpler tests that build on each other as more and more of the SoC is exercised.
UVM and the VIP ecosystem have transformed verification of IP blocks. Whether a block is developed internally or purchased, design groups expect VIP that complies with UVM. However, that approach doesn't scale cleanly up to the system level, where there are processors and complex buses and more. In the rest of this post, I'm going to assume we are designing an SoC that contains a microprocessor with a software load. Nothing in PSS stops it from applying to large FPGA-based projects (it's overkill for small ones), or to those rare semiconductor components that don't contain processors.
The best way to think of the level that PSS works at is the use case: "If I take a picture on my phone's camera and upload it to Facebook, does all that work? What if a text message arrives at the same time?"
Since PSS runs across the whole lifecycle of the chip, it needs to serve everyone from the architect, who works on the chip before design even starts, to the post-silicon validation engineer, whose primary involvement is after manufacture. In between are the design and verification engineers for the chip and for the software load that will run on it. There are tools that have been developed for system architects, but they are only really useful if they can be used throughout the rest of the design flow. Architects on their own are not a market; there are too few of them and their wishes are too varied. It was an explicit aim of PSS from the beginning to be useful all the way through: architects and designers, hardware and software, verification and validation.
The next dimension that PSS needs to accommodate is the platform used for verification (or validation). Verification can't live with a single platform. In the early stages of the design there isn't enough detail—most obviously, you can't run RTL simulation before you write RTL—and at the late stages you want performance high enough to boot operating systems and run billions of vectors for power analysis and so on. So PSS has to support virtual platforms and simulation. Then, later in the design cycle, perhaps emulation and FPGA prototyping. Once silicon is manufactured, it has to be put onto some sort of board to be validated. Of course Cadence has products in all of these spaces but I'm not going to put all the names here because the whole point of a standard like this is that you can mix and match. You don't have to use our solution for everything. It is our job to make all of our solutions so good that you would want to, of course, but there is inevitably a lot of inertia. If you've already invested a lot in a particular technology, then that is what you are going to be using.
However, there is another dimension between the PSS and the platform, namely which representation and verification environment is used. The biggest gap is probably between the software and semiconductor design engineers. The software people are not going to learn SystemVerilog, and the design engineers are not going to learn industrial-strength C++. Under the hood there are actually two flavors of the standard: one oriented around hardware, called the DSL (which stands for Domain-Specific Language), and one around C++. Of course they can be mixed.
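To give a flavor of what test intent looks like, here is a rough sketch of how the camera use case above might be captured in the DSL. This is illustrative only: the standard was still in draft at the time of writing, so the exact keywords and syntax are assumptions on my part, and the component and action names (phone_c, take_picture, and so on) are invented for this example.

```
// Illustrative sketch only -- drafted against the in-progress DSL,
// so keywords and syntax are assumptions; all names are invented.
component phone_c {

    // Atomic actions: a vendor tool maps each of these to
    // platform-specific stimulus (a UVM sequence on a simulator,
    // a C test on silicon, and so on).
    action take_picture   { }
    action upload_picture { }
    action receive_text   { }

    // Compound action capturing the use case: take a picture,
    // then upload it while a text message arrives concurrently.
    action photo_upload_test {
        activity {
            do take_picture;
            parallel {
                do upload_picture;
                do receive_text;
            }
        }
    }
}
```

The point is that nothing here says how the stimulus is generated; the same intent can be retargeted to simulation, emulation, or post-silicon validation.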
If you put all that together, you end up with the picture below.
The lime-green box is where the portable stimulus itself lives. Actually there is a layer missing in the sense that it is expected that users will have a selection of tools, mostly graphical, to allow them to create the lime-green box. The idea is not that you open up a text editor on a blank file and start typing.
Once you have the PSS (actually the DSL file), then you can use various vendors' tools (of course I have a recommendation if you are undecided!) to create tests that can run in the various verification environments on the various platforms.
The next level down is obviously complex and detailed: How exactly do I model the subsystems on my SoC? How do I specify tests? I am not going to go there in this post, but there are two earlier posts of mine:
Here is a more marketing-oriented view of these sorts of use cases, ranging across a wide range of activities. One senior engineer commented recently about his experience with Perspec that he was happy with seeing a 10X productivity gain number on a slide, but "the reality is that it is so hard to do that we would never have done these tests in the first place."
For more details on the standardization effort, to join or to contribute, here is the Accellera page for the portable stimulus working group.