Existing verification automation techniques – such as metric-driven verification, constrained-random test generation, and the Universal Verification Methodology (UVM) – have greatly eased block-level functional verification. But growing SoC complexity, in both hardware and software, is calling for new approaches at the SoC and system levels. In this interview, Cadence fellow Mike Stellfox talks about the pros and cons of existing approaches and the need for new technology to facilitate use-case, software-driven verification.
Q: Mike, what kind of work are you doing at Cadence, and what’s your background in EDA and verification?
A: I’m a Cadence fellow, and I’m working with customers to develop and deploy solutions around system-level verification. I’m focusing a lot on ARM-based SoCs.
I started my career as a chip designer at IBM. I got into EDA at Viewlogic, and I worked on verification tools including the VCS simulator, which we had acquired from Chronologic. I moved to Verisity in 1998 when they got started, and that’s when I began to focus purely on verification. I came to Cadence with the Verisity acquisition in 2005.
Q: You’ve been working with functional verification for many years, then. What are the biggest changes during that time?
A: I think the first change, which we drove at Verisity, was turning verification into a discipline with a lot of automation through Specman and constrained-random simulation. That led to SystemVerilog as another language with the same kind of approach. Constrained-random, coverage-driven verification is well suited for bottom-up verification of hardware, but is not a good fit for system-level verification at the SoC level, especially when you need to verify hardware and software together.
If you look at the [Specman] e language and SystemVerilog approaches, the first thing that challenged people is how you build testbenches. So Verisity developed the eRM [e Reuse Methodology], which showed how to build reusable verification components and testbenches in those languages. That’s the foundation from which OVM [Open Verification Methodology] and UVM are derived.
Q: UVM certainly seems to be successful for block-level verification. What about system-level verification?
A: UVM was built mainly for verifying IP or perhaps a subsystem, where that IP or subsystem may go into many SoCs. As an IP developer, you may not know the SoC context it will end up in. But if you want to verify a specific SoC in the context of the application that it needs to support, UVM really isn’t the best approach.
There are two reasons for this. First, it’s very difficult to translate the use cases you need to verify into the kinds of constrained-random sequences you would write in UVM. Second, most SoCs are verified not only in simulation, but also in emulation or prototyping platforms, as well as hybrid approaches that leverage ARM Fast Models and combine virtual prototyping and emulation. UVM is really not built for high-speed platforms.
Q: How are engineers approaching system-level verification today, and where are the pain points?
A: I would say that block-level or IP-level verification is really a solved problem. At the SoC or system level, the challenges are very different. There are three high-level requirements that must be addressed for SoC-level verification: horizontal reuse of tests across platforms, from virtual prototypes to post-silicon; vertical reuse of tests from the IP-block level to the SoC level; and use-case reuse, which allows users from different disciplines to capture use cases and share them with others.
The big challenge you have is integration – you have all these IP blocks from different sources, and you need to integrate them and make sure they all work together properly and can also operate within the system context. The integration of IP blocks in the context of SoC power management is one example of this requirement – and it illustrates the need for vertical reuse. At the SoC level, creating tests or use cases for complex system operations is difficult, and teams have limited time and ability to develop them. This drives the need for use-case reuse.
Another issue is scale – a lot of SoCs have hundreds of IP blocks. You need to leverage emulation or FPGA prototyping to run more cycles, and this drives the need for horizontal reuse.
Q: How do customers verify the embedded software that runs on an SoC?
A: It tends to be pretty much ad hoc. The software is usually layered. There are layers of software that are closest to the hardware, such as firmware and drivers. Then you get up to the OS level, then middleware, and eventually the application. What we’re seeing is a kind of partitioning and verification of the software with the hardware, layer by layer. For example, you can verify the driver and the software stack for USB in isolation from the whole SoC.
Q: What issues get in the way of hardware/software co-verification?
A: One issue is how you do the verification and generate the tests. Another is debug, which is where people spend the most time. We find that some customers spend 80% to 90% of their time in debug. It gets really complicated when you have hardware and software, as you may have a dozen cores running software concurrently.
Q: What is use case verification, and how can it help?
A: Use case verification is really the only viable way to verify an SoC. An SoC has thousands of different potential states and state transitions, and even if it were practical to verify them all, many aren't legal for the specified operation of the SoC. Therefore, use cases are the best way to verify that the SoC operates as specified.
For a given SoC, you want to look at the customer requirements and the applications it needs to support. The way people specify that is in use cases. For example, with a mobile phone, you may be running a GPS program and displaying data on a screen. Then you get a phone call, and you need to make sure the SoC can handle concurrent operations.
Describing the use cases is really a way to specify the primary ways that an application is going to run on an SoC. You want to check that the use cases will work, not just functionally, but also meeting performance and power requirements. If you have performance bugs or limitations, you may get screen flicker, or your video may not keep up with bandwidth requirements.
Q: How do you define the use cases and describe them to the verification tool?
A: That’s the tricky part. Today there are two ways that people do use case verification, and both have challenges. One is that engineers try to translate a use case description into some kind of test they will run. They write some tests that try to mimic, in a naïve way, that use case. The problem is that it is really difficult to write a C language test that runs across multiple cores, so the tests that get written are really basic. For example, writing tests to find problems in today’s cache-coherent interconnect fabrics is very difficult, because such a test has to keep multiple processors busy while checking cache coherency.
The other approach is to boot an OS like Linux and bring up some production software, and test with real software to determine whether a use case could actually work. But that’s complex, and you usually don’t have all the software working until really late. We are already helping with some of that by accelerating the time to a given point of interest, but there is room for more improvement. To really develop tests in a top-down way, you need to use the key information you have about the design and its IP blocks as inputs to drive test generation.
So, we’ve been working with customers and developing technology around what we call software-driven verification. Our goal is to bring a more systematic approach to doing use case verification.
Q: What does software-driven verification involve?
A: You’re verifying an SoC through its embedded cores running software to exercise different use cases. If the software doesn’t rely on external resources, then you can run it at speed on any of the platforms. You’re taking the bare-metal software, the drivers, and the lowest levels of the hardware abstraction layer, and verifying the SoC through software APIs. Initially you don’t have production software, so you may start with diagnostics or some basic bare-metal software. At the beginning you focus on hardware verification, and over time you start integrating the production software.
Q: What are the benefits of software-driven verification?
A: The first thing is that it provides a more automated approach to verifying the SoC through the software APIs. You could do this manually, but you would have to hire a lot of people. A second benefit is quality – you can test your system much more thoroughly and find any corner cases in the hardware, embedded software, or between the hardware and software. The third thing is that you are reducing the time it takes to integrate the production software. If you can verify the lower levels of the software with the hardware through a set of use cases, you can integrate higher levels of software much faster.