Short answer: nope, not kidding. You can get real value from applying code coverage with hardware-assisted verification by focusing on actionable data. For the longer answer, keep reading.
Functional coverage is a technique for verifying that a design conforms to its specification: it helps engineers examine whether the design does what it is intended to do. Code coverage provides insight into testing completeness. Together, they make up the set of coverage metrics required for an educated assessment of verification closure.
Traditionally, coverage is a verification technique applied predominantly in simulation and formal analysis. Applying similar coverage techniques in hardware-assisted verification has not been a prevalent methodology. Certainly, assertions have been natively supported in emulators since around the turn of the century, but code coverage and functional coverage with covergroups did not appear in emulators until 2010. Even after being pleasantly surprised that covergroups and code coverage run natively in hardware-assisted verification, most verification and validation engineers still wonder, "What value can I derive by running code coverage in an emulator at the system level?"
Traditional code coverage techniques can be quite difficult to manage at the system level, for two reasons. First, enabling coverage collection can flood you with new data; system verification engineers may struggle to find meaning in all of it, particularly data coming from the depths of the design. Second, it can be very difficult to influence the behavior of low-level logic from high-level system tests. So, what are effective ways to deal with this data overload? One solution is to focus code coverage testing on actionable data: limit the data to a manageable amount that you can actually act upon. Over the next two blog posts, I'll discuss two different techniques for accomplishing this.
In this post, let's discuss the first technique, which is simply to focus on the higher-level connectivity, since that is what is typically new and being integrated at the system level. This technique is most useful when the lower-level blocks have been thoroughly verified. The good news is that coverage results from the top few levels are understandable and actionable by system verification teams. Typically, users apply toggle coverage on the ports of top-level blocks, as there tends to be less logic and more interconnect at these levels. Block or line coverage is generally not as interesting at these higher levels. You might have new, small pieces of controller logic that get pulled in at the system level, so it may make sense in your case to enable coverage on a few additional modules or an extra level or two of logic.
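To make the idea concrete, here is a minimal sketch of the filtering step: given a flat toggle-coverage report keyed by hierarchical signal path, keep only signals within the top few levels of hierarchy. The signal names, report format, and depth threshold are illustrative assumptions, not the output of any particular coverage tool.

```python
# Hypothetical sketch: reduce a flat toggle-coverage report to the top few
# levels of hierarchy, where results are actionable for system teams.

def hierarchy_depth(path: str) -> int:
    """Depth of a hierarchical path like 'top.gpu.core0.alu.carry' (top = 1)."""
    return path.count(".")

def actionable_signals(toggle_report: dict, max_depth: int = 2) -> dict:
    """Keep only signals within max_depth levels below the top."""
    return {path: toggled
            for path, toggled in toggle_report.items()
            if hierarchy_depth(path) <= max_depth}

# Illustrative report: True = signal toggled during the run
report = {
    "top.gpu_if.req_valid": True,     # top-level port: keep
    "top.gpu_if.req_ready": False,    # untoggled top-level port: keep (actionable!)
    "top.gpu.core0.alu.carry": True,  # deep internal signal: drop
}
focused = actionable_signals(report, max_depth=2)
```

After filtering, the untoggled `top.gpu_if.req_ready` stands out immediately, which is exactly the kind of finding a system verification team can act on: a top-level connection that no test has exercised.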
Figure 1: Apply code coverage on higher level connectivity
The tables on the right of Figure 1 show the number of toggle signals at a few selected levels of hierarchy for a sample GPU design of around 26M gates. You can see that the number of signals in the top few levels is manageable. But you can also see how quickly the signal count jumps: four orders of magnitude in just a few levels of hierarchy.
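That jump follows from simple fan-out arithmetic. As a rough model (the fan-out and port counts below are illustrative assumptions, not the actual numbers behind Figure 1):

```python
# Rough fan-out model of why port-signal counts explode with hierarchy depth.
# Both constants are illustrative assumptions, not measured from Figure 1.
fanout = 10           # sub-blocks instantiated per block
ports_per_block = 50  # ports per block

# Port signals at each hierarchy level, top (level 1) through level 5
signals_at = [fanout ** (d - 1) * ports_per_block for d in range(1, 6)]

for level, signals in enumerate(signals_at, start=1):
    print(f"level {level}: ~{signals:,} port signals")
# With these assumptions: 50 at the top, 500,000 by level 5 --
# four orders of magnitude in four levels of hierarchy.
```

With a fan-out of 10, each level of hierarchy multiplies the signal count by roughly 10x, which is why restricting toggle coverage to the top two or three levels keeps the data volume manageable.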
Watch for my post tomorrow to learn about the second technique for focusing code coverage efforts on actionable data. And I welcome your feedback on your experience with the technique discussed here.