In yesterday’s Part I blog post, I talked about a technique for focusing code coverage efforts on actionable data, namely concentrating on higher-level connectivity. Here, let’s discuss a second technique to support system-level code coverage with hardware-assisted verification: performing deep analysis only in particular regions of the design.
Figure 1: Apply code coverage with localized focus
In today’s environment, SoCs are composed of blocks with varying backgrounds. Some are reused from prior generations, some come from third-party providers, and some are newly developed in-house. In Figure 1, you’ll see a generic SoC design with a CPU, some unique core functionality, and a set of peripherals. Let’s say the CPU is a third-party intellectual property (IP) block and most of the peripheral blocks are well tested and being reused from previous projects. The core function units, on the other hand, are new or significantly modified, and there is also a new, complex interconnect fabric in this design.
To manage the magnitude of the coverage analysis, it can be useful to focus on these new and less-tested areas of the design. Because these portions of the design are new, it is easy to get access to the designers to review coverage data. And if you focus on specific instance hierarchies at different times in the verification process, it is still feasible to merge coverage from these different regions into a single complete view.
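As a rough illustration of the merging idea, here is a minimal sketch assuming a simplified model in which each focused coverage run yields, per design instance, the set of code-coverage items it hit. The instance paths and item names are hypothetical, and real coverage databases are tool-specific; this only shows why per-region runs can still combine into one complete view.

```python
# Illustrative sketch only; real coverage databases are tool-specific.
# Each "run" maps design-instance paths to the set of code-coverage
# items (e.g., block IDs) hit during that run.

def merge_coverage(runs):
    """Union the per-instance hit sets across several focused runs."""
    merged = {}
    for run in runs:
        for instance, hits in run.items():
            merged.setdefault(instance, set()).update(hits)
    return merged

# One run focused on the new interconnect fabric (hypothetical names)...
fabric_run = {"soc.fabric": {"b0", "b1", "b2"}}
# ...and a later run focused on a new core-function unit.
core_run = {"soc.core0": {"b0", "b3"}, "soc.fabric": {"b2", "b4"}}

view = merge_coverage([fabric_run, core_run])
print(sorted(view["soc.fabric"]))  # ['b0', 'b1', 'b2', 'b4']
```

Because the merge is a simple union per instance, it does not matter in which order the focused runs were produced, which is what makes staggered, region-by-region analysis practical.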
So, in these two blog posts, we’ve briefly discussed a couple of techniques that can help you focus on actionable code coverage data in order to make more effective use of code coverage at the system level. However, many users actually do take on the task of generating and analyzing coverage deep into the design across the width of the various subsystems. They set very high code coverage goals, like 100%, and in any regions of the design that don’t meet those goals, they will get the responsible design engineers to review the holes in their modules. What is an effective technique for analyzing and improving coverage in this scenario?
When you do deep analysis of this kind, you will invariably find coverage items that simply are not going to be exercised in the context of the design. For these situations, coverage analysis tools support the concept of exclusion lists: excluded coverage items are simply omitted from the coverage score.
Figure 2: Excluding irrelevant coverage items
This little example shows a module that supports 8-, 16-, and 32-bit accesses. If this particular application only uses 32-bit accesses, the 8- and 16-bit access blocks will be uncovered. The example shows that when you exclude the two uncovered blocks, the block coverage score for the module increases from 83% to 89%. Note that these types of exclusions are generally stored and reused from run to run, and from chip generation to chip generation, so you only have to do the detailed analysis once.
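The arithmetic behind that score change can be sketched in a few lines. The block counts below (25 of 30 blocks covered) are assumptions chosen to reproduce the roughly 83% to 89% improvement described above, not the actual data behind the figure; the point is simply that exclusions shrink the denominator.

```python
# Illustrative sketch: block counts are assumed, chosen only to match
# the ~83% -> ~89% improvement described in the text.

def block_coverage(covered, total, excluded=0):
    """Coverage score over the blocks remaining after exclusions.
    Assumes the excluded blocks are all uncovered ones."""
    return 100.0 * covered / (total - excluded)

TOTAL_BLOCKS = 30  # blocks in the module (assumed)
COVERED = 25       # blocks hit when only 32-bit accesses occur (assumed)

before = block_coverage(COVERED, TOTAL_BLOCKS)
# Exclude the two uncovered 8- and 16-bit access blocks:
after = block_coverage(COVERED, TOTAL_BLOCKS, excluded=2)

print(f"{before:.0f}% -> {after:.0f}%")  # 83% -> 89%
```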
Functional Coverage: Another Technique in Your Verification Toolbox
Code coverage is just one of the coverage techniques; the other is functional coverage. Together, these coverage types provide a more complete set of information to help answer some of our customers’ critical system-level questions. Figure 3 tabulates what we’ve heard from several users on their most commonly asked questions, which a combination of code and functional coverage can help answer.
Figure 3: User explorations and coverage type
The recent posts merely touched on the first two rows. If you want to hear more about the other use cases highlighted in Figure 3, check out this webinar, “Effective System-Level Coverage Use Cases for Functional Verification.” Also, read this Tech Design Forum blog post on focusing coverage for system-level integration.
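To make the code-versus-functional distinction concrete, here is a minimal Python model of functional coverage. It stands in for the covergroup constructs a verification language such as SystemVerilog provides; the bins and sampled values are hypothetical. Where code coverage asks which statements executed, functional coverage asks whether each interesting scenario, here an access width, was actually exercised.

```python
# Minimal functional-coverage model. Bins and samples are hypothetical;
# in practice this is expressed with constructs such as SystemVerilog
# covergroups rather than hand-rolled Python.

class Covergroup:
    def __init__(self, bins):
        # One hit counter per bin of interest.
        self.hits = {b: 0 for b in bins}

    def sample(self, value):
        # Record an observed value if it falls in a tracked bin.
        if value in self.hits:
            self.hits[value] += 1

    def score(self):
        # Percentage of bins hit at least once.
        hit = sum(1 for n in self.hits.values() if n > 0)
        return 100.0 * hit / len(self.hits)

access_width = Covergroup(bins=[8, 16, 32])
for width in (32, 32, 16):  # observed transactions (assumed)
    access_width.sample(width)

print(f"{access_width.score():.0f}%")  # 8-bit bin never hit -> 67%
```

Note that every line of this model could execute (100% code coverage) while the 8-bit scenario is never exercised, which is exactly the gap functional coverage exposes.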
Here are some FAQs that you may find interesting: