If you're anything like me, you listen to webinars with one ear, glancing at your screen when a graph or image is referenced, perhaps catching up on email or articles while the webinar runs in the background. I have always struggled to know whether it is the timing of the event or dull content that causes this. It is odd, though, since I am the one who chose to attend after reviewing the agenda. While I keep trying to make the most of these sessions, I know, deep down, that a lack of personal interest and enthusiasm is what lets my attention drift.
But, wait a minute: if the topic is worthy of my interest, and the speaker can hold my attention and speak to me through the images and graphs, then nothing can stop me from absorbing that learning completely. Today, and in a few upcoming blog posts, I am going to talk about a few such webinars, and I will share references to the archived sessions.
The first of them is on effective system-level coverage use cases for functional verification. It was made clear to me that this webinar would help me learn how to extend my coverage verification techniques to the sub-system and system levels with hardware-assisted verification. I would also gain insight into new coverage use models being applied at the system level, and see customer case studies involving system-level coverage. So, I was ready to attend with full vigour.
For people like me who love to hear about hardware-assisted verification, here's an interesting statement from the webinar: "When packets are being dropped, the conditions that cause them to be dropped may be missed for thousands or millions of cycles and the testbench may not recognize the problem until weeks have gone by," said Raj Mathur, director of product marketing at Cadence.
If you are a verification engineer, verification manager, or designer, you know that getting effective coverage for a large, complex SoC is one of the biggest issues in verification right now, leading some teams to consider early tapeout just to get a platform on which they can run effective system-level vectors and even applications. Many of the errors caught using taped-out silicon tend to be complex integration bugs where the interconnect suddenly fails to behave as it should. The webinar shows an alternative approach: using hardware acceleration to extend RTL verification techniques to the system level.
The key, explained Raj, is to focus the SoC-level effort on conditions that affect integration rather than the internal functions of individual blocks, although assertions continue to be used on that logic to ensure continued correctness as errors are rectified.
Raj and Eric Melancon, staff product engineer, then described approaches that focus on the combination of transaction-based modeling in combination with hardware acceleration, allowing the use of live hardware interfaces to help create real-world conditions for verification.
On a system-level, Eric suggested that the key is to focus coverage on the areas that matter, and to use techniques such as hierarchy management to isolate signals that have an effect at the chip level and take the focus away from those that will have been verified at the block level.
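To make the idea of hierarchy management concrete, here is a minimal sketch of filtering coverage scope by hierarchical path. Everything in it is hypothetical: the signal names, the `filter_coverage_scope` function, and the prefix convention are illustrative inventions, not an actual Cadence tool API.

```python
def filter_coverage_scope(signal_paths, verified_blocks):
    """Keep only signals relevant to SoC-level coverage.

    signal_paths:    hierarchical names, e.g. 'soc.noc.arb.grant'
    verified_blocks: hierarchy prefixes of blocks already
                     verified (and covered) at the block level
    """
    kept = []
    for path in signal_paths:
        # Drop signals buried inside an already-verified block;
        # keep those that matter for chip-level integration.
        if not any(path.startswith(prefix + ".") for prefix in verified_blocks):
            kept.append(path)
    return kept

signals = ["soc.noc.arb.grant",       # new interconnect logic
           "soc.uart0.tx.shift_reg",  # reused peripheral internals
           "soc.cpu0.lsu.stall"]      # licensed CPU internals
print(filter_coverage_scope(signals, ["soc.uart0", "soc.cpu0"]))
# ['soc.noc.arb.grant']
```

In a real flow this pruning would be expressed through the coverage tool's hierarchy controls rather than hand-written filters, but the principle is the same: take block-level detail out of scope so the system-level metric reflects integration behavior.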
"Another technique is to use deeper analysis in particular regions of the design. You may have a CPU, some unique functionality in the core, and a bunch of peripherals," Eric said. "Let's say the CPU is licensed and the peripherals are being reused. Some of the core functions may be new - they are the meat of this SoC design. And let's say there is a new, complex interconnect being used. It can be useful to focus on these less-well tested parts of the design."
Eric then elaborated on coverage for optimization. "We are also finding opportunities to use coverage for optimization," he added, pointing to the use of software during hardware-accelerated verification to track down potential bottlenecks. "Looking at coverage on a FIFO, if you see that FIFO usage is low, maybe you can reduce its size. Or if it's unexpectedly high, maybe expand the size of the FIFO or perform software changes to make better use of the FIFO."
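Eric's FIFO example can be sketched in a few lines: collect per-cycle occupancy during an emulation run, then compare the observed peak against the configured depth. This is my own illustrative sketch, not anything from the webinar; the function name, the 50%/90% thresholds, and the sample data are all assumptions chosen to show the reasoning.

```python
from collections import Counter

def analyze_fifo_coverage(depth, occupancy_samples):
    """Summarize observed FIFO occupancy and suggest a sizing action.

    depth:             configured FIFO depth in entries
    occupancy_samples: per-cycle occupancy values gathered during the run
    """
    peak = max(occupancy_samples)
    utilization = peak / depth
    histogram = Counter(occupancy_samples)  # occupancy -> cycle count
    if utilization < 0.5:
        suggestion = "shrink"  # FIFO is oversized for the observed traffic
    elif utilization > 0.9:
        suggestion = "grow"    # nearly full; risk of backpressure or drops
    else:
        suggestion = "keep"
    return {"peak": peak, "utilization": utilization,
            "suggestion": suggestion, "histogram": histogram}

# A 64-entry FIFO whose occupancy never exceeds 12 entries
report = analyze_fifo_coverage(64, [0, 3, 7, 12, 9, 4, 1])
print(report["peak"], report["suggestion"])  # 12 shrink
```

The same data read the other way supports the second half of Eric's point: a peak near the configured depth argues for a larger FIFO, or for software changes that drain it more evenly.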
Now, let me stop here, and encourage you to go and find out more from the webinar directly. Here is a link to access the archived session:
Archived Webinar - Effective System-Level Coverage Use Cases for Functional Verification
Note: You will need Adobe Connect to view the webinar.
I always have a habit of asking for more when I am given something for free. So let me return the favor and offer something extra: a short demo on "running assertions and coverage in Palladium XP". You just need to log in to Cadence Online Support to view it.
Video - Demo on running assertions and coverage in Palladium XP
In summary, you will learn: