The EDA industry provides impressive tools for block-level verification test generation, but has so far produced limited automation for debugging. As a result, debug is becoming a major bottleneck in IC functional verification. Shlomi Uziel, vice president of engineering at the Advanced Verification Solutions Group at Cadence, is among those who believe that a new approach to debug is needed.
In this interview Uziel talks about why debug has become a bottleneck, the challenges customers are experiencing, and the requirements for a better solution.
Q: Shlomi, what is your background in verification and debugging?
Uziel: My introduction to the verification world started 19 years ago when I worked as an intern for Digital Equipment Corporation doing functional verification for their network chips. When I graduated, I realized that I liked this area, so I joined Verisity, where I worked on some of the foundations that later evolved into eRM and UVM. I also helped develop metric-driven verification tools and methodologies. I came to Cadence with the Verisity acquisition.
Q: How much time are customers spending in debug compared to the overall verification process?
Uziel: When we started looking at debug, we started with a rule of thumb of 50%. We were surprised to find that customers either agreed with that number or suggested higher numbers, sometimes as high as 70%.
Q: Why has debug become such a bottleneck?
Uziel: I think this is a natural progression of the developments we have made in hardware verification over the past several years. In the old days, the primary methodology was directed tests. Verification engineers spent most of their time crafting these tests and trying to make them do what they wanted the tests to do. That made the test construction intensive, but debug was easier because you knew what you were looking for.
Constrained-random approaches, metric-driven verification, and UVM all provided much more automation in creating the actual test. So now we find more bugs faster, but the environment is more complex and the stimuli are random. As a result, debug is becoming more of a bottleneck and debug automation needs to improve. We need to make it easier for engineers to discover the root causes of bugs that they find.
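The contrast Uziel draws between directed and constrained-random testing can be sketched in a few lines. This is a hypothetical illustration in Python, not a real testbench: the packet fields, address window, and length constraint are all invented for the example. A directed test fixes every field by hand; a constrained-random generator picks any legal value, so the same test can hit cases the writer never thought of — which is exactly why the failing stimulus is harder to reason about afterward.

```python
import random

def directed_packet():
    # Directed style: the test writer chooses every field explicitly,
    # so a failure points straight at a known scenario.
    return {"addr": 0x1000, "length": 64, "kind": "WRITE"}

def constrained_random_packet(seed=None):
    # Constrained-random style: only the legality constraints are written;
    # the generator picks concrete values within them.
    rng = random.Random(seed)
    addr = rng.randrange(0, 0x10000, 4)                 # word-aligned, 64 KB window
    length = rng.choice([2 ** n for n in range(1, 9)])  # power of two, up to 256
    kind = rng.choice(["READ", "WRITE"])
    return {"addr": addr, "length": length, "kind": kind}
```

Reseeding the generator reproduces a failing run, but the engineer still has to work out which of the randomly chosen values exposed the bug — the debug burden the interview describes.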
Q: What debug techniques are most common today? Are people looking at waveforms? Are they still using printf statements?
Uziel: It depends on the people you ask. If you talk to designers, what appeals to them the most is RTL debug with waveforms. That’s the practice they know and are familiar with. If you talk to verification engineers, debugging the testbench is much more like software behavior. Therefore they are using much more interactive debug tools. If you talk to software engineers, they are using embedded software debug tools and going step by step to find the problem.
We have been surprised to discover, however, that good old printf is still prevalent throughout the industry. Many engineers just go back and put these print messages close to where they think the problem is, and go through multiple iterations until they narrow down the problem. Sometimes it may be necessary to iterate three to five times, or even more.
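The printf workflow Uziel describes can be made concrete with a small sketch. Everything here is hypothetical (the `checksum` function, the gating flag, the index window), shown in Python for brevity; in practice the same pattern appears as `$display` in SystemVerilog or `printf` in C. Each re-run, the engineer tightens the instrumented window around the suspected site — the three-to-five iterations mentioned above.

```python
DEBUG = True  # flip off once the bug is found, or route to a log file

def debug_print(stage, value):
    # Iteration N of the manual narrowing loop: print state near the
    # suspected failure, re-run, and shrink the window next time.
    if DEBUG:
        print(f"[dbg] {stage}: {value!r}")

def checksum(data):
    total = 0
    for i, byte in enumerate(data):
        total = (total + byte) & 0xFF
        # First pass might print every index; later passes tighten the
        # condition once the failing region is roughly known.
        if 10 <= i <= 12:
            debug_print(f"byte[{i}]", total)
    return total
```

The cost is exactly what the interview points out: each narrowing step requires another full re-run of the failing test.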
Q: I imagine one of the challenges engineers face is a massive amount of data produced by debug tools. Is that a big problem?
Uziel: SoC designs keep growing as we speak, so now we have more engineers contributing to the final SoC. When you find a bug in the system, it becomes a whodunnit show, and it takes a collaborative debug effort to go through piles of code and figure out what was going on. This is not just about the size—it’s about integrating multiple IP blocks representing multiple knowledge domains from multiple, remote teams.
Q: As design goes up in abstraction, does debug get easier—or harder?
Uziel: If you’re writing your code in Java or C++, and you need to debug at the assembly level, the fact that you created your design at a higher level of abstraction doesn’t help when you debug. If we could come up with tools that enable abstraction and debug at the same level, it would be helpful.
Q: Today debug takes place in separate domains such as digital, mixed signal, custom/analog, low power, embedded software, and so forth. Is this separation a problem?
Uziel: Debugging an SoC takes a team effort and it requires collaborative feedback. You need different sets of expertise to nail down the cause of a particular bug. Different domains may require unique tools, and some of those domains, such as embedded software for example, are trickier than others. But regardless of that, engineers need to have an efficient way to share their findings so they can collaborate to find the source of the bug.
Q: There are also different kinds of hardware/software development engines—virtual prototypes, formal, RTL simulation, emulation, and others. Do we need a consistent debug environment across these engines?
Uziel: Yes. You want to choose different verification engines according to the needs of your design. You want to easily move from one engine to another, and you don’t want the debug environment to change.
Q: Modern SoCs include a dozen or more protocols for interfaces and memories. How do these complicate debug?
Uziel: This is both a problem and an opportunity. It’s a problem because almost every SoC today uses commercial design and verification IPs. Those are typically encrypted, and that makes debugging much harder. But it’s also an opportunity, because those protocol standards are well known and therefore suggest an abstraction that will make the debug much easier, even for someone who is not a protocol expert.
Q: Beyond that, what are the major changes and improvements that need to happen with debug?
Uziel: Clearly we have made a lot of progress on many fronts to help our customers tape out on time. But what we need to leapfrog is debug. Since it has become the bottleneck, we need to look for better techniques to automate it. On top of that, we need to accommodate the changes that are happening in the design world, with a focus on SoCs. We want multiple stakeholders and domain experts to collaborate easily to debug the same design. And of course like everything in EDA, it needs to scale!
Q: Finally, how can engineers get to the root causes of the bugs they find?
Uziel: When you debug properly, you want to find the source of a specific bug, fix it, and prevent it in the future. Most debug tools today just provide raw data at a particular point in simulation time, and let you figure it out. The burden of finding the relevant data at the right point in time, drawing conclusions from it, then tracing it step-by-step until you get to the source of the problem is all on the user. We need tools that will move some of this burden to the machine.
Today a team kicks off its nightly regression run expecting several bugs that will materialize on the following morning as a set of failures. The long manual and tedious path of root-causing each of those failures to concrete and well-understood problems is today’s biggest bottleneck to tapeout.
So assuming that introducing bugs during development is not going away any time soon, then at least what we can do is to make sure that when the verification team comes to work in the morning, debug software identifies failures, takes engineers to the right point in time, shows them the most relevant information, provides them with a set of suggestions where the problem can be, and then guides them through the debug process to root-cause the problems much faster. This is how we can help our customers reduce the debug bottleneck.
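One step of the morning workflow Uziel sketches — turning a pile of overnight failures into a short list of suspected root causes — can be illustrated with a toy triage script. This is a hypothetical sketch, not a description of any Cadence tool: the signature heuristic and the failure format are invented for the example.

```python
from collections import defaultdict

def triage(failures):
    """Group regression failures by error signature.

    failures: list of (test_name, error_message) tuples, e.g. parsed
    from overnight log files. Returns clusters, largest first, on the
    theory that one root cause often explains many failures.
    """
    clusters = defaultdict(list)
    for test, message in failures:
        # Crude signature: first line of the error with digits stripped,
        # so "TIMEOUT at cycle 100" and "TIMEOUT at cycle 250" bucket together.
        signature = "".join(c for c in message.splitlines()[0] if not c.isdigit())
        clusters[signature].append(test)
    return sorted(clusters.values(), key=len, reverse=True)
```

A real system would go much further — taking the engineer to the relevant simulation time and suggesting candidate causes, as the interview describes — but even this grouping step replaces per-failure manual inspection with per-cluster inspection.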
Related Blog Posts
- DVCon 2015: Engineers Discuss Verification Debug Challenges and Strategies
- Archived Webinar: New Technology Attacks the Verification Debug Bottleneck