According to some reports, debugging has become the biggest bottleneck in IC functional verification. At a Cadence-sponsored lunch panel at the recent DVCon 2015 conference, three engineering managers explored verification and debug challenges, and looked at solutions including formal technology, the Universal Verification Methodology (UVM), and a “shift left” to earlier hardware/software co-verification.
The panel was titled “Mastering Verification and Debug Productivity” and was moderated by Brian Bailey, technology editor at Semiconductor Engineering. Panelists were Montecillo (Broadcom), Goodenough (ARM), Lacey (HP), and Schirrmeister (Cadence); they introduce themselves below.
Bailey began with the “scoop of the century” – which was about a supposed simulator that runs orders of magnitude faster than anything else, and is available for free. Of course, no such simulator exists, and if it did “it wouldn’t help us that much,” Bailey said. “Simulation is just one part of the puzzle. There is absolutely no point in finding bugs faster than people can fix them.”
The panel, Bailey said, was convened to “talk about the problems people are having in debug and the strategies they’re using to cope with the problems, and hopefully that will be way more valuable than a simulator that doesn’t actually exist.” Following are some of the questions and answers from the panel discussion.
Q: Please introduce yourselves, and then tell us about the biggest debug challenges you are facing today.
Montecillo – I lead the formal team at Broadcom. For us, it’s all about early bug detection and higher quality RTL. We start early in the design process, and Cadence provides a tool called Visualize that empowers our design team to explore a design as soon as RTL is coded. Formal is also a good fit for post-silicon debug, where we know all the permutations of the corner cases and can get to the root cause of a problem faster.
Goodenough – I’m vice president of engineering systems at ARM. What we need around debug is two-fold. First, we need to reduce the latency of the time-to-triage and root-cause analysis and subsequent fixing, because we all know we’ll get a bug two weeks before tapeout. The second issue is that once we triage and analyze a problem, we don’t want to make the same mistake again. We need to feed results back from late-arriving debug sessions.
Lacey – I work on ASICs for HP servers. The biggest challenge we’ve been focusing on recently is increasing the productivity of our engineers, and that certainly applies to the debug area. We are trying to move to new methodologies and we recently moved to UVM-e. That brings up multi-language challenges. We have UVM SystemVerilog components we have to integrate. How do we debug in a mixed language environment?
Schirrmeister – I head product management for the [Cadence] System Development Suite. We hear from customers about three issues related to debug. One is the ability to reuse verification environments from IP to subsystems to the chip level. Another is the notion of being able to reuse across engines, such as virtual platforms, RTL simulation, and emulation. Third is verification reuse across disciplines, such as hardware and software.
Q: Do you find it more difficult to debug on the design side or on the verification side?
Goodenough – All of the above. You’re integrating a lot of IP from different people in different design centers in different geographies. This is a knowledge management problem.
Lacey – There are different challenges in different domains. Those challenges are addressed with different kinds of tools. I think tools on the verification side are advancing quite rapidly. But when I look at RTL designers in our organization, many are “old school.” They may find it harder to debug because they’re not using more advanced tools.
Q: What kinds of problems can you find with formal techniques?
Montecillo – Implementation problems. Architectural issues. They’re the same issues found in simulation, but we feel we can find them much more easily and quickly at an early stage. The later a bug is caught, the more expensive it is.
Q: It sounds like a very important philosophical change is happening in verification. Formal used to be something you’d do after simulation and find bugs simulation could not. But now you’re bringing formal verification in early.
Montecillo – Exactly. It’s one of the shift-left programs we deploy. We’re trying to empower a design team to be able to employ these formal techniques.
Goodenough – There is a formal use model that has nothing to do with verification – it’s about using formal tools for better quality design. It’s a mistake to have verification and validation engineers do that. You use formal to avoid bugs from design teams. We have been unsuccessful when we made formal a validation problem, and successful when we made the use of formal technology a design problem.
Q: John, you talked about triage. What tools help you in that process?
Goodenough – There are a number of techniques, from promotion of assertions to use of abstractions. Rather than have the software guy deal with waveforms, give him access to transactional logging. The problem is knowledge management, and one of the key things in knowledge management is communication.
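The idea of giving the software engineer transactional logging rather than waveforms can be illustrated with a small sketch. This is not any specific Cadence or ARM tooling; the `Transaction` record and its fields are hypothetical, chosen only to show the level of abstraction being described, using Python for brevity:

```python
from dataclasses import dataclass

@dataclass
class Transaction:
    """One bus transaction, logged at the level a software engineer reasons at,
    instead of as individual signal toggles in a waveform."""
    time_ns: int
    kind: str   # e.g. "READ" or "WRITE" -- illustrative values only
    addr: int
    data: int

def log_transaction(txn: Transaction) -> str:
    """Render a single-line transactional log entry."""
    return (f"@{txn.time_ns}ns {txn.kind} "
            f"addr=0x{txn.addr:08x} data=0x{txn.data:08x}")

print(log_transaction(Transaction(1200, "WRITE", 0x40000010, 0xDEADBEEF)))
```

One such line replaces dozens of per-cycle signal transitions, which is the kind of abstraction promotion Goodenough is describing.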
Schirrmeister – It’s almost like marriage counseling. You have to get hardware and software guys into the same room.
Lacey – We may run 10,000-15,000 tests overnight for a given block. We may have a low failure rate of one percent, but that still leaves 100-150 failures we need to look at. We have scripts that will immediately begin to “bucketize” those failures, to give them a first level of organization. The engineer can immediately dig into that triage and find out what the problem is.
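The “bucketizing” scripts Lacey describes can be sketched roughly as follows. The log format and the normalization rules here are assumptions for illustration, not HP’s actual tooling: the idea is simply to mask run-specific details (timestamps, addresses, data values) so that failures with the same root cause collapse into one bucket.

```python
import re
from collections import defaultdict

def signature(error_line: str) -> str:
    """Normalize an error message into a bucket key by masking
    run-specific details: hex values, then any remaining numbers."""
    sig = re.sub(r"0x[0-9a-fA-F]+", "<HEX>", error_line)
    sig = re.sub(r"\d+", "<N>", sig)
    return sig.strip()

def bucketize(failures: list[str]) -> dict[str, list[str]]:
    """Group raw failure messages into buckets of likely-identical causes."""
    buckets: dict[str, list[str]] = defaultdict(list)
    for line in failures:
        buckets[signature(line)].append(line)
    return dict(buckets)

# Hypothetical overnight-regression failures (UVM-style messages):
failures = [
    "UVM_ERROR @ 1200ns: scoreboard mismatch, addr=0x3f80 data=0x11",
    "UVM_ERROR @ 5400ns: scoreboard mismatch, addr=0x0a10 data=0x99",
    "UVM_FATAL @ 300ns: timeout waiting for grant on port 2",
]
for sig, members in bucketize(failures).items():
    print(f"{len(members):4d}  {sig}")
```

With 100-150 overnight failures, a first pass like this typically collapses them into a handful of buckets, so an engineer can start triage from one representative failure per bucket.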
Montecillo – For us formal people, debug is easy. Formal can generate counterexamples with the shortest possible trace. We’re looking at milliseconds of traces and trying to find the root cause. We have all the traditional tools available to us, including waveform tracing and a hierarchy browser.
Q: I’m a believer that you get what you measure. I’m curious how you are measuring debug productivity.
Lacey – We don’t have any formal metrics, but we do talk about it, and we get a general sense and feel as to whether the things we’re trying to deploy are helping us and making a difference. Moving to UVM is a debug enhancer for us. We have consistency across all our components now.
Goodenough – We measure time to root-cause analysis. We do that for big problems, not little ones. We also track metrics about triage, which is not necessarily root-cause analysis. But it’s a cultural problem – we employ extremely intelligent people who resent not being trusted. The challenge we have is showing people how metrics can enhance what they’re doing.
Q: I love the idea of formal – the ability to verify a lot of things without stimulus. But it seems to me formal has its own limitations for complex networking protocols. Normando, do you use formal for these kinds of complex protocols?
Montecillo – Oh, absolutely. We have various techniques for complex blocks – we do partitioning, and we do abstraction, to reduce the complexity of design. In addition to bus protocols or interfaces, we verify transport blocks and control-centric blocks.
Goodenough – Formal isn’t a free lunch. It is just as much work to write a formal validation environment as a constrained-random environment. We do a lot of formal and we deal with complexity through abstraction. You don’t reason at the wire level about every signal, you reason about aggregated transactions.
Q: Design teams may complain that test bring-up on a block is too slow. Have you investigated that problem?
Lacey – Once we moved to UVM, we were able to get a new environment up very early. Where it used to be a week or two, we now have it up in less than a day for most of our block environments.
Montecillo – If your design team is just sitting around waiting for a testbench, use formal. Visualize is an incredible tool. You don’t need to know about constraints or checkers, you just define your scenario, and the tool will calculate the trace for you. You can manipulate the trace and look at what-if scenarios.
Other DVCon 2015 Blog Coverage
DVCon Accellera Panel – What’s the Key to IC Design Efficiency?
DVCon 2015 Panel: Is IC Verification an Art or a Science?