Debugging is becoming the biggest bottleneck in the IC functional verification flow, and no wonder—many verification engineering teams are spending at least 50% of their time in debug. Cadence this week (April 28, 2015) is seeking to ease that bottleneck with the Indago™ Debug Platform, which is based on two concepts that are being applied across many industries: root cause analysis and Big Data.
Part of the Cadence System Development Suite, the Indago Debug Platform can reduce the time to identify bugs in a design by over 50% compared to traditional debug methods. In addition to the overall platform and a common GUI, the Indago platform provides three independent debug apps, each focused on a specific task.
Wikipedia defines root cause analysis (RCA) as "a method of problem solving used for identifying the root causes of faults or problems." A factor is considered a root cause if its removal from the fault sequence prevents an undesired event from recurring. In IC verification debugging, said Kishore Karnane, product management director at Cadence, the root cause is "the underlying bug that helps engineers find the real cause of the problem."
Wikipedia defines Big Data as “a broad term for data sets so large or complex that traditional data processing applications are inadequate.” In IC functional verification, Karnane said, Big Data means capturing a complete debug database (such as messages, waveforms, source execution order, call stacks, and threads) in a single verification iteration so users don’t have to re-simulate as they run different scenarios. Big Data provides an alternative to sampling-based methods, which focus on carefully pre-selected data sets, require up-front thought, and in most cases require multiple iterations to find the bug.
Isn’t RCA an obvious goal of most debug sessions? Ultimately, yes—but today, a lack of automation makes it difficult to track down the source of a bug. Most engineers today are debugging with post-process RTL waveforms paired with log files. This methodology requires an engineer to know, up front, where to add print statements and what signals to record. Engineers must rerun simulation to get all the information they need. This is why, on average, it takes engineers three to five iterations through the debug loop to isolate and fix a bug.
However, as shown in the figure below, these iterations may miss an underlying bug. In this example, the security field for a data packet must be 16 bits in order to avoid errors. We inject random values at S1 and pick up a few errors with the upper bits incorrectly set. We trace back to S1 and reset the constraints that generated the erroneous values. However, if we don’t test S2, S3, and S4, we may never recognize that the underlying bug is the 16-bit requirement and that it may cause similar errors in other blocks.
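The masking effect described above can be sketched in Python. This is a toy model, not the actual design: the block name S1 and the 16-bit requirement come from the example, while the truncating `datapath` function is a hypothetical stand-in for the buggy block.

```python
import random

# Hypothetical datapath block: silently drops anything above 16 bits.
# This truncation is the underlying bug, not the S1 constraints.
def datapath(value):
    return value & 0xFFFF

# S1's checker: flags packets whose upper bits were lost in transit.
def check_packet(value):
    return datapath(value) == value

random.seed(0)
# Random stimulus at S1: some values exceed 16 bits and trigger errors.
stimulus = [random.randrange(0, 1 << 20) for _ in range(1000)]
errors = [v for v in stimulus if not check_packet(v)]
print(f"{len(errors)} of {len(stimulus)} packets failed at S1")

# "Fixing" the constraints at S1 makes the symptom disappear...
constrained = [v & 0xFFFF for v in stimulus]
assert all(check_packet(v) for v in constrained)
# ...but the truncating datapath is untouched: stimulus from S2, S3,
# or S4 that exceeds 16 bits would fail in exactly the same way.
```

The point of the sketch is that tightening the constraints only removes the stimulus that exposes the bug; the datapath truncation remains and will resurface from any other block.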
What happens when we apply Big Data and RCA to this problem? Engineers look for the underlying bug by analyzing waveforms, log messages, and code execution. Engineers use automated technology such as Smart Log, reverse debugging, and multi-engine data. The design is modified and rerun, but only one verification iteration is required. Finally, the actual root cause is identified—a bug in the datapath.
A Big Data approach to debug samples the entire debug data set once, including waveforms, messages, source execution order, call stack, and active threads. As shown below, a more traditional sampling-based approach will focus on waveforms. This flow requires many iterations. With Big Data, engineers can quickly run through a number of verification scenarios without re-simulating. They can ask deeper questions about what happened during the run, highlight causality, and discover correlations that might go undetected through a sampling-based approach.
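The capture-once idea can be illustrated with a minimal sketch, assuming a hypothetical one-pass run that logs every event into a single trace. The record fields loosely mirror the kinds of data the article lists (time, source location, signal values, messages); none of this reflects Indago's actual internals.

```python
from dataclasses import dataclass

@dataclass
class DebugRecord:
    # One entry in a toy "complete debug database",
    # captured in a single simulation pass.
    time: int
    source_line: str
    signals: dict
    message: str = ""

trace: list[DebugRecord] = []

def simulate():
    # Hypothetical simulation that logs every step as it runs.
    value = 0
    for t in range(5):
        value = (value << 1) | 1
        trace.append(DebugRecord(t, f"step {t}", {"value": value},
                                 "overflow" if value > 7 else ""))

simulate()

# With the full trace captured, different debug questions are
# answered from the same data, without re-simulating:
first_error = next(r for r in trace if r.message == "overflow")
print("first overflow at t =", first_error.time)
values_before = [r.signals["value"] for r in trace
                 if r.time < first_error.time]
print("values leading up to it:", values_before)
```

A sampling-based flow would have decided up front which of these fields to record, and each new question (when did the error first occur? what led up to it?) would risk another re-run.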
“With a combination of Big Data and RCA, we guide the customer to finding the real cause of the problem,” Karnane said. “Otherwise, you basically have to know up front what kind of bug you are looking for.” The Indago platform, he said, is saving some early customers “man-months of effort.”
So how does the Indago Debug Platform automate Big Data and RCA? For one thing, Indago records the complete execution order of your source code. At any point in the RCA process, you can examine the complete call stack, active threads, and local variable values, and can single-step forwards or backwards in time to replay the simulation result. Here are some other features in the Indago Debug Platform:
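The record-and-replay idea behind stepping backwards in time can be sketched as follows. This is a generic illustration of the technique (snapshot the call stack and locals at every step, then move a cursor through the history), with invented names; it is not Indago's implementation.

```python
import copy

class ReplayDebugger:
    # Minimal record-and-replay sketch: snapshot state at every step,
    # then step forwards or backwards through the recorded history.
    def __init__(self):
        self.snapshots = []
        self.cursor = -1

    def record(self, call_stack, local_vars):
        # Deep-copy so later mutation of the live state
        # does not corrupt the history.
        self.snapshots.append((list(call_stack), copy.deepcopy(local_vars)))
        self.cursor = len(self.snapshots) - 1

    def step_back(self):
        if self.cursor > 0:
            self.cursor -= 1
        return self.snapshots[self.cursor]

    def step_forward(self):
        if self.cursor < len(self.snapshots) - 1:
            self.cursor += 1
        return self.snapshots[self.cursor]

# Record a short run of a hypothetical routine.
dbg = ReplayDebugger()
total = 0
for i in range(3):
    total += i * i
    dbg.record(["main", "accumulate"], {"i": i, "total": total})

stack, locals_ = dbg.step_back()  # rewind one step
print(locals_)                    # state as it was at i == 1
```

Because every step is already captured, inspecting an earlier point in the run is a cursor move through recorded data rather than a re-simulation.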
One important capability is integrated debug for both testbenches and RTL code. When a bug occurs, engineers may not know whether the bug is in the testbench or the RTL. It is not uncommon for the testbench code to be larger than the actual design code.
In summary, the Indago platform presents a unified environment for debugging VIP, hardware, and software. It provides an automated approach to RCA. And by leveraging Big Data, it minimizes the need to re-run simulation and go through multiple iterations to discover the source of a bug.
Further information about the Cadence Indago Debug Platform is available in a whitepaper and on the product landing page.
- Q&A: Breaking Through the Verification Debug Bottleneck
- DVCon 2015: Engineers Discuss Verification Debug Challenges and Strategies
- Archived Webinar: New Technology Attacks the Verification Debug Bottleneck