In a recent webinar on increasing functional verification performance, the point was made that gate-level simulation usage is increasing. Wait a minute, I thought - haven't we spent the last two decades talking about raising the abstraction level for design and verification? While some IC verification teams are indeed moving up to software-driven verification and transaction level modeling (TLM), it turns out that there are increasingly compelling reasons to run gate-level simulation, as revealed in a recent Cadence customer survey.
In a typical flow, gate-level simulation is run after the RTL code is simulated and synthesized into a gate-level netlist. Static timing analysis (STA) and logic equivalence checking are also run following RTL synthesis, but by themselves these static verification methodologies don't cover everything. Equivalence checking, for instance, doesn't consider timing or detect X-state optimism (explained below).
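X-state optimism is easiest to see with a small model. The sketch below (plain Python, with X modeled as `None`; all names are illustrative and not from any simulator's API) contrasts RTL `if` semantics, which optimistically treat an unknown select as false, with a gate-level mux that propagates the X:

```python
# Three-valued logic sketch: 0, 1, and X (modeled as None).
# All names are illustrative; this is not any simulator's API.
X = None

def g_and(a, b):
    # Gate-level AND: a controlling 0 wins; otherwise X propagates
    if a == 0 or b == 0:
        return 0
    if a is X or b is X:
        return X
    return 1

def g_or(a, b):
    # Gate-level OR: a controlling 1 wins; otherwise X propagates
    if a == 1 or b == 1:
        return 1
    if a is X or b is X:
        return X
    return 0

def g_not(a):
    return X if a is X else 1 - a

def mux_gate(sel, d0, d1):
    # 2:1 mux as synthesized gates: an X on select propagates
    return g_or(g_and(sel, d1), g_and(g_not(sel), d0))

def mux_rtl(sel, d0, d1):
    # RTL 'if (sel)' semantics: X is treated as false (optimistic)
    return d1 if sel == 1 else d0

# Unknown select, differing data: RTL hides the X, gates reveal it
assert mux_rtl(X, 0, 1) == 0    # optimistic: quietly picks d0
assert mux_gate(X, 0, 1) is X   # pessimistic: output is unknown
```

This is exactly the class of bug that equivalence checking misses and gate-level simulation catches: the RTL run happily produces a 0 where the netlist produces an X.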
One reason for running gate-level simulation is design for test (DFT). Because scan chains are inserted after the gate-level netlist is created, gate-level simulation is often used to determine whether scan chains are correct. Another motivation for gate-level simulation is that technology libraries at 45nm and below have far more timing checks, and more complex timing checks, than older process nodes.
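As a rough illustration of what a scan-chain check does, the sketch below (a hypothetical Python model, not any tool's output) treats the chain as a shift register, shifts a known pattern in, and verifies that it emerges intact after a latency equal to the chain length:

```python
# Hypothetical model of a scan-chain integrity check: shift a known
# pattern through the chain and confirm it comes out unchanged.

def shift_cycle(chain, scan_in):
    """One scan clock: each flop takes its predecessor's value."""
    scan_out = chain[-1]
    new_chain = [scan_in] + chain[:-1]
    return new_chain, scan_out

def flush_test(chain_length, pattern):
    chain = [0] * chain_length
    out_bits = []
    # Shift the pattern in, then pad with zeros to flush it out
    for bit in pattern + [0] * chain_length:
        chain, out = shift_cycle(chain, bit)
        out_bits.append(out)
    # After 'chain_length' cycles of latency, the pattern reappears
    return out_bits[chain_length:chain_length + len(pattern)]

pattern = [1, 0, 1, 1, 0]
assert flush_test(8, pattern) == pattern  # chain shifts correctly
```

A broken chain (a miswired or missing scan mux, for example) would corrupt the emerging pattern, which is what a real gate-level DFT simulation is looking for.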
Gagandeep Singh, Cadence staff R&D engineer, mentioned the survey at a Dec. 4, 2012 webinar on improving verification performance (see my recent blog review here). I spoke to Singh and Amit Dua, senior staff product engineer, to get some more details. The survey involved verification engineers from 7 major Cadence customers located in North America, Japan, India, and Europe. Process nodes mostly ranged from 28nm to 45nm (note: the Cadence Incisive verification platform supports 20nm as well). Respondents cited the top reasons for running gate-level simulation as follows:
A separate question about DFT simulation revealed that about half of respondents use this technique to verify scan chains.
Survey respondents said that gate-level simulation may take up to one-third of their total simulation time, and potentially most of their debugging time. While far more bugs are caught in RTL simulation, Singh noted that "gate-level debug is far more complex and time-consuming than RTL debug." And unlike RTL batch/regression runs, Dua noted, gate-level debug cannot be sped up simply by adding more compute power, because the debugging itself requires manual effort.
When is gate-level simulation run? That's a tricky balance, because a bug caught late in the verification cycle is an expensive bug to fix. On the other hand, gate-level simulation isn't very useful until the RTL is reasonably stable. "It cannot be done too early and it should not be done very late in the design," Singh said.
Speeding Things Along
Since gate-level simulation (especially with timing) runs much more slowly than RTL simulation, it can have a significant impact on the verification closure cycle. Thus, there's keen interest in speeding it up. Applying more zero-delay simulation is one way to do this; survey respondents reported that they now run more zero-delay simulation than timing simulation at the gate level.
Singh noted that zero-delay simulation is adequate for most functional verification, and that it runs 3-4X faster than timing simulation. All major simulators have some option for turning off timing, but different simulators provide different features. The Cadence Incisive Enterprise Simulator, for instance, offers delay mode control and built-in features that help designers run zero-delay simulations more effectively. This matters because zero-delay mode can introduce race conditions into the simulation.
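Why removing delays can expose races is easy to sketch. In the toy model below (illustrative Python only, not real simulator scheduling), a clocked process and a combinational process both touch the same signal in one timestep; with a gate delay on the combinational path the ordering would be fixed, but at zero delay the result depends on which process the simulator happens to evaluate first:

```python
# Toy zero-delay race: a clocked process and a combinational process
# touch the same signal in one timestep. Illustrative only; this is
# not real simulator scheduling, just the shape of the problem.

def simulate(order):
    sig = {"d": 0, "q": None}

    def flop():   # on the clock edge, sample d into q
        sig["q"] = sig["d"]

    def comb():   # combinational logic drives d in the same timestep
        sig["d"] = 1

    procs = {"flop": flop, "comb": comb}
    for name in order:   # evaluation order is the simulator's choice
        procs[name]()
    return sig["q"]

# Two legal zero-delay orderings give two different results (a race);
# with a real gate delay on 'comb', the flop would always sample 0.
assert simulate(["flop", "comb"]) == 0
assert simulate(["comb", "flop"]) == 1
```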
Incisive also offers a timing file that lets you turn off timing for particular instances in a design. And if you really need speed for untimed simulations, the Palladium XP accelerator/emulator can offer speeds 10,000 times faster than simulation.
Incisive also lets engineers provide limited debug access to certain portions of the design, so they don't end up dumping waveforms for areas they're not going to debug anyway. If full debug access is needed, a switch can provide it. There's also an option (-ZLIB) that can compress snapshots and save disk space, while letting users set the level of compression.
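The trade-off behind a compression option like -ZLIB is the familiar zlib one: higher levels spend more CPU time to save more disk space. Here is a generic Python illustration of how the compression level affects size (this is standard `zlib` usage, not the Incisive implementation):

```python
import zlib

# Generic illustration (not the Incisive implementation): higher
# compression levels trade CPU time for smaller files on disk.
snapshot = b"waveform sample data " * 4096  # stand-in for a snapshot

fast = zlib.compress(snapshot, level=1)    # quicker, larger
small = zlib.compress(snapshot, level=9)   # slower, smaller
assert len(small) <= len(fast) < len(snapshot)
assert zlib.decompress(small) == snapshot  # lossless either way
```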
So in short, an old technology - gate-level simulation - is enjoying a revival as we move down the process node curve. New methodologies and faster simulation performance will be necessary to avoid creating a new bottleneck.
Richard, I completely agree with you on this. For interested readers, a comprehensive list of the reasons for GLS and related material is available here: whatisverification.blogspot.in/.../gate-level-simulations-necessary-evil.html
Until recently, designers have focused mostly on static stuck-at-1 and stuck-at-0 defects. At 45nm, Vachon noted, delay faults begin to become important; at 28nm and 20nm, delay faults dominate the defects that customers see. Delay faults (or transition faults) manifest as "slow to rise" or "slow to fall" behavior. "The tests required to detect those kinds of defects are complex, and they require at-speed test clocking," Vachon noted. "This drives the need for special test clocking IP during DFT insertion."
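The distinction matters because a static stuck-at fault can be caught by a purely Boolean comparison, while a delay fault only shows up under at-speed timing. The hypothetical sketch below (a made-up three-input circuit, not a real fault simulator) models just the stuck-at case:

```python
# Hypothetical fault-simulation sketch for a tiny circuit
# y = (a & b) | c, with the AND-gate output stuck at 0.
# A delay fault, by contrast, would need at-speed timing
# and is invisible to this kind of Boolean comparison.

def good(a, b, c):
    return (a & b) | c

def faulty(a, b, c):
    return 0 | c           # AND-gate output stuck at 0

def detects(vec):
    # A test vector detects the fault if the outputs differ
    return good(*vec) != faulty(*vec)

assert detects((1, 1, 0))       # this pattern exposes the fault
assert not detects((1, 1, 1))   # c masks the faulty node here
```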