Simulation acceleration and emulation technology has commonly been used to run large blocks and system-level configurations faster, and to verify software against a very fast and accurate RTL hardware model. With current system design capacities in the multi-million-gate range, simulating these designs at 100 to 100,000 times the speed of a software simulator already provides a huge benefit to system verification teams across the globe.
But is “running faster” the only metric by which you measure acceleration/emulation benefit?
I think that acceleration speed will continue to be an important factor, but not the only factor moving forward. The true metric is how fast you can reach the “completion point” of your verification — in other words, knowing that all bugs have been “flushed out” before the product is out the door. To accomplish this goal, you need to accelerate not only your simulation runs, but your entire system-level verification process.
While the most prevalent verification metrics considered in addition to acceleration speed have been fast compile and efficient debug, some new metrics need to be looked at: Are your runs on the accelerator and emulator getting you to the desired “completion point”? Do you apply the right tests to your accelerator and emulator resources to verify system-level scenarios in the most effective way? Do you run your accelerator in the most effective way?
Verification acceleration towards your “completion point” entails good planning of your verification modeling strategy, effective management of your simulation and acceleration/emulation resources and use models, and good verification coverage metrics telling you that your desired completion point has been reached. These, in my mind, make the main difference between “simulation acceleration” and well-planned “verification acceleration” with the “completion point” end-goal in mind.
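As a minimal sketch of the "coverage metrics tell you the completion point has been reached" idea, the snippet below aggregates per-feature coverage bins against their goals and checks whether an overall sign-off threshold has been crossed. The bin names, goal counts, and the 95% threshold are all invented for illustration — real flows would pull these numbers from the simulator's or emulator's coverage database.

```python
# Hypothetical sketch: deciding whether a verification "completion point"
# has been reached from coverage metrics. Bin names, goals, and the
# threshold are illustrative assumptions, not a real tool's API.

def coverage_closure(bins_hit: dict, bins_goal: dict) -> float:
    """Return aggregate coverage as the fraction of per-bin goals met."""
    total_goal = sum(bins_goal.values())
    # Cap each bin at its goal so over-hitting one bin can't mask another.
    total_hit = sum(min(bins_hit.get(name, 0), goal)
                    for name, goal in bins_goal.items())
    return total_hit / total_goal

def completion_point_reached(bins_hit: dict, bins_goal: dict,
                             threshold: float = 0.95) -> bool:
    """True once aggregate coverage crosses the sign-off threshold."""
    return coverage_closure(bins_hit, bins_goal) >= threshold

# Example: three feature areas with invented goals and hit counts.
goals = {"link_states": 10, "burst_lengths": 8, "error_injection": 6}
hits  = {"link_states": 10, "burst_lengths": 8, "error_injection": 3}

print(round(coverage_closure(hits, goals), 2))   # 21 of 24 goals met
print(completion_point_reached(hits, goals))     # still short of 95%
```

The point of the cap in `coverage_closure` is that raw run speed can inflate hit counts on easy bins without moving the hard ones; the completion point depends on closing every bin, which is exactly why "running faster" alone is not the metric.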
I never understood why users placed such high importance on raw speed in benchmarks between emulators. I saw things like "emulator1 runs my design at 700 kHz and emulator2 runs my design at 900 kHz, so it makes sense to pick emulator2".
Look at my blog entry called "Verification Hierarchy of Needs". It's easy to measure RUN. Historically most EDA spending in verification goes towards RUN. The DEBUG part requires a combination of tools and human thinking so it's hard to measure things like productivity and how long it takes to find and fix a bug.
VERIFY is even harder to measure because it requires a human to think about more than one bug — to think about all the bugs that might be in the design.
To achieve true "verification acceleration" it seems we need a way to measure the productivity of tools and methodology that are combined with human thinking.