At CDNLive Taiwan last year, MediaTek presented their experiences with Perspec System Verifier. This post assumes that you know the basics of Perspec. If you need an introduction, take a look at my earlier posts:
The basic flow of Perspec follows five steps:
MediaTek used Perspec with the goal of improving verification efficiency on a CPU subsystem. They wanted to increase the number of mixed and random C test cases that they could produce per day, to reduce debugging time, and to increase coverage. These are very general goals that almost any verification team shares.
The above diagram shows how MediaTek split the flow between the test modeling engineer and the test writer. One important aspect of Perspec is that it can also add randomization so that, for example, tests are repeated many times, on different cores and with the memory caches in different states. That way, issues are not hidden by the limited amount of testing, or by determinism in the test that would not be present in real hardware.
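To make the idea concrete, here is a minimal Python sketch of that style of randomization: one abstract test action is expanded into many concrete test cases, each bound to a random core and cache state. This is purely illustrative and is not Perspec's actual API; the core count, cache states, and the `dma_copy` action are assumptions for the example.

```python
import random

# Hypothetical sketch (not Perspec's API): expand one abstract test
# action into many concrete cases, each on a random core and with the
# caches in a random state, as described in the post.

CORES = [0, 1, 2, 3]                          # assumed 4-core CPU subsystem
CACHE_STATES = ["clean", "dirty", "invalidated"]

def generate_test_cases(action, repetitions, seed=0):
    """Produce `repetitions` randomized variants of a single test action."""
    rng = random.Random(seed)                 # seeded so a failure can be replayed
    cases = []
    for i in range(repetitions):
        cases.append({
            "name": f"{action}_{i}",
            "core": rng.choice(CORES),        # which CPU runs the test
            "cache": rng.choice(CACHE_STATES) # initial cache state
        })
    return cases

cases = generate_test_cases("dma_copy", repetitions=100)
```

Seeding the generator is the key design point: the runs are random enough to escape the determinism of hand-written tests, yet any failing case can be reproduced exactly for debug.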
The first set of results (above) shows the increased efficiency of using Perspec compared with manual directed tests. The biggest differences are functional coverage (up from 2.5% to 100%) and the number of test cases per hour (up from 1 to 100). There is some up-front cost: it took 10 days to set up the Perspec environment versus 5 for the manual environment. Part of the reason for the increased coverage is simply that Perspec generates much more complex test cases that exercise more of the design, up from 0.5ms of simulated time to 50ms, an increase of 100 times. In a typical design, there are many issues that will not be discoverable if the longest test case only runs for 0.5ms.
The second set of results shows the power of Perspec for hardware/software co-verification. Software/hardware interface errors that Perspec found in 10 minutes took anywhere from hours to many weeks to find manually.
One thing people generally discover with Perspec is not just that they can do what they would normally do a little faster, although that is true. It is that tests so extensive and complex that you would never attempt to create them by hand become possible.
MediaTek summarized their experience in a few bullet points on their final slide:
More details are on the Perspec System Verifier page.