Technology leaders like IBM continuously seek opportunities to improve productivity because they recognize that verification is a significant part of the overall SoC development cycle. Through collaboration, IBM and Cadence identify, refine, and deploy verification technologies and methodologies to improve the productivity of IBM’s project teams.
Tom Cole, verification manager for IBM’s Cores group, and I took a few minutes to reflect on verification productivity and discuss what the future holds.
Tom, can you describe the types of products your teams verify?
Our groups develop IP cores for IBM internal and external customer SoC projects, including Ethernet, DDR, PCIe, and HSS communications cores, as well as memories. Our projects tend to be on the leading edge of performance and standards.
What are some of the verification challenges your teams face?
Our verification challenges fall into three major categories: mixed-signal, debug, and product-level productivity. All of our cores include PHYs, so mixed-signal behavior is intrinsic to their functionality, yet transistor-level mixed-signal simulation is far too slow for methodologies like OVM and UVM. OVM and UVM increase productivity by reducing the test-writing effort, but they create another challenge: debugging the enormous amount of data they produce. One part of that data set, coverage, is a critical metric for us because it lets us measure our verification progress, but its sheer volume also creates a capacity challenge.
How are IBM and Cadence collaborating to address these challenges?
Several innovative projects are underway with Cadence to address these verification challenges. For example, we have applied the metric-driven verification methodology, as documented in Nancy Pratt's video summary. Another project, running for more than a year, models analog circuits with digital mixed-signal models; preliminary results show an order-of-magnitude performance improvement, which allowed us to use the same models in both our pre-silicon verification and our post-silicon wafer test harness. As industry leaders, we also share knowledge derived from our collaboration through technical papers, such as the SystemVerilog coding-for-performance paper delivered at DVCon 2012 and the constraint optimization paper we will deliver at DVCon 2013.
What’s next for verification productivity?
Given the complexity of verification, there are several opportunities to improve productivity. One promising approach applies formal checks at the designer level to reduce the time needed to integrate the testbench with the design blocks for verification. We are currently collaborating to embed these static checks in our code so they can be reused throughout the verification cycle. That reuse can catch unintended instabilities introduced by ECO design changes earlier in the verification process, further improving our overall verification productivity.
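As a rough sketch of what such designer-embedded static checks can look like, consider SystemVerilog assertions written alongside the RTL. The FIFO module, signal names, and assertion labels below are hypothetical illustrations, not taken from IBM's cores; the point is that the same checks travel with the block from designer-level runs through full-chip regression, so an ECO that violates the interface contract is flagged immediately:

```systemverilog
// Hypothetical FIFO controller with designer-level checks embedded in the RTL.
// The assertions are reused unchanged at block, integration, and chip level.
module fifo_ctrl #(parameter DEPTH = 16) (
  input  logic clk,
  input  logic rst_n,
  input  logic push,
  input  logic pop,
  output logic full,
  output logic empty
);
  logic [$clog2(DEPTH):0] count;

  always_ff @(posedge clk or negedge rst_n)
    if (!rst_n) count <= '0;
    else        count <= count + (push && !full) - (pop && !empty);

  assign full  = (count == DEPTH);
  assign empty = (count == '0);

  // Embedded static checks: if a later ECO lets the environment push when
  // full or pop when empty, every level of verification reports it.
  a_no_overflow:  assert property (@(posedge clk) disable iff (!rst_n)
                    full |-> !push) else $error("push while full");
  a_no_underflow: assert property (@(posedge clk) disable iff (!rst_n)
                    empty |-> !pop) else $error("pop while empty");
endmodule
```

Because these properties are attached to the design rather than the testbench, formal tools can target them statically before any testbench exists, and simulation reuses them for free once integration begins.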
If you have questions for Tom or me, please post your comment and we’ll do our best to answer you quickly!
=Adam Sherer, Cadence