As manager of hardware development for the Graphics Competence Center at Fujitsu Semiconductor Europe, Raimund Soenning faces some tough challenges. He's responsible for the design and verification of complex graphics controller SoCs for automotive applications. His group develops graphics and video processing IP that can be used in many different configurations.
"With any graphic or data or video processing IP, you have a number of combinations of ways you can use the IP," Soenning said. "Verifying all these configurations together, along with timing on the interfaces, makes it hard for us." Even with the purchase of third-party IP, he said, there's still a need to do "integration verification."
Directed testing is not an efficient way to test many possible IP configurations. In a recent interview Soenning talked about how his group is transitioning to constrained-random test generation, what advantages it offers, and what the challenges are in adopting it.
Introducing a New Approach
Soenning has used Specman for ten years, and after joining Fujitsu in 2006 he introduced it there. Constrained-random test generation is one of the key features of Specman, which is now part of the Cadence Incisive verification suite. Soenning noted that Fujitsu uses random test generation primarily at the IP and subsystem level, while generally retaining a directed-test approach at the full-chip level.
What's the advantage of constrained-random verification? "You can get to interesting scenarios and find bugs in your design much more quickly. If the environment is ready, you can generate thousands of tests in a very short time, testing your designs in ways you would not have thought of." Also, maintenance expenses are lower. "In Specman I need to maintain maybe 5 or 10 tests for one IP. In directed testing I need to maintain hundreds of files," Soenning said.
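To make the contrast concrete, here is a minimal constrained-random sketch in e. It is not from the interview; the `packet` struct, its fields, and the constraints are hypothetical, but they show how one short description can replace hundreds of directed test files:

```e
<'
// Hypothetical stimulus for a packet-processing IP: the fields
// plus "keep" constraints define the whole legal input space once.
struct packet {
    kind   : [SHORT, LONG];             -- enumerated field
    length : uint;
    keep length in [1..256];            -- legal size range
    keep kind == LONG => length > 128;  -- tie kind to length
};

extend sys {
    run() is also {
        for i from 1 to 1000 {
            var p : packet;
            gen p;    -- the solver picks values satisfying the keeps
            print p;
        };
    };
};
'>
```

A directed test would fix one (kind, length) pair per file; here the constraint solver explores the whole legal space automatically, which is where the unexpected scenarios Soenning describes come from.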
Metric-driven verification provides another advantage. "Instead of thinking you have tested something, you really measure it," Soenning said. His group uses both code coverage and functional coverage metrics. He noted, however, that it takes a lot of experience for engineers to write a good functional coverage model.
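A functional coverage model in e records which scenarios were actually exercised rather than assumed. The sketch below is illustrative only, extending a hypothetical `packet` struct (with `kind` and `length` fields, assumed declared elsewhere):

```e
<'
extend packet {
    event pkt_generated;    -- sampled once per generated packet

    post_generate() is also {
        emit pkt_generated;
    };

    cover pkt_generated is {
        item kind;
        item length using ranges = {
            range([1..128],   "small");
            range([129..256], "large");
        };
        cross kind, length;  -- which (kind, size) combinations were hit
    };
};
'>
```

Reading the resulting coverage report, rather than a passing test list, is what "really measuring it" means in practice; writing good buckets and crosses is where the experience Soenning mentions comes in.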
Soenning also said his group has experienced a "tremendous productivity gain" with Specman verification IP for such protocols as AHB, AXI, and PCI Express. "All of the advantages of standard interfaces would go away if there was no verification IP," he said.
Making the Change
Since constrained-random test generation is now available in SystemVerilog, why use the Specman e language? Because e has been around for 10 years and is a much more mature language, Soenning said, and in an earlier comparison it appeared to require fewer lines of code than SystemVerilog. "Why go for, in our view, the second best solution, when we can go for the best solution?" he answered.
However, Soenning noted, there's a learning curve for traditional Verilog RTL designers and verification engineers; the aspect-oriented nature of e is often unfamiliar to them. He found he needed to bring in specialists to set up the test environment. Once that was done, however, engineers without prior knowledge of Specman or the constrained-random methodology came up to speed quickly and were successful. "e is a verification language that is targeted specifically for the task of verification. So, it is quite easy to use for verification," Soenning said.
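The aspect-oriented style centers on e's `extend` construct: a team can layer new constraints or behavior onto an existing struct from a separate file, without editing the original. A hedged sketch, again assuming a hypothetical `packet` struct with `kind` and `length` fields defined elsewhere:

```e
<'
// In a separate "aspect" file: bias generation for one test setup,
// leaving the original packet definition untouched.
extend packet {
    keep soft length < 64;  -- soft constraint: a default, overridable

    post_generate() is also {
        out("generated a ", kind, " packet of length ", length);
    };
};
'>
```

This is the part that tends to surprise Verilog-trained engineers: behavior is composed across files by extension rather than through a fixed class hierarchy, which is why specialist help with the initial environment pays off.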
The biggest challenge in moving to constrained-random testing, it appears, is convincing engineering teams that it's worth investing time to learn the methodology and set up the initial verification environment. But once that's done and the automated tests are running, verification engineers catch bugs they would not have caught with directed tests. One result is a quality improvement in the final product, even for IP with many configurations.