It is nice to see when visions get closer to reality. When Cadence announced its vision for the System Development Suite back in 2011, offering a continuum of engines from virtual prototyping through RTL simulation, acceleration, and emulation all the way to FPGA-based prototyping seemed aggressive.
Or was it? Earlier this week I blogged about software enablement using virtual platforms as described by Dave Beal at DAC 2012, and today I am happy to report on Peter Ryser's presentation called "From RTL to Success with Emulation" given at the same venue. And there you have it: not so far-fetched after all, it is the System Development Suite in action! Xilinx used exactly the engines above for the development and software enablement of the Zynq platform.
Peter opened his presentation as a "story told by an engineering manager," providing an overview of how Xilinx used different prototyping/emulation approaches to develop and verify the Zynq-7000 silicon, how Cadence Palladium XP took an important role in the system verification process, what his experiences were validating entire systems before tapeout, and how Palladium is used after silicon availability.
To make the audience appreciate the verification complexity the Zynq development team was facing, Peter used the following graph:
The verification complexity is indeed quite daunting. The Zynq-7000 has a dual-core ARM Cortex-A9 subsystem, programmable logic, an operating system kernel with its high-level and low-level drivers, software libraries, and APIs that enable the applications executing on it. All this has to be connected to (and verified against) an ecosystem of software development tools, software and hardware IP, and hardware development tools with which the user adds custom logic, executing the "E" in Zynq EPP (Extensible Processing Platform).
Peter described a system verification approach combining prototyping and emulation, motivating the need for different development platforms with a graph outlining the relative cost of a bug. Normalized to the cost of a bug found and fixed in the architecture phase, a bug is 3 times as expensive in the design phase, 10 times as expensive during block development, 30 times as expensive during system test, and 100 times as expensive when found after the device is shipped to customers. So it is important to find bugs as early as possible!
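The rule of thumb above boils down to a simple multiplier table. The stage names and factors below are the ones cited in the talk; the helper function itself is just an illustrative sketch, not anything Xilinx or Cadence provides.

```python
# Relative cost multipliers for a bug found and fixed at each stage,
# normalized to the architecture phase (figures cited in Peter Ryser's talk).
BUG_COST_MULTIPLIER = {
    "architecture": 1,
    "design": 3,
    "block_development": 10,
    "system_test": 30,
    "post_shipment": 100,
}

def relative_cost(stage: str, architecture_cost: float = 1.0) -> float:
    """Cost of fixing a bug at `stage`, in units of the architecture-phase cost."""
    return architecture_cost * BUG_COST_MULTIPLIER[stage]

# A bug that escapes to the field costs 100x what it would have cost
# to fix at architecture time.
print(relative_cost("post_shipment"))  # -> 100.0
```

The steepness of that curve is the whole argument for shifting verification left onto virtual platforms and emulation.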
Next Peter described the tradeoffs of the different development environments used by Xilinx, discussing the differences in visibility, speed, and debug capability. His description started with emulation, running at about 1 MHz with great visibility. The multi-FPGA prototype used during development (the picture showed six Xilinx FPGAs) ran at about 10 MHz and was good at finding issues, but offered only OK visibility. FPGA prototyping extended the speed to about 50 MHz and was excellent at finding issues, but again only OK with respect to visibility. Validation on the actual silicon runs at the actual speed (800 MHz) and is best at finding issues, but offers pretty low visibility. Still, all the development platforms have their place, together of course with the virtual platform described in earlier blog posts.
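The speed-versus-visibility tradeoff Peter walked through can be summarized compactly. The clock speeds and qualitative visibility ratings below are transcribed from the talk; collecting them into a small script is merely an illustrative sketch.

```python
# Development platforms Xilinx used for Zynq-7000, with approximate clock
# speed (MHz) and the qualitative visibility rating given in the talk.
PLATFORMS = [
    ("Palladium emulation",  1,   "great"),
    ("multi-FPGA prototype", 10,  "OK"),
    ("FPGA prototyping",     50,  "OK"),
    ("actual silicon",       800, "low"),
]

# Listed fastest first: as execution speed rises, debug visibility falls.
for name, mhz, visibility in sorted(PLATFORMS, key=lambda p: -p[1]):
    print(f"{name:22s} ~{mhz:>4} MHz  visibility: {visibility}")
```

Seen side by side, no single platform wins on both axes, which is exactly why the flow moves bugs from the fast, opaque platforms back to the slower, transparent ones.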
With respect to emulation and Xilinx's use of Palladium, Peter described two use cases:
1. The first use model is a classic verification use model. Starting with tests on the prototyping board and identifying bugs, the design is migrated to Palladium, where the bug is reproduced and root-cause analyzed. Once the bug is found, the RTL is fixed and tested using simulation-based unit verification. Then a new verification cycle starts on the prototyping board.
2. The second use model could be called system verification or system validation. The fixes found using the use model described above are tested within the design's system environment, and sometimes alternative fixes are evaluated this way too.
With Palladium at the center of both use models, Peter summarized its value as providing high visibility into complex bugs, good trigger capabilities, and interaction with external hardware through SpeedBridge adapters. In addition, Palladium is valuable for its simulation-style waveform generation, which Xilinx hardware engineers use to find root causes and fix issues, and which is even sent to 3rd-party IP vendors for bug analysis. Furthermore, Palladium's quick compilation turnaround time allowed Xilinx to rerun bug scenarios to find the right subset of interesting signals, run experiments to find the best bug fixes, rerun complex system-level tests to verify bug fixes, and look for shadow bugs and what Peter referred to as "rats nests."
In closing, Peter talked about the successful bring-up after silicon arrived and recognized Palladium's contribution to the post-silicon success: it allowed Xilinx to re-run scenarios observed on the real silicon with the higher debug visibility offered by emulation. The DDR memory was working on the second day, Xilinx had SMP Linux booting on day 3, and the first shipment went to a customer on day 9, who had it running on day 11.
On day 18 Xilinx was able to connect a camera and see video, on day 20 the Ubuntu desktop was running, and after 35 days the evaluation board was ready and Linux ran with HD 1080p video at 60 fps. Peter attributed this success to the development approach of combining various prototyping techniques, and of course Palladium emulation was at the center of all that.
It is great seeing the vision of the System Development Suite coming alive. With Peter's and Dave's presentations, Xilinx is living proof that the vision is finding customer adoption!