Frank Schirrmeister, group director of product marketing for the Cadence System and Software Realization Group, has been managing and marketing system-level design technology for over 15 years. He's a widely published and respected author on the topic, with a monthly blog at the Chip Design Magazine site, a column in Electronic Design, and regular contributions to the Cadence System Design & Verification blog.
In this Q&A interview he discusses how he got into system-level design, the pace of adoption today, the promises and challenges of virtual prototypes, the progress of high-level synthesis, and the range of prototypes needed for hardware/software integration.
Q: Frank, what does your job at Cadence involve?
A: I'm responsible for product management of the System Development Suite, which includes virtual prototyping, RTL simulation, emulation, and rapid prototyping. I started in August, but it's not the first time I've been at Cadence. Cadence recruited me and brought me to the U.S. in 1997 to run the Felix [system-level design] initiative.
Q: How long have you been working with system-level design, and how did you get drawn into it?
A: I'm in my 15th year. I was a chip designer originally. In 1993, I designed an HDTV encoding system for Deutsche Telekom [German telecommunications provider]. It had 6 or 7 chips and they were very complex for the time. I was involved in the architectural aspects, and I was fascinated with how these motion estimation chips fit into a systems context.
Prior to that time, I was an EE student at the Technical University in Berlin. I did some embedded software development for musical instruments to finance my time at the university. I studied what they called Technical Computer Science, and microelectronics was one of my focus majors. So, I started on the software side and then I did the hardware side as well. I even did full custom layout for one of the [HDTV] chips.
Cadence recruited me in 1997 because they were looking for people who had chip design and software development experience.
Q: How do you define system-level design or ESL [electronic system level]? Are these still useful terms?
A: I think they are useful terms to describe the portion of the design flow before you get to verified RTL. Because we have been talking about it for so long, ESL has a little bit of a negative connotation today, so I refer to it as system-level design. To make the term useful, the key question is what you define as a system. You need to define the boundary of what you mean by "system" carefully. The designer's system - like a system on chip [SoC] - becomes just a component of the phone, which is a component of the network.
Q: System-level design, or ESL, was certainly slow going for a number of years. Are you seeing more interest and adoption today?
A: I certainly do. When I started in this area in 1997, leading-edge customers saw it as a problem they would eventually have to resolve, but you could really only get traction with those leading-edge customers. This has changed. About five years ago the message started to resonate with people more. Today when we go to a customer, he no longer has to be convinced he has a problem. The question becomes, how can Cadence help?
System-level design is still far away from mainstream, however. I would say it's still in early stages with lots of room for growth.
Q: Are virtual prototypes (or virtual platforms) coming into more widespread use? What obstacles are standing in the way to broader adoption?
A: Adoption is growing, and we have more and more people considering it, but I wouldn't call it mainstream yet. Most projects are still running without virtual platforms. One obstacle is that the people who have all the information to build the platform are hardware people, but the software developers are the users who get most of the value. It's kind of like doing something so your neighbor's life is easier.
Another obstacle is that existing methods for software development haven't totally failed yet. It's hard to develop on a board, and boards arrive late in the development cycle, but people try to get by with traditional techniques because the cost of building the virtual platform is still pretty high. It's expensive today because it's not a by-product of the traditional design flow.
Q: Both virtual prototypes and high-level synthesis use SystemC transaction-level modeling (TLM), but the models for high-level synthesis require much more detail. Can we bridge the gap and connect virtual prototypes to the implementation flow?
A: That's the next step that we're eager to get to. We want to enable a flow that takes the same architectural intent and leads into both software enablement and hardware implementation with high-level synthesis. Virtual platforms are expensive today because it's hard to build new models, and there is a lack of existing models for the re-used IP. If high-level synthesis could create, as a by-product, descriptions of new blocks that can be used for software development, that would be great. Then, if every IP provider would provide models, the problem would become mostly a TLM IP assembly issue.
Q: What are you seeing in terms of adoption for high-level synthesis?
A: What I find impressive about high-level synthesis is that it is silently being adopted by more and more customers, which means that vendors have to be involved in fewer of the projects directly. It's really in production. We have customers who are approaching, or have already reached, 50 or more tapeouts with it. Datapath is the traditional sweet spot, but they're also using it for control. High-level synthesis is not only for datapath any more.
Q: What's the role of verification in facilitating the move from RTL to TLM?
A: Verification is a key driver in bringing about a jump to the next level of design entry. If you can verify at a higher level of abstraction, you can run more cycles and also different verification tasks - like application verification - because execution is faster, albeit with less detail. It's kind of comparable to what happened with the move from gate-level to RTL simulation.
Today we're seeing a lot more people using TLM verification, and they're also bringing in software. There are two trends. First, a lot of testbenches themselves run at a high level of abstraction, and customers verify at the TLM level first and then add more detail. Second, people are using embedded software-driven verification, where their testbench is actually embedded software running on a processor they would have in the system. So they're running directed tests in software but the intent is to verify the hardware.
Q: Besides virtual prototypes, what kinds of prototypes are needed for hardware/software integration?
A: As I wrote in a recent EE Times article, you need prototyping throughout the flow at different stages. With virtual prototypes you get good speed for software debugging, but you don't get the accuracy you need for hardware verification. So you first bring in RTL simulation, which is in a sense a prototype, but it doesn't execute software very well. That's when you bring in emulation. Now higher-speed software development and execution become feasible, you have better debug on the software side, and you still have good debug insight into hardware.
But at some point that is still too slow as well, so you may want to bring in an FPGA-based prototype where you have even more speed. But what you have to take into account for FPGA-based prototypes is that every change you make on the hardware side takes longer. So you want to use it at a more stable phase of the RTL. At Cadence, we provide technology that allows you to reuse what you've done in the Palladium emulator to make the FPGA-based prototype bring-up easier.
Q: How does the Cadence System Development Suite address the challenges of hardware/software integration?
A: The idea behind the suite is that you can connect the different [prototyping] engines and enable an easy transition between the engines. For example, the Incisive RTL verification environment and the Virtual System Platform are based on the same technology, so bringing together RTL models and virtual prototypes is a very natural undertaking. The same is true with Palladium XP emulation and the Rapid Prototyping Platform, where we are using the same front end and the same flow. Validating that the final FPGA prototyping netlist is functionally correct was traditionally a time-consuming issue. In the System Development Suite you can bring the netlist back to Palladium for verification, where you have great debugging.
Q: Finally, any predictions for 2012? Is this the year of the move from RTL to TLM?
A: In 2012 I think we will make significant steps towards that. We are working towards using the same architectural intent for implementation and software enablement and verification. I think we'll see more hybrid approaches where hardware-accurate RTL in an emulator or rapid prototyping system is executed along with TLM models. Will we completely get there [TLM] in 2012 and be done with it? Probably not, but I think we will make significant progress.