Chris Tice is the senior vice president and general manager for System Design and Verification at Cadence Design Systems. In this interview, he discusses upcoming and ongoing developments with transaction-level IP design, virtual platforms, embedded software verification, and system-level low power design.
Q: Chris, what does your job at Cadence involve?
A: I’m responsible for R&D product creation across our systems space. This incorporates all of the hardware assets embodied in our Incisive Palladium and Xtreme products, which are used for system-level integration, verification and software development. It also includes a newly formed team we call the Systems Software Group, which consolidates all of Cadence’s system-level assets in one team underneath Mike McNamara.
The anchor technology in our Systems Software team is our C-to-Silicon Compiler, which is just now graduating from incubation to the product team and receiving a great initial reception from customers. We’ve also brought our SystemC debug and analysis tools into that team to build a SystemC TLM [transaction level model] flow for platform creation. And we also have Incisive Software Extensions, which are extensions to our verification environment that incorporate both software debug and coverage-based software verification in a hardware/software co-verification paradigm.
Q: How would you describe Cadence’s ESL, or system-level design, strategy?
A: We have two essential tenets to our strategy. The first is our focus around raising the abstraction level of the design to the system level, and we believe we have the tools and methodologies that will enable that. Our initial focus is to build a comprehensive, TLM standards-based IP creation and verification platform, with a natural flow into our RTL implementation and verification tools.
The second tenet is to embody those [TLM] platforms in virtual platforms through a combination of our SystemC capabilities, as well as partnerships. We want to enable hardware/software co-verification and platform integration. And that foundation includes our hardware technologies, Palladium and Xtreme. Many people don’t realize that about 50% of Palladium’s use is in systems integration, verification and software.
Q: Why move up to TLM for IP creation?
A: There are a number of reasons. The primary reason is that, with a higher level of abstraction, you reduce the complexity, you reduce the number of bugs that are injected into the design, and you reduce the time needed for model development. One of the powers of our TLM IP-based strategy is the ability to express a design algorithmically in C/C++ or SystemC, and then with a standalone constraint file, manifest that rapidly into various products depending on what your design space is.
Essentially you could warehouse IP, take the same video IP block, and use it for a cell phone, a set-top box, or a home theater. You’d just be changing constraint files.
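The retargeting idea above can be sketched in plain C++. This is a hedged illustration, not actual C-to-Silicon Compiler input: the algorithm is written once, and only a separate constraint set (here a hypothetical struct standing in for a standalone constraint file) differs per product.

```cpp
#include <cstdint>
#include <vector>

// Hypothetical sketch: one algorithmic video kernel, written once, then
// "retargeted" per product by a separate constraint set. A real HLS flow
// would read a standalone constraint file; the struct is a stand-in.
struct Constraints {
    int parallel_lanes;   // area/throughput trade-off chosen per product
    bool low_power_mode;  // e.g. cell phone vs. set-top box
};

// The algorithmic source itself never changes across products.
std::vector<uint8_t> scale_2x(const std::vector<uint8_t>& line) {
    std::vector<uint8_t> out;
    out.reserve(line.size() * 2);
    for (uint8_t px : line) {
        out.push_back(px);
        out.push_back(px);
    }
    return out;
}

// Per-product constraint sets; only these would differ between targets.
const Constraints cellphone  = {1, true};   // small, low power
const Constraints settop_box = {4, false};  // wide, high throughput
```

The point is the separation: reusing the same IP block for a different product means swapping the constraint set, not rewriting `scale_2x`.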
Q: Transaction-level models can represent various levels of abstraction. What will be supported in the TLM-based IP design flow?
A: It could start at the highest TLM level with untimed models. Clearly, as you start to implement it, you need to add the notion of time, and you can add timing constraints with the constraint file. The high-level synthesis tool needs that timing information to synthesize properly. With C-to-Silicon Compiler, you can selectively optimize for area, timing or power, or simultaneously for all three. C-to-Silicon Compiler also helps you to automate the process and create multiple levels of abstraction from a single source file.
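The untimed-versus-timed distinction can be made concrete with a small plain-C++ sketch. Real flows use SystemC/TLM interfaces; this stand-in just shows the same memory-read transaction at the two abstraction levels, with a hypothetical latency figure of the kind a constraint file would supply.

```cpp
#include <cstdint>

// Sketch: the same read transaction at two TLM abstraction levels.
// (Plain C++ stand-in; a real flow would use SystemC/TLM interfaces.)

// Untimed (PV, programmer's view): purely functional, no notion of time.
uint32_t read_untimed(const uint32_t* mem, uint32_t addr) {
    return mem[addr];
}

// Timed (PVT): same behavior, but an annotated delay accumulates into a
// running clock that synthesis and verification tools can reason about.
uint32_t read_timed(const uint32_t* mem, uint32_t addr, uint64_t& time_ns) {
    time_ns += 10;  // hypothetical 10 ns access latency from a constraint file
    return mem[addr];
}
```

Both functions return the same data; only the timed one advances time, which is exactly the information the high-level synthesis step needs added.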
Q: Does high-level synthesis play a key role in the TLM flow?
A: Absolutely. We believe previous attempts to build an ESL design space lacked the ability to connect to implementation. That’s where C-to-Silicon Compiler comes in. There are a lot of companies creating models that are used for high-level platform creation, verification, and software validation. They have two Achilles’ heels: one is the lack of IP, and the second is the lack of a connection to implementation in a correct-by-construction fashion.
Q: Cadence doesn’t have its own virtual platform product. What’s the strategy in this area?
A: Because of our unique capability to connect to implementation, we believe we can form a hub from which we can connect to different virtual platform providers. So we have a partnering strategy. At CDNLive! EMEA we announced a partnership with Virtutech. With Incisive Software Extensions, we’re adding a coverage-based capability to their Simics environment.
Q: The Virtutech collaboration brings a more rigorous methodology to embedded software verification. Is that something that’s been lacking?
A: Absolutely. You used to have a classic hardware team and software team separation, but as silicon function became abstracted into software, what used to be hard coded became programmable. The population of the team changed dramatically, and the techniques applied started to become much more blurred. The rigorous hardware process is sometimes up against a less rigorous software process, so there’s a big gap in terms of adding rigor to software development, particularly hardware-dependent software.
Incisive Software Extensions can simultaneously allow you to do coverage for hardware states and software states, and it also incorporates a very friendly debug environment. You can start and stop, rewind, replay, and watch hardware states and software states change.
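The core idea behind coverage that spans hardware and software can be sketched as follows. This is a hedged illustration of the concept, not the Incisive Software Extensions API: every (hardware state, software state) pair observed during co-simulation is recorded, then compared against the pairs the verification plan requires.

```cpp
#include <set>
#include <utility>

// Sketch of cross-coverage over simultaneous hardware and software states.
// (Hypothetical names; illustrates the concept, not a real tool's API.)
class CrossCoverage {
    std::set<std::pair<int, int>> seen_;  // observed (hw_state, sw_state) pairs
public:
    // Called at each co-simulation sample point.
    void sample(int hw_state, int sw_state) {
        seen_.insert({hw_state, sw_state});
    }
    // Fraction of required (hw, sw) pairs actually observed, in percent.
    double percent(const std::set<std::pair<int, int>>& required) const {
        if (required.empty()) return 100.0;
        int hit = 0;
        for (const auto& p : required) hit += static_cast<int>(seen_.count(p));
        return 100.0 * hit / static_cast<double>(required.size());
    }
};
```

Tracking the cross product, rather than hardware and software coverage separately, is what catches interaction bugs such as a driver state that is only ever exercised against one hardware state.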
Q: How do emulation and acceleration fit into a system-level design strategy?
A: Our system-level strategy is primarily a pre-RTL focus. However, we can’t forget that eventually we will have to create RTL, and we can use our system-level front end to bring up an RTL representation inside the Palladium platform at higher levels of detail. As you get closer and closer to hardware, and you drop from PV [programmer’s view or untimed] to PVT [programmer’s view with timing] to cycle accurate, the model creation process becomes so unwieldy that many companies like ARM have discarded it and just gone directly to RTL. And now you need a platform that can give you the speed of a cycle-accurate platform without having to resort to redoing the model. That’s what the emulation capability provides.
There’s a second capability. When your legacy RTL code isn’t abstracted, you can just use these platforms to host it. Another point is that both Palladium and Xtreme are very powerful at going from block to cluster to full-chip integration.
Q: Can transaction-level models run with the emulator?
A: Sure. That’s embodied in our support for SCE-MI 1.0 and 2.0 standards as well as our participation in the Accellera committee driving those standards. Connecting the TLM environment through the SCE-MI transaction-based interface provides a high-performance environment between the two.
We will also be working on techniques that use our high-level synthesis tool to synthesize TLM interfaces directly to the [emulation] system to get high performance. So, you could synthesize the TLM interface as a transactor, and connect to a DUT [device under test] expressed as RTL hosted in an emulator.
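The transactor idea can be sketched in plain C++: the testbench side speaks in whole transactions, and the transactor expands each one into the cycle-by-cycle pin activity an RTL DUT in the emulator would see. This is a stand-in for illustration only; the real SCE-MI interface uses standardized message ports, and the types here are hypothetical.

```cpp
#include <cstdint>
#include <vector>

// One transaction-level write request from the testbench side.
struct WriteTxn {
    uint32_t addr;
    uint32_t data;
};

// One clock cycle's worth of pin values seen by the RTL DUT.
struct PinCycle {
    bool enable;
    uint32_t addr_bus;
    uint32_t data_bus;
};

// Transactor sketch: expand a single transaction into a short
// cycle-accurate sequence on the DUT's pins.
std::vector<PinCycle> to_cycles(const WriteTxn& t) {
    return {
        {true,  t.addr, t.data},  // drive address/data with enable asserted
        {false, 0,      0},       // deassert for one idle cycle
    };
}
```

Because the testbench exchanges whole transactions rather than individual pin toggles, far fewer messages cross the link to the emulator, which is where the performance of a transaction-based interface comes from.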
Q: What else are you working on in the systems space?
A: Design with power in mind is one of the most difficult challenges our customers face. We have a significant initiative to bring low-power design and power architecture up to the system level. That flow starts from power planning strategies at the chip level, before new IP has been identified, using the Chip Planning System. The constraints from this analysis are fed into C-to-Silicon Compiler, which does a high-level synthesis and power optimization of new IP created at the TLM level. Then the integrated SoC is analyzed with the embedded software and with real-world stimulus using Palladium Dynamic Power Analysis in order to verify that you hit your power budget and power goals. These three components of the flow communicate with standard .lib libraries and RTL Compiler in order to ensure a high level of accuracy.
Q: Looking out a few years, what’s your vision for system-level design and verification?
A: Integration of hardware and software IP, and verification and design complexity, are the key challenges of future designs. The vision we have is that design will be done at a high level of abstraction, with a tight connection to algorithms at the highest level and implementation at the lowest level. The bulk of the functional design will be done at this higher level, and you can move verification up to the system level. This will help our customers create, debug, verify and reuse their hardware IP faster. It will also help system engineers make their architectural tradeoffs early and simulate their hardware and software with higher performance. Finally, software developers will be able to develop their code using virtual platforms early in the design process.
It’s time for a transition. We’ve gone from transistors to gates, and gates to RTL, and there’s been a struggle in the industry over when to make the next transition. We think we’ve got the ingredients to do that: quality synthesis, quality verification, and the methodology that goes with it.