When low-power design experts get together, much of the conversation turns to the system level. At least that was the case at the recent Low Power Technology Summit held at Cadence Oct. 18, 2012, where audience members questioned panelists about early power estimation, power modeling, and the role of software in power management.
I had the honor of moderating this panel. Providing perspectives from industry, academia, semiconductor IP, and Cadence, panelists included the following, as shown from left to right in the photo below:
The panel came at the end of a day-long event at which Subramanian, Jarrar, and Wang also offered presentations. The day began with a keynote speech by Jan Rabaey, professor at U.C. Berkeley, who called for re-thinking of the nature of computation itself in order to save power.
Opening Statements - Low Power Challenges
Subramanian: ARM provides licensable processor cores for many different market segments. We work on physical IP challenges with customers of different sizes and geographies. Some are early adopters of advanced process nodes, and some are still working at 90nm or even 180nm.
Wang: "Low power design is really a challenge. You need to think holistically, from IP selection to system-level architecture selection to verification methodology. You need to partner with experienced [EDA] vendors."
Jarrar: The first thing Freescale SoC customers talk about is their power budget. "That kind of tells you that, for the next 10-15 years, this is going to be a power-driven industry." Increased complexity is driving the need for more alliances with IP providers and "more importantly for EDA - having a good EDA partner is absolutely crucial."
Honnavara-Prasad: "We need to be able to do early architectural exploration to figure out the best combination of processors, and the best combination of techniques to get the best power for a given performance." A lot of power management features are controlled by software, creating a need for hardware/software co-optimization.
Kelson: The mission of BWRC is to do world-class research. "Almost by definition, we are working outside the envelope of the tools that are available and the design flows that are available today. We are pushing the envelope in power, performance and cost." Much of the BWRC research is in the analog/mixed-signal area. "I think analog has a big future in low power."
Q: What do firmware and software engineers need from hardware design teams in order to optimize and manage power?
Honnavara-Prasad: Firmware engineers have a register-level view of the design. They "live at a different level of abstraction" than the hardware teams and don't understand implementation details. Hardware teams therefore need to communicate clearly at that level of abstraction, exposing the hardware's various low-power features in a form that software developers can actually use.
Random vectors are not good enough to test the system - you have to test in the context of the software. "Without actually having the software and the driver, we are unable to test for use cases, and to optimize for power in that context."
Q: What are your experiences with early power estimation? Before the layout is finished, you don't have your power numbers, yet you need some kind of estimate to proceed with package selection and power planning.
Jarrar: Power estimation "is a very interesting topic and continues to bewilder us." Technology scaling is straightforward, but with new IP blocks "you really have to run them through the wringer to find use cases for them." Even so, Freescale still finds power bugs in products. A power bug usually goes unnoticed in verification because it doesn't affect functionality.
Honnavara-Prasad: "We really don't have the tools today to allow significant architectural exploration. Tools already expect you to have a clear knowledge of how many domains you have. They will not tell you that power gating this block is not likely to be useful."
Wang: ChipEstimate.com (owned by Cadence) provides a "rich IP pool" and its software allows designers to do some very high-level tradeoffs for different process nodes and foundries. The Cadence Encounter platform offers an early rail analysis to help with power planning. Designers can then run the layout, extract parasitics, and get a better estimation. "Methodologies are available, but you need to think of them as a whole."
Kelson: BWRC uses MATLAB Simulink to create block-level power models. Simulink can also run a system-level simulation.
Q: For IP that we design internally, we don't see a very good solution for power modeling. Is there any way to quickly create power models, timing models, and noise models for complex IP?
Jarrar: "I think the next enabler of technology is having better power models. In simple terms, people think IPC [instructions per clock] is a very good term for measuring power, but this may not be true. If you run a bunch of math operations with a very high IPC, you get a certain power number. But if you have loads and stores with heavy memory access, a lower IPC could consume more current."
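Jarrar's point can be illustrated with a toy calculation: average power is instructions per second times energy per instruction, so a memory-heavy workload with lower IPC can still draw more power than an ALU-heavy one if each load/store costs more energy. A minimal sketch, with entirely hypothetical energy numbers (not silicon data):

```python
# Toy illustration: IPC alone does not predict power.
# All energy-per-instruction figures are hypothetical, for illustration only.

def avg_power(ipc, freq_hz, energy_per_inst_j):
    """Average power = (instructions per second) * (energy per instruction)."""
    return ipc * freq_hz * energy_per_inst_j

FREQ = 1e9  # 1 GHz clock

# ALU-heavy workload: high IPC, cheap instructions (hypothetical 20 pJ each)
p_alu = avg_power(ipc=2.0, freq_hz=FREQ, energy_per_inst_j=20e-12)

# Memory-heavy workload: lower IPC, but loads/stores with heavy memory
# access cost far more energy each (hypothetical 60 pJ)
p_mem = avg_power(ipc=1.0, freq_hz=FREQ, energy_per_inst_j=60e-12)

print(f"ALU-heavy:    {p_alu:.2f} W")  # 0.04 W
print(f"Memory-heavy: {p_mem:.2f} W")  # 0.06 W -- lower IPC, higher power
```

The made-up numbers only demonstrate the shape of the argument: a power model needs per-instruction-class energy, not just an IPC figure.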
Q: This discussion seems hardware-centric. Where is the software in all this? What needs to be done to make system-level simulation easier?
Honnavara-Prasad: To do system-level simulation, you must have a simplified model of every block. With that you could do hardware/software co-simulation, emulation, or fast simulation. A lot of work needs to be done on high-level modeling in SystemC. We have performance models, but they are not tied to power.
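One way to tie a performance model to power, as Honnavara-Prasad suggests is missing, is to annotate each block's states with a power figure and accumulate energy as the simulated workload moves between states. A minimal sketch, with hypothetical per-state power values:

```python
# Minimal sketch of coupling a high-level performance model to power:
# each block state carries a (hypothetical) power figure, and energy is
# accumulated over the time the simulated workload spends in each state.

STATE_POWER_W = {  # hypothetical per-state power for one block
    "off": 0.0,
    "idle": 0.05,
    "active": 0.40,
}

def energy_joules(schedule):
    """schedule: list of (state, duration_in_seconds) tuples."""
    return sum(STATE_POWER_W[state] * dur for state, dur in schedule)

# One workload phase: 2 ms active, then 8 ms idle
e = energy_joules([("active", 2e-3), ("idle", 8e-3)])
print(f"Phase energy: {e * 1e3:.2f} mJ")  # 0.40*2e-3 + 0.05*8e-3 = 1.20 mJ
```

A real SystemC model would attach such state/power tables to each block and drive the schedule from the performance simulation itself; the sketch only shows the coupling idea.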
Subramanian: "In my previous job at Marvell I was part of a modeling group delivering system-level models a few months ahead of tapeout. The purpose of that was for the firmware guys to do some co-simulation using the [Cadence] Palladium emulator. With this we could handshake between the hardware and software and come up with a system."
Wang: Power estimation depends on activity. The traditional method of using functional vectors does not give you full confidence, so you want to use the real data, the application data. "Right now, the only path to that kind of capability is the emulation platform."
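Wang's point rests on the classic dynamic-power relation P = αCV²f, where the switching-activity factor α is exactly the term that real application traces (e.g. from emulation) pin down far better than functional vectors. A minimal sketch, with hypothetical capacitance, voltage, and activity values:

```python
# Sketch of activity-based dynamic power estimation: P_dyn = alpha * C * V^2 * f.
# The activity factor alpha is what real application data nails down;
# the capacitance and voltage figures here are hypothetical.

def dynamic_power(alpha, c_farads, vdd, freq_hz):
    """Classic dynamic (switching) power for CMOS logic."""
    return alpha * c_farads * vdd**2 * freq_hz

C_EFF = 1e-9   # 1 nF effective switched capacitance (hypothetical)
VDD = 0.9      # supply voltage in volts (hypothetical)
FREQ = 1e9     # 1 GHz clock

# A pessimistic activity factor from functional vectors vs. the actual
# activity measured from a real application trace (both hypothetical)
p_vectors = dynamic_power(alpha=0.5, c_farads=C_EFF, vdd=VDD, freq_hz=FREQ)
p_trace   = dynamic_power(alpha=0.2, c_farads=C_EFF, vdd=VDD, freq_hz=FREQ)

print(f"Functional-vector estimate:  {p_vectors:.3f} W")  # 0.405 W
print(f"Application-trace estimate:  {p_trace:.3f} W")    # 0.162 W
```

Only α differs between the two estimates, which is why running real software on an emulation platform changes the power answer even when nothing about the silicon changes.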
Jarrar: "If you look at 28nm we are getting killed. If you look at all the variation you have to account for, you cannot make a design that will work and be area and power efficient across all the corner cases." Thus, worst-case corner design is no longer possible. The end result: "I think the software is going to save the hardware."
Related Blog Posts
Jan Rabaey Keynote: For Lower Power, Re-Think Computing
Si2 Talk: Why System-Level Low Power is Challenging