Today the IEEE announced the release of IEEE 2416-2019, a standard for unified power models. Last week, I talked to Jerry Frenkil of Si2, the Silicon Integration Initiative, to get the details.
As it happens, Jerry joined VLSI Technology a couple of years after me in the early 1980s. He was on the East Coast and I was in San Jose, so we couldn't work out exactly when we met, but it was at least 25 years ago. Jerry has been working in power for what seems like forever: after VLSI, he was one of the founders of the EDA company Sente, which did early power analysis. Far too early, as it happened. Sente reached the market before most users needed to do power analysis at all. Far more EDA startups have failed over the years from being too early than from being too late. (That would make a good topic for a blog post one day, but not today.) Sente went on to merge with Frequency Technology to form Sequence, which was acquired by Apache and is now part of ANSYS. The Sente technology actually made it through all that, and is in broad use today.
Fifteen years ago, in the mid-2000s, power became a really big deal. Two standards were developed: CPF (the Common Power Format—since Cadence was one of the originators of the standard, it was sometimes jokingly said that the C stood for Cadence), and UPF (the Unified Power Format). I will spare you the details of the standard wars back then, but ten years ago we ended up with a fully unified power format that eventually became IEEE 1801-2009. We are now up to IEEE 1801-2015. Since that is a mouthful, it is usually just called UPF. [I got an email after this was published...it is now up to 1801-2018].
The motivation for UPF (and CPF) was two-fold. First, in the early days of VLSI design there was a single power supply, with rails usually called VDD and VSS. Schematics drew just the signal connections, and the fact that every gate also needed to be connected to the power supply was implicit. That broke down once there were multiple power domains on the chip, since it was no longer obvious which power supply any particular gate should be connected to. The second issue was that power-reduction techniques such as separate power domains at different voltages, blocks that could be powered down, and DVFS (dynamic voltage and frequency scaling) added to the complexity, and also required other cells, such as level shifters, isolation cells, and retention registers, that were also not explicit in the netlist. Rather than come up with a new netlist standard that addressed all these problems, and break every EDA tool ever written, a separate UPF file contains the power intent and makes explicit all the things that are implicit or simply missing from the netlist.
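To make that concrete, here is a minimal sketch of what such a power-intent file can look like for a hypothetical design with an always-on top domain and one block that can be powered down. The domain, net, port, and instance names are invented for illustration, and the exact argument syntax varies between UPF versions; this is not an excerpt from the standard.

```tcl
# Hypothetical power intent: an always-on top domain plus a
# switchable block (all names are illustrative).
create_power_domain PD_top
create_power_domain PD_block -elements {u_block}

create_supply_net VDD    -domain PD_top
create_supply_net VSS    -domain PD_top
create_supply_net VDD_sw -domain PD_block

# The block's supply is gated by a power switch
create_power_switch sw_block -domain PD_block \
    -input_supply_port  {in  VDD} \
    -output_supply_port {out VDD_sw} \
    -control_port       {ctrl power_down_n} \
    -on_state           {on_state in {power_down_n}}

# Clamp the block's outputs while it is off, and keep register state
set_isolation iso_block -domain PD_block \
    -isolation_power_net VDD -clamp_value 0 -applies_to outputs
set_retention ret_block -domain PD_block -retention_power_net VDD
```

Note how the isolation cells and retention registers mentioned above appear here as declarative intent, not as instances in the netlist itself.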
The new standard published today, IEEE 2416-2019, complements IEEE 1801. Whereas IEEE 1801 defines the power state model for an IP block, IEEE 2416 defines the power data model. Together, they describe both the power states of interest and the static and dynamic power consumption for those states.
IEEE 2416 is a common modeling standard developed by the Si2 UPM working group, whose members were ANSYS, Cadence, Entasys, IBM, Intel, and Thrace Systems. It was subsequently standardized by the IEEE P2416 working group (the P stands for "project" and designates a standard that has been allocated a number but is still in development). The members of that group were Arm, Cadence, IBM, Intel, and Si2.
The standard is a common modeling language that targets three classes of users:
It is a system power model standard. One of the top-level aims was to make the model independent of PVT (process corner, power-supply voltage(s), and temperature) so that there would be no need to build different libraries for every corner. In a leading-edge process, there can be literally hundreds of corners of interest.
Some key attributes of the standard:
The result is a single model that supports a variety of power applications.
In this context, contributors are things that contribute to power estimation, not companies that contributed to the standard.
Contributors are a concept originally developed by IBM and used for at least three generations of its POWER processors; IBM contributed it to the standard. The basic idea is to use proxies for power and energy data: for leakage, a block can be reduced to a few leaky transistors, and for dynamic power, to an equivalent capacitor. The key is that these proxies are PVT-independent. When actual power analysis is done, the PVT values are supplied and the actual leakage and dynamic currents can be calculated.
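The contributor idea can be sketched in a few lines of Python. This is purely illustrative, not the IEEE 2416 syntax: the class names, the activation-energy value, and the specific leakage model are my own assumptions, chosen only to show the late binding of PVT values to a PVT-independent proxy.

```python
import math

class Contributor:
    """Proxies for one block: an equivalent switched capacitance for
    dynamic power and a reference leakage current for static power.
    Both are PVT-independent; PVT values are bound only at evaluation."""

    def __init__(self, c_eff_farads, i_leak_ref_amps, t_ref_kelvin=298.0):
        self.c_eff = c_eff_farads          # equivalent capacitor
        self.i_leak_ref = i_leak_ref_amps  # leakage at reference temperature
        self.t_ref = t_ref_kelvin

    def dynamic_power(self, vdd, freq_hz, activity=1.0):
        # Classic C*V^2*f switching-power model
        return activity * self.c_eff * vdd * vdd * freq_hz

    def static_power(self, vdd, temp_kelvin, ea_ev=0.35):
        # Leakage grows roughly exponentially with temperature;
        # the activation energy ea_ev is an assumed illustrative value.
        k_b = 8.617e-5  # Boltzmann constant, eV/K
        scale = math.exp((ea_ev / k_b) * (1.0 / self.t_ref - 1.0 / temp_kelvin))
        return vdd * self.i_leak_ref * scale

# Late binding: one model, evaluated at two different corners
blk = Contributor(c_eff_farads=50e-12, i_leak_ref_amps=2e-3)
for vdd, temp in [(0.9, 298.0), (0.7, 358.0)]:
    p = blk.dynamic_power(vdd, 1e9) + blk.static_power(vdd, temp)
    print(f"VDD={vdd} V, T={temp} K -> {p * 1e3:.1f} mW")
```

The same `Contributor` object serves every corner; only the voltage and temperature passed in at analysis time change.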
The diagram above shows the conventional approach, with a library for each corner, versus the contributor approach, with a single library, and late binding of the PVT values for the corners of interest. One issue with power calculation is that temperature depends on power, and power depends on temperature (in particular, leakage is very temperature-sensitive). This means that some level of iteration is required to evaluate the power, then see the temperature, then re-evaluate the power, rinse and repeat until it converges.
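The power/temperature iteration can be sketched as a simple fixed-point loop. The constants here (thermal resistance, leakage sensitivity) are invented illustrative values, not data from the standard or from any real process.

```python
def total_power(temp_c):
    """Fixed dynamic power plus temperature-dependent leakage (watts)."""
    p_dynamic = 2.0    # assumed switching power, independent of temperature
    p_leak_25c = 0.5   # assumed leakage at 25 C
    return p_dynamic + p_leak_25c * 1.05 ** (temp_c - 25.0)

def converge(ambient_c=25.0, theta_ja=5.0, tol=0.01, max_iter=100):
    """Iterate power -> temperature -> power until the temperature
    is stable; theta_ja is junction-to-ambient thermal resistance (C/W)."""
    temp = ambient_c
    for _ in range(max_iter):
        power = total_power(temp)
        new_temp = ambient_c + theta_ja * power  # hotter when burning more
        if abs(new_temp - temp) < tol:
            return new_temp, power
        temp = new_temp
    raise RuntimeError("power/temperature loop did not converge")

temp_c, power_w = converge()
print(f"converged at {temp_c:.1f} C and {power_w:.2f} W")
```

With more aggressive (but still physically plausible) constants this loop fails to converge at all, which is exactly the thermal-runaway scenario that makes the iteration worth modeling.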
One example of how effective this system-level approach can be comes from some work at Northwestern University. They were building a processor that sat on top of five layers of DRAM (with TSVs for connections), and wanted to optimize the operating frequency of the processor and the refresh rate of the memories. DRAM retention is very temperature sensitive, so they found that if they backed off the frequency of the processor (so that it ran cooler), they could use a lower refresh rate on the DRAMs, and ended up with higher overall system performance. Counter-intuitively, lowering the processor frequency made the system run faster.
Sign up for Sunday Brunch, the weekly Breakfast Bytes email.