If we want truly energy-efficient servers and mobile devices, existing low-power design techniques are not sufficient, according to Jan Rabaey, professor of electrical engineering and computer science at the University of California at Berkeley. In an animated and provocative keynote speech at the Cadence Low Power Technology Summit Oct. 18, Rabaey said we need to re-think the very nature of computing itself.
Few people know more about low-power design than Rabaey. A professor at U.C. Berkeley since 1987, he's scientific co-director of the Berkeley Wireless Research Center, and his research interests include integrated wireless systems, low-energy architectures and circuits, and supporting EDA environments. He has authored three books on low-power design - "Low Power Design Methodologies," "Power Aware Design Methodologies," and "Low Power Design Essentials."
Rabaey started his talk with a look at the "emerging information-technology platform" characterized by three layers - the cloud, mobile devices, and the "swarm." The latter refers to tiny, inexpensive wireless sensor devices that might perform such tasks as environmental, traffic, or health monitoring. (One of Rabaey's research involvements is the Swarm Lab at U.C. Berkeley - a great story in itself). Rabaey observed that power is the "dominating factor" for each of these three layers, and is a "deal breaker" if we can't reduce energy usage. "The cost of computation is the cost of power, so making that cost cheaper is going to be very important," he said.
Conventional Solutions Not Enough
A number of low-power design strategies have arisen since the early 1990s, Rabaey noted.
But we're running out of options, Rabaey said. Trends such as power, performance, and energy efficiency are "flat lining." Technology scaling isn't helping much, leakage is getting worse, and variability is a growing problem. What to do? Here is the essence of Rabaey's "energy-efficient roadmap":
1. Continue voltage scaling by reducing the supply voltage.
2. Explore new computer architecture ideas.
Lowering supply voltage is the "only option" and is something we have to do, Rabaey said. For any design there is a minimum energy point - "the lowest energy you can run this thing on" - and that point is below the threshold voltage. The problem is that lowering the supply voltage makes the circuit run more slowly and increases its susceptibility to variability.
Jan Rabaey points to a perpetual sensor system with solar harvesting as an example of sub-threshold operation
There are two options for mitigating these problems. One is to "back off a bit" and run the supply voltage very close to the threshold voltage. The second option is to build circuits that are self-adapting. "Self timing is the right thing to do," Rabaey said. "When I'm done I'm going to go on to the next thing, not just sit there leaking [energy]."
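The minimum-energy point Rabaey describes comes from a tradeoff that is easy to sketch numerically: dynamic energy falls quadratically with supply voltage, but a slower circuit leaks for longer on every operation, so leakage energy per operation grows as the voltage drops. The constants below are invented for illustration, and this crude super-threshold delay model keeps the minimum above the threshold voltage; a sub-threshold leakage model would push it below, as Rabaey notes.

```python
# Toy model of the minimum-energy point: dynamic energy per operation
# falls as C*Vdd^2, but delay grows as Vdd approaches the threshold
# voltage, so leakage energy (I_leak * Vdd * delay) per operation
# rises. All constants are illustrative, not from any real process.

def energy_per_op(vdd, vth=0.3, c=1.0, i_leak=0.05, k=1.0):
    """Return (dynamic, leakage, total) energy for one operation."""
    dynamic = c * vdd ** 2
    # Crude alpha-power delay model: delay blows up as vdd nears vth.
    delay = k * vdd / (vdd - vth) ** 1.5
    leakage = i_leak * vdd * delay
    return dynamic, leakage, dynamic + leakage

# Sweep the supply voltage and find the minimum-energy point.
voltages = [0.35 + 0.01 * i for i in range(90)]   # 0.35 V .. 1.24 V
best_v = min(voltages, key=lambda v: energy_per_op(v)[2])
print(f"minimum-energy supply voltage ~ {best_v:.2f} V")
```

With these made-up constants the minimum lands well below the nominal supply but above threshold, illustrating why "backing off a bit" toward near-threshold operation is attractive: it captures most of the energy savings while avoiding the worst of the slowdown and variability.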
New Architectures - and No Margins
One new architectural idea, Rabaey said, is "energy proportional systems." The idea is that the system consumes power that is proportional to the task at hand. "If you do less, the power scales, and you would hope it scales linearly. If you don't do anything you should not consume anything." Surprisingly, perhaps, most electronic devices still consume quite a bit of energy when they're doing nothing.
Rabaey observed that the energy proportional concept is taking root in data centers, but has not yet come to mobile devices. He believes a 10X power savings is possible in many cases.
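The gap between today's devices and a truly energy-proportional one can be sketched with a simple power model. The idle and peak figures below (200 W idle, 500 W peak for a conventional server) are illustrative assumptions, not numbers from the talk.

```python
# Sketch of "energy proportional" behavior: an ideal system's power
# scales linearly with utilization and is zero when idle, while a
# typical server burns a large fixed idle power regardless of load.

def actual_power(util, p_idle=200.0, p_peak=500.0):
    """Power of a conventional server at a given utilization (0..1)."""
    return p_idle + (p_peak - p_idle) * util

def ideal_power(util, p_peak=500.0):
    """Power of a perfectly energy-proportional system."""
    return p_peak * util

# At a typical 25% utilization, the fixed idle floor dominates:
util = 0.25
print(actual_power(util))  # 275.0 W
print(ideal_power(util))   # 125.0 W
```

At low utilization - where servers and mobile devices spend most of their time - the fixed idle power is the bulk of the bill, which is why eliminating it can yield savings on the order Rabaey suggests.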
The next architectural idea, "always optimal systems," includes system modules that are adaptively biased to adjust to operating, manufacturing, and environmental conditions. The device uses sensors to measure parameters such as temperature, delay and leakage. It re-adjusts supply voltage, threshold voltage, and clock rate. "You basically build a dynamic feedback system - you measure, control and act," Rabaey said. "If you do it right you can get tremendous energy savings, because you always run at the best possible energy curve."
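The "measure, control, act" loop behind an always-optimal system can be sketched as a simple feedback controller that reads a timing-slack sensor and walks the supply voltage down to the lowest setting that still meets timing. The sensor model, target band, and step size below are invented for illustration; values are in millivolts and integer-stepped so the example is deterministic.

```python
# Minimal sketch of a sense-and-react supply-voltage controller.

def timing_slack_mv(vdd_mv, vdd_required_mv=800):
    """Stand-in for an on-chip sensor: positive slack = timing met."""
    return vdd_mv - vdd_required_mv

def control_step(vdd_mv, margin_mv=20, step_mv=10):
    """One iteration of the measure-control-act loop."""
    slack = timing_slack_mv(vdd_mv)
    if slack < margin_mv:        # too close to failure: raise voltage
        return vdd_mv + step_mv
    if slack > 2 * margin_mv:    # wasteful margin: lower voltage
        return vdd_mv - step_mv
    return vdd_mv                # inside the target band: hold

vdd_mv = 1000                    # start from a conservative 1.0 V supply
for _ in range(50):
    vdd_mv = control_step(vdd_mv)
print(vdd_mv)  # settles at 840 mV, just above the 800 mV requirement
```

A real implementation would also react to temperature, leakage, and clock-rate sensors, but the structure is the same: the margin is something the loop maintains dynamically, not a fixed worst-case allowance baked in at design time.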
Today's designs, Rabaey noted, are over-designed and constrained by timing and power margins. Tomorrow's designs will be based on a "sense and react" approach and will let designers "eliminate margins and always work at the edge." He acknowledged that "this is a very big paradigm shift, like building a bridge that's on the edge of collapsing all the time. It's kind of scary."
A concept that takes this metaphor further is "aggressive deployment," also known as "better than worst case design." It's based on the observation that worst-case conditions are rarely encountered in actual operation. With aggressive deployment, you might scale your supply voltage more than you should (according to conventional wisdom). Yes, you'll occasionally miss a timing edge, but that's okay if you can detect it and fix it.
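The energy payoff of this detect-and-fix strategy can be estimated with back-of-the-envelope numbers, all invented for illustration: scaling the supply to roughly 0.8x gives about 0.64x dynamic energy per operation (via the quadratic relationship), at the cost of an occasional timing error that must be detected and replayed, in the spirit of Razor-style error detection.

```python
# Back-of-the-envelope sketch of "aggressive deployment": run below
# the worst-case supply voltage, accept a small timing-error rate,
# and pay a replay penalty for each detected error.

def net_energy(n_ops, e_op, error_rate, replay_cost):
    """Total energy: every op costs e_op; errors cost extra replays."""
    return n_ops * e_op * (1.0 + error_rate * replay_cost)

n_ops = 1_000_000
worst_case = net_energy(n_ops, e_op=1.00, error_rate=0.0, replay_cost=0)
aggressive = net_energy(n_ops, e_op=0.64, error_rate=0.001, replay_cost=10)

print(aggressive / worst_case)  # ~0.65: a big win despite replays
```

Because real-world errors are rare, even a tenfold replay penalty barely dents the savings - which is exactly the observation that motivates better-than-worst-case design.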
A Heretical Approach to Computing
Are these ideas enough to "scale the power wall"? Not really, Rabaey said. What we really need to do is adopt non-deterministic or "statistical" computing for appropriate applications. That means the same set of input conditions might not produce the same output every time - an idea that at first glance sounds heretical in the world of computing.
"We have been so brainwashed in the Boolean, Von Neumann, and Turing based models [of computation]," he said. "We think computation has to be deterministic. Is that really true? Thinking about it might help us get to some interesting places."
Non-deterministic computing is not for every application - you don't want it for your bank account, for example. But for "perception based" applications like image processing, video, or classification, it can work and can save a great deal of energy. Rabaey noted that "anything that has to do with a machine-human interface is a statistical effort. A lot of problems don't expect correct or deterministic answers - as long as you're close, you're fine."
Rabaey showed an "Error-Resilient System Architecture" (ERSA) developed at Stanford, and pointed to a real-world application, an image classifier identifying cars from what could be a satellite image. The conventional approach would never miss a car. The ERSA approach could inject 30,000 random errors, achieve 90% accuracy, and use substantially less energy.
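The intuition behind error-resilient classification can be shown with a toy example - not the Stanford ERSA design itself, just an illustration of the principle. Many small, individually unreliable computations vote on an answer; each per-feature comparison flips its vote with some error probability, yet the aggregate decision stays accurate because the errors average out. All parameters below are invented.

```python
# Toy error-resilient classifier: majority vote over per-feature
# comparisons, each of which may randomly produce the wrong answer.
import random

def classify(features, error_rate, rng):
    """Vote True/False per feature; flip each vote with error_rate."""
    votes = 0
    for f in features:
        vote = f > 0.5
        if rng.random() < error_rate:  # injected computational error
            vote = not vote
        votes += 1 if vote else -1
    return votes > 0

rng = random.Random(42)
# 1,000 test items, each with 64 noisy features biased toward "true".
n_correct = 0
for _ in range(1000):
    features = [0.5 + 0.1 * rng.random() for _ in range(64)]
    if classify(features, error_rate=0.1, rng=rng):
        n_correct += 1
print(n_correct / 1000)  # stays near 1.0 despite a 10% error rate
```

The energy angle is that each unreliable computation can be run on cheap, aggressively scaled hardware; redundancy at the algorithm level, not worst-case margins at the circuit level, supplies the reliability.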
Can we do better? Rabaey talked about the energy efficiency of the human brain, noting that it has a computational capacity of 10^16 computations per second at 1-2 femtojoules per computation. This is two orders of magnitude beyond what we can currently do in silicon. "It turns out the brain is purely a statistical engine," Rabaey noted. By studying the algorithms the brain uses, he said, we can make silicon-based platforms run more efficiently.
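A quick sanity check on those figures: 10^16 operations per second at roughly 1-2 fJ (1e-15 J) per operation implies a power budget on the order of ten watts, consistent with the ~20 W commonly quoted for the human brain.

```python
# Checking the arithmetic behind the brain comparison.
ops_per_second = 1e16
joules_per_op = 1.5e-15          # midpoint of the 1-2 fJ range
watts = ops_per_second * joules_per_op
print(watts)  # on the order of 15 W
```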
Finally, Rabaey noted that the brain does its computing in an analog fashion, and analog is inherently statistical, never producing the exact same answer every time. But analog doesn't scale well, so if you want extremely high resolution, digital computing is a better approach. However, most applications don't need high resolution or deterministic outcomes.
In summary, this was a fascinating talk that went far beyond the usual discussion of low power. Sometimes we get so lost in design techniques and power format issues that we lose sight of the larger picture. Rabaey is bringing that larger picture to the design community, and is challenging us to think differently about IC and systems design.