When it comes to pushing power as low as it can go, the answer is part technological, part cultural.
That was the conclusion of a panel of experts at the annual Electronic Design Processes Symposium in Monterey on April 24.
The panel, moderated by Cadence blogger Richard Goering (pictured far left), was titled, “Can Power Go Any Lower, Or Have We Almost Hit the Floor, Especially for IoT Devices?” Experts from UC San Diego, Atrenta, Cadence, eSilicon, FINsix, and Synopsys wrestled with the question for nearly an hour by the Pacific Ocean.
Their consensus on how the industry will design ultra-low-power devices and systems in the coming years was surprising in its breadth.
Reconsidering the Old Ways
Most of the panelists agreed, for example, that new ways of thinking are required to attack the power problem. These range from reconsidering how we optimize networking protocols that are two and three decades old to abstracting our thinking about power above the transistor or device level.
“The approach (should start) at the very high level, optimizing networking protocols that were written 20-30 years ago that have changed little, and then see if you could add low-power states into that and integrate that into the platform,” said Jim Kardach (second from right), longtime engineer with Intel who is now director of integrated products at FINsix.
“You need to architect the things to ‘do nothing’ efficiently and work efficiently and then you can rely on the tools,” he said. For example, he said a team could take an HCI controller that had been operating at 200mW down to 50mW in a subsequent generation. But even at 50mW, “It’s still going to be polling the rest of the system” and consuming power, he said. “The guys who do that sort of thing need to be taken out to the shed and taught proper architecture about doing nothing properly,” he said.
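Kardach’s point about polling can be made concrete with a little arithmetic. The numbers below are hypothetical, not from the panel, but they sketch why a block architected to sleep between events beats one that polls continuously, even after its active power has been cut:

```python
# Illustrative sketch of "do nothing efficiently": a controller that polls
# continuously pays its active power all the time, while one architected to
# sleep between events only pays for brief wake-ups. All figures are
# hypothetical examples, not measurements from the panel.

def average_power_mw(active_mw: float, sleep_mw: float, duty_cycle: float) -> float:
    """Average power of a block that is active for `duty_cycle` of the time."""
    return active_mw * duty_cycle + sleep_mw * (1.0 - duty_cycle)

# A 50mW controller that polls is effectively always active.
polling = average_power_mw(active_mw=50.0, sleep_mw=50.0, duty_cycle=1.0)

# The same controller, re-architected to wake on events 1% of the time
# and idle at an assumed 0.1mW sleep power otherwise.
event_driven = average_power_mw(active_mw=50.0, sleep_mw=0.1, duty_cycle=0.01)

print(f"polling: {polling:.1f} mW, event-driven: {event_driven:.3f} mW")
```

Under these assumptions the event-driven version averages well under 1mW, a far bigger win than the 200mW-to-50mW active-power reduction alone.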
The Zen of Nothing
This notion of “do nothing” efficiently was a topic of Kardach’s presentation earlier in the EDPS agenda and it’s something panelist Prasad Subramaniam (fourth from left), vice president of design technology and R&D at eSilicon, picked up on.
“We have memories today you can bring up when you need to read and write them and then shut them down,” he said. “You need to take into consideration the cost of bringing it up and down, but the same concept can be applied to logic.”
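Subramaniam’s caveat about the cost of bringing a block up and down boils down to a break-even calculation: gating only saves energy when the leakage avoided during the idle period exceeds the energy spent on the sleep and wake transitions. A minimal sketch, with hypothetical numbers:

```python
# Break-even analysis for power gating a memory or logic block, per the
# panel's point that shutdown has a cost. The leakage and transition
# figures below are hypothetical, chosen only to illustrate the math.

def break_even_idle_time_s(leakage_power_w: float, transition_energy_j: float) -> float:
    """Minimum idle duration for which gating the block saves net energy."""
    return transition_energy_j / leakage_power_w

# Assumed figures: 2 mW of leakage saved while gated, 40 uJ spent
# entering and leaving the shutdown state.
t_min = break_even_idle_time_s(leakage_power_w=2e-3, transition_energy_j=40e-6)

print(f"Gate the block only for idle periods longer than {t_min * 1e3:.0f} ms")
```

With these assumed figures, idle windows shorter than about 20 ms cost more energy to gate than they save, which is why the decision has to be made per block and per workload.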
Bernard Murphy, CTO with Atrenta (third from left), said one way of thinking differently is to look at novel approaches such as neuromorphic architecture, a concept developed by Carver Mead that attempts to mimic neuro-biological systems in the human body.
“We try to drop power by a factor of two in what we do,” he told the audience. “With neuromorphic you’re talking about dropping it by three orders of magnitude. That’s a huge difference. You can’t, of course, apply it to everything, but it’s a very interesting thing to consider.”
For their part, two other EDA representatives, Steve Carlson (far right) of Cadence and Patrick Sheridan (third from right) of Synopsys, said power gains can be wrung from process-wide optimization, from library selection through the back end of the design, using dynamic simulation and virtual prototyping to illuminate potential tradeoffs earlier in the design process.
One of the technological challenges with power design today is that techniques don’t scale or necessarily translate well to different companies and different market applications.
“It’s important to understand the techniques that the apps processor guys use are very advanced. I am afraid that a lot of what they’re doing is out of reach of the vast majority of design teams,” Murphy said. “I don’t really see it being scalable, especially to IoT devices.”
It was a question from the audience that vectored the panel discussion into a completely new and fascinating direction. The questioner pointed out that engineers usually are trained and work in silos and this reality is at odds with a holistic, system-level view of how to drive to the power floor.
“You might be the best circuit designer in the world, but if you don’t know how your design is going to be used by someone else or how it applies to the big picture you’re involved in, you’re not going to be able to design in the most efficient manner,” the questioner said.
Where does the responsibility for improving this situation lie? Some look to academia, but there is no silver bullet solution there, according to Andrew Kahng, the noted engineering and computer science professor from UC San Diego. Kahng said he and his colleagues encourage students not to over-specialize, but much work remains.
“We don’t really teach or learn enough about cross-silo engineering issues and (how to understand) system optimization,” Kahng said. “We have to have corporate cultures, engineering cultures, and educational cultures that do that.”
Some panelists see either glimmers or bright sparkles of hope, though.
Synopsys’ Sheridan said a project that fails in the market because of a power design issue becomes “a huge motivator for next time.” “There are competitive dynamics in the marketplace that drive cultural change,” he added.
Cadence’s Carlson pointed out one key area where formerly high, rigid silo walls have begun crumbling: mixed-signal verification.
“We need anarchy,” he quipped. “You do see new engineering job titles. You see guys who are mixed-signal verification engineers. They specialize in the verification of a pulse.”
Subramaniam, picking up the theme, argued for something that is perhaps easier said than done, given the enormous complexity of designs today and the size of design teams.
“There needs to be a single owner for the product, who understands the complete picture. It’s the responsibility of this person to make sure the appropriate individuals who are specialists are communicating with one another. Maybe some of these problems can be broken into smaller problems.”
(Photo courtesy of Naresh Sehgal)