Confused by those names? The conference is the International Reliability and Physics Symposium, IRPS. The panel is on IRDS, the International Roadmap for Devices and Systems, which is the new incarnation of ITRS (the International Technology Roadmap for Semiconductors) combined with the IEEE Rebooting Computing (IEEE RC) initiative. That's a lot of acronyms, and to make it more of a challenge, the Rs stand for different things—Roadmap, Reliability, Rebooting—and so do the Ss—Symposium, Semiconductors, Systems.
Anyway, now you have that straight, I can tell you that at IRPS there was a panel on IRDS.
First up was Bill Tonti, who gave an introduction to IRDS. He was followed by Geoffrey Burr of IBM, Matthew Marinella of Sandia Labs, and finally Eric DeBenedictis, also from Sandia Labs in his day job, and the author of the Rebooting Computing column in IEEE Computer magazine.
Bill Tonti gave an overview of IRDS. Since I happen to have just attended a keynote at another conference by the chairman of IRDS, Paolo Gargini, I already covered the basic background in The Gargini Roadmap for Semiconductors and I won't repeat it here.
Geoffrey Burr emphasized that Dennard scaling stopped 8-10 years ago. We faked it for a time with HKMG, dark silicon, and other tricks, but now we have to accept that if we don't change the assumptions about computing, the only path to more computational power is more cores. By "change the assumptions" he meant big moves away from Von Neumann architectures.
He had a slide showing what we would miss about Von Neumann computing. First, the way it is programmed makes it very adaptable compared to programming, say, a multicore machine directly. Second, there is a great cost model: you design one piece of hardware and everyone can use it, for every application, including ones that nobody even knew about. And finally, the hardware model is very modular: you can buy a microprocessor from one vendor, memory from another, peripherals from others, and it all "just works" together. He emphasized that we will miss all of this with the other programming models we are going to be forced into.
Matthew Marinella had a grand unifying theory that the entire history of computing has been driven by an exponential decrease in energy per computation, going back to ENIAC in 1946 at about 100 J per computation, to the PC at about 100 µJ, to the Sony PlayStation 3 at 1 nJ. The current record holder is the NVIDIA P100 at about 30 pJ.
This raises the obvious question of whether the trend will continue. The current feeling (and it is only a feeling) is "yes": there will be new devices and new ways to integrate them. The Landauer limit, the theoretical minimum derived from entropy principles, is about 3 zJ per bit operation, so there are still several orders of magnitude left.
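Those headroom numbers are easy to check. Here is a back-of-the-envelope Python sketch (my own arithmetic, not the panel's, using the figures quoted above) that computes the Landauer limit at room temperature and how far above it the P100's ~30 pJ sits:

```python
import math

# Energy per computation at a few points in history (figures from the talk)
energy_j = {
    "ENIAC (1946)": 100.0,    # ~100 J
    "PC era": 100e-6,         # ~100 uJ
    "PlayStation 3": 1e-9,    # ~1 nJ
    "NVIDIA P100": 30e-12,    # ~30 pJ
}

# Landauer limit: minimum energy to erase one bit at temperature T
k_B = 1.380649e-23            # Boltzmann constant, J/K
T = 300.0                     # room temperature, K
landauer = k_B * T * math.log(2)   # ~2.9e-21 J, i.e. ~3 zJ

# Orders of magnitude of headroom between today's best and the limit
headroom = math.log10(energy_j["NVIDIA P100"] / landauer)
print(f"Landauer limit: {landauer:.2e} J")
print(f"Headroom below the P100: ~{headroom:.1f} orders of magnitude")
```

kT ln 2 at 300 K works out to roughly 2.9 zJ, about ten orders of magnitude below today's best, which is the headroom that leaves room for the trend to continue.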
So looking forward, there are two key areas: new memories and new logic. The big change in new memories in the short term is storage-class memory, which has performance similar to DRAM but much larger capacity, leading to a potentially huge change in performance per watt at the system level (3D XPoint is the best known of these technologies). Then there are novel technologies that are not yet ready for high-volume manufacturing: STT-MRAM, FeRAM, ReRAM, and CBRAM. In logic, there is a search for a new switch, but so far nothing beats the silicon FinFET, although some of the weirder devices may be good for specialized applications.
Finally it was Eric DeBenedictis's turn. He is the "Rebooting Computing" guy, and he emphasized that with dimensional scaling over, we have switched from ITRS to IRDS to take in the idea of optimization at the level of the whole system. His feeling is that we need a roadmap driven by the next killer app for computing, like spreadsheets for the PC, or the internet for smartphones. The current state of the art is, perhaps, AlphaGo, but that is a very limited market (not many people want to play Go) and at 100 kW it draws far too much power. Self-driving cars are a compelling vision but not there yet. We need a killer app that comes from putting technologies like these together and shrinking the whole thing from a supercomputer down to whatever a smartphone morphs into.
The potential 100,000X improvement could come from 10X in devices (there is still some efficiency gain to be had), 10X from going to 3D packaging so that memory and the like is really close to the logic, but fully 1000X from changes in architecture, algorithms, and software.
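That multiplication is worth a quick sanity check, along with where 100,000X would leave us relative to the Landauer limit mentioned earlier. A small Python sketch (my arithmetic, built on the figures quoted in the talks):

```python
import math

# Component gains cited by Eric DeBenedictis
device_gain = 10        # remaining device efficiency
packaging_gain = 10     # 3D integration, memory close to the logic
arch_gain = 1000        # architecture, algorithms, and software
total_gain = device_gain * packaging_gain * arch_gain  # 100,000X

# Where that would land, starting from today's ~30 pJ/computation
current_j = 30e-12
projected = current_j / total_gain   # 3e-16 J, i.e. 0.3 fJ

landauer = 1.380649e-23 * 300.0 * math.log(2)  # ~3 zJ
remaining = math.log10(projected / landauer)
print(f"Total gain: {total_gain}X")
print(f"Projected: {projected:.1e} J/computation")
print(f"Still ~{remaining:.0f} orders of magnitude above the Landauer limit")
```

Even after the full 100,000X, a computation at 0.3 fJ would still sit roughly five orders of magnitude above the entropy floor, so the roadmap is not running into fundamental physics.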
If that last component seems unrealistic, look at the gains we have recently gotten from neural networks versus algorithmic approaches. He foresees a system where 20% of the hardware is Von Neumann architecture running 95% of the code, and the other 80% is new designs running only 5% of the code, programmed by PhD geeks, resulting in the equivalent of a billion lines of code of machine-learned behavior.
A final note about IRDS itself, the roadmap document: it should be finalized in November and published in December. It is free.