Paul McLellan

75 Years of the Microprocessor

20 Jul 2021 • 7 minute read

At the recent ISCA, there was a panel session with some of the major contributors to microprocessor development during the last 50 years. They were also asked to predict how they thought the microprocessor would develop during the next 25 years, going out to 2045. Just as a reminder, the panel was:

  • Federico Faggin, designer of the first microprocessor, the 4004, and founder of Zilog.
  • John Hennessy, founder of MIPS, won the Turing Award for work on RISC (with Patterson), currently chairman of Alphabet.
  • Dave Patterson, Berkeley RISC, won the Turing Award for work on RISC, currently at Google.
  • Glenn Henry, IBM veteran, from IBM mainframes to PCs, and now Centaur Technology.
  • Kathy Papermaster, IBM designer for 26 years, head of the IBM Cell processor used in the PlayStation 3.
  • Lee Smith, a founder of Arm, joined forerunner Acorn in 1983.
  • Shekhar Borkar, 34 years of microprocessor research at Intel, now at Qualcomm.
  • Chris Rowen, co-founder of MIPS, founder of Tensilica and BabbleLabs, currently at Cisco.
  • Scott Gardner (the moderator), at IDT for 10 years, then Intrinsity (acquired by Apple).

You can read about what they said in my two earlier posts:

  • 50 Years of the Microprocessor, Part 1
  • 50 Years of the Microprocessor, Part 2

It is probably going too far to say that there were any conclusions from a discussion like that, but a few common themes did emerge. I'll take a look at them and add some of my own color.

Most Advances in Microprocessors Have Been From Moore's Law

One quote from Shekhar is that there have been no significant architectural advances since the 1970s and the IBM System/370. As it happens, that's a computer I'm very familiar with since Cambridge University ran its university-wide time-sharing service on an IBM 370/165. I heard similar sentiments at a Microprocessor Forum years ago when someone (I forget who) pointed out that all the architectural innovation up to that point (pipelining and caches; I don't think speculative execution was on the scene then) had resulted in a 6X speedup. All the rest of the hundreds-or-thousands-fold speedup since whatever you took as your base year was due to Moore's Law.
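To give a feel for the scale of that claim, here is a back-of-the-envelope comparison. This is a sketch of mine, not a figure from the panel: the clock frequencies and years are illustrative assumptions, and frequency is only a rough proxy for process-driven gains.

```c
#include <stdio.h>

/* Back-of-the-envelope scale comparison. The 6X architectural
 * figure is the one quoted above; the clock numbers are my own
 * illustrative assumptions (8086 at 5 MHz in 1978, a ~3.6 GHz
 * desktop part around 2005), using frequency as a rough proxy
 * for the speedup delivered by the process alone. */
int main(void) {
    double base_mhz     = 5.0;     /* assumed base: 8086, 1978 */
    double peak_mhz     = 3600.0;  /* assumed: ~2005 desktop CPU */
    double arch_speedup = 6.0;     /* architectural gain quoted above */

    printf("process/clock speedup: %.0fX\n", peak_mhz / base_mhz);
    printf("architectural speedup: %.0fX\n", arch_speedup);
    return 0;
}
```

Even with generous assumptions, the architectural contribution is a rounding error next to what scaling delivered.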

Shekhar's point is slightly different: all the architectural innovation in microprocessors was just taking innovation that already existed in mainframes and other non-microprocessor computers and moving it into silicon. Dave Patterson's big insight about RISC was another wrinkle. He realized that if nothing changed, microcode would be moved from mainframes into microprocessors, and that it would be a disaster, since microcode always needs a way to be updated, which is not really feasible once it is frozen in silicon. His conclusion: get rid of the microcode and just run the object code directly on the micro-engine.

But even there he was foreshadowed. The IBM 801 was the first RISC computer. It wasn't a microprocessor, since it was too early for that, but it had a very simple instruction set with only register-register operations plus a single load and a single store instruction. The eye-opening event around the IBM 801 was when John Cocke decided to retarget the 801's PL/I compiler to the IBM 360 architecture. The compiler had a built-in assumption that there was only a single load and store, so it couldn't use complicated operations like adding a value in memory to a register, the way you could on an IBM 360. Everyone assumed that the IBM 360 PL/I compiler would generate much better code than the brain-dead 801 compiler, since it could make use of the whole instruction set. They were wrong...very wrong. The 801 compiler generated code that ran three times as fast. Somehow that lesson didn't get learned, and nobody ever built a single-chip implementation of the 801 or anything like it until the first RISC and MIPS chips many years later.
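As a concrete illustration of the two styles, here is a sketch of mine (not from the panel): the same source statement with the instruction sequences a compiler might emit for each machine. The 360 mnemonics are real; the 801 mnemonics are made up for illustration.

```c
#include <stdio.h>

/* The same source statement, compiled two ways.
 *
 * IBM 360 style (memory-operand CISC): one instruction adds a
 * word in memory directly to a register:
 *     A    R1, TOTAL        add memory word TOTAL to R1
 *
 * IBM 801 style (load/store RISC; illustrative mnemonics): only
 * register-register arithmetic, plus a single load instruction:
 *     LOAD R2, TOTAL        bring TOTAL into a register
 *     ADD  R1, R1, R2       register-register add
 *
 * The 801's compiler, built around the simple instruction set,
 * scheduled such sequences so well that its code ran three times
 * as fast as the code from the native 360 compiler.
 */
int main(void) {
    int total = 40;
    int r1 = 2;
    r1 = r1 + total;    /* the statement both sequences implement */
    printf("r1 = %d\n", r1);
    return 0;
}
```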

Anyway, all the panelists agreed that we have got used to having Moore's Law. We have used that to add more functionality to the architecture, but mostly microprocessors have got more powerful because of it.

Moore's Law Will Plateau

But that was then.

Another point of agreement was that Moore's Law will plateau after 2nm, and that's what we'll be stuck with for at least a decade. Given that it takes 10-15 years for a technology to go from the lab to volume manufacturing, and given that there is nothing promising in the lab, 2nm is (roughly) as good as it will get for a long time. In fact, 2nm CMOS silicon is so good that it is next to impossible for any new technology to do better, since it needs to beat 2nm CMOS on price and performance to get traction. John Hennessy compared it to the end of the vacuum tube era. Tubes did not improve, or improved only extremely slowly. What changed the world were two massive discontinuities: first, the invention of the transistor as a replacement for the tube, and then the invention of the planar integrated circuit as a way to mass-produce transistors on a scale never seen before.

It will probably take a similar completely discontinuous development to improve beyond 2nm silicon, and some sort of biological solution seems the most likely route, although, as Federico said, "that's science fiction today so we're probably looking 30 to 40 years out".

Domain-Specific Architectures Are the Future

Everyone agreed that domain-specific architectures are the future. You can't really speed up a single core of a microprocessor much since Dennard scaling ended long ago, so you can't clock a microprocessor at a very high frequency without power going through the roof ("rocket nozzle" temperatures). You also can't put an arbitrary number of cores on a microprocessor since you can't turn them all on at once. Whatever you do, you are going to have some dark silicon. If all your cores are the same, as in a normal multi-core general-purpose microprocessor, there is just no point in putting extra cores on the chip if you can't turn them on. But with domain-specific architectures, it makes sense, since those cores can be powered down when not required, and when they are required they use a lot less power and deliver a lot more performance than general-purpose cores implementing the same functionality in software.
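A quick back-of-the-envelope sketch of why dark silicon is unavoidable; the power budget and per-core numbers below are illustrative assumptions of mine, not figures from the panel.

```c
#include <stdio.h>

/* Dark-silicon arithmetic (illustrative numbers). With Dennard
 * scaling over, supply voltage no longer drops at each node, so
 * per-core power stays roughly flat while transistor budgets keep
 * growing. At a fixed chip power budget, only some cores can be
 * powered at once; the rest must stay dark. */
int main(void) {
    double budget_w = 100.0;   /* assumed fixed chip power budget (W) */
    double core_w   = 12.5;    /* assumed power of one active core (W) */
    int    cores    = 16;      /* identical cores physically on the die */

    int active = (int)(budget_w / core_w);       /* cores we can power */
    double dark = 1.0 - (double)active / cores;  /* fraction left dark */

    printf("active cores: %d of %d (%.0f%% dark)\n",
           active, cores, dark * 100.0);
    return 0;
}
```

With these assumed numbers, half the die is dark at any moment, which is exactly why filling it with specialized cores that are each powered only when needed beats adding more identical general-purpose cores.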

Future Memory Is DRAM and Flash

The real bottleneck in a computer system today is memory access. Things are going in the wrong direction, though, with SRAM slowing in speed relative to logic, and DRAM performance hardly changing. Having switched from planar to 3D, flash still has density improvements available by adding more layers, and perhaps by moving a process generation or two in the next decade.

Waiting in the wings are other memories, but they have all proved to be disappointing. Phase-change memory, aka 3D XPoint, aka Optane, is promising, but the volumes are too small to make it economical. It is stuck in the chicken-and-egg problem of two-sided markets: nobody wants the credit cards because merchants don't take them, and merchants don't want to set up to take them since nobody has the cards. Its big advantage is non-volatility, but taking advantage of that requires huge changes to software, and so far only specialized database systems have made the effort.

So the general agreement was that future memories would just be DRAM and flash, except for niches that could support a higher price point (like embedded MRAM replacing eFlash on SoCs).

The Microprocessor in 2045 Will Look Very Familiar

One of the questions the panel was asked was to predict what a microprocessor would look like in 2045. The general agreement was that microprocessors are a mature technology, analogous to a NAND gate, and so the expectation is that, just as a 2045 NAND gate will look pretty much like a 2021 NAND gate, a 2045 microprocessor won't look much different from a 2021 one. Dave Patterson pointed out that his 1995 prediction, that a microprocessor in 2021 would look very much like a 1995 one but bigger, was pretty accurate.

Just as machine learning has come along in the last decade, and GPUs in the decade before, there may be some new domain-specific processors. But the general-purpose microprocessor will be much as it is today. If quantum computing works, it will require cryogenic cooling and so be limited to cloud access. You are not going to have a quantum computer in your smartphone.

UPDATE: Video of the panel is now available (90 minutes, get a coffee).


Sign up for Sunday Brunch, the weekly Breakfast Bytes email.