Paul McLellan

50 Years of the Microprocessor, Part 2

14 Jul 2021 • 14 minute read

At the recent International Symposium on Computer Architecture (ISCA) there was a special session to celebrate the 50th anniversary of the microprocessor, with eight experts who were influential across those decades. The panelists' introductory statements were covered in my first post, 50 Years of the Microprocessor, Part 1.

For a reminder of who the panelists were, see Part 1.

Shekhar Borkar

At the end of part 1 of this two-part blog series, Shekhar Borkar had introduced himself. But he also had a slide and attempted to answer some of the questions the panel had been asked to address.

What were the major challenges to the growth of the microprocessor? Shekhar thinks they were not technical but "unnecessary distractions" such as ISA wars and RISC versus CISC. Progress mostly came down to this: technology provided the transistors, architects used and abused them, and software advances made microprocessors user-friendly.

Advice for aspiring students? Ignore the fashion and hype and do the research where your passion lies.

The challenges as the microprocessor heads toward 75 years and beyond? It will not grow anymore. The opportunities are in using microprocessors wisely in systems. The microprocessor won't look much different in 2040; it is already mature. It's like a NAND gate. We don't need to keep redesigning it. "In the '70s and '80s I designed NAND gates, I don't anymore. I just use them from the library."

These are what I see as the inflection points over the last 30 or 40 years. On the X-axis are the inflection points as seen by the designer; on the Y-axis, the attributes of the design. First, we added caches and pipelines, which quadrupled the area to double the performance, with no real impact on energy efficiency. Then superscalar, where we tripled the area for double the performance and a loss in energy efficiency. Out-of-order and speculation again doubled the die area for some increase in performance, but energy efficiency decreased. Deep pipelines were a bad inflection point: frequency increased, die size doubled, performance increased a little, energy efficiency decreased. So we went back to shallower pipelines, since frequency was no longer king; it was multi-thread performance that mattered. But, in truth, there has been no significant architectural advance since the 1970s with the IBM 370. Not nothing, but nothing significant.

Scott asked for reactions. "Are we going to have to rename this conference? Do you agree with Shekhar?"

Dave

The inflection point going forward is domain-specific architecture. The ideas that didn't make it into general-purpose computing made it into domain-specific architectures. Ideas are being recycled, particularly systolic arrays. Machine learning right now is, at its heart, matrix multiply. It is basically a two-dimensional computer architecture. When you use these cloud services, you are using machine learning, and most of the time it is running on a dedicated accelerator that is a big matrix multiply unit using systolic arrays to great effect. (A minimal sketch of the systolic idea appears after his remarks.)

There are lots of opportunities for computer architects and software people to make big gains collaborating, and if you don't do that then things aren't going to get any faster. So I think there is plenty of opportunity for architectural innovation in the domain-specific architecture space. You are not stuck with a million-line C++ program, trying to figure out how to make it go faster. These things are written in higher-level languages, which makes it much easier to optimize below. A lot of the high-level stuff is machine learning, and it is a force of nature. The big things in my career were the microprocessor, the internet, and the World Wide Web. Machine learning may prove to be something that significant. It's a revolution that's going through the whole software stack. There are plenty of opportunities for architects. There's plenty of room at the top.
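Dave's description of machine learning as, at heart, matrix multiply mapped onto systolic arrays can be made concrete with a small sketch. The Python below is only an illustrative simulation of an output-stationary systolic schedule (the function name and the skew bookkeeping are mine, not taken from any particular accelerator): each processing element owns one output element, and the rows of A and columns of B are skewed by one cycle per position so that matching operands arrive at the right element at the right time.

```python
# Illustrative output-stationary systolic schedule for C = A @ B.
# PE(i, j) accumulates C[i][j]; row i of A is skewed by i cycles and
# column j of B by j cycles, so the k-th operands meet at cycle i + j + k.
def systolic_matmul(A, B):
    n, k, m = len(A), len(A[0]), len(B[0])
    assert len(B) == k, "inner dimensions must match"

    C = [[0] * m for _ in range(n)]   # one accumulator per processing element
    total_cycles = n + m + k - 2      # time for all skewed operands to drain

    for t in range(total_cycles):
        for i in range(n):
            for j in range(m):
                step = t - i - j      # which k-index reaches PE(i, j) at cycle t
                if 0 <= step < k:
                    C[i][j] += A[i][step] * B[step][j]
    return C

if __name__ == "__main__":
    A = [[1, 2], [3, 4]]
    B = [[5, 6], [7, 8]]
    print(systolic_matmul(A, B))      # [[19, 22], [43, 50]]
```

The win in hardware is that each operand is passed between neighboring processing elements rather than being re-fetched from memory for every multiply, which is where such accelerators get much of their energy efficiency.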

John

I agree with David. If the software model changes, then the hardware model has to change, since the ISA is the interface between hardware and software. Those early microprocessors, we programmed them in assembly language. You couldn't use anything else. There wasn't enough memory in the machine. You had to squeeze every bit out. Then of course the RISC inspiration came from a high-level language viewpoint and the emergence of Unix. Now you have to have CUDA to program GPUs, and things like TensorFlow to program these other machines. If the software model continues to change, then we need to be willing to think about how the hardware model has to change.

Glen

I think the discussion of what is a microprocessor is silly. The best microprocessor around is the IBM Z15, in a mainframe. But it's a microprocessor. I do architectural research, and we sit around wondering how to use the silicon. This is all going to change, since the security risk in our current designs is very high. The proper approach to security is going to change some of the things we've been doing as architects, and if we don't do something it's going to be a real problem.

Chris

I think it is really important to work backward from application developers. A modern app developer doesn't even think about the OS interface, let alone the ISA. They are thinking about cloud services. The average line of code doesn't get executed very much. Some lines of code matter a lot, and so architecture matters a lot, and we need to work out how to drive the silicon. But we are driven by how to create new services, which, even if they don't execute a lot, still matter a lot. So we will have hybrid architectures, since the average line of code compiles and runs transparently and it doesn't much matter where it runs. We will have to paper over the fact that it is heterogeneous, since some parts really matter a lot and you will push the envelope there on efficiency.

Scott pulled everyone back to looking at inflection points in the last 50 years.

Federico

To me, the important inflection points are the ones that move all aspects of technology in the same direction. The personal computer, especially the IBM PC, which was open, created a frenzy of ideas that drove the market for 40 years. Next was the iPhone, a PC plus the internet, plus all the other stuff we learned in the meantime. Now we see machine intelligence, AI, and robotics, which require their own specialized architectures and hardware, and the technology that could be essential to this is FPGAs, which allow you to configure the hardware for a particular application. Someone mentioned security, and that is fundamental. Cybercrime will only get worse, and we really need to think through how to create computers that are unhackable.

Kathy

As an inflection point, technology is driving the whole thing. So CMOS in general. But nobody mentioned the role of design automation along the way and what it has allowed us to do.

Dave

I'm going to move on to the predictions. I actually prepared a slide, and since I prepared it, let me show it.

At the 25th anniversary of the microprocessor, I was asked by Scientific American to make some predictions. So I can actually look back and see what I predicted. People were predicting all sorts of weird things, and I said that the computers of 2020 would be very similar to the computers of "today" (meaning 1995). I speculated that chips would get really big, with more than one processor on a chip. And that is true. But the Scientific American people thought that was such a boring prediction that they added a sidebar with single-electron computers, reversible logic, and other odd stuff. The problem is that when people try to predict the future, they try to say something entertaining. But when I try to predict the future, I try to find the things that have to happen, that you can't imagine not happening. Then you have a better chance. So my bullets are on this slide.

By 2045, RISC-V will have become as significant for microprocessors as Linux is for operating systems. People like open source, and RISC-V is good enough.

We've got to improve security with hardware. I've given up waiting for my software colleagues to fix it. Ransomware is not an acceptable IT tax. Even the President of the USA is aware of ransomware. I think it is up to hardware people and computer architects to attack this. We can't wait for 100% bug-free software.

If quantum computing works it will be for the cloud, not the edge. But edge will still be important. For that, we need something beyond 2nm CMOS.

My big prediction for 2045: "The stored program concept is too elegant to be easily replaced. I believe future computers will be very like machines of the past even if they are made of different stuff. I don’t think the microprocessor of 2045 will be startling to people from our time." That is the same prediction as I made in 1995, except with the year changed.

We might plateau for a few years, since we need something beyond 2nm, and whatever it turns out to be has to beat CMOS for manufacturability and cost. We've got used to designing chips where the technology is moving faster than anything else in the world. We could get stuck like engineers in many other fields and have to build things out of the same material as before.

Federico

That's what I think as well. I think we are about to plateau. I don't see much hope below 2nm. With current technologies we do many things, we don't just do one thing. Where can we get a technology that can work at room temperature and can do better than silicon? I think biology is the answer, although it is more science fiction today. But that is 30 to 40 years from now. I don't think we will have something viable that can compete with silicon any sooner than that.

Shekhar

I am in agreement, but nothing is in sight for me that will supersede CMOS. If there were something in sight, it could happen in 15 years. But there is nothing in sight. In the past, we used architecture analysis to use the transistors that were freely made available. Now we need to think backward as to what architectural advantages can be used in the future, given that the technology is not improving. Some of the things that Chris mentioned, I agree with, such as domain-specific architectures. We need to think about systems and not get fixated on monolithic microprocessors.

Scott put John on the spot. "Professor John Hennessy. There are journalists here who will write it down. What are your predictions?"

John

I think we are going to find ML is even more useful than we thought. We will do massive amounts of training, and we will write less code. Just look at Google’s natural language system. It is 1/100th the number of lines of code and it is more accurate than the previous system. I’m afraid I agree with Federico that there may be an extended period here with no advance in the technology. We will have to innovate without the approach we are so used to relying on, namely Moore’s Law. Think about what happened when we had tubes before the invention of the transistor. There was a long plateau, tubes did not get better fast and we had a lot of issues with reliability and other things. We could have a similar kind of situation now at the tail end of Moore’s Law.

Lee, what do you think? In 25 years' time you'll still be at Arm?

Lee

Haha, I have plans and they don't involve being at Arm. The power has come full circle. The way we prolong Moore's Law is by exploiting more special-purpose hardware. The proof is in your pocket: camera, video encoding, radios, GPU, and so on. That's all there to get energy efficiency, all of those functions that go into your phone. However, in the base station they are all done by software-defined radio. That allows the deployment of upgradable radio standards. Those base stations have to last not two years, like your phone, but 25 years.

Chris, go ahead, what will it look like?

Chris

I think it is true that Moore's Law has decelerated and we are not going to get the stuff we got used to. We still have a lot of headroom with respect to parallelism, and machine learning takes a lot of advantage of parallelism. We'll see more parallelism, some of it between different units. We will see a continued increase in performance overall, but it will be built on a mix of general-purpose and special-purpose architectures. I'm an incrementalist. It won't look so different in 25 years, but the number and variety of special-purpose architectures will be huge, and the general-purpose architectures will have unbelievably long lifetimes due to their ecosystems.

Kathy, are you going to come out of retirement and do a DNA computer?

Kathy

No! But this discussion made me think of some other things. If we really think this is the technology we are going to be stuck with, so to speak, well, I'm not sure it is going to be completely stuck; we'll wait and see. I'd like to just say it is exciting to see more and more heterogeneous computers. Vector compute, as pioneered by Cell, for example, which was really difficult to program at the time.

Glen, I am anointing you a visionary.

Glen

I don’t think there is a need for more technology. The things we talked about, AI, compression, cryptography, are all very regular and you can do a lot with the technology we have. Pouring more transistors into the general-purpose processor is bad, in my opinion. That’s where the security leaks of today, and I include bugs, come from. I used to give a talk, I gave it for seven years at Microprocessor Forum, on the evilness of out-of-order execution. Of course, that was before we did it! We sit around and say for our next chip we could use 1B transistors per core, how would we do that. Oh, let’s have a 40-stage pipeline, blah blah blah. The general-purpose processor part of the world doesn’t need more technology. The performance limit today, as we all know, is memory access. That’s where we need silicon.

With that, Lizy John came back. She had been monitoring the question channel. There were lots, with only time to answer a few.

Question: "The First DRAM was introduced just before 4004 and it has stuck around for 50 years without much change. What about emerging memories?"

David: It is really hard for a new memory technology to win. You have to find a niche, win on cost-performance, and use that to grow the volume. Flash memory made it. For the new non-volatile memory technologies, many are promising but they have to get to volume. It will be great if one of them makes it, but I don't know which or why.

Chris: We have always used some memories in different process technologies, so they sit further out in the memory hierarchy.

John: One of the problems we will have is that SRAM is slowing down compared to logic, and that is going to mean a bigger gap despite the fact that we keep growing caches. Memories will increasingly be a problem.

Shekhar: There are three rules in memory: $/bit, $/bit, and $/bit. The only possible technologies are DRAM and flash.

Glen: There are software issues with non-volatile memories. We need portable, attractive, software-level models of how persistence works. It is still a research topic, so seven to ten years from being a deployed technology. It needs to be right the first time to attack the volumes of DRAM and flash. (A minimal sketch of the ordering problem behind persistence models appears after these answers.)

Federico: It is hard to beat one transistor per bit. You can't have half a transistor per bit. Spintronics looked like it might make it, but it couldn't even come close.
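To make Glen's point about persistence models a little more concrete, here is a minimal, hypothetical sketch in Python. It uses an ordinary memory-mapped file as a stand-in for persistent memory (the file name, record layout, and valid flag are all invented for illustration); the point is only the ordering discipline a portable persistence model has to guarantee: the data must be durable before the flag that declares it valid.

```python
# Minimal sketch of the persistence-ordering problem: data first, flag second.
# An mmap'ed file stands in for persistent memory; on real hardware the
# flushes would be cache-line writebacks and fences, but the discipline is
# the same.
import mmap
import os

PATH = "log.bin"            # hypothetical one-record log file
SIZE = mmap.PAGESIZE

def append_record(payload: bytes) -> None:
    with open(PATH, "r+b") as f:
        m = mmap.mmap(f.fileno(), SIZE)
        m[1:1 + len(payload)] = payload   # 1. write the record body ...
        m.flush()                         # 2. ... and make it durable first
        m[0:1] = b"\x01"                  # 3. only then set the "valid" flag
        m.flush()                         # 4. and make the flag durable too
        m.close()
    # A crash between steps 2 and 4 leaves the flag at 0, so a reader simply
    # ignores the torn record. Reverse the order and a crash can expose a
    # valid flag pointing at garbage, which is exactly the class of bug a
    # persistence model has to rule out.

if __name__ == "__main__":
    if not os.path.exists(PATH):
        with open(PATH, "wb") as f:
            f.write(b"\x00" * SIZE)       # pre-size the file for mmap
    append_record(b"hello, persistent world")
```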

Question: "Do you feel various application technologies are rushing through technology nodes without extracting the value from each node?"

John: That’s what Intel is doing right now, slowing down and getting more out of it.

Question: "Why are we not investing more in 3D, especially monolithic?"

Shekhar: If you look at monolithic 3D, its value proposition is slim. One reason is getting the heat out; the other is that it is so cumbersome to design. As a niche it works, stacking regular structures like DRAM and flash, but not logic.

David: There is obviously a revolution in packaging. The future is more like chiplets, perhaps. If there is a very efficient packaging technology out there it could be a game-changer.

Glen: We are measuring things wrong; benchmark performance is not the critical issue, security is. We all know how to make a more reliable processor, we're just afraid to do it.

And with that, the 90 minutes were up. Lizy thanked Scott and the panel.

Conclusions

This post is insanely long. So I will summarize what I think are the "conclusions" and add my own opinions in a separate post.

 

Sign up for Sunday Brunch, the weekly Breakfast Bytes email.