
Paul McLellan

Getting to Hyperscale Data Centers: Mainframes to Minicomputers

2 Apr 2020 • 8 minute read


Over the years, the way computing is delivered has gone through a number of cycles. Like most things in life, this is mostly driven by underlying economics that change over time: what is the best way to deliver everyone the computing resources they require? One thing that is especially interesting is the constant interplay between what is centralized and what is at the edge. There is always a need to centralize at least some data, and there is always a need for something at the edge, since it is too inconvenient to always have to go to where the computer is in order to use it.

Today, we have smartphones and cloud data centers. I asked Linley Gwennap last year whether he thought there was more aggregate compute power in all our phones or in all the data centers, but he'd never seen any numbers. With more than one smartphone per human, I'd bet on the phones. DRAM, for one, is split almost exactly 50-50 between mobile and other computers.

The Dedicated Computer Era

From when computers were first invented during the Second World War up until the mid-1960s, they were usually dedicated to a single task. Neither the computers nor their operating systems were powerful enough to enable sharing, so even if a computer was used for different tasks, those had to be done one after another. Later, the first PCs would follow a very similar model: no capability to share, and usually dedicated to a single task like managing a dental office.

For example, the first computer in the world dedicated to business applications was LEO I, modeled closely on Cambridge University's EDSAC. Surprisingly, it was not built for the sort of leading-edge high-tech company you might expect. It was built for J. Lyons and Co, a British company that ran teashops, restaurants, and other food manufacturing. Some of the food products are still available, but the teashops are gone. I remember going to one "Lyons' Corner House" in London when I was a teenager. LEO stood for Lyons Electronic Office. Food companies have a unique set of problems since their products are perishable, so accurate and fast inventory control is especially important. I thought that I had a photograph of part of LEO at the Computer History Museum, but I cannot find it. When the LEO II, its successor, was decommissioned, it was offered to a number of places, including the school I went to, since we had been early to get involved in what would become known as computer science but was then still just called programming or computing. But then we discovered it was 35,000 cubic feet, taking up more space than a tennis court, so the school passed.

But in that era, most computers were like that, purchased by one company or, later, one department to do a specific task. There were no computer networks, nor anything similar. If you wanted to use the computer for some task, then you had to physically go there, or at least get a computer operator to go there for you.

In this era, IBM developed four completely separate lines of computers for different business segments, such as commercial and scientific. But although IBM was already the market leader, there were another dozen or so computer companies in the US, and many in other countries too.

The Timesharing Mainframe Era

The next change was the invention of what was called timesharing. The first such system was called the Compatible Time-Sharing System or CTSS, developed at MIT. It was first demonstrated in 1961 and started providing services to users at the university in 1963. This system had many claims to fame, but one that is still largely with us today is that it was the first system that required you to provide a password to log in.

When IBM developed the IBM 360 series of computers, it came to dominate the entire computer industry. The big computers IBM produced were known as mainframes. In the 1960s, these were all known by names such as the System/360 Model 65, with bigger numbers being more powerful than smaller numbers. Initially, these mainframes did not support timesharing, only batch processing, but IBM developed TSO (Time Sharing Option), released initially in 1971. There was an earlier, experimental timesharing system called TSS, which was only released to a few early customers and then eventually canceled when it was superseded by TSO.

I wrote about the development of the System/360 last week in the post Fred Brooks: "It Is a Humbling Experience to Make a Multi-Million Dollar Mistake", which you'll have to read to find out what that is all about, given that the System/360 was a huge success.

By the time I got to university in the early 1970s, it was the zenith of the timesharing era. One IBM 370/165, large by the standards of the day (it had a megabyte of memory), provided computing service for the entire university. The 370 number came about because it was an IBM 360 "for the seventies".

It was a mixture of what was called a cafeteria service (go to the computer with your punched cards and leave with a printout) and a timesharing service provided by terminals all over the "campus". I put "campus" in quotes since Cambridge doesn't really have a campus; the colleges and departments grew up along with the city, captured in the phrase "town and gown", so university buildings are interspersed with shops, pubs, a market square, parks, theatres, cinemas, and more. I seem to remember that undergraduates, except for those studying computer science, were not allowed interactive access and were restricted to the cafeteria service. There were on the order of 100 terminals. I tried to find out the performance of the 370/165, but there weren't any benchmarks back in that era. What I did discover is that the main memory had a 2µs cycle time, and the cache memory an 80ns cycle time. I'm guessing it would run at 5-10 MIPS in practice. It is amazing, in a way, that most of a leading university's computing needs were supplied by a computer with about the raw compute performance of a Motorola 68020 microprocessor, so equivalent to running the whole university on a single Sun-3 workstation.

Perhaps even more surprising was that the university shut the service down from 6am on Saturday mornings until Sunday night. When I was studying computer science, I would typically work all night on Fridays until the service went down at 6am (since I could get stuff done) and then take most of the weekend off (since with no access to computers there was little I could have done). That also brought home to me how much we sleep. I would go to the pub with friends, they would then go back to sleep, I would work all night, and then meet them at breakfast. I'd done a full day's (well, night's) work, and they'd done...nothing.

Today, the leading mainframe vendor is the same as it was back in this era: IBM. But back then IBM was not the only mainframe vendor; in fact, the group was known as IBM and the Seven Dwarfs. GE sold its computer business to Honeywell, and RCA sold its to Sperry. The remaining five were known as the BUNCH: Burroughs, Univac, NCR, CDC, and Honeywell. But IBM was bigger...than all of them put together.

Also, in 1970, Gene Amdahl, who had been the architect of the IBM 360 series, left IBM and started Amdahl Corporation. Its machines ran IBM 360 software directly, making them perhaps the first "clones".

The Minicomputer Era

Mainframes were big and expensive, requiring large air-conditioned facilities and constant attention from operators. Several companies, including Digital Equipment, Prime Computer, Data General, and others in Europe and Japan, realized that it was possible to design smaller, cheaper computers.

On my computer science course, we had a Data General Nova minicomputer. You had to book time on it since it was only accessible to one user at a time. Further, since it was used for our assembly language programming coursework (yes, that was a thing back then), if you got things wrong you risked crashing the whole computer and having to reboot it.

Minicomputers allowed smaller organizations, such as individual departments in companies or non-computer-science departments in universities, to have their own computers. They were initially 16-bit (or sometimes 18-bit), with memory limited to 64KB as a result. The DEC VAX-11/780 was the first 32-bit "minicomputer", a sort of halfway house between the 16-bit minicomputers and the big, expensive mainframes. It became the standard for scientific computing, and started to make significant headway against IBM in the commercial sector since it was so much cheaper and delivered similar performance.

With the invention of fast local area networks such as Ethernet, the next generation of minicomputers were linked together and became known as workstations. The products of Sun and Apollo (eventually acquired by HP) became the workhorses of IC design and software development, since they also had bitmapped graphics displays. One other aspect of workstations is that they were often provided to an individual engineer, sitting under or on his or her desk. This was when computers finally moved out of computer rooms with raised floors, extra air-conditioning, and special power. A workstation could simply be plugged into a normal power outlet as if it were a hairdryer. It also needed to be connected to the network, since workstations were not really set up to be completely standalone.

After Workstations

Tomorrow, the PC, the smartphone, the cloud.

 

Sign up for Sunday Brunch, the weekly Breakfast Bytes email.