Recently I was at the Computer History Museum (CHM) in Mountain View. For more about the museum, see Computer History Museum History. There are so many historical computers that it is impossible to summarize. Instead, I decided to take a personal approach as a way to bring the number of computers to write about down to a manageable number.
Four of the computers there were very significant in my own early life as a computer scientist. One of the astounding things to people who work with computers today is just how little horsepower and memory the computers of the early years actually made do with. We are used to Moore's Law running forward, but if we run it backwards we get to computers that are literally millions of times slower than today's.
One remark I've heard so often that it's become a cliché: "Did you know your smartphone has more compute power than NASA had to go to the moon?"
Of course, it's true. The A10 in the current iPhone is good for about 35,000 MIPS (the numbers are vague since Apple doesn't release any). NASA reputedly had a total of 1 MIPS of compute power for the whole moon program. And, yes, it is 1 MIPS, not 1 MIP, since the S is not making it plural: the abbreviation stands for Million Instructions Per Second, which makes no sense without the S. The top speed of the Apollo missions, during the return transit between the moon and earth, was about 11km/second, or 24,000mph. An iPhone has 35,000 times more compute power than NASA had, so if we reduce the speed by the same factor we arrive at about 0.7mph. So the accurate reply to "Did you know your iPhone has more compute power than NASA had to go to the moon?" would be "Yes, and did you know that at maximum speed, the spacecraft went barely faster than a tortoise?"
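The back-of-the-envelope arithmetic is simple enough to check. A minimal sketch, using the rough figures from the text above (35,000 MIPS for the A10, 1 MIPS for NASA, 24,000mph peak Apollo speed, all approximations):

```python
# Scale Apollo's peak speed down by the iPhone/NASA compute ratio.
# All figures are the rough estimates quoted in the text, not exact specs.
IPHONE_MIPS = 35_000   # approximate A10 throughput
NASA_MIPS = 1          # reputed total for the whole moon program
APOLLO_MPH = 24_000    # approximate peak return-transit speed

ratio = IPHONE_MIPS / NASA_MIPS       # ~35,000x more compute
scaled_speed = APOLLO_MPH / ratio     # Apollo's speed, scaled down by the same factor
print(f"{scaled_speed:.2f} mph")      # well under walking pace
```

Running the numbers gives a bit under 0.7mph, which is indeed tortoise territory.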
Let's call this computer 0, since there isn't one in the Computer History Museum, and computer scientists (and hardware designers) like to count from zero.
The first computers I programmed were ICT 1900 series machines. ICT was a British computing company (it changed its name to ICL and eventually was absorbed by Fujitsu). The head of the mathematics department and I begged and cajoled to get some access to find out more about these new-fangled computer things. There was one at Cheltenham Technical College, near the school, which was where I first got inside a computer room (yes, computers all had their own rooms until the early 1980s). Then ICL themselves gave us free computer time, but we had to punch our programs on paper tape, mail them to London, and receive printouts a few days later. Then, somehow, we persuaded the local military base, which had the Harry Potter-esque name of RAF Quedgeley, to give us an hour on their ICL once a week. The math teacher's and my big achievement, besides scrounging free mainframe computer time, was persuading the school to buy an ASR-33 teletype so we could punch paper tapes.
We learnt Fortran IV, which was (and is) a language intended for scientific and mathematical calculations. However, you can get it to do more symbolic stuff, and I remember that the most complex program I wrote in those days solved checkmate-in-one-move chess problems (yes, OK, not much of a problem chess-wise). In fact, I progressed to a type of problem that doesn't exist: the other person gets to move first, and then you mate them. I left school before taking the final step, to the sort of problem that does actually appear in many newspapers (are newspapers still a thing?): "white to play and mate in two moves".
I didn't spot any ICT/ICL machines at CHM. There are some pieces of the early Manchester University machines, the story of the Ferranti Mark 1, the first commercially available computer, and LEO, the computer that Lyons' tea shops contracted to have built to run their business. Until recently, the British computer industry was not a success, although Arm and Raspberry Pi now sit on top of their own niches. As for the rest of the industry, as the famous history book 1066 and All That says in its final sentence, "America was thus top nation and history came to a ."
I went to Cambridge University, which, in that era, had its own entrance exam which took place in December. So typically people would stay an extra term at school, take the exam, and then have the rest of the year off before starting at University the following October. I spent some of that time working for a friend of my father who was a lecturer in the department of engineering at Cambridge. Most computing took place on terminals attached to a time-shared system called Atlas, but in the basement, the engineering department had their own machine, an IBM 1130. This one does appear at CHM.
The 1130 had 8K words (it wasn't byte-oriented, but we would say 16K bytes today) of memory. It had a 16-bit binary architecture, although only 15 bits were used for memory addresses, so the maximum memory, if you paid more money than we did, was 32K words. However, it had a full Fortran compiler that would run in that space (in fact, the Fortran compiler could run in 4K words). It was a multi-pass compiler, but with an architecture different from a modern multi-pass compiler (such as gcc): the source code was read into memory, and then a sequence of small programs ran in series, each manipulating the internal data until it was finally IBM 1130 machine code, which would either be written to disk (yes, we had disks!) or run. The resulting program could be larger than the memory of the computer doing the compilation (you could compile, on a 4K machine, code for a 32K machine).
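The pass-in-series idea can be sketched in modern terms. This is a toy illustration only: the pass names, the three-token "mini-language", and the mnemonics are all invented here, and bear no relation to the actual 1130 Fortran passes.

```python
# Toy pass-in-series compiler: the source is read into an in-memory state
# once, then a sequence of small passes each transforms that state in place,
# until what remains is "machine code". Each pass is small enough to fit in
# memory on its own, which is the trick that let the 1130 compile in 4K words.

def tokenize(state):
    state["tokens"] = state["source"].split()
    return state

def to_postfix(state):
    # Trivial "intermediate representation" step: input is  <op> <a> <b>.
    op, a, b = state["tokens"]
    state["ir"] = [a, b, op]
    return state

def emit_code(state):
    mnemonics = {"+": "ADD", "*": "MUL"}   # invented mnemonics
    a, b, op = state["ir"]
    state["code"] = [f"LOAD {a}", f"{mnemonics[op]} {b}", "STORE RESULT"]
    return state

PASSES = [tokenize, to_postfix, emit_code]  # run strictly one after another

def compile_source(source):
    state = {"source": source}
    for p in PASSES:
        state = p(state)
    return state["code"]

print(compile_source("+ 2 3"))  # ['LOAD 2', 'ADD 3', 'STORE RESULT']
```

The design point is that only one pass (plus the shared data) needs to be resident at a time, which is why the memory available for compilation bounds the data, not the size of the program being compiled.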
It is worth reflecting on just how small this is: 4K words (8K bytes) to compile full Fortran. That is probably 100 to 1,000 times smaller than the app on your phone that turns on the LED to use as a flashlight.
The disks were removable, as were all disks in that era. Each disk cartridge contained a single platter and could hold up to 512,000 16-bit words, so about 1MB. I think I had the rare privilege of having one of my own; most people didn't get to keep anything on the disk cartridges permanently.
To use the computer, you would book the entire machine, show up, load your disk cartridge into the second drive (the first held the system disk with the operating system and compilers), and start to work. Programs were held on either 8-hole paper tape or 80-column punched cards when prepared offline. So the first thing you would typically do is take the program you prepared offline and copy it to a file on the disk. Compile, run, debug, edit. When your time was up, you would punch out the current state of your program on paper tape and take it with you, leaving nothing on the disk. Even if, like me, you had your own cartridge, you would still do this, since there was no backup of the disks and there was always a good chance someone might overwrite your disk. Plus, of course, disk drives would occasionally fail.
When I started my courses at the university ("matriculated" in Cambridge-speak), I got to use the mainframe that supported the entire university. This was brand new: an IBM 370/165 with the then-amazing amount of 1 megabyte of memory. Since this was ferrite core memory, there were actually over eight million little ferrite cores, one per bit, in the refrigerator-sized box. There were two ways to access it: a cafeteria system or online terminals. There were no public terminals that I remember; undergraduates all had to use the cafeteria system.
The cafeteria system meant that you prepared the program and job that you wanted to run on punched cards. Then you went to the public area, where there was a card reader and a line printer. You would put your cards in the card reader and it would read them (amazingly fast, over 10 cards per second). Under the hood, your job would be queued on the disks of the mainframe upstairs, eventually run, and then the output would appear on the printer. There was no interactive debugging, so getting a program working this way was challenging.
I remember when they added the second megabyte of memory to the mainframe. This was a big deal because it was (ta da!) solid-state memory. It wasn't made by IBM, who still only had core memory. I assume that it would have been DRAM. Dennard (yes, he of the scaling) had invented the one-transistor DRAM cell, patented in 1968, and by the early 1970s companies like Intel (in those days a memory company) were shipping DRAM chips.
In my third year at Cambridge, I started to study computer science, which in that era was a one-year course. One big advantage of this was that the CS students had a room with shared interactive terminals to the IBM 370, so finally we escaped the cafeteria system endured by students of inferior subjects like natural sciences. We also had several computers owned by the department (although it was still called the "Computer Laboratory"), such as a British Modular One. Most of our programming was done in BCPL, which was a sort of Cambridge language since Martin Richards, its creator, taught our compiler courses. It was the first "curly brace" language, although on the keyboards of the day, which didn't even have lower case, the braces were $( and $). There was a derivative of it called B, developed at Bell Labs, and another language was built from that and named by moving to the next letter of the alphabet. Yes, that would be C, which must be the most successful programming language ever (and which begat C++, C#, C99, and more).
One thing I still find hard to believe was that they didn't run the system at weekends. At 6am on Saturday morning the mainframe would be shut down and restarted late on Sunday night. Since this was the main compute service for pretty much the entire university, that meant nobody could get much done at weekends. I actually developed a habit of working all night every Friday night until the system was shut down, and then taking the weekend off.
The IBM 370/165 was at the high end of the IBM 360 series of computers. This was an amazing "bet the company" gamble by IBM. In 1962, IBM had revenue of $2.5B, and CEO Tom Watson Jr decided to develop the System/360 at a cost of $5B. It was similar to Boeing's bet on the 747: if either product had failed, the future of the company would have been in jeopardy. The System/360 was not a single computer; it was a series of six computers, with the top of the range 50 times more powerful than the bottom, plus a whole new family of high-performance peripherals. It was an instant success. In fact, in that era, the words "mainframe" or "big iron" almost always meant an IBM 360 mainframe of some sort.
The creation of the operating system, OS/360, was a story in itself. In my year as a CS undergraduate, Fred Brooks, who had headed up the program, was on sabbatical at Cambridge and regularly talked about it. His book, The Mythical Man-Month, came out during that year and has become a classic, still relevant today. Here are two of his rules: adding manpower to a late software project makes it later (now known as Brooks's Law), and "plan to throw one away; you will, anyhow."
Brooks updated the book for its 25th anniversary (that is the version linked to above) and if you are interested in software engineering, and haven't already done so, then you should read it.
Part 2 of this saga next week. Oh, and remember above when I said that computers had their own rooms? Phones had their own little houses.