Three marks will be given for each generation.
The notes/lecture material for this section is given below. The important points are
First generation
• No operating system
• Based on vacuum tubes
Second generation
• Had an operating system
• Batch jobs introduced
• Based on transistors
Third generation
• 1965-1980 (or 1971 depending on your view)
• Multi-programming and time-sharing possible
• Spooling possible
• Based on integrated circuits
Fourth generation
• 1980 (or 1971) to present
• Start of the PC revolution, so MS-DOS, UNIX etc. were developed
• Based on (V)LSI
First Generation (1945-1955)
Like many technological developments, the first digital computers were motivated by war. During the
Second World War many people were developing automatic calculating machines. For example:
• By 1941 a German engineer (Konrad Zuse) had developed a computer (the Z3) that was used to design airplanes
• In 1943, the British had built a code breaking computer called Colossus which decoded German
messages (in fact, Colossus only had a limited effect on the development of computers as it was not a
general purpose computer – it could only break codes – and its existence was kept secret until long
after the war ended).
• By 1944, Howard H. Aiken, a Harvard engineer working with IBM, had built an electromechanical calculator that created
ballistic charts for the US Navy. This computer contained about 500 miles of wiring and was about
half as long as a football field. Called the Harvard-IBM Automatic Sequence Controlled Calculator
(Mark I, for short), it took between three and five seconds to do a calculation and was inflexible, as the
sequence of calculations could not change. But it could carry out basic arithmetic as well as more complex calculations.
• ENIAC (Electronic Numerical Integrator and Computer) was developed by John Presper Eckert and
John Mauchly. It consisted of 18,000 vacuum tubes, 70,000 soldered resistors and five million soldered
joints. It consumed so much electricity (160 kW) that an entire section of Philadelphia had its lights
dim whilst it was running. ENIAC was a general purpose computer that ran about 1,000 times faster than the Mark I.
• In 1945 John von Neumann designed the Electronic Discrete Variable Automatic Computer (EDVAC),
which had a memory that held a program as well as data. In addition, the CPU allowed all computer
functions to be coordinated through a single source. The UNIVAC I (Universal Automatic Computer),
built by Remington Rand in 1951, was one of the first commercial computers to make use of these advances.
These first computers filled entire rooms with thousands of vacuum tubes. Like the analytical engine, they
did not have an operating system; they did not even have programming languages, and programmers had to
physically wire the computer to carry out their intended instructions. Programmers also had to book
time on the computer, as each needed dedicated use of the machine.
Second Generation (1955-1965)
Vacuum tubes proved very unreliable, and a programmer wishing to run a program could quite easily
spend all his or her time searching for and replacing tubes that had blown. The mid-fifties saw the
development of the transistor which, as well as being smaller than the vacuum tube, was much more reliable.
It now became feasible to manufacture computers that could be sold to customers willing to part with their
money. Of course, the only people who could afford computers were large organisations, which needed
large air-conditioned rooms in which to house them.
Now, instead of programmers booking time on the machine, the computers were under the control of
computer operators. Programs were submitted on punched cards that were transferred onto a magnetic tape.
This tape was given to the operators, who ran the job through the computer and delivered the output to the programmer.
As computers were so expensive, methods were developed that allowed the computer to be as productive as
possible. One method of doing this (which is still in use today) is the concept of batch jobs. Instead of
submitting one job at a time, many jobs were placed onto a single tape and these were processed one after
another by the computer. The ability to do this can be seen as the first real operating system (although, as
we said above, depending on your view of an operating system, much of the complexity of the hardware
had been abstracted away by this time).
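The idea of batch processing can be pictured as follows: many jobs go onto one tape and the machine runs them back to back with no operator intervention in between. This is only an illustrative sketch (the job names and functions are made up, and real batch systems read job control cards, not Python callables):

```python
# A minimal sketch of batch processing: jobs are collected onto one
# "tape" (here, a list) and run one after another, so the machine is
# never idle between jobs waiting for an operator.

def run_batch(tape):
    """Run every job on the tape in order; return the collected output."""
    output = []
    for job in tape:            # one job after another, no gaps
        output.append(job())    # a "job" is just a callable here
    return output

# Three toy jobs queued onto a single tape.
tape = [lambda: "payroll done", lambda: "inventory done", lambda: "report done"]
print(run_batch(tape))          # all three outputs come back together
```

The point of the sketch is the loop itself: the operator's manual job-by-job handling is replaced by an automatic sequence, which is why this is often seen as the first real operating system function.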
Third Generation (1965-1980)
The third generation of computers is characterised by the use of Integrated Circuits as a replacement for
transistors. This allowed computer manufacturers to build systems that users could upgrade as necessary.
IBM, at this time, introduced its System/360 range and ICL introduced its 1900 range (this would later be
updated to the 2900 range, the 3900 range and the SX range, which is still in use today).
Up until this time, computers were single tasking. The third generation saw the start of multiprogramming.
That is, the computer could give the illusion of running more than one task at a time. Being able to do this
allowed the CPU to be used much more effectively. When one job had to wait for an I/O request, another
program could use the CPU.
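The switching described above can be sketched with a toy scheduler. Here each job is a Python generator that yields "io" when it must wait for an I/O request; the scheduler then gives the CPU to another ready job instead of letting it sit idle. This is purely illustrative (real scheduling happens inside the kernel, and the job structure here is invented):

```python
# A toy illustration of multiprogramming: when one job blocks on I/O,
# the CPU is handed to another job, so useful work continues.

def job(name, bursts):
    """A job alternates CPU bursts with I/O waits."""
    for _ in range(bursts):
        yield name + ": compute"   # a CPU burst
        yield "io"                 # then block on an I/O request

def schedule(jobs):
    """Run all jobs to completion, recording only the CPU work done."""
    trace = []
    jobs = list(jobs)
    while jobs:
        for j in jobs[:]:          # iterate over a copy so we can remove
            try:
                step = next(j)
                if step != "io":   # on "io" we simply move to another job
                    trace.append(step)
            except StopIteration:
                jobs.remove(j)     # job finished
    return trace

trace = schedule([job("A", 2), job("B", 2)])
print(trace)   # A's and B's CPU bursts are interleaved
```

With a single-tasking machine the trace would be all of A's work followed by all of B's, with the CPU idle during every I/O wait; interleaving is what makes the CPU "much more effective" here.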
The concept of multiprogramming led to a need for a more complex operating system. One was now
needed that could schedule tasks and deal with all the problems that this brings (which we will be looking
at in some detail later in the course).
In implementing multiprogramming, the system was confined by the amount of physical memory that was
available (unlike today where we have the concept of virtual memory).
Another feature of third generation machines was that they implemented spooling. This allowed reading of
punch cards onto disc as soon as they were brought into the computer room. This eliminated the need to
store the jobs on tape, with all the problems this brings.
Similarly, the output from jobs could also be stored to disc, thus allowing programs that produced output to
run at the speed of the disc, and not the printer.
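Output spooling as described above can be sketched with a simple queue: the program writes at disc speed into a spool area, and the much slower printer drains it later, so the program never waits on the printer. The names and structure are made up for illustration:

```python
# A small sketch of output spooling: program output goes to a fast
# "disc" queue; the slow printer empties the queue afterwards.
from collections import deque

spool = deque()                     # the disc area holding spooled output

def program_writes(lines):
    """The running program appends output at disc speed and moves on."""
    for line in lines:
        spool.append(line)

def printer_drains():
    """The printer later empties the spool at its own (slow) pace."""
    printed = []
    while spool:
        printed.append(spool.popleft())
    return printed

program_writes(["page 1", "page 2", "page 3"])
print(printer_drains())
```

The same idea in reverse covers input spooling: punched-card jobs are read onto disc as they arrive, and the CPU takes the next job from disc rather than from tape.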
Although third generation machines were far superior to first and second generation machines, they did
have a downside. Up until this point programmers were used to giving their job to an operator
(in the case of second generation machines) and watching it run, often through the computer room door,
which the operator kept closed but which allowed the programmers to press their noses up against the glass. The
turnaround of jobs was fairly fast.
Now, this changed. With the introduction of batch processing the turnaround could be hours if not days.
This problem led to the concept of time sharing. This allowed programmers to access the computer from a
terminal and work in an interactive manner.
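The essence of time sharing is that the CPU is handed out in short, fixed time slices, round robin, so every terminal user gets frequent turns and the machine feels interactive even though it is shared. A minimal sketch, with invented user names:

```python
# A toy picture of time sharing: successive time slices are dealt out
# to users in rotation, so no user waits long for a turn.
from itertools import cycle, islice

def time_share(users, slices):
    """Return which user holds the CPU in each successive time slice."""
    return list(islice(cycle(users), slices))

print(time_share(["ann", "bob", "cal"], 7))
# each user gets the CPU again every third slice
```

Contrast this with batch turnaround: instead of waiting hours for a whole job to come back, each user sees a response within a few slices.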
Obviously, with the advent of multiprogramming, spooling and time sharing, operating systems had to
become a lot more complex in order to deal with all these issues.
Fourth Generation (1980-present)
The late seventies saw the development of Large Scale Integration (LSI). This led directly to the
development of the personal computer (PC). These computers were (originally) designed to be single user,
highly interactive and provide graphics capability.
One of the requirements for the original PC produced by IBM was an operating system and, in what is
probably regarded as the deal of the century, Bill Gates supplied MS-DOS on which he built his fortune.
In addition, mainly on non-Intel processors, the UNIX operating system was being used.
It is still (largely) true today that there are mainframe operating systems (such as VME which runs on ICL
mainframes) and PC operating systems (such as MS-Windows and UNIX), although the edges are starting
to blur. For example, you can run a version of UNIX on ICL’s mainframes and, similarly, ICL were
planning to make a version of VME that could be run on a PC.