2. OBJECTIVES
At the end of this lesson, the students will be able to:
- trace the history of computers;
- identify the technologies that enabled computers of the past to run faster and better; and
- know how the process of abstraction works.
4. Computers are the lifeblood of our society. It
is no secret that computing technology, in one
form or another, has managed to crawl into our
everyday lives in ways we may or may not
notice, whether we like it or not. In this lesson,
we will take a look at the early history of
computers, and how each new invention
changed the landscape of what computing
technology meant in its day.
5. The word "computer" did not always
mean what it does today. Its first
recorded use was in 1613, in a book by
Richard Braithwait called The Yong Mans
Gleanings. There it said, “I have read the
truest computer of Times, and the best
Arithmetician that euer breathed, and he
reduceth thy dayes into a short number.”
6. This use of the word "computer" referred
to a job title; it was a person who did
calculations or computations. The modern
meaning of the word was first used in 1897
to refer to any type of calculating machine.
However, the original sense of the word was
still being used until the middle of the 20th
century, when human computers were
declining in number.
7. Ancient Computing Devices
As for any device that even remotely resembles modern-day
computers, we can look as far back as 2500 BC in Mesopotamia.
There, humans invented what is now considered the first
ever computer—the abacus. The abacus was a device
used to perform addition and subtraction. It had varying designs,
from simple stones in grooves in the sand to the beads-on-wire
design still commonly used today. It worked by representing each place
value with one wire or column on the abacus. Each bead on that
column represents a unit value multiplied by its place value. Beads
are moved along the wire to represent values being added or
subtracted.
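The bead-and-place-value scheme above can be sketched in code. This is a minimal illustration of the idea, not part of the lesson; the function name `abacus_value` is our own.

```python
# Model of a beads-on-wire abacus: each column is one place value,
# and the number of beads moved on that column gives the digit there.

def abacus_value(columns):
    """columns[0] is the ones column, columns[1] the tens column, etc."""
    total = 0
    for place, beads in enumerate(columns):
        # each bead is worth one unit times its column's place value
        total += beads * 10 ** place
    return total

# 3 beads on the ones wire, 2 on the tens wire, 4 on the hundreds wire:
print(abacus_value([3, 2, 4]))  # 423
```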
9. Before the era we now consider as modern
history, humans invented devices more
advanced than the abacus. One thing that
comes to mind is the astrolabe, a device used by sailors.
It determined a ship's position at sea by measuring the angle
of stars and constellations above the horizon. In more recent
history, we have also made use of slide rules, which resembled
a ruler with a slider on it and were used to perform
multiplication and division.
10. Mechanical Calculators
In 1694, Gottfried Leibniz, a German mathematician,
invented the step reckoner, a digital mechanical
calculator. It was the first calculator capable of
performing the four fundamental arithmetic operations.
According to Leibniz, “it is beneath the dignity of excellent
men to waste their time in calculation when any peasant
could do just as accurately with the aid of a machine.”
12. The design of the step reckoner turned out to be so
brilliant that it was used in calculator design for the
next three centuries. Unfortunately, even with mechanical
calculators, complex calculations could take hours
or days to produce even just a single answer. That is why most
people experienced computing through pre-computed
tables. Since mechanical calculators were expensive,
human computers assembled booklets that contained the
answers to the most common mathematical problems.
For example, instead of reaching for a mechanical
calculator to compute the square root of 1,700,458,987 and
waiting hours, even days, for the answer, you could simply
look it up in one of these booklets.
13. A form of pre-computed table, the range table, was
used in the military. Accurately firing a cannon,
which, in the 1800s, could reach over a kilometer, is hard
to do, especially with differences in weather
conditions, temperature, and atmospheric pressure.
Range tables allowed soldiers to look up the current
conditions, and the table would tell them at which angle to fire.
The problem, however, was that a new range table
had to be computed for every new design of
artillery shell or cannon. Charles Babbage
acknowledged this problem in 1822.
14. The Father of Modern Computers
Charles Babbage proposed the Difference Engine, a complex
machine that could perform operations on polynomials, i.e., the
relationships between variables. He started construction in 1823, but
unfortunately, it was never finished. But in 1991, historians used
Babbage's drawings to finish the Difference Engine—and it worked!
During the construction of the Difference Engine, Charles Babbage
came up with another idea: a computer more advanced than
the Difference Engine. This device he had in mind would be a general-purpose
computer, and he called it the Analytical Engine. Unlike the
computers that came before it, the Analytical Engine could be used
for more than just one particular computation. It could be given data,
and it would operate on that data in sequence. The results would
be printed on paper, thus storing them.
16. Charles Babbage worked with Ada Lovelace
while he built the Analytical Engine. Lovelace was a mathematician
who wrote a program for the Analytical Engine that made it
calculate Bernoulli Numbers. However, this program was never run
since the Analytical Engine was never fully constructed. But
because Lovelace wrote the very first program for a programmable
computer, she is now regarded as the very first computer
programmer. And even though Babbage's computers were never
finished in his lifetime, the concepts and mechanisms behind his
devices influenced future computers so much that they are now
regarded as revolutionary, making him known as the Father of
Modern Computers.
17. Figure 4: Ada Lovelace, the first computer
programmer
18. The American Census Problem
The United States of America holds a census every ten years to count their citizens.
However, counting the citizens took about 7 years, and by the time it was finished, the
statistics were already outdated. The problem became even bigger with the 1890 census,
which was estimated to take 13 years to complete. This is when Herman Hollerith stepped in.
He introduced a device called the tabulating machine, an electromechanical counting
machine. It used punched cards to automate the counting process. A punched card is a card
with a series of holes that represent data. The card is fed into the device, and when metal
pegs pass through the card's holes, they complete an electrical circuit that registers the data.
This device streamlined the census, which was completed in just 2 years. After this success,
Herman Hollerith founded the Tabulating Machine Company, which was later renamed
International Business Machines, or IBM, which still operates to this day.
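The hole-and-peg counting idea can be sketched as follows. This is an illustrative model only, not from the lesson; the card layout and the name `tabulate` are invented, and a real tabulating machine was far more elaborate.

```python
# Model of a tabulating machine: a hole in the card lets a metal peg
# complete a circuit, which bumps the counter for that hole position.

def tabulate(cards):
    counters = {}
    for card in cards:
        for position, punched in enumerate(card):
            if punched:  # peg passes through the hole, circuit closes
                counters[position] = counters.get(position, 0) + 1
    return counters

# Three census cards; each position could stand for some recorded trait.
cards = [
    [True, False, True],
    [True, True, False],
    [False, True, True],
]
print(tabulate(cards))  # each of positions 0, 1, 2 was punched twice
```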
20. The Electronic Age of Computing
Herman Hollerith’s tabulating machine began the shift toward
electronic computers. Computing machines moved on
from using wheels and levers to electric circuits. The big
idea behind electric circuits is that they use switches that can
close and open (cut) the circuit. This is also when we started using
the binary number system, which we will discuss in lesson
2.
21. Throughout the history of electronic
computers, you will see this basic
concept of the circuit turning on and off.
Each computer that used electricity
operated this way. They only used
different approaches and levels of
advancements in technology.
22. THE MANHATTAN PROJECT
The Manhattan Project was a
research and development project
during World War II. It was led
by the United States and
supported by the United Kingdom
and Canada.
23. Relays
One of the largest electromechanical computers ever built was the
Harvard Mark I, completed by IBM in 1944 for the Allied
Forces during World War II. It contained 765,000
components, three million connections, and five hundred
miles of wire. It was first used in simulations for the
Manhattan Project. The “brain” of this computer was
composed of relays, which are electrically controlled
mechanical switches.
24. Figure 6: An open relay. It contains a switch (shown here in red) that is
open. When electricity flows through the control wire, the electromagnet
attracts the switch and closes the circuit
25. Figure 7: A closed relay. Because of the electricity that flows
through the control wire, the electromagnet attracts the metal
switch (shown here in red) and closes the circuit, allowing current to flow
26. Relays bridged the transition from mechanical to electronic
computers. Relays were controlled by electricity, but they still had
moving parts. In a relay, there is a control wire that determines
whether a circuit is opened or closed. The control wire connects to
a coil of wire inside the relay. When current flows through the coil,
an electromagnetic field is created, which, in turn, attracts a metal
arm inside the relay, snapping it shut and completing the circuit. You
can think of a relay like a water faucet. The control wire is like the
faucet handle. Open the faucet, and water flows through the
pipe. Close the faucet, and the flow of water stops. A relay does
the same thing, just with electrons instead of water. The
controlled circuit can then connect to other circuits, or to
something like a motor, which might add or subtract one from a
count on a gear, like what Hollerith’s tabulating machine did.
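The relay behavior described above can be modeled as a simple electrically controlled switch driving a counter. This is a minimal sketch with names of our own choosing; it mirrors only the open/closed logic, not the physics.

```python
# A relay as an electrically controlled switch: energizing the control
# wire closes the circuit, which here advances a counter, much like the
# motor-and-gear arrangement described above.

class Relay:
    def __init__(self):
        self.closed = False

    def control(self, current_on):
        # current through the coil pulls the arm shut; no current releases it
        self.closed = current_on

count = 0
relay = Relay()
for signal in [True, True, False, True]:
    relay.control(signal)
    if relay.closed:  # circuit complete: the controlled side gets power
        count += 1
print(count)  # 3
```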
27. Unfortunately, the mechanical arm inside the
relay has mass, and therefore cannot move instantly
between on and off states. A good relay in the 1940s
might be able to flick back and forth fifty times per
second. That might seem pretty fast, but it’s not fast
enough to be useful for solving large, complex
problems. The Harvard Mark I could do 3 additions or
subtractions per second. Multiplications took 6
seconds, and divisions took 15. And more complex
operations, like trigonometric functions, could take
over a minute.
28. Another problem with relays is wear and
tear. Mechanisms that have moving parts
will wear over time. Some things break
entirely, and other things start getting sticky,
slow, and unreliable. And as the number of
switches grows, the probability of failure
increases too. The Harvard Mark I had
roughly 3,500 relays.
29. These machines were also huge, dark, and
warm. They attracted insects. In September
1947, operators of the Harvard Mark II
pulled a dead moth from a malfunctioning
relay. Grace Hopper noted: “From then on,
when anything went wrong with a computer,
we said it had bugs in it.” And that’s where
we get the term computer bug.
30. Figure 8: The very first computer bug: a dead moth
found inside the Harvard Mark II
31. Vacuum Tubes
It was clear that we needed a faster and more reliable
alternative to electromechanical relays. Fortunately, that
alternative already existed. In 1904, John Ambrose Fleming
developed a new electrical component called a thermionic valve,
which housed two electrodes inside an airtight glass bulb—this was
the first vacuum tube. One of the electrodes could be heated, which
would cause it to emit electrons. The other electrode could then
attract the electrons to create the flow of electricity, but only if it was
positively charged—if it had a negative or neutral charge, the
electrons would no longer be attracted across the vacuum so no
current would flow. An electronic component that permits the one-
way flow of current is called a diode, but what was really needed
was a switch to help turn this flow on and off.
32. Figure 9: samples of vacuum tubes. Some had metal
caps on them to support higher voltage
33. Luckily, shortly after, in 1906, American inventor Lee
de Forest added a third “control” electrode that sits
between the two electrodes in Fleming’s design. By
applying a positive charge to the control electrode, it would
permit the flow of electrons as before. But if the control was
given a negative charge, it would prevent the flow of
electricity. So by manipulating the control wire, one could
open or close the circuit. It is very similar to the relay, but
importantly, vacuum tubes had no moving parts. This
meant that there was less wear and tear, and more
importantly, they could switch thousands of times per
second.
34. Figure 10: Lee de Forest’s design for vacuum tubes with three electrodes. The heated
electrode emits electrons, which are then captured by another electrode. The control
electrode turns the circuit on or off: a positive charge on it lets the electrons flow across,
while a negative charge repels them and stops the current.
35. These triode vacuum tubes would become the basis of
radio, long distance telephone, and many other electronic
devices for nearly half a century. However, vacuum tubes
weren’t perfect. They were fragile, and could burn out like
light bulbs. But they were a big improvement over
mechanical relays. Also, initially, vacuum tubes were really
expensive—a radio might use just one, but a computer
might require hundreds of vacuum tubes. But by the 1940s,
their cost and reliability had improved to a point where they
became feasible for use in computers, or at least by people
with deep pockets, like the government. This marked the
shift from electromechanical to electronic computing.
36. The Electronic Numerical Integrator and
Computer (ENIAC) was the first computer that
was general-purpose, programmable, and
electronic all at the same time. It used
vacuum tubes, and was designed by John
Mauchly and J. Presper Eckert. It was so fast,
it could perform 5,000 ten-digit additions, or
subtractions, per second. It was operational for
ten years, and was estimated to have done
more arithmetic than the entire human race up
to that point.
37. By the 1950s, vacuum
tubes were reaching their limits.
To reduce cost and size, as well
as improve reliability and speed,
a radical new electronic switch
would be needed.
38. Figure 11: The ENIAC, photographed by the US
Army, from 1947-1955
39. Transistors
In 1947, Bell Laboratory scientists John Bardeen,
Walter Brattain, and William Shockley invented the
transistor, and with it, a whole new era of computing was
born! The physics behind transistors is pretty complex,
relying on quantum mechanics, so we’re going to stick to
the basics. A transistor is just like a relay or vacuum tube -
it’s a switch that can be opened or closed by applying
electrical power via a control wire.
40. Figure 12: The transistor. It consists of two electrodes that conduct
electricity and a semiconductor that can conduct and stop current flow using
its physical characteristics. This made transistors small, even microscopic.
41. Typically, transistors have two electrodes
separated by a material that sometimes can conduct
electricity, and other times resist it – a
semiconductor. In this case, the control wire attaches
to a “gate” electrode. By changing the electrical
charge of the gate, the conductivity of the
semiconducting material can be manipulated,
allowing current to flow or be stopped – like the water
faucet analogy we discussed earlier. Even the very
first transistor at Bell Labs showed tremendous
promise – it could switch between on and off states
10,000 times per second.
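Whether built from relays, vacuum tubes, or transistors, the common abstraction is a controllable switch. As an illustrative sketch (our own, not from the lesson), two such switches wired in series behave like an AND gate: current reaches the output only when both controls are on.

```python
# A transistor-style switch: current passes only while the gate is
# charged and the semiconductor conducts.

def switch(gate_charged, current_in):
    return current_in and gate_charged

def and_gate(a, b):
    # wire two switches in series and feed current into the first one;
    # the output is live only if both gates let the current through
    return switch(b, switch(a, True))

print(and_gate(True, True))   # True
print(and_gate(True, False))  # False
```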
42. Further, unlike vacuum tubes, which were made of glass with
carefully suspended, fragile components, transistors are made of
solid material and are known as solid-state components. Almost
immediately, transistors could be made smaller than the smallest
possible relays or vacuum tubes. This led to dramatically smaller
and cheaper computers, like the IBM 608, released in 1957 – the
first fully transistor-powered, commercially available computer. It
contained 3,000 transistors and could perform 4,500 additions, or
roughly 80 multiplications or divisions, every second. IBM soon
transitioned all of its computing products to transistors, bringing
transistor-based computers into offices, and eventually, homes.
43. Today, computers use transistors that are smaller than 50
nanometers in size – for reference, a sheet of paper is roughly
100,000 nanometers thick. And they’re not only incredibly small,
they’re super fast – they can switch states millions of times per
second, and can run for decades. A lot of this transistor and
semiconductor development happened in the Santa Clara Valley,
between San Francisco and San Jose, California. As the most
common material used to create semiconductors is silicon, this
region soon became known as Silicon Valley. Even William
Shockley moved there, founding Shockley Semiconductor, whose
employees later founded Fairchild Semiconductor, whose
employees later founded Intel – the world’s largest computer chip
maker today.