William Stallings
Computer Organization
and Architecture
8th
Edition
Chapter 2
Computer Evolution and
Performance
ENIAC - background
• Electronic Numerical Integrator And
Computer
• Eckert and Mauchly
• University of Pennsylvania
• Trajectory tables for weapons
• Started 1943
• Finished 1946
—Too late for war effort
• Used until 1955
ENIAC - details
• Decimal (not binary)
• 20 accumulators of 10 digits
• Programmed manually by switches
• 18,000 vacuum tubes
• 30 tons
• 15,000 square feet
• 140 kW power consumption
• 5,000 additions per second
Von Neumann Architecture
The term Von Neumann architecture, also known as
the Von Neumann model or the Princeton
architecture, derives from a 1945 computer architecture
description by the mathematician and early
computer scientist John von Neumann and others,
First Draft of a Report on the EDVAC. This describes a
design architecture for an electronic digital computer with
subdivisions of a processing unit consisting of an
arithmetic logic unit and processor registers, a control unit
containing an instruction register and program counter, a
memory to store both data and instructions, external
mass storage, and input and output mechanisms. The
meaning of the term has evolved to mean a
stored-program computer in which an instruction fetch and
a data operation cannot occur at the same time because
they share a common bus. This is referred to as the
Von Neumann bottleneck and often limits the performance
of the system.
Von Neumann Architecture
• The design of a Von Neumann architecture is simpler
than the more modern Harvard architecture which is
also a stored-program system but has one dedicated
set of address and data buses for memory, and
another set of address and data buses for fetching
instructions.
• A stored-program digital computer is one that keeps
its programmed instructions, as well as its data, in
read-write, random-access memory (RAM). Stored-
program computers were an advancement over the
program-controlled computers of the 1940s, such as
the Colossus and the ENIAC, which were programmed
by setting switches and inserting patch leads to route
data and to control signals between various functional
units. In the vast majority of modern computers, the
same memory is used for both data and program
instructions.
von Neumann/Turing
• Stored Program concept
• Main memory storing programs and data
• ALU operating on binary data
• Control unit interpreting instructions from
memory and executing
• Input and output equipment operated by
control unit
• Princeton Institute for Advanced Studies
—IAS
• Completed 1952
Structure of von Neumann machine
IAS - details
• 1000 x 40 bit words
—Binary number
—2 x 20 bit instructions
• Set of registers (storage in CPU)
—Memory Buffer Register
—Memory Address Register
—Instruction Register
—Instruction Buffer Register
—Program Counter
—Accumulator
—Multiplier Quotient
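The 40-bit IAS word layout above can be sketched in code: each word holds either one 40-bit binary number or two 20-bit instructions, and each instruction splits into an 8-bit opcode and a 12-bit address. The split is the standard IAS format; the sample opcode and address values below are made up for illustration.

```python
# Pack/unpack the IAS 40-bit instruction word:
# [ left: 8-bit opcode | 12-bit address | right: 8-bit opcode | 12-bit address ]
def pack_word(left, right):
    """Pack two (opcode, address) pairs into one 40-bit word."""
    def pack_instr(opcode, address):
        assert 0 <= opcode < 2**8 and 0 <= address < 2**12
        return (opcode << 12) | address          # 20-bit instruction
    return (pack_instr(*left) << 20) | pack_instr(*right)

def unpack_word(word):
    """Split a 40-bit word back into its two (opcode, address) pairs."""
    left, right = word >> 20, word & 0xFFFFF     # upper/lower 20 bits
    return ((left >> 12, left & 0xFFF), (right >> 12, right & 0xFFF))

# Hypothetical instruction pair: opcode/address values chosen arbitrarily.
word = pack_word((0x01, 0x0FA), (0x05, 0x100))
assert word < 2**40                              # fits the 40-bit IAS word
assert unpack_word(word) == ((0x01, 0x0FA), (0x05, 0x100))
```

The 12-bit address field is exactly enough to reach all 1000 words of the IAS memory, which is why the machine's word and instruction sizes fit together so neatly.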
Structure of IAS –
detail
Harvard Architecture
• The Harvard architecture is a
computer architecture with physically separate
storage and signal pathways for instructions and data.
The term originated from the Harvard Mark I relay-
based computer, which stored instructions on
punched tape (24 bits wide) and data in electro-
mechanical counters. These early machines had
limited data storage, entirely contained within the
central processing unit, and provided no access to the
instruction storage as data. Programs had to be
loaded by an operator; the processor could not boot
itself.
• Today, most processors implement such separate
signal pathways for performance reasons but actually
implement a Modified Harvard architecture, so they
can support tasks like loading a program from
disk storage as data and then executing it.
Harvard Architecture
Harvard Architecture
In a Harvard architecture, there is no
need to make the two memories share
characteristics. In particular, the word
width, timing, implementation
technology, and memory address
structure can differ. In some systems,
instructions can be stored in
read-only memory while data memory
generally requires read-write memory. In
some systems, there is much more
instruction memory than data memory so
instruction addresses are wider than data
addresses.
Harvard Architecture
Under pure von Neumann architecture the CPU
can be either reading an instruction or
reading/writing data from/to the memory. Both
cannot occur at the same time since the
instructions and data use the same bus system.
In a computer using the Harvard architecture,
the CPU can both read an instruction and
perform a data memory access at the same time,
even without a cache. A Harvard architecture
computer can thus be faster for a given circuit
complexity because instruction fetches and data
accesses do not contend for a single memory
pathway.
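The contention argument above can be made concrete with a toy bus-cycle count: assume every instruction costs one bus transaction to fetch, and loads/stores cost one more for the data access. The cycle costs and the sample instruction stream are illustrative assumptions, not measurements from a real CPU.

```python
# Toy model: count bus cycles for a short instruction stream under the
# two memory organizations.
def bus_cycles(program, harvard):
    total = 0
    for op in program:
        needs_data = op in ("LOAD", "STORE")
        if harvard:
            # Separate instruction and data buses: the fetch and the
            # data access overlap, so every instruction costs one cycle.
            total += 1
        else:
            # Shared bus: the data access must wait for the fetch,
            # so loads/stores cost an extra cycle.
            total += 2 if needs_data else 1
    return total

prog = ["LOAD", "ADD", "STORE", "JUMP"]
print(bus_cycles(prog, harvard=False))  # 6: two ops pay an extra data cycle
print(bus_cycles(prog, harvard=True))   # 4: one cycle per instruction
```

The gap grows with the fraction of memory-referencing instructions, which is why real processors adopt split instruction/data caches (a modified Harvard organization) even when main memory stays unified.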
Transistors
• Replaced vacuum tubes
• Smaller
• Cheaper
• Less heat dissipation
• Solid State device
• Made from silicon (sand)
• Invented 1947 at Bell Labs
• Second generation machines
Generations of Computer
• Vacuum tube - 1946-1957
• Transistor - 1958-1964
• Small scale integration - 1965 on
—Up to 100 devices on a chip
• Medium scale integration - to 1971
—100-3,000 devices on a chip
• Large scale integration - 1971-1977
—3,000 - 100,000 devices on a chip
• Very large scale integration - 1978 -1991
—100,000 - 100,000,000 devices on a chip
• Ultra large scale integration – 1991 -
—Over 100,000,000 devices on a chip
Moore’s Law
• Increased density of components on chip
• Gordon Moore – co-founder of Intel
• Number of transistors on a chip will double every
year
• Since the 1970s development has slowed a little
—Number of transistors doubles every 18 months
• Cost of a chip has remained almost unchanged
• Higher packing density means shorter electrical
paths, giving higher performance
• Smaller size gives increased flexibility
• Reduced power and cooling requirements
• Fewer interconnections increase reliability
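The revised doubling rate above is easy to turn into arithmetic: with one doubling every 18 months, counts grow by a factor of 2^(months/18). The 4004 starting figure below (about 2,300 transistors in 1971) is an illustrative baseline, not part of the slide.

```python
# Rough Moore's-law projection: doubling every 18 months (the revised
# rate quoted above; Moore's original 1965 statement was every year).
def projected_transistors(base_count, years, months_per_doubling=18):
    doublings = (years * 12) / months_per_doubling
    return base_count * 2 ** doublings

# Illustrative: starting from ~2,300 transistors (roughly the Intel 4004,
# 1971), project ten years ahead.
print(round(projected_transistors(2300, 10)))
```

Note how sensitive the projection is to the doubling period: over ten years, an 18-month rate gives about 6.7 doublings, while a 12-month rate would give 10, a difference of an order of magnitude in the final count.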
Growth in CPU Transistor Count
