This chapter discusses the evolution of computers from the ENIAC in the 1940s to modern multi-core processors. It covers key developments such as the stored-program concept, transistors replacing vacuum tubes, integrated circuits, and Moore's Law. Performance improvements involved caching, pipelining, parallelism, and balancing processor and memory speeds. Embedded systems also evolved, with ARM processors widely used in devices. Benchmarks and Amdahl's Law address the limits of parallel processing.
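The Amdahl's Law limit mentioned above can be made concrete with a short sketch; the function name and parameters are illustrative, not taken from the source:

```python
# Amdahl's Law: if a fraction p of a program can be sped up by a factor
# of n (e.g. by running on n cores), the overall speedup is bounded by
# the serial fraction (1 - p).

def amdahl_speedup(p, n):
    """Overall speedup when fraction p of the work runs n times faster."""
    return 1.0 / ((1.0 - p) + p / n)

# A program that is 90% parallel gains diminishing returns from more
# cores and can never exceed a 10x speedup, no matter how many are added.
print(amdahl_speedup(0.9, 10))    # about 5.26
print(amdahl_speedup(0.9, 1000))  # approaches, but never reaches, 10
```

This is why the chapter pairs Amdahl's Law with multi-core designs: adding cores only helps the parallelizable portion of a workload.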
02 computer evolution and performance.ppt [compatibility mode] (bogi007)
This document summarizes the evolution of computer architecture from early computers like ENIAC through modern multi-core processors. It discusses key developments like stored programs, transistors replacing vacuum tubes, integrated circuits, memory hierarchies, and parallelism through pipelining and multiple cores. Moore's Law of increasing transistor counts is also summarized.
Chapter 2 - Computer Evolution and Performance (César de Souza)
The document discusses the evolution of computer hardware from the 1940s onwards. It describes early computers like ENIAC which used vacuum tubes and was programmed manually via switches. The stored program concept developed by von Neumann separated the program and data into memory. Transistors replaced vacuum tubes, making computers smaller, cheaper and more reliable. Integrated circuits led to generations of computers with increasing numbers of components on a single chip due to Moore's Law. Memory speed could not keep up with rising CPU speeds, leading to cache memory and other performance improvements.
Lecture 2 computer evolution and performance (Wajahat HuxaIn)
The document summarizes the evolution of computers from ENIAC to modern multi-core processors. Some key points include:
- ENIAC was one of the earliest general-purpose electronic computers, built in the 1940s. It led to the development of the stored-program concept by von Neumann.
- Advances like transistors, integrated circuits, and microprocessors allowed computers to become smaller, cheaper, and more powerful over time. Moore's Law predicted exponential growth in transistor counts.
- Modern multi-core CPUs pack multiple processor "cores" onto a single chip to improve performance through parallel processing. Cache size has also increased significantly to bridge the processor-memory speed gap.
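The exponential growth described in the bullets above can be sketched as follows; the 1971 Intel 4004 baseline of roughly 2,300 transistors and the two-year doubling period are commonly cited figures, used here as assumptions:

```python
# Moore's Law as a simple model: transistor counts doubling about every
# two years from a fixed baseline (Intel 4004, ~2300 transistors, 1971).

def transistors(year, base_year=1971, base_count=2300, doubling_years=2):
    """Projected transistor count under a fixed doubling period."""
    return base_count * 2 ** ((year - base_year) / doubling_years)

# Ten doublings over two decades gives roughly a 1000x increase.
print(round(transistors(1991)))  # 2300 * 2**10 = 2,355,200
```

The model is only a trend line, but it captures why each generation could integrate dramatically more components on a single chip.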
ENIAC was the first general-purpose electronic digital computer, built during World War II to calculate artillery firing tables. The stored program concept was developed by von Neumann, allowing computers to store both data and instructions in memory. This led to the development of the IAS machine at Princeton in 1952, which had 1000 words of memory that could each store a 40-bit number or two 20-bit instructions. It utilized registers like the program counter, accumulator, and instruction register to process instructions and data from memory. Transistors replaced vacuum tubes in computers, making them smaller, cheaper, and more reliable.
The CPU, or processor, carries out the instructions of a computer program and is the primary component responsible for a computer's functions. As microelectronic technology advanced, more transistors were placed on integrated circuits, decreasing the number of chips needed for a complete CPU. Processor registers provide the fastest way for a CPU to access data and are located at the top of the memory hierarchy. Common processor architectures include the ARM architecture which has influenced the design of many CPUs due to its low power consumption and flexibility.
This chapter discusses types of external memory including magnetic disks, optical disks, and magnetic tape. It provides details on disk formatting and organization, read/write mechanisms, disk speed characteristics, and RAID configurations for magnetic disks. Optical disks discussed include CD-ROM, CD-R, DVD, and high definition optical disks. Characteristics of magnetic tape such as Linear Tape-Open (LTO) tape drives are also summarized.
Intel 8th generation and 7th gen microprocessor full details especially for t... (Chessin Chacko)
Charles Babbage designed the Analytical Engine, a 19th-century mechanical computer. Its processing section, called the Mill, is considered a precursor to the modern computer processor. A processor is an electronic circuit that executes computer programs and contains a processing unit to perform calculations along with a control unit. Modern processors like those from Intel continue to evolve, with each new generation bringing performance improvements through more cores, larger caches, and integration of more components onto a single chip or die.
The document discusses different types of processors including budget, mainstream, dual core, and Intel Pentium and Core 2 processors. It provides details on the architecture and features of Pentium, dual core, and Core 2 processors. Pentium was introduced in 1993 and was a breakthrough as it had 3.1 million transistors. Dual core processors have two separate cores on the same die to allow parallel processing. Core 2 processors were introduced in 2006 and improved on previous designs with dual or quad cores, larger caches, virtualization support, and 64-bit capabilities.
This document describes Phoenix combustion analysis systems, including their Phoenix CAS software, Phoenix AM acquisition module, Phoenix RT real-time module, and Phoenix C3 compact module. The Phoenix CAS software controls Phoenix hardware, calculates combustion parameters, and displays data. The Phoenix AM acquires high-speed analog and digital input data. The Phoenix RT performs cycle-by-cycle analysis in real time. The Phoenix C3 combines functionality of AM and RT in a compact package suitable for in-vehicle testing.
The document discusses several types of processors including Pentium 4, dual-core, and quad-core processors, explaining their features and advantages. Pentium 4 used the NetBurst architecture but faced challenges scaling to higher speeds. Dual-core and quad-core processors place multiple processor cores on a single chip to improve performance through parallel processing while reducing power needs.
The document discusses several components of a computer including RAM, hard drives, power supplies, CPUs, and motherboards. RAM is used for short-term memory storage and comes in various sizes. Hard drives are used for long-term storage of data and programs, and can connect via SATA cables. Power supplies convert AC to DC power and supply the right amount of power to components. CPUs carry out instructions and are measured in hertz/megahertz. Motherboards connect all hardware components and are essential for a computer to function.
The document discusses the history and development of microprocessors from the Intel 4004 in 1971 to Intel dual core processors in 2006. It provides details on key processors such as the 8008, 8080, 8086, 8088, 80386, 80486, Pentium, Pentium Pro, Pentium II, Pentium III, Pentium IV, and dual core/Core 2 processors. It describes features such as clock speed, number of transistors, cache memory, and architecture of various processors over time.
The document discusses the evolution of computer architecture from early computers like ENIAC to modern multi-core processors. It describes key developments like the stored-program concept, transistor-based computers, microprocessors, increased integration through Moore's Law, and performance improvements through caching, pipelining, and parallelism. The evolution of Intel and PowerPC processors is also summarized.
Difference between i3 and i5 and i7 and core 2 duo pdf (navendu shekhar)
The document compares and contrasts Intel Core i3, i5, and i7 processors. It provides details on their core configurations, clock speeds, features like hyper-threading, turbo boost, virtualization support, and instruction sets. It also explains the difference between single-core, dual-core, and quad-core processors. Additionally, it provides information on Turbo Boost technology, which allows processor cores to run faster than their rated frequency under certain conditions.
This document provides an overview of computer architecture and microprocessors. It describes what a microprocessor is, its key design objectives of maximizing performance and productivity within constraints like power and area. It outlines the internal structure of a processor including the control unit, ALU, and register file. The document also discusses instruction set architecture, assembly code, types of instructions, encoding instructions into binary, and different number systems used.
An octa-core processor contains eight independent processing cores on a single chip. This allows multiple tasks to be performed simultaneously, improving efficiency. Some benefits of octa-core processors include increased multitasking ability, better performance for resource-intensive apps, lower power consumption, and a generally smoother experience for activities like web browsing, gaming and video playback. Trends in the microprocessor field have moved from dual-core to quad-core, penta-core, hexa-core and now octa-core designs, packing more cores onto a single chip to enable parallel processing.
This document provides an overview of Intel processor history and details about the Intel i7 microprocessor. It outlines the evolution of Intel processors from 1978 to 2013, including models like the 8086, 80386, Pentium, and Core i series. The document then describes key features of the Intel i7 including its quad-core design, support for multiple threads, integrated memory controller, cache structure, and technologies like hyper-threading, turbo boost, and virtualization support. Diagrams of the i7 architecture and its registers are also included.
The document discusses the central processing unit (CPU) and its components. It provides:
1. An overview of CPUs, from early mainframe computers that used printed circuit boards to modern computers that use a single microprocessor chip.
2. Details on the physical components of the CPU, including its pins, socket, heat sink, and fan.
3. Descriptions of the key internal components of the CPU, specifically the control unit which coordinates components, and the arithmetic logic unit which performs calculations.
4. Examples of different types of microprocessors used in modern computers from Intel, including dual and quad-core models.
This document provides an introduction to computer processors. It discusses the main types of processors produced by Intel and AMD, including Celeron, Pentium, Core i3/i5/i7, Athlon, Phenom, and FX series. It explains that the processor, also called the central processing unit (CPU), is the main component of a computer that executes stored program instructions. The CPU consists of a control unit that manages instruction flow and an arithmetic/logic unit that performs arithmetic and logic operations. Memory holds the data and instructions used by the CPU. Major processor vendors include Intel, AMD, Nvidia, IBM, Via, and Arm. Performance-enhancing technologies include core technology, turbo boost, and virtualization, among others.
This document provides an overview of the history and development of computer architecture. It begins with some of the earliest computing devices like the abacus and ENIAC, the first general-purpose electronic digital computer. It then discusses the evolution of CPU and memory architecture from vacuum tubes to integrated circuits and microprocessors. The document outlines different bus architectures like ISA, EISA, MCA, PCI, and AGP that were used to connect components. It also reviews memory hierarchies and I/O interfaces like IDE, SCSI, serial ports, USB, and parallel ports. The presentation aims to trace the progression of computer hardware technology over time.
This document discusses AMD processors and their history. It provides details about AMD's first in-house x86 processor (K5), the introduction of the Athlon processor in 1999, and AMD's development of 64-bit processors including the Opteron and Sempron. Pros of AMD processors include competitive gaming performance and integrated security features, while cons include limited memory compatibility and potential overheating issues in older models. The document recommends AMD for their competitive pricing and power efficiency.
The document provides information about processors:
- A processor, also known as the central processing unit (CPU), is an electronic circuit that executes instructions of a computer program and processes data. It handles the central management functions of a computer.
- The main components of a processor include the execution unit, branch predictor, floating point unit, primary cache, and bus interfaces.
- Processor speed is measured by its clock speed in gigahertz (GHz), and it comes in different architectures like AMD and Intel. Parallel processing uses multiple CPUs or processor cores simultaneously.
The CPU, or central processing unit, is the component of a computer that processes instructions and information, and it controls all other parts of the computer. The CPU is made up of millions of tiny switches called transistors mounted on a silicon chip. The speed at which a CPU processes data is measured in megahertz or gigahertz, with one gigahertz equaling one billion cycles per second, and processors continue to get faster. The main manufacturers of CPUs today are Intel, which produces Pentium chips, and AMD, which produces Athlon chips, though other manufacturers like IBM also exist.
The document provides an introduction to computer architecture. It discusses binary numbers and the bit and byte units used to measure digital information. It describes the major components of a computer system, including the central processing unit (CPU), memory, hard drives, and input/output components. The CPU functions are explained, including fetching and executing instructions through a cycle and using registers, caches, and memory in a hierarchy. Direct access memory like RAM is faster than sequential access storage like hard disks.
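The fetch-and-execute cycle described above can be sketched with a toy interpreter; the three-instruction machine (LOAD, ADD, HALT) is invented purely for illustration and is not from the source:

```python
# A toy fetch-decode-execute loop: the CPU repeatedly fetches the
# instruction at the program counter, decodes it, and executes it,
# using an accumulator register for arithmetic.

def run(program):
    pc, acc = 0, 0                  # program counter and accumulator
    while True:
        op, arg = program[pc]       # fetch the next instruction
        pc += 1                     # advance the program counter
        if op == "LOAD":            # decode and execute
            acc = arg
        elif op == "ADD":
            acc += arg
        elif op == "HALT":
            return acc

print(run([("LOAD", 2), ("ADD", 3), ("HALT", 0)]))  # 5
```

Real CPUs overlap these stages via pipelining, but the logical cycle is the same.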
A dual core processor contains two separate processing units called cores contained within a single integrated circuit chip. Each core has its own cache and controller, allowing it to function similarly to an individual processor. By having two cores, dual core processors can perform some tasks twice as fast as a single core processor. Examples of dual core CPUs include the Intel Core Duo, AMD X2, and PowerPC G5. While dual core processors are faster than single core, software must be designed to take advantage of both cores for maximum performance gains.
This document defines a computer system and its basic components. It explains that a computer accepts data as input, processes it, stores the information, and outputs it. The three major components are hardware, software, and peopleware. It then discusses the central processing unit, memory units, input/output devices, and storage devices that make up basic computer hardware. The CPU contains an arithmetic logic unit and control unit that execute stored instructions. Memory includes volatile RAM and non-volatile ROM. The BIOS checks components at startup and loads the operating system. Input devices send data for processing while output devices display processed results.
The document summarizes computer arithmetic and the arithmetic logic unit (ALU). It discusses:
1) The ALU handles integer and floating point calculations. It may have a separate floating point unit.
2) There are different methods for representing integers like sign-magnitude and two's complement. Two's complement is commonly used.
3) Floating point numbers use a sign, significand, and exponent to represent real numbers in a normalized format: ± significand × 2^exponent.
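The representations in points 2 and 3 above can be demonstrated in a few lines; the `twos_complement` helper is an illustrative sketch, not code from the source:

```python
import math

# Two's complement: the bit pattern of a signed integer in a fixed
# width, shown here as the equivalent unsigned value.
def twos_complement(value, bits=8):
    return value & ((1 << bits) - 1)

print(bin(twos_complement(-5)))  # -5 in 8 bits is 0b11111011
print(twos_complement(5))        # non-negative values are unchanged: 5

# Normalized floating point: x = ± significand × 2^exponent, with the
# significand in [0.5, 1). math.frexp performs exactly this split.
m, e = math.frexp(6.5)           # 6.5 = 0.8125 × 2^3
print(m, e)
```

Two's complement is favored because addition hardware then works identically for positive and negative operands.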
The document summarizes key aspects of cache memory including location, capacity, access methods, performance, and organization. It discusses cache memory hierarchies, characteristics of different memory types, mapping techniques like direct mapping and set associative mapping, and factors that influence cache design like block size and replacement algorithms. The goal of using a cache is to improve memory access time by taking advantage of temporal and spatial locality in programs.
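Direct mapping, one of the techniques listed above, amounts to splitting each memory address into three fields; the geometry below (16-byte blocks, 128 lines) is an arbitrary assumption for illustration:

```python
# Direct-mapped cache address decomposition: each address splits into a
# tag, a cache line index, and a byte offset within the block.
BLOCK_SIZE = 16   # bytes per cache block (assumed)
NUM_LINES = 128   # number of lines in the cache (assumed)

def split_address(addr):
    offset = addr % BLOCK_SIZE
    index = (addr // BLOCK_SIZE) % NUM_LINES
    tag = addr // (BLOCK_SIZE * NUM_LINES)
    return tag, index, offset

# Addresses exactly one cache span apart map to the same line with
# different tags, so they evict each other: a direct-mapping conflict
# that set-associative mapping is designed to reduce.
a = 0x1234
b = a + BLOCK_SIZE * NUM_LINES
print(split_address(a))  # (2, 35, 4)
print(split_address(b))  # (3, 35, 4)
```

The cache compares the stored tag for line 35 against the address's tag to decide hit or miss.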
This chapter discusses operating system support and functions including program creation, execution, I/O access, file access, system access, error handling, and accounting. It covers the evolution of operating systems from early single-program systems with no OS to modern time-sharing systems. Key topics include memory management techniques like paging, segmentation, and virtual memory which allow more efficient use of system resources through processes and virtual address translation.
This chapter introduces computer architecture and organization. Architecture refers to attributes visible to programmers like the instruction set. Organization is how features are implemented internally. The x86 and IBM System/370 families share the same architectures but have different organizations between versions. Computer functions involve data processing, storage, movement, and control. The chapter outlines the book's topics and provides internet resources for further information.
This chapter discusses reduced instruction set computers (RISC). It provides background on major advances in computers that led to RISC designs, such as cache memory, microprocessors, and pipelining. The key features of RISC processors are described, including large general-purpose registers, a limited and simple instruction set, and an emphasis on optimizing the instruction pipeline. The chapter compares RISC to CISC processors and discusses the driving forces behind both approaches. It analyzes the execution characteristics of programs and implications for processor design, such as optimizing register usage and careful pipeline design.
This chapter discusses instruction sets, which are the collection of instructions understood by the CPU. It covers the key elements of instructions, such as operation codes and operands. It also examines different characteristics of instruction sets, including data types, instruction formats, addressing modes, and types of operations. Instruction sets can vary in the number of operands they support and how data is ordered in memory (byte order or endianness). The chapter provides examples of instruction sets from processors like x86, ARM, and others.
03 top level view of computer function and interconnectionSher Shah Merkhel
The document summarizes key topics from Chapter 3 of William Stallings' Computer Organization and Architecture textbook, including:
- The components of a computer including the control unit, ALU, main memory, and I/O.
- How programs are executed through an instruction cycle of fetching and executing instructions.
- Mechanisms for flow control including interrupts, program counters, and jumps.
- How the different computer components are interconnected through buses for data, addresses, and control signals.
- Common bus architectures and how arbitration works to allow shared access to buses.
This chapter discusses processor structure and function. It covers the basic components of a CPU including registers, instruction cycles, and data flow. Key points include:
- A CPU must fetch, interpret, fetch operands for, process, and write data from instructions. It uses registers for temporary storage and a program counter to keep track of instructions.
- Common registers include general purpose, data, address, condition code, control/status, and program status registers. General purpose registers can vary in number, size, and whether they are general or specialized use.
- An instruction cycle involves fetching an instruction from memory, decoding it, fetching operands, executing the instruction, and writing results. Pipelining and branch prediction are
This chapter discusses instruction sets and addressing modes. It covers common addressing modes like immediate, direct, indirect, register, register indirect, displacement, and stack addressing. It provides examples and diagrams to illustrate each addressing mode. The chapter also discusses instruction formats for different architectures like PDP-8, PDP-10, PDP-11, VAX, x86, and ARM. It covers ARM's load/store multiple addressing and thumb instruction set. The chapter concludes with an overview of assemblers and a simple example program.
A ppt on computer evolution. It show you how computers are renewed with time and how they were in past and how they are in present. Read ppt and be grateful for the knowledge you get from it
The document discusses the evolution of computer hardware from the 1940s to present day. It begins with early computers like ENIAC which were vacuum tube based and programmed manually. The stored program concept developed by von Neumann allowed programs and data to be stored in memory and greatly influenced later computer design. Transistors replaced vacuum tubes leading to smaller, cheaper computers. The development of integrated circuits and Moore's Law led to exponential increases in transistor counts and computer performance over generations. Techniques like pipelining, caching, parallelism and multiple cores have helped improve processor performance as physical limits are reached. Memory speeds have not kept up, requiring hierarchical memory designs. The x86 architecture evolved from 8-bit to 64-bit while maintaining backwards
This document discusses the evolution of computer hardware from ENIAC to modern multi-core processors. It covers early computers like ENIAC, the development of the stored program concept by von Neumann, the transistor revolution, integrated circuits, and Moore's Law. It also discusses improvements in processor design like pipelining, caches, parallelism. Modern computers use multiple processor cores on a chip to continue improving performance within power and physical limits.
ch2 -A Computer Evolution and Performance updated.pdfKhizarKhizar8
The document summarizes the evolution of computers from the ENIAC in the 1940s to modern processors. It describes key developments such as the stored program concept pioneered by von Neumann, the transition to transistors which enabled smaller and more efficient machines, and the development of integrated circuits which led to continued improvements in performance and capacity. The x86 architecture evolved from early Intel processors like the 8086 and now powers most personal computers. Embedded systems often use the ARM architecture which was designed for low power and small size.
The document discusses the evolution of computers from early machines like ENIAC to modern microprocessors. It describes key developments such as the stored-program concept pioneered by von Neumann, the transition to transistors which made computers smaller and more reliable, the development of integrated circuits and Moore's Law. It also summarizes improvements in processor design including pipelining, caching, superscalar execution and the use of multiple processor cores.
This document discusses the evolution of computer organization and architecture from early machines like ENIAC to modern computers. It covers the development of the von Neumann architecture, early commercial computers, the introduction of transistors and integrated circuits, Moore's Law, improvements in processor and memory performance, and the evolution of the x86 architecture. Key topics covered include stored program concepts, cache memory, pipelining, multicore processors, and balancing processor and I/O speeds.
The document provides an overview of compute infrastructure, including:
- Physical computers contain components like CPUs, memory, ports, and connectivity.
- Early computers used vacuum tubes and punch cards. Transistors and ICs reduced size and cost.
- Common processor types include Intel/AMD x86, ARM, IBM POWER, Oracle SPARC.
- Early computer memory included magnetic cores and RAM chips replaced it. RAM types are SRAM and DRAM.
- The BIOS controls a computer from power on until the operating system loads.
Computer Evolution and Performance (part 2)Ajeng Savitri
This document discusses the evolution of computer generations and CPU technology over time. It describes the progression from vacuum tubes to modern integrated circuits with billions of transistors. Key developments discussed include Moore's Law, von Neumann architecture, generations of Intel CPUs, memory and I/O improvements, and the shift to multi-core processors. The x86 instruction set is traced from the 8080 to modern multi-core CPUs like the Core i7. Overall the document provides a high-level overview of the major technological milestones in computing hardware from the 1940s to present day.
This document discusses the evolution of computer architecture from semiconductor memory in the 1970s to recent processor trends. Key points covered include the development of microprocessors from the 4004 in 1971 to recent multi-core and many-integrated core processors. The document also discusses RISC architectures like ARM and benchmarks for evaluating system performance.
IBM and ASTRON, the Netherlands Institute for Radio Astronomy, will unveil a prototype high-density, 64-bit microserver CPU placed on a 133 x 55 mm board running Linux. The partners are building the microserver as part of the DOME project, which is tasked with building an IT roadmap for the Square Kilometer Array, an international consortium to build the world's largest and most sensitive radio telescope. Scientists estimate that the processing power required to operate the telescope will be equal to several millions of today's fastest computers.
IBM scientist Ronald Luijten (@ronaldgadget) will present the microserver in English from ASTRON's offices in Dwingeloo, The Netherlands.
This was recorded on 3 July 14:00 Central European Time
This document provides an overview of computer hardware components and concepts. It describes the basic components of a computer system including the system unit, CPU, memory, storage, and peripheral devices. It discusses different types of computer systems and platforms. The document also covers hardware basics such as motherboards, buses, caches and processors. Finally, it discusses computer networks, storage technologies, input devices and output devices.
This document discusses embedded computing and microcontrollers. It provides information on characteristics of embedded systems like meeting deadlines and real-time constraints. It explains why microprocessors are useful for implementing digital systems efficiently. Microprocessors can be customized for different price points and markets. The document also discusses challenges in embedded computing like power consumption and testing. It provides specifications of computer components like the processor, memory, and ports. Finally, it describes several families of microcontrollers like the Intel 4004, 8051, and ARM profiles.
A presentation on CPU. Focusing on Single and Multi core processors, Hyper threading and Turbo boost Technologies, RISC and CISC processors and computing/storage platforms. :)
This document discusses modern CPUs and how they can be extended. It describes CPU architecture and how it determines layout and features. Major manufacturers like Intel, AMD, and Freescale are profiled. The document also outlines various ports and buses that can enhance a CPU's capabilities, such as USB, FireWire, SCSI, expansion slots, and PC Cards. It provides details on serial and parallel communication standards.
This document discusses modern CPUs and how they can be extended. It covers CPU architecture, manufacturers like Intel and AMD, comparing processor features, and ports/buses that expand processing power. These include serial/parallel ports, SCSI, USB, FireWire, expansion slots, and PC Cards. Plug and play allows automatic installation of new hardware.
Computers are incredibly versatile machines that have become an integral part of modern life. Essentially, a computer is a programmable electronic device that processes input data according to instructions stored in its memory to produce output.
This document discusses modern CPUs and how they can be extended. It describes CPU architecture and how it determines layout and features. Major manufacturers like Intel, AMD, and Freescale are profiled. Processors are compared based on speed, cache size, registers, and bus speed. Advanced topics covered include RISC, parallel processing, and symmetric/massively parallel configurations. The document also outlines standard computer ports and expansion technologies like serial/parallel, SCSI, USB, FireWire, expansion slots/cards, and PC cards that extend processing power. Plug and play capabilities are noted.
This document discusses modern CPUs and how they can be extended. It covers CPU architecture, manufacturers like Intel and AMD, comparing processor features, and ports/buses that expand processing power. These include serial/parallel ports, SCSI, USB, FireWire, expansion slots, and PC Cards. Plug and play allows non-technical users to install new hardware.
This document discusses modern CPUs and how they can be extended. It describes CPU architecture and differences between Intel, AMD, Freescale and IBM processors. Key specs for comparing processors are discussed like speed, cache size and bus speed. Advanced topics covered include RISC vs CISC, parallel processing, and standard computer ports. Ways to extend the processor's power explained are serial/parallel ports, SCSI, USB, FireWire, expansion slots, PC Cards and plug and play functionality.
Knowing what's inside and how it works will help you design, develop, and implement applications better, faster, cheaper, more efficient, and easier to use because you will be able to make informed decisions instead of guestimating and assuming.
2. ENIAC - background
• Electronic Numerical Integrator And
Computer
• Eckert and Mauchly
• University of Pennsylvania
• Trajectory tables for weapons
• Started 1943
• Finished 1946
—Too late for war effort
• Used until 1955
3. ENIAC - details
• Decimal (not binary)
• 20 accumulators of 10 digits
• Programmed manually by switches
• 18,000 vacuum tubes
• 30 tons
• 15,000 square feet
• 140 kW power consumption
• 5,000 additions per second
4. von Neumann/Turing
• Stored Program concept
• Main memory storing programs and data
• ALU operating on binary data
• Control unit interpreting instructions from
memory and executing
• Input and output equipment operated by
control unit
• Princeton Institute for Advanced Studies
—IAS
• Completed 1952
6. IAS - details
• 1000 x 40 bit words
—Binary number
—2 x 20 bit instructions
• Set of registers (storage in CPU)
—Memory Buffer Register
—Memory Address Register
—Instruction Register
—Instruction Buffer Register
—Program Counter
—Accumulator
—Multiplier Quotient
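The word layout above can be sketched with a little bit-twiddling (the left/right packing here is an illustrative assumption, not the exact IAS bit ordering):

```python
# Sketch of the IAS word described above: a 40-bit word holding either one
# binary number or two 20-bit instructions. The left/right packing is
# illustrative, not the exact IAS bit ordering.
WORD_MASK = (1 << 40) - 1
HALF_MASK = (1 << 20) - 1

def pack_instructions(left, right):
    # Place two 20-bit instructions into a single 40-bit word.
    return ((left << 20) | right) & WORD_MASK

def unpack_instructions(word):
    # Recover the two 20-bit halves.
    return (word >> 20) & HALF_MASK, word & HALF_MASK

w = pack_instructions(0x12345, 0xABCDE)
print(unpack_instructions(w) == (0x12345, 0xABCDE))  # True
```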
8. Commercial Computers
• 1947 - Eckert-Mauchly Computer
Corporation
• UNIVAC I (Universal Automatic Computer)
• US Bureau of Census 1950 calculations
• Became part of Sperry-Rand Corporation
• Late 1950s - UNIVAC II
—Faster
—More memory
9. IBM
• Punched-card processing equipment
• 1953 - the 701
—IBM’s first stored program computer
—Scientific calculations
• 1955 - the 702
—Business applications
• Lead to 700/7000 series
10. Transistors
• Replaced vacuum tubes
• Smaller
• Cheaper
• Less heat dissipation
• Solid State device
• Made from Silicon (Sand)
• Invented 1947 at Bell Labs
• William Shockley et al.
11. Transistor Based Computers
• Second generation machines
• NCR & RCA produced small transistor
machines
• IBM 7000
• DEC - 1957
—Produced PDP-1
12. Microelectronics
• Literally - “small electronics”
• A computer is made up of gates, memory
cells and interconnections
• These can be manufactured on a
semiconductor
• e.g. silicon wafer
13. Generations of Computer
• Vacuum tube - 1946-1957
• Transistor - 1958-1964
• Small scale integration - 1965 on
—Up to 100 devices on a chip
• Medium scale integration - to 1971
—100-3,000 devices on a chip
• Large scale integration - 1971-1977
—3,000 - 100,000 devices on a chip
• Very large scale integration - 1978 -1991
—100,000 - 100,000,000 devices on a chip
• Ultra large scale integration – 1991 -
—Over 100,000,000 devices on a chip
14. Moore’s Law
• Increased density of components on chip
• Gordon Moore – co-founder of Intel
• Number of transistors on a chip will double every
year
• Since 1970’s development has slowed a little
— Number of transistors doubles every 18 months
• Cost of a chip has remained almost unchanged
• Higher packing density means shorter electrical
paths, giving higher performance
• Smaller size gives increased flexibility
• Reduced power and cooling requirements
• Fewer interconnections increases reliability
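The doubling rule above can be sketched numerically (an idealized model; actual growth was less regular):

```python
# Idealized Moore's Law: transistor count doubles every 18 months.
# A rough illustration, not a fit to real chip data.
def transistors(start_count, years, doubling_months=18):
    doublings = (years * 12) / doubling_months
    return start_count * 2 ** doublings

# Starting from roughly 2,300 transistors (the Intel 4004 figure),
# three years of 18-month doublings gives two doublings, i.e. 4x:
print(transistors(2300, 3))  # 9200.0
```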
16. IBM 360 series
• 1964
• Replaced (& not compatible with) 7000
series
• First planned “family” of computers
—Similar or identical instruction sets
—Similar or identical O/S
—Increasing speed
—Increasing number of I/O ports (i.e. more
terminals)
—Increased memory size
—Increased cost
• Multiplexed switch structure
17. DEC PDP-8
• 1964
• First minicomputer (after miniskirt!)
• Did not need air conditioned room
• Small enough to sit on a lab bench
• $16,000
—$100k+ for IBM 360
• Embedded applications & OEM
• BUS STRUCTURE
19. Semiconductor Memory
• 1970
• Fairchild
• Size of a single core
—i.e. 1 bit of magnetic core storage
• Holds 256 bits
• Non-destructive read
• Much faster than core
• Capacity approximately doubles each year
20. Intel
• 1971 - 4004
—First microprocessor
—All CPU components on a single chip
—4 bit
• Followed in 1972 by 8008
—8 bit
—Both designed for specific applications
• 1974 - 8080
—Intel’s first general purpose microprocessor
21. Speeding it up
• Pipelining
• On board cache
• On board L1 & L2 cache
• Branch prediction
• Data flow analysis
• Speculative execution
24. Solutions
• Increase number of bits retrieved at one
time
—Make DRAM “wider” rather than “deeper”
• Change DRAM interface
—Cache
• Reduce frequency of memory access
—More complex cache and cache on chip
• Increase interconnection bandwidth
—High speed buses
—Hierarchy of buses
25. I/O Devices
• Peripherals with intensive I/O demands
• Large data throughput demands
• Processors can handle this
• Problem moving data
• Solutions:
—Caching
—Buffering
—Higher-speed interconnection buses
—More elaborate bus structures
—Multiple-processor configurations
27. Key is Balance
• Processor components
• Main memory
• I/O devices
• Interconnection structures
28. Improvements in Chip Organization and
Architecture
• Increase hardware speed of processor
—Fundamentally due to shrinking logic gate size
– More gates, packed more tightly, increasing clock
rate
– Propagation time for signals reduced
• Increase size and speed of caches
—Dedicating part of processor chip
– Cache access times drop significantly
• Change processor organization and
architecture
—Increase effective speed of execution
—Parallelism
29. Problems with Clock Speed and Logic Density
• Power
— Power density increases with density of logic and clock
speed
— Dissipating heat
• RC delay
— Speed at which electrons flow limited by resistance and
capacitance of metal wires connecting them
— Delay increases as RC product increases
— Wire interconnects thinner, increasing resistance
— Wires closer together, increasing capacitance
• Memory latency
— Memory speeds lag processor speeds
• Solution:
— More emphasis on organizational and architectural
approaches
31. Increased Cache Capacity
• Typically two or three levels of cache
between processor and main memory
• Chip density increased
—More cache memory on chip
– Faster cache access
• Pentium chip devoted about 10% of chip
area to cache
• Pentium 4 devotes about 50%
32. More Complex Execution Logic
• Enable parallel execution of instructions
• Pipeline works like assembly line
—Different stages of execution of different
instructions at same time along pipeline
• Superscalar allows multiple pipelines
within single processor
—Instructions that do not depend on one
another can be executed in parallel
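The assembly-line analogy can be made concrete with a simple cycle count (a textbook idealization that ignores stalls and hazards):

```python
# Idealized pipeline timing: with S stages, one instruction completes per
# cycle once the pipe is full, so n instructions take S + n - 1 cycles
# instead of S * n. Ignores stalls, hazards, and branch penalties.
def pipelined_cycles(n_instructions, stages):
    return stages + n_instructions - 1

def unpipelined_cycles(n_instructions, stages):
    return stages * n_instructions

print(pipelined_cycles(100, 5))    # 104
print(unpipelined_cycles(100, 5))  # 500
```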
33. Diminishing Returns
• Internal organization of processors
complex
—Can get a great deal of parallelism
—Further significant increases likely to be
relatively modest
• Benefits from cache are reaching limit
• Increasing clock rate runs into power
dissipation problem
—Some fundamental physical limits are being
reached
34. New Approach – Multiple Cores
• Multiple processors on single chip
— Large shared cache
• Within a processor, increase in performance
proportional to square root of increase in
complexity
• If software can use multiple processors, doubling
number of processors almost doubles
performance
• So, use two simpler processors on the chip rather
than one more complex processor
• With two processors, larger caches are justified
— Power consumption of memory logic less than
processing logic
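The square-root tradeoff above can be illustrated with a quick calculation (the 90% parallel-efficiency figure is an assumption for illustration):

```python
import math

# The tradeoff on the slide: single-core performance grows only as the
# square root of added complexity, while two simpler cores can nearly
# double throughput on parallel workloads.
def single_core_speedup(complexity_factor):
    return math.sqrt(complexity_factor)

def dual_core_speedup(parallel_efficiency=1.0):
    return 2 * parallel_efficiency

print(single_core_speedup(2))   # one core, twice the transistors: ~1.41x
print(dual_core_speedup(0.9))   # two cores at 90% efficiency: 1.8
```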
35. x86 Evolution (1)
• 8080
— first general purpose microprocessor
— 8 bit data path
— Used in first personal computer – Altair
• 8086 – 5MHz – 29,000 transistors
— much more powerful
— 16 bit
— instruction cache, prefetch few instructions
— 8088 (8 bit external bus) used in first IBM PC
• 80286
— 16 Mbyte memory addressable
— up from 1Mb
• 80386
— 32 bit
— Support for multitasking
• 80486
— sophisticated powerful cache and instruction pipelining
— built-in maths co-processor
36. x86 Evolution (2)
• Pentium
— Superscalar
— Multiple instructions executed in parallel
• Pentium Pro
— Increased superscalar organization
— Aggressive register renaming
— branch prediction
— data flow analysis
— speculative execution
• Pentium II
— MMX technology
— graphics, video & audio processing
• Pentium III
— Additional floating point instructions for 3D graphics
37. x86 Evolution (3)
• Pentium 4
— Note Arabic rather than Roman numerals
— Further floating point and multimedia enhancements
• Core
— First x86 with dual core
• Core 2
— 64 bit architecture
• Core 2 Quad – 3GHz – 820 million transistors
— Four processors on chip
• x86 architecture dominant outside embedded systems
• Organization and technology changed dramatically
• Instruction set architecture evolved with backwards compatibility
• ~1 instruction per month added
• 500 instructions available
• See Intel web pages for detailed information on processors
38. Embedded Systems
ARM
• ARM evolved from RISC design
• Used mainly in embedded systems
—Used within product
—Not general purpose computer
—Dedicated function
—E.g. Anti-lock brakes in car
39. Embedded Systems Requirements
• Different sizes
—Different constraints, optimization, reuse
• Different requirements
—Safety, reliability, real-time, flexibility,
legislation
—Lifespan
—Environmental conditions
—Static v dynamic loads
—Slow to fast speeds
—Computation v I/O intensive
—Discrete event v continuous dynamics
41. ARM Evolution
• Designed by ARM Ltd., Cambridge, England
• Licensed to manufacturers
• High speed, small die, low power
consumption
• PDAs, hand held games, phones
—E.g. iPod, iPhone
• Acorn produced ARM1 & ARM2 in 1985
and ARM3 in 1989
• Acorn, VLSI and Apple Computer founded
ARM Ltd.
42. ARM Systems Categories
• Embedded real time
• Application platform
—Linux, Palm OS, Symbian OS, Windows mobile
• Secure applications
43. Performance Assessment
Clock Speed
• Key parameters
— Performance, cost, size, security, reliability, power
consumption
• System clock speed
— In Hz or multiples of
— Clock rate, clock cycle, clock tick, cycle time
• Signals in CPU take time to settle down to 1 or 0
• Signals may change at different speeds
• Operations need to be synchronised
• Instruction execution in discrete steps
— Fetch, decode, load and store, arithmetic or logical
— Usually require multiple clock cycles per instruction
• Pipelining gives simultaneous execution of
instructions
• So, clock speed is not the whole story
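The point that clock speed is not the whole story can be shown with the classic execution-time relation (hypothetical machines, illustrative numbers):

```python
# Execution time depends on instruction count and average cycles per
# instruction (CPI), not just clock rate:
#     T = instruction_count * CPI / clock_rate
def exec_time(instruction_count, cpi, clock_hz):
    return instruction_count * cpi / clock_hz

# Hypothetical machines: B has a 2x faster clock but a higher CPI, so the
# slower-clocked A still finishes first.
a = exec_time(1_000_000, 1.5, 500_000_000)   # 3 ms
b = exec_time(1_000_000, 4.0, 1_000_000_000) # 4 ms
print(a < b)  # True
```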
45. Instruction Execution Rate
• Millions of instructions per second (MIPS)
• Millions of floating point instructions per
second (MFLOPS)
• Heavily dependent on instruction set,
compiler design, processor
implementation, cache & memory
hierarchy
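The MIPS measure can be written out directly (illustrative figures, not measurements of any real processor):

```python
# MIPS in its two equivalent forms:
#     MIPS = instruction_count / (exec_time * 10**6)
#          = clock_rate / (CPI * 10**6)
def mips_from_counts(instruction_count, exec_time_s):
    return instruction_count / (exec_time_s * 10**6)

def mips_from_clock(clock_hz, cpi):
    return clock_hz / (cpi * 10**6)

# A 500 MHz machine averaging 2 cycles per instruction runs at 250 MIPS:
print(mips_from_clock(500_000_000, 2))  # 250.0
```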
46. Benchmarks
• Programs designed to test performance
• Written in high level language
— Portable
• Represents style of task
— Systems, numerical, commercial
• Easily measured
• Widely distributed
• E.g. System Performance Evaluation Corporation
(SPEC)
— CPU2006 for computation bound
– 17 floating point programs in C, C++, Fortran
– 12 integer programs in C, C++
– 3 million lines of code
— Speed and rate metrics
– Single task and throughput
47. SPEC Speed Metric
• Single task
• Base runtime defined for each benchmark using
reference machine
• Results are reported as ratio of reference time to
system run time
— Trefi execution time for benchmark i on reference
machine
— Tsuti execution time of benchmark i on test system
• Overall performance calculated by averaging ratios for all 12
integer benchmarks
— Use geometric mean
– Appropriate for normalized numbers such as ratios
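The speed metric above can be sketched as follows (made-up times, not real SPEC results):

```python
import math

# SPEC speed metric sketch: each benchmark's ratio is reference time over
# test-system time, and the overall score is the geometric mean of the
# ratios. The times below are illustrative, not real SPEC data.
def spec_speed(ref_times, sut_times):
    ratios = [tref / tsut for tref, tsut in zip(ref_times, sut_times)]
    return math.prod(ratios) ** (1 / len(ratios))

# Test system 2x faster on one benchmark, 8x on another:
print(spec_speed([100, 800], [50, 100]))  # geometric mean of 2 and 8 -> 4.0
```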
48. SPEC Rate Metric
• Measures throughput or rate of a machine carrying out a
number of tasks
• Multiple copies of benchmarks run simultaneously
— Typically, same as number of processors
• Ratio is calculated as follows:
— Trefi reference execution time for benchmark i
— N number of copies run simultaneously
— Tsuti elapsed time from start of execution of program on all N
processors until completion of all copies of program
— Rate for benchmark i = N × Trefi / Tsuti
— Again, a geometric mean is calculated
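The rate metric above can be sketched as follows (made-up numbers, not real SPEC results):

```python
import math

# SPEC rate metric sketch: N copies of each benchmark run at once; the
# per-benchmark rate is N * Tref_i / Tsut_i, and the score is the
# geometric mean of those rates. Numbers below are illustrative.
def spec_rate(n_copies, ref_times, elapsed_times):
    rates = [n_copies * tref / tsut
             for tref, tsut in zip(ref_times, elapsed_times)]
    return math.prod(rates) ** (1 / len(rates))

# 4 copies; elapsed times are double the single-copy reference times:
print(spec_rate(4, [100, 200], [200, 400]))  # each rate = 2 -> 2.0
```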
49. Amdahl’s Law
• Gene Amdahl [AMDA67]
• Potential speed up of program using
multiple processors
• Concluded that:
—Code needs to be parallelizable
—Speed up is bound, giving diminishing returns
for more processors
• Task dependent
—Servers gain by maintaining multiple
connections on multiple processors
—Databases can be split into parallel tasks
50. Amdahl’s Law Formula
• For program running on single processor
— Fraction f of code infinitely parallelizable with no scheduling overhead
— Fraction (1-f) of code inherently serial
— T is total execution time for program on single processor
— N is number of processors that fully exploit parallel portions of code
— Speedup = T / ((1-f)T + fT/N) = 1 / ((1-f) + f/N)
• Conclusions
— f small, parallel processors has little effect
— N ->∞, speedup bound by 1/(1 – f)
– Diminishing returns for using more processors
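The speedup bound described above can be checked numerically (a minimal sketch of Amdahl's formula):

```python
# Amdahl's Law: with fraction f of the code perfectly parallelizable
# over N processors,
#     speedup = 1 / ((1 - f) + f / N)
def amdahl_speedup(f, n):
    return 1 / ((1 - f) + f / n)

# Diminishing returns: even with f = 0.9, speedup is bounded by
# 1/(1 - f) = 10 no matter how many processors are added.
print(amdahl_speedup(0.9, 10))    # ~5.3x with 10 processors
print(amdahl_speedup(0.9, 1000))  # close to the 10x bound
```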
51. Internet Resources
• http://www.intel.com/
—Search for the Intel Museum
• http://www.ibm.com
• http://www.dec.com
• Charles Babbage Institute
• PowerPC
• Intel Developer Home
52. References
• AMDA67 Amdahl, G. “Validity of the
Single-Processor Approach to Achieving
Large-Scale Computing Capability”,
Proceedings of the AFIPS Conference,
1967.