Parallel Computer Models

 CEG 4131 Computer Architecture III
          Miodrag Bolic




                                      1
Overview
•   Flynn’s taxonomy
•   Classification based on the memory arrangement
•   Classification based on communication
•   Classification based on the kind of parallelism
    – Data-parallel
    – Function-parallel




                                                      2
Flynn’s Taxonomy
– The most universally accepted method of classifying computer
   systems
– Published in the Proceedings of the IEEE in 1966
– Any computer can be placed in one of 4 broad categories
» SISD: Single instruction stream, single data stream
» SIMD: Single instruction stream, multiple data streams
» MIMD: Multiple instruction streams, multiple data streams
» MISD: Multiple instruction streams, single data stream




                                                                 3
SISD




[Figure: SISD organization. A single processing element (PE) fetches instructions
 from and exchanges data with main memory (M); equivalently, a control unit drives
 one PE with a single instruction stream (IS), and the PE moves a single data
 stream (DS) to and from memory.]
                                                       4
SIMD
       Applications:
       • Image processing
       • Matrix manipulations
       • Sorting




                          5
SIMD Architectures
• Fine-grained
   –   Image processing application
   –   Large number of PEs
   –   Minimum complexity PEs
   –   Programming language is a simple extension of a sequential
       language
• Coarse-grained
   – Each PE is of higher complexity and it is usually built with
     commercial devices
   – Each PE has local memory




                                                                    6
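A minimal sketch of the data-parallel style that SIMD machines execute: one operation applied to every element of an array. On a SIMD machine each PE would hold a slice of the arrays and step through this single instruction stream in lock-step; on a modern CPU the OpenMP `simd` hint asks the compiler to use vector instructions. Array names, sizes, and the compile command are illustrative assumptions, not part of the original slides.

/* Data-parallel (SIMD-style) sketch.  Compile (illustrative): gcc simd_demo.c -fopenmp -O2 */
#include <stdio.h>

#define N 1024

int main(void) {
    static float a[N], b[N], c[N];

    for (int i = 0; i < N; i++) {      /* set up some data */
        a[i] = (float)i;
        b[i] = (float)(N - i);
    }

    #pragma omp simd                   /* single instruction, multiple data elements */
    for (int i = 0; i < N; i++)
        c[i] = a[i] + b[i];

    printf("c[0]=%.1f c[N-1]=%.1f\n", c[0], c[N - 1]);
    return 0;
}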
MIMD




       7
MISD

       Applications:
       • Classification
       • Robot vision




                          8
Flynn taxonomy
– Advantages of Flynn
» Universally accepted
» Compact Notation
» Easy to classify a system (?)
– Disadvantages of Flynn
» Very coarse-grain differentiation among machine
   systems
» Comparison of different systems is limited
» Interconnections, I/O, memory not considered in the
   scheme

                                                        9
Classification based on memory arrangement


[Figure: Two memory arrangements. Left (shared memory – multiprocessors): processors
 PE1..PEn and I/O units I/O1..I/On reach a common shared memory through an
 interconnection network. Right (message passing – multicomputers): each node pairs a
 processor P1..Pn with a private memory M1..Mn, and nodes communicate only through
 the interconnection network.]


                                                             10
Shared-memory multiprocessors
• Uniform Memory Access (UMA)
• Non-Uniform Memory Access (NUMA)
• Cache-only Memory Architecture (COMA)

• Memory is common to all the processors.
• Processors easily communicate by means of shared
  variables.




                                                     11
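A minimal sketch (assuming POSIX threads are available) of the "communicate by means of shared variables" point above: two threads, standing in for two processors, update one counter that lives in the common address space, with a mutex serializing the updates. The file name and counts are illustrative.

/* Shared-memory communication sketch.  Compile (illustrative): gcc shm_demo.c -pthread */
#include <pthread.h>
#include <stdio.h>

static long shared_counter = 0;                 /* lives in the shared address space */
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void *worker(void *arg) {
    (void)arg;
    for (int i = 0; i < 100000; i++) {
        pthread_mutex_lock(&lock);              /* serialize access to the shared variable */
        shared_counter++;
        pthread_mutex_unlock(&lock);
    }
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("shared_counter = %ld\n", shared_counter);   /* expect 200000 */
    return 0;
}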
The UMA Model
• Tightly-coupled systems (high degree of resource
  sharing)
• Suitable for general-purpose and time-sharing
  applications by multiple users.



[Figure: UMA organization – processors P1..Pn, each with a cache ($), access the
 shared memory modules (Mem) through a common interconnection network, so every
 processor sees the same memory access time.]
                                                       12
Symmetric and asymmetric multiprocessors
• Symmetric:
  - all processors have equal access to all peripheral
  devices.
  - all processors are identical.
• Asymmetric:
  - one processor (master) executes the operating system
  - other processors may be of different types and may be
  dedicated to special tasks.




                                                            13
The NUMA Model
•  The access time varies with the location of the memory
  word.
• Shared memory is distributed to local memories.
• All local memories form a global address space
  accessible by all processors

    Access time grows from cache to local memory to remote memory
    COMA - Cache-only Memory Architecture
[Figure: NUMA organization – each processor P1..Pn has a cache ($) and a local memory
 (Mem); the local memories are joined by an interconnection network into one
 distributed shared memory.]
                                                                      14
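As a rough illustration of local versus remote memory, the sketch below places a buffer on one NUMA node so that accesses from processors on that node are local while accesses from other nodes cross the interconnect. This assumes Linux with libnuma installed; the buffer size, node number, and compile command are illustrative assumptions.

/* NUMA placement sketch.  Compile (illustrative): gcc numa_demo.c -lnuma */
#include <numa.h>
#include <stdio.h>
#include <string.h>

int main(void) {
    if (numa_available() < 0) {                 /* kernel/library without NUMA support */
        printf("NUMA not available\n");
        return 0;
    }
    size_t bytes = 1 << 20;                     /* 1 MiB, illustrative size            */
    int node = 0;                               /* place the memory on node 0          */
    char *buf = numa_alloc_onnode(bytes, node);
    if (buf == NULL) return 1;

    memset(buf, 0, bytes);                      /* touch the pages so they are mapped  */
    printf("buffer placed on NUMA node %d (max node = %d)\n", node, numa_max_node());

    numa_free(buf, bytes);
    return 0;
}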
Distributed memory multicomputers
• Multiple computers – nodes
• Message-passing network
• Local memories are private, each holding its own program and data
• No memory contention, so the number of processors can be very large
• The processors are connected by communication lines, and the precise way in which
  the lines are connected is called the topology of the multicomputer
• A typical program consists of subtasks residing in all the memories
  (see the message-passing sketch below)

[Figure: Message-passing multicomputer – each node couples a PE with its private
 memory M, and the nodes exchange messages only through the interconnection network.]
                                                           15
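A minimal message-passing sketch, assuming an MPI implementation (such as MPICH or Open MPI) is installed: there is no shared memory, so each process works on its private data and node 0 must explicitly send the value that node 1 needs. File names and the run command are illustrative.

/* Message-passing sketch.  Build/run (illustrative): mpicc mp_demo.c -o mp_demo && mpirun -np 2 ./mp_demo */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    int value = 0;
    if (rank == 0) {
        value = 42;                                           /* data in node 0's private memory */
        MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);   /* explicit send to node 1         */
    } else if (rank == 1) {
        MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);                          /* explicit receive from node 0    */
        printf("node %d received %d\n", rank, value);
    }

    MPI_Finalize();
    return 0;
}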
Classification based on type of
                interconnections
• Static networks




• Dynamic networks




                                           16
Interconnection Network [1]
• Mode of Operation (Synchronous vs. Asynchronous)

• Control Strategy (Centralized vs. Decentralized)

• Switching Techniques (Packet switching vs. Circuit
  switching)

• Topology (Static Vs. Dynamic)




                                                       17
Classification based on the kind of
                            parallelism [3]

• Parallel architectures (PAs)
   – Data-parallel architectures (DPs)
      » Vector architectures
      » Associative and neural architectures
      » SIMDs
      » Systolic architectures
   – Function-parallel architectures
      » Instruction-level PAs (ILPs): pipelined processors, VLIWs, superscalar
        processors
      » Thread-level PAs
      » Process-level PAs (MIMDs): distributed memory MIMD (multicomputers) and
        shared memory MIMD (multiprocessors)

                                                                                                      18
References
•   Advanced Computer Architecture and Parallel Processing, by Hesham El-Rewini and
    Mostafa Abd-El-Barr, John Wiley and Sons, 2005.
•   Advanced Computer Architecture: Parallelism, Scalability, Programmability, by
    K. Hwang, McGraw-Hill, 1993.
•   Advanced Computer Architectures: A Design Space Approach, by Dezsö Sima, Terence
    Fountain and Peter Kacsuk, Pearson, 1997.




                                                        19
Speedup

• S = Speed(new) / Speed(old)

• S = (Work / time(new)) / (Work / time(old))

• S = time(old) / time(new)

• S = time(before improvement) /
     time(after improvement)




                                        20
Speedup

• Time (one CPU): T(1)

• Time (n CPUs): T(n)

• Speedup: S

• S = T(1)/T(n)




                         21
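A small sketch (assuming OpenMP) of how the definition is used in practice: time the same work once with one thread and once with n threads, then form S = T(1)/T(n). The workload, file name, and compile command are illustrative assumptions.

/* Speedup measurement sketch.  Compile (illustrative): gcc speedup.c -fopenmp -O2 */
#include <omp.h>
#include <stdio.h>

#define N 50000000L

static double work(void) {
    double sum = 0.0;
    #pragma omp parallel for reduction(+:sum)
    for (long i = 0; i < N; i++)
        sum += 1.0 / (double)(i + 1);
    return sum;
}

int main(void) {
    omp_set_num_threads(1);
    double t0 = omp_get_wtime();
    double r1 = work();
    double t_one = omp_get_wtime() - t0;          /* T(1) */

    omp_set_num_threads(omp_get_num_procs());
    t0 = omp_get_wtime();
    double rn = work();
    double t_n = omp_get_wtime() - t0;            /* T(n) */

    printf("checksums %.3f %.3f\n", r1, rn);      /* keep the work from being optimized away */
    printf("T(1)=%.3f s  T(n)=%.3f s  S=%.2f\n", t_one, t_n, t_one / t_n);
    return 0;
}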
Amdahl’s Law
The performance improvement to be gained from using
 some faster mode of execution is limited by the fraction
 of the time the faster mode can be used




                                                            22
Example



A trip from A to B has two parts: a stretch that must be walked, taking a fixed
20 hours, and a 200-mile stretch that can be covered by any of the modes below.

Walk   4 miles/hour
Bike   10 miles/hour
Car-1  50 miles/hour
Car-2  120 miles/hour
Car-3  600 miles/hour

                                          23
Example



A trip from A to B: a fixed 20 hours of walking plus 200 miles by the chosen mode.

Walk   4 miles/hour      50 + 20 = 70 hours        S = 1
Bike   10 miles/hour     20 + 20 = 40 hours        S = 1.8
Car-1  50 miles/hour     4 + 20 = 24 hours         S = 2.9
Car-2  120 miles/hour    1.67 + 20 = 21.67 hours   S = 3.2
Car-3  600 miles/hour    0.33 + 20 = 20.33 hours   S = 3.4

                                                           24
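The same arithmetic as a short sketch, reproducing the table above: the fixed 20-hour walk plays the role of the part that cannot be sped up, and only the 200 miles benefit from the faster mode. Names and formatting are illustrative.

/* Travel example: total time = fixed 20 h + 200 miles / speed; S = baseline time / improved time. */
#include <stdio.h>

int main(void) {
    const double fixed_hours = 20.0;     /* must be walked, cannot be sped up */
    const double miles = 200.0;
    const char  *mode[]  = { "Walk", "Bike", "Car-1", "Car-2", "Car-3" };
    const double speed[] = { 4.0, 10.0, 50.0, 120.0, 600.0 };   /* miles/hour */

    double baseline = fixed_hours + miles / speed[0];           /* 70 hours when everything is walked */
    for (int i = 0; i < 5; i++) {
        double total = fixed_hours + miles / speed[i];
        printf("%-6s %6.2f hours   S = %.1f\n", mode[i], total, baseline / total);
    }
    return 0;
}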
Amdahl’s Law (1967)




• β: The fraction of the program that is naturally serial

• (1 - β): The fraction of the program that is naturally parallel




                                                             25
S = T(1) / T(N)

T(N) = T(1)·β + T(1)·(1 - β) / N

S = 1 / (β + (1 - β)/N) = N / (β·N + (1 - β))

                                26
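A small sketch that evaluates the formula above for a given serial fraction β and a few processor counts, showing how the speedup saturates at 1/β as N grows. The chosen β and N values are illustrative.

/* Amdahl's law: S(N) = 1 / (beta + (1 - beta)/N); as N grows, S approaches 1/beta. */
#include <stdio.h>

static double amdahl_speedup(double beta, double n) {
    return 1.0 / (beta + (1.0 - beta) / n);
}

int main(void) {
    const double beta = 0.1;                       /* 10% of the program is naturally serial */
    const int n_values[] = { 1, 2, 4, 8, 16, 1024 };

    for (int i = 0; i < 6; i++)
        printf("N = %4d  S = %.2f\n", n_values[i],
               amdahl_speedup(beta, (double)n_values[i]));
    printf("limit as N grows: 1/beta = %.1f\n", 1.0 / beta);
    return 0;
}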
Amdahl’s Law




               27


Editor's Notes

  1. Two types of information flow into a processor: instructions and data. The instruction stream is the sequence of instructions performed by the processing unit; the data stream is the data traffic exchanged between the memory and the processing unit. According to Flynn’s classification, either stream can be single or multiple. Comparison with car assembly: SISD – one person does all the tasks, one at a time; MISD – each worker continues the work of the previous worker; SIMD – several workers perform the same task concurrently, and once all of them are finished another task is given to them; MIMD – each worker constructs a car independently, following his own set of instructions.
  2. A processing element processes instructions passed to it by another entity, while a memory holds computational values. The first figure shows the interaction between a processing element and its memory module. The single-instruction, single-data architecture is represented in the second figure: the control unit provides an instruction to the processing element, and the memory module serves the role described above. The memory module can also store information produced by the processing element and supply instructions to the control unit.
  3. This architecture can achieve a large speedup compared to a sequential architecture. Because all processors run at the same time, some processors may end up waiting for others to finish a specific instruction. For example, suppose two processors run the same sequence INST 1, INST 2, IF (A > B), INST 3, INST 4: on processor 1 the condition is true, so it executes INST 3 and INST 4, while on processor 2 the condition is false, so it jumps straight to INST 4 and must wait for processor 1 to catch up. The SIMD model of parallel computing consists of two parts: a front-end computer of the usual von Neumann style, and a processor array. The processor array is a set of identical, synchronized processing elements capable of simultaneously performing the same operation on different data. Each processor in the array has a small amount of local memory where the distributed data resides while it is being processed in parallel. A program can be developed and executed on the front end using a traditional serial programming language; the application program runs on the front end in the usual serial way, but issues commands to the processor array to carry out SIMD operations in parallel. The similarity between serial and data-parallel programming is one of the strong points of data parallelism. Synchronization is made irrelevant by the lock-step operation of the processors: at any moment, the processors either do nothing or perform exactly the same operation. Fine-grained architectures: each processor handles only a few data elements, at minimal processor complexity.
  4. Shared memory is like a bulletin board; message passing is like sending letters. Using the shared-memory model in a multiprocessor can create a bottleneck: several processors may be writing at the same time, and when more than one processor accesses the same memory location the contention can greatly reduce computation throughput. Giving each processing element its own local memory and using the message-passing model avoids this issue.
  5. Each processor may have registers, buffers, caches, and local memory banks as additional memory resources. Shared-memory systems also need access control, which determines which process accesses are possible to which resources: every access request issued by a processor to the shared memory is checked against the contents of the access control table. Synchronization constraints limit the times at which sharing processes may access shared resources. Protection is a system feature that prevents processes from making arbitrary accesses to resources belonging to other processes.
  6. In the UMA model, every processor sees the same delay when reading a memory location through its cache.
  7. The NUMA model: each processor has its own local memory, and all the local memories together form one large address space, with each processor owning its portion of that space exclusively. Example: processor 1 -> 0–1 GB, processor 2 -> 1–2 GB.
  8. Each processor has its own local memory. These memory modules are not part of a shared global address space as in the NUMA model.
  9. In static networks, direct fixed links are established among nodes to form a fixed network; in dynamic networks, connections are established as needed. Shared-memory systems can be designed using bus-based or switch-based interconnection networks (INs). Message-passing INs can be divided into static and dynamic.
  10. In the synchronous mode of operation, a single global clock is used by all components in the system, so the whole system operates in lock-step. The asynchronous mode of operation does not require a global clock; handshaking signals are used instead to coordinate the operation of asynchronous systems. While synchronous systems tend to be slower than asynchronous systems, they are race- and hazard-free. In packet switching, each packet is responsible for finding its own path to the destination; in circuit switching, a path is set up in advance for the packet to travel from its source to its destination.