Overview of Parallel Systems Architectures
Parallel and distributed computers
- Grand challenge problems
- Shared memory multiprocessors
- Distributed memory multicomputers
- Static/direct link interconnects
- Cluster computers
- Computational grids
- Formal classification of parallel architectures
Demand for Computational Speed
- Continual demand for greater computational speed from a computer system than is currently possible
- Areas requiring great computational speed include numerical modeling and simulation of scientific and engineering problems
- Computations must be completed within a reasonable time period
Grand Challenge Problems
- A grand challenge problem is one that cannot be solved in a reasonable amount of time with today's computers
- An execution time of, say, 10 years is clearly unreasonable
- A grand challenge problem is not, however, unsolvable
Examples
- Global weather forecasting
- Modeling the motion of astronomical bodies
- Modeling large DNA structures
- Simulating the human brain
Brain simulation
The human brain contains about 10^11 (100,000,000,000) neurons, and each neuron receives input from about 1,000 others. Computing one change of brain "state" therefore requires about 10^11 x 10^3 = 10^14 calculations. Even if each calculation could be done in 1 µs, one state change would take 10^14 µs = 10^8 s, or roughly 3 years. (This problem also presents grand challenges in storage requirements.)
Grand challenge problems are found in many fields of scientific research
- Astronomy and astrophysics
- Fluid dynamics
- Meso- to macro-scale environmental modeling
- Biomedical imaging
- Molecular biology
- Molecular design
- Cognition
- Nuclear power and weapons simulation
Parallel Computing
- Using more than one processor to solve a problem
- Motives:
  - The idea is that n processors operating simultaneously can achieve the result n times faster; in practice this ideal is not reached, for various reasons
  - Fault tolerance
  - A large amount of memory becomes available
Background
Parallel computers (computers with more than one processor) and their programming have been around for more than 40 years.
Gill, writing in 1958:
“... There is therefore nothing new in the idea of parallel programming, but its application to computers. The author cannot believe that there will be any insuperable difficulty in extending it to computers. It is not to be expected that the necessary programming techniques will be worked out overnight. Much experimenting remains to be done. After all, the techniques that are commonly used in programming today were only won at the cost of considerable toil several years ago. In fact the advent of parallel programming may do something to revive the pioneering spirit in programming which seems at the present to be degenerating into a rather dull and routine occupation ...”
Gill, S. (1958), “Parallel Programming,” The Computer Journal, vol. 1, April, pp. 2-10.
Conventional Computer
Consists of a processor executing a program stored in a (main) memory.
[Diagram: main memory connected to a processor, with instructions flowing to the processor and data flowing to/from it]
An object in main memory is located by its address. Addresses start at 0 and extend to 2^n - 1, where n is the number of bits in the address.
Types of Parallel Computers
- Shared memory multiprocessor (SMM)
- Distributed memory multicomputer (DMM)
Shared Memory Multiprocessor System
[Diagram: processors connected through an interconnection network to multiple memory modules]
A natural way to extend the single-processor model: multiple processors are connected to multiple memory modules, and all memory is shared across all processors via a single address space.
SMM Examples
- Dual and quad Pentiums
- Power Mac G5s
  - Dual processor (2 GHz each)
Quad Pentium Shared Memory Multiprocessor
[Diagram: four processors, each with an L1 cache, L2 cache and bus interface, attached to a common processor/memory bus; a memory controller connects the bus to the shared memory, and an I/O interface connects it to the I/O bus]
Shared memory
- Any memory location is accessible by any of the processors
- A single address space exists, meaning that each memory location is given a unique address within a single range of addresses
- Shared memory programming is generally more convenient, although it requires access to shared data to be controlled by the programmer (see the sketch below)
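To illustrate what "access to shared data controlled by the programmer" means in practice, here is a minimal sketch in C with POSIX threads; the shared counter, the thread count and the iteration count are invented for the example:

```c
#include <pthread.h>
#include <stdio.h>

#define NUM_THREADS 4            /* hypothetical number of threads */
#define ITERATIONS  100000

static long counter = 0;         /* shared data: one address space, visible to all threads */
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void *worker(void *arg)
{
    for (int i = 0; i < ITERATIONS; i++) {
        pthread_mutex_lock(&lock);    /* programmer-controlled access to shared data */
        counter++;
        pthread_mutex_unlock(&lock);
    }
    return NULL;
}

int main(void)
{
    pthread_t threads[NUM_THREADS];
    for (int i = 0; i < NUM_THREADS; i++)
        pthread_create(&threads[i], NULL, worker, NULL);
    for (int i = 0; i < NUM_THREADS; i++)
        pthread_join(threads[i], NULL);
    printf("counter = %ld\n", counter);   /* 400000 with the mutex in place */
    return 0;
}
```

Without the mutex the four threads would race on the shared location and updates could be lost; providing exactly this kind of control is the shared-memory programmer's job.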
Building SMM systems
- Building SMM machines with more than 4 sockets/processors is very difficult and very expensive
- 8-socket, 32-processor Opteron systems are available relatively cheaply
- E.g. Sun Microsystems E10000 "Starfire" server
  - 64 processors
  - Price: several million US dollars
Distributed Memory Multicomputer
Complete computers linked by some type of interconnection network.
[Diagram: computers, each containing a processor and local memory, exchanging messages over an interconnection network]
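Because each computer can address only its own local memory, data moves between nodes as explicit messages. Here is a minimal sketch in C using MPI, the usual message-passing library for such machines; the value sent and the message tag are invented for the example:

```c
#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
    int rank, value;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);    /* which computer am I? */

    if (rank == 0) {
        value = 42;                           /* lives in rank 0's local memory only */
        MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        /* rank 1 cannot read rank 0's memory; it must receive a message */
        MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        printf("rank 1 received %d\n", value);
    }

    MPI_Finalize();
    return 0;
}
```

Run with at least two processes, e.g. `mpirun -np 2 ./a.out`.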
Interconnection networks
- Static/direct link interconnection networks
- Cluster interconnects/networks
Static network message passing multicomputers
Computers connected by direct links.
[Diagram: three computers, each comprising a processor (P), memory (M) and communication interface (C), joined by direct links]
Static Link Interconnection Topologies
- Ring
- Tree
- 2-D and 3-D arrays
- Hypercubes
Mesh (2-D array)
[Diagram: computers (i.e. processor/memory nodes) connected in a two-dimensional grid]
Cube (3-D array)
Wire up the connections to represent a 3-D lattice, with computers arranged at the vertices of a cube.
[Diagram: eight computers at the vertices of a cube, labelled with the binary addresses 000 through 111]
i.e. each computer is directly wired to 6 adjacent computers
Tesseract (4-D hypercube)
Hypercubes were popular in the 1980s, but not now.
[Diagram: a 4-D hypercube]
i.e. each computer is directly wired to 4 adjacent computers (one per dimension)
Thinking Machines Corp. CM-2 (The Connection Machine)
- Released: 1987
- Processors: 65,536
- Memory: 512 MB
- I/O channels: 8
- Transfer rate: 320 MB/s
- Hypercube interconnect
One is preserved in the Museum of American History, Smithsonian Institution.
Cluster interconnects
Static link interconnects fell out of favour during the 1990s: too expensive! A network of workstations (NOW) became a very attractive alternative to the expensive supercomputers and parallel computer systems for high-performance computing.
Key advantages
- Very high performance workstations and PCs are readily available at low cost
- The latest processors can easily be incorporated into the system as they become available (future-proofing)
- Existing software can be used or modified
Beowulf clusters
- A group of interconnected "commodity" computers achieving high performance at low cost
- Typically use commodity interconnects (high-speed Ethernet) and the Linux OS
- The name comes from the NASA Goddard Space Flight Center cluster project
Cluster interconnect hardware
- Originally Fast Ethernet on low-cost clusters
- Gigabit Ethernet: an easy upgrade path
- More specialized / higher performance options:
  - Myrinet (2.4 Gbit/s)
  - cLAN
  - SCI (Scalable Coherent Interface)
  - QsNet
  - InfiniBand
Symmetrical Multiprocessor cluster
A cluster can also be built from shared memory computers (symmetrical multiprocessors).
[Diagram: SMP computers 0 through n-1, each containing multiple processors and memories, joined by an interconnection network]
Earth Simulator at JAMSTEC, Yokohama, Japan
- 640 processor nodes, with 8 vector processors and 16 GB of memory per node
- Some applications:
  - Ocean-atmosphere simulations
  - Interior Earth simulations
  - Holistic algorithm research
Massey's "Helix" Cluster
A Beowulf cluster of 65 Linux PC boxes (each node: 2 AMD processors, 1 GB memory) joined by an Ethernet interconnect.
See http://helix.massey.ac.nz
Cluster of rack mounted servers
Apple Xserve G5: the computing unit is a blade with dual processors and shared memory, interconnected by Gigabit Ethernet.
Much more space efficient than clustering PC boxes, but more expensive.
Computational Grids
- The components of a parallel computer could be interconnected over the internet
- Grid computing involves the application of internet-distributed computing resources to a single problem
E.g. seti@home
A radio telescope scans the sky looking for alien signals. A server sends "work units" to internet-enabled home PCs and collects the results; ~4,000 users participate in a 24-hour period.
See http://setiathome.ssl.berkeley.edu
Grid Computing
- An extreme way of achieving parallelism
- Involves developing software tools that allow internet-distributed computing resources to function effectively as one machine
- Resources need not be just processors: they can be databases, robotic systems, etc.
- See http://www.gridcomputing.com
Classification of Parallel Architectures
Flynn (1966) created a classification for computers based upon instruction streams and data streams. There are four types:
- SISD: single instruction stream, single data stream
- SIMD: single instruction stream, multiple data stream
- MISD: multiple instruction stream, single data stream
- MIMD: multiple instruction stream, multiple data stream
Single Instruction Stream, Single Data Stream (SISD)
In a single-processor computer, a single stream of instructions is generated by the program. The instructions operate on a single stream of data items.
[Diagram: a control unit feeding an instruction stream to a processor, which exchanges a single data stream with memory]
Algorithms for SISD computers do not contain any parallelism.
Single Instruction Stream, Multiple Data Stream (SIMD)
A specially designed computer in which a single instruction stream comes from a single program, but multiple data streams exist. The instructions from the program are broadcast to more than one processor, and each processor executes the same instruction in synchronism, but using different data.
Example: vector computers.
SIMD Architecture
[Diagram: one control unit broadcasts a single instruction stream to processors P1, P2, ..., PN; each processor has its own data stream to the shared memory or interconnection network]
The processors operate synchronously, and a global clock is used to ensure lockstep operation.
SIMD application example
Add two matrices: C = A + B. Say we have two matrices A and B of order 2 and we have 4 processors, i.e. we wish to calculate:
C11 = A11 + B11
C12 = A12 + B12
C21 = A21 + B21
C22 = A22 + B22
The same instruction (add the two numbers) is sent to each processor, but each processor receives different data. (A code sketch follows below.)
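On current hardware the same idea appears as vector (SIMD) instructions rather than separate processors. Here is a minimal sketch in C using OpenMP's `simd` directive to ask for vectorization; the 2x2 matrices are flattened into 4-element arrays, and their values are invented for the example:

```c
#include <stdio.h>

int main(void)
{
    /* 2x2 matrices stored row-major: {A11, A12, A21, A22} */
    double a[4] = {1.0, 2.0, 3.0, 4.0};
    double b[4] = {5.0, 6.0, 7.0, 8.0};
    double c[4];

    /* one "add" instruction, four data elements: the SIMD pattern */
    #pragma omp simd
    for (int i = 0; i < 4; i++)
        c[i] = a[i] + b[i];

    for (int i = 0; i < 4; i++)
        printf("C%d%d = %g\n", i / 2 + 1, i % 2 + 1, c[i]);
    return 0;
}
```

Compile with an OpenMP-capable compiler (e.g. `gcc -fopenmp`); without that flag the pragma is simply ignored and the loop runs serially.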
Multiple Instruction Stream, Single Data Stream (MISD)
A computer with multiple processors, each sharing a common memory. There are multiple streams of instructions and one stream of data.
[Diagram: two control units, each driving its own processor, with both processors sharing one memory]
MISD example
Check whether a number Z is prime:
- Each processor is assigned a set of test divisors in its instruction stream
- Each processor takes Z as input and tries to divide it by its divisors
MISD is awkward to implement and such machines are just experimental; no commercial MISD machine exists. (A software sketch of the idea follows below.)
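Since no MISD hardware is available, the divisor-splitting idea can only be imitated in software: every thread works on the same single datum Z while applying a different set of divisors (loosely, a different "instruction stream"). A minimal sketch in C with POSIX threads; Z, the worker count and the divisor slicing are all invented for the example:

```c
#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

#define Z 1000003L               /* hypothetical number to test */
#define NUM_WORKERS 4

static atomic_int composite = 0; /* set if any worker finds a divisor */

static void *test_divisors(void *arg)
{
    long id = (long)arg;
    /* each worker tests a different slice of the odd divisors up to sqrt(Z) */
    for (long d = 3 + 2 * id; d * d <= Z; d += 2 * NUM_WORKERS)
        if (Z % d == 0)
            composite = 1;
    return NULL;
}

int main(void)
{
    pthread_t workers[NUM_WORKERS];
    if (Z % 2 == 0 && Z != 2)    /* handle even divisors up front */
        composite = 1;
    for (long i = 0; i < NUM_WORKERS; i++)
        pthread_create(&workers[i], NULL, test_divisors, (void *)i);
    for (int i = 0; i < NUM_WORKERS; i++)
        pthread_join(workers[i], NULL);
    printf("%ld is %s\n", (long)Z, composite ? "composite" : "prime");
    return 0;
}
```

The four starting points 3, 5, 7, 9 with stride 8 cover every odd divisor between them, so together the workers test all candidates up to sqrt(Z).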
Multiple Instruction Stream, Multiple Data Stream (MIMD)
A general-purpose multiprocessor system: each processor has a separate program, one instruction stream is generated from each program, and each instruction stream operates upon different data.
This is the most general and most useful of our classifications.
MIMD architecture
[Diagram: processors P1, P2, ..., PN, each driven by its own control unit C1, C2, ..., CN, all connected to shared memory or an interconnection network]
Each processor operates under the control of an instruction stream issued by its own control unit. In general the processors operate asynchronously.
MIMD computers
MIMD machines with shared memory are described as tightly coupled (quad Pentiums, Mac G5s, ...). MIMD machines with an interconnection network are described as loosely coupled (Beowulf and rack-mounted clusters, etc.).
We will work with MIMD computers in this course.
MIMD program structure
- Multiple Program Multiple Data (MPMD): each processor has its own program to execute
- Single Program Multiple Data (SPMD): a single source program is written, and each processor executes its own personal copy of the program (see the sketch below)
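To make SPMD concrete, here is a minimal MPI sketch in C: every process runs the same source program, and each uses its rank (process id) to select its own share of the work. The problem (summing squares) and its size are invented for the example:

```c
#include <mpi.h>
#include <stdio.h>

#define N 16                      /* hypothetical problem size */

int main(int argc, char *argv[])
{
    int rank, nprocs;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);    /* this copy's identity */
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);  /* how many copies are running */

    /* same program everywhere; the rank picks out different data */
    double local_sum = 0.0, total = 0.0;
    for (int i = rank; i < N; i += nprocs)
        local_sum += (double)i * i;

    /* combine the partial sums on rank 0 */
    MPI_Reduce(&local_sum, &total, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);
    if (rank == 0)
        printf("sum of squares 0..%d = %g\n", N - 1, total);

    MPI_Finalize();
    return 0;
}
```

Launched with, say, `mpirun -np 4 ./a.out`, the same executable runs as 4 processes, each computing a quarter of the terms; MPMD, by contrast, would give each processor a different executable.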
