Multiple processor (ppt 2010)


Modern Operating System

Published in: Education, Technology


  1. Multiple Processor System
     Prepared by: Arth B. Ramada and Cristy R. Peralta
  2. Introduction
     - The computer industry has been driven by an endless quest for more and more computing power.
  3. - Making computers this small may be possible, but then we hit another fundamental problem: heat dissipation.
     - The faster a computer runs, the more heat it generates.
  4. - The smaller the computer, the harder it is to get rid of that heat.
     Introduction (continued)
     - One approach to greater speed is massively parallel computers.
  5. - A system consisting of 1000 computers spread all over the world is, in principle, no different from one consisting of 1000 computers in a single room, although the delays and other technical characteristics are different.
     Multiprocessor Systems
     (a) Shared-memory multiprocessor
     (b) Message-passing multicomputer
     (c) Wide-area distributed system
  6. Multiprocessor
     Definition:
     - A computer system in which two or more CPUs share full access to a common RAM.
  7. - A CPU can write a value into a memory word, read the word back, and get a different value, because another CPU has changed it.
     Multiprocessor Hardware
     - UMA (Uniform Memory Access) multiprocessors:
  8.   - UMA bus-based SMP architectures
  9.   - UMA multiprocessors using crossbar switches
  10.  - UMA multiprocessors using multistage switching networks
  11. - NUMA (NonUniform Memory Access) multiprocessors
     UMA Bus-Based SMP Architectures
     (a) without caching
     (b) with caching
     (c) with caching and private memories
  12. UMA Bus-Based SMP Architectures
     - Two or more CPUs and one or more memory modules all communicate over the same bus.
  13. - If the bus is busy when a CPU wants to read or write memory, the CPU simply waits until the bus becomes idle.
     With caching:
     - There is much less bus traffic, and the system can support more CPUs.
  14. - If a CPU writes a word that is held in one or more remote caches, the bus hardware detects the write and puts a signal on the bus informing all other caches of the write.
     With private memories:
     - The compiler should place the program text, strings, constants, other read-only data, and local variables in the private memories.
  15. - The shared memory is then used only for writable shared variables.
     UMA Multiprocessors Using Crossbar Switches
     - Crosspoint: a small switch that can be electrically opened or closed.
     - The nicest property of the crossbar switch is that it is a nonblocking network: no CPU is ever denied the connection it needs.
     - The worst property of the crossbar switch is that the number of crosspoints grows as n².
     UMA Multiprocessors Using Multistage Switching Networks
     - Built from 2x2 switches.
     Message format:
  16. - Module: tells which memory module to use.
  17. - Address: specifies an address within the module.
  18. - Opcode: gives the operation, READ or WRITE.
  19. - Value: contains an operand.
     Omega switching network:
     - Number of stages = log2(n)
     - Number of switches per stage = n/2
     - Total switches = (n/2) log2(n)
  20. NUMA Multiprocessors
     Three key characteristics of NUMA machines:
     - There is a single address space visible to all CPUs.
     - Access to remote memory is via LOAD and STORE instructions.
     - Access to remote memory is slower than access to local memory.
     Variants:
     - NC-NUMA (No Caching): the access time to remote memory is not hidden.
     - CC-NUMA (Cache-Coherent): caches are present.
  21. Directory-Based Multiprocessor
     - Maintains a database telling where each cache line is and what its status is.
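A minimal sketch of such a directory, assuming one entry per cache line of a node's local memory and a bitmap of which remote nodes hold a copy (the data layout and function names are illustrative, not a real protocol implementation):

```c
#include <stdint.h>

/* Status of one cache line of this node's local memory. */
enum line_state { UNCACHED, SHARED, MODIFIED };

struct dir_entry {
    enum line_state state;
    uint64_t sharers;        /* bit i set => node i caches this line */
};

#define LINES 1024
static struct dir_entry directory[LINES];

/* Record that `node` fetched line `line` for reading. */
static void dir_read(unsigned line, unsigned node) {
    directory[line].state = SHARED;
    directory[line].sharers |= 1ull << node;
}

/* On a write, the directory tells us exactly which remote caches
   must be invalidated; returns that set as a bitmap. */
static uint64_t dir_write(unsigned line, unsigned node) {
    uint64_t invalidate = directory[line].sharers & ~(1ull << node);
    directory[line].state = MODIFIED;
    directory[line].sharers = 1ull << node;
    return invalidate;
}
```

The point of the directory is visible in `dir_write`: instead of broadcasting on a bus, the writer consults the database and contacts only the caches that actually hold the line.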
  22. Multiprocessor Operating System Types
     Each CPU Has Its Own Operating System
     - Memory is divided into as many partitions as there are CPUs; each CPU gets its own private memory and its own private copy of the operating system.
  23. Master-Slave Multiprocessors
     - There is one copy of the operating system, and its tables are present on CPU 1.
     - All system calls are redirected to CPU 1.
  24. Symmetric Multiprocessors (SMP)
     - There is one copy of the OS in memory, but any CPU can run it.
     - Two CPUs might simultaneously pick the same process to run or claim the same free memory page; to prevent this, a mutex (lock) is associated with the OS, so only one CPU at a time is inside it.
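The "one big lock around the OS" idea can be sketched with an ordinary pthread mutex, treating each thread as a CPU entering the OS to update a shared table (the names and the counter are invented for illustration; a real kernel's lock is of course not a user-space pthread mutex):

```c
#include <pthread.h>
#include <stddef.h>

/* A single "big kernel lock": any CPU may run the OS, but only one at a time. */
static pthread_mutex_t os_lock = PTHREAD_MUTEX_INITIALIZER;
static long table_updates = 0;           /* stand-in for shared OS tables */

/* Each CPU's system-call path: take the lock before touching OS state. */
static void *cpu(void *arg) {
    (void)arg;
    for (int i = 0; i < 100000; i++) {
        pthread_mutex_lock(&os_lock);
        table_updates++;                 /* safe: serialized by os_lock */
        pthread_mutex_unlock(&os_lock);
    }
    return NULL;
}

static long run_two_cpus(void) {
    pthread_t a, b;
    table_updates = 0;
    pthread_create(&a, NULL, cpu, NULL);
    pthread_create(&b, NULL, cpu, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    return table_updates;
}
```

Without the lock the two increments could interleave and lose updates; with it, the result is always exact, at the cost of serializing all OS entry.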
  25. Multiprocessor Synchronization
     - TSL (Test and Set Lock) instruction: TSL must first lock the bus, preventing other CPUs from accessing memory, then perform both memory accesses, then unlock the bus.
  26. Multiprocessor Synchronization
     - Use of multiple locks to avoid cache thrashing.
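C11 exposes a TSL-style primitive directly: `atomic_flag_test_and_set` atomically sets a flag and returns its previous value, with the hardware doing the bus/cache-line locking. A spin lock built on it is a few lines (a minimal sketch; real spin locks add backoff to reduce the thrashing mentioned above):

```c
#include <stdatomic.h>

static atomic_flag lock = ATOMIC_FLAG_INIT;

/* Spin until test-and-set sees the flag clear: returning 0 (false)
   means we observed it unlocked and atomically locked it. */
static void spin_lock(void) {
    while (atomic_flag_test_and_set(&lock))
        ;   /* busy-wait; each retry is one atomic test-and-set */
}

static void spin_unlock(void) {
    atomic_flag_clear(&lock);
}
```

Each failed `atomic_flag_test_and_set` still performs a write, which is exactly why naive spinning on one shared lock causes the cache traffic the slide warns about.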
  27. Multiprocessor Scheduling
     Time Sharing
     - The simplest scheduling algorithm for dealing with unrelated processes is to have a single system-wide data structure for ready processes.
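That single system-wide structure can be as simple as one shared FIFO of process ids that every idle CPU draws from (a deliberately minimal sketch; in practice it would be per-priority lists protected by a lock):

```c
/* One shared ready list for all CPUs, FIFO order. */
#define MAXP 64
static int ready[MAXP];
static unsigned head = 0, tail = 0;

/* Add a runnable process to the system-wide list. */
static void make_ready(int pid) {
    ready[tail % MAXP] = pid;
    tail++;
}

/* Called by whichever CPU goes idle; -1 means nothing to run. */
static int pick_next(void) {
    if (head == tail) return -1;
    int pid = ready[head % MAXP];
    head++;
    return pid;
}
```

Because all CPUs share one list, load balances automatically, but the list itself becomes a point of contention as the CPU count grows.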
  28. Space Sharing
     - This approach is used when the processes are related to one another.
     - Multiple threads are scheduled at the same time across multiple CPUs.
  29. - Communication between two threads belonging to process A that are running out of phase.
  30. Gang Scheduling
     - Groups of related threads are scheduled as a unit (a gang).
     - All members of a gang run simultaneously on different timeshared CPUs.
     - All gang members start and end their time slices together.
  31. Multicomputers
     Definition:
     - Tightly coupled CPUs that do not share memory; each one has its own memory.
  32. - These systems are also known by a variety of other names, such as cluster computers and COWs (Clusters of Workstations).
     Multicomputer Hardware: Interconnection Topologies
     - Single switch
     - Ring
     - Grid
     - Double torus
     - Cube
     - 4D hypercube
  33. Multicomputer Hardware
     - Packet: a piece of a message that has been broken up, either by user software or by the network interface.
     - Store-and-forward packet switching.
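The cost of store-and-forward switching is easy to quantify: each intermediate node must receive the complete packet before forwarding it, so end-to-end latency grows linearly with the hop count. A sketch, ignoring per-hop processing overhead (the function name and parameters are mine):

```c
/* Latency of one packet under store-and-forward switching:
   every hop retransmits the whole packet, so the delays add up. */
static double packet_latency(unsigned hops, double packet_bits,
                             double link_bits_per_sec) {
    return hops * (packet_bits / link_bits_per_sec);
}
```

For example, a 1 KB (8000-bit) packet over four 1 Mb/s hops takes about 32 ms, four times the single-hop time; this is the motivation for the cut-through style switching used in later interconnects.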
  34. Network interface board in a multicomputer.
  35. Low-Level Communication Software
     Node-to-network communication:
     - Send and receive rings coordinate the main CPU with the on-board CPU.
     - When a sender has a new packet to send, it first checks whether there is an available slot in the send ring. If not, it must wait, to prevent overrun.
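The send-ring discipline above can be sketched as a fixed-size circular buffer where the sender refuses to enqueue when no slot is free (a simplified single-producer model; the struct and names are illustrative, and real rings hold descriptors, not ints):

```c
/* Send ring shared between the main CPU (producer) and the
   on-board NIC CPU (consumer). */
#define RING 8
struct ring {
    int slot[RING];
    unsigned head;   /* next slot the NIC will transmit */
    unsigned tail;   /* next slot the sender will fill  */
};

static int ring_full(const struct ring *r)  { return r->tail - r->head == RING; }
static int ring_empty(const struct ring *r) { return r->tail == r->head; }

/* Sender side: returns 0 if no slot is free (caller must wait and
   retry), so the ring can never be overrun. */
static int ring_send(struct ring *r, int pkt) {
    if (ring_full(r)) return 0;
    r->slot[r->tail % RING] = pkt;
    r->tail++;
    return 1;
}

/* NIC side: transmit the oldest queued packet and free its slot. */
static int ring_take(struct ring *r) {
    int pkt = r->slot[r->head % RING];
    r->head++;
    return pkt;
}
```

The "check for a free slot, else wait" step is the flow control: the sender can never get ahead of the NIC by more than the ring size.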
  36. Blocking versus Nonblocking Calls
     - Blocking calls: when a process calls send, it specifies a destination and a buffer to send to that destination. Until the message has been completely sent, the sending process is blocked.
     - Nonblocking calls: if send is nonblocking, it returns control immediately, before the message is sent.
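The same contrast exists in ordinary POSIX I/O and makes a convenient demonstration: a read on an empty pipe normally blocks, but with `O_NONBLOCK` set the call returns immediately with `EAGAIN` instead of waiting (a sketch of the blocking/nonblocking distinction, not of a multicomputer send itself):

```c
#include <errno.h>
#include <fcntl.h>
#include <unistd.h>

/* Returns 1 if the nonblocking read came back immediately with
   EAGAIN/EWOULDBLOCK instead of blocking, -1 on setup failure. */
static int nonblocking_read_empty_pipe(void) {
    int fd[2];
    char c;
    if (pipe(fd) != 0) return -1;
    fcntl(fd[0], F_SETFL, O_NONBLOCK);   /* make the read end nonblocking */
    ssize_t n = read(fd[0], &c, 1);      /* would block; now returns at once */
    int immediate = (n == -1 && (errno == EAGAIN || errno == EWOULDBLOCK));
    close(fd[0]);
    close(fd[1]);
    return immediate;
}
```

As with a nonblocking send, the price of getting control back immediately is that the caller must later discover, by polling or notification, when the operation can actually complete.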
  37. Remote Procedure Call
     - The client program is bound with a small library procedure called the client stub, which represents the server procedure in the client's address space.
     - The server is bound with a procedure called the server stub.
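A minimal sketch of the stub mechanism: the client stub marshals the call into a flat message, the server stub unmarshals it, invokes the real procedure, and fills in the reply. Here the "network" is just a struct handed directly across; all names are illustrative:

```c
/* Wire format: opcode plus marshaled arguments and result. */
struct msg { int opcode; int a; int b; int result; };

enum { OP_ADD = 1 };

/* The real procedure, living on the server. */
static int server_add(int a, int b) { return a + b; }

/* Server stub: unmarshal, dispatch on opcode, marshal the reply. */
static void server_stub(struct msg *m) {
    if (m->opcode == OP_ADD)
        m->result = server_add(m->a, m->b);
}

/* Client stub: looks like a local procedure to the caller, but
   really packs the arguments into a message and "sends" it. */
static int client_stub_add(int a, int b) {
    struct msg m = { OP_ADD, a, b, 0 };
    /* In a real system the message crosses the interconnect here. */
    server_stub(&m);
    return m.result;   /* caller never sees the message passing */
}
```

The whole point is the illusion: the caller writes `client_stub_add(3, 4)` as if it were local, and all marshaling and transport is hidden in the stubs.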
  38. Distributed Shared Memory
     The various layers at which shared memory can be implemented:
     (a) the hardware
     (b) the operating system
     (c) user-level software
  39. Replication
     (a) Pages distributed on 4 machines
     (b) CPU 0 reads page 10
     (c) CPU 1 reads page 10
  40. False Sharing
     - Too large an effective page size introduces a new problem.
  41. - Although the variables are unrelated, they appear by accident on the same page, so when a process uses one of them, it also gets the other.

End of the first half