Chapter 10

Parallel Processing

1. Er. Nawaraj Bhandari
   Topic 10: Parallel Processing
   Computer Architecture
2. PARALLEL PROCESSING
   • The simultaneous use of more than one CPU to execute a program. Ideally, parallel processing makes a program run faster because there are more engines (CPUs) running it.
   • In practice, it is often difficult to divide a program in such a way that separate CPUs can execute different portions without interfering with each other.
   • In computers, parallel processing is the processing of program instructions by dividing them among multiple processors with the objective of running the program in less time. In the earliest computers, only one program ran at a time.
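To make the idea of dividing one program across CPUs concrete, here is a minimal C sketch (my own illustration, not part of the original slides), assuming a compiler with OpenMP support: the iterations of a single loop are split among the available processors, and the partial results are combined at the end.

/* Minimal sketch (not from the slides): splitting one loop across CPUs
 * with OpenMP, so each processor executes a portion of the same program.
 * Compile with: gcc -fopenmp sum.c -o sum
 */
#include <stdio.h>
#include <omp.h>

int main(void) {
    enum { N = 1000000 };
    static double a[N];
    double sum = 0.0;

    for (int i = 0; i < N; i++)
        a[i] = 1.0;

    /* The loop iterations are divided among the available CPUs;
     * the reduction clause combines the per-thread partial sums. */
    #pragma omp parallel for reduction(+:sum)
    for (int i = 0; i < N; i++)
        sum += a[i];

    printf("sum = %.0f (up to %d threads)\n", sum, omp_get_max_threads());
    return 0;
}

Each thread sums its own chunk of the array independently, which is the "divide the program so separate CPUs execute different portions" idea from the slide; the reduction clause is what keeps the threads from interfering with each other on the shared total.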
3. Types of Parallel Processor Systems
   Parallel computing is an evolution of serial computing in which a job is broken into discrete parts that can be executed concurrently. Each part is further broken down into a series of instructions, and instructions from each part execute simultaneously on different CPUs. Flynn classified computer systems based on the parallelism in their instruction and data streams. The categories are:
4. Types of Parallel Processor Systems
   1. Single instruction stream, single data stream (SISD)
   2. Single instruction stream, multiple data streams (SIMD)
   3. Multiple instruction streams, single data stream (MISD)
   4. Multiple instruction streams, multiple data streams (MIMD)
5. Single instruction stream, single data stream (SISD)
   A single processor executes a single instruction stream to operate on data stored in a single memory. Uniprocessors fall into this category.
6. Single instruction stream, multiple data streams (SIMD)
   A single machine instruction controls the simultaneous execution of a number of processing elements on a lockstep basis. Each processing element has an associated data memory, so that instructions are executed on different sets of data by different processors. Vector and array processors fall into this category.
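A minimal sketch of the lockstep idea, assuming an x86 machine with SSE (an illustration added here, not from the slides): a single vector add instruction operates on four data elements at once.

/* Minimal sketch (assumption, not from the slides): one SIMD instruction
 * (ADDPS, via the _mm_add_ps intrinsic) adds four floats in lockstep,
 * which is "single instruction, multiple data" in miniature.
 * Compile on an x86 machine with: gcc -msse simd_add.c -o simd_add
 */
#include <stdio.h>
#include <xmmintrin.h>

int main(void) {
    float a[4] = {1.0f, 2.0f, 3.0f, 4.0f};
    float b[4] = {10.0f, 20.0f, 30.0f, 40.0f};
    float c[4];

    __m128 va = _mm_loadu_ps(a);    /* load 4 floats into one register  */
    __m128 vb = _mm_loadu_ps(b);
    __m128 vc = _mm_add_ps(va, vb); /* one instruction adds all 4 lanes */
    _mm_storeu_ps(c, vc);

    printf("%g %g %g %g\n", c[0], c[1], c[2], c[3]);
    return 0;
}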
7. Multiple instruction streams, multiple data streams (MIMD)
   A set of processors simultaneously execute different instruction sequences on different data sets. SMPs, clusters, and NUMA systems fit into this category.
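A small pthread sketch of the MIMD pattern (an illustration added here, not from the slides): two threads run different instruction sequences on different data sets at the same time.

/* Minimal sketch (assumption, not from the slides): two threads execute
 * different instruction sequences on different data sets concurrently,
 * which is the MIMD pattern used by SMPs and clusters.
 * Compile with: gcc -pthread mimd.c -o mimd
 */
#include <stdio.h>
#include <pthread.h>

static void *sum_ints(void *arg) {              /* instruction stream 1 */
    int *v = arg;
    long s = 0;
    for (int i = 0; i < 4; i++) s += v[i];
    printf("sum = %ld\n", s);
    return NULL;
}

static void *scale_floats(void *arg) {          /* instruction stream 2 */
    float *v = arg;
    for (int i = 0; i < 4; i++) v[i] *= 2.0f;
    printf("scaled: %g %g %g %g\n", v[0], v[1], v[2], v[3]);
    return NULL;
}

int main(void) {
    int   ints[4]   = {1, 2, 3, 4};                 /* data stream 1 */
    float floats[4] = {0.5f, 1.5f, 2.5f, 3.5f};     /* data stream 2 */
    pthread_t t1, t2;

    pthread_create(&t1, NULL, sum_ints, ints);
    pthread_create(&t2, NULL, scale_floats, floats);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    return 0;
}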
8. Multiple instruction streams, single data stream (MISD)
   A sequence of data is transmitted to a set of processors, each of which executes a different instruction sequence. This structure is not commercially implemented.
9. Taxonomy of Parallel Processor Architecture
10. SYMMETRIC MULTIPROCESSORS
   The term SMP refers to a computer hardware architecture and also to the operating system behavior that reflects that architecture. An SMP can be defined as a standalone computer system with the following characteristics:
   1. There are two or more similar processors of comparable capability.
   2. These processors share the same main memory and I/O facilities and are interconnected by a bus or other internal connection scheme, such that memory access time is approximately the same for each processor.
11. SYMMETRIC MULTIPROCESSORS (continued)
   3. All processors share access to I/O devices, either through the same channels or through different channels that provide paths to the same device.
   4. All processors can perform the same functions (hence the term symmetric).
   5. The system is controlled by an integrated operating system that provides interaction between processors and their programs at the job, task, file, and data element levels.
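As a small practical sketch of the characteristics above (an assumption of this write-up; the sysconf query is POSIX/Linux specific and is not mentioned in the slides), a program running on an SMP can ask the operating system how many similar processors it is scheduling across and size its work accordingly.

/* Minimal sketch (assumption, not from the slides): on a POSIX/Linux SMP,
 * query how many processors the integrated OS is scheduling across. */
#include <stdio.h>
#include <unistd.h>

int main(void) {
    long ncpus = sysconf(_SC_NPROCESSORS_ONLN);  /* processors online */
    if (ncpus < 1) ncpus = 1;                    /* fall back to one  */
    printf("SMP reports %ld processors sharing memory and I/O\n", ncpus);
    return 0;
}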
12. SYMMETRIC MULTIPROCESSORS (continued)
   The operating system of an SMP schedules processes or threads across all of the processors. An SMP organization has a number of potential advantages over a uniprocessor organization, including the following:
   • Performance: If the work to be done by a computer can be organized so that some portions of the work can be done in parallel, then a system with multiple processors will yield greater performance than one with a single processor of the same type.
   • Availability: In a symmetric multiprocessor, because all processors can perform the same functions, the failure of a single processor does not halt the machine. Instead, the system can continue to function at reduced performance.
13. SYMMETRIC MULTIPROCESSORS (continued)
   • Incremental growth: A user can enhance the performance of a system by adding an additional processor.
   • Scaling: Vendors can offer a range of products with different price and performance characteristics based on the number of processors configured in the system.
14. Cache Coherence and the MESI Protocol
   1. In contemporary multiprocessor systems, it is customary to have one or two levels of cache associated with each processor.
   2. This organization is essential to achieve reasonable performance. It does, however, create a problem known as the cache coherence problem.
   3. The essence of the problem is this: multiple copies of the same data can exist in different caches simultaneously, and if processors are allowed to update their own copies freely, an inconsistent view of memory can result.
15. Cache Coherence and the MESI Protocol (continued)
   1. Write back: Write operations are usually made only to the cache. Main memory is updated only when the corresponding cache line is flushed from the cache.
   2. Write through: All write operations are made to main memory as well as to the cache, ensuring that main memory is always valid.
   3. If two caches contain the same line, and the line is updated in one cache, the other cache will unknowingly hold an invalid value, and subsequent reads of that invalid line produce invalid results. Even with the write-through policy, inconsistency can occur unless other caches monitor the memory traffic or receive some direct notification of the update.
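To make the stale-copy scenario concrete, the toy model below (my own illustration, not from the slides) imitates two private write-back caches with no coherence protocol: processor 0's write stays in its own cache, so processor 1 keeps reading an out-of-date value even after the line is eventually flushed to main memory.

/* Toy model (assumption, not from the slides): two private write-back
 * caches each hold a copy of the same memory word. With no coherence
 * protocol, a write by processor 0 leaves processor 1 reading a stale value. */
#include <stdio.h>

int main(void) {
    int memory = 100;              /* shared main memory word       */
    int cache0 = memory;           /* P0's cached copy              */
    int cache1 = memory;           /* P1's cached copy              */

    cache0 = 200;                  /* P0 writes; write-back policy: */
                                   /* main memory not updated yet   */

    printf("P1 reads %d (stale); memory holds %d; P0's cache holds %d\n",
           cache1, memory, cache0);

    memory = cache0;               /* line eventually flushed back  */
    printf("after flush, memory holds %d but P1 still caches %d\n",
           memory, cache1);
    return 0;
}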
16. Cache Coherence and the MESI Protocol (continued)
   Cache coherence approaches have generally been divided into software and hardware approaches; some implementations adopt a strategy that involves both software and hardware elements.
   Software solutions: Software cache coherence schemes attempt to avoid the need for additional hardware circuitry and logic by relying on the compiler and operating system to deal with the problem. Software approaches are attractive because the overhead of detecting potential problems is transferred from run time to compile time, and the design complexity is transferred from hardware to software.
17. Cache Coherence and the MESI Protocol (continued)
   Hardware solutions: Hardware-based solutions are generally referred to as cache coherence protocols. These solutions provide dynamic recognition at run time of potential inconsistency conditions. Because the problem is only dealt with when it actually arises, there is more effective use of caches, leading to improved performance over a software approach. In addition, these approaches are transparent to the programmer and the compiler, reducing the software development burden.
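A minimal sketch of the MESI idea (an illustration added here, not from the slides; real protocols add bus transactions, write-backs, and other details): each cache line is in one of four states, Modified, Exclusive, Shared, or Invalid, and local accesses plus snooped bus events move it between those states.

/* Minimal sketch (assumption; real MESI implementations differ in detail):
 * the four line states of a snooping coherence protocol and a few
 * representative transitions driven by local accesses and snooped events. */
#include <stdio.h>

typedef enum { INVALID, SHARED, EXCLUSIVE, MODIFIED } mesi_t;

static const char *name(mesi_t s) {
    static const char *n[] = { "Invalid", "Shared", "Exclusive", "Modified" };
    return n[s];
}

/* Local processor events */
static mesi_t on_local_read(mesi_t s, int other_caches_have_line) {
    if (s == INVALID)                      /* read miss: fetch the line  */
        return other_caches_have_line ? SHARED : EXCLUSIVE;
    return s;                              /* M/E/S: read hit, no change */
}

static mesi_t on_local_write(mesi_t s) {
    (void)s;
    return MODIFIED;                       /* writing requires ownership */
}

/* Events snooped from the bus (another processor's access to this line) */
static mesi_t on_snooped_read(mesi_t s) {
    return (s == INVALID) ? INVALID : SHARED;  /* M/E must share the line */
}

static mesi_t on_snooped_write(mesi_t s) {
    (void)s;
    return INVALID;                        /* another writer: invalidate  */
}

int main(void) {
    mesi_t line = INVALID;
    line = on_local_read(line, 0);  printf("after local read   : %s\n", name(line));
    line = on_local_write(line);    printf("after local write  : %s\n", name(line));
    line = on_snooped_read(line);   printf("after snooped read : %s\n", name(line));
    line = on_snooped_write(line);  printf("after snooped write: %s\n", name(line));
    return 0;
}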
18. ANY QUESTIONS?
