Unit 8




  1. Chapter 8: Shared Memory Multiprocessors
  2. A program consists of a collection of executable sub-program units. These units, which we refer to as tasks, are also sometimes called programming grains. They must be defined, scheduled, and coordinated by hardware and software before or during program execution.
  3. Basic Issues. Multiprocessors are usually designed for two reasons: fault tolerance and program speedup.
  4. The basic issues are as follows:
     1. Partitioning. This is the process whereby the original program is decomposed into basic sub-program units, or tasks, each of which can be assigned to a separate processor. Partitioning is performed either by programmer directives in the original source program or by the compiler at compile time.
     2. Scheduling of tasks. Associated with each program is a flow of control among the sub-program units, or tasks. Certain tasks must be completed before others can be initiated (i.e., one is dependent on the other); other tasks represent functions that can be executed independently of the main program execution. The scheduler's run-time function is to arrange the task order of execution in such a way as to minimize overall program execution time.
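The two steps above can be sketched in code. This is a minimal, hypothetical illustration (the problem, task boundaries, and pool size are all invented, not taken from the chapter): a summation is partitioned into tasks, which a thread pool then schedules onto available workers.

```python
# Hypothetical sketch: partition a summation into tasks, then let a
# run-time scheduler (here, a thread pool) assign them to processors.
from concurrent.futures import ThreadPoolExecutor

def partition(data, n_tasks):
    """Decompose the original problem into sub-program units (tasks)."""
    size = (len(data) + n_tasks - 1) // n_tasks
    return [data[i:i + size] for i in range(0, len(data), size)]

def task(chunk):
    """One schedulable unit of work."""
    return sum(chunk)

data = list(range(1000))
tasks = partition(data, n_tasks=4)               # partitioning step
with ThreadPoolExecutor(max_workers=4) as pool:  # scheduling step
    partial = list(pool.map(task, tasks))        # tasks run on available workers
total = sum(partial)                             # combine the task results
```

The pool plays the scheduler's role here: it dispatches ready tasks to idle workers without the program fixing the assignment in advance.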
  5. 3. Communication and synchronization. It does the system no good merely to schedule the initiation of various tasks in the proper order unless the data that the tasks require is made available in an efficient way. Thus, communication time has to be minimized, and the receiver task must be aware of the synchronization protocol being used. An issue associated with communication is memory coherency. This property ensures that the transmitting and receiving elements have the same, or a coherent, picture of the contents of memory, at least for data which is communicated between the two tasks.
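As a small illustration of synchronized communication between two tasks (the task bodies and names are assumptions for the sketch), one task can hand a result to another through a synchronized queue; the queue's internal locking plays the role of the synchronization protocol, so the receiver never observes a partially transmitted value.

```python
# Illustrative sketch: a producer task communicates a result to a consumer
# task through a synchronized queue.
import threading
import queue

channel = queue.Queue()

def producer():
    result = sum(range(100))   # compute, then communicate
    channel.put(result)        # synchronized hand-off to the receiver

def consumer(out):
    out.append(channel.get())  # blocks until the data is available

out = []
t_cons = threading.Thread(target=consumer, args=(out,))
t_prod = threading.Thread(target=producer)
t_cons.start(); t_prod.start()
t_prod.join(); t_cons.join()
```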
  6. Suppose a program p is converted into a parallel form, pp. This conversion consists of partitioning pp into a set of tasks, Ti; pp (as partitioned) is then the set of tasks {T1, T2, ..., Tn}.
  7. Partitioning. Partitioning is the process of dividing a program into tasks, each of which can be assigned to an individual processor for execution at run time. Each task is represented as a node. Partitioning occurs at compile time, well before execution. Program overhead (o) is the added time a task takes to be loaded into a processor prior to beginning execution.
  8. Overhead affects speedup. For each task Ti, there is an associated number of overhead operations oi, so that if Ti takes Oi operations without overhead, then the parallel program performs Op = Σi (Oi + oi) operations in total.
  9. In order to achieve speedup over a uniprocessor, a multiprocessor system must achieve the maximum degree of parallelism among executing subtasks or control nodes. On the other hand, if we increase the amount of parallelism by using finer- and finer-grain task sizes, we necessarily increase the amount of overhead. This defines the well-known "U"-shaped curve for grain size.
  10. Figure: The effects of grain size.
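The "U"-shaped curve can be reproduced with a toy model (all parameters are invented for illustration): N operations are split into tasks of grain size g, each task pays a fixed overhead o, and with p processors the tasks execute in ceil(tasks/p) waves. Very fine grains pay overhead too often; very coarse grains leave processors idle.

```python
# Toy model of execution time versus grain size, showing the "U" shape.
import math

N, p, o = 1024, 8, 16   # total operations, processors, per-task overhead (assumed)

def exec_time(g):
    tasks = math.ceil(N / g)          # number of tasks at grain size g
    waves = math.ceil(tasks / p)      # how many rounds the p processors need
    return waves * (g + o)            # each wave costs one task's work plus overhead

times = {g: exec_time(g) for g in (1, 8, 32, 128, 512, 1024)}
best = min(times, key=times.get)      # an intermediate grain size minimizes time
```

With these numbers, time falls as g grows from 1, bottoms out at an intermediate grain, and rises again once there are fewer tasks than processors.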
  11. If a uniprocessor program P1 performs O1 operations, then the parallel version of P1 performs Op operations, where Op ≥ O1. For each task Ti, there is an associated number of overhead operations oi, so that if Ti takes Oi operations without overhead, then Op = Σi (Oi + oi).
  12. Clustering. Clustering is the grouping together of sub-tasks into a single assignable task. Clustering is usually performed both at partitioning time and during run-time scheduling.
  13. The reasons for clustering during partitioning time might include:
  14. Moreover, the overhead time is:
      1. Configuration dependent. Different shared memory multiprocessors may have significantly different task overheads associated with them, depending on cache size, organization, and the way caches are shared.
      2. Assignment dependent. Overhead may be significantly different depending on how tasks are actually assigned (scheduled) at run time.
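The payoff of clustering can be sketched numerically (the sub-task costs, cluster size, and overhead value are assumptions): grouping fine-grain sub-tasks into larger assignable tasks means the per-task overhead o is paid fewer times.

```python
# Hedged sketch: clustering sub-tasks to amortize per-task overhead.
o = 10                                  # overhead per assignable task (assumed)
subtasks = [3, 2, 4, 1, 5, 2, 3, 4]     # work of each fine-grain sub-task (assumed)

def total_cost(task_groups):
    # every assignable task pays its own work plus one overhead charge
    return sum(sum(group) + o for group in task_groups)

fine = [[w] for w in subtasks]          # no clustering: one task per sub-task
coarse = [subtasks[i:i + 4]             # clustering: groups of four sub-tasks
          for i in range(0, len(subtasks), 4)]

cost_fine = total_cost(fine)            # pays overhead eight times
cost_coarse = total_cost(coarse)        # pays overhead only twice
```

The total work is identical in both cases; only the number of overhead charges changes, which is exactly the trade-off the grain-size curve captures.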
  15. The detection of parallelism in the program is achieved by one of three methods:
      1. Explicit statement of concurrency in the higher-level language, as in the use of such languages as CSP (communicating sequential processes) [131] or Occam [75], which allow programmers to delineate the boundaries among tasks that can be executed in parallel, and to specify communication between such tasks.
  16. 2. The use of programmer's hints in the source statement, which the compiler may choose to use or ignore.
      3. Automatic detection of parallelism by the compiler, without programmer assistance.
  17. Dependency matrix:
            T1  T2  T3
        T1   0   -   -
        T2   1   0   -
        T3   0   1   0
      A 'one' entry indicates a dependency; e.g., in this figure T2 depends on T1 and T3 depends on T2.
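A dependency matrix like the one above directly drives a simple list scheduler (a sketch; the dictionary encoding is an assumption): a task is ready once every task it depends on has completed.

```python
# Sketch: scheduling from the dependency matrix above.
# dep[t] lists the tasks that t depends on (the 'one' entries in t's row).
dep = {
    "T1": [],        # T1 depends on nothing
    "T2": ["T1"],    # T2 depends on T1
    "T3": ["T2"],    # T3 depends on T2
}

done, order = set(), []
while len(done) < len(dep):
    # a task is ready when all of its dependencies have completed
    ready = [t for t in dep
             if t not in done and all(d in done for d in dep[t])]
    for t in ready:              # schedule every ready task this round
        order.append(t)
        done.add(t)
```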
  18. 8.3 Scheduling. Scheduling can be done either statically (at compile time) or dynamically (at run time); usually, it is performed at both times. Static scheduling information can be derived on the basis of the probable critical paths. This alone is insufficient to ensure optimum speedup or even fault tolerance.
  19. Run-time scheduling. Run-time scheduling can be performed in a number of different ways: the scheduler itself may run on a particular processor, or it may run on any processor.
  20. Typical run-time information includes information about the dynamic state of the program and the state of the system. The program state may include details provided by the compiler, such as information about the control structure and identification of critical paths or dependencies. Dynamic information includes resource availability and work-load distribution. Program information must be generated by the program itself, and then gathered by a run-time routine that centralizes it. The major overheads in run-time scheduling include:
      1. Information gathering.
      2. Scheduling.
  21. Table 8.2: Scheduling. When: scheduling can be performed at:
      Compile time
        (+) Less run-time overhead
        (-) Compiler lacks stall information; may not be fault tolerant
      Run time
        (+) More efficient execution
        (-) Higher overhead
  22. How: scheduling can be performed by:
      Arrangement                   Comment
      Designated single processor   Simplest, least effort
      Any single processor
      Multiple processors           Most complex, potentially most difficult
      3. Dynamic execution control.
      4. Dynamic data management.
  23. Dynamic execution control is a provision for dynamic clustering or process creation at run time. Dynamic data management provides for the assignment of tasks to processors in such a way as to minimize memory overhead and the delay in accessing data.
  24. The overhead during scheduling is primarily a function of two specific program characteristics:
      1. Program dynamicity
      2. Granularity
  25. 8.4 Synchronization and Coherency. In practice, a program obeys the synchronization model if and only if:
      1. All synchronization operations are performed before any subsequent memory operation is performed.
      2. All pending memory operations are performed before any synchronization operation is performed.
      3. Synchronization operations are sequentially consistent.
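Rule 2 above can be illustrated with a minimal sketch (the task bodies are assumptions): the writer completes all of its pending memory operations before performing the synchronization operation, so a reader that passes the synchronization point observes the completed data.

```python
# Illustrative sketch of the synchronization model: memory operations
# complete before the synchronization operation (the Event.set), and the
# reader synchronizes (Event.wait) before its subsequent memory reads.
import threading

data, ready = [], threading.Event()

def writer():
    data.extend([1, 2, 3])   # pending memory operations...
    ready.set()              # ...complete before the synchronization operation

def reader(out):
    ready.wait()             # synchronization performed before subsequent reads
    out.append(list(data))   # therefore sees the completed data

out = []
t_r = threading.Thread(target=reader, args=(out,))
t_w = threading.Thread(target=writer)
t_r.start(); t_w.start()
t_w.join(); t_r.join()
```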
  26. 8.5 The Effects of Partitioning and Scheduling Overhead. When a program is partitioned into tasks, the maximum number of concurrent tasks can be determined. This is simply the maximum number of tasks that can be executed at any one time; it is sometimes called the degree of parallelism that exists in the program. Even if a program has a high degree of parallelism, a corresponding degree of speedup may not be achieved. Recall the definition of speedup: Sp = T1 / Tp.
  27. T1 represents the time required for a uniprocessor to execute the program using the best uniprocessor algorithm, and Tp is the time it takes for p processors to execute the program.
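A small worked example of the definition (the timings are invented for illustration): with a best uniprocessor time of 120 s and a parallel time of 20 s on 8 processors, speedup and efficiency follow directly.

```python
# Worked example of speedup Sp = T1 / Tp (timings are assumed values).
T1 = 120.0        # seconds: best uniprocessor algorithm
Tp = 20.0         # seconds: same program on p = 8 processors
p = 8

speedup = T1 / Tp            # Sp = 6.0 here, short of the ideal Sp = p
efficiency = speedup / p     # fraction of ideal linear speedup
```

Note that speedup falls short of p whenever partitioning, scheduling, and communication overhead inflate the parallel operation count Op above O1.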