1. Multiprocessor/Multicore Systems: Scheduling and Synchronization

2. Multiprocessors
- Definition: a computer system in which two or more CPUs share full access to a common RAM

3. Multiprocessor/Multicore Hardware (ex. 1)
- Bus-based multiprocessors

4. Multiprocessor/Multicore Hardware (ex. 2): UMA (uniform memory access)
- Not/hardly scalable:
  - bus-based architectures -> saturation
  - crossbars too expensive (wiring constraints)
- Possible solutions:
  - reduce network traffic by caching
  - clustering -> non-uniform memory latency behaviour (NUMA)

5. Multiprocessor/Multicore Hardware (ex. 3): NUMA (non-uniform memory access)
- Single address space visible to all CPUs
- Access to remote memory is slower than access to local memory
- The cache controller/MMU determines whether a reference is local or remote
- When caching is involved, the design is called CC-NUMA (cache-coherent NUMA)
- Typically read replication (write invalidation)

6. Cache coherence

7. Cache coherence (cont.)
- Cache-coherence protocols are based on a set of (cache-block) states and state transitions; there are 2 types of protocols:
  - write-update
  - write-invalidate (suffers from false sharing)
- Some invalidations are not necessary for correct program execution:

      Processor 1:          Processor 2:
      while (true) do       while (true) do
        A = A + 1             B = B + 1

- If A and B are located in the same cache block, a cache miss occurs in each loop iteration due to a ping-pong of invalidations.

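A minimal C sketch of this false-sharing pattern and the standard remedy: pad/align the two counters so they land on different cache lines. The 64-byte line size, the struct names, and the iteration counts are illustrative assumptions, not from the slides.

    #include <pthread.h>
    #include <stdalign.h>   /* C11 alignas */

    /* Unpadded: a and b will typically share one cache line, so the two
     * threads' writes keep invalidating each other's cached copy. */
    struct counters_shared { long a; long b; };

    /* Padded: each counter sits on its own (assumed 64-byte) cache line,
     * eliminating the invalidation ping-pong. */
    struct counters_padded {
        alignas(64) long a;
        alignas(64) long b;
    };

    static struct counters_padded c;

    static void *inc_a(void *arg) {
        (void)arg;
        for (long i = 0; i < 100000000L; i++) c.a++;
        return NULL;
    }

    static void *inc_b(void *arg) {
        (void)arg;
        for (long i = 0; i < 100000000L; i++) c.b++;
        return NULL;
    }

    int main(void) {
        pthread_t t1, t2;
        pthread_create(&t1, NULL, inc_a, NULL);
        pthread_create(&t2, NULL, inc_b, NULL);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        return 0;
    }

Timing the same program with counters_shared in place of counters_padded typically shows a severalfold slowdown, coming purely from coherence traffic.
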
8. On multicores
Reason for multicores: physical limitations can cause significant heat dissipation and data-synchronization problems. In addition to operating system (OS) support, adjustments to existing software are required to maximize utilization of the computing resources provided by multicore processors. The virtual-machine approach is again in focus. Example: the Intel Core 2 dual-core processor, with CPU-local Level 1 caches plus a shared, on-die Level 2 cache.

9. On multicores (cont.)
- Also possible (figure from www.microsoft.com/licensing/highlights/multicore.mspx)

10. OS design issues (1): Who executes the OS/scheduler(s)?
- Master/slave architecture: key kernel functions always run on a particular processor
  - The master is responsible for scheduling; slaves send service requests to the master
  - Disadvantages:
    - failure of the master brings down the whole system
    - the master can become a performance bottleneck
- Peer architecture: the operating system can execute on any processor
  - Each processor does self-scheduling
  - New issues for the operating system:
    - make sure two processors do not choose the same process

11. Master-Slave Multiprocessor OS
(figure: master and slave CPUs connected by a shared bus)

12. Non-symmetric Peer Multiprocessor OS
- Each CPU has its own operating system
(figure: CPUs on a shared bus, each with a private copy of the OS)

13. Symmetric Peer Multiprocessor OS
- Symmetric multiprocessors
  - SMP multiprocessor model
(figure: CPUs on a shared bus running a single shared OS image)

14. Scheduling in Multiprocessors
- Recall: tightly coupled multiprocessing (SMPs)
  - processors share main memory
  - controlled by the operating system
- Different degrees of parallelism:
  - Independent and coarse-grained parallelism
    - no or very limited synchronization
    - can be supported on a multiprocessor with little change (with a grain of salt)
  - Medium-grained parallelism
    - a collection of threads, usually interacting frequently
  - Fine-grained parallelism
    - highly parallel applications; a specialized and fragmented area

15. Design issues (2): Assignment of processes to processors
- Per-processor ready queues vs. a global ready queue
- Permanently assign each process to a processor:
  - less overhead
  - but a processor could be idle while another processor has a backlog
- Have a global ready queue and schedule onto any available processor:
  - the queue can become a bottleneck
  - task migration is not cheap

16. Multiprocessor scheduling: per-partition ready queues
- Space sharing
  - multiple threads run at the same time across multiple CPUs

17. Multiprocessor scheduling: load sharing / global ready queue
- Timesharing
  - note the use of a single data structure for scheduling

18. Multiprocessor scheduling, load sharing: a problem
- Problem with communication between two threads
  - both belong to process A
  - both run out of phase

19. Design issues (3): Multiprogramming on processors?
- Experience shows:
  - running threads on separate processors (to the extent of dedicating a processor to a thread) yields dramatic gains in performance
  - allocating processors to threads is analogous to allocating pages to processes (can the working-set model be used?)
  - the specific scheduling discipline is less important with more than one processor; the decision of how to "distribute" tasks matters more

20. Gang Scheduling
- Approach to address the previous problem:
  - groups of related threads are scheduled as a unit (a gang)
  - all members of a gang run simultaneously, on different timeshared CPUs
  - all gang members start and end their time slices together

21. Gang Scheduling: another option

22. Multiprocessor thread scheduling: dynamic scheduling
- The number of threads in a process is altered dynamically by the application
- Programs (through thread libraries) give information to the OS, which manages the parallelism
  - the OS adjusts the load to improve utilization
- Or the OS gives information to the run-time system about available processors, which then adjusts its number of threads
- I.e., a dynamic version of partitioning

23. Summary: multiprocessor thread scheduling
- Load sharing: threads are not assigned to particular processors
  - load is distributed evenly across the processors
  - needs a central queue, which may become a bottleneck
  - preempted threads are unlikely to resume execution on the same processor, so cache use is less efficient
- Gang scheduling: assigns threads to particular processors (simultaneous scheduling of the threads that make up a process)
  - useful where performance degrades severely when any part of the application is not running (due to synchronization)
  - extreme version: dedicated processor assignment (no multiprogramming of processors)

24. Multiprocessor scheduling and synchronization
- Priorities + blocking synchronization may result in:
  - Priority inversion: a low-priority process P holds a lock, a high-priority process waits for it, and medium-priority processes keep P from completing and releasing the lock quickly (making scheduling less efficient). To cope with/avoid this:
    - use priority inheritance
    - use non-blocking synchronization (wait-free, lock-free, optimistic synchronization)
  - Convoy effect: processes need a resource only for a short time, yet the process holding it may block them for a long time (hence, poor utilization)
    - non-blocking synchronization is good here, too

25. Readers-Writers Non-blocking Synchronization
(some slides are adapted from J. Anderson's slides on the same topic)

26. The Mutual Exclusion Problem: locking synchronization
- N processes, each with this structure:

      while true do
        Noncritical Section;
        Entry Section;
        Critical Section;
        Exit Section
      od

- Basic requirements:
  - Exclusion: invariant(# in CS ≤ 1).
  - Starvation-freedom: (process i in Entry) leads-to (process i in CS).
- Can be implemented by "busy waiting" (spin locks) or using kernel calls.

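A minimal sketch of the busy-waiting (spin-lock) variant in C11 atomics; the names spin_lock/spin_unlock are illustrative. Note that a plain test-and-set lock gives exclusion but not starvation-freedom: an unlucky thread can lose the race forever.

    #include <stdatomic.h>

    static atomic_flag lock = ATOMIC_FLAG_INIT;

    /* Entry section: spin until our test-and-set wins. */
    static void spin_lock(void) {
        while (atomic_flag_test_and_set_explicit(&lock, memory_order_acquire))
            ;  /* busy wait */
    }

    /* Exit section: release the flag so another spinner can enter. */
    static void spin_unlock(void) {
        atomic_flag_clear_explicit(&lock, memory_order_release);
    }
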
27. Non-blocking Synchronization
- The problem:
  - Implement a shared object without mutual exclusion (locking).
    - Shared object: a data structure (e.g., a queue) shared by concurrent processes.
- Why?
  - To avoid the performance problems that result when a lock-holding task is delayed.
  - To avoid priority inversions (more on this later).

28. Non-blocking Synchronization (cont.)
- Two variants:
  - Lock-free:
    - only system-wide progress is guaranteed
    - usually implemented using "retry loops"
  - Wait-free:
    - individual progress is guaranteed
    - code for object invocations is purely sequential

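What a lock-free "retry loop" looks like in practice, as a hedged C11 sketch (the shared counter is just an illustration): the compare-and-swap fails only when some other thread succeeded, so the system as a whole makes progress, though an individual thread may retry indefinitely; hence lock-free, not wait-free.

    #include <stdatomic.h>

    static _Atomic long counter = 0;

    /* Lock-free increment: read, compute the new value, try to install it;
     * retry if another thread changed the counter in between. */
    static void lockfree_increment(void) {
        long old = atomic_load(&counter);
        /* On failure, compare_exchange_weak reloads the current value
         * into 'old', so the loop retries with fresh data. */
        while (!atomic_compare_exchange_weak(&counter, &old, old + 1))
            ;  /* lost the race: retry */
    }
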
29. Readers/Writers Problem
- [Courtois et al. 1971]
- Similar to mutual exclusion, but several readers can execute the critical section at once.
- If a writer is in its critical section, then no other process can be in its critical section.
- Plus: no starvation, fairness.

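As an aside, POSIX threads package this pattern directly as read-write locks; a minimal usage sketch (shared_data and the function names are illustrative):

    #include <pthread.h>

    static pthread_rwlock_t rw = PTHREAD_RWLOCK_INITIALIZER;
    static int shared_data;

    static int reader(void) {
        pthread_rwlock_rdlock(&rw);   /* many readers may hold the lock at once */
        int v = shared_data;
        pthread_rwlock_unlock(&rw);
        return v;
    }

    static void writer(int v) {
        pthread_rwlock_wrlock(&rw);   /* exclusive: no readers or other writers */
        shared_data = v;
        pthread_rwlock_unlock(&rw);
    }
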
30. Solution 1: readers have "priority"

      Reader::
        P(mutex);
        rc := rc + 1;
        if rc = 1 then P(w) fi;
        V(mutex);
        CS;
        P(mutex);
        rc := rc − 1;
        if rc = 0 then V(w) fi;
        V(mutex)

      Writer::
        P(w);
        CS;
        V(w)

The "first" reader executes P(w); the "last" one executes V(w).

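The same readers-priority scheme transcribed to POSIX semaphores, as a sketch (the critical-section bodies are elided; init must run before any reader or writer starts):

    #include <semaphore.h>

    static sem_t mutex;   /* protects rc */
    static sem_t w;       /* held while reading or writing is in progress */
    static int rc = 0;    /* number of active readers */

    void init(void) {
        sem_init(&mutex, 0, 1);
        sem_init(&w, 0, 1);
    }

    void reader(void) {
        sem_wait(&mutex);
        if (++rc == 1) sem_wait(&w);   /* first reader locks writers out */
        sem_post(&mutex);

        /* CS: read the shared data */

        sem_wait(&mutex);
        if (--rc == 0) sem_post(&w);   /* last reader lets writers back in */
        sem_post(&mutex);
    }

    void writer(void) {
        sem_wait(&w);
        /* CS: write the shared data */
        sem_post(&w);
    }
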
31. Solution 2: writers have "priority"
Readers should not build a long queue on r, so that writers can overtake them; hence mutex3.

      Reader::
        P(mutex3);
        P(r);
        P(mutex1);
        rc := rc + 1;
        if rc = 1 then P(w) fi;
        V(mutex1);
        V(r);
        V(mutex3);
        CS;
        P(mutex1);
        rc := rc − 1;
        if rc = 0 then V(w) fi;
        V(mutex1)

      Writer::
        P(mutex2);
        wc := wc + 1;
        if wc = 1 then P(r) fi;
        V(mutex2);
        P(w);
        CS;
        V(w);
        P(mutex2);
        wc := wc − 1;
        if wc = 0 then V(r) fi;
        V(mutex2)

32. Properties
- If several writers try to enter their critical sections, one will execute P(r), blocking readers.
- This works assuming V(r) has the effect of picking some process waiting to execute P(r) to proceed.
- Due to mutex3, if a reader executes V(r) while a writer is waiting at P(r), then the writer is picked to proceed.

33. Concurrent Reading and Writing [Lamport '77]
- Previous solutions to the readers/writers problem use some form of mutual exclusion.
- Lamport considers solutions in which readers and writers access the shared object concurrently.
- Motivation:
  - We don't want writers to wait for readers.
  - A readers/writers solution may be needed to implement mutual exclusion (circularity problem).

34. Interesting Factoids
- This is the first-ever lock-free algorithm: it guarantees consistency without locks.
- An algorithm very similar to this one is implemented within an embedded controller in Mercedes automobiles!

35. The Problem
- Let v be a data item consisting of one or more digits.
  - For example, v = 256 consists of three digits: "2", "5", and "6".
- Underlying model: digits can be read and written atomically.
- Objective: simulate atomic reads and writes of the data item v.

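To see what can go wrong without a protocol, here is a hedged C sketch of a "torn read" (the two-digit item and the function names are illustrative; per the slide's model, each digit alone is read/written atomically, but the pair is not):

    /* v = d1 d2, initially 25. A writer changes v to 31 digit by digit. */
    static volatile int d1 = 2, d2 = 5;   /* each digit atomic in the model */

    static void write_v(void) {
        d1 = 3;                 /* a reader arriving here sees d1=3, d2=5 ... */
        d2 = 1;
    }

    static int read_v(void) {
        return 10 * d1 + d2;    /* ... i.e., 35: a value that was never written */
    }
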
36. Preliminaries
- Definition: v[i], where i ≥ 0, denotes the i-th value written to v (v[0] is v's initial value).
- Note: no concurrent writing of v.
- Partitioning of v: v = v_1 ... v_m.
  - Each v_i may consist of multiple digits.
- To read v: read each v_i (in some order).
- To write v: write each v_i (in some order).

37. More Preliminaries
Suppose a read r of v overlaps a sequence of writes, so that the first value r sees comes from write k and the last from write l (k ≤ l). We say: r reads v[k,l]. The value obtained is consistent if k = l.
(figure: a timeline where writes k, k+i, ..., l overlap a read of v_1, ..., v_m)

38. Theorem 1
If v is always written from right to left, then a read from left to right obtains a value v_1[k_1,l_1] v_2[k_2,l_2] ... v_m[k_m,l_m] where k_1 ≤ l_1 ≤ k_2 ≤ l_2 ≤ ... ≤ k_m ≤ l_m.
Example: v = v_1 v_2 v_3 = d_1 d_2 d_3. The read obtains v_1[0,0] v_2[1,1] v_3[2,2].
(figure: writes 0, 1, 2 each write d_3, d_2, d_1 right to left; the read of d_1 overlaps write 0, of d_2 write 1, and of d_3 write 2)

39. Another Example
v = v_1 v_2 = (d_1 d_2)(d_3 d_4). The read obtains v_1[0,1] v_2[1,2].
(figure: writes 0, 1, 2 each write v_2 (digits d_3, d_4) before v_1 (digits d_1, d_2); the read of v_1 (d_1, d_2) spans writes 0-1 and the read of v_2 (d_4, d_3) spans writes 1-2)

40. Theorem 2
Assume that i ≤ j implies v[i] ≤ v[j], where v = d_1 ... d_m.
(a) If v is always written from right to left, then a read from left to right obtains a value v[k,l] ≤ v[l].
(b) If v is always written from left to right, then a read from right to left obtains a value v[k,l] ≥ v[k].

41. Example of (a)
v = d_1 d_2 d_3. The read obtains v[0,2] = 390 < 400 = v[2].
(figure: writes 0 (398), 1 (399), 2 (400) each proceed right to left; the read of d_1 returns 3, of d_2 returns 9, and of d_3, overlapping write 2, returns 0)

42. Example of (b)
v = d_1 d_2 d_3. The read obtains v[0,2] = 498 > 398 = v[0].
(figure: writes 0 (398), 1 (399), 2 (400) each proceed left to right; the read goes right to left: d_3 returns 8, d_2 returns 9, and d_1, overlapping write 2, returns 4)

43. Readers/Writers Solution
:> means "assign a larger value". V1 is written left to right; V2 is written right to left.

      Writer::
        V1 :> V1;
        write D;
        V2 := V1

      Reader::
        repeat
          temp := V2;
          read D
        until V1 = temp

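This two-counter scheme survives essentially unchanged in modern systems (the Linux "seqlock" uses one counter in the same role). A hedged C11 sketch of the slide's algorithm for a single writer: with word-sized atomic counters, the left-to-right/right-to-left digit machinery collapses into single atomic loads and stores. The names and the 4-word data item are illustrative, and the plain memcpy of D is formally a data race in strict C11 that a production version would fence properly.

    #include <stdatomic.h>
    #include <string.h>

    static _Atomic unsigned V1 = 0, V2 = 0;
    static int D[4];                         /* the shared data item */

    void writer_update(const int src[4]) {   /* single writer assumed */
        atomic_fetch_add(&V1, 1);            /* ":>": store a strictly larger value */
        memcpy(D, src, sizeof D);            /* write D */
        atomic_store(&V2, atomic_load(&V1)); /* V2 := V1 */
    }

    void reader_snapshot(int dst[4]) {
        unsigned temp;
        do {
            temp = atomic_load(&V2);         /* temp := V2 */
            memcpy(dst, D, sizeof D);        /* read D */
        } while (atomic_load(&V1) != temp);  /* retry until V1 = temp */
    }
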
44. Proof Obligation
- Assume the reader reads V2[k_1,l_1] D[k_2,l_2] V1[k_3,l_3].
- Proof obligation: V2[k_1,l_1] = V1[k_3,l_3] ⇒ k_2 = l_2.

45. Proof
By Theorem 2, V2[k_1,l_1] ≤ V2[l_1] and V1[k_3] ≤ V1[k_3,l_3].  (1)
Applying Theorem 1 to V2 D V1: k_1 ≤ l_1 ≤ k_2 ≤ l_2 ≤ k_3 ≤ l_3.  (2)
By the writer program, l_1 ≤ k_3 ⇒ V2[l_1] ≤ V1[k_3].  (3)
(1), (2), and (3) imply V2[k_1,l_1] ≤ V2[l_1] ≤ V1[k_3] ≤ V1[k_3,l_3].
Hence, V2[k_1,l_1] = V1[k_3,l_3] ⇒ V2[l_1] = V1[k_3] ⇒ l_1 = k_3 (by the writer's program) ⇒ k_2 = l_2, by (2).

46. Supplemental Reading
- Check:
  - G.L. Peterson, "Concurrent Reading While Writing", ACM TOPLAS, Vol. 5, No. 1, 1983, pp. 46-55.
  - It solves the same problem in a wait-free manner:
    - guarantees consistency without locks, and
    - eliminates the unbounded reader loop.
  - It is the first paper on wait-free synchronization.
- There is now a very rich literature on the topic. Check also:
  - PhD thesis of A. Gidenstam, 2006, CTH
  - PhD thesis of H. Sundell, 2005, CTH

47. Useful Synchronization Primitives (usually necessary in non-blocking algorithms)
Angle brackets ⟨...⟩ denote an atomic step.

      CAS(var, old, new) ≡
        ⟨ if var ≠ old then return false fi;
          var := new;
          return true ⟩

      (CAS2 extends this to two variables.)

      LL(var) ≡
        ⟨ establish "link" to var; return var ⟩

      SC(var, val) ≡
        ⟨ if "link" to var still exists then
            break all current links of all processes;
            var := val;
            return true
          else
            return false
          fi ⟩

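In C11, CAS is available directly in the standard library; a small sketch mapping the slide's CAS(var, old, new) onto it (cas is an illustrative wrapper name). There is no portable LL/SC, but atomic_compare_exchange_weak, which may fail spuriously, is the usual stand-in: it behaves like an SC whose link was broken.

    #include <stdatomic.h>
    #include <stdbool.h>

    /* Atomically: if *var = old then *var := new and return true,
     * else return false. atomic_compare_exchange_strong additionally
     * writes the value it actually saw back into its second argument
     * on failure. */
    static bool cas(_Atomic long *var, long old, long new_val) {
        return atomic_compare_exchange_strong(var, &old, new_val);
    }
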
48. Another Lock-free Example: Shared Queue

      type Qtype = record
        v: valtype;
        next: pointer to Qtype
      end

      shared var Tail: pointer to Qtype;
      local var old, new: pointer to Qtype

      procedure Enqueue(input: valtype)
        new := (input, NIL);
        repeat
          old := Tail
        until CAS2(Tail, old->next, old, NIL, new, new)

(figure: the retry loop; the double-word CAS2 atomically swings both Tail and old->next from (old, NIL) to (new, new))

49. Using Locks in Real-time Systems: the Priority Inversion Problem
Uncontrolled use of locks in RT systems can result in unbounded blocking due to priority inversions.
(figure: a timeline with Low-, Med-, and High-priority tasks; Low acquires a shared object at t_0, High blocks on it at t_1, and Med's computation, not involving object accesses, keeps Low from releasing: a priority inversion)
Solution: limit priority inversions by modifying task priorities.
(figure: the same timeline with modified task priorities, so the inversion is bounded and High proceeds by t_2)

50. Dealing with Priority Inversions
- Common approach: use lock-based schemes that bound the duration of inversions (as shown).
  - Examples: priority-inheritance and priority-ceiling protocols.
  - Disadvantages: require kernel support; very inefficient on multiprocessors.
- Alternative: use non-blocking objects.
  - No priority inversions and no kernel support needed.
  - Wait-free algorithms are clearly applicable here.
  - What about lock-free algorithms?
    - Advantage: usually simpler than wait-free algorithms.
    - Disadvantage: access times are potentially unbounded.
    - But for periodic task sets, access times are also predictable! (check the further-reading pointers)

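For the lock-based route, POSIX exposes the priority-inheritance protocol as a mutex attribute; a minimal sketch (support is platform-dependent, guarded by the _POSIX_THREAD_PRIO_INHERIT option):

    #include <pthread.h>

    static pthread_mutex_t m;

    /* Create a mutex whose holder temporarily inherits the priority of the
     * highest-priority thread blocked on it, bounding the inversion. */
    int init_pi_mutex(void) {
        pthread_mutexattr_t attr;
        int rc = pthread_mutexattr_init(&attr);
        if (rc == 0)
            rc = pthread_mutexattr_setprotocol(&attr, PTHREAD_PRIO_INHERIT);
        if (rc == 0)
            rc = pthread_mutex_init(&m, &attr);
        pthread_mutexattr_destroy(&attr);
        return rc;
    }

The priority-ceiling protocol is selected analogously with PTHREAD_PRIO_PROTECT together with pthread_mutexattr_setprioceiling().
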