Process Synchronization And Deadlocks


  1. Process Synchronization and Deadlocks: In a nutshell
  2. Content <ul><li>Motivation </li></ul><ul><li>Race Condition </li></ul><ul><li>Critical Section problem & Solutions </li></ul><ul><li>Classical problems in Synchronization </li></ul><ul><li>Deadlocks </li></ul>
  3. Why study these chapters? <ul><li>This is about getting processes to coordinate with each other. </li></ul><ul><li>How do processes work with resources that must be shared between them? </li></ul><ul><li>Very interesting concepts! </li></ul>
  4. A race condition example <ul><li>A race condition occurs when multiple processes/threads concurrently read and write a shared memory location and the result depends on the order of execution. </li></ul><ul><ul><li>This was the cause of a patient death on a radiation therapy machine, the Therac-25 </li></ul></ul><ul><ul><ul><li>http://sunnyday.mit.edu/therac-25.html </li></ul></ul></ul><ul><ul><ul><li>Yakima Software flow </li></ul></ul></ul><ul><li>It can also happen in bank account database transactions with, say, a husband and wife accessing the same account simultaneously from different ATMs </li></ul>
  5. A race condition example (2) <ul><li>We will implement count++ and count-- and run them concurrently </li></ul><ul><ul><li>Let us say they are executed by different threads accessing a global variable </li></ul></ul><ul><ul><li>At the end we expect count's value not to change </li></ul></ul>
  6. A race condition example (3) <ul><li>count++ implementation: </li></ul><ul><ul><li>register1 = count </li></ul></ul><ul><ul><li>register1 = register1 + 1 </li></ul></ul><ul><ul><li>count = register1 </li></ul></ul><ul><li>count-- implementation: </li></ul><ul><ul><li>register2 = count </li></ul></ul><ul><ul><li>register2 = register2 - 1 </li></ul></ul><ul><ul><li>count = register2 </li></ul></ul><ul><li>Let count = 5 initially. One possible interleaved execution of count++ and count-- is: </li></ul><ul><ul><li>register1 = count {register1 = 5} </li></ul></ul><ul><ul><li>register1 = register1 + 1 {register1 = 6} </li></ul></ul><ul><ul><li>register2 = count {register2 = 5} </li></ul></ul><ul><ul><li>register2 = register2 - 1 {register2 = 4} </li></ul></ul><ul><ul><li>count = register1 {count = 6} </li></ul></ul><ul><ul><li>count = register2 {count = 4} </li></ul></ul><ul><ul><li>count = 4 after count++ and count--, even though we started with count = 5 </li></ul></ul><ul><ul><li>Easy question: what other values can count take from doing this incorrectly? </li></ul></ul><ul><li>Obviously, we would like to have count++ execute to completion, followed by count-- (or vice versa) </li></ul>
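The interleaving above can be reproduced deterministically by writing out each register-level step by hand, rather than relying on thread timing; this is a sketch of the exact schedule shown on the slide:

```python
# Simulate the interleaved execution of count++ and count-- shown above.
# Each "register" step is written out explicitly, so the lost update is
# reproduced deterministically instead of depending on thread timing.
count = 5

# count++ as its machine-level steps
register1 = count            # register1 = 5
register1 = register1 + 1    # register1 = 6

# count-- interleaves here, reading the stale value of count
register2 = count            # register2 = 5
register2 = register2 - 1    # register2 = 4

count = register1            # count = 6
count = register2            # count = 4 -- the increment is lost

print(count)  # 4, even though one ++ and one -- should leave count at 5
```

Reordering the last two stores instead yields count = 6, which answers the slide's "easy question": depending on the schedule, count can end up 4, 5, or 6.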
  7. A race condition example (4) <ul><li>The producer/consumer problem is a more general form of the previous problem. </li></ul>
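A minimal producer/consumer sketch using a bounded buffer: two counting semaphores track free and filled slots, and a lock guards the buffer itself. The names (BUF_SIZE, the item counts) are illustrative choices, not from the slides.

```python
# Bounded-buffer producer/consumer with semaphores.
import threading
from collections import deque

BUF_SIZE = 4
buffer = deque()
empty = threading.Semaphore(BUF_SIZE)  # counts free slots
full = threading.Semaphore(0)          # counts filled slots
mutex = threading.Lock()               # guards the buffer itself
consumed = []

def producer(n):
    for item in range(n):
        empty.acquire()                # wait for a free slot
        with mutex:
            buffer.append(item)
        full.release()                 # signal one filled slot

def consumer(n):
    for _ in range(n):
        full.acquire()                 # wait for a filled slot
        with mutex:
            consumed.append(buffer.popleft())
        empty.release()                # signal one freed slot

p = threading.Thread(target=producer, args=(10,))
c = threading.Thread(target=consumer, args=(10,))
p.start(); c.start(); p.join(); c.join()
print(consumed)  # items arrive in FIFO order: [0, 1, ..., 9]
```

With one producer and one consumer, FIFO order is preserved; the semaphores prevent both overfilling the buffer and consuming from an empty one.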
  8. Critical Sections <ul><li>A critical section is a piece of code that accesses a shared resource (data structure or device) that must not be concurrently accessed by more than one thread of execution. </li></ul><ul><li>The goal is to provide a mechanism by which only one instance of a critical section is executing for a particular shared resource. </li></ul><ul><li>Unfortunately, it is often very difficult to detect critical section bugs. </li></ul>
  9. Critical Sections (2) <ul><li>A Critical Section Environment contains: </li></ul><ul><ul><li>Entry Section – code requesting entry into the critical section. </li></ul></ul><ul><ul><li>Critical Section – code in which only one process can execute at any one time. </li></ul></ul><ul><ul><li>Exit Section – the end of the critical section, releasing or allowing others in. </li></ul></ul><ul><ul><li>Remainder Section – the rest of the code AFTER the critical section. </li></ul></ul>
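The four sections can be sketched with a lock serving as the entry/exit mechanism; the function and variable names here are illustrative:

```python
# Skeleton of a critical-section environment using a lock.
import threading

lock = threading.Lock()
shared = []  # resource accessed only inside the critical section

def worker(tid):
    lock.acquire()        # Entry section: request entry
    shared.append(tid)    # Critical section: one thread at a time
    lock.release()        # Exit section: allow others in
    local = tid * 2       # Remainder section: non-shared work
    return local

threads = [threading.Thread(target=worker, args=(i,)) for i in range(5)]
for t in threads: t.start()
for t in threads: t.join()
print(sorted(shared))  # every thread appended exactly once: [0, 1, 2, 3, 4]
```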
  10. Critical Sections (3)
  11. Solution to Critical-Section Problem <ul><li>A critical-section solution must ENFORCE ALL THREE of the following rules: </li></ul><ul><li>1. Mutual Exclusion - If process Pi is executing in its critical section, then no other process can be executing in its critical section </li></ul><ul><ul><li>This is commonly abbreviated mutex </li></ul></ul><ul><li>2. Progress - If no process is executing in its critical section and some processes wish to enter their critical sections, then the selection of the process that will enter next cannot be postponed indefinitely </li></ul><ul><li>3. Bounded Waiting - A bound must exist on the number of times that other processes are allowed to enter their critical sections after a process has made a request to enter its critical section and before that request is granted </li></ul><ul><ul><li>Assume that each process executes at a nonzero speed </li></ul></ul><ul><ul><li>No assumption is made about the relative speed of the N processes </li></ul></ul>
  12. Critical Section Solutions <ul><li>Hardware </li></ul><ul><ul><li>Many systems provide hardware support for critical section code </li></ul></ul><ul><ul><ul><li>Uniprocessors – could disable interrupts </li></ul></ul></ul><ul><ul><ul><ul><li>Currently running code would execute without preemption </li></ul></ul></ul></ul><ul><ul><ul><ul><li>Generally too inefficient on multiprocessor systems </li></ul></ul></ul></ul><ul><ul><ul><ul><ul><li>Have to wait for disable to propagate to all processors </li></ul></ul></ul></ul></ul><ul><ul><ul><ul><ul><li>Operating systems using this are not broadly scalable </li></ul></ul></ul></ul></ul><ul><ul><ul><li>Modern machines provide special atomic hardware instructions </li></ul></ul></ul><ul><ul><ul><ul><ul><li>Atomic = non-interruptable </li></ul></ul></ul></ul></ul>
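A classic use of an atomic instruction is a test-and-set spinlock. Real hardware executes test-and-set as a single atomic instruction; in this sketch a Python Lock stands in for that hardware atomicity, so this models the idea rather than implementing it:

```python
# Model of a spinlock built on an atomic test-and-set instruction.
import threading

class SpinLock:
    def __init__(self):
        self._flag = False
        self._atomic = threading.Lock()  # stands in for hardware atomicity

    def _test_and_set(self):
        # Atomically return the old flag value and set the flag to True,
        # as the hardware instruction would.
        with self._atomic:
            old = self._flag
            self._flag = True
            return old

    def acquire(self):
        while self._test_and_set():  # spin while the lock was already held
            pass

    def release(self):
        self._flag = False

counter = 0
lock = SpinLock()

def bump(n):
    global counter
    for _ in range(n):
        lock.acquire()
        counter += 1     # critical section
        lock.release()

threads = [threading.Thread(target=bump, args=(500,)) for _ in range(4)]
for t in threads: t.start()
for t in threads: t.join()
print(counter)  # 2000: no increments lost
```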
  13. Critical Section Solutions <ul><li>Software </li></ul><ul><ul><li>Peterson's Solution: for two processes only. </li></ul></ul><ul><ul><li>Semaphore: a flag used to indicate that a routine cannot proceed if a shared resource is already in use by another routine. The allowable operations on a semaphore are V("signal") and P("wait"); both are atomic operations. </li></ul></ul><ul><ul><ul><li>Two types: counting and binary (mutex locks). </li></ul></ul></ul>
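Peterson's solution for two threads can be sketched directly. It assumes that reads and writes to flag and turn become visible in program order; CPython's interpreter happens to provide this, but compiled languages would need memory barriers or atomics:

```python
# Sketch of Peterson's solution for two threads (ids 0 and 1).
import threading

flag = [False, False]  # flag[i]: thread i wants to enter
turn = 0               # whose turn it is to defer
count = 0              # shared state protected by the protocol

def worker(i, iterations):
    global turn, count
    other = 1 - i
    for _ in range(iterations):
        flag[i] = True             # entry section: declare interest
        turn = other               # yield priority to the other thread
        while flag[other] and turn == other:
            pass                   # busy-wait
        count += 1                 # critical section
        flag[i] = False            # exit section

t0 = threading.Thread(target=worker, args=(0, 500))
t1 = threading.Thread(target=worker, args=(1, 500))
t0.start(); t1.start(); t0.join(); t1.join()
print(count)  # 1000: mutual exclusion preserved every update
```

Setting turn to the other thread before spinning is what provides both progress and bounded waiting: a thread can be overtaken at most once before it enters.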
  14. Some Classical Problems in Synchronization <ul><li>Dining Philosophers. </li></ul>
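A sketch of the dining philosophers problem, using one common fix: every philosopher picks up the lower-numbered fork first, imposing a total order on the resources so no circular wait can form. The counts and names are illustrative:

```python
# Dining philosophers with ordered fork acquisition (deadlock-free).
import threading

N = 5
forks = [threading.Lock() for _ in range(N)]
meals = [0] * N

def philosopher(i, rounds):
    left, right = i, (i + 1) % N
    # Always acquire the lower-numbered fork first: a total resource
    # order that rules out circular wait.
    first, second = min(left, right), max(left, right)
    for _ in range(rounds):
        with forks[first]:
            with forks[second]:
                meals[i] += 1   # "eating"

threads = [threading.Thread(target=philosopher, args=(i, 50)) for i in range(N)]
for t in threads: t.start()
for t in threads: t.join()
print(meals)  # every philosopher eats 50 times; no deadlock
```

If every philosopher instead grabbed the left fork first, all five could hold one fork and wait forever for the other, which is exactly the circular-wait deadlock discussed in the following slides.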
  15. Deadlocks
  16. Bridge Crossing Example <ul><li>Traffic only in one direction. </li></ul><ul><li>Each section of a bridge can be viewed as a resource. </li></ul><ul><li>If a deadlock occurs, it can be resolved if one car backs up (preempt resources and rollback). </li></ul><ul><li>Several cars may have to be backed up if a deadlock occurs. </li></ul><ul><li>Starvation is possible. </li></ul>
  17. Deadlocks <ul><li>Deadlock: processes waiting indefinitely with no chance of making progress. </li></ul><ul><li>Starvation: a process waits indefinitely while other processes make progress. </li></ul>
  18. Deadlocks <ul><li>Deadlocks arise in many settings, not just operating systems </li></ul><ul><ul><li>Networks </li></ul></ul><ul><ul><ul><li>Two processes may deadlock if each blocks waiting to receive a message from the other before sending its own </li></ul></ul></ul><ul><ul><ul><ul><ul><li>Receiving/waiting blocks writing </li></ul></ul></ul></ul></ul><ul><ul><li>Databases. </li></ul></ul><ul><ul><li>Spooling/streaming data. </li></ul></ul>
  19. Deadlock Characterization Deadlock can arise if four conditions hold simultaneously: <ul><li>Mutual exclusion: only one process at a time can use a resource. </li></ul><ul><li>Hold and wait: a process holding at least one resource is waiting to acquire additional resources held by other processes. </li></ul><ul><li>No preemption: a resource can be released only voluntarily by the process holding it, after that process has completed its task. </li></ul><ul><li>Circular wait: there exists a set {P0, P1, …, Pn} of waiting processes such that P0 is waiting for a resource held by P1, P1 is waiting for a resource held by P2, …, Pn-1 is waiting for a resource held by Pn, and Pn is waiting for a resource held by P0. </li></ul>
  20. Resource-Allocation Graph A set of vertices V and a set of edges E. <ul><li>V is partitioned into two types: </li></ul><ul><ul><li>P = {P1, P2, …, Pn}, the set consisting of all the processes in the system. </li></ul></ul><ul><ul><li>R = {R1, R2, …, Rm}, the set consisting of all resource types in the system. </li></ul></ul><ul><li>Request edge – directed edge Pi → Rj </li></ul><ul><li>Assignment edge – directed edge Rj → Pi </li></ul>
  21. Resource-Allocation Graph (Cont.) <ul><li>Process </li></ul><ul><li>Resource type with 4 instances </li></ul><ul><li>Pi requests an instance of Rj </li></ul><ul><li>Pi is holding an instance of Rj </li></ul>
  22. Example of a Resource Allocation Graph
  23. Resource Allocation Graph With A Deadlock
  24. Graph With A Cycle But No Deadlock
  25. Basic Facts <ul><li>If the graph contains no cycles ⇒ no deadlock. </li></ul><ul><li>If the graph contains a cycle ⇒ </li></ul><ul><ul><li>if only one instance per resource type, then deadlock. </li></ul></ul><ul><ul><li>if several instances per resource type, possibility of deadlock. </li></ul></ul>
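Cycle detection in a resource-allocation graph is a standard depth-first search for a back edge. The graph encoding below (a dict of adjacency lists, with processes and resources as plain strings) is an illustrative choice:

```python
# DFS cycle detection for a resource-allocation graph.
def has_cycle(graph):
    """graph: dict mapping each node to the nodes it points to."""
    visiting, done = set(), set()

    def dfs(node):
        visiting.add(node)
        for nxt in graph.get(node, []):
            if nxt in visiting:              # back edge -> cycle found
                return True
            if nxt not in done and dfs(nxt):
                return True
        visiting.remove(node)
        done.add(node)
        return False

    return any(dfs(n) for n in graph if n not in done)

# P1 holds R1 and requests R2; P2 holds R2 and requests R1:
# a cycle, and (with single-instance resources) a deadlock.
deadlocked = {"P1": ["R2"], "R2": ["P2"], "P2": ["R1"], "R1": ["P1"]}
# Same graph but P2 requests nothing: no cycle, no deadlock.
ok = {"P1": ["R2"], "R2": ["P2"], "P2": [], "R1": ["P1"]}
print(has_cycle(deadlocked), has_cycle(ok))  # True False
```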
  26. Methods for Handling Deadlocks <ul><li>Ensure that the system will never enter a deadlock state. </li></ul><ul><li>Allow the system to enter a deadlock state and then recover. </li></ul><ul><li>Ignore the problem and pretend that deadlocks never occur in the system; used by most operating systems, including UNIX. </li></ul>
  27. Deadlock Prevention Restrain the ways requests can be made: <ul><li>Mutual Exclusion – not required for sharable resources; must hold for nonsharable resources. </li></ul><ul><li>Hold and Wait – must guarantee that whenever a process requests a resource, it does not hold any other resources. </li></ul><ul><ul><li>Require process to request and be allocated all its resources before it begins execution, or allow process to request resources only when the process has none. </li></ul></ul><ul><ul><li>Low resource utilization; starvation possible. </li></ul></ul>
  28. Deadlock Prevention (Cont.) <ul><li>No Preemption – </li></ul><ul><ul><li>If a process that is holding some resources requests another resource that cannot be immediately allocated to it, then all resources currently being held are released. </li></ul></ul><ul><ul><li>Preempted resources are added to the list of resources for which the process is waiting. </li></ul></ul><ul><ul><li>Process will be restarted only when it can regain its old resources, as well as the new ones that it is requesting. </li></ul></ul><ul><li>Circular Wait – impose a total ordering of all resource types, and require that each process requests resources in an increasing order of enumeration. </li></ul>
  29. Deadlock Avoidance Requires that the system has some additional a priori information available. <ul><li>Simplest and most useful model requires that each process declare the maximum number of resources of each type that it may need. </li></ul><ul><li>The deadlock-avoidance algorithm dynamically examines the resource-allocation state to ensure that there can never be a circular-wait condition. </li></ul><ul><li>Resource-allocation state is defined by the number of available and allocated resources, and the maximum demands of the processes. </li></ul>
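The avoidance approach described here is the core of the banker's algorithm safety check: given the available resources, current allocations, and declared maximum demands, is there some order in which every process can finish? This sketch uses a common textbook instance; the numbers are illustrative, not from the slides:

```python
# Safety check at the heart of the banker's algorithm.
def is_safe(available, allocation, maximum):
    n = len(allocation)
    # need[i][j] = maximum demand minus what is already allocated
    need = [[m - a for m, a in zip(maximum[i], allocation[i])] for i in range(n)]
    work = list(available)
    finished = [False] * n
    progress = True
    while progress:
        progress = False
        for i in range(n):
            if not finished[i] and all(need[i][j] <= work[j] for j in range(len(work))):
                # Process i can run to completion, then release its resources.
                work = [w + a for w, a in zip(work, allocation[i])]
                finished[i] = True
                progress = True
    return all(finished)  # safe iff every process can finish in some order

available  = [3, 3, 2]
allocation = [[0, 1, 0], [2, 0, 0], [3, 0, 2], [2, 1, 1], [0, 0, 2]]
maximum    = [[7, 5, 3], [3, 2, 2], [9, 0, 2], [2, 2, 2], [4, 3, 3]]
print(is_safe(available, allocation, maximum))  # True: a safe sequence exists
```

For this instance the check succeeds via the safe sequence P1, P3, P4, P0, P2; a request is granted only if the state after granting it would still pass this check.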
  30. Useful Resources <ul><li>Amal Al-Hammad http://os1h.pbwiki.com/deadlock </li></ul><ul><li>Wajan Tamem http://os3a.pbwiki.com/%D8%A7%D9%84%D8%AC%D9%85%D9%88%D8%AF%20Deadlock </li></ul>
