operating system 1


  1. 1. GLORY BE TO MOTHER SARASWATHI
  2. 2. OPERATING SYSTEM Def: An operating system is a resource manager. Example: suppose I am attending a phone call and some urgent work from another party interrupts me. I have to decide whether to continue the telephone call and ask that party to wait for some time, or to interrupt the call, do the urgent work, and resume the call where I left it. In the same way, an operating system manages the resources of the computer system. Now what are the resources of the computer system? 1. CPU: performs the execution of a program. The user writes the program in a high-level language; the program is compiled to create an executable file, and this executable file is executed by the CPU.
  3. 3. The executable file is stored on disk, and when the CPU wants to execute it, the file has to be brought from the hard disk into main memory. The CPU cannot directly access secondary storage devices such as magnetic tape, hard disk, pen drive, etc.  A DOS-based system is a single-user, single-programming system; in other words we cannot execute more than one program at a time.  A UNIX-based system is a multi-user system: more than one program can be executed at the same time.  It appears that the machine gives all of its time to us, but in fact it gives each user only a part of its time.  2. Main memory: the main memory has to be shared among the several user programs. It is the responsibility of the operating system to manage the main memory of the system.
  4. 4.  3. Secondary storage: magnetic devices such as the floppy disk and hard disk, and the CD.  Why do we have two types of memory when both serve the same purpose of storage? Ans: because of the way the CPU accesses them. The CPU accesses main memory directly, but secondary storage devices only indirectly.  When the CPU does not find the data/program file in main memory, it asks the device driver to check whether the required file is on the device or not.  4. Input/output devices: popularly known as I/O devices.
  5. 5. DISTRIBUTED COMPUTING SYSTEM 1. Advantage: data can be stored across multiple systems. 2. No need for duplication of data. Managing these resources, which leads to efficient working of the distributed computing system, is done by the operating system. An operating system is itself a program which is executed by the machine. RESPONSIBILITIES OF THE OPERATING SYSTEM 1. CPU management / process management. Process: a program in execution, that is, a program whose execution has started. Terms: waiting time and turnaround time.
  6. 6. Waiting time: if the program is submitted to the computer at time t0 and the CPU starts executing it at time ti, then the time the program waits before its execution starts is (ti - t0). Thus (ti - t0) is called the waiting time. Turnaround time: if the program is submitted at time t0 and the CPU gives the output of the program at time tp, then (tp - t0) is called the turnaround time. Example (execution times): J1 = 15, J2 = 8, J3 = 10, J4 = 3. If we assume the jobs arrive in this sequence, then W.T. for J1 = 0, W.T. for J2 = 15, W.T. for J3 = 23, W.T. for J4 = 33.
  7. 7. The total waiting time is 71 units; this ordering is called first come, first served (FCFS). The main aim is to reduce the waiting time, so the jobs should be reordered. Executing in the order J4, J2, J3, J1 gives WT(J4) = 0, WT(J2) = 3, WT(J3) = 11, WT(J1) = 21, a total of 35 units. 2. Shortest job scheduling: the jobs are executed in order of increasing execution time, that is, the job with the minimum execution time is executed first.
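The FCFS and shortest-job-first totals above can be checked with a few lines of Python (a sketch; the helper name waiting_times is ours, not from the slides):

```python
def waiting_times(bursts):
    """Waiting time of each job when jobs run in the given order."""
    waits, elapsed = [], 0
    for b in bursts:
        waits.append(elapsed)   # this job waits for everything before it
        elapsed += b
    return waits

fcfs = [15, 8, 10, 3]   # arrival order J1, J2, J3, J4
sjf = sorted(fcfs)      # shortest job first: J4, J2, J3, J1

print(waiting_times(fcfs))      # [0, 15, 23, 33], total 71
print(sum(waiting_times(sjf)))  # 0 + 3 + 11 + 21 = 35
```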
  8. 8. PROCESS MANAGEMENT I/O burst of a program: the time during which the process waits for an I/O operation to complete, such as the reading of a file from the disk. CPU burst of a program: the time during which the program actually executes on the CPU. PROCESS STATE DIAGRAM ELEMENTS: 1. New: when we initiate a program for execution. 2. Ready: when the process is loaded into memory and is ready for execution. The CPU will execute the program if and only if it is in the READY state, not in the NEW state. 3. Active (running): when the CPU actually executes the job. 4. Halted: when the CPU has completed the execution of the job.
  9. 9.  Waiting state: when the CPU cannot continue executing the program, perhaps because of an I/O operation, the job is suspended and put into the waiting state. This time is used for the CPU burst of another job. When the I/O operation completes, the waiting job should move towards the active state, but the other job being executed by the CPU is in progress and has not completed yet, so the job moves from the waiting state back to the READY state.  Thus more than one job can be executed by the CPU "simultaneously", but not strictly simultaneously, because the CPU cannot execute more than one instruction at a time; there is time multiplexing among the jobs.  We cannot know the next CPU burst of a job in advance, so we need a predictor for it: Tn+1 = e*tn + (1 - e)*Tn, where Tn+1 is the predicted next burst time, tn is the actual burst time just measured, and Tn is the previous prediction.  How to choose T0? Answer: execute the jobs first come, first served initially.  T1 = e*t0 + (1 - e)*T0; here we take t0 = T0.
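The exponential-average predictor Tn+1 = e*tn + (1 - e)*Tn can be sketched directly (the function name and the sample burst times are assumptions for illustration):

```python
def predict_next_burst(measured, t0, e=0.5):
    """Fold the exponential average over the measured CPU bursts."""
    prediction = t0                       # T0, the initial guess
    for t in measured:
        prediction = e * t + (1 - e) * prediction   # Tn+1 = e*tn + (1-e)*Tn
    return prediction

# With T0 = 10 and measured bursts 6, 4, 6:
# after 6 -> 8.0, after 4 -> 6.0, after 6 -> 6.0
print(predict_next_burst([6, 4, 6], t0=10))  # 6.0
```

With e = 1 the predictor tracks only the last burst; with e = 0 it never moves from T0.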
  10. 10.  Priority scheduling: we specify the priority of each job; the job with the maximum priority executes first.  Shortest job first is a special case of priority scheduling in which the priority is determined by the minimum burst time of the job: P(J) = 1 / TIME(J), where P is the priority.  The above three scheduling techniques are called non-preemptive scheduling.  Preemptive scheduling: the CPU is taken away from the running job when a job with higher priority (or smaller burst time) arrives, and is given to that job.  First come, first served cannot be preemptive.  Example: J1 = 15, J2 = 9, J3 = 3, J4 = 5.  Preemptive shortest-remaining-time scheduling: the job in the ready queue with the minimum remaining burst time is executed first; if a newly arrived job has a smaller burst time than the remaining burst time of the current job, the current job is suspended and the newly arrived job is executed. The ready queue is then checked again for the job with the minimum remaining time.  Preemptive priority scheduling: if a newly arrived job has a higher priority than the current job, the current job is preempted and the CPU is given to the newly arrived job.
  11. 11.  The problem with the above schemes is starvation. It can be solved by regularly incrementing the priority of a job with the time it spends waiting in the ready queue (aging). Thus the priority of a job is decided not only by the user but also by the computer.  The CPU's scheduling-decision time should be negligible.  CPU-bound jobs: jobs which take less time in I/O operations and have more CPU burst time. In other words, jobs whose CPU burst time exceeds their I/O burst time are called CPU-bound jobs.  I/O-bound jobs: jobs whose I/O burst time exceeds their CPU burst time are called I/O-bound jobs.  So there are two types of scheduler: 1. Short-term scheduler: moves jobs from the ready queue to the active state; it decides which job is put into the active state. 2. Long-term scheduler: moves jobs from the new queue to the ready queue; it decides which jobs should reside in main memory.  The ready queue should contain a mixture of both I/O-bound and CPU-bound jobs so that none of the resources remains idle, and it is the responsibility of the long-term scheduler to decide which jobs are put into the ready queue. The long-term scheduler may take a long time over its decision, but the short-term scheduler must be fast because it runs very frequently.
  12. 12.  Round-robin scheduling: we make no decision on the basis of priority or CPU burst time; all jobs are treated equally. CPU time is divided into quanta, and each job executes for at most one quantum at a time. If the CPU burst of a job is longer than the quantum, the job is suspended, put back into the ready queue, and the next job is selected from the ready queue and executed for the quantum time q.  Jobs J1 = 4, J2 = 2, J3 = 3, J4 = 6 with q = 2, on a first come, first served basis: the schedule is J1 J2 J3 J4 J1 J3 J4 J4 with slice lengths 2 2 2 2 2 1 2 2.  Multilevel queue scheduling: if we group the jobs on the basis of CPU burst time, we have multiple queues instead of a single queue, e.g. time <= 2, time <= 5, time > 5. We give preference to the first queue; only when all the jobs in the first queue have been executed are the jobs in the second queue executed. Jobs in the third queue have the least priority.
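The round-robin trace above can be reproduced with a small simulation (a sketch; the function name and the (job, slice) tuple format are ours):

```python
from collections import deque

def round_robin(bursts, q):
    """Return the list of (job, slice_length) pairs for quantum q."""
    ready = deque(bursts)            # FIFO ready queue of (name, remaining)
    schedule = []
    while ready:
        name, remaining = ready.popleft()
        run = min(q, remaining)      # run for at most one quantum
        schedule.append((name, run))
        if remaining - run > 0:      # unfinished: back to the tail
            ready.append((name, remaining - run))
    return schedule

trace = round_robin([("J1", 4), ("J2", 2), ("J3", 3), ("J4", 6)], q=2)
print(trace)
# [('J1', 2), ('J2', 2), ('J3', 2), ('J4', 2), ('J1', 2), ('J3', 1), ('J4', 2), ('J4', 2)]
```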
  13. 13.  That means the jobs in the first queue are I/O-bound and the jobs in the third queue are CPU-bound.  The disadvantage is that there is again a problem of starvation.  Multilevel feedback queue: here we do not know the nature of a job in advance. A job in the first queue may change its behaviour from I/O-bound to CPU-bound and is then shifted to the next lower-priority queue; if a job in the third queue changes its behaviour from CPU-bound to I/O-bound in between, it is moved back up to a higher queue. Thus the nature of a job is dynamic, and this is a variation of round-robin scheduling.  That means a job enters at the first queue and executes there for the quantum q specified for that queue; if the execution of the job is not completed, it is moved to the next level. For example, a job of length 3.5 executes in the first queue for a quantum of 2 and the remainder executes in the next queue.
  14. 14.  Two models of distributed computing: 1. Workstation model: a number of workstations connected over the same LAN. 2. Processor-pool model: the user is provided with a terminal and there is no processing capability at the user side; the processing is done at the server side, so the server has many CPUs.  Allocation techniques: 1. Non-migratory: static in nature, in the sense that once a job has been allocated to one of the processors, that job has to be executed by that processor only; the allocation is fixed. 2. Migratory: dynamic in nature; we can shift a process from one processor to another processor.  CONCURRENT PROCESSING: Precedence graph: from this graph we can see which of the nodes can be executed independently. For each task we have two sets: the read set R(X) [the set of variables that are only referenced and not modified] and the write set W(X) [the set of variables that are modified].
  15. 15.  The process is divided into a number of subtasks; the conditions under which subtasks Xi and Xj can execute independently are: 1. R(Xi) ∩ R(Xj) may be NULL or not NULL (two reads never conflict) 2. R(Xi) ∩ W(Xj) = NULL 3. W(Xi) ∩ R(Xj) = NULL 4. W(Xi) ∩ W(Xj) = NULL.
  16. 16. Concurrent management: fork-join and cobegin-coend.
  S1;
  cobegin
    S3;
    begin
      S2;
      cobegin S4; S5 coend;
      S6
    end
  coend;
  S7
  [Figure: precedence graph over nodes 1-7 corresponding to S1-S7.]
  17. 17. [Figure: the same precedence graph over nodes 1-7 expressed with fork-join.]
  18. 18. Classical problems: Producer and Consumer: the idea is that the producer produces items and the consumer consumes the items produced by it. The producer produces items independently and the consumer consumes them independently, so the two processes can run concurrently. If the producer produces items at a faster rate, some of the items will be lost; if the consumer consumes items at a faster rate, the consumer has to wait for items to become available. An example is a computer and a printer: the computer acts as the producer, sending characters to the printer, and the printer acts as the consumer, printing the characters produced by the producer. The problem is that if the computer produces characters at a faster rate, some of them will be lost. The problem can be solved by having a buffer between the producer and the consumer. Two types of buffer: an unbounded buffer has no limitation on its size; a bounded buffer has a limited size. With a bounded buffer the producer has to wait when the buffer is full and the consumer has to wait when it is empty. With an unbounded buffer the producer never has to wait, since it always finds empty space in the buffer, but the consumer still has to wait if it does not find an item in the buffer.
  19. 19. type item = ...;
  var buffer: array[0..n-1] of item;
      in, out: 0..n-1;          // pointers; we assume the buffer is circular
      nextp, nextc: item;
  in := 0; out := 0;
  cobegin
  Producer:
  begin
    repeat
      ... produce an item in nextp ...
      while (in + 1) mod n = out do skip;   // buffer full
      buffer[in] := nextp;
      in := (in + 1) mod n;
    until false;
  end;
  20. 20. Consumer:
  begin
    repeat
      while in = out do skip;    // buffer is empty
      nextc := buffer[out];
      out := (out + 1) mod n;
      ... consume the item in nextc ...
    until false;
  end;
  coend;
  21. 21. Problem with the producer and consumer solution: consider the two instruction sequences executed on the producer and consumer side: 1. r1 = count 2. r1 = r1 + 1 3. count = r1 4. r2 = count 5. r2 = r2 - 1 6. count = r2. If count = 7 and the statements are interleaved in the order 1, 2, 4, 5, 3, 6, then r1 becomes 8 and r2 becomes 6; statement 3 sets count = 8 and statement 6 then sets count = 6, although one increment and one decrement should leave count = 7. This is called the critical section problem or critical region problem. That means that when a process executes in its critical section (the part of the code where the shared variable is modified), no other process is allowed to execute in the corresponding critical section. When the producer is modifying the count variable, the consumer is not allowed to modify the count variable. That is, access to the critical section should be mutually exclusive.
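The lost-update interleaving above can be replayed step by step (variable names follow the slide's r1/r2/count; the execution order is written out by hand, no threads involved):

```python
count = 7

r1 = count       # 1. producer reads 7
r1 = r1 + 1      # 2. producer computes 8
r2 = count       # 4. consumer reads the stale value 7
r2 = r2 - 1      # 5. consumer computes 6
count = r1       # 3. producer writes 8
count = r2       # 6. consumer writes 6: the increment is lost

print(count)  # 6, although one increment and one decrement should give 7
```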
  22. 22. Critical section problem 1. Mutual exclusion: only one process at a time is allowed to access the critical section. 2. Progress: suppose we have n processes that share the same critical section, and some subset of them wants to enter it. A decision has to be taken as to which process should enter the critical region, and the time for taking the decision must be finite. 3. Bounded waiting: after a process has requested entry into the critical region, the number of other processes entering and leaving the critical section before this request is granted must be bounded. That is, a process should not wait indefinitely in the queue for entering the critical section. Entry section: whenever a process requests entry into the critical section, this part of the code decides whether the process may enter, depending on the number of processes executing in the critical section. There should be a locking mechanism for the critical section. Exit section: whenever a process comes out of the critical section, the lock taken in the entry section should be undone.
  23. 23. [Figure: structure of process Pi: entry section, critical section, exit section, remainder section.]
  24. 24. FIRST APPROACH (strict alternation): processes P1 and P2, shared variable turn: 1 or 2.
  Pi: while turn ≠ i do skip;
      CS;
      turn := j;
  MUTUAL EXCLUSION: if process Pi finds turn = i, it enters the critical section, and on leaving it sets turn = j for the other process; this gives mutual exclusion. PROGRESS: suppose P1 enters the critical section and sets turn = 2, and at the same time P2 does not want to enter the critical section. If P1 then wants to enter again, it cannot, because P2 never enters the critical section and so never sets turn = 1; thus P1 will never enter and there is no progress. The solution is to involve in the decision only those processes that really want to enter the critical section.
  25. 25. SECOND APPROACH: flag: array[0..1] of boolean. If flag[0] is true, then P0 is in the critical section; if flag[1] is true, then P1 is in the critical section.
  Pi: while flag[j] do skip;
      flag[i] := true;
      CS;
      flag[i] := false;
  MUTUAL EXCLUSION is not ensured (both processes can pass the while test before either sets its flag), and PROGRESS is not ensured either.
  26. 26. THIRD APPROACH (Peterson's algorithm). Shared variables: var flag: array[0..1] of boolean; turn: 0..1.
  Pi: flag[i] := true;
      turn := j;    // an assertion that it is the jth process's turn to enter the CS
      while (flag[j] and turn = j) do skip;
      CS;
      flag[i] := false;
  This approach meets all three conditions for the critical section: 1. mutual exclusion, 2. progress, 3. bounded waiting.
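The third approach can be demonstrated with two Python threads (a sketch: it relies on CPython executing bytecodes sequentially under the GIL; on real hardware Peterson's algorithm also needs memory barriers, and N and the switch interval are demo values):

```python
import sys
import threading

sys.setswitchinterval(1e-4)   # switch threads often so busy waits stay short

flag = [False, False]
turn = 0
count = 0
N = 2000

def worker(i):
    global turn, count
    j = 1 - i
    for _ in range(N):
        flag[i] = True                  # entry section: announce intent
        turn = j                        # politely yield to the other thread
        while flag[j] and turn == j:    # busy wait
            pass
        count += 1                      # critical section
        flag[i] = False                 # exit section

threads = [threading.Thread(target=worker, args=(i,)) for i in (0, 1)]
for t in threads: t.start()
for t in threads: t.join()
print(count)  # 2 * N = 4000: no increment is ever lost
```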
  27. 27. For n processes, common variables: var flag: array[0..n-1] of (idle, want_in, in_CS); turn: 0..n-1.
  Pi: var j: 0..n;
  repeat
    flag[i] := want_in;
    j := turn;
    while j ≠ i do
      if flag[j] ≠ idle then j := turn
      else j := (j + 1) mod n;
    flag[i] := in_CS;
    j := 0;
    while (j < n) and (j = i or flag[j] ≠ in_CS) do
      j := j + 1;
  until (j >= n) and (turn = i or flag[turn] = idle);
  turn := i;
  CS;
  28. 28. j := (turn + 1) mod n;
  while flag[j] = idle do j := (j + 1) mod n;
  turn := j;
  flag[i] := idle;
  RS;
  SEMAPHORE VARIABLE: a semaphore is an integer variable which, after initialization, can be accessed only through two atomic operations, P(S) and V(S), where
  P(S): while S <= 0 do skip; S := S - 1;
  V(S): S := S + 1;
  Both operations are atomic in nature. How to implement mutual exclusion? mutex: semaphore (we assume the semaphore has been initialized, so the initial value of mutex is 1).
  Pi: P(mutex);
      CS;
      V(mutex);
      RS;
  29. 29. Here, when process Pi finds that the value of mutex is 1, it enters the critical section: the value is greater than 0, so it breaks out of the while loop and decrements mutex to 0. When another process then wants to enter the critical section, it cannot, because the value of mutex is 0; it is unable to break out of the while loop and is stuck there. When the first process completes its execution in the critical section, it comes out and sets the value of mutex back to 1. This solution ensures mutual exclusion and progress but not bounded waiting. A semaphore variable can also be used for process synchronization. What is process synchronization? Process synchronization is required when one process must wait for another to complete some operation before proceeding. A semaphore can be used for synchronization by defining a sync variable.
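The P(mutex)/CS/V(mutex) pattern maps directly onto Python's threading.Semaphore, where acquire() plays the role of P and release() the role of V (a sketch; the thread and iteration counts are demo values):

```python
import threading

mutex = threading.Semaphore(1)   # semaphore initialised to 1: a mutex
count = 0

def worker():
    global count
    for _ in range(50000):
        mutex.acquire()   # P(mutex): blocks while the value is 0
        count += 1        # critical section
        mutex.release()   # V(mutex): value back to 1, wakes one waiter

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads: t.start()
for t in threads: t.join()
print(count)  # 4 * 50000 = 200000
```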
  30. 30. sync: semaphore; sync := 0. Suppose we want statement Sj to be executed only after Si. Before the execution of statement Sj its process performs P(sync), and the other process performs V(sync) after Si:
    P(sync);        Si;
    Sj;             V(sync);
  If the process of Sj arrives first, it finds sync = 0 and waits; the V(sync) executed after Si raises sync above 0 and releases it, so Sj can only run after Si has completed.
  31. 31. var a, b, c, d, e, f, g: semaphore, all initialized to 0;
  begin
    cobegin
      begin S1; V(a); V(b) end
      begin P(a); S2; S4; V(c); V(d) end
      begin P(b); S3; V(e) end
      begin P(c); S5; V(f) end
      begin P(d); P(e); S6; V(g) end
      begin P(f); P(g); S7 end
    coend
  end
  [Figure: precedence graph over nodes 1-7.]
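The cobegin block above can be run as real threads, with one Python semaphore per precedence edge (a sketch; the thread names and the order-recording helper are ours):

```python
import threading

order, log_lock = [], threading.Lock()
a, b, c, d, e, f, g = (threading.Semaphore(0) for _ in range(7))

def S(name):
    """Stand-in for statement Si: just record that it ran."""
    with log_lock:
        order.append(name)

def t1(): S("S1"); a.release(); b.release()                        # S1; V(a); V(b)
def t2(): a.acquire(); S("S2"); S("S4"); c.release(); d.release()  # P(a); S2; S4; V(c); V(d)
def t3(): b.acquire(); S("S3"); e.release()                        # P(b); S3; V(e)
def t4(): c.acquire(); S("S5"); f.release()                        # P(c); S5; V(f)
def t5(): d.acquire(); e.acquire(); S("S6"); g.release()           # P(d); P(e); S6; V(g)
def t6(): f.acquire(); g.acquire(); S("S7")                        # P(f); P(g); S7

threads = [threading.Thread(target=fn) for fn in (t1, t2, t3, t4, t5, t6)]
for t in threads: t.start()
for t in threads: t.join()
print(order)  # S1 is always first and S7 always last
```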
  32. 32. The problem with the above approach is busy waiting: a process checks repeatedly for a condition; it is "waiting" for the condition, but it is "busy" checking for it, which usually wastes CPU. So we modify the semaphore: we define the semaphore variable as a structure in which one field contains the integer value and another field contains the list of processes waiting on that semaphore.
  type semaphore = record
    value: integer;
    L: list of waiting processes;
  end;
  S: semaphore (accessed as S.value and S.L)
  P(S): S.value := S.value - 1;
        if S.value < 0 then begin
          add this process to S.L;
          block;
        end;
  V(S): S.value := S.value + 1;
        if S.value <= 0 then begin
          remove a process P from S.L;
          wakeup(P);
        end;
  wakeup(P) changes the state of P from waiting to ready.
  33. 33. In the critical section problem, which process enters the critical section next is decided by the exit section; with semaphores it is decided by the V(S) operation. Producer/consumer problem: there are three semaphore variables, full, empty and mutex, and two variables of type item, nextp and nextc. full := 0; empty := n (because all n locations in the buffer are initially empty); mutex := 1.
  Producer:                      Consumer:
  repeat                         repeat
    produce an item in nextp;      P(full);
    P(empty);                      P(mutex);
    P(mutex);                      remove an item from the buffer to nextc;
    add nextp to the buffer;       V(mutex);
    V(mutex);                      V(empty);
    V(full);                       consume the item in nextc;
  until false;                   until false;
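The semaphore scheme above translates almost line for line into Python (a sketch; the buffer size n, the item count and the deque-based buffer are demo choices):

```python
import threading
from collections import deque

n = 3
buffer = deque()
full = threading.Semaphore(0)    # number of items currently in the buffer
empty = threading.Semaphore(n)   # number of free slots
mutex = threading.Semaphore(1)
consumed = []

def producer():
    for item in range(10):
        empty.acquire()          # P(empty): wait for a free slot
        mutex.acquire()          # P(mutex)
        buffer.append(item)      # add nextp to the buffer
        mutex.release()          # V(mutex)
        full.release()           # V(full): one more item available

def consumer():
    for _ in range(10):
        full.acquire()           # P(full): wait for an item
        mutex.acquire()
        consumed.append(buffer.popleft())
        mutex.release()
        empty.release()          # V(empty): one more slot freed

p, c = threading.Thread(target=producer), threading.Thread(target=consumer)
p.start(); c.start(); p.join(); c.join()
print(consumed)  # [0, 1, ..., 9]: nothing lost or duplicated
```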
  34. 34. Here the producer and consumer cannot access the buffer at the same time, because of the mutex semaphore variable. After the execution of the V(mutex) operation the buffer is freed, and V(full) on the producer side indicates that one more item has been added to the buffer. On the consumer side, the consumer has to perform the P(full) operation because if the number of items is 0 it must block; that is why it has to check whether the buffer has some item or not. V(empty) on the consumer side increments empty by 1, indicating that one more location has been freed. Reader/writer problem: suppose we have a shared file which can be accessed by a number of reader and writer processes; we allow more than one reader process to access the file at a time, but only one writer process. Two semaphore variables: mutex := 1 and wrt := 1; an integer variable readcount := 0.
  Reader:                                 Writer:
  P(mutex);                               P(wrt);
  readcount := readcount + 1;             ... write ...
  if readcount = 1 then P(wrt);           V(wrt);
  V(mutex);
  ... read ...
  P(mutex);
  readcount := readcount - 1;
  if readcount = 0 then V(wrt);
  V(mutex);
  35. 35. On the reader side, if the value of readcount is 1 (this is the first reader), the process performs P(wrt), so no writer process can execute its write operation while readers are present. The readcount variable itself is protected by the P(mutex)/V(mutex) pair, ensuring mutual exclusion on it. If a second reader process wants to read the file, it increments readcount to 2; a writer arriving in between will not succeed in writing into the file, because the first reader holds wrt. After reading, each reader executes P(mutex) and decrements readcount, and the reader that brings readcount to 0 performs V(wrt); only then can a writer perform its write operation successfully. But this does not solve the problem of starvation (writers may starve). Deadlock problem: suppose there are two processes P1 and P2 and two resources R1 and R2. P1 has acquired R1 and will require resource R2 in the future; similarly P2 has acquired R2 and will require resource R1 in the future. P1 will execute only if it acquires R2, and P2 only if it acquires R1. Thus both processes wait for a resource and neither executes, as neither releases the resource the other needs. This condition is called deadlock.
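The readers-writers scheme above can be exercised with Python semaphores (a sketch; the shared dictionary and the thread counts are demo choices, and the snapshot list is ours):

```python
import threading

wrt = threading.Semaphore(1)
mutex = threading.Semaphore(1)
readcount = 0
shared = {"value": 0}
snapshots = []

def reader():
    global readcount
    mutex.acquire()
    readcount += 1
    if readcount == 1:
        wrt.acquire()            # first reader locks out writers
    mutex.release()
    snapshots.append(shared["value"])   # read the file
    mutex.acquire()
    readcount -= 1
    if readcount == 0:
        wrt.release()            # last reader lets writers in again
    mutex.release()

def writer():
    wrt.acquire()
    shared["value"] += 1         # write, alone in the critical section
    wrt.release()

threads = [threading.Thread(target=writer) for _ in range(5)]
threads += [threading.Thread(target=reader) for _ in range(5)]
for t in threads: t.start()
for t in threads: t.join()
print(shared["value"])  # 5: every write applied under mutual exclusion
```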
  36. 36. The normal protocol for using a resource is: 1. the process requests the resource from the OS, 2. uses the resource, 3. releases the resource. Formal definition of deadlock: a set of processes is said to be in deadlock when every process in the set waits for an event that can only be caused by another process in the set. The event here is the release of a resource held by some other process. Four conditions for deadlock to occur: 1. Mutual exclusion: there is at least one resource in the system which cannot be shared by more than one process. 2. Hold and wait: a process is holding some resources and waiting for other resources to be released by some other process. 3. No preemption: once a resource is allocated to a process, it cannot be preempted; the process releases it only on completion. 4. Circular wait: an extension of hold and wait, kept separate for ease of analysis. If any one of these four conditions is absent, deadlock cannot occur.
  37. 37. (Traffic example.) At a crossing, traffic can move in only one direction at a time; at each junction the traffic is unable to move in its direction, which leads to deadlock. Mutual exclusion: at a junction, traffic can move in only one direction; two streams of traffic cannot move through the junction simultaneously, so the use of the junction by traffic is mutually exclusive. Hold and wait: the traffic moving in the south direction is holding junction 1 and at the same time waiting for junction 2 to be released by the traffic moving in the west direction. No preemption: once traffic has occupied the road, only that traffic can release it; it is not forcibly removed. Circular wait: the waiting is circular in nature.
  38. 38. Solutions to the above problem: 1. Mutual exclusion: if we build a flyover at each junction, we can avoid deadlock, since the other traffic can pass over the flyover. 2. Hold and wait: traffic may occupy the road after a junction only if the next junction is free; that is, traffic at junction 1 can occupy the road after junction 1 only if junction 2 is free. 3. No preemption: at the junction, the traffic holding it is forcibly preempted and the junction is given to the other traffic. 4. Circular wait: we can break it similarly. RESOURCE ALLOCATION GRAPH: a system has a finite number of resources of various types. Each resource type may have a number of identical instances, each with its own ID. G(V, E), where V contains the processes (represented as circles) and the resources (represented as rectangles containing small dots, one per instance of that resource type), and E contains two types of edges: request edges and allocation edges.
  39. 39. Example: [Figure: resource-allocation graph with processes P1, P2, P3 (circles), resources R1-R4 (rectangles with instance dots), request edges and allocation edges.]
  40. 40. In the above graph, P3 is not waiting for any resource, so it will complete its execution and resource R3 will become available. P2 has acquired R1 and R2 and is waiting for R3 to be released by P3; thus R3 becomes available and is allocated to P2. Similarly for P1. Thus the system is deadlock-free. Now consider another situation. [Figure: the same graph with an additional request edge.] Here P3 is waiting for R2 to be freed by P2, and P2 is waiting for R3 to be freed by P3, which leads to deadlock. There are two cycles in the graph. If there is a cycle in the graph, it may or may not lead to deadlock.
  41. 41. If we add another instance of R2, that instance can be allocated to P3, so P3 will complete its execution and R3 will become available to P2. Thus there is no deadlock even though a cycle exists in the graph. If every resource type contains a single instance, then a cycle in the graph guarantees the existence of deadlock; this is not true with multiple instances per resource type. [Figure: the graph with a second instance of R2.]
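For single-instance resources, the cycle test can be run on the wait-for graph derived from the resource-allocation graph (a sketch; the example graphs below are illustrative, not the figures from the slides):

```python
def has_cycle(graph):
    """DFS cycle detection on a wait-for graph {process: [processes it waits for]}."""
    WHITE, GRAY, BLACK = 0, 1, 2      # unvisited / on current path / done
    color = {v: WHITE for v in graph}

    def dfs(v):
        color[v] = GRAY
        for w in graph.get(v, []):
            if color.get(w, WHITE) == GRAY:      # back edge: a cycle
                return True
            if color.get(w, WHITE) == WHITE and dfs(w):
                return True
        color[v] = BLACK
        return False

    return any(color[v] == WHITE and dfs(v) for v in graph)

# A chain of waits is deadlock-free; a mutual wait is a deadlock.
print(has_cycle({"P1": ["P2"], "P2": ["P3"], "P3": []}))      # False
print(has_cycle({"P1": ["P2"], "P2": ["P3"], "P3": ["P2"]}))  # True
```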
  42. 42. APPROACHES TO DEADLOCK: 1. DEADLOCK PREVENTION: in this approach, we put restrictions on the way a process may request resources; that is, we break one of the four conditions for deadlock. 2. DEADLOCK AVOIDANCE: in this approach, we do not restrict the way processes request resources; a process can request resources at any time. The system decides whether to grant the resources to the process immediately or to ask the process to wait until they are released by some other processes. The system may delay an allocation because it analyses whether granting the request could bring the system into a state from which deadlock is possible; such a state is called an unsafe state. 3. DEADLOCK DETECTION AND RECOVERY: in this approach, we allow processes to acquire resources whenever they are available and periodically check whether a deadlock exists.
  43. 43. Deadlock prevention: 1. Mutual exclusion: some resources are by nature mutually exclusive, for example the printer, so this condition cannot be broken; in some cases ME cannot be avoided. 2. Hold and wait: a. before a process requests further resources, it must release whatever resources it has already acquired; or b. the process must request all the resources it will need, acquire them all, and then execute. The disadvantage is poor utilization of resources: we do not need all the resources at the same time, yet we hold them until the completion of the process. 3. No preemption: whenever a process P1 requests, say, R1 and that resource is held by some process P2, there are two options: a. when P1 requests R1 and has to wait, all the resources P1 is holding are released forcibly, and those resources are added to the list of resources that P1 will request in future; b. or we ask P2 to release R1, if P2 is itself waiting for some other resource. 4. Circular wait: it can be broken if we require processes to request resources in a particular order, via a function F: Ri → integer. A process can request Rj only if F(Rj) > F(Ri) for the resources Ri it holds; otherwise it should release Ri before putting in a request for Rj.
  44. 44. Suppose P1 → R1 → P2 → R2 → P3 → R3 → P4 → R4 → P1. Here P1 has acquired R4 and is requesting R1, which violates F(R1) > F(R4). As a hardware example, a process can request an input device, then a storage device, then an output device; it may also request just an input device and then an output device. But it cannot request an input or storage device once it has acquired the output device: to do that, the process must first release the output device, then request the input device and then the storage device. This may lead to poor utilization of the resources. DEADLOCK AVOIDANCE: the system must know in advance the maximum resource need of each process, although a process need not request all the resources at the same time. It is then the responsibility of the system to decide whether the requested resources should be given to the process or not.
  45. 45. Banker's algorithm: we have n processes and m resource types. Available is a vector of dimension m; Available[j] = k means k instances of resource type j are available. Max is a matrix of order n×m; Max[i, j] = k indicates that process i may require up to k instances of resource type j. Allocation, of order n×m: Allocation[i, j] = k indicates that k instances of resource type j are currently allocated to process i. Need, of order n×m, indicates the remaining future requirement of each process: Need = Max − Allocation. Algorithm, when process Pi issues Request_i:
  1. If Request_i <= Need_i then go to step 2; else error.
  2. If Request_i <= Available then go to step 3; else Pi must wait.
  3. Available := Available − Request_i;   // resources are not physically allocated;
     Allocation_i := Allocation_i + Request_i;   // only the data structures are modified
     Need_i := Need_i − Request_i;
  4. Check whether the resulting state has a safe sequence; this is checked by the safety algorithm.
  46. 46. Safety algorithm:
1. work = Available; Finish[i] = false for all i.
2. Find an i such that Finish[i] = false and Need_i <= work. If no such i, go to step 4.
3. work = work + Allocation_i; Finish[i] = true; go to step 2.
4. If Finish[i] is true for all i, the system is in a safe state; otherwise it is not safe.
At step 2, two conditions can end the loop: either no process with Finish[i] = false remains, or every such remaining process has Need_i > work. In either case control reaches step 4.
  47. 47. Example: resources A = 10, B = 5, C = 7; five processes P0, P1, P2, P3, P4.

         Allocation   Max      Need     Available
         A B C        A B C    A B C    A B C
    P0   0 1 0        7 5 3    7 4 3    3 3 2
    P1   2 0 0        3 2 2    1 2 2
    P2   3 0 2        9 0 2    6 0 0
    P3   2 1 1        2 2 2    0 1 1
    P4   0 0 2        4 3 3    4 3 1

Running the safety algorithm with work = 3 3 2 gives the safe sequence P1 → P3 → P4 → P0 → P2:
1. P1: Finish[1] = true, work = 5 3 2
2. P3: Finish[3] = true, work = 7 4 3
3. P4: Finish[4] = true, work = 7 4 5
4. P0: Finish[0] = true, work = 7 5 5
5. P2: Finish[2] = true, work = 10 5 7
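The safety algorithm and the worked example above can be sketched in Python (a minimal sketch; `is_safe` is an illustrative name):

```python
def is_safe(available, allocation, need):
    """Safety algorithm: return a safe sequence of process indices,
    or None if the state is unsafe."""
    n, m = len(allocation), len(available)
    work = list(available)              # step 1: work = Available
    finish = [False] * n
    sequence = []
    progress = True
    while progress:                     # step 2: find an i with
        progress = False                # Finish[i] = false and Need_i <= work
        for i in range(n):
            if not finish[i] and all(need[i][j] <= work[j] for j in range(m)):
                for j in range(m):      # step 3: reclaim Allocation_i
                    work[j] += allocation[i][j]
                finish[i] = True
                sequence.append(i)
                progress = True
    return sequence if all(finish) else None   # step 4

# The five-process example with A = 10, B = 5, C = 7:
allocation = [[0,1,0], [2,0,0], [3,0,2], [2,1,1], [0,0,2]]
need       = [[7,4,3], [1,2,2], [6,0,0], [0,1,1], [4,3,1]]
print(is_safe([3,3,2], allocation, need))   # [1, 3, 4, 0, 2]
```

The returned sequence [1, 3, 4, 0, 2] is exactly the safe sequence P1 → P3 → P4 → P0 → P2 traced by hand above.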
  48. 48. Suppose P1 issues Request = [1 0 2]. The check passes because Request [1 0 2] <= Need_1 [1 2 2] and Request [1 0 2] <= Available [3 3 2]. Tentatively allocating gives Available = 2 3 0:

         Allocation   Max      Need     Available
         A B C        A B C    A B C    A B C
    P0   0 1 0        7 5 3    7 4 3    2 3 0
    P1   3 0 2        3 2 2    0 2 0
    P2   3 0 2        9 0 2    6 0 0
    P3   2 1 1        2 2 2    0 1 1
    P4   0 0 2        4 3 3    4 3 1

The safety algorithm again finds the safe sequence P1 → P3 → P4 → P0 → P2:
1. P1: Finish[1] = true, work = 5 3 2
2. P3: Finish[3] = true, work = 7 4 3
3. P4: Finish[4] = true, work = 7 4 5
4. P0: Finish[0] = true, work = 7 5 5
5. P2: Finish[2] = true, work = 10 5 7
Since the new state is safe, the request is granted.
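Steps 1 to 4 of the resource-request algorithm, with a rollback when the tentative state turns out unsafe, can be sketched as follows (a sketch; `try_request` is an illustrative name, and the safety check is inlined so the function is self-contained):

```python
def try_request(i, req, available, allocation, need):
    """Banker's resource-request algorithm: tentatively grant the
    request, run the safety check, and roll back if unsafe."""
    m = len(available)
    if any(req[j] > need[i][j] for j in range(m)):      # step 1
        raise ValueError("request exceeds declared maximum need")
    if any(req[j] > available[j] for j in range(m)):    # step 2
        return False                                    # process must wait
    for j in range(m):          # step 3: data structures modified,
        available[j] -= req[j]  # resources not physically allocated yet
        allocation[i][j] += req[j]
        need[i][j] -= req[j]
    # step 4: safety check -- can every process still run to completion?
    n = len(allocation)
    work, finish = list(available), [False] * n
    changed = True
    while changed:
        changed = False
        for k in range(n):
            if not finish[k] and all(need[k][j] <= work[j] for j in range(m)):
                for j in range(m):
                    work[j] += allocation[k][j]
                finish[k] = True
                changed = True
    if all(finish):
        return True             # grant: the new state is safe
    for j in range(m):          # roll back: the new state would be unsafe
        available[j] += req[j]
        allocation[i][j] -= req[j]
        need[i][j] += req[j]
    return False

avail = [3,3,2]
alloc = [[0,1,0],[2,0,0],[3,0,2],[2,1,1],[0,0,2]]
need  = [[7,4,3],[1,2,2],[6,0,0],[0,1,1],[4,3,1]]
print(try_request(1, [1,0,2], avail, alloc, need), avail)  # True [2, 3, 0]
```

As on the slide, the request [1 0 2] by P1 is granted and Available drops to [2 3 0].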
  49. 49. DETECTION AND RECOVERY. Data structures: Available of order m, Allocation of order n×m, Request of order n×m. Algorithm:
1. work = Available; Finish[i] = false if Allocation_i ≠ 0 (i.e. some resources are allocated to process i), otherwise Finish[i] = true.
2. Find an i such that Finish[i] = false and Request_i <= work. If no such i, go to step 4.
3. work = work + Allocation_i; Finish[i] = true; go to step 2.
4. If Finish[i] = false for some i, then the system is in deadlock, and those processes are the deadlocked ones.
For recovery, which process should be killed? a. The process which has executed for the minimum amount of time. b. The process which holds the minimum number of resources.
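The detection algorithm above can be sketched in Python (`detect_deadlock` is an illustrative name, and the two-process test case below is a hypothetical example, not from the slides):

```python
def detect_deadlock(available, allocation, request):
    """Deadlock-detection algorithm: return the list of deadlocked
    process indices (empty if there is no deadlock)."""
    n, m = len(allocation), len(available)
    work = list(available)                          # step 1
    # A process holding no resources cannot be part of a deadlock.
    finish = [all(allocation[i][j] == 0 for j in range(m)) for i in range(n)]
    progress = True
    while progress:                                 # steps 2-3
        progress = False
        for i in range(n):
            if not finish[i] and all(request[i][j] <= work[j] for j in range(m)):
                for j in range(m):
                    work[j] += allocation[i][j]
                finish[i] = True
                progress = True
    return [i for i in range(n) if not finish[i]]   # step 4

# Two processes each holding one resource and requesting the other's:
print(detect_deadlock([0,0], [[1,0],[0,1]], [[0,1],[1,0]]))  # [0, 1]
```

With nothing available and each process waiting on the other, both are reported deadlocked; one of them would then be killed using rule a or b above.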
  50. 50. The time complexity of both algorithms is of order O(m×n²). Deadlock avoidance with a resource-allocation graph: in addition to request edges and allocation edges, there are claim edges, which are created before the process starts execution. A claim edge is converted into a request edge when the process actually requests that particular resource type, and a request edge is converted into an allocation edge when the resource is actually allocated to the process. When the process releases the resource, the allocation edge is converted back into a claim edge, which is not done in ordinary resource allocation (there we simply delete the allocation edge after the process completes). The reason is that we do not know whether the process may want the resource again. Detecting a cycle in the graph requires O(n²) time. DEADLOCK IN A DISTRIBUTED ENVIRONMENT: Deadlock avoidance is very difficult to implement in a distributed environment. It is easier to implement deadlock prevention, or deadlock detection and recovery.
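Cycle detection in the resource-allocation graph can be sketched with a depth-first search (a sketch; the adjacency-dict representation is illustrative, using the P1 → R1 → … → R4 → P1 chain from slide 44 as the example):

```python
def has_cycle(graph):
    """DFS cycle detection on a directed graph given as an adjacency
    dict; runs in O(V + E), within the O(n^2) bound for n nodes."""
    WHITE, GREY, BLACK = 0, 1, 2
    nodes = set(graph) | {w for vs in graph.values() for w in vs}
    colour = {v: WHITE for v in nodes}

    def dfs(v):
        colour[v] = GREY                   # v is on the current DFS path
        for w in graph.get(v, []):
            if colour[w] == GREY:          # back edge => cycle
                return True
            if colour[w] == WHITE and dfs(w):
                return True
        colour[v] = BLACK                  # fully explored, no cycle via v
        return False

    return any(colour[v] == WHITE and dfs(v) for v in nodes)

# The chain P1 -> R1 -> P2 -> R2 -> P3 -> R3 -> P4 -> R4 -> P1:
cycle = {"P1": ["R1"], "R1": ["P2"], "P2": ["R2"], "R2": ["P3"],
         "P3": ["R3"], "R3": ["P4"], "P4": ["R4"], "R4": ["P1"]}
print(has_cycle(cycle))   # True
```

In the avoidance variant, the same check is run on the graph including claim edges before converting a claim edge into an allocation edge.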
  51. 51. Deadlock detection and recovery: every machine knows its own resources, and every machine maintains its own resource graph. There is one central coordinator (CC) which maintains the overall resource graph of all the machines. Every machine in the distributed system is responsible for transferring its resource-allocation graph to the central coordinator so that the coordinator can maintain the global graph: whenever there is a change in a local graph, the machine communicates that change to the coordinator, or it sends its changes periodically. Deadlock detection is done by the central coordinator; if the system reaches a deadlock state, the coordinator kills one of the processes. (Figure: machine 0's local graph with processes PA, PB and resources R, S; machine 1's local graph with process PC and resource T; and the combined global graph at the coordinator.) Suppose PB releases R and then requests T. If the request message reaches the coordinator before the release message, the global graph shows a cycle that does not really exist; this is called a false deadlock. To avoid it, each machine has to maintain a global time.
  52. 52. The process PB releases the resource; this first message arrives at the CC with timestamp t1. PB then requests T; this second message arrives at the CC with timestamp t2. Because the messages carry timestamps, the CC can order them correctly. If there is a deadlock cycle in the graph, the CC can send a special message to all the machines saying that there is a possibility of deadlock. This approach is called centralized deadlock detection. Distributed deadlock detection: every process in the system takes part in finding the deadlock. (Figure: a wait-for graph over processes P0 through P9.)
