DEADLOCK AVOIDANCE
Deadlock-prevention algorithms prevent deadlocks by restraining how requests can be
made. The restraints ensure that at least one of the necessary conditions for
deadlock cannot occur.
Possible side effects of preventing deadlocks are low device utilization and
reduced system throughput.
Deadlock Avoidance - An alternative method for avoiding deadlocks is to
require additional information about how resources are to be requested
Each request requires that the system consider
• the resources currently available
• the resources currently allocated to each process
• the future requests and releases of each process
to decide whether the current request can be satisfied or must wait to avoid
a possible future deadlock.
Resource Allocation State
The resource-allocation state is defined by the number of available and
allocated resources, and the maximum demands of the processes.
Safe State
A state is safe if the system can allocate resources to each process (up to its
maximum) in some order and still avoid a deadlock.
Safe sequence
A sequence of processes <P1, P2, ..., Pn> is a safe sequence for the current
allocation state if, for each Pi, the resources that Pi can still request can be
satisfied by the currently available resources plus the resources held by all
the Pj, with j < i.
A safe state is not a deadlock state; conversely, a deadlock state is an unsafe
state. However, not all unsafe states are deadlocks: an unsafe state may merely
lead to a deadlock.
Example
Consider a system with 12 magnetic tape drives and 3 processes: P0, P1,
and P2.
At time t0, the system is in a safe state. The sequence <P1, P0, P2> satisfies
the safety condition.
It is possible to go from a safe state to an unsafe state. At time t1, process
P2 requests and is allocated one more tape drive; the system is no longer in a
safe state.
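The two states above can be checked mechanically. The per-process holdings and maxima below (P0 holds 5 of a maximum 10, P1 holds 2 of 4, P2 holds 2 of 9) are the standard textbook values for this example; they are assumed here, since the text does not reproduce the table.

```python
# Safety check for the single-resource tape-drive example.
def is_safe(available, allocation, maximum):
    need = [m - a for m, a in zip(maximum, allocation)]
    finish = [False] * len(allocation)
    work = available
    order = []
    while True:
        for i, done in enumerate(finish):
            if not done and need[i] <= work:
                work += allocation[i]   # Pi runs to completion and releases its drives
                finish[i] = True
                order.append(i)
                break
        else:
            return all(finish), order

# At t0: 12 drives total, 9 allocated, 3 free -- safe via <P1, P0, P2>.
print(is_safe(3, [5, 2, 2], [10, 4, 9]))
# At t1: P2 is granted one more drive, leaving 2 free -- no longer safe.
print(is_safe(2, [5, 2, 3], [10, 4, 9]))
```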
Resource-Allocation Graph Algorithm
If we have a resource-allocation system with only one instance of each
resource type, a variant of the resource-allocation graph can be used for
deadlock avoidance.
In addition to the request and assignment edges, we introduce a new type of
edge, called a claim edge.
Claim Edge - A claim edge Pi -> Rj indicates that process Pi may request
resource Rj at some time in the future. This edge resembles a request edge
in direction, but is represented by a dashed line.
When process Pi requests resource Rj, the claim edge Pi -> Rj is converted
to a request edge. Similarly, when a resource Rj is released by Pi, the
assignment edge Rj -> Pi is reconverted to a claim edge Pi -> Rj.
Conditions
Before process Pi starts executing, all its claim edges must already appear
in the resource-allocation graph.
We can relax this condition by allowing a claim edge Pi -> Rj to be added to
the graph later, provided all the edges associated with process Pi are still
claim edges.
Suppose process Pi requests resource Rj. The request can be granted only if
converting the request edge Pi -> Rj to an assignment edge Rj -> Pi does
not result in the formation of a cycle in the resource-allocation graph.
If no cycle exists, then the allocation of the resource will leave the system
in a safe state. If a cycle is found, then the allocation would put the system
in an unsafe state; therefore, process Pi will have to wait for its request to
be satisfied.
Example
BANKER'S ALGORITHM
The resource-allocation graph algorithm is not applicable to a resource-
allocation system with multiple instances of each resource type.
When a new process enters the system, it must declare the maximum
number of instances of each resource type that it may need.
This number may not exceed the total number of resources in the system.
When a process requests a set of resources, the system must determine whether
the allocation of the resources will leave the system in a safe state. If it will,
the resources are allocated; otherwise, the process must wait until some other
process releases enough resources.
Let n be the number of processes in the system and m be the number of
resource types.
Data structures
Available: A vector of length m indicates the number of available resources
of each type. If Available[j] = k, there are k instances of resource type Rj
available.
Max: An n x m matrix defines the maximum demand of each process. If
Max[i,j] = k, then Pi may request at most k instances of resource type Rj.
Allocation: An n x m matrix defines the number of resources of each type
currently allocated to each process. If Allocation[i,j] = k, then process Pi is
currently allocated k instances of resource type Rj.
Need: An n x m matrix indicates the remaining resource need of each
process. If Need[i,j] = k, then Pi may need k more instances of resource type
Rj to complete its task.
Note that Need[i,j] = Max[i,j] - Allocation[i,j].
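The Need matrix is computed elementwise from Max and Allocation. A minimal sketch, using illustrative placeholder matrices rather than data from the text:

```python
# Need[i][j] = Max[i][j] - Allocation[i][j], computed elementwise.
def compute_need(maximum, allocation):
    return [[m - a for m, a in zip(max_row, alloc_row)]
            for max_row, alloc_row in zip(maximum, allocation)]

Max        = [[7, 5, 3], [3, 2, 2]]   # illustrative values only
Allocation = [[0, 1, 0], [2, 0, 0]]
print(compute_need(Max, Allocation))   # [[7, 4, 3], [1, 2, 2]]
```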
SAFETY ALGORITHM
The algorithm for finding out whether or not a system is in a safe state can
be described as follows:
Steps of Algorithm:
1. Let Work and Finish be vectors of length m and n, respectively.
Initialize Work = Available and Finish[i] = false for i = 1 to n.
2. Find an index i such that both
a) Finish[i] == false
b) Needi <= Work
If no such i exists, go to step 4.
3. Work = Work + Allocationi
Finish[i] = true
Go to step 2.
4. If Finish[i] == true for all i, then the system is in a safe state.
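The four steps above can be sketched in Python, with the vectors as lists and the matrices as nested lists (the two states checked at the end are illustrative, not from the text):

```python
# Safety algorithm: Work and Finish follow steps 1-4 above.
def safety(available, allocation, need):
    n, m = len(allocation), len(available)
    work = available[:]                       # step 1: Work = Available
    finish = [False] * n                      #         Finish[i] = false
    sequence = []
    while True:
        found = False
        for i in range(n):                    # step 2: Finish[i] == false and Needi <= Work
            if not finish[i] and all(need[i][j] <= work[j] for j in range(m)):
                for j in range(m):            # step 3: Work = Work + Allocationi
                    work[j] += allocation[i][j]
                finish[i] = True
                sequence.append(i)
                found = True
        if not found:                         # step 4: safe iff Finish[i] true for all i
            return all(finish), sequence

# Illustrative two-process, two-resource-type states:
print(safety([1, 1], [[1, 0], [0, 1]], [[1, 1], [0, 1]]))   # (True, [0, 1])
print(safety([1, 0], [[1, 0], [0, 1]], [[1, 1], [0, 1]]))   # (False, [])
```

The returned sequence, when the state is safe, is one valid safe sequence.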
RESOURCE-REQUEST ALGORITHM
Let Requesti be the request vector for process Pi. If Requesti[j] = k, then
process Pi wants k instances of resource type Rj. When a request for
resources is made by process Pi, the following actions are taken:
1. If Requesti <= Needi, go to step 2. Otherwise, raise an error condition,
since the process has exceeded its maximum claim.
2. If Requesti <= Available, go to step 3. Otherwise, Pi must wait, since the
resources are not available.
3. Have the system pretend to have allocated the requested resources to
process Pi by modifying the state as follows:
Available := Available - Requesti
Allocationi := Allocationi + Requesti
Needi := Needi - Requesti
If the resulting resource-allocation state is safe, the transaction is completed
and process Pi is allocated its resources. However, if the new state is unsafe,
then Pi must wait for Requesti and the old resource-allocation state is
restored.
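The three steps can be sketched as follows. The state is passed in as mutable lists, and a condensed safety check is inlined so the sketch runs on its own; the single-resource sample state at the end is illustrative only.

```python
def safety(available, allocation, need):
    # safety algorithm from the previous section, condensed
    work, finish = available[:], [False] * len(allocation)
    while True:
        for i, done in enumerate(finish):
            if not done and all(n <= w for n, w in zip(need[i], work)):
                work = [w + a for w, a in zip(work, allocation[i])]
                finish[i] = True
                break
        else:
            return all(finish)

def request_resources(i, request, available, allocation, need):
    if any(r > n for r, n in zip(request, need[i])):             # step 1
        raise ValueError("process has exceeded its maximum claim")
    if any(r > a for r, a in zip(request, available)):           # step 2
        return False                                             # Pi must wait
    # step 3: pretend to have allocated the requested resources
    available[:]  = [a - r for a, r in zip(available, request)]
    allocation[i] = [a + r for a, r in zip(allocation[i], request)]
    need[i]       = [n - r for n, r in zip(need[i], request)]
    if safety(available, allocation, need):
        return True                                              # grant the request
    # new state is unsafe: restore the old state, Pi must wait
    available[:]  = [a + r for a, r in zip(available, request)]
    allocation[i] = [a - r for a, r in zip(allocation[i], request)]
    need[i]       = [n + r for n, r in zip(need[i], request)]
    return False

avail, alloc, need = [2], [[1], [1]], [[1], [2]]
print(request_resources(0, [1], avail, alloc, need))   # True: state stays safe
print(avail, alloc, need)                              # [1] [[2], [1]] [[0], [2]]
```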
Example
Consider a system with five processes P0 through P4 and three resource
types: A (10 instances), B (5 instances), and C (7 instances). At time T0, the
following snapshot of the system has been taken:
Need = Max-allocation
The system is in a safe state: the sequence <P1, P3, P4, P2, P0> satisfies the
safety condition.
Suppose process P1 requests one additional instance of resource type A and
two instances of resource type C, so Request1 = (1,0,2).
Can we grant the request?
First check Request < Available (that is, (1,0,2) < (3,3,2)), which is true.
We then pretend that this request has been fulfilled, and we arrive at the
following new state:
We must determine whether this new system state is safe. To do so, we
execute our safety algorithm and find that the sequence <P1, P3, P4, P0, P2>
satisfies our safety requirement. Hence, we can immediately grant the
request of process P1.
A request for (3,3,0) by P4 cannot be granted, since the resources are not
available.
A request for (0,2,0) by P0 cannot be granted, even though the resources are
available, since the resulting state is unsafe.
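The example can be worked through end to end. The snapshot table itself is missing from the text above; the matrices below are an assumed reconstruction (the standard textbook values), chosen to be consistent with Available = (3,3,2) and the stated results.

```python
# Snapshot at time T0 (assumed reconstruction -- the text's table is missing):
Available  = [3, 3, 2]
Allocation = [[0, 1, 0], [2, 0, 0], [3, 0, 2], [2, 1, 1], [0, 0, 2]]
Max        = [[7, 5, 3], [3, 2, 2], [9, 0, 2], [2, 2, 2], [4, 3, 3]]
Need = [[m - a for m, a in zip(mr, ar)] for mr, ar in zip(Max, Allocation)]

def safe(avail, alloc, need):
    work, finish, seq = avail[:], [False] * len(alloc), []
    while True:
        for i in range(len(alloc)):
            if not finish[i] and all(n <= w for n, w in zip(need[i], work)):
                work = [w + a for w, a in zip(work, alloc[i])]
                finish[i] = True
                seq.append(i)
                break
        else:
            return all(finish), seq

print(safe(Available, Allocation, Need))   # (True, [1, 3, 0, 2, 4]) -- one valid safe sequence

# Grant Request1 = (1,0,2): pretend-allocate, then re-check safety.
Available, Allocation[1], Need[1] = [2, 3, 0], [3, 0, 2], [0, 2, 0]
print(safe(Available, Allocation, Need))   # still safe, so the request is granted

# Request4 = (3,3,0) cannot be granted: (3,3,0) exceeds Available = (2,3,0).
# Request0 = (0,2,0): the resources exist, but the resulting state is unsafe.
print(safe([2, 1, 0], [[0, 3, 0]] + Allocation[1:], [[7, 2, 3]] + Need[1:]))
# (False, []) -- so P0 must wait
```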
DEADLOCK DETECTION
• An algorithm that examines the state of the system to determine
whether a deadlock has occurred.
• An algorithm to recover from the deadlock
Single Instance of Each Resource Type
Uses a variant of the resource-allocation graph, called the wait-for graph.
Wait for graph
We obtain this graph from the resource-allocation graph by removing the
nodes of type resource and collapsing the appropriate edges.
An edge from Pi-> Pj in a wait-for graph implies that process Pi is waiting
for process Pj to release a resource that Pi needs.
An edge Pi -> Pj exists in a wait-for graph if and only if the corresponding
resource-allocation graph contains two edges Pi -> Rq and Rq -> Pj for
some resource Rq.
A deadlock exists in the system if and only if the wait-for graph contains a
cycle
Resource allocation graph and corresponding wait-for graph
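The construction and the cycle check can be sketched together. The edges used at the end are illustrative, not taken from the (missing) figure.

```python
# Build the wait-for graph from request edges (Pi -> Rq) and assignment
# edges (Rq -> Pj), then detect a cycle with depth-first search.
def wait_for_graph(request_edges, assignment_edges):
    holder = dict(assignment_edges)            # Rq -> Pj: who holds each resource
    wfg = {}
    for p, r in request_edges:                 # Pi waits for Pj iff Pi->Rq and Rq->Pj
        if r in holder:
            wfg.setdefault(p, set()).add(holder[r])
    return wfg

def has_cycle(graph):
    WHITE, GREY, BLACK = 0, 1, 2
    color = {}
    def dfs(u):
        color[u] = GREY
        for v in graph.get(u, ()):
            if color.get(v, WHITE) == GREY:    # back edge: cycle found
                return True
            if color.get(v, WHITE) == WHITE and dfs(v):
                return True
        color[u] = BLACK
        return False
    return any(color.get(u, WHITE) == WHITE and dfs(u) for u in graph)

# P1 holds R1 and requests R2; P2 holds R2 and requests R1: deadlock.
wfg = wait_for_graph([("P1", "R2"), ("P2", "R1")], [("R1", "P1"), ("R2", "P2")])
print(wfg, has_cycle(wfg))   # {'P1': {'P2'}, 'P2': {'P1'}} True
```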
Several Instances of a Resource Type
The wait-for graph scheme is not applicable to a resource-allocation system
with multiple instances of each resource type
DEADLOCK DETECTION ALGORITHM
Data structures used by deadlock detection algorithm
Available: A vector of length m indicates the number of available resources
of each type.
Allocation: An n x m matrix defines the number of resources of each type
currently allocated to each process
Request: An n x m matrix indicates the current request of each process. If
Request[i,j] = k, then process Pi is requesting k more instances of resource
type Rj.
ALGORITHM
1. Let Work and Finish be vectors of length m and n, respectively. Initialize
Work = Available. For i = 0, 1, ..., n-1, if Allocationi is nonzero, then
Finish[i] = false; otherwise, Finish[i] = true. (A process holding no
resources cannot be part of a deadlock.)
2. Find an index i such that both
a. Finish[i] == false
b. Requesti <= Work
If no such i exists, go to step 4.
3. Work := Work + Allocationi
Finish[i] := true
Go to step 2.
4. If Finish[i] == false for some i, 0 <= i < n, then the system is in a
deadlocked state. Moreover, if Finish[i] == false, then process Pi is
deadlocked.
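The detection algorithm can be sketched much like the safety algorithm, but it uses the outstanding Request matrix where the safety algorithm used Need. The sample states at the end are illustrative only.

```python
def detect_deadlock(available, allocation, request):
    n = len(allocation)
    work = available[:]                                    # step 1
    # Finish[i] starts true only for processes holding no resources.
    finish = [all(a == 0 for a in allocation[i]) for i in range(n)]
    while True:
        for i in range(n):                                 # step 2
            if not finish[i] and all(r <= w for r, w in zip(request[i], work)):
                work = [w + a for w, a in zip(work, allocation[i])]   # step 3
                finish[i] = True
                break
        else:
            break
    return [i for i in range(n) if not finish[i]]          # step 4: deadlocked Pi

# P0 holds R1 and wants R2; P1 holds R2 and wants R1: mutual wait.
print(detect_deadlock([0, 0], [[1, 0], [0, 1]], [[0, 1], [1, 0]]))   # [0, 1]
# If P1 requests nothing, it can finish and release R2, unblocking P0.
print(detect_deadlock([0, 0], [[1, 0], [0, 1]], [[0, 1], [0, 0]]))   # []
```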
Example
Consider five processes P0 through P4 and three resource types A, B, and C.
Resource type A has 7 instances, resource type B has 2 instances, and
resource type C has 6 instances.
At time T0
The sequence <P0, P2, P3, P1, P4> will result in Finish[i] = true for all i.
Suppose now that process P2 makes one additional request for an instance
of type C
Modified Request Matrix
The system is now deadlocked; the deadlock consists of processes P1, P2, P3,
and P4.
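This example can also be checked mechanically. The snapshot matrices are not reproduced in the text; the values below are an assumed reconstruction (the standard textbook values) in which all 7 A, 2 B, and 6 C instances are allocated.

```python
# Snapshot at T0 (assumed reconstruction -- the text's table is missing):
Allocation = [[0, 1, 0], [2, 0, 0], [3, 0, 3], [2, 1, 1], [0, 0, 2]]
Request    = [[0, 0, 0], [2, 0, 2], [0, 0, 0], [1, 0, 0], [0, 0, 2]]
Available  = [0, 0, 0]

def deadlocked(available, allocation, request):
    work = available[:]
    finish = [all(a == 0 for a in row) for row in allocation]
    while True:
        for i, row in enumerate(request):
            if not finish[i] and all(r <= w for r, w in zip(row, work)):
                work = [w + a for w, a in zip(work, allocation[i])]
                finish[i] = True
                break
        else:
            return [i for i, f in enumerate(finish) if not f]

print(deadlocked(Available, Allocation, Request))   # [] -- no deadlock at T0

# P2 makes one additional request for an instance of type C:
Request[2] = [0, 0, 1]
print(deadlocked(Available, Allocation, Request))   # [1, 2, 3, 4] -- deadlocked
```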
Detection-Algorithm Usage
Depends on
1. How often is a deadlock likely to occur?
2. How many processes will be affected by deadlock when it happens?
If deadlocks occur frequently, then the detection algorithm should be
invoked frequently.
RECOVERY FROM DEADLOCK
Methods
1. Process Termination
2. Resource Pre-emption
Process Termination
To eliminate deadlocks by aborting a process, we use one of two methods.
In both methods, the system reclaims all resources allocated to the
terminated processes.
1. Abort all deadlocked processes
2. Abort one process at a time until the deadlock cycle is eliminated
How to choose the process?
1.What the priority of the process is
2. How long the process has computed, and how much longer the process
will compute before completing its designated task
3. How many and what type of resources the process has used (for example,
whether the resources are simple to preempt)
4. How many more resources the process needs in order to complete
5. How many processes will need to be terminated
6. Whether the process is interactive or batch
Resource Pre-emption
To eliminate deadlocks using resource pre-emption, we successively pre-
empt some resources from processes and give these resources to other
processes until the deadlock cycle is broken
If preemption is required to deal with deadlocks, then three issues need to
be addressed:
1. Selecting a victim
2. Rollback
3. Starvation
COMBINED APPROACH TO DEADLOCK HANDLING
None of the basic approaches for handling deadlocks (prevention,
avoidance, and detection) alone is appropriate for the entire spectrum of
resource-allocation problems encountered in operating systems
One possibility is to combine the three basic approaches, allowing the use
of the optimal approach for each class of resources in the system. The
proposed method is based on the notion that resources can be partitioned
into classes that are hierarchically ordered.
A resource-ordering technique is applied to the classes. Within each class,
the most appropriate technique for handling deadlocks can be used.
Example
Consider a system that consists of the following four classes of
resources:
• Internal resources: Resources used by the system, such as a process
control block
• Central memory: Memory used by a user job
• Job resources: Assignable devices (such as a tape drive) and files
• Swappable space: Space for each user job on the backing store
One mixed deadlock solution for this system orders the classes as shown,
and uses the following approaches to each class:
• Internal resources: Prevention through resource ordering can be used,
since run-time choices between pending requests are unnecessary.
• Central memory: Prevention through preemption can be used, since a job
can always be swapped out, and the central memory can be preempted.
• Job resources: Avoidance can be used, since the information needed about
resource requirements can be obtained from the job-control cards.
• Swappable space: Preallocation can be used, since the maximum storage
requirements are usually known.
SUMMARY
A deadlock state occurs when two or more processes are waiting indefinitely
for an event that can be caused only by one of the waiting processes.
There are three principal methods for dealing with deadlocks:
• Use some protocol to ensure that the system will never enter a deadlock
state.
• Allow the system to enter a deadlock state, detect it, and then recover.
• Ignore the problem altogether, and pretend that deadlocks never occur
in the system.
A deadlock situation may occur if and only if four necessary conditions
hold simultaneously in the system: mutual exclusion, hold and wait, no
preemption, and circular wait. To prevent deadlocks, we ensure that at least
one of the necessary conditions never holds.
Another method for avoiding deadlocks that is less stringent than the
prevention algorithms is to have a priori information on how each process
will utilize the resources. The banker's algorithm needs to know the
maximum number of instances of each resource class that may be requested by
each process. Using this information, we can define a deadlock-avoidance
algorithm.
If a system does not employ a protocol to ensure that deadlocks will never
occur, then a detection and recovery scheme must be employed. A deadlock
detection algorithm must be invoked to determine whether a deadlock has
occurred. If a deadlock is detected, the system must recover either by
terminating some of the deadlocked processes, or by preempting resources
from some of the deadlocked processes.
In a system that selects victims for rollback primarily on the basis of cost
factors, starvation may occur. As a result, the selected process never
completes its designated task.
Finally, researchers have argued that none of these basic approaches
alone is appropriate for the entire spectrum of resource-allocation problems
in operating systems. The basic approaches can be combined, allowing the
separate selection of an optimal one for each class of resources in a system.
PROCESS SYNCHRONISATION
Cooperating Process
A cooperating process is one that can affect or be affected by the other
processes executing in the system. Cooperating processes may either
directly share a logical address space (that is, both code and data), or be
allowed to share data only through files.
THE CRITICAL-SECTION PROBLEM
Consider a system consisting of n processes {P0, P1, ..., Pn-1}.
• Each process has a segment of code, called a critical section, in which
the process may be changing common variables, updating a table,
writing a file, and so on.
• The important feature of the system is that, when one process is
executing in its critical section, no other process is to be allowed to
execute in its critical section.
• The execution of critical sections by the processes is mutually
exclusive in time.
• The critical-section problem is to design a protocol that the processes
can use to cooperate.
• Each process must request permission to enter its critical section.
• The section of code implementing this request is the entry section.
• The critical section may be followed by an exit section.
• The remaining code is the remainder section.
A solution to the critical-section problem must satisfy the following three
requirements:
1. Mutual Exclusion: If process Pi is executing in its critical section, then
no other process can be executing in its critical section.
2. Progress: If no process is executing in its critical section and there exist
some processes that wish to enter their critical sections, then only those
processes that are not executing in their remainder section can participate
in the decision of which will enter its critical section next, and this selection
cannot be postponed indefinitely.
3. Bounded Waiting: There exists a bound on the number of times that other
processes are allowed to enter their critical sections after a process has made
a request to enter its critical section and before that request is granted.
General structure of a typical process Pi
SYNCHRONIZATION HARDWARE
Uniprocessor environment: mutual exclusion can be achieved by not allowing
interrupts to occur while a shared variable is being modified.
Disabling interrupts on a multiprocessor can be time-consuming, as the
message is passed to all the processors. This message passing delays entry
into each critical section, and system efficiency decreases. Also, consider
the effect on a system's clock, if the clock is kept updated by interrupts.
Many machines therefore provide special hardware instructions that allow
us either to test and modify the content of a word, or to swap the contents
of two words, atomically.
The important characteristic is that this instruction is executed atomically—
that is, as one uninterruptible unit. Thus, if two Test-and-Set instructions are
executed simultaneously (each on a different CPU), they will be executed
sequentially in some arbitrary order. If the machine supports the Test-and-
Set instruction, then we can implement mutual exclusion by declaring a
Boolean variable lock, initialized to false.
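The busy-wait structure can be sketched as follows. Python has no Test-and-Set instruction, so atomicity is simulated here with an internal lock; on real hardware the whole read-modify-write is a single uninterruptible instruction.

```python
import threading

class AtomicFlag:
    """Simulated atomic test-and-set on a Boolean flag."""
    def __init__(self):
        self._flag = False
        self._guard = threading.Lock()   # stands in for hardware atomicity

    def test_and_set(self):
        with self._guard:                # read old value and set true, atomically
            old = self._flag
            self._flag = True
            return old

    def clear(self):
        self._flag = False

lock = AtomicFlag()                      # the Boolean variable lock, initially false
counter = 0

def worker():
    global counter
    for _ in range(10_000):
        while lock.test_and_set():       # entry section: spin until TAS returns false
            pass
        counter += 1                     # critical section
        lock.clear()                     # exit section

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads: t.start()
for t in threads: t.join()
print(counter)                           # 40000 -- no increments lost
```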
SEMAPHORES
A semaphore is a synchronization tool used for more complex problems.
A semaphore S is an integer variable that, apart from initialization, is
accessed only through two standard atomic operations: wait and signal.
The classical definitions of wait and signal are
wait(S): while S <= 0 do no-op;
S := S - 1;
signal(S): S := S + 1;
Modifications to the integer value of the semaphore in the wait and signal
operations must be executed indivisibly.
Conditions
No two processes can simultaneously modify the same semaphore value. In
addition, in the case of wait(S), the testing of the integer value of S (S <=
0) and its possible modification (S := S - 1) must also be executed without
interruption.
Usage
We can use semaphores to deal with the n-process critical-section problem.
The n processes share a semaphore, mutex (standing for mutual exclusion),
initialized to 1.
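This usage can be sketched with Python's threading.Semaphore, where acquire and release play the roles of wait and signal:

```python
import threading

mutex = threading.Semaphore(1)   # shared semaphore, initialized to 1
shared = []

def process(i):
    mutex.acquire()              # wait(mutex) -- entry section
    shared.append(i)             # critical section
    mutex.release()              # signal(mutex) -- exit section

threads = [threading.Thread(target=process, args=(i,)) for i in range(5)]
for t in threads: t.start()
for t in threads: t.join()
print(sorted(shared))            # [0, 1, 2, 3, 4] -- all five entered, one at a time
```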
Implementation
The semaphore definitions given so far all require busy waiting.
While a process is in its critical section, any other process that tries to enter
its critical section must loop continuously in the entry code. This continual
looping is clearly a problem in a real multiprogramming system, where a
single CPU is shared among many processes. Busy waiting wastes CPU
cycles that some other process might be able to use productively. This type
of semaphore is also called a spinlock.
Semaphores are integer variables that are used to solve the critical section
problem by using two atomic operations, wait and signal that are used for
process synchronization.
The definitions of wait and signal are as follows −
• Wait
The wait operation busy-waits while S is zero or negative; once S
becomes positive, it decrements S and proceeds.
wait(S)
{
    while (S <= 0)
        ;    // no-op: busy-wait
    S--;
}
• Signal
The signal operation increments the value of its argument S.
signal(S)
{
S++;
}
Types of Semaphores
There are two main types of semaphores i.e. counting semaphores and
binary semaphores. Details about these are given as follows −
Counting Semaphores
These are integer-valued semaphores with an unrestricted value domain.
Counting semaphores are used to coordinate resource access, where the
semaphore count is the number of available resources. If resources are
added, the semaphore count is automatically incremented, and if resources
are removed, the count is decremented.
Binary Semaphores
Binary semaphores are like counting semaphores, but their value is
restricted to 0 and 1. The wait operation succeeds only when the semaphore
value is 1 (setting it to 0), and the signal operation sets the value back to 1.
Binary semaphores are sometimes easier to implement than counting
semaphores.
Advantages of Semaphores
Some of the advantages of semaphores are as follows
Semaphores allow only one process into the critical section. They follow
the mutual exclusion principle strictly and are more efficient than some
other methods of synchronization.
With a blocking (queue-based) implementation, there is no resource wastage
due to busy waiting: processor time is not spent repeatedly checking whether
a condition is fulfilled before a process may access the critical section.
Semaphores are implemented in the machine-independent code of the
microkernel, so they are machine independent.
Disadvantages of Semaphores
Some of the disadvantages of semaphores are as follows
Semaphores are complicated, and the wait and signal operations must be
implemented in the correct order to prevent deadlocks.
Semaphores are impractical for large-scale use, as their use leads to loss of
modularity. This happens because the wait and signal operations prevent the
creation of a structured layout for the system.
Semaphores may lead to priority inversion, where low-priority processes
access the critical section first and high-priority processes later.
Deadlocks and Starvation
The implementation of a semaphore with a waiting queue may result in a
situation where two or more processes are waiting indefinitely for an event
that can be caused by only one of the waiting processes. The event in
question is the execution of a signal operation. When such a state is reached,
these processes are said to be deadlocked.
Example
Another problem related to deadlocks is indefinite blocking or starvation, a
situation where processes wait indefinitely within the semaphore. Indefinite
blocking may occur if we add and remove processes from the list associated
with a semaphore in LIFO order
CLASSICAL PROBLEMS OF SYNCHRONIZATION
The Bounded-Buffer Problem
The bounded-buffer problem (also known as the producer-consumer problem)
is a classic example of concurrent access to a shared resource. A bounded
buffer lets multiple producers and multiple consumers share a single buffer.
Producers must block if the buffer is full; consumers must block if the
buffer is empty.
About Producer-Consumer problem
The Producer-Consumer problem is a classic problem used for multi-process
synchronization, i.e., synchronization between two or more processes.
In the producer-consumer problem, there is one Producer that is producing
something and there is one Consumer that is consuming the products
produced by the Producer.
The producers and consumers share the same fixed-size memory buffer.
The job of the Producer is to generate the data, put it into the buffer, and
again start generating data.
While the job of the Consumer is to consume the data from the buffer.
Problems that might occur in the Producer-Consumer
• The producer should produce data only when the buffer is not full. If
the buffer is full, then the producer shouldn't be allowed to put any
data into the buffer.
• The consumer should consume data only when the buffer is not empty.
If the buffer is empty, then the consumer shouldn't be allowed to take
any data from the buffer.
• The producer and consumer should not access the buffer at the same
time.
The structure of the producer process
The empty and full semaphores count the number of empty and full buffer
slots, respectively. The semaphore empty is initialized to the value n; the
semaphore full is initialized to the value 0.
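The structure can be sketched with a single producer and consumer, using the three semaphores described above plus a mutex protecting the buffer itself (the buffer size and item count are arbitrary):

```python
import threading
from collections import deque

n = 4
buffer = deque()
empty = threading.Semaphore(n)   # counts empty slots, initialized to n
full  = threading.Semaphore(0)   # counts filled slots, initialized to 0
mutex = threading.Semaphore(1)   # protects the buffer itself

def producer(items):
    for item in items:
        empty.acquire()          # wait(empty): block if the buffer is full
        mutex.acquire()
        buffer.append(item)
        mutex.release()
        full.release()           # signal(full)

consumed = []
def consumer(count):
    for _ in range(count):
        full.acquire()           # wait(full): block if the buffer is empty
        mutex.acquire()
        consumed.append(buffer.popleft())
        mutex.release()
        empty.release()          # signal(empty)

p = threading.Thread(target=producer, args=(range(10),))
c = threading.Thread(target=consumer, args=(10,))
p.start(); c.start(); p.join(); c.join()
print(consumed)                  # [0, 1, ..., 9] -- FIFO order with one producer
```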
The Readers and Writers Problem
The readers-writers problem relates to an object such as a file that is shared
between multiple processes. Some of these processes are readers i.e. they
only want to read the data from the object and some of the processes are
writers i.e. they want to write into the object.
The readers-writers problem is used to manage synchronization so that there
are no problems with the object data. For example - If two readers access
the object at the same time there is no problem. However if two writers or a
reader and writer access the object at the same time, there may be problems.
To solve this situation, a writer should get exclusive access to an object i.e.
when a writer is accessing the object, no reader or writer may access it.
However, multiple readers can access the object at the same time.
This can be implemented using semaphores. The codes for the reader and
writer process in the reader-writer problem are given as follows
Reader Process
wait (mutex);
rc ++;
if (rc == 1)
wait (wrt);
signal(mutex);
.
. READ THE OBJECT
.
wait(mutex);
rc --;
if (rc == 0)
signal (wrt);
signal(mutex);
In the above code, mutex and wrt are semaphores that are initialized to 1.
Also, rc is a variable that is initialized to 0. The mutex semaphore ensures
mutual exclusion and wrt handles the writing mechanism and is common to
the reader and writer process code.
The variable rc denotes the number of readers accessing the object. As soon
as rc becomes 1, the wait operation is performed on wrt. This means that a
writer cannot access the object anymore. After the read operation is done, rc
is decremented. When rc becomes 0, the signal operation is performed on
wrt, so a writer can access the object again.
Writer Process
wait(wrt);
.
. WRITE INTO THE OBJECT
.
signal(wrt);
If a writer wants to access the object, wait operation is performed on wrt.
After that no other writer can access the object. When a writer is done
writing into the object, signal operation is performed on wrt.
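A runnable version of the two pseudocode fragments above, with mutex guarding the reader count rc and wrt giving writers exclusive access (the read and write bodies are stand-ins that just record activity):

```python
import threading

mutex = threading.Semaphore(1)
wrt   = threading.Semaphore(1)
rc = 0
log = []

def reader(i):
    global rc
    mutex.acquire()
    rc += 1
    if rc == 1:
        wrt.acquire()            # first reader locks out writers
    mutex.release()
    log.append(("read", i))      # READ THE OBJECT
    mutex.acquire()
    rc -= 1
    if rc == 0:
        wrt.release()            # last reader lets writers in again
    mutex.release()

def writer(i):
    wrt.acquire()
    log.append(("write", i))     # WRITE INTO THE OBJECT
    wrt.release()

threads  = [threading.Thread(target=reader, args=(i,)) for i in range(3)]
threads += [threading.Thread(target=writer, args=(i,)) for i in range(2)]
for t in threads: t.start()
for t in threads: t.join()
print(len(log))                  # 5 -- every reader and writer completed
```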
The Dining-Philosophers Problem
The dining philosophers problem states that there are five philosophers sharing
a circular table, and they eat and think alternately. There is a bowl of rice
for each philosopher and five chopsticks. A philosopher needs both the
left and right chopstick to eat, so a hungry philosopher may eat only if
both chopsticks are available. Otherwise the philosopher puts down any
chopstick already picked up and begins thinking again.
The dining philosopher is a classic synchronization problem as it
demonstrates a large class of concurrency control problems.
Solution of Dining Philosophers Problem
A solution of the Dining Philosophers Problem is to use a semaphore to
represent each chopstick. A chopstick can be picked up by executing a wait
operation on the semaphore and released by executing a signal operation.
The structure of the chopstick is shown below
semaphore chopstick [5];
Initially the elements of the chopstick are initialized to 1 as the chopsticks
are on the table and not picked up by a philosopher.
The structure of a random philosopher i is given as follows −
do {
wait(chopstick[i] );
wait(chopstick[ (i+1) % 5] );
. .
. EATING THE RICE
.
signal( chopstick[i] );
signal( chopstick[ (i+1) % 5] );
.
. THINKING
.
} while(1);
In the above structure, first wait operation is performed on chopstick[i] and
chopstick[ (i+1) % 5]. This means that the philosopher i has picked up the
chopsticks on his sides. Then the eating function is performed.
After that, signal operation is performed on chopstick[i] and
chopstick[ (i+1) % 5]. This means that the philosopher i has eaten and put
down the chopsticks on his sides. Then the philosopher goes back to
thinking.
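Note that the structure above can deadlock if all five philosophers pick up their left chopstick simultaneously. A runnable sketch with one standard remedy, assumed here rather than taken from the text: the last philosopher picks up the chopsticks in the opposite order, which breaks the circular wait.

```python
import threading

chopstick = [threading.Semaphore(1) for _ in range(5)]   # all initialized to 1
meals = [0] * 5

def philosopher(i, rounds):
    first, second = i, (i + 1) % 5
    if i == 4:                       # asymmetric order breaks the circular wait
        first, second = second, first
    for _ in range(rounds):
        chopstick[first].acquire()   # wait(chopstick[...])
        chopstick[second].acquire()
        meals[i] += 1                # EATING THE RICE
        chopstick[first].release()   # signal(chopstick[...])
        chopstick[second].release()
        # THINKING

threads = [threading.Thread(target=philosopher, args=(i, 100)) for i in range(5)]
for t in threads: t.start()
for t in threads: t.join()
print(meals)                         # [100, 100, 100, 100, 100] -- no one starves
```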
CRITICAL REGIONS
The critical-region construct can be effectively used to solve certain 'general
"synchronization problems
MONITORS
Monitors are a synchronization construct that were created to overcome the
problems caused by semaphores such as timing errors.
Monitors are abstract data types and contain shared data variables and
procedures. The shared data variables cannot be directly accessed by a
process and procedures are required to allow a single process to access the
shared data variables at a time.
This is demonstrated as follows:
monitor monitorName
{
data variables;
Procedure P1(....)
{
}
Procedure P2(....)
{
}
Procedure Pn(....)
{
}
Initialization Code(....)
{
}
}
Only one process can be active in a monitor at a time. Other processes that
need to access the shared variables in a monitor have to wait in a queue
and are granted access only when the previous process releases the shared
variables.
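A monitor-like construct can be sketched in Python: the shared variable is kept private, every procedure acquires the same lock (so only one caller is active at a time), and a Condition object plays the role of a monitor condition variable. The BoundedCounter class and its limit are illustrative, not from the text.

```python
import threading

class BoundedCounter:
    """Monitor-style object: private data, lock-guarded procedures."""
    def __init__(self, limit):
        self._lock = threading.Lock()               # one process active at a time
        self._nonfull = threading.Condition(self._lock)
        self._count = 0                             # the shared data variable
        self._limit = limit

    def increment(self):                            # a monitor procedure
        with self._lock:
            while self._count >= self._limit:
                self._nonfull.wait()                # wait on the condition variable
            self._count += 1

    def decrement(self):
        with self._lock:
            self._count -= 1
            self._nonfull.notify()                  # signal the condition variable

    def value(self):
        with self._lock:
            return self._count

c = BoundedCounter(2)
c.increment(); c.increment(); c.decrement()
print(c.value())   # 1
```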
MONITOR VS SEMAPHORE
Both semaphores and monitors are used to solve the critical section problem
(as they allow processes to access the shared resources in mutual exclusion)
and to achieve process synchronization in the multiprocessing environment.
MONITOR
A Monitor type high-level synchronization construct. It is an abstract data
type. The Monitor type contains shared variables and the set of procedures
that operate on the shared variable.
When any process wishes to access the shared variables in the monitor, it
must do so through the procedures. These processes line up in a queue and
are granted access only when the previous process releases the shared
variables. Only one process can be active in a monitor at a time. A monitor
also has condition variables.
SEMAPHORE
A Semaphore is a lower-level object. A semaphore is a non-negative integer
variable. The value of Semaphore indicates the number of shared resources
available in the system. The value of semaphore can be modified only by
two functions, namely wait() and signal() operations (apart from the
initialization).
When any process accesses the shared resources, it performs the wait()
operation on the semaphore and when the process releases the shared
resources, it performs the signal() operation on the semaphore. Semaphore
does not have condition variables. When a process is modifying the value
of the semaphore, no other process can simultaneously modify the value of
the semaphore.
 
Ahmed Motair CV April 2024 (Senior SW Developer)
Ahmed Motair CV April 2024 (Senior SW Developer)Ahmed Motair CV April 2024 (Senior SW Developer)
Ahmed Motair CV April 2024 (Senior SW Developer)Ahmed Mater
 
Building a General PDE Solving Framework with Symbolic-Numeric Scientific Mac...
Building a General PDE Solving Framework with Symbolic-Numeric Scientific Mac...Building a General PDE Solving Framework with Symbolic-Numeric Scientific Mac...
Building a General PDE Solving Framework with Symbolic-Numeric Scientific Mac...stazi3110
 
MYjobs Presentation Django-based project
MYjobs Presentation Django-based projectMYjobs Presentation Django-based project
MYjobs Presentation Django-based projectAnoyGreter
 
What is Advanced Excel and what are some best practices for designing and cre...
What is Advanced Excel and what are some best practices for designing and cre...What is Advanced Excel and what are some best practices for designing and cre...
What is Advanced Excel and what are some best practices for designing and cre...Technogeeks
 
Recruitment Management Software Benefits (Infographic)
Recruitment Management Software Benefits (Infographic)Recruitment Management Software Benefits (Infographic)
Recruitment Management Software Benefits (Infographic)Hr365.us smith
 
How to submit a standout Adobe Champion Application
How to submit a standout Adobe Champion ApplicationHow to submit a standout Adobe Champion Application
How to submit a standout Adobe Champion ApplicationBradBedford3
 
What is Fashion PLM and Why Do You Need It
What is Fashion PLM and Why Do You Need ItWhat is Fashion PLM and Why Do You Need It
What is Fashion PLM and Why Do You Need ItWave PLM
 
Alluxio Monthly Webinar | Cloud-Native Model Training on Distributed Data
Alluxio Monthly Webinar | Cloud-Native Model Training on Distributed DataAlluxio Monthly Webinar | Cloud-Native Model Training on Distributed Data
Alluxio Monthly Webinar | Cloud-Native Model Training on Distributed DataAlluxio, Inc.
 
Dealing with Cultural Dispersion — Stefano Lambiase — ICSE-SEIS 2024
Dealing with Cultural Dispersion — Stefano Lambiase — ICSE-SEIS 2024Dealing with Cultural Dispersion — Stefano Lambiase — ICSE-SEIS 2024
Dealing with Cultural Dispersion — Stefano Lambiase — ICSE-SEIS 2024StefanoLambiase
 
How to Track Employee Performance A Comprehensive Guide.pdf
How to Track Employee Performance A Comprehensive Guide.pdfHow to Track Employee Performance A Comprehensive Guide.pdf
How to Track Employee Performance A Comprehensive Guide.pdfLivetecs LLC
 
Cloud Data Center Network Construction - IEEE
Cloud Data Center Network Construction - IEEECloud Data Center Network Construction - IEEE
Cloud Data Center Network Construction - IEEEVICTOR MAESTRE RAMIREZ
 
Software Project Health Check: Best Practices and Techniques for Your Product...
Software Project Health Check: Best Practices and Techniques for Your Product...Software Project Health Check: Best Practices and Techniques for Your Product...
Software Project Health Check: Best Practices and Techniques for Your Product...Velvetech LLC
 
Balasore Best It Company|| Top 10 IT Company || Balasore Software company Odisha
Balasore Best It Company|| Top 10 IT Company || Balasore Software company OdishaBalasore Best It Company|| Top 10 IT Company || Balasore Software company Odisha
Balasore Best It Company|| Top 10 IT Company || Balasore Software company Odishasmiwainfosol
 
Intelligent Home Wi-Fi Solutions | ThinkPalm
Intelligent Home Wi-Fi Solutions | ThinkPalmIntelligent Home Wi-Fi Solutions | ThinkPalm
Intelligent Home Wi-Fi Solutions | ThinkPalmSujith Sukumaran
 
英国UN学位证,北安普顿大学毕业证书1:1制作
英国UN学位证,北安普顿大学毕业证书1:1制作英国UN学位证,北安普顿大学毕业证书1:1制作
英国UN学位证,北安普顿大学毕业证书1:1制作qr0udbr0
 
Open Source Summit NA 2024: Open Source Cloud Costs - OpenCost's Impact on En...
Open Source Summit NA 2024: Open Source Cloud Costs - OpenCost's Impact on En...Open Source Summit NA 2024: Open Source Cloud Costs - OpenCost's Impact on En...
Open Source Summit NA 2024: Open Source Cloud Costs - OpenCost's Impact on En...Matt Ray
 
Maximizing Efficiency and Profitability with OnePlan’s Professional Service A...
Maximizing Efficiency and Profitability with OnePlan’s Professional Service A...Maximizing Efficiency and Profitability with OnePlan’s Professional Service A...
Maximizing Efficiency and Profitability with OnePlan’s Professional Service A...OnePlan Solutions
 

Recently uploaded (20)

EY_Graph Database Powered Sustainability
EY_Graph Database Powered SustainabilityEY_Graph Database Powered Sustainability
EY_Graph Database Powered Sustainability
 
BATTLEFIELD ORM: TIPS, TACTICS AND STRATEGIES FOR CONQUERING YOUR DATABASE
BATTLEFIELD ORM: TIPS, TACTICS AND STRATEGIES FOR CONQUERING YOUR DATABASEBATTLEFIELD ORM: TIPS, TACTICS AND STRATEGIES FOR CONQUERING YOUR DATABASE
BATTLEFIELD ORM: TIPS, TACTICS AND STRATEGIES FOR CONQUERING YOUR DATABASE
 
Ahmed Motair CV April 2024 (Senior SW Developer)
Ahmed Motair CV April 2024 (Senior SW Developer)Ahmed Motair CV April 2024 (Senior SW Developer)
Ahmed Motair CV April 2024 (Senior SW Developer)
 
Building a General PDE Solving Framework with Symbolic-Numeric Scientific Mac...
Building a General PDE Solving Framework with Symbolic-Numeric Scientific Mac...Building a General PDE Solving Framework with Symbolic-Numeric Scientific Mac...
Building a General PDE Solving Framework with Symbolic-Numeric Scientific Mac...
 
MYjobs Presentation Django-based project
MYjobs Presentation Django-based projectMYjobs Presentation Django-based project
MYjobs Presentation Django-based project
 
What is Advanced Excel and what are some best practices for designing and cre...
What is Advanced Excel and what are some best practices for designing and cre...What is Advanced Excel and what are some best practices for designing and cre...
What is Advanced Excel and what are some best practices for designing and cre...
 
Recruitment Management Software Benefits (Infographic)
Recruitment Management Software Benefits (Infographic)Recruitment Management Software Benefits (Infographic)
Recruitment Management Software Benefits (Infographic)
 
How to submit a standout Adobe Champion Application
How to submit a standout Adobe Champion ApplicationHow to submit a standout Adobe Champion Application
How to submit a standout Adobe Champion Application
 
What is Fashion PLM and Why Do You Need It
What is Fashion PLM and Why Do You Need ItWhat is Fashion PLM and Why Do You Need It
What is Fashion PLM and Why Do You Need It
 
Alluxio Monthly Webinar | Cloud-Native Model Training on Distributed Data
Alluxio Monthly Webinar | Cloud-Native Model Training on Distributed DataAlluxio Monthly Webinar | Cloud-Native Model Training on Distributed Data
Alluxio Monthly Webinar | Cloud-Native Model Training on Distributed Data
 
Dealing with Cultural Dispersion — Stefano Lambiase — ICSE-SEIS 2024
Dealing with Cultural Dispersion — Stefano Lambiase — ICSE-SEIS 2024Dealing with Cultural Dispersion — Stefano Lambiase — ICSE-SEIS 2024
Dealing with Cultural Dispersion — Stefano Lambiase — ICSE-SEIS 2024
 
How to Track Employee Performance A Comprehensive Guide.pdf
How to Track Employee Performance A Comprehensive Guide.pdfHow to Track Employee Performance A Comprehensive Guide.pdf
How to Track Employee Performance A Comprehensive Guide.pdf
 
Cloud Data Center Network Construction - IEEE
Cloud Data Center Network Construction - IEEECloud Data Center Network Construction - IEEE
Cloud Data Center Network Construction - IEEE
 
Software Project Health Check: Best Practices and Techniques for Your Product...
Software Project Health Check: Best Practices and Techniques for Your Product...Software Project Health Check: Best Practices and Techniques for Your Product...
Software Project Health Check: Best Practices and Techniques for Your Product...
 
Balasore Best It Company|| Top 10 IT Company || Balasore Software company Odisha
Balasore Best It Company|| Top 10 IT Company || Balasore Software company OdishaBalasore Best It Company|| Top 10 IT Company || Balasore Software company Odisha
Balasore Best It Company|| Top 10 IT Company || Balasore Software company Odisha
 
Intelligent Home Wi-Fi Solutions | ThinkPalm
Intelligent Home Wi-Fi Solutions | ThinkPalmIntelligent Home Wi-Fi Solutions | ThinkPalm
Intelligent Home Wi-Fi Solutions | ThinkPalm
 
英国UN学位证,北安普顿大学毕业证书1:1制作
英国UN学位证,北安普顿大学毕业证书1:1制作英国UN学位证,北安普顿大学毕业证书1:1制作
英国UN学位证,北安普顿大学毕业证书1:1制作
 
Hot Sexy call girls in Patel Nagar🔝 9953056974 🔝 escort Service
Hot Sexy call girls in Patel Nagar🔝 9953056974 🔝 escort ServiceHot Sexy call girls in Patel Nagar🔝 9953056974 🔝 escort Service
Hot Sexy call girls in Patel Nagar🔝 9953056974 🔝 escort Service
 
Open Source Summit NA 2024: Open Source Cloud Costs - OpenCost's Impact on En...
Open Source Summit NA 2024: Open Source Cloud Costs - OpenCost's Impact on En...Open Source Summit NA 2024: Open Source Cloud Costs - OpenCost's Impact on En...
Open Source Summit NA 2024: Open Source Cloud Costs - OpenCost's Impact on En...
 
Maximizing Efficiency and Profitability with OnePlan’s Professional Service A...
Maximizing Efficiency and Profitability with OnePlan’s Professional Service A...Maximizing Efficiency and Profitability with OnePlan’s Professional Service A...
Maximizing Efficiency and Profitability with OnePlan’s Professional Service A...
 

Deadlock Avoidance - OS

deadlock avoidance.

In addition to the request and assignment edges, we introduce a new type of edge, called a claim edge.

Claim Edge
A claim edge Pi -> Rj indicates that process Pi may request resource Rj at some time in the future. This edge resembles a request edge in direction, but is represented by a dashed line.

When process Pi requests resource Rj, the claim edge Pi -> Rj is converted to a request edge. Similarly, when resource Rj is released by Pi, the assignment edge Rj -> Pi is reconverted to a claim edge Pi -> Rj.

Conditions
Before process Pi starts executing, all its claim edges must already appear in the resource-allocation graph.

We can relax this condition by allowing a claim edge Pi -> Rj to be added to the graph only if all the edges currently associated with process Pi are claim edges.

Suppose process Pi requests resource Rj. The request can be granted only if converting the request edge Pi -> Rj to an assignment edge Rj -> Pi does not result in the formation of a cycle in the resource-allocation graph.

If no cycle exists, the allocation of the resource will leave the system in a safe state. If a cycle is found, the allocation would put the system in an unsafe state. In that case, process Pi will have to wait for its request to be satisfied.

Example
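The grant check above can be sketched in Python. This is an illustrative sketch rather than code from the slides: edges are (from, to) tuples, the helper names creates_cycle and can_grant are invented here, and the check assumes the graph was acyclic before the request, so a new cycle must pass through the new assignment edge.

```python
def creates_cycle(edges, proc, res):
    """Adding the assignment edge res -> proc closes a cycle exactly when
    proc can already reach res through the remaining edges (assuming the
    graph was acyclic before the request)."""
    graph = {}
    for u, v in edges:
        graph.setdefault(u, []).append(v)
    stack, seen = [proc], set()
    while stack:
        node = stack.pop()
        if node == res:
            return True
        if node not in seen:
            seen.add(node)
            stack.extend(graph.get(node, []))
    return False

def can_grant(request_edges, assignment_edges, claim_edges, proc, res):
    """Grant proc's pending request for res only if converting the request
    edge proc -> res into the assignment edge res -> proc leaves the
    resource-allocation graph acyclic.  Claim edges take part in the
    check, since they may later turn into requests."""
    edges = [e for e in request_edges if e != (proc, res)]
    edges += list(assignment_edges) + list(claim_edges)
    return not creates_cycle(edges, proc, res)
```

For example, if R1 is assigned to P1 and both processes hold claim edges on the resources, granting R2 to P2 while P1 still claims R2 would close a cycle, so the request must wait.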
BANKER'S ALGORITHM

The resource-allocation graph algorithm is not applicable to a resource-allocation system with multiple instances of each resource type.

When a new process enters the system, it must declare the maximum number of instances of each resource type that it may need. This number may not exceed the total number of resources in the system.

When a process requests a set of resources, the system must determine whether the allocation would leave the system in a safe state. If it would, the resources are allocated; otherwise, the process must wait until some other process releases enough resources.

Let n be the number of processes in the system and m be the number of resource types.

Data structures

Available: A vector of length m indicates the number of available resources of each type. If Available[j] = k, there are k instances of resource type Rj available.

Max: An n x m matrix defines the maximum demand of each process. If Max[i,j] = k, then Pi may request at most k instances of resource type Rj.

Allocation: An n x m matrix defines the number of resources of each type currently allocated to each process. If Allocation[i,j] = k, then process Pi is currently allocated k instances of resource type Rj.

Need: An n x m matrix indicates the remaining resource need of each process. If Need[i,j] = k, then Pi may need k more instances of resource type Rj to complete its task. Note that Need[i,j] = Max[i,j] - Allocation[i,j].

SAFETY ALGORITHM

The algorithm for finding out whether or not a system is in a safe state can be described as follows:

Steps of Algorithm:
1. Let Work and Finish be vectors of length m and n, respectively. Initialize Work = Available and Finish[i] = false for i = 1 to n.

2. Find an index i such that both
a) Finish[i] == false
b) Needi <= Work
If no such i exists, go to step 4.

3. Work = Work + Allocationi
Finish[i] = true
Go to step 2.

4. If Finish[i] == true for all i, then the system is in a safe state.

RESOURCE-REQUEST ALGORITHM

Let Requesti be the request vector for process Pi. If Requesti[j] = k, then process Pi wants k instances of resource type Rj. When a request for resources is made by process Pi, the following actions are taken:

1. If Requesti <= Needi, go to step 2. Otherwise, raise an error condition, since the process has exceeded its maximum claim.

2. If Requesti <= Available, go to step 3. Otherwise, Pi must wait, since the resources are not available.

3. Have the system pretend to have allocated the requested resources to process Pi by modifying the state as follows:
Available = Available - Requesti
Allocationi = Allocationi + Requesti
Needi = Needi - Requesti
If the resulting resource-allocation state is safe, the transaction is completed and process Pi is allocated its resources. However, if the new state is unsafe, then Pi must wait for Requesti and the old resource-allocation state is restored.
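The two algorithms above can be sketched in Python. This is a minimal illustration, not code from the slides: the matrices are plain lists of lists, and the function names is_safe and request_resources are invented here.

```python
def is_safe(available, allocation, need):
    """Safety algorithm: repeatedly pick a process whose remaining need
    can be met by Work, pretend it finishes, and reclaim its allocation.
    Returns (True, order) if every process can finish."""
    n, m = len(allocation), len(available)
    work = list(available)
    finish = [False] * n
    order = []
    progressed = True
    while progressed:
        progressed = False
        for i in range(n):
            if not finish[i] and all(need[i][j] <= work[j] for j in range(m)):
                for j in range(m):
                    work[j] += allocation[i][j]   # P_i finishes and releases
                finish[i] = True
                order.append(i)
                progressed = True
    return all(finish), order


def request_resources(i, request, available, allocation, need):
    """Resource-request algorithm for process P_i: validate the request,
    tentatively grant it, and commit only if the new state is safe."""
    m = len(available)
    if any(request[j] > need[i][j] for j in range(m)):
        raise ValueError("process exceeded its maximum claim")
    if any(request[j] > available[j] for j in range(m)):
        return False                              # resources not available; wait
    for j in range(m):                            # pretend to allocate
        available[j] -= request[j]
        allocation[i][j] += request[j]
        need[i][j] -= request[j]
    safe, _ = is_safe(available, allocation, need)
    if not safe:                                  # unsafe: restore the old state
        for j in range(m):
            available[j] += request[j]
            allocation[i][j] -= request[j]
            need[i][j] += request[j]
    return safe
```

Run on the five-process snapshot from the example that follows, this sketch reproduces the results stated there: the request (1,0,2) by P1 is granted, while the later requests by P4 and P0 are refused.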
Example

Consider a system with five processes P0 through P4 and three resource types: A (10 instances), B (5 instances), and C (7 instances). At time T0, the following snapshot of the system has been taken:

          Allocation   Max      Available
          A B C        A B C    A B C
    P0    0 1 0        7 5 3    3 3 2
    P1    2 0 0        3 2 2
    P2    3 0 2        9 0 2
    P3    2 1 1        2 2 2
    P4    0 0 2        4 3 3

Need = Max - Allocation:

          Need
          A B C
    P0    7 4 3
    P1    1 2 2
    P2    6 0 0
    P3    0 1 1
    P4    4 3 1

The system is in a safe state: the sequence <P1, P3, P4, P2, P0> satisfies the safety criteria.
Suppose process P1 requests one additional instance of resource type A and two instances of resource type C, so Request1 = (1,0,2). Can we grant the request?

First check that Request1 <= Available (that is, (1,0,2) <= (3,3,2)), which is true. We then pretend that this request has been fulfilled, and we arrive at the new state with Available = (2,3,0), Allocation1 = (3,0,2), and Need1 = (0,2,0).

We must determine whether this new system state is safe. To do so, we execute our safety algorithm and find that the sequence <P1, P3, P4, P0, P2> satisfies our safety requirement. Hence, we can immediately grant the request of process P1.

A request for (3,3,0) by P4 cannot be granted, since the resources are not available. A request for (0,2,0) by P0 cannot be granted, even though the resources are available, since the resulting state is unsafe.
DEADLOCK DETECTION

A system that does not employ deadlock prevention or avoidance requires:
• An algorithm that examines the state of the system to determine whether a deadlock has occurred
• An algorithm to recover from the deadlock

Single Instance of Each Resource Type

Uses a variant of the resource-allocation graph, called the wait-for graph.

Wait-for graph
We obtain this graph from the resource-allocation graph by removing the resource nodes and collapsing the appropriate edges. An edge Pi -> Pj in a wait-for graph implies that process Pi is waiting for process Pj to release a resource that Pi needs. An edge Pi -> Pj exists in a wait-for graph if and only if the corresponding resource-allocation graph contains two edges Pi -> Rq and Rq -> Pj for some resource Rq.

A deadlock exists in the system if and only if the wait-for graph contains a cycle.

Resource-allocation graph and corresponding wait-for graph
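The collapse-and-check procedure can be sketched in Python. This is an illustrative sketch under the single-instance assumption: edges are (from, to) tuples, and the names wait_for_graph, has_cycle, and deadlocked are invented here.

```python
def wait_for_graph(request_edges, assignment_edges):
    """Collapse a single-instance resource-allocation graph into a wait-for
    graph: an edge Pi -> Pj exists iff Pi requests some Rq assigned to Pj."""
    holder = {r: p for r, p in assignment_edges}          # Rq -> Pj
    return [(p, holder[r]) for p, r in request_edges if r in holder]

def has_cycle(edges):
    """Depth-first search for a back edge (i.e. a cycle) in a directed graph."""
    graph = {}
    for u, v in edges:
        graph.setdefault(u, []).append(v)
    visiting, done = set(), set()
    def dfs(node):
        visiting.add(node)
        for nxt in graph.get(node, []):
            if nxt in visiting or (nxt not in done and dfs(nxt)):
                return True
        visiting.remove(node)
        done.add(node)
        return False
    return any(n not in done and dfs(n) for n in graph)

def deadlocked(request_edges, assignment_edges):
    """A deadlock exists iff the wait-for graph contains a cycle."""
    return has_cycle(wait_for_graph(request_edges, assignment_edges))
```

For instance, if P1 requests R1 (held by P2) while P2 requests R2 (held by P1), the wait-for graph contains the cycle P1 -> P2 -> P1, so the system is deadlocked.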
Several Instances of a Resource Type

The wait-for graph scheme is not applicable to a resource-allocation system with multiple instances of each resource type.

DEADLOCK DETECTION ALGORITHM

Data structures used by the deadlock-detection algorithm:

Available: A vector of length m indicates the number of available resources of each type.

Allocation: An n x m matrix defines the number of resources of each type currently allocated to each process.

Request: An n x m matrix indicates the current request of each process. If Request[i,j] = k, then process Pi is requesting k more instances of resource type Rj.

ALGORITHM

1. Let Work and Finish be vectors of length m and n, respectively. Initialize Work = Available. For i = 0, 1, ..., n-1, if Allocationi != 0, then Finish[i] = false; otherwise, Finish[i] = true.

2. Find an index i such that both
a. Finish[i] == false
b. Requesti <= Work
If no such i exists, go to step 4.

3. Work = Work + Allocationi
Finish[i] = true
Go to step 2.

4. If Finish[i] == false for some i, 0 <= i < n, then the system is in a deadlocked state. Moreover, if Finish[i] == false, then process Pi is deadlocked.
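The detection algorithm above can be sketched in Python. As before this is an illustrative sketch: the matrices are lists of lists and the function name detect_deadlock is invented here.

```python
def detect_deadlock(available, allocation, request):
    """Deadlock detection for multiple instances per resource type.
    Returns the indices of deadlocked processes (empty list if none)."""
    n, m = len(allocation), len(available)
    work = list(available)
    # A process that holds no resources cannot contribute to a deadlock.
    finish = [all(a == 0 for a in allocation[i]) for i in range(n)]
    progressed = True
    while progressed:
        progressed = False
        for i in range(n):
            if not finish[i] and all(request[i][j] <= work[j] for j in range(m)):
                for j in range(m):
                    work[j] += allocation[i][j]   # optimistically let P_i finish
                finish[i] = True
                progressed = True
    return [i for i in range(n) if not finish[i]]
```

Note that, unlike the safety algorithm, the comparison is against each process's current Request rather than its Need: detection optimistically assumes a process whose outstanding request can be met will eventually return all its resources.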
Example

Consider five processes P0 through P4 and three resource types A, B, and C. Resource type A has 7 instances, resource type B has 2 instances, and resource type C has 6 instances. At time T0:

          Allocation   Request   Available
          A B C        A B C     A B C
    P0    0 1 0        0 0 0     0 0 0
    P1    2 0 0        2 0 2
    P2    3 0 3        0 0 0
    P3    2 1 1        1 0 0
    P4    0 0 2        0 0 2

The sequence <P0, P2, P3, P1, P4> will result in Finish[i] = true for all i, so the system is not deadlocked.

Suppose now that process P2 makes one additional request for an instance of type C, so the Request matrix is modified to Request2 = (0,0,1). The system is now deadlocked: processes P1, P2, P3, and P4 are deadlocked.
Detection-Algorithm Usage

Depends on:
1. How often is a deadlock likely to occur?
2. How many processes will be affected by deadlock when it happens?

If deadlocks occur frequently, then the detection algorithm should be invoked frequently.

RECOVERY FROM DEADLOCK

Methods
1. Process Termination
2. Resource Preemption

Process Termination

To eliminate deadlocks by aborting a process, we use one of two methods. In both methods, the system reclaims all resources allocated to the terminated processes.
1. Abort all deadlocked processes.
2. Abort one process at a time until the deadlock cycle is eliminated.

How to choose the process?
1. What the priority of the process is
2. How long the process has computed, and how much longer the process will compute before completing its designated task
3. How many and what type of resources the process has used (for example, whether the resources are simple to preempt)
4. How many more resources the process needs in order to complete
5. How many processes will need to be terminated
6. Whether the process is interactive or batch
Resource Preemption

To eliminate deadlocks using resource preemption, we successively preempt some resources from processes and give these resources to other processes until the deadlock cycle is broken.

If preemption is required to deal with deadlocks, then three issues need to be addressed:
1. Selecting a victim
2. Rollback
3. Starvation

COMBINED APPROACH TO DEADLOCK HANDLING

None of the basic approaches for handling deadlocks (prevention, avoidance, and detection) alone is appropriate for the entire spectrum of resource-allocation problems encountered in operating systems.

One possibility is to combine the three basic approaches, allowing the use of the optimal approach for each class of resources in the system. The proposed method is based on the notion that resources can be partitioned into classes that are hierarchically ordered. A resource-ordering technique is applied to the classes. Within each class, the most appropriate technique for handling deadlocks can be used.

Example

Consider a system that consists of the following four classes of resources:
• Internal resources: Resources used by the system, such as a process control block
• Central memory: Memory used by a user job
• Job resources: Assignable devices (such as a tape drive) and files
• Swappable space: Space for each user job on the backing store
One mixed deadlock solution for this system orders the classes as shown and uses the following approaches for each class:
• Internal resources: Prevention through resource ordering can be used, since run-time choices between pending requests are unnecessary.
• Central memory: Prevention through preemption can be used, since a job can always be swapped out and the central memory preempted.
• Job resources: Avoidance can be used, since the information needed about resource requirements can be obtained from the job-control cards.
• Swappable space: Preallocation can be used, since the maximum storage requirements are usually known.

SUMMARY

A deadlock state occurs when two or more processes are waiting indefinitely for an event that can be caused only by one of the waiting processes. Principally, there are three methods for dealing with deadlocks:
• Use some protocol to ensure that the system will never enter a deadlock state.
• Allow the system to enter a deadlock state and then recover.
• Ignore the problem altogether, and pretend that deadlocks never occur in the system.

A deadlock situation may occur if and only if four necessary conditions hold simultaneously in the system: mutual exclusion, hold and wait, no preemption, and circular wait. To prevent deadlocks, we ensure that at least one of the necessary conditions never holds.
Another method for avoiding deadlocks that is less stringent than the prevention algorithms is to have a priori information on how each process will be utilizing the resources. The banker's algorithm needs to know the maximum number of instances of each resource class that may be requested by each process. Using this information, we can define a deadlock-avoidance algorithm.

If a system does not employ a protocol to ensure that deadlocks will never occur, then a detection-and-recovery scheme must be employed. A deadlock-detection algorithm must be invoked to determine whether a deadlock has occurred. If a deadlock is detected, the system must recover either by terminating some of the deadlocked processes or by preempting resources from some of the deadlocked processes.

In a system that selects victims for rollback primarily on the basis of cost factors, starvation may occur, so that the selected process never completes its designated task.

Finally, researchers have argued that none of these basic approaches alone is appropriate for the entire spectrum of resource-allocation problems in operating systems. The basic approaches can be combined, allowing the separate selection of an optimal one for each class of resources in a system.
PROCESS SYNCHRONISATION

Cooperating Process

A cooperating process is one that can affect or be affected by the other processes executing in the system. Cooperating processes may either directly share a logical address space (that is, both code and data), or be allowed to share data only through files.

THE CRITICAL-SECTION PROBLEM

Consider a system consisting of n processes {P0, P1, ..., Pn-1}.
• Each process has a segment of code, called a critical section, in which the process may be changing common variables, updating a table, writing a file, and so on.
• The important feature of the system is that, when one process is executing in its critical section, no other process is to be allowed to execute in its critical section.
• The execution of critical sections by the processes is mutually exclusive in time.
• The critical-section problem is to design a protocol that the processes can use to cooperate.
• Each process must request permission to enter its critical section.
• The section of code implementing this request is the entry section.
• The critical section may be followed by an exit section.
• The remaining code is the remainder section.

A solution to the critical-section problem must satisfy the following three requirements:

1. Mutual Exclusion: If process Pi is executing in its critical section, then no other processes can be executing in their critical sections.

2. Progress: If no process is executing in its critical section and there exist some processes that wish to enter their critical sections, then only those processes that are not executing in their remainder section can participate
in the decision of which will enter its critical section next, and this selection cannot be postponed indefinitely.

3. Bounded Waiting: There exists a bound on the number of times that other processes are allowed to enter their critical sections after a process has made a request to enter its critical section and before that request is granted.

General structure of a typical process Pi

SYNCHRONIZATION HARDWARE

Uniprocessor environment: The critical-section problem can be solved by not allowing interrupts to occur while a shared variable is being modified.

Disabling interrupts on a multiprocessor can be time-consuming, as the message is passed to all the processors. This message passing delays entry into each critical section, and system efficiency decreases. Also, consider the effect on a system's clock, if the clock is kept updated by interrupts.

Many machines therefore provide special hardware instructions that allow us either to test and modify the content of a word, or to swap the contents of two words, atomically.
The important characteristic is that this instruction is executed atomically, that is, as one uninterruptible unit. Thus, if two Test-and-Set instructions are executed simultaneously (each on a different CPU), they will be executed sequentially in some arbitrary order. If the machine supports the Test-and-Set instruction, then we can implement mutual exclusion by declaring a Boolean variable lock, initialized to false.

SEMAPHORES

Semaphores are a synchronization tool used for more complex problems. A semaphore S is an integer variable that, apart from initialization, is accessed only through two standard atomic operations: wait and signal. The classical definitions of wait and signal are:

wait(S): while S <= 0 do no-op;
         S := S - 1;

signal(S): S := S + 1;

Modifications to the integer value of the semaphore in the wait and signal operations must be executed indivisibly.

Conditions
No two processes can simultaneously modify the same semaphore value. In addition, in the case of wait(S), the testing of the integer value of S (S <= 0) and its possible modification (S := S - 1) must also be executed without interruption.

Usage
We can use semaphores to deal with the n-process critical-section problem. The n processes share a semaphore, mutex (standing for mutual exclusion), initialized to 1.
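The Test-and-Set protocol can be demonstrated in Python. Since Python exposes no user-level atomic test-and-set instruction, this sketch simulates the atomicity of the instruction with a threading.Lock; the class and variable names are invented here, and the point is only to show the entry/exit-section structure built on the returned old value.

```python
import threading

class TestAndSet:
    """Simulates the atomic Test-and-Set instruction.  Real hardware reads
    the old value and writes true in one uninterruptible step; here a Lock
    merely stands in for that atomicity."""
    def __init__(self):
        self._target = False
        self._atomic = threading.Lock()

    def test_and_set(self):
        with self._atomic:        # pretend this body is one machine instruction
            old = self._target
            self._target = True
            return old

    def clear(self):
        with self._atomic:
            self._target = False

lock = TestAndSet()               # the Boolean variable "lock", initially false
counter = 0

def worker():
    global counter
    for _ in range(10_000):
        while lock.test_and_set():   # entry section: spin while lock is held
            pass
        counter += 1                 # critical section
        lock.clear()                 # exit section

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)   # 40000: no increment was lost
```

Because test_and_set returns true whenever the lock was already held, at most one process at a time sees false and enters the critical section.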
Implementation

The semaphore definitions above all require busy waiting: while a process is in its critical section, any other process that tries to enter its critical section must loop continuously in the entry code. This continual looping is clearly a problem in a real multiprogramming system, where a single CPU is shared among many processes. Busy waiting wastes CPU cycles that some other process might be able to use productively. This type of semaphore is also called a spinlock.

Semaphores are integer variables that are used to solve the critical-section problem by using two atomic operations, wait and signal, for process synchronization.
The definitions of wait and signal are as follows:

Wait
The wait operation decrements the value of its argument S if it is positive. If S is zero or negative, the process busy-waits until S becomes positive.

wait(S) {
    while (S <= 0)
        ;   // no-op
    S--;
}

Signal
The signal operation increments the value of its argument S.

signal(S) {
    S++;
}

Types of Semaphores

There are two main types of semaphores: counting semaphores and binary semaphores. Details about these are given as follows.

Counting Semaphores
These are integer-valued semaphores with an unrestricted value domain. They are used to coordinate resource access, where the semaphore count is the number of available resources. When resources are added, the semaphore count is incremented, and when resources are removed, the count is decremented.

Binary Semaphores
Binary semaphores are like counting semaphores, but their value is restricted to 0 and 1. The wait operation succeeds only when the semaphore is 1, and the signal operation sets it back to 1. It is sometimes easier to implement binary semaphores than counting semaphores.
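Both kinds of semaphore can be demonstrated with Python's threading module, whose Semaphore maps acquire to wait and release to signal. This is an illustrative sketch: the pool-of-three scenario and the counter names are invented here.

```python
import threading

# A counting semaphore with 3 permits models a pool of 3 identical resources.
pool = threading.Semaphore(3)
# A binary semaphore (value restricted to 0/1) acts as a mutex.
mutex = threading.Semaphore(1)

in_use = 0       # how many threads currently hold a resource
max_in_use = 0   # the highest concurrency ever observed

def use_resource():
    global in_use, max_in_use
    pool.acquire()            # wait(pool): blocks when all 3 permits are taken
    mutex.acquire()           # wait(mutex): protect the shared counters
    in_use += 1
    max_in_use = max(max_in_use, in_use)
    mutex.release()           # signal(mutex)
    # ... use the resource here ...
    mutex.acquire()
    in_use -= 1
    mutex.release()
    pool.release()            # signal(pool): hand the permit back

threads = [threading.Thread(target=use_resource) for _ in range(10)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(in_use, max_in_use <= 3)   # 0 True: never more than 3 holders at once
```

The counting semaphore bounds how many threads hold a resource simultaneously, while the binary semaphore serializes updates to the shared counters.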
Advantages of Semaphores

Some of the advantages of semaphores are as follows:
• Semaphores allow only one process into the critical section. They follow the mutual exclusion principle strictly and are much more efficient than some other methods of synchronization.
• With a blocking (non-busy-waiting) implementation, there is no resource wastage: processor time is not wasted repeatedly checking whether a condition is fulfilled before a process may access the critical section.
• Semaphores are implemented in the machine-independent code of the microkernel, so they are machine independent.

Disadvantages of Semaphores

Some of the disadvantages of semaphores are as follows:
• Semaphores are complicated, so the wait and signal operations must be implemented in the correct order to prevent deadlocks.
• Semaphores are impractical for large-scale use, as their use leads to loss of modularity. This happens because the wait and signal operations prevent the creation of a structured layout for the system.
• Semaphores may lead to priority inversion, where low-priority processes access the critical section first and high-priority processes later.

Deadlocks and Starvation

The implementation of a semaphore with a waiting queue may result in a situation where two or more processes are waiting indefinitely for an event that can be caused by only one of the waiting processes. The event in question is the execution of a signal operation. When such a state is reached, these processes are said to be deadlocked.
Example
Another problem related to deadlocks is indefinite blocking, or starvation: a situation where processes wait indefinitely within the semaphore. Indefinite blocking may occur if we add and remove processes from the list associated with a semaphore in LIFO (last-in, first-out) order.

CLASSICAL PROBLEMS OF SYNCHRONIZATION
The Bounded-Buffer Problem
The bounded-buffer problem (also known as the producer-consumer problem) is a classic example of concurrent access to a shared resource. A bounded buffer lets multiple producers and multiple consumers share a single buffer. Producers must block if the buffer is full; consumers must block if the buffer is empty.

About the Producer-Consumer problem
The producer-consumer problem is a classic problem used for synchronization between multiple processes. In its simplest form there is one producer that produces items and one consumer that consumes the items produced by the producer. The producer and consumer share the same fixed-size memory buffer.
The job of the producer is to generate data, put it into the buffer, and start generating again, while the job of the consumer is to consume the data from the buffer.

Problems that might occur in the Producer-Consumer problem
• The producer should produce data only when the buffer is not full. If the buffer is full, the producer must not put any data into the buffer.
• The consumer should consume data only when the buffer is not empty. If the buffer is empty, the consumer must not take any data from the buffer.
• The producer and consumer must not access the buffer at the same time.

The structure of the producer process
The empty and full semaphores count the number of empty and full buffer slots, respectively. The semaphore empty is initialized to the value n (the buffer size); the semaphore full is initialized to the value 0.
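The structure described above (empty initialized to n, full to 0, plus a mutex guarding the buffer itself) can be sketched in Python. The buffer size n = 3 and the integer items are assumptions made for the demo:

```python
import threading
from collections import deque

n = 3                                  # buffer capacity (assumed for the demo)
buffer = deque()
mutex = threading.Semaphore(1)         # mutual exclusion on the buffer
empty = threading.Semaphore(n)         # counts empty slots, initialized to n
full = threading.Semaphore(0)          # counts full slots, initialized to 0
consumed = []

def producer():
    for item in range(5):
        empty.acquire()                # wait(empty): block if no empty slot
        mutex.acquire()                # wait(mutex)
        buffer.append(item)            # add the item to the buffer
        mutex.release()                # signal(mutex)
        full.release()                 # signal(full): one more full slot

def consumer():
    for _ in range(5):
        full.acquire()                 # wait(full): block if buffer is empty
        mutex.acquire()                # wait(mutex)
        consumed.append(buffer.popleft())
        mutex.release()                # signal(mutex)
        empty.release()                # signal(empty): one more empty slot

p, c = threading.Thread(target=producer), threading.Thread(target=consumer)
p.start(); c.start(); p.join(); c.join()
print(consumed)                        # [0, 1, 2, 3, 4]
```

The producer blocks on empty when the buffer holds n items, and the consumer blocks on full when it holds none, which enforces the two blocking rules stated above.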
The Readers and Writers Problem
The readers-writers problem relates to an object, such as a file, that is shared between multiple processes. Some of these processes are readers, i.e. they only want to read the data from the object, and some are writers, i.e. they want to write into the object. The readers-writers problem is used to manage synchronization so that there are no problems with the object's data. For example, if two readers access the object at the same time there is no problem. However, if two writers, or a reader and a writer, access the object at the same time, there may be problems.

To solve this, a writer must get exclusive access to the object: while a writer is accessing the object, no reader or writer may access it. Multiple readers, however, can access the object at the same time. This can be implemented using semaphores. The code for the reader and writer processes is given as follows:

Reader Process

wait(mutex);
rc++;
if (rc == 1)
    wait(wrt);
signal(mutex);
. . . READ THE OBJECT . . .
wait(mutex);
rc--;
if (rc == 0)
    signal(wrt);
signal(mutex);
In the above code, mutex and wrt are semaphores initialized to 1, and rc is a variable initialized to 0. The mutex semaphore ensures mutual exclusion on rc, while wrt handles the writing mechanism and is common to the reader and writer code. The variable rc denotes the number of readers currently accessing the object. As soon as rc becomes 1, a wait operation is performed on wrt, so a writer can no longer access the object. After the read operation is done, rc is decremented; when rc becomes 0, a signal operation is performed on wrt, so a writer can access the object again.

Writer Process

wait(wrt);
. . . WRITE INTO THE OBJECT . . .
signal(wrt);

If a writer wants to access the object, a wait operation is performed on wrt, after which no other writer can access it. When the writer is done writing into the object, a signal operation is performed on wrt.

The Dining-Philosophers Problem
The dining-philosophers problem states that there are 5 philosophers sharing a circular table, and they alternately eat and think. There is a bowl of rice for each philosopher and 5 chopsticks. A philosopher needs both the left and right chopstick to eat; a hungry philosopher may eat only if both chopsticks are available. Otherwise the philosopher puts down any chopstick held and begins thinking again. The dining-philosophers problem is a classic synchronization problem because it represents a large class of concurrency-control problems.
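The reader and writer pseudocode above maps directly onto Python semaphores. The sketch below is an illustration with made-up reader/writer bodies: the first reader locks writers out via wrt, and the last reader lets them back in.

```python
import threading

mutex = threading.Semaphore(1)   # protects the reader count rc
wrt = threading.Semaphore(1)     # grants exclusive access for writing
rc = 0                           # number of readers currently reading
data = {"value": 0}              # the shared object (assumed for the demo)
seen = []

def reader():
    global rc
    mutex.acquire()              # wait(mutex)
    rc += 1
    if rc == 1:
        wrt.acquire()            # first reader locks out writers: wait(wrt)
    mutex.release()              # signal(mutex)
    seen.append(data["value"])   # READ THE OBJECT
    mutex.acquire()              # wait(mutex)
    rc -= 1
    if rc == 0:
        wrt.release()            # last reader readmits writers: signal(wrt)
    mutex.release()              # signal(mutex)

def writer():
    wrt.acquire()                # wait(wrt): exclusive access
    data["value"] += 1           # WRITE INTO THE OBJECT
    wrt.release()                # signal(wrt)

threads = [threading.Thread(target=writer)] + \
          [threading.Thread(target=reader) for _ in range(3)]
for t in threads: t.start()
for t in threads: t.join()
print(data["value"], len(seen))  # 1 3: one completed write, three completed reads
```

Because readers only touch wrt when rc passes through 1 or 0, any number of readers can overlap with each other while still excluding writers.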
Solution of the Dining-Philosophers Problem
A solution to the dining-philosophers problem is to use a semaphore to represent each chopstick. A chopstick can be picked up by executing a wait operation on its semaphore and released by executing a signal operation. The chopsticks are declared as follows:

semaphore chopstick[5];

Initially all elements of chopstick are initialized to 1, as the chopsticks are on the table and not yet picked up by any philosopher. The structure of philosopher i is given as follows:

do {
    wait(chopstick[i]);
    wait(chopstick[(i + 1) % 5]);
    . . . EATING THE RICE . . .
    signal(chopstick[i]);
    signal(chopstick[(i + 1) % 5]);
    . . . THINKING . . .
} while (1);

In the above structure, a wait operation is first performed on chopstick[i] and chopstick[(i+1) % 5], meaning that philosopher i has picked up the chopsticks on either side. Then the eating function is performed. After that, a signal operation is performed on chopstick[i] and chopstick[(i+1) % 5], meaning that philosopher i has finished eating and put down both chopsticks. Then the philosopher goes back to thinking.
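The per-chopstick semaphore scheme can be sketched in Python. Note one caveat not stated on the slides: if all five philosophers grab their left chopstick at the same instant, the pseudocode above deadlocks. The sketch below uses a common fix (an assumption beyond the slides), breaking the symmetry by having odd-numbered philosophers pick up the right chopstick first:

```python
import threading

N = 5
chopstick = [threading.Semaphore(1) for _ in range(N)]   # all on the table
meals = [0] * N

def philosopher(i, rounds=3):
    # Odd philosophers reverse the pick-up order to break the circular wait
    # that could deadlock the naive "everyone grabs left first" version.
    first, second = (i, (i + 1) % N) if i % 2 == 0 else ((i + 1) % N, i)
    for _ in range(rounds):
        chopstick[first].acquire()    # wait on the first chopstick
        chopstick[second].acquire()   # wait on the second chopstick
        meals[i] += 1                 # EATING THE RICE
        chopstick[first].release()    # put down the first chopstick
        chopstick[second].release()   # put down the second chopstick
        # THINKING happens here, outside the critical section

threads = [threading.Thread(target=philosopher, args=(i,)) for i in range(N)]
for t in threads: t.start()
for t in threads: t.join()
print(meals)                          # [3, 3, 3, 3, 3]: every philosopher ate
```

With the asymmetric ordering, at least one philosopher can always make progress, so all five threads finish their rounds.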
CRITICAL REGIONS
The critical-region construct can be effectively used to solve certain general synchronization problems.

MONITORS
Monitors are a synchronization construct created to overcome the problems caused by semaphores, such as timing errors. A monitor is an abstract data type containing shared data variables and procedures. The shared data variables cannot be accessed directly by a process; the procedures allow only a single process at a time to access them. This is demonstrated as follows:

monitor monitorName {
    // shared data variables

    procedure P1(. . .) {
    }

    procedure P2(. . .) {
    }

    procedure Pn(. . .) {
    }

    initialization code(. . .) {
    }
}

Only one process can be active in a monitor at a time. Other processes that need to access the shared variables in a monitor have to line up in a queue and are only granted access when the previous process releases the shared variables.

MONITOR VS SEMAPHORE
Both semaphores and monitors are used to solve the critical-section problem (they allow processes to access shared resources in mutual exclusion) and to achieve process synchronization in a multiprocessing environment.

MONITOR
A monitor is a high-level synchronization construct: an abstract data type that contains shared variables and the set of procedures that operate on them. When a process wishes to access the shared variables in the monitor, it must do so through these procedures. Such processes line up in a queue and are only granted access when the previous process releases the shared variables. Only one process can be active in a monitor at a time. A monitor has condition variables.

SEMAPHORE
A semaphore is a lower-level object: a non-negative integer variable whose value indicates the number of shared resources available in the system. The value of a semaphore can be modified only by the wait() and signal() operations (apart from initialization). When a process accesses a shared resource, it performs the wait() operation on the semaphore, and when it releases the resource, it performs the signal() operation. A semaphore does not have condition variables. While a process is modifying the value of the semaphore, no other process can simultaneously modify it.
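Python has no monitor keyword, but the monitor pattern described above can be approximated by a class whose methods all run under one shared lock, with a condition variable for waiting. The bounded counter below is an illustrative example, not from the slides:

```python
import threading

class BoundedCounter:
    """Monitor-style class: every method runs under one shared lock,
    so only one thread is 'active in the monitor' at a time."""

    def __init__(self, limit):
        self._lock = threading.Lock()                     # the monitor lock
        self._not_full = threading.Condition(self._lock)  # condition variable
        self._count = 0
        self._limit = limit

    def increment(self):
        with self._lock:                  # enter the monitor
            while self._count >= self._limit:
                self._not_full.wait()     # wait on the condition variable
            self._count += 1

    def decrement(self):
        with self._lock:                  # enter the monitor
            self._count -= 1
            self._not_full.notify()       # signal the condition variable

    def value(self):
        with self._lock:
            return self._count

c = BoundedCounter(limit=2)
c.increment(); c.increment(); c.decrement()
print(c.value())   # 1
```

The condition variable is what distinguishes this from a bare semaphore: a thread inside the monitor can wait on a condition and temporarily give up the monitor lock until another thread signals it.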