Operating Systems
Process Scheduling
Synchronisation
Deadlock
Ajit K Nayak, Ph.D.
SOA University
AKN/OSII.2 Introduction to Operating Systems
Communication Models
 Processes within a system may be either independent or
cooperating
 An independent process is one that cannot affect or
be affected by the other processes.
 i.e. it does not share data with any other process
 Cooperating process can affect or be affected by
other processes, including sharing data
 Advantages of cooperating processes:
 Information sharing
 Computation speedup: breaking a task into multiple subtasks
 Modularity: dividing the system functions into separate
processes or threads.
 Convenience: an individual user may work on many tasks at the
same time.
Interprocess Communication
 Cooperating processes need interprocess communication (IPC) to
allow them to exchange data and information.
 Two models of IPC
 Shared memory: Processes can exchange information by reading
and writing data to the shared region.
 Message passing: communication takes place by exchanging
messages between the cooperating processes.
 Message passing is useful for exchanging smaller amounts of data
and is easier to implement than shared memory for inter-computer
communication.
 Shared memory is faster than message passing as it can be done
at memory speeds when within a computer.
 Message passing systems are typically implemented using system
calls and thus require the more time-consuming intervention of the
kernel
Communications Models
Figure: (a) Message passing. (b) Shared memory.
Shared Memory
 In the cooperating-processes paradigm, a producer process
produces information that is consumed by a consumer
process (the producer-consumer, or bounded-buffer,
problem).
 To allow producer and consumer processes to run
concurrently, a shared buffer of items must be available that
can be filled by the producer and emptied by the consumer.
 The producer and consumer must be synchronized, so that
the consumer does not try to consume an item that has not
yet been produced.
[Figure: the producer writes into a shared buffer; the consumer reads from it.]
Which data structure would be suitable?
Producer-Consumer Problem
 May be implemented using two types of buffers
 unbounded-buffer
 No limit on the size of the buffer
 The consumer may have to wait for new items, but the
producer can always produce new items.
 bounded-buffer
 Fixed buffer size
 The consumer must wait if the buffer is empty, and the
producer must wait if the buffer is full.
Implementation
 May be implemented as a circular queue with two
pointers: in and out.
 in points to the next free position in the buffer;
 out points to the first full position in the buffer.
 The buffer is empty when in == out;
 The buffer is full when ((in + 1) % BUFFER_SIZE) == out.
 At most BUFFER_SIZE – 1 elements can therefore be stored.
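The in/out rules above translate directly into code. A minimal sketch (the buffer size and int items are assumptions for illustration; a real kernel buffer would carry arbitrary records):

```c
#include <assert.h>

#define BUFFER_SIZE 8               /* assumed size; holds at most 7 items */

static int buffer[BUFFER_SIZE];
static int in  = 0;                 /* next free position */
static int out = 0;                 /* first full position */

/* Returns 1 on success, 0 if the buffer is full. */
int produce(int item) {
    if ((in + 1) % BUFFER_SIZE == out)
        return 0;                   /* full: only BUFFER_SIZE-1 items fit */
    buffer[in] = item;
    in = (in + 1) % BUFFER_SIZE;
    return 1;
}

/* Returns 1 on success, 0 if the buffer is empty. */
int consume(int *item) {
    if (in == out)
        return 0;                   /* empty */
    *item = buffer[out];
    out = (out + 1) % BUFFER_SIZE;
    return 1;
}
```

Note how the full test sacrifices one slot so that in == out can unambiguously mean "empty".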
Message Passing
 Provides mechanism for processes to communicate
and to synchronize their actions.
 Unlike the shared-memory model, the communicating
processes may reside on different computers connected
by a network.
 Ex. a chat program on the Internet
 A message-passing system provides at least two
operations:
 send(message)
 receive(message)
 If processes P and Q wish to communicate, they need
to:
 Establish a communication link between them
 Exchange messages via send/receive
Message Passing (Cont.)
 Implementation issues:
 How are links established?
 Can a link be associated with more than two processes?
 How many links can there be between every pair of
communicating processes?
 What is the capacity of a link?
 Is the size of a message that the link can accommodate fixed or
variable?
 Is a link unidirectional or bi-directional?
Message Passing (Cont.)
 Implementation of communication link
 Physical:
 Shared memory
 Hardware bus
 Network
 Logical:
 Direct or indirect
 Synchronous or asynchronous
 Automatic or explicit buffering
Direct Communication
 Processes must name each other explicitly:
 send (P, message) – send a message to process P
 receive(Q, message) – receive a message from
process Q
 Properties of communication link
 Links are established automatically
 A link is associated with exactly one pair of
communicating processes
 Between each pair there exists exactly one link
 The link may be unidirectional, but is usually bi-directional
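As an aside, a unidirectional link with send/receive semantics can be sketched with a POSIX pipe. This is an assumption for illustration only: send_msg and recv_msg are hypothetical names, and a real direct-communication facility would name the peer process, not a file descriptor.

```c
#include <assert.h>
#include <string.h>
#include <unistd.h>

/* Send a NUL-terminated message down the write end of a link. */
int send_msg(int fd, const char *msg) {
    return write(fd, msg, strlen(msg) + 1) < 0 ? -1 : 0;
}

/* Receive a message from the read end of a link into buf. */
int recv_msg(int fd, char *buf, int cap) {
    return read(fd, buf, cap) < 0 ? -1 : 0;
}
```

In this sketch the pipe plays the role of the kernel-managed link; establishing it (via pipe()) corresponds to the "establish a communication link" step above.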
Indirect Communication
 Messages are sent to and received from
mailboxes (also referred to as ports)
 Each mailbox has a unique id
 Processes can communicate only if they share a
mailbox
 Properties of communication link
 Link established only if processes share a common
mailbox
 A link may be associated with many processes
 Each pair of processes may share several
communication links
 Link may be unidirectional or bi-directional
Indirect Communication
 Operations
 create a new mailbox (port)
 send and receive messages through mailbox
 destroy a mailbox
 Primitives are defined as:
 send(A, message) – send a message to mailbox A
 receive(A, message) – receive a message from mailbox A
Indirect Communication
 Mailbox sharing
 P1, P2, and P3 share mailbox A
 P1 sends; P2 and P3 receive
 Who gets the message?
 Solutions
 Allow a link to be associated with at most two
processes
 Allow only one process at a time to execute a
receive operation
 Allow the system to arbitrarily select the receiver;
the sender is notified who the receiver was.
CPU Scheduling
 CPU and I/O burst
 CPU Scheduler
 Preemption / non-preemption
 Problems with preemptive scheduling
 Dispatcher, Dispatch latency
 Scheduling criteria (CPU utilisation, throughput,
waiting time, turnaround time, response time)
 Gantt Chart
 FCFS
 SJF (non-preemptive, preemptive)
 Priority Scheduling
 Round-Robin Scheduling
CPU Scheduling
 Whenever the CPU becomes idle,
the operating system must select
one of the processes in the ready
queue to be executed.
 The selection process is carried out
by the short-term scheduler (or
CPU scheduler).
 Process execution consists of a
cycle of CPU execution and I/O
wait
 CPU burst followed by I/O burst
 CPU burst distribution is of main
concern
CPU Scheduler
 CPU scheduling decisions may take place in the following
situations:
 1. A process switches from running to waiting state (I/O or wait())
 2. switches from running to ready state (interrupt)
 3. switches from waiting to ready state (I/O completion)
 4. terminates
 Scheduling under 1 and 4 is nonpreemptive; the others
are preemptive
 Problems associated with preemptive scheduling
 two processes share data, and one is preempted while the other is
updating the data
 preemption while the kernel is changing important data (for instance, I/O queues)
 interrupts occurring during crucial OS activities
Dispatcher
 The dispatcher module gives control of the CPU to the
process selected by the short-term scheduler; this involves:
 context switching
 switching to user mode
 jumping to the proper location in the user program to restart
that program
 Dispatch latency – time it takes for the dispatcher to
stop one process and start another running
Scheduling Criteria
 To design an efficient scheduling algorithm, following
criteria may be considered
 CPU utilization – keep the CPU as busy as possible (a percentage)
 Throughput – number of processes that complete their
execution per time unit
 Turnaround time – amount of time to execute a particular
process
 Waiting time – amount of time a process has been waiting in
the ready queue
 Response time – amount of time from when a request
was submitted until the system starts responding (not
until it finishes producing output)
First-Come, First-Served (FCFS) Scheduling
Process Burst Time
P1 24 ms
P2 3 ms
P3 3 ms
 In this scheme, the process that requests the CPU first is allocated
the CPU first. The implementation of the FCFS policy is easily
managed with a FIFO queue.
 A Gantt chart is a bar chart that illustrates a particular schedule.
 Suppose that the processes arrive in the order: P1, P2, P3.
The Gantt chart for the schedule is:

| P1 | P2 | P3 |
0    24   27   30

 Waiting time for P1 = 0; P2 = 24; P3 = 27 ms
 Average waiting time: (0 + 24 + 27)/3 = 17 ms
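The FCFS waiting-time arithmetic above is easy to mechanize. A minimal sketch, assuming all processes arrive at time 0 and are served in array order (the function name is ours, not from the slides):

```c
#include <assert.h>

/* FCFS with all processes arriving at t = 0, served in array order.
   Each process waits for the total burst time of those ahead of it. */
double fcfs_avg_wait(const int burst[], int n) {
    int elapsed = 0, total_wait = 0;
    for (int i = 0; i < n; i++) {
        total_wait += elapsed;      /* waiting time of process i */
        elapsed += burst[i];        /* process i now occupies the CPU */
    }
    return (double)total_wait / n;
}
```

For bursts {24, 3, 3} this yields 17 ms, and for the reordered {3, 3, 24} it yields 3 ms, matching the two cases on the slides.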
FCFS contd.
 Find the average waiting time, if arrival order is P2, P3,
P1.
 Avg. waiting time = 3 ms, much better than previous
 There is a convoy effect as all the other processes wait
for the one big process to get off the CPU.
 Results in lower CPU and device utilization.
 FCFS scheduling algorithm is nonpreemptive.
 Therefore, not suitable for time-sharing systems
Shortest-Job-First (SJF)
 When the CPU is available, it is assigned to the process
that has the smallest next CPU burst.
 shortest-next-CPU-burst algorithm
 If the next CPU bursts of two processes are the same,
FCFS scheduling is used to break the tie.
 Example:
Process Burst Time
P1 6
P2 8
P3 7
P4 3

| P4 | P1 | P3 | P2 |
0    3    9    16   24

 Average Waiting Time?
 (0 + 3 + 9 + 16)/4 = 7 ms
SJF Contd.
 The SJF scheduling algorithm is optimal,
 It gives the minimum average waiting time for a given set of
processes.
 It can be used for long-term (job) scheduling in a batch
system.
 Limitation
 there is no way to know the length of the next CPU burst in
case of a short term scheduler.
 However, we may predict its value by assuming that
the next CPU burst will be similar in length to the
previous ones. (exponential averaging)
τn+1 = α·tn + (1 − α)·τn
where τn+1 is the predicted next CPU burst, tn is the length of the
current CPU burst, and τn is the previous prediction;
0 ≤ α ≤ 1 is a weight factor, normally α = 1/2
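The recurrence above is a one-liner in code; with α = 1/2, ever-older bursts contribute exponentially decaying weight. A sketch (the function name is ours):

```c
#include <assert.h>

/* Exponential averaging for predicting the next CPU burst:
   tau_next = alpha * t + (1 - alpha) * tau,
   where t is the most recent measured burst and tau the previous
   prediction. */
double predict_next_burst(double tau, double t, double alpha) {
    return alpha * t + (1.0 - alpha) * tau;
}
```

For example, with a previous prediction of 10 and a measured burst of 6, α = 1/2 predicts (10 + 6)/2 = 8 for the next burst.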
SJF Contd.
 SJF can be either preemptive or non-preemptive
 Preemptive SJF is known as shortest-remaining-time-first
 Example

Proc. A.Time Burst
P1 0 8
P2 1 4
P3 2 9
P4 3 5

Non-preemptive:
| P1 | P2 | P4 | P3 |
0    8    12   17   26

Preemptive:
| P1 | P2 | P4 | P1 | P3 |
0    1    5    10   17   26

 Average waiting time for non-preemptive SJF?
 [(0 − 0) + (8 − 1) + (12 − 3) + (17 − 2)]/4 = 7.75 ms
 Average waiting time for preemptive SJF?
 [(10 − 1) + (1 − 1) + (5 − 3) + (17 − 2)]/4 = 6.5 ms
Priority Scheduling
 A priority is associated with each process, and the CPU
is allocated to the process with the highest priority.
 Equal-priority processes are scheduled in FCFS order.
 An SJF algorithm is simply a priority algorithm where the
priority (p) is the inverse of the (predicted) next CPU
burst.
 The larger the CPU burst, the lower the priority, and vice versa.
 Example (lowest integer is highest priority)
Proc. Burst Priority
P1 10 3
P2 1 1
P3 2 4
P4 1 5
P5 5 2

| P2 | P5 | P1 | P3 | P4 |
0    1    6    16   18   19

 Average Waiting time?
 [0 + 1 + 6 + 16 + 18]/5 = 8.2 ms
Priority Scheduling contd.
 Priorities can be defined either internally or externally.
 Factors for internally defined priorities:
 time limits, memory requirements, the number of open files,
the ratio of average I/O burst to average CPU burst, etc.
 Factors for externally defined priorities:
 importance of the process, the type and amount of funds
being paid for computer use, the department sponsoring the
work, political factors, etc.
 Priority scheduling can be either preemptive or
nonpreemptive.
 A preemptive algorithm will preempt the CPU if the priority of
the newly arrived process is higher than the priority of the
currently running process.
 A nonpreemptive algorithm will simply put the new process at
the head of the ready queue.
Example: Preemptive Priority Sch.
P Priority AT CBT
P1 6 0 4
P2 5 1 2
P3 4 2 3
P4 1 3 5
P5 3 4 1
P6 0 5 4
P7 2 6 6

| P1 | P2 | P3 | P4 | P6 | P4 | P7 | P5 | P3 | P2 | P1 |
0    1    2    3    5    9    12   18   19   21   22   25

Waiting times:
P1 = 22 − 1 = 21
P2 = 21 − 2 = 19
P3 = 19 − 3 = 16
P4 = 9 − 5 = 4
P5 = 18 − 4 = 14
P6 = 5 − 5 = 0
P7 = 12 − 6 = 6
Average waiting time = 80/7 = 11.43 ms
Priority Scheduling contd.
 Priority scheduling suffers from a problem called indefinite
blocking, or starvation.
 In a heavily loaded computer system, a stream of higher-
priority processes can prevent a low-priority process from ever
getting the CPU.
 Rumor has it that when the IBM 7094 at MIT was shut down in 1973,
a low-priority process was found that had been submitted in 1967
and had not yet been run.
 Aging is a solution to the above problem
 is a technique of gradually increasing the priority of processes
that wait in the system for a long time.
 Eventually, even a process with an initial low priority would
have the highest priority in the system and would be executed.
Round-Robin Scheduling
 The round-robin (RR) scheduling is designed especially
for timesharing systems.
 It is FCFS scheduling with preemption to enable the
system to switch between processes.
 A small unit of time, called a time quantum or time slice,
is defined.
 A time quantum is generally from 10 to 100 ms in length.
 The ready queue is treated as a circular queue.
 The CPU scheduler goes around the ready queue,
allocating the CPU to each process for a time interval of
up to 1 time quantum.
 If CPU burst < 1 time quantum => the process releases the CPU voluntarily
 Else => preemption => context switch => the process is put at the tail of the ready queue
Round-Robin Example with TQ=4
 Process Burst
P1 24
P2 3
P3 3

| P1 | P2 | P3 | P1 | P1 | P1 | P1 | P1 |
0    4    7    10   14   18   22   26   30

 Average waiting time?
 [(10 − 4) + 4 + 7]/3 = 5.67 ms
 If there are n processes in the ready queue and the time quantum is
q, then each process gets 1/n of the CPU time in chunks of at most q
time units.
 Each process must wait no longer than (n - 1) x q time units until its
next time quantum.
 Example: Five processes and a time quantum of 20ms.
 i.e. each process will get up to 20 milliseconds in every 100ms.
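The RR mechanics described above (ready queue as a circular queue, preempted processes re-queued at the tail) can be simulated directly. A sketch, assuming all processes arrive at t = 0 and the queue initially holds them in index order:

```c
#include <assert.h>

#define MAXP 8
#define MAXQ 256

/* Round-robin simulation with all arrivals at t = 0.  Returns the
   total waiting time: completion time minus burst time, summed over
   all processes. */
int rr_total_wait(const int burst[], int n, int quantum) {
    int remaining[MAXP], queue[MAXQ];
    int head = 0, tail = 0, t = 0, total_wait = 0;

    for (int i = 0; i < n; i++) {
        remaining[i] = burst[i];
        queue[tail++] = i;          /* initial ready queue: 0..n-1 */
    }
    while (head < tail) {
        int p = queue[head++];
        int run = remaining[p] < quantum ? remaining[p] : quantum;
        t += run;
        remaining[p] -= run;
        if (remaining[p] > 0)
            queue[tail++] = p;      /* preempted: back to the tail */
        else
            total_wait += t - burst[p]; /* finished: time spent waiting */
    }
    return total_wait;
}
```

For the TQ = 4 example (bursts 24, 3, 3) this gives 6 + 4 + 7 = 17, i.e. the average of 17/3 ≈ 5.67 ms computed on the slide.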
Round-Robin Scheduling contd.
 If the time quantum is extremely large, the RR policy is
the same as the FCFS policy.
 If the time quantum is extremely small (e.g. 1 ms), the RR
approach is called processor sharing
 i.e. creates the appearance that each of n processes has its own
processor running at 1/n the speed of the real processor.
 Effect of time quantum on context switching
 Less time quantum => more context switch=> system slowdown
 Context switch time should be a small fraction of the time
quantum
 Effect of time quantum on turnaround time
 average turnaround time does not necessarily improve as the
time-quantum size increases.
 It can be improved if CPU bursts are ≤ time quantum.
Multilevel Queue Scheduling I
 Different processes may have different response-time
requirements and different scheduling needs.
 Ex: foreground processes (interactive) may have priority over
background (batch) processes.
 A multilevel queue scheduling algorithm partitions the
ready queue into several separate queues
Multi-level Queue Scheduling II
 Queues are created based on some property of the
process, such as memory size, process priority, or process
type.
 Each queue has its own scheduling algorithm.
 The foreground queue may be scheduled using RR algorithm,
while the background queue is scheduled by FCFS algorithm.
 In addition, there must be scheduling among the queues,
 commonly implemented as fixed-priority preemptive scheduling.
 Or time-slice among the queues. Each queue gets a certain
portion of the CPU time, which it can then schedule among its
various processes.
Multi-level Feedback Queue Sch.
 The multilevel feedback queue scheduling algorithm
allows a process to move between queues.
 If a process uses too much CPU time, it will be moved to a lower-
priority queue.
 This scheme leaves I/O-bound and interactive processes in the
higher-priority queues.
 A process that waits too long in a lower-priority queue
may be moved to a higher-priority queue.
 This form of aging prevents starvation.
 A multilevel feedback queue scheduler is defined by:
 The number of queues
 The scheduling algorithm for each queue
 The method used to determine when to upgrade a process to a
higher priority queue or to demote to a lower priority queue
Example
 A process entering the ready queue is put in queue 0.
 If it does not finish within 8 ms, it is moved to the tail of queue 1.
 If queue 0 is empty, the process at the head of queue 1
is considered for execution.
 If it does not complete in 16 ms, it is preempted and put
into queue 2.
 Processes in queue 2 are run on an FCFS basis but are run only
when queues 0 and 1 are empty.
 This scheduling algorithm gives highest priority to any process with
a CPU burst of 8 milliseconds or less.
Examples
Proc. BT Pri
P1 10 3
P2 1 1
P3 2 3
P4 1 4
P5 5 2

Draw the Gantt chart, then find the average turnaround time and
average waiting time for
1. FCFS
2. SJF
3. Priority (smallest number => highest)
4. RR

FCFS:
| P1 | P2 | P3 | P4 | P5 |
0    10   11   13   14   19

Avg. turnaround time = (10 + 11 + 13 + 14 + 19)/5 = 13.4
Avg. waiting time = (0 + 10 + 11 + 13 + 14)/5 = 9.6

SJF:
| P2 | P4 | P3 | P5 | P1 |
0    1    2    4    9    19

Avg. turnaround time = (19 + 1 + 4 + 2 + 9)/5 = 7
Avg. waiting time = (9 + 0 + 2 + 1 + 4)/5 = 3.2
Process Synchronization
 Processes can execute concurrently
 May be interrupted at any time, partially completing execution
 Concurrent access to shared data may result in data
inconsistency
 Maintaining data consistency requires mechanisms to
ensure the orderly execution of cooperating processes
 Illustration (producer-consumer problem):
 Let's use a variable counter that keeps track of the number of
full buffers, initialized to 0.
 It is incremented by the producer after it produces a new item
and decremented by the consumer after it consumes an item.
Producer:

while (true) {
    /* produce an item in next_produced */
    while (counter == BUFFER_SIZE)
        ;   /* do nothing */
    buffer[in] = next_produced;
    in = (in + 1) % BUFFER_SIZE;
    counter++;
}

Consumer:

while (true) {
    while (counter == 0)
        ;   /* do nothing */
    next_consumed = buffer[out];
    out = (out + 1) % BUFFER_SIZE;
    counter--;
    /* consume the item in next_consumed */
}
Illustration contd.
 Both the routines function correctly when executed separately
 May not function correctly when executed concurrently.
 counter++ could be implemented in M/C language as
register1 = counter
register1 = register1 + 1
counter = register1
 counter-- could be implemented in M/C language as
register2 = counter
register2 = register2 - 1
counter = register2
 Consider an interleaved execution with counter = 5 initially:
S0: producer: register1 = counter        {register1 = 5}
S1: producer: register1 = register1 + 1  {register1 = 6}
S2: consumer: register2 = counter        {register2 = 5}
S3: consumer: register2 = register2 - 1  {register2 = 4}
S4: producer: counter = register1        {counter = 6}
S5: consumer: counter = register2        {counter = 4}
 Find the value of counter after the above execution sequence
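The S0..S5 interleaving can be replayed deterministically: each "register" below models the private CPU register of one process, and the final assignment overwrites the producer's update.

```c
#include <assert.h>

/* Replay of the S0..S5 interleaving with counter initially 5. */
int interleave(void) {
    int counter = 5;
    int register1, register2;
    register1 = counter;        /* S0: producer reads 5 */
    register1 = register1 + 1;  /* S1: producer computes 6 */
    register2 = counter;        /* S2: consumer reads 5 (stale!) */
    register2 = register2 - 1;  /* S3: consumer computes 4 */
    counter = register1;        /* S4: counter = 6 */
    counter = register2;        /* S5: counter = 4 -- the producer's
                                   increment is lost */
    return counter;
}
```

The result is 4, one less than the correct value of 5; swapping S4 and S5 would instead leave 6.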
Race Condition
 The concurrent execution of counter++ and counter-- is
equivalent to a sequential execution in which the lower-
level statements are interleaved in some arbitrary order
 It arrives at the incorrect state counter == 4, indicating
that four buffers are full, when, in fact, five buffers are full.
 Further, if the order of the statements S4 and S5 were
reversed, we would arrive at another incorrect state,
counter == 6.
 This incorrect state is caused by allowing both processes
to manipulate the variable counter concurrently.
 A situation where several processes access and manipulate
the same data concurrently and the outcome depends on the
particular order of access is called a race condition.
Process Synchronization
 Race conditions occur frequently in operating systems.
 Further, with the growth of multicore systems,
applications use several threads sharing data, which
may lead to race conditions more often.
 To guard against the race condition , we need to
ensure that only one process at a time can be
manipulating the variable counter.
 To make such a guarantee, we require that the
processes be synchronized.
 Process synchronization is achieved by solving the critical-section
problem.
 A critical section is a code segment that accesses shared
variables and needs to be executed as an atomic action.
Critical Section Problem
 Consider system of n processes {p0, p1, … pn-1}
 Each process has a segment of code, called a critical
section, in which
 the process may be changing shared variables, updating a
table, writing a file, etc.
 It is required that no two processes execute in their
critical sections at the same time
 The critical-section problem is to design a protocol that the
processes can use to cooperate:
 each process must ask permission to enter its critical section in
the entry section code,
 exits through the exit section code,
 and the remaining code is in the remainder section
Critical Section of a Process
 A solution to the critical-section
problem must satisfy the following
three requirements:
 Mutual Exclusion - If process Pi is
executing in its critical section, then no
other processes can be executing in
their critical sections
 Progress - If no process is executing in its
critical section and there exist some
processes that wish to enter their critical
section, then the selection of the processes to execute in
critical section next, cannot be postponed indefinitely
 Bounded Waiting - A bound must exist on the number of times
that other processes are allowed to enter their critical sections
after a process has made a request to enter its critical section
and before that request is granted
Critical-Section Handling in OS
 Two approaches, depending on whether the kernel is
preemptive or non-preemptive
 Preemptive – allows preemption of a process while running in
kernel mode
 Non-preemptive – runs until it exits kernel mode, blocks, or
voluntarily yields the CPU; essentially free of race conditions
in kernel mode
 A preemptive kernel is more suitable for real-time
programming, and is more responsive, since no
kernel-mode process can run for an arbitrarily long
period.
Semaphores
 Semaphore S is an integer variable that can only be
accessed via two indivisible (atomic) operations,
wait(), signal()
wait(S) {
    while (S <= 0)
        ;   /* busy wait */
    S--;
}

signal(S) {
    S++;
}

 When one process modifies the semaphore value, no other
process can simultaneously modify the same semaphore.
 The testing of S (S <= 0) and the decrement S-- must be
executed without interruption.
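The busy-waiting semaphore above can be sketched at user level with C11 atomics; this is an assumption for illustration (real kernels make the test-and-decrement indivisible with hardware support or by disabling interrupts, not necessarily this way). The compare-exchange makes "test S, then S--" one indivisible step:

```c
#include <assert.h>
#include <stdatomic.h>

/* Busy-waiting (spinlock) semaphore sketch. */
void sem_wait_spin(atomic_int *s) {
    int v;
    for (;;) {
        while ((v = atomic_load(s)) <= 0)
            ;                                   /* busy wait */
        if (atomic_compare_exchange_weak(s, &v, v - 1))
            return;                             /* test + decrement done atomically */
    }
}

void sem_signal_spin(atomic_int *s) {
    atomic_fetch_add(s, 1);                     /* S++ atomically */
}
```

If the compare-exchange fails because another process decremented S first, the loop simply retests, which is exactly the "spin" behaviour the slides describe.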
Usage - I
 A semaphore may be used either as a counting semaphore
or as a binary semaphore.
 The value of a counting semaphore can range over
an unrestricted domain, but a binary semaphore
can range only between 0 and 1
 Binary semaphores are also called mutex locks
 Counting semaphores can be used to control access to a
given resource consisting of a finite number of instances.
 The semaphore is initialized to the number of resources
available.
 to use a resource => a wait() operation on the semaphore
 to release a resource => a signal() operation
 when the count for the semaphore goes to 0 => all resources are
being used
 after that, processes that wish to use a resource will block until
the count becomes greater than 0
Usage - II
 Binary semaphores may be used to deal with the
critical-section problem for multiple processes.
 The processes share a semaphore mutex, initialized to 1.
do {
wait (mutex) ;
/* critical section */
signal(mutex);
/*remainder section */
} while (TRUE);
 Suppose we require that S2 be executed only after S1
has completed; let the processes share a semaphore
synch, initialized to 0.

In P1:
    S1;
    signal(synch);

In P2:
    wait(synch);
    S2;
Implementation - I
 A disadvantage of this semaphore definition is that it
requires busy waiting
 i.e. while a process is in its critical section, any other process
that tries to enter its critical section must loop continuously in
the entry code.
 When a single CPU is shared among many processes, busy
waiting wastes CPU cycles that some other process might be
able to use productively.
 This type of semaphore is also called a spinlock because the
process "spins" while waiting for the lock.
 When a process executes the wait () operation and
finds that the semaphore value is not positive, it must
block itself instead of busy wait.
 The block operation places a process into a waiting queue
associated with the semaphore. CPU selects a new process.
Implementation II
 A semaphore is now defined as:

typedef struct {
    int value;
    struct process *list;
} semaphore;

wait(semaphore *S) {
    S->value--;
    if (S->value < 0) {
        add this process to S->list;
        block();
    }
}

signal(semaphore *S) {
    S->value++;
    if (S->value <= 0) {
        remove a process P from S->list;
        wakeup(P);
    }
}
Implementation - III
 When a process must wait on a semaphore, it is added
to the list of processes.
 A signal() operation removes one process from the list
of waiting processes and awakens that process.
 The block() operation suspends the process that
invokes it.
 The wakeup(P) operation resumes the execution of a
blocked process P.
 Unlike the classical "busy waiting" definition, semaphore
values may be negative in the "waiting queue"
implementation.
 If the value is negative, its magnitude is the number of
processes waiting on that semaphore
Deadlock and Starvation - I
 Deadlock – two or more processes are waiting indefinitely for an
event that can be caused by only one of the waiting processes.
 Let S and Q be two semaphores initialized to 1

P0:              P1:
wait(S);         wait(Q);
wait(Q);         wait(S);
...              ...
signal(S);       signal(Q);
signal(Q);       signal(S);
 Suppose that P0 executes wait(S) and then P1 executes wait(Q).
 When P0 executes wait(Q), it must wait until P1 executes
signal(Q).
 Similarly, when P1 executes wait(S), it must wait until P0 executes
signal(S).
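One common remedy for this particular deadlock (not shown on the slides, and the identifier names below are ours) is a fixed global acquisition order: if every process requests S before Q, the circular wait can never form.

```c
#include <assert.h>

/* Hypothetical lock identifiers; only the ordering rule matters. */
enum { LOCK_S = 0, LOCK_Q = 1 };

/* Sort the two requested locks into the global acquisition order.
   If both P0 and P1 acquire out[0] before out[1], neither can hold
   one semaphore while waiting for the other in the reverse order. */
void acquisition_order(int a, int b, int out[2]) {
    out[0] = a < b ? a : b;
    out[1] = a < b ? b : a;
}
```

With this rule, P1's wait(Q); wait(S) would be rewritten as wait(S); wait(Q), eliminating the cycle in the example above.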
Deadlock and Starvation - II
 Since these signal() operations cannot be executed, P0 and P1 are
deadlocked.
 A set of processes is in a deadlock state when every process in
the set is waiting for an event that can be caused only by
another process in the set.
 Starvation – indefinite blocking: a process may never be removed
from the semaphore queue in which it is suspended.
 Indefinite blocking may occur if we remove processes from the
list associated with a semaphore in LIFO (last-in, first-out) order.
Priority Inversion
 Consider three processes with their priority as L < M < H and
Assume that process H requires resource R, which is currently
being accessed by process L.
 Ordinarily, process H would wait for L to finish using resource R.
 However, now suppose that process M preempts process L.
 Indirectly, a process with a lower priority (M) has affected how
long a high priority process (H) must wait.
 This is known as priority inversion problem.
 This problem actually occurred on the Mars Pathfinder, whose
system was reset at least six times after landing.
 Solution: Priority inheritance
 All processes that are accessing resources needed by a higher-
priority process inherit the higher priority until they are finished with
the resources.
Classic Problems of Synchronization
 Classic problems used to test any newly
proposed synchronization scheme
 Bounded-Buffer Problem
 Readers and Writers Problem
 Dining-Philosophers Problem
Bounded-Buffer Problem
 n buffers, each can hold one item.
 Producer writes into the buffer, consumer reads from
the buffer
 Semaphore mutex
 initialized to the value 1
 Provides mutual exclusion for access to the buffer pool.
 Semaphore full
 initialized to the value 0
 Counts the number of full buffers
 Semaphore empty
 initialized to the value n
 Counts the number of empty buffers
Producer Process
 The structure of the producer process
do {
...
/* produce an item in next_produced */
...
wait(empty);
wait(mutex);
...
/* add next_produced to the buffer */
...
signal(mutex);
signal(full);
} while (true);
Consumer process
 The structure of the consumer process
do {
wait(full);
wait(mutex);
...
/* remove an item from buffer to next_consumed */
...
signal(mutex);
signal(empty);
...
/* consume the item in next consumed */
...
} while (true);
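The two loops above can be combined into a runnable sketch using counting semaphores. This is an illustrative Python translation of the slide's pseudocode; the buffer size and item count are arbitrary choices:

```python
import threading

BUFFER_SIZE = 5                            # n buffers (illustrative size)
buffer = []                                # shared buffer pool
mutex = threading.Semaphore(1)             # mutual exclusion for the buffer
empty = threading.Semaphore(BUFFER_SIZE)   # counts empty slots
full = threading.Semaphore(0)              # counts full slots

N_ITEMS = 20
consumed = []

def producer():
    for item in range(N_ITEMS):
        empty.acquire()                    # wait(empty)
        mutex.acquire()                    # wait(mutex)
        buffer.append(item)                # add next_produced to the buffer
        mutex.release()                    # signal(mutex)
        full.release()                     # signal(full)

def consumer():
    for _ in range(N_ITEMS):
        full.acquire()                     # wait(full)
        mutex.acquire()                    # wait(mutex)
        consumed.append(buffer.pop(0))     # remove an item from the buffer
        mutex.release()                    # signal(mutex)
        empty.release()                    # signal(empty)

p = threading.Thread(target=producer)
c = threading.Thread(target=consumer)
p.start(); c.start()
p.join(); c.join()
print(consumed == list(range(N_ITEMS)))   # True: FIFO order preserved
```

Note that the producer waits on empty before mutex; reversing that order can deadlock when the buffer is full.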
Readers-Writers Problem
 A database is shared among a number of concurrent processes
 Readers – only read the data set; they do not perform any updates
 Writers – can both read and write
 Problem – allow multiple readers to read at the same time
 Only one single writer can access the shared data at the same time
 The readers-writers problem has several variations
 A first readers-writers problem: requires that no reader be kept waiting
unless a writer has already obtained permission to use the shared
object.
 Shared variables
 Semaphore rw_mutex, provides mutual exclusion to writers, initialized
to 1
 Semaphore mutex, provides mutual exclusion to update readcount,
initialized to 1
 Integer read_count, used to keep track of how many processes are
currently reading dataset, initialized to 0
Reader Process
 The structure of the reader process
do {
wait(mutex);
read_count++;
if (read_count == 1)
wait(rw_mutex); /* first reader locks out writers */
signal(mutex);
...
/* reading is performed */
...
wait(mutex);
read_count--;
if (read_count == 0)
signal(rw_mutex); /* last reader admits writers */
signal(mutex);
} while (true);
Writer Process
 The structure of the writer process
do {
wait(rw_mutex);
...
/* writing is performed */
...
signal(rw_mutex);
} while (true);
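As a runnable sketch (Python, with names mirroring the slide's shared variables), five writers and five readers can exercise the protocol; the counts are arbitrary:

```python
import threading

rw_mutex = threading.Semaphore(1)   # exclusive access for writers
mutex = threading.Semaphore(1)      # protects read_count
read_count = 0
data = 0                            # the shared "database"
reads = []

def reader():
    global read_count
    mutex.acquire()
    read_count += 1
    if read_count == 1:
        rw_mutex.acquire()          # first reader locks out writers
    mutex.release()
    reads.append(data)              # reading is performed
    mutex.acquire()
    read_count -= 1
    if read_count == 0:
        rw_mutex.release()          # last reader admits writers
    mutex.release()

def writer():
    global data
    rw_mutex.acquire()
    data += 1                       # writing is performed
    rw_mutex.release()

threads = [threading.Thread(target=writer) for _ in range(5)]
threads += [threading.Thread(target=reader) for _ in range(5)]
for t in threads: t.start()
for t in threads: t.join()
print(data)                         # 5: all writes applied exclusively
```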
Reader-Writer Problem
 The solution referred to as the first readers-
writers problem.
 It requires that no reader be kept waiting unless a
writer has already obtained permission to use the
shared object.
 The second readers-writers problem requires
that
 Once a writer is ready, that writer performs its write
as soon as possible.
 In other words, if a writer is waiting to access the
object, no new readers may start reading.
 Both solutions may lead to starvation
Dining-Philosophers Problem
 It is a simple representation of allocating several resources among
several processes in a deadlock-free and starvation-free manner.
 Represent each chopstick with a semaphore:
 Semaphore chopstick[5]
 All elements are initialized to 1
 Philosophers spend their lives alternating between
thinking and eating
 They do not interact with their colleagues; when
hungry, a philosopher tries to pick up the two
chopsticks (one at a time) that are closest to her
 When a hungry philosopher has both her
chopsticks at the same time, she eats.
 When she is finished eating, she puts
down both of her chopsticks and starts
thinking again.
The structure of Philosopher i
do {
wait(chopstick[i]);
wait(chopstick[(i + 1) % 5]);
// eat
signal(chopstick[i]);
signal(chopstick[(i + 1) % 5]);
// think
} while (TRUE);
 Solution guarantees that no two neighbors are eating
simultaneously
 But may result in a deadlock
 If all five philosophers become hungry simultaneously and each
grabs her left chopstick.
 All the elements of chopstick will now be equal to 0. When they
try to grab their right chopstick, they will be delayed forever.
Deadlock handling
 Deadlock handling
 Soln 1: Allow at most 4 philosophers to be sitting
simultaneously at the table.
 Soln 2: Allow a philosopher to pick up her chopsticks only
if both are available (the picking must be done in a
critical section)
 Soln 3: An asymmetric solution
 An odd-numbered philosopher picks up first the left
chopstick and then the right chopstick.
 Even-numbered philosopher picks up first the right
chopstick and then the left chopstick.
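Soln 3 can be sketched as a runnable Python example in which each philosopher eats a fixed number of rounds (the round count is arbitrary). Because odd and even philosophers acquire their chopsticks in opposite orders, no circular wait can form:

```python
import threading

N = 5
chopstick = [threading.Semaphore(1) for _ in range(N)]
meals = [0] * N

def philosopher(i):
    # Asymmetric protocol: odd philosophers take the left chopstick
    # first, even philosophers take the right one first.
    first, second = (i, (i + 1) % N) if i % 2 else ((i + 1) % N, i)
    for _ in range(3):
        chopstick[first].acquire()
        chopstick[second].acquire()
        meals[i] += 1                    # eat
        chopstick[second].release()
        chopstick[first].release()
        # think

threads = [threading.Thread(target=philosopher, args=(i,)) for i in range(N)]
for t in threads: t.start()
for t in threads: t.join()
print(meals)   # [3, 3, 3, 3, 3]: everyone ate, no deadlock
```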
Problems with Semaphores
 Incorrect use of semaphore operations may
result in deadlock and starvation situations, e.g.:
 signal(mutex) …. wait(mutex)
 wait(mutex) … wait(mutex)
 Omitting wait(mutex) or signal(mutex) (or both)
Monitors
 A high-level abstraction that provides a convenient
and effective mechanism for process synchronization.
 Monitor is an Abstract Data Type (ADT),
 encapsulates private data with public methods to operate on
that data
 Only one process may be active within the monitor at a
time
monitor monitor-name
{
// shared variable declarations
procedure P1 (…) { …. }
procedure Pn (…) {……}
Initialization code (…) { … }
}
Monitor Contd.
 Condition variables may be
declared inside a monitor
 condition x, y;
 The only operations on these
variables are wait and signal
 i.e. when a process invokes
x.wait();, the process is
suspended until another process
invokes x.signal();
 The x.signal() operation
resumes exactly one
suspended process.
 It has no effect if no process is
suspended
Monitor Solution to Bounded Buffer
monitor BoundedBuffer {
condition full, empty;
int count;
void addItem(item){
if (count == BUFFER_SIZE)
full.wait(); // if buffer is full, block
// put item in buffer
count = count + 1;
if (count == 1)
empty.signal(); // awake consumer
}
item removeItem(){
if (count == 0)
empty.wait(); // if buffer is empty, block
// remove item from buffer
count = count - 1;
if (count == BUFFER_SIZE-1)
full.signal(); // awake producer
return item;
} initializationCode() { count =0; }
} // end monitor
Monitor Solution to Bounded Buffer Contd.
void producer() {
while (true) {
item = produceItem();
BoundedBuffer.addItem(item);
}
}
void consumer() {
while (true) {
item = BoundedBuffer.removeItem();
consumeItem(item);
}
}
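A language without built-in monitors can emulate one with a lock plus condition variables. Below is a Python sketch of the BoundedBuffer monitor; note that it uses while instead of if around the waits, because a signalled thread in Python re-checks the condition (Mesa semantics) rather than running immediately as in the Hoare-style monitor the slide's pseudocode assumes:

```python
import threading

class BoundedBuffer:
    """Monitor-style bounded buffer: one lock, two condition variables."""
    def __init__(self, size):
        self.size = size
        self.items = []
        self.lock = threading.Lock()                      # one active process
        self.not_full = threading.Condition(self.lock)    # the 'full' condition
        self.not_empty = threading.Condition(self.lock)   # the 'empty' condition

    def add_item(self, item):
        with self.lock:
            while len(self.items) == self.size:   # Mesa semantics: re-check
                self.not_full.wait()              # if buffer is full, block
            self.items.append(item)
            self.not_empty.notify()               # awake a consumer

    def remove_item(self):
        with self.lock:
            while not self.items:
                self.not_empty.wait()             # if buffer is empty, block
            item = self.items.pop(0)
            self.not_full.notify()                # awake a producer
            return item

buf = BoundedBuffer(3)
out = []
p = threading.Thread(target=lambda: [buf.add_item(i) for i in range(10)])
c = threading.Thread(target=lambda: [out.append(buf.remove_item()) for _ in range(10)])
p.start(); c.start(); p.join(); c.join()
print(out)   # 0 through 9 in FIFO order
```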
Monitor Solution to Reader-Writer.
monitor ReaderWriter{
condition read, readWrite;
int readCount;
bool busy;
void startRead(){
if (busy)
read.wait();
readCount=readCount+1;
read.signal();
}
void endRead(){
readCount=readCount-1;
if (readCount==0)
readWrite.signal();
}
Monitor Solution to Reader-Writer contd.
void startWrite(){
if (busy || readCount != 0)
readWrite.wait();
busy = true;
}
void endWrite(){
busy = false;
if (read.queue())
read.signal();
else
readWrite.signal();
}
initializationCode(){ readCount =0;
busy= false;
}
} // end monitor;
Monitor Solution to Dining Philosophers
monitor DiningPhilosophers {
enum {THINKING, HUNGRY, EATING} state[5];
condition self[5];
void pickup(int i) {
state[i] = HUNGRY;
test(i); // sets state to EATING if both chopsticks are free
if (state[i] != EATING) self[i].wait();
}
void putdown(int i) {
state[i] = THINKING;
// test left and right neighbors
test((i + 4) % 5);
test((i + 1) % 5);
}
Solution to Dining Philosophers (Cont.)
void test(int i) {
if ((state[(i + 4) % 5] != EATING) &&
(state[i] == HUNGRY) &&
(state[(i + 1) % 5] != EATING)) {
state[i] = EATING;
self[i].signal();
}
}
initializationCode() {
for (int i = 0; i < 5; i++)
state[i] = THINKING;
}
}
Solution to Dining Philosophers (Cont.)
 Each philosopher i invokes the operations pickup()
and putdown() in the following sequence:
DiningPhilosophers.pickup(i);
EAT
DiningPhilosophers.putdown(i);
 No deadlock, but starvation is possible
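A Python emulation of the monitor: one lock shared by per-philosopher condition variables stands in for the monitor's single-active-process rule, and while replaces the slide's if because Python conditions have Mesa semantics. The meal count is an arbitrary choice:

```python
import threading

N = 5
THINKING, HUNGRY, EATING = 0, 1, 2

class DiningPhilosophers:
    def __init__(self):
        self.state = [THINKING] * N
        self.lock = threading.Lock()     # only one process active in the monitor
        self.self_cond = [threading.Condition(self.lock) for _ in range(N)]

    def _test(self, i):
        # i may start eating only if neither neighbour is eating
        if (self.state[(i + 4) % N] != EATING and self.state[i] == HUNGRY
                and self.state[(i + 1) % N] != EATING):
            self.state[i] = EATING
            self.self_cond[i].notify()

    def pickup(self, i):
        with self.lock:
            self.state[i] = HUNGRY
            self._test(i)
            while self.state[i] != EATING:   # Mesa semantics: re-check
                self.self_cond[i].wait()

    def putdown(self, i):
        with self.lock:
            self.state[i] = THINKING
            self._test((i + 4) % N)          # wake neighbours that can now eat
            self._test((i + 1) % N)

table = DiningPhilosophers()
meals = [0] * N

def philosopher(i):
    for _ in range(3):
        table.pickup(i)
        meals[i] += 1        # eat
        table.putdown(i)
        # think

threads = [threading.Thread(target=philosopher, args=(i,)) for i in range(N)]
for t in threads: t.start()
for t in threads: t.join()
print(meals)   # [3, 3, 3, 3, 3]
```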
DEADLOCK
 Several processes may compete for a finite
number of resources.
 A process requests resources; if the resources
are not available at that time, the process
enters a waiting state.
 Sometimes, a waiting process is never again
able to change state, because the resources
it has requested are held by other waiting
processes.
 This situation is called a deadlock
System Model
 System consists of resources
 Resource types R1, R2, . . ., Rm
CPU cycles, memory space, I/O devices
 Each resource type Ri has Wi instances.
 Each process utilizes a resource as follows:
 request : either gets the resource or waits
 use
 Release
 A set of processes is in a deadlocked state when
every process in the set is waiting for an event that
can be caused only by another process in the set.
(Figure: P1 holds R1 and waits for R2, while P2 holds R2 and waits for R1.)
Necessary Conditions
 Deadlock can arise if four conditions hold simultaneously.
 Mutual exclusion: only one process at a time can use
a resource.
 Hold and wait: a process holding at least one
resource is waiting to acquire additional resources
held by other processes
 No preemption: a resource can be released only
voluntarily by the process holding it
 Circular wait: there exists a set {P0, P1, …, Pn} of waiting
processes such that P0 is waiting for a resource that is
held by P1, P1 is waiting for a resource that is held by P2,
…, Pn–1 is waiting for a resource that is held by Pn, and
Pn is waiting for a resource that is held by P0.
Resource-Allocation Graph
 Deadlocks can be described more precisely in terms of
a directed graph called Resource-Allocation Graph
 Vertices are partitioned into two types:
 P = {P1, P2, …, Pn}, the set consisting of all the processes in the
system
 Represented as a circle
 R = {R1, R2, …, Rm}, the set consisting of all resource types in the
system
 Represented as a rectangle
 Each instance of a resource type is represented as dots
 Two types of edges
 request edge – directed edge Pi -> Rj
 assignment edge – directed edge Rj -> Pi
Resource-Allocation Graph contd.
 If the graph contains no cycle => no deadlock
 If the graph contains a cycle =>
 if only one instance per resource type, then
deadlock
 if several instances per resource type, possibility of
deadlock
Example I
 P = {P1, P2, P3}, R= {R1, R2, R3, R4}
 Resource instances:
 One instance of R1 and R3
 Two instances of R2
 Three instances of R4
 Process states:
 Process P1 is holding an instance of R2
and is waiting for an instance of R1.
 Process P2 is holding an instance of R1
and an instance of R2 and is waiting for
an instance of R3.
 Process P3 is holding an instance of R3 .
No cycles => No deadlock
Example II
 In addition to previous example
 Process states:
 P3 requests one instance of R2.
 Cycles now exist in RAG
 P1 -> R1 -> P2 -> R3 -> P3 -> R2 -> P1
 P2 -> R3 -> P3 -> R2 -> P2
 So, Processes P1, P2, and P3 are
deadlocked.
Example III
 P = {P1, P2, P3 ,P4}, R= {R1, R2}
 Resource instances:
 Two instances of R1 and R2
 Process states:
 Process P1 is holding an instance of R2
and is waiting for an instance of R1.
 Process P2 is holding an instance of R1
 Process P3 is holding an instance of R1
and waiting for R2 .
 P4 is holding an instance of R2.
 Cycle exists
 P1 -> R1 -> P3 -> R2 -> P1
 But there is no deadlock, as P4 may
release R2 that can be allocated to P3
Methods for Handling Deadlocks
 Deadlock may be dealt with in one of the following ways
 Use protocols to prevent or avoid deadlocks
 Allow the system to enter a deadlock state, then detect and
recover from it
 Ignore the problem altogether and pretend that deadlocks never
occur in the system.
 The third method is used by most operating systems, including
UNIX and Windows
 Deadlock prevention provides a set of methods for ensuring that
at least one of the necessary conditions cannot hold.
 Deadlock avoidance requires prior information regarding the
resource requirement of process, so that OS decides
allocation/release of resources so that deadlock is avoided.
 If a system does not employ either a deadlock-prevention or a
deadlock-avoidance algorithm, then a deadlock situation may
arise. The system then provides algorithms to recover from the
deadlock
Deadlock Prevention
 Restrain the ways request can be made
 Mutual Exclusion – not required for sharable resources
(e.g., read-only files); must hold for non-sharable
resources (printer)
 A process never needs to wait for a sharable resource.
 The mutual-exclusion condition must hold for nonsharable
resources.
 Hold and Wait – must guarantee that whenever a
process requests a resource, it does not hold any other
resources
 Require process to request and be allocated all its resources
before it begins execution, or allow process to request
resources only when the process has none allocated to it.
 Low resource utilization; starvation possible
Deadlock Prevention (Cont.)
 No Preemption –
 If a process holding resources requests another resource that
cannot be immediately allocated to it, then all resources
currently being held by the requesting process are released
 Process will be restarted only when it can regain its old
resources, as well as the new ones that it is requesting
 Circular Wait
 impose a total ordering of all resource types, and require that
each process requests resources in an increasing order of
enumeration
 Example protocol: each process requests resources only in
increasing order of enumeration.
 That is, a process can initially request any number of instances
of a resource type – say, Ri.
 After that, the process can request instances of resource type
Rj if and only if F(Rj) > F(Ri).
Protocol to avoid circular wait
 Alternatively, a process requesting an instance of resource type Rj
must have released any resources Ri such that F(Ri) ≥ F(Rj).
 Proof: circular-wait can’t hold (proof by contradiction)
 Let the set of processes involved in the circular wait be { P0 , P1,
... , Pn}, and resources R = {R0, R1, . . . , Rn}
 Where, Pi is waiting for a resource Ri, which is held by process
Pi+1, . . . Pn is waiting for a resource Rn held by P0
 Then, since process Pi+1 is holding resource Ri while requesting
resource Ri+1, we must have F(Ri) < F(Ri+1) for all i.
 But this condition means that F(R0) < F(R1) < ... < F(Rn) < F (R0).
 By transitivity, F(R0) < F(R0), which is impossible
 Therefore, there can be no circular-wait. (proved)
 Note: the ordering by itself doesn’t prevent circular wait; it is
prevented by writing programs that follow the ordering.
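In code, the ordering discipline amounts to sorting the locks a thread needs by their rank before acquiring them. A minimal Python sketch, where the rank table plays the role of F and all names are illustrative:

```python
import threading

# Assign each resource a rank F(R); always acquire in increasing rank.
locks = {name: threading.Lock() for name in ("R1", "R2", "R3")}
rank = {"R1": 1, "R2": 2, "R3": 3}     # the total ordering F

def acquire_in_order(names):
    """Take every requested lock, lowest rank first."""
    ordered = sorted(names, key=lambda n: rank[n])
    for n in ordered:
        locks[n].acquire()
    return ordered

def release_all(names):
    for n in names:
        locks[n].release()

log = []
def worker(wanted):
    held = acquire_in_order(wanted)
    log.append(tuple(held))            # critical section
    release_all(held)

# Both threads want R1 and R2, but name them in opposite orders;
# the ordering rule makes them agree, so no circular wait can form.
t1 = threading.Thread(target=worker, args=(["R1", "R2"],))
t2 = threading.Thread(target=worker, args=(["R2", "R1"],))
t1.start(); t2.start(); t1.join(); t2.join()
print(log)   # both entries are ('R1', 'R2')
```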
Deadlock Avoidance
 Requires that the system has some additional a priori
information available
 Protocol: each process declares the maximum
number of resources of each type that it may need
 The deadlock-avoidance algorithm dynamically
examines the resource-allocation state and decides, for
each request, whether the process should wait
in order to avoid a possible future deadlock.
 i.e. it ensures that a circular wait can never exist.
 Resource-allocation state: the number of available and
allocated resources, and the maximum demands of the
processes
Deadlock Avoidance: Safe State
 When a process requests an available resource, system
must decide, if immediate allocation leaves the system
in a safe state
 safe state: A system is said to be in safe state, only if
there exists a safe sequence
 A sequence of processes <P1, P2, …, Pn> is a safe sequence, if for
each Pi
 the resources that Pi can still request can be satisfied by currently
available resources + resources held by all the Pj, with j < i
 That is, If resource needs of Pi are immediately not available,
then Pi can wait until all Pj have finished
 When Pj is finished, Pi can obtain needed resources, execute,
return allocated resources, and terminate
 When Pi terminates, Pi +1 can obtain its needed resources, and so
on
Safe state contd.
 If a system is in a safe state => no deadlocks
 A deadlock state is an unsafe state, but not all unsafe
states are deadlocks; an unsafe state may lead to
deadlock.
 i.e. unsafe state => possibility of deadlock
 Avoidance => ensure that the system will never enter an
unsafe state.
 Example: 12 resources and 3 processes
 t0: the system is in a safe state
 t1: a resource is allocated to P1 => the system remains in a safe state
 t1: a resource is allocated to P2 => unsafe state => may lead to deadlock
Avoidance Algorithms
 Single instance of a resource type
 Resource-Allocation Graph Algorithm
 Multiple instances of a resource type
 Banker’s algorithm
Resource-Allocation Graph
 Claim edge Pi -> Rj indicates that process Pi may
request resource Rj; represented by a dashed line
 Claim edge converts to request edge when a
process requests a resource
 Request edge converted to an assignment
edge when the resource is allocated to the
process
 When a resource is released by a process, assignment
edge reconverts to a claim edge
 Resources must be claimed a priori in the system
 Alg: The request of process Pi for resource Rj is
granted, only if converting the request edge to
an assignment edge does not result in the
formation of a cycle in the resource allocation
graph
 Exp: If P2 requests R2 => the request can’t be granted
 Because converting the claim edge P2 -> R2 into an assignment edge
would form a cycle
Banker’s Algorithm
 Used when there are multiple instances per
resource type.
 Each process must claim maximum use in
advance
 When a process requests a resource it may
have to wait if
 resource not available or
 Allocating a resource leaves the system in unsafe
state
 When a process gets all its resources it must
return them in a finite amount of time
Data Structures for the Banker’s Algorithm
 Let n = number of processes, and m = number of resource types
 Available: vector of length m.
 If Available[j] = k, there are k instances of resource type Rj
available
 Max: n x m matrix.
 If Max[i][j] = k, then process Pi may request at most k
instances of resource type Rj
 Allocation: n x m matrix.
 If Allocation[i][j] = k, then Pi is currently allocated k instances of Rj
 Need: n x m matrix.
 If Need[i][j] = k, then Pi may need k more instances of Rj to
complete its task
Need[i][j] = Max[i][j] – Allocation[i][j]
Banker’s Safety Algorithm
1. Let Work and Finish be vectors of length m and n,
respectively. Initialize:
Work = Available
Finish [ i ] = false for i = 0, 1, …, n- 1
2. Find an i such that both:
(a) Finish [ i ] = false
(b) Needi ≤ Work
If no such i exists, go to step 4
3. Work = Work + Allocationi
Finish[i] = true
go to step 2
4. If Finish [i] == true for all i, then the system is in a safe
state
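The four steps translate directly into code. A Python sketch of the safety algorithm, run here on the snapshot from the worked example that follows:

```python
def is_safe(available, allocation, need):
    """Banker's safety algorithm: returns a safe sequence, or None."""
    n, m = len(allocation), len(available)
    work = list(available)                 # step 1
    finish = [False] * n
    sequence = []
    while True:
        found = False
        for i in range(n):                 # step 2: find a satisfiable Pi
            if not finish[i] and all(need[i][j] <= work[j] for j in range(m)):
                for j in range(m):         # step 3: Pi runs, returns resources
                    work[j] += allocation[i][j]
                finish[i] = True
                sequence.append(i)
                found = True
        if not found:
            break
    return sequence if all(finish) else None   # step 4

# Snapshot from the worked example: 5 processes, resource types A, B, C
allocation = [[0,1,0], [2,0,0], [3,0,2], [2,1,1], [0,0,2]]
max_claim  = [[7,5,3], [3,2,2], [9,0,2], [2,2,2], [4,3,3]]
need = [[max_claim[i][j] - allocation[i][j] for j in range(3)] for i in range(5)]
print(is_safe([3,3,2], allocation, need))   # [1, 3, 4, 0, 2] => <P1, P3, P4, P0, P2>
```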
Resource-Request Algorithm for Process Pi
 Requesti = request vector for process Pi.
 Requesti[j] = k means Pi wants k instances of Rj
 Algorithm
1. If Requesti ≤ Needi, go to step 2.
a) Otherwise, raise an error, since the process has exceeded its maximum claim
2. If Requesti ≤ Available, go to step 3.
a) Otherwise Pi must wait, since the resources are not available
3. Pretend to allocate the requested resources to Pi by modifying
the state as follows:
a) Available = Available – Requesti;
b) Allocationi = Allocationi + Requesti;
c) Needi = Needi – Requesti;
4. If the resulting state is safe => the resources are allocated to Pi
a) Otherwise Pi must wait, and the old resource-allocation state is
restored
Example of Banker’s Algorithm
 5 processes P0 … P4; and 3 resource types
 A (10 instances), B (5 instances), and C (7 instances)
 Snapshot at time T0:
Allocation MaxClaim Available
A B C A B C A B C
P0 0 1 0 7 5 3 3 3 2
P1 2 0 0 3 2 2
P2 3 0 2 9 0 2
P3 2 1 1 2 2 2
P4 0 0 2 4 3 3
 Calculate Need as
 MaxClaim - Allocation
Need
A B C
P0 7 4 3
P1 1 2 2
P2 6 0 0
P3 0 1 1
P4 4 3 1
P1 Request (1,0,2)
 Check that Request ≤ Available (that is, (1,0,2) ≤ (3,3,2)) => true
 So find the new state at time T1
Allocation Need Available
A B C A B C A B C
P0 0 1 0 7 4 3 2 3 0
P1 3 0 2 0 2 0
P2 3 0 2 6 0 0
P3 2 1 1 0 1 1
P4 0 0 2 4 3 1
 Now execute safety algorithm and find the safe sequence
1. Work = Available, Finish [ i ] = false for i = 0, 1, 2, 3, 4
2. Find an i such that
Finish[i] = false and Needi ≤ Work
If no such i exists, go to step 4
3. Work = Work + Allocationi
Finish[i] = true, go to step 2
4. If Finish [i] == true for all i, is in a safe state
 Safe sequence is < P1, P3, P4, P0, P2>, so request allowed.
 Can request for (3,3,0) by P4 be granted? At T0, if yes then find the
safety sequence
 Can request for (0,2,0) by P0 be granted? At T0 , if yes then find
the safety sequence
Deadlock Detection/Recovery
 Three steps of action
 Allow system to enter deadlock state
 Algorithm to detect the deadlock state
 Apply recovery scheme
 Single Instance of each resource type
 wait-for graph (variation of RAG)
 Only process nodes (no resource node)
 Pi -> Pj if Pi is waiting for Pj to release a resource
 If there is a cycle => a deadlock exists
 The cycle-detection algorithm requires on the order of n² operations,
where n is the number of vertices in the graph.
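Cycle detection in a wait-for graph is a standard depth-first search. A Python sketch (the graph encoding as a dict of adjacency lists is illustrative):

```python
def has_cycle(wait_for):
    """DFS cycle check on a wait-for graph {process: [processes it waits on]}."""
    WHITE, GRAY, BLACK = 0, 1, 2          # unvisited / on stack / done
    color = {p: WHITE for p in wait_for}
    def dfs(p):
        color[p] = GRAY
        for q in wait_for.get(p, []):
            if color.get(q, WHITE) == GRAY:        # back edge: cycle found
                return True
            if color.get(q, WHITE) == WHITE and dfs(q):
                return True
        color[p] = BLACK
        return False
    return any(color[p] == WHITE and dfs(p) for p in wait_for)

# P1 waits on P2, P2 waits on P3, P3 waits on P1: deadlock
print(has_cycle({"P1": ["P2"], "P2": ["P3"], "P3": ["P1"]}))   # True
print(has_cycle({"P1": ["P2"], "P2": ["P3"], "P3": []}))       # False
```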
Resource-Allocation Graph and Wait-for Graph
Resource-Allocation Graph
Corresponding wait-for graph
Several Instances of a Resource Type
 Available:
 A vector of length m => the number of available
instances of each resource type
 Allocation:
 An n x m matrix => the number of instances of each
resource type currently allocated to each process
 Request:
 An n x m matrix => the current request of each
process.
 If Request[i][j] = k, then process Pi is requesting k
more instances of resource type Rj.
Detection Algorithm
1. Let Work and Finish be vectors of length m and n, initialized as
follows
(a) Work = Available
(b) For i = 1, 2, …, n, if Allocationi ≠ 0, then
Finish[i] = false; otherwise, Finish[i] = true
2. Find an index i such that:
(a) Finish[i] == false
(b) Requesti ≤ Work
If no such i exists, go to step 4
3. Work = Work + Allocationi
Finish[i] = true
go to step 2
4. If Finish[i] == false for some i, 1 ≤ i ≤ n, then the system is in a
deadlocked state.
Moreover, if Finish[i] == false, then Pi is deadlocked
The algorithm requires O(m × n²) operations to detect
a deadlocked state
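A Python sketch of the detection algorithm, run on the snapshot from the example that follows:

```python
def detect_deadlock(available, allocation, request):
    """Deadlock-detection algorithm: returns the list of deadlocked processes."""
    n, m = len(allocation), len(available)
    work = list(available)
    # A process holding nothing cannot be part of a deadlock
    finish = [all(a == 0 for a in allocation[i]) for i in range(n)]
    progressed = True
    while progressed:
        progressed = False
        for i in range(n):
            if not finish[i] and all(request[i][j] <= work[j] for j in range(m)):
                for j in range(m):
                    work[j] += allocation[i][j]   # Pi finishes, releases all
                finish[i] = True
                progressed = True
    return [i for i in range(n) if not finish[i]]  # the deadlocked set

# Snapshot from the worked example: resource types A, B, C
allocation = [[0,1,0], [2,0,0], [3,0,3], [2,1,1], [0,0,2]]
request    = [[0,0,0], [2,0,2], [0,0,0], [1,0,0], [0,0,2]]
print(detect_deadlock([0,0,0], allocation, request))   # [] : no deadlock
# After P2 requests one additional instance of C:
request[2] = [0, 0, 1]
print(detect_deadlock([0,0,0], allocation, request))   # [1, 2, 3, 4]
```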
AKN/OSII.106Introduction to Operating Systems
Example of Detection Algorithm
 Five processes P0 through P4; three resource types
A (7 instances), B (2 instances), and C (6 instances)
 Snapshot at time T0:
Allocation Request Available
A B C A B C A B C
P0 0 1 0 0 0 0 0 0 0
P1 2 0 0 2 0 2
P2 3 0 3 0 0 0
P3 2 1 1 1 0 0
P4 0 0 2 0 0 2
 Is there a deadlock? If not, find an execution sequence.
 Sequence <P0, P2, P3, P1, P4> will result in Finish[i] = true for all i,
so there is no deadlock
 Suppose P2 requests one additional instance of type C; then
determine the system status
 i.e. no deadlock => a process execution sequence
 deadlock => the list of deadlocked processes
Detection-Algorithm Usage
 When, and how often, to invoke depends on:
 How often a deadlock is likely to occur?
 How many processes will be affected?
 Frequent invocation => computation overhead
 Infrequent invocation => many deadlocks may accumulate
before detection, making it difficult to tell which process
originated one
 Invoke either at a defined interval (say, once per hour) or when
CPU utilization drops below a certain point (say,
40%)
 Recovery from deadlock
 Process Termination
 Resource Preemption
Recovery from Deadlock: Process Termination
 Abort all deadlocked processes
 Very expensive w.r.t. CPU time
 i.e. the partial computations of all the processes are lost
 Abort one process at a time until the deadlock cycle is
eliminated
 Also expensive, since the deadlock-detection algorithm must be
re-run after each process is aborted
 Factors to consider to choose a process to abort
1. Priority of the process
2. How long process has computed, and how much longer to
completion
3. Resources the process has used
4. Resources process needs to complete
5. How many processes will need to be terminated
6. Is process interactive or batch?
Recovery from Deadlock: Resource Preemption
 Selecting a victim – minimize cost
 e.g. the number of resources the process is holding, the
amount of time it has already executed
 Rollback – return to some safe state, restart the
process from that state
 Since it is difficult to determine a safe intermediate state,
total rollback (abort and restart) is usually preferred
 Starvation – the same process may always be
picked as the victim
 include the number of rollbacks in the cost factor

Operating Systems Part II-Process Scheduling, Synchronisation & Deadlock

  • 1.
  • 2.
    AKN/OSII.2Introduction to OperatingSystems Communication Models  Processes within a system may be either independent or cooperating  Independent processes is a process cannot affect or be affected by the other processes.  i.e. does not share data with any other process  Cooperating process can affect or be affected by other processes, including sharing data  Advantages of cooperating processes:  Information sharing  Computation speedup: breaking a task into multiple subtasks  Modularity: dividing the system functions into separate processes or threads.  Convenience: Individual user may work on many tasks at the same time.
  • 3.
    AKN/OSII.3Introduction to OperatingSystems Interprocess Communication  Cooperating processes need interprocess communication (IPC) to allow them to exchange data and information.  Two models of IPC  Shared memory: Processes can exchange information by reading and writing data to the shared region.  Message passing: communication takes place by exchanging messages between the cooperating processes.  Message passing is useful for exchanging smaller amounts of data easier to implement than is shared memory for inter-computer communication.  Shared memory is faster than message passing as it can be done at memory speeds when within a computer.  Message passing systems are typically implemented using system calls and thus require the more time consuming task of kernel intervention
  • 4.
    AKN/OSII.4Introduction to OperatingSystems Communications Models (a) Message passing. (b) shared memory.
  • 5.
    AKN/OSII.5Introduction to OperatingSystems Shared Memory  In cooperating processes paradigm, a producer process produces information that is consumed by a consumer process. (Producer-consumer problem, bounded-buffer problem)  To allow producer and consumer processes to run concurrently, a shared buffer of items must be available that can be filled by the producer and emptied by the consumer.  The producer and consumer must be synchronized, so that the consumer does not try to consume an item that has not yet been produced. Producer Consumer Reading Writing Shared buffer Which data structure would be suitable?
  • 6.
    AKN/OSII.6Introduction to OperatingSystems Producer-Consumer Problem  May be implemented using two types of buffers  unbounded-buffer  No limit on the size of the buffer  The consumer may have to wait for new items, but the producer can always produce new items.  bounded-buffer  Fixed buffer size  The consumer must wait if the buffer is empty, and the producer must wait if the buffer is full.
  • 7.
    AKN/OSII.7Introduction to OperatingSystems Implementation  May be implemented as a circular queue with two pointers: in and out.  in points to the next free position in the buffer;  out points to the first full position in the buffer.  The buffer is empty when in == out;  The buffer is full when ((in + 1) % BUFFER_SIZE) == out.  Max elements allowed are BUFFER_SIZE – 1;
  • 8.
    AKN/OSII.8Introduction to OperatingSystems Message Passing  Provides mechanism for processes to communicate and to synchronize their actions.  Unlike shared buffer, in this case the communicating processes may reside on different computers connected by a network.  Ex. A chat program in Internet  Message passing system provides at least two operations:  send(message)  receive(message)  If processes P and Q wish to communicate, they need to:  Establish a communication link between them  Exchange messages via send/receive
  • 9.
    AKN/OSII.9Introduction to OperatingSystems Message Passing (Cont.)  Implementation issues:  How are links established?  Can a link be associated with more than two processes?  How many links can there be between every pair of communicating processes?  What is the capacity of a link?  Is the size of a message that the link can accommodate fixed or variable?  Is a link unidirectional or bi-directional?
  • 10.
    AKN/OSII.10Introduction to OperatingSystems Message Passing (Cont.)  Implementation of communication link  Physical:  Shared memory  Hardware bus  Network  Logical:  Direct or indirect  Synchronous or asynchronous  Automatic or explicit buffering
  • 11.
    AKN/OSII.11Introduction to OperatingSystems Direct Communication  Processes must name each other explicitly:  send (P, message) – send a message to process P  receive(Q, message) – receive a message from process Q  Properties of communication link  Links are established automatically  A link is associated with exactly one pair of communicating processes  Between each pair there exists exactly one link  The link may be unidirectional, but is usually bi- directional
  • 12.
    AKN/OSII.12Introduction to OperatingSystems Indirect Communication  Messages are directed and received from mailboxes (also referred to as ports)  Each mailbox has a unique id  Processes can communicate only if they share a mailbox  Properties of communication link  Link established only if processes share a common mailbox  A link may be associated with many processes  Each pair of processes may share several communication links  Link may be unidirectional or bi-directional
  • 13.
    AKN/OSII.13Introduction to OperatingSystems Indirect Communication  Operations  create a new mailbox (port)  send and receive messages through mailbox  destroy a mailbox  Primitives are defined as: send(A, message) – send a message to mailbox A receive(A, message) – receive a message from mailbox A
  • 14.
    AKN/OSII.14Introduction to OperatingSystems Indirect Communication  Mailbox sharing  P1, P2, and P3 share mailbox A  P1, sends; P2 and P3 receive  Who gets the message?  Solutions  Allow a link to be associated with at most two processes  Allow only one process at a time to execute a receive operation  Allow the system to select arbitrarily the receiver. Sender is notified who the receiver was.
  • 15.
    AKN/OSII.15Introduction to OperatingSystems CPU Scheduling  CPU and I/O burst  CPU Scheduler  Preemption / non-preemption  Problems with preemptive scheduling  Dispatcher, Dispatch latency  Scheduling criteria (CPU utilisation, throughput, waiting time, turn around time, response time)  Gantt Chart  FCFS  SJF (non-premptive, preemptive)  Priority Scheduling  Round-Robin Scheduling
  • 16.
    AKN/OSII.16Introduction to OperatingSystems CPU Scheduling  Whenever the CPU becomes idle, the operating system must select one of the processes in the ready queue to be executed.  The selection process is carried out by the short-term scheduler (or CPU scheduler).  Process execution consists of a cycle of CPU execution and I/O wait  CPU burst followed by I/O burst  CPU burst distribution is of main concern
  • 17.
    AKN/OSII.17Introduction to OperatingSystems CPU Scheduler  CPU scheduling decisions may take place in different situations  Process switches from running to waiting state (I/O or wait())  Switches from running to ready state (interrupt)  Switches from waiting to ready state (I/O complete)  Terminates  Scheduling under 1 and 4 is nonpreemptive and others are preemptive  Problems associated with Preemptive scheduling  two processes shares data, one is preempted by other while updating data  While changing important kernel data (for instance, I/0 queues)  Consider interrupts occurring during crucial OS activities
    AKN/OSII.18Introduction to OperatingSystems Dispatcher  Dispatcher module gives control of the CPU to the process selected by the short-term scheduler;  Context switching  switching to user mode  jumping to the proper location in the user program to restart that program  Dispatch latency – time it takes for the dispatcher to stop one process and start another running
AKN/OSII.19 Scheduling Criteria
 To design an efficient scheduling algorithm, the following criteria may be considered:
 CPU utilization – keep the CPU as busy as possible (expressed as a percentage)
 Throughput – number of processes that complete their execution per time unit
 Turnaround time – amount of time to execute a particular process
 Waiting time – amount of time a process has been waiting in the ready queue
 Response time – amount of time from when a request is submitted until the CPU/device starts responding (not until output is complete)
AKN/OSII.20 First-Come, First-Served (FCFS) Scheduling
 In this scheme, the process that requests the CPU first is allocated the CPU first. The implementation of the FCFS policy is easily managed with a FIFO queue.
 A Gantt chart is a bar chart that illustrates a particular schedule.
 Process / Burst Time: P1 24 ms, P2 3 ms, P3 3 ms
 Suppose that the processes arrive in the order P1, P2, P3. The Gantt chart for the schedule is:
 | P1 | P2 | P3 |
 0    24   27   30
 Waiting time for P1 = 0, P2 = 24, P3 = 27 ms
 Average waiting time: (0 + 24 + 27)/3 = 17 ms
    AKN/OSII.21Introduction to OperatingSystems FCFS contd.  Find the average waiting time, if arrival order is P2, P3, P1.  Avg. waiting time = 3 ms, much better than previous  There is a convoy effect as all the other processes wait for the one big process to get off the CPU.  Results in lower CPU and device utilization.  FCFS scheduling algorithm is nonpreemptive.  Therefore, not suitable for time-sharing systems
AKN/OSII.22 Shortest-Job-First (SJF)
 When the CPU is available, it is assigned to the process that has the smallest next CPU burst (shortest-next-CPU-burst algorithm).
 If the next CPU bursts of two processes are the same, FCFS scheduling is used to break the tie.
 Example: Process / Burst Time: P1 6, P2 8, P3 7, P4 3
 | P4 | P1 | P3 | P2 |
 0    3    9    16   24
 Average waiting time? (0 + 3 + 9 + 16)/4 = 7 ms
AKN/OSII.23 SJF Contd.
 The SJF scheduling algorithm is optimal: it gives the minimum average waiting time for a given set of processes.
 It can be used for long-term (job) scheduling in a batch system.
 Limitation: there is no way to know the length of the next CPU burst in the case of a short-term scheduler.
 However, we may predict its value by assuming that the next CPU burst will be similar in length to the previous ones (exponential averaging):
 τ(n+1) = α·t(n) + (1 − α)·τ(n)
 where τ(n+1) is the predicted length of the next CPU burst, t(n) is the length of the nth (most recent) CPU burst, and α (0 ≤ α ≤ 1) is a weight factor, normally α = 1/2.
AKN/OSII.24 SJF Contd.
 SJF can be either preemptive or non-preemptive. Preemptive SJF is known as shortest-remaining-time-first (SRTF).
 Example: Process / Arrival Time / Burst Time: P1 0 8, P2 1 4, P3 2 9, P4 3 5
 Non-preemptive: | P1 | P2 | P4 | P3 |
                 0    8    12   17   26
 Preemptive:     | P1 | P2 | P4 | P1 | P3 |
                 0    1    5    10   17   26
 Average waiting time for non-preemptive SJF?
 [(0 − 0) + (8 − 1) + (12 − 3) + (17 − 2)]/4 = 7.75 ms
 Average waiting time for preemptive SJF?
 [(0 − 0) + (10 − 1) + (1 − 1) + (17 − 2) + (5 − 3)]/4 = (0 + 9 + 0 + 15 + 2)/4 = 6.5 ms
 (P1 contributes two waiting intervals: before its first run and again after being preempted.)
AKN/OSII.25 Priority Scheduling
 A priority is associated with each process, and the CPU is allocated to the process with the highest priority. Equal-priority processes are scheduled in FCFS order.
 An SJF algorithm is simply a priority algorithm where the priority (p) is the inverse of the (predicted) next CPU burst: the larger the CPU burst, the lower the priority, and vice versa.
 Example (lowest integer = highest priority): Process / Burst / Priority: P1 10 3, P2 1 1, P3 2 4, P4 1 5, P5 5 2
 | P2 | P5 | P1 | P3 | P4 |
 0    1    6    16   18   19
 Average waiting time? (0 + 1 + 6 + 16 + 18)/5 = 8.2 ms
AKN/OSII.26 Priority Scheduling contd.
 Priorities can be defined either internally or externally.
 Factors for internally defined priorities: time limits, memory requirements, the number of open files, the ratio of average I/O burst to average CPU burst, etc.
 Factors for externally defined priorities: importance of the process, the type and amount of funds being paid for computer use, the department sponsoring the work, political factors, etc.
 Priority scheduling can be either preemptive or nonpreemptive.
 A preemptive algorithm will preempt the CPU if the priority of the newly arrived process is higher than the priority of the currently running process.
 A nonpreemptive algorithm will simply put the new process at the head of the ready queue.
AKN/OSII.27 Example: Preemptive Priority Sch.
 Process / Priority / Arrival Time / CPU Burst Time (lowest integer = highest priority):
 P1 6 0 4, P2 5 1 2, P3 4 2 3, P4 1 3 5, P5 3 4 1, P6 0 5 4, P7 2 6 6
 | P1 | P2 | P3 | P4 | P6 | P4 | P7 | P5 | P3 | P2 | P1 |
 0    1    2    3    5    9    12   18   19   21   22   25
 Waiting times:
 P1 = 25 − 0 − 4 = 21, P2 = 22 − 1 − 2 = 19, P3 = 21 − 2 − 3 = 16, P4 = 12 − 3 − 5 = 4,
 P5 = 19 − 4 − 1 = 14, P6 = 9 − 5 − 4 = 0, P7 = 18 − 6 − 6 = 6
 Average waiting time = 80/7 = 11.43 ms
AKN/OSII.28 Priority Scheduling contd.
 It suffers from a problem called indefinite blocking, or starvation.
 In a heavily loaded computer system, a stream of higher-priority processes can prevent a low-priority process from ever getting the CPU.
 Rumor has it that when they shut down the IBM 7094 at MIT in 1973, they found a low-priority process that had been submitted in 1967 and had not yet been run.
 Aging is a solution to the above problem: it is a technique of gradually increasing the priority of processes that wait in the system for a long time.
 Eventually, even a process with an initially low priority would attain the highest priority in the system and be executed.
    AKN/OSII.29Introduction to OperatingSystems Round-Robin Scheduling  The round-robin (RR) scheduling is designed especially for timesharing systems.  It is FCFS scheduling with preemption to enable the system to switch between processes.  A small unit of time, called a time quantum or time slice, is defined.  A time quantum is generally from 10 to 100 ms in length.  The ready queue is treated as a circular queue.  The CPU scheduler goes around the ready queue, allocating the CPU to each process for a time interval of up to 1 time quantum.  If CPU burst < 1 time quantum => process releases the CPU  Else preemption=>context switch => process put to the tail of RQ
AKN/OSII.30 Round-Robin Example with TQ = 4
 Process / Burst: P1 24, P2 3, P3 3
 | P1 | P2 | P3 | P1 | P1 | P1 | P1 | P1 |
 0    4    7    10   14   18   22   26   30
 Average waiting time? [(10 − 4) + 4 + 7]/3 = 5.67 ms
 If there are n processes in the ready queue and the time quantum is q, then each process gets 1/n of the CPU time in chunks of at most q time units.
 Each process must wait no longer than (n − 1) × q time units until its next time quantum.
 Example: five processes and a time quantum of 20 ms, i.e. each process will get up to 20 milliseconds in every 100 ms.
    AKN/OSII.31Introduction to OperatingSystems Round-Robin Scheduling contd.  if the time quantum is extremely large, the RR policy is the same as the FCFS policy.  if the time quantum is extremely small (1 ms), the RR approach is called processor sharing  i.e. creates the appearance that each of n processes has its own processor running at 1/n the speed of the real processor.  Effect of time quantum on context switching  Less time quantum => more context switch=> system slowdown  Context switch time should be a small fraction of the time quantum  Effect of time quantum on turnaround time  average turnaround time does not necessarily improve as the time-quantum size increases.  It can be improved if CPU bursts are ≤ time quantum.
    AKN/OSII.32Introduction to OperatingSystems Multilevel Queue Scheduling I  Different processes may have different response-time requirements and different scheduling needs.  Ex: foreground processes (interactive) may have priority over background (batch) processes.  A multilevel queue scheduling algorithm partitions the ready queue into several separate queues
AKN/OSII.33 Multi-level Queue Scheduling II
 Queues are created based on some property of the process, such as memory size, process priority, or process type.
 Each queue has its own scheduling algorithm. The foreground queue may be scheduled using the RR algorithm, while the background queue is scheduled by the FCFS algorithm.
 There must also be scheduling among the queues, commonly implemented as fixed-priority preemptive scheduling.
 Alternatively, time-slice among the queues: each queue gets a certain portion of the CPU time, which it can then schedule among its various processes.
    AKN/OSII.34Introduction to OperatingSystems Multi-level Feedback Queue Sch.  The multilevel feedback queue scheduling algorithm, allows a process to move between queues.  If a process uses too much CPU time, it will be moved to a lower- priority queue.  This scheme leaves I/O-bound and interactive processes in the higher-priority queues.  A process that waits too long in a lower-priority queue may be moved to a higher-priority queue.  This form of aging prevents starvation.  A multilevel feedback queue scheduler is defined by:  The number of queues  The scheduling algorithm for each queue  The method used to determine when to upgrade a process to a higher priority queue or to demote to a lower priority queue
AKN/OSII.35 Example
 A process entering the ready queue is put in queue 0. If it does not finish within 8 ms, it is moved to the tail of queue 1.
 If queue 0 is empty, the process at the head of queue 1 is considered for execution. If it does not complete in 16 ms, it is preempted and put into queue 2.
 Processes in queue 2 are run on an FCFS basis, but only when queues 0 and 1 are empty.
 This scheduling algorithm gives highest priority to any process with a CPU burst of 8 milliseconds or less.
AKN/OSII.37 Examples
 Process / Burst / Priority: P1 10 3, P2 1 1, P3 2 3, P4 1 4, P5 5 2
 Draw the Gantt chart and find the average turnaround time and average waiting time for: 1. FCFS 2. SJF 3. Priority (smallest = highest) 4. RR
 FCFS: | P1 | P2 | P3 | P4 | P5 |
       0    10   11   13   14   19
 FCFS: avg. turnaround time = (10 + 11 + 13 + 14 + 19)/5 = 13.4; avg. waiting time = (0 + 10 + 11 + 13 + 14)/5 = 9.6
 SJF:  | P2 | P4 | P3 | P5 | P1 |
       0    1    2    4    9    19
 SJF: avg. turnaround time = (19 + 1 + 4 + 2 + 9)/5 = 7; avg. waiting time = (9 + 0 + 2 + 1 + 4)/5 = 3.2
AKN/OSII.42 Process Synchronization
 Processes can execute concurrently and may be interrupted at any time, partially completing execution.
 Concurrent access to shared data may result in data inconsistency.
 Maintaining data consistency requires mechanisms to ensure the orderly execution of cooperating processes.
 Illustration (producer-consumer problem):
 Let's use a variable counter that keeps track of the number of full buffers, initialized to 0.
 It is incremented by the producer after it produces a new item and decremented by the consumer after it consumes an item.
  • 43.
AKN/OSII.43 Producer and Consumer
 Producer:
 while (true) {
   /* produce an item in next_produced */
   while (counter == BUFFER_SIZE)
     ; /* do nothing */
   buffer[in] = next_produced;
   in = (in + 1) % BUFFER_SIZE;
   counter++;
 }
 Consumer:
 while (true) {
   while (counter == 0)
     ; /* do nothing */
   next_consumed = buffer[out];
   out = (out + 1) % BUFFER_SIZE;
   counter--;
   /* consume the item in next_consumed */
 }
  • 44.
AKN/OSII.44 Illustration contd.
 Both routines function correctly when executed separately, but may not function correctly when executed concurrently.
 counter++ could be implemented in machine language as:
 register1 = counter
 register1 = register1 + 1
 counter = register1
 counter-- could be implemented in machine language as:
 register2 = counter
 register2 = register2 - 1
 counter = register2
 Consider an interleaved execution with counter = 5 initially:
 S0: producer: register1 = counter      {register1 = 5}
 S1: producer: register1 = register1 + 1 {register1 = 6}
 S2: consumer: register2 = counter      {register2 = 5}
 S3: consumer: register2 = register2 - 1 {register2 = 4}
 S4: producer: counter = register1      {counter = 6}
 S5: consumer: counter = register2      {counter = 4}
 Find the value of counter after the above execution sequence.
  • 45.
AKN/OSII.45 Race Condition
 The concurrent execution of counter++ and counter-- is equivalent to a sequential execution in which the lower-level statements are interleaved in some arbitrary order.
 The sequence above arrives at the incorrect state counter == 4, indicating that four buffers are full when, in fact, five buffers are full.
 Further, if the order of statements S4 and S5 is reversed, we arrive at another incorrect state, counter == 6.
 These incorrect states are caused by allowing both processes to manipulate the variable counter concurrently.
 A situation where several processes access and manipulate the same data concurrently and the outcome depends on the particular order of access is called a race condition.
  • 46.
AKN/OSII.46 Process Synchronization
 Race conditions occur frequently in operating systems.
 Further, with the growth of multicore systems, applications use several threads sharing data, which may lead to race conditions more often.
 To guard against race conditions, we need to ensure that only one process at a time can be manipulating the variable counter.
 To make such a guarantee, we require that the processes be synchronized.
 Process synchronization is achieved by solving the critical-section problem.
 A critical section is a code segment that accesses shared variables and needs to be executed as an atomic action.
  • 47.
AKN/OSII.47 Critical Section Problem
 Consider a system of n processes {P0, P1, ..., Pn-1}.
 Each process has a critical section segment of code, in which it may be changing shared variables, updating a table, writing a file, etc.
 It is required that no two processes are executing in their critical sections at the same time.
 The critical-section problem is to design a protocol that the processes can use to cooperate:
 each process must ask permission to enter its critical section in the entry section code, exits in the exit section code, and the remaining code is the remainder section.
  • 48.
    AKN/OSII.48Introduction to OperatingSystems Critical Section of a Process  A solution to the critical-section problem must satisfy the following three requirements:  Mutual Exclusion - If process Pi is executing in its critical section, then no other processes can be executing in their critical sections  Progress - If no process is executing in its critical section and there exist some processes that wish to enter their critical section, then the selection of the processes to execute in critical section next, cannot be postponed indefinitely  Bounded Waiting - A bound must exist on the number of times that other processes are allowed to enter their critical sections after a process has made a request to enter its critical section and before that request is granted
  • 49.
AKN/OSII.49 Critical-Section Handling in OS
 Two approaches, depending on whether the kernel is preemptive or non-preemptive:
 Preemptive – allows preemption of a process when it is running in kernel mode
 Non-preemptive – a process runs until it exits kernel mode, blocks, or voluntarily yields the CPU; essentially free of race conditions in kernel mode
 A preemptive kernel is more suitable for real-time programming and is more responsive, since no kernel-mode process can run for an arbitrarily long period.
  • 50.
AKN/OSII.50 Semaphores
 A semaphore S is an integer variable that can only be accessed via two indivisible (atomic) operations, wait() and signal():
 wait(S) {
   while (S <= 0)
     ; /* busy wait */
   S--;
 }
 signal(S) {
   S++;
 }
 When one process modifies the semaphore value, no other process can simultaneously modify the same semaphore.
 The testing of S (S <= 0) and the decrement S-- must be executed without interruption.
  • 51.
AKN/OSII.51 Usage - I
 A semaphore may be used either as a counting semaphore or as a binary semaphore.
 The value of a counting semaphore can range over an unrestricted domain; a binary semaphore can range only between 0 and 1.
 Binary semaphores are also known as mutex locks.
 Counting semaphores can be used to control access to a given resource consisting of a finite number of instances:
 The semaphore is initialized to the number of resources available.
 To use a resource => a wait() operation on the semaphore.
 To release a resource => a signal() operation.
 When the count for the semaphore goes to 0, all resources are being used.
 After that, processes that wish to use a resource will block until the count becomes greater than 0.
  • 52.
AKN/OSII.52 Usage - II
 Binary semaphores may be used to deal with the critical-section problem for multiple processes. The processes share a semaphore mutex, initialized to 1:
 do {
   wait(mutex);
   /* critical section */
   signal(mutex);
   /* remainder section */
 } while (TRUE);
 Semaphores can also enforce ordering. Suppose we require that S2 (in P2) be executed only after S1 (in P1) has completed; let the processes share a semaphore synch, initialized to 0:
 In P1: S1; signal(synch);
 In P2: wait(synch); S2;
  • 53.
AKN/OSII.53 Implementation - I
 A disadvantage of the semaphore definition given above is that it requires busy waiting:
 while a process is in its critical section, any other process that tries to enter its critical section must loop continuously in the entry code.
 Where a single CPU is shared among many processes, busy waiting wastes CPU cycles that some other process might be able to use productively.
 This type of semaphore is also called a spinlock because the process "spins" while waiting for the lock.
 To overcome this, when a process executes the wait() operation and finds that the semaphore value is not positive, it can block itself instead of busy waiting.
 The block operation places the process into a waiting queue associated with the semaphore, and the CPU scheduler selects a new process to run.
  • 54.
AKN/OSII.54 Implementation II
 The semaphore is now defined as:
 typedef struct {
   int value;
   struct process *list;
 } semaphore;
 wait(semaphore *S) {
   S->value--;
   if (S->value < 0) {
     add this process to S->list;
     block();
   }
 }
 signal(semaphore *S) {
   S->value++;
   if (S->value <= 0) {
     remove a process P from S->list;
     wakeup(P);
   }
 }
  • 55.
    AKN/OSII.55Introduction to OperatingSystemsOperating System Concepts Implementation - III  When a process must wait on a semaphore, it is added to the list of processes.  A signal() operation removes one process from the list of waiting processes and awakens that process.  The block() operation suspends the process that invokes it.  The wakeup(P) operation resumes the execution of a blocked process P.  Unlike classical definition of “semaphores with busy waiting” semaphore values may be negative in “waiting queue” implementation.  i.e. its magnitude is the number of processes waiting on that semaphore
  • 56.
AKN/OSII.56 Deadlock and Starvation - I
 Deadlock – two or more processes are waiting indefinitely for an event that can be caused by only one of the waiting processes.
 Let S and Q be two semaphores initialized to 1:
 P0: wait(S); wait(Q); ... signal(S); signal(Q);
 P1: wait(Q); wait(S); ... signal(Q); signal(S);
 Suppose that P0 executes wait(S) and then P1 executes wait(Q).
 When P0 executes wait(Q), it must wait until P1 executes signal(Q).
 Similarly, when P1 executes wait(S), it must wait until P0 executes signal(S).
  • 57.
AKN/OSII.57 Deadlock and Starvation - II
 Since these signal() operations can never be executed, P0 and P1 are deadlocked.
 A set of processes is in a deadlock state when every process in the set is waiting for an event that can be caused only by another process in the set.
 Starvation – indefinite blocking: a process may never be removed from the semaphore queue in which it is suspended.
 Indefinite blocking may occur if we remove processes from the list associated with a semaphore in LIFO (last-in, first-out) order.
  • 58.
AKN/OSII.58 Priority Inversion
 Consider three processes with priorities L < M < H, and assume that process H requires resource R, which is currently being accessed by process L.
 Ordinarily, process H would wait only for L to finish using resource R.
 However, now suppose that process M preempts process L.
 Indirectly, a process with a lower priority (M) has affected how long a high-priority process (H) must wait: this is known as the priority inversion problem.
 This problem actually occurred on the Mars Pathfinder, whose system was reset at least six times after landing.
 Solution: priority inheritance. All processes that are accessing resources needed by a higher-priority process inherit the higher priority until they are finished with the resources.
  • 59.
AKN/OSII.59 Classic Problems of Synchronization
 Classic problems used to test any newly proposed synchronization scheme:
 Bounded-Buffer Problem
 Readers and Writers Problem
 Dining-Philosophers Problem
  • 60.
    AKN/OSII.60Introduction to OperatingSystems Bounded-Buffer Problem  n buffers, each can hold one item.  Producer writes into the buffer, consumer reads from the buffer  Semaphore mutex  initialized to the value 1  Provides mutual exclusion for access to the buffer pool.  Semaphore full  initialized to the value 0  Counts the number of full buffers  Semaphore empty  initialized to the value n  Counts the number of empty buffers
  • 61.
AKN/OSII.61 Producer Process
 The structure of the producer process:
 do {
   /* produce an item in next_produced */
   wait(empty);
   wait(mutex);
   /* add next_produced to the buffer */
   signal(mutex);
   signal(full);
 } while (true);
  • 62.
AKN/OSII.62 Consumer Process
 The structure of the consumer process:
 do {
   wait(full);
   wait(mutex);
   /* remove an item from the buffer to next_consumed */
   signal(mutex);
   signal(empty);
   /* consume the item in next_consumed */
 } while (true);
  • 63.
AKN/OSII.63 Readers-Writers Problem
 A database is shared among a number of concurrent processes:
 Readers – only read the data set; they do not perform any updates
 Writers – can both read and write
 Problem: allow multiple readers to read at the same time, while only a single writer may access the shared data at a time.
 The readers-writers problem has several variations. The first readers-writers problem requires that no reader be kept waiting unless a writer has already obtained permission to use the shared object.
 Shared variables:
 Semaphore rw_mutex – provides mutual exclusion for writers, initialized to 1
 Semaphore mutex – provides mutual exclusion when read_count is updated, initialized to 1
 Integer read_count – keeps track of how many processes are currently reading the data set, initialized to 0
  • 64.
AKN/OSII.64 The Structure of the Processes
 Writer process:
 do {
   wait(rw_mutex);
   /* writing is performed */
   signal(rw_mutex);
 } while (true);
 Reader process:
 do {
   wait(mutex);
   read_count++;
   if (read_count == 1)
     wait(rw_mutex);   /* first reader locks out writers */
   signal(mutex);
   /* reading is performed */
   wait(mutex);
   read_count--;
   if (read_count == 0)
     signal(rw_mutex); /* last reader lets writers in */
   signal(mutex);
 } while (true);
  • 65.
    AKN/OSII.65Introduction to OperatingSystems Reader-Writer Problem  The solution referred to as the first readers- writers problem.  It requires that no reader be kept waiting unless a writer has already obtained permission to use the shared object.  The second readers-writers problem requires that  Once a writer is ready, that writer performs its write as soon as possible.  In other words, if a writer is waiting to access the object, no new readers may start reading.  Both the solutions may lead to starvation
  • 66.
    AKN/OSII.66Introduction to OperatingSystems Dining-Philosophers Problem  It is a simple representation of allocating several resources among several processes in a deadlock-free and starvation-free manner.  represent each chopstick with a semaphore.  Semaphore chopstick [5]  All elements are initialized to 1  Philosophers spend their lives alternating thinking and eating  Don’t interact with their colleagues, when hungry, try to pick up 2 chopsticks (one at a time) that are closest to her  When a hungry philosopher has both her chopsticks at the same time, she eats.  When she is finished eating, she puts down both of her chopsticks and starts thinking again.
  • 67.
AKN/OSII.67 The Structure of Philosopher i
 do {
   wait(chopstick[i]);
   wait(chopstick[(i + 1) % 5]);
   // eat
   signal(chopstick[i]);
   signal(chopstick[(i + 1) % 5]);
   // think
 } while (TRUE);
 This solution guarantees that no two neighbors are eating simultaneously, but it may result in a deadlock:
 if all five philosophers become hungry simultaneously and each grabs her left chopstick, all the elements of chopstick will be equal to 0; when each tries to grab her right chopstick, she will be delayed forever.
  • 68.
AKN/OSII.68 Deadlock Handling
 Soln 1: Allow at most four philosophers to be sitting simultaneously at the table.
 Soln 2: Allow a philosopher to pick up her chopsticks only if both are available (the picking up must be done in a critical section).
 Soln 3: An asymmetric solution:
 An odd-numbered philosopher picks up first the left chopstick and then the right chopstick.
 An even-numbered philosopher picks up first the right chopstick and then the left chopstick.
  • 69.
AKN/OSII.69 Problems with Semaphores
 Incorrect use of semaphore operations may result in deadlock and starvation situations, for example:
 signal(mutex) ... wait(mutex)
 wait(mutex) ... wait(mutex)
 Omitting wait(mutex) or signal(mutex) (or both)
  • 70.
    AKN/OSII.70Introduction to OperatingSystems Monitors  A high-level abstraction that provides a convenient and effective mechanism for process synchronization.  Monitor is an Abstract Data Type (ADT),  encapsulates private data with public methods to operate on that data  Only one process may be active within the monitor at a time monitor monitor-name { // shared variable declarations procedure P1 (…) { …. } procedure Pn (…) {……} Initialization code (…) { … } }
  • 71.
AKN/OSII.71 Monitor Contd.
 Condition variables may be declared: condition x, y;
 These variables support the operations wait and signal:
 when a process invokes x.wait(), it is suspended until another process invokes x.signal().
 The x.signal() operation resumes exactly one suspended process; it has no effect if no process is suspended.
  • 72.
AKN/OSII.72 Monitor Solution to Bounded Buffer
 monitor BoundedBuffer {
   condition full, empty;
   int count;
   void addItem(item newItem) {
     if (count == BUFFER_SIZE)
       full.wait();       // if buffer is full, block
     // put newItem in buffer
     count = count + 1;
     if (count == 1)
       empty.signal();    // awaken a waiting consumer
   }
   item removeItem() {
     item removed;
     if (count == 0)
       empty.wait();      // if buffer is empty, block
     // remove an item from buffer into removed
     count = count - 1;
     if (count == BUFFER_SIZE - 1)
       full.signal();     // awaken a waiting producer
     return removed;
   }
   initializationCode() {
     count = 0;
   }
 } // end monitor
  • 73.
    AKN/OSII.73Introduction to OperatingSystems Monitor Solution to Bounded Buffer Contd. void producer() { while (true) { item = produceItem(); BoundedBuffer.addItem(item); } } void consumer() { while (true) { item = BoundedBuffer.removeItem(); consumeItem(item); } }
  • 74.
AKN/OSII.74 Monitor Solution to Reader-Writer
 monitor ReaderWriter {
   condition read, readWrite;
   int readCount;
   bool busy;
   void startRead() {
     if (busy)
       read.wait();
     readCount = readCount + 1;
     read.signal();        // cascade: wake the next waiting reader
   }
   void endRead() {
     readCount = readCount - 1;
     if (readCount == 0)
       readWrite.signal();
   }
  • 75.
AKN/OSII.75 Monitor Solution to Reader-Writer contd.
   void startWrite() {
     if (busy || readCount != 0)
       readWrite.wait();
     busy = true;          // set after (possibly) waiting, so it always takes effect
   }
   void endWrite() {
     busy = false;
     if (read.queue())
       read.signal();
     else
       readWrite.signal();
   }
   initializationCode() {
     readCount = 0;
     busy = false;
   }
 } // end monitor
  • 76.
AKN/OSII.76 Monitor Solution to Dining Philosophers
 monitor DiningPhilosophers {
   enum {THINKING, HUNGRY, EATING} state[5];
   condition self[5];
   void pickup(int i) {
     state[i] = HUNGRY;
     test(i);              // sets state to EATING if both chopsticks are free
     if (state[i] != EATING)
       self[i].wait();
   }
   void putdown(int i) {
     state[i] = THINKING;
     // test left and right neighbors
     test((i + 4) % 5);
     test((i + 1) % 5);
   }
  • 77.
AKN/OSII.77Introduction to Operating Systems
Solution to Dining Philosophers (Cont.)
    void test(int i) {
        if ((state[(i + 4) % 5] != EATING) &&
            (state[i] == HUNGRY) &&
            (state[(i + 1) % 5] != EATING)) {
            state[i] = EATING;
            self[i].signal();
        }
    }

    initializationCode() {
        for (int i = 0; i < 5; i++)
            state[i] = THINKING;
    }
}
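The state logic of this monitor can be modelled without the blocking machinery. A single-threaded Python sketch (illustrative, not from the slides) in which a philosopher suspended in self[i].wait() is represented simply by remaining HUNGRY until a neighbour's putdown() re-runs test():

```python
THINKING, HUNGRY, EATING = range(3)

class DiningPhilosophers:
    """Single-threaded model of the monitor's state transitions:
    a philosopher blocked in wait() just stays HUNGRY."""
    def __init__(self):
        self.state = [THINKING] * 5

    def test(self, i):
        # i may eat only if i is hungry and neither neighbour is eating
        if (self.state[(i + 4) % 5] != EATING and
                self.state[i] == HUNGRY and
                self.state[(i + 1) % 5] != EATING):
            self.state[i] = EATING          # corresponds to self[i].signal()

    def pickup(self, i):
        self.state[i] = HUNGRY
        self.test(i)                        # if this fails, i "waits" (stays HUNGRY)

    def putdown(self, i):
        self.state[i] = THINKING
        self.test((i + 4) % 5)              # resume left neighbour if possible
        self.test((i + 1) % 5)              # resume right neighbour if possible
```

For example, if philosopher 0 is eating, philosopher 1's pickup() leaves it HUNGRY; philosopher 0's putdown() then runs test(1) and philosopher 1 starts eating.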
AKN/OSII.78Introduction to Operating Systems
Solution to Dining Philosophers (Cont.)
 Each philosopher i invokes the operations pickup() and putdown() in the following sequence:
    DiningPhilosophers.pickup(i);
    EAT
    DiningPhilosophers.putdown(i);
 No deadlock, but starvation is possible
AKN/OSII.79Introduction to Operating Systems
DEADLOCK
 Several processes may compete for a finite number of resources
 A process requests resources; if the resources are not available at that time, the process enters a waiting state
 Sometimes, a waiting process is never again able to change state, because the resources it has requested are held by other waiting processes
 This situation is called a deadlock
AKN/OSII.80Introduction to Operating Systems
System Model
 System consists of resources
 Resource types R1, R2, . . ., Rm (CPU cycles, memory space, I/O devices)
 Each resource type Ri has Wi instances
 Each process utilizes a resource as follows:
 Request: either gets the resource or waits
 Use
 Release
 A set of processes is in a deadlocked state when every process in the set is waiting for an event that can be caused only by another process in the set
(Figure: processes P1 and P2, each holding one of the resources R1, R2 and waiting for the other)
AKN/OSII.81Introduction to Operating Systems
Necessary Conditions
 Deadlock can arise only if four conditions hold simultaneously:
 Mutual exclusion: only one process at a time can use a resource
 Hold and wait: a process holding at least one resource is waiting to acquire additional resources held by other processes
 No preemption: a resource can be released only voluntarily by the process holding it
 Circular wait: there exists a set {P0, P1, …, Pn} of waiting processes such that P0 is waiting for a resource held by P1, P1 is waiting for a resource held by P2, …, Pn–1 is waiting for a resource held by Pn, and Pn is waiting for a resource held by P0
AKN/OSII.82Introduction to Operating Systems
Resource-Allocation Graph
 Deadlocks can be described more precisely in terms of a directed graph called the resource-allocation graph (RAG)
 Vertices are partitioned into two types:
 P = {P1, P2, …, Pn}, the set consisting of all the processes in the system; represented as circles
 R = {R1, R2, …, Rm}, the set consisting of all resource types in the system; represented as rectangles
 Each instance of a resource type is represented as a dot inside its rectangle
 Two types of edges:
 Request edge: directed edge Pi → Rj
 Assignment edge: directed edge Rj → Pi
AKN/OSII.83Introduction to Operating Systems
Resource-Allocation Graph contd.
 If the graph contains no cycles => no deadlock
 If the graph contains a cycle =>
 if only one instance per resource type, then deadlock
 if several instances per resource type, possibility of deadlock
AKN/OSII.84Introduction to Operating Systems
Example I
 P = {P1, P2, P3}, R = {R1, R2, R3, R4}
 Resource instances:
 One instance each of R1 and R3
 Two instances of R2
 Three instances of R4
 Process states:
 Process P1 is holding an instance of R2 and is waiting for an instance of R1
 Process P2 is holding an instance of R1 and an instance of R2 and is waiting for an instance of R3
 Process P3 is holding an instance of R3
 No cycles => no deadlock
AKN/OSII.85Introduction to Operating Systems
Example II
 In addition to the previous example:
 P3 requests one instance of R2
 Cycles now exist in the RAG:
 P1 -> R1 -> P2 -> R3 -> P3 -> R2 -> P1
 P2 -> R3 -> P3 -> R2 -> P2
 So processes P1, P2, and P3 are deadlocked
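The cycle check behind these examples can be sketched as a depth-first search over the RAG's edges. A minimal Python sketch, with the edge lists transcribed from Examples I and II (request edges Pi -> Rj, assignment edges Rj -> Pi):

```python
def has_cycle(graph):
    """DFS cycle detection on a directed graph given as {node: [successors]}."""
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {n: WHITE for n in graph}

    def visit(n):
        color[n] = GRAY
        for m in graph.get(n, []):
            if color.get(m, WHITE) == GRAY:      # back edge: cycle found
                return True
            if color.get(m, WHITE) == WHITE and visit(m):
                return True
        color[n] = BLACK
        return False

    return any(color[n] == WHITE and visit(n) for n in list(graph))

# Example I: P1 requests R1; P2 requests R3; assignments as on the slide
example1 = {
    "P1": ["R1"], "P2": ["R3"], "P3": [],
    "R1": ["P2"], "R2": ["P1", "P2"], "R3": ["P3"], "R4": [],
}
# Example II: P3 additionally requests R2
example2 = dict(example1)
example2["P3"] = ["R2"]
```

has_cycle(example1) is false, while adding the edge P3 -> R2 in example2 closes the cycle P1 -> R1 -> P2 -> R3 -> P3 -> R2 -> P1.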
AKN/OSII.86Introduction to Operating Systems
Example III
 P = {P1, P2, P3, P4}, R = {R1, R2}
 Resource instances:
 Two instances each of R1 and R2
 Process states:
 Process P1 is holding an instance of R2 and is waiting for an instance of R1
 Process P2 is holding an instance of R1
 Process P3 is holding an instance of R1 and is waiting for R2
 Process P4 is holding an instance of R2
 A cycle exists: P1 -> R1 -> P3 -> R2 -> P1
 But there is no deadlock, as P4 may release R2, which can then be allocated to P3
AKN/OSII.87Introduction to Operating Systems
Methods for Handling Deadlocks
 Deadlock may be dealt with in one of the following ways:
 Use protocols to prevent or avoid deadlocks
 Allow the system to enter a deadlock state, then recover from it
 Ignore the problem altogether and pretend that deadlocks never occur in the system
 The third method is used by most operating systems, including UNIX and Windows
 Deadlock prevention provides a set of methods for ensuring that at least one of the necessary conditions cannot hold
 Deadlock avoidance requires prior information regarding the resource requirements of processes, so that the OS can decide the allocation/release of resources in a way that avoids deadlock
 If a system employs neither a deadlock-prevention nor a deadlock-avoidance algorithm, then a deadlock may arise; the system then provides algorithms to detect and recover from the deadlock
AKN/OSII.88Introduction to Operating Systems
Deadlock Prevention
 Restrain the ways requests can be made
 Mutual exclusion: not required for sharable resources (e.g., read-only files); must hold for non-sharable resources (e.g., a printer)
 A process never needs to wait for a sharable resource
 The mutual-exclusion condition must hold for non-sharable resources
 Hold and wait: must guarantee that whenever a process requests a resource, it does not hold any other resources
 Require a process to request and be allocated all its resources before it begins execution, or allow a process to request resources only when it has none allocated to it
 Low resource utilization; starvation possible
AKN/OSII.89Introduction to Operating Systems
Deadlock Prevention (Cont.)
 No preemption:
 If a process holding resources requests another resource that cannot be immediately allocated to it, then all resources currently held by the requesting process are released
 The process will be restarted only when it can regain its old resources, as well as the new ones that it is requesting
 Circular wait:
 Impose a total ordering F of all resource types, and require that each process requests resources in an increasing order of enumeration
 Example protocol: each process can request resources only in increasing order
 That is, a process can initially request any number of instances of a resource type, say Ri
 After that, the process can request instances of resource type Rj if and only if F(Rj) > F(Ri)
AKN/OSII.90Introduction to Operating Systems
Protocol to avoid circular wait
 Alternatively, require that a process requesting an instance of resource type Rj must have released any resources Ri such that F(Ri) > F(Rj)
 Proof that circular wait cannot hold (by contradiction):
 Let the set of processes involved in a circular wait be {P0, P1, ..., Pn}, with resources R = {R0, R1, ..., Rn}
 where Pi is waiting for resource Ri, which is held by process Pi+1, ..., and Pn is waiting for resource Rn, held by P0
 Then, since process Pi+1 is holding resource Ri while requesting resource Ri+1, we must have F(Ri) < F(Ri+1) for all i
 This means F(R0) < F(R1) < ... < F(Rn) < F(R0)
 By transitivity, F(R0) < F(R0), which is impossible
 Therefore, there can be no circular wait (proved)
 Note: the ordering itself doesn't prevent circular wait; it is prevented by developing programs that follow the ordering
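The ordering protocol can be enforced mechanically. A minimal Python sketch (class and method names are illustrative) where F(Ri) is simply each lock's index, so a request is rejected unless its index exceeds everything the caller already holds:

```python
import threading

class OrderedLockManager:
    """Grants locks only in increasing order of F (here, the lock's index),
    so a circular wait can never form among compliant processes."""
    def __init__(self, n):
        self.locks = [threading.Lock() for _ in range(n)]
        self.held = {}   # thread id -> highest F(Ri) currently held

    def acquire(self, i):
        tid = threading.get_ident()
        # enforce F(Rj) > F(Ri) for every resource already held
        if self.held.get(tid, -1) >= i:
            raise ValueError(f"lock {i} violates the resource ordering")
        self.locks[i].acquire()
        self.held[tid] = i

# A compliant process takes locks 0 then 2; taking 1 afterwards is rejected.
mgr = OrderedLockManager(3)
mgr.acquire(0)
mgr.acquire(2)
try:
    mgr.acquire(1)        # out of order: would enable a circular wait
    out_of_order_allowed = True
except ValueError:
    out_of_order_allowed = False
```

The rejection is the point of the protocol: since every compliant process only ever waits for a lock with a larger F value than anything it holds, the chain F(R0) < F(R1) < ... from the proof above can never close into a cycle.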
AKN/OSII.91Introduction to Operating Systems
Deadlock Avoidance
 Requires that the system has some additional a priori information available
 Protocol: each process declares the maximum number of resources of each type that it may need
 The deadlock-avoidance algorithm dynamically examines the resource-allocation state and decides, for each request, whether or not the process should wait in order to avoid a possible future deadlock
 i.e. a circular wait can never exist
 Resource-allocation state: the number of available and allocated resources, and the maximum demands of the processes
AKN/OSII.92Introduction to Operating Systems
Deadlock Avoidance: Safe State
 When a process requests an available resource, the system must decide if immediate allocation leaves the system in a safe state
 Safe state: a system is in a safe state only if there exists a safe sequence
 A sequence of processes <P1, P2, …, Pn> is a safe sequence if, for each Pi, the resources that Pi can still request can be satisfied by the currently available resources plus the resources held by all Pj, with j < i
 That is, if the resource needs of Pi are not immediately available, then Pi can wait until all Pj have finished
 When Pj is finished, Pi can obtain the needed resources, execute, return the allocated resources, and terminate
 When Pi terminates, Pi+1 can obtain its needed resources, and so on
AKN/OSII.93Introduction to Operating Systems
Safe state contd.
 If a system is in a safe state => no deadlocks
 A deadlock state is an unsafe state, but not all unsafe states are deadlocks; an unsafe state may lead to deadlock
 i.e. unsafe state => possibility of deadlock
 Avoidance => ensure that the system will never enter an unsafe state
 Example: 12 resources and 3 processes
 t0: system is in a safe state
 t1: resource allocated to P1 => system still in a safe state
 t1: resource allocated to P2 => unsafe state => possible deadlock
AKN/OSII.94Introduction to Operating Systems
Avoidance Algorithms
 Single instance of a resource type
 Resource-allocation graph algorithm
 Multiple instances of a resource type
 Banker’s algorithm
AKN/OSII.95Introduction to Operating Systems
Resource-Allocation Graph Algorithm
 Claim edge Pi → Rj indicates that process Pi may request resource Rj; represented by a dashed line
 A claim edge converts to a request edge when the process requests the resource
 A request edge converts to an assignment edge when the resource is allocated to the process
 When a resource is released by a process, the assignment edge reconverts to a claim edge
 Resources must be claimed a priori in the system
 Algorithm: the request of process Pi for resource Rj is granted only if converting the request edge to an assignment edge does not result in the formation of a cycle in the resource-allocation graph
 Example: if P2 requests R2, the request can’t be granted, because converting the edge P2 -> R2 to an assignment edge forms a cycle
AKN/OSII.96Introduction to Operating Systems
Banker’s Algorithm
 Used when there are multiple instances per resource type
 Each process must claim its maximum use in advance
 When a process requests a resource, it may have to wait if
 the resource is not available, or
 allocating the resource would leave the system in an unsafe state
 When a process gets all its resources, it must return them in a finite amount of time
AKN/OSII.97Introduction to Operating Systems
Data Structures for the Banker’s Algorithm
 Let n = number of processes and m = number of resource types
 Available: vector of length m
 If Available[j] = k, there are k instances of resource type Rj available
 Max: n x m matrix
 If Max[i, j] = k, then process Pi may request at most k instances of resource type Rj
 Allocation: n x m matrix
 If Allocation[i, j] = k, then Pi is currently allocated k instances of Rj
 Need: n x m matrix
 If Need[i, j] = k, then Pi may need k more instances of Rj to complete its task
 Need[i, j] = Max[i, j] – Allocation[i, j]
AKN/OSII.98Introduction to Operating Systems
Banker’s Safety Algorithm
1. Let Work and Finish be vectors of length m and n, respectively. Initialize:
   Work = Available
   Finish[i] = false for i = 0, 1, …, n-1
2. Find an i such that both:
   (a) Finish[i] == false
   (b) Needi ≤ Work
   If no such i exists, go to step 4
3. Work = Work + Allocationi
   Finish[i] = true
   Go to step 2
4. If Finish[i] == true for all i, then the system is in a safe state
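The four steps above translate almost line for line into code. A minimal Python sketch (function name illustrative) that also records the safe sequence it discovers:

```python
def is_safe(available, allocation, need):
    """Banker's safety algorithm: returns (True, safe_sequence) if a safe
    sequence of process indices exists, else (False, [])."""
    n, m = len(allocation), len(available)
    work = list(available)                  # step 1: Work = Available
    finish = [False] * n
    sequence = []
    progressed = True
    while progressed:                       # steps 2-3: repeat until no candidate
        progressed = False
        for i in range(n):
            if not finish[i] and all(need[i][j] <= work[j] for j in range(m)):
                for j in range(m):          # Pi can finish: reclaim its resources
                    work[j] += allocation[i][j]
                finish[i] = True
                sequence.append(i)
                progressed = True
    # step 4: safe iff every process could finish
    return (all(finish), sequence if all(finish) else [])
```

Running this on the snapshot in the worked example two slides ahead reproduces the safe sequence found there.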
AKN/OSII.99Introduction to Operating Systems
Resource-Request Algorithm for Process Pi
 Requesti = request vector for Pi
 Requesti[j] = k means Pi wants k instances of Rj
 Algorithm:
1. If Requesti ≤ Needi, go to step 2
   Otherwise, raise an error, since the process has exceeded its maximum claim
2. If Requesti ≤ Available, go to step 3
   Otherwise, Pi must wait, since the resources are not available
3. Pretend to allocate the requested resources to Pi by modifying the state as follows:
   Available = Available – Requesti
   Allocationi = Allocationi + Requesti
   Needi = Needi – Requesti
4. If the resulting state is safe => the resources are allocated to Pi
   Otherwise, Pi must wait, and the old resource-allocation state is restored
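A Python sketch of this request algorithm (function names illustrative); the safety check from the previous slide is repeated inside so the sketch stands alone:

```python
def safe(available, allocation, need):
    """Safety check from the previous slide: can every process finish?"""
    n, m = len(allocation), len(available)
    work, finish = list(available), [False] * n
    progressed = True
    while progressed:
        progressed = False
        for i in range(n):
            if not finish[i] and all(need[i][j] <= work[j] for j in range(m)):
                for j in range(m):
                    work[j] += allocation[i][j]
                finish[i] = True
                progressed = True
    return all(finish)

def request_resources(i, request, available, allocation, need):
    """Banker's resource-request algorithm for process Pi.
    Returns True and commits the allocation, or False leaving state untouched."""
    m = len(available)
    if any(request[j] > need[i][j] for j in range(m)):      # step 1
        raise ValueError("process exceeded its maximum claim")
    if any(request[j] > available[j] for j in range(m)):    # step 2: must wait
        return False
    for j in range(m):                                      # step 3: pretend-allocate
        available[j] -= request[j]
        allocation[i][j] += request[j]
        need[i][j] -= request[j]
    if safe(available, allocation, need):                   # step 4: grant
        return True
    for j in range(m):                                      # unsafe: roll back
        available[j] += request[j]
        allocation[i][j] -= request[j]
        need[i][j] += request[j]
    return False
```

Applied to the example on the next slides, request_resources(1, [1, 0, 2], ...) is granted and leaves Available at (2, 3, 0), matching the T1 state shown there.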
AKN/OSII.100Introduction to Operating Systems
Example of Banker’s Algorithm
 5 processes P0 … P4 and 3 resource types:
 A (10 instances), B (5 instances), and C (7 instances)
 Snapshot at time T0:

        Allocation   MaxClaim   Available
        A  B  C      A  B  C    A  B  C
    P0  0  1  0      7  5  3    3  3  2
    P1  2  0  0      3  2  2
    P2  3  0  2      9  0  2
    P3  2  1  1      2  2  2
    P4  0  0  2      4  3  3

 Calculate Need as MaxClaim – Allocation:

        Need
        A  B  C
    P0  7  4  3
    P1  1  2  2
    P2  6  0  0
    P3  0  1  1
    P4  4  3  1
AKN/OSII.101Introduction to Operating Systems
P1 Requests (1,0,2)
 Check that Request ≤ Available, that is, (1,0,2) ≤ (3,3,2) => true
 So find the new state at time T1:

        Allocation   Need       Available
        A  B  C      A  B  C    A  B  C
    P0  0  1  0      7  4  3    2  3  0
    P1  3  0  2      0  2  0
    P2  3  0  2      6  0  0
    P3  2  1  1      0  1  1
    P4  0  0  2      4  3  1

 Now execute the safety algorithm to find a safe sequence:
1. Work = Available; Finish[i] = false for i = 0, 1, 2, 3, 4
2. Find an i such that Finish[i] == false and Needi ≤ Work; if no such i exists, go to step 4
3. Work = Work + Allocationi; Finish[i] = true; go to step 2
4. If Finish[i] == true for all i, the system is in a safe state
 The safe sequence is <P1, P3, P4, P0, P2>, so the request is allowed
 Can a request for (3,3,0) by P4 be granted at T0? If yes, find the safety sequence
 Can a request for (0,2,0) by P0 be granted at T0? If yes, find the safety sequence
AKN/OSII.102Introduction to Operating Systems
Deadlock Detection/Recovery
 Three steps of action:
 Allow the system to enter a deadlock state
 Run an algorithm to detect the deadlock state
 Apply a recovery scheme
 Single instance of each resource type:
 Wait-for graph (a variation of the RAG)
 Only process nodes (no resource nodes)
 Pi → Pj if Pi is waiting for Pj to release a resource
 If there is a cycle => a deadlock
 The algorithm requires on the order of n2 operations, where n is the number of vertices in the graph
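Collapsing a single-instance RAG into its wait-for graph is a small transformation: every path Pi -> Rq -> Pj becomes the edge Pi -> Pj. A Python sketch (edge data below is hypothetical, for illustration only):

```python
def wait_for_graph(request_edges, assignment_edges):
    """Collapse a single-instance RAG into a wait-for graph:
    Pi -> Pj whenever Pi requests a resource currently assigned to Pj."""
    holder = {r: p for r, p in assignment_edges}   # resource -> holding process
    wfg = {}
    for p, r in request_edges:
        if r in holder:
            wfg.setdefault(p, set()).add(holder[r])
    return wfg

# Hypothetical RAG: P1 requests R1 (held by P2), P2 requests R2 (held by P1)
requests = [("P1", "R1"), ("P2", "R2")]
assignments = [("R1", "P2"), ("R2", "P1")]
```

Here wait_for_graph(requests, assignments) yields P1 -> P2 and P2 -> P1, and any cycle check on the result (such as the DFS sketched earlier for the RAG examples) reports the deadlock.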
    AKN/OSII.103Introduction to OperatingSystems Resource-Allocation Graph and Wait-for Graph Resource-Allocation Graph Corresponding wait-for graph
AKN/OSII.104Introduction to Operating Systems
Several Instances of a Resource Type
 Available: a vector of length m => the number of available instances of each resource type
 Allocation: an n x m matrix => the number of instances of each resource type currently allocated to each process
 Request: an n x m matrix => the current request of each process
 If Request[i][j] = k, then process Pi is requesting k more instances of resource type Rj
AKN/OSII.105Introduction to Operating Systems
Detection Algorithm
1. Let Work and Finish be vectors of length m and n, initialized as follows:
   (a) Work = Available
   (b) For i = 1, 2, …, n: if Allocationi ≠ 0, then Finish[i] = false; otherwise, Finish[i] = true
2. Find an index i such that:
   (a) Finish[i] == false
   (b) Requesti ≤ Work
   If no such i exists, go to step 4
3. Work = Work + Allocationi
   Finish[i] = true
   Go to step 2
4. If Finish[i] == false for some i, 1 ≤ i ≤ n, then the system is in a deadlock state; moreover, if Finish[i] == false, then Pi is deadlocked
 The algorithm requires on the order of O(m x n2) operations to detect a deadlocked state
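This detection algorithm differs from the Banker's safety check only in using the current Request matrix instead of Need, and in its initialization of Finish. A Python sketch (function name illustrative):

```python
def detect_deadlock(available, allocation, request):
    """Deadlock detection for multiple resource instances: returns the list
    of deadlocked process indices (an empty list means no deadlock)."""
    n, m = len(allocation), len(available)
    work = list(available)                  # step 1(a)
    # step 1(b): a process holding nothing cannot be part of a deadlock
    finish = [all(a == 0 for a in allocation[i]) for i in range(n)]
    progressed = True
    while progressed:                       # steps 2-3
        progressed = False
        for i in range(n):
            if not finish[i] and all(request[i][j] <= work[j] for j in range(m)):
                for j in range(m):
                    work[j] += allocation[i][j]   # optimistically let Pi finish
                finish[i] = True
                progressed = True
    # step 4: any process that could not finish is deadlocked
    return [i for i in range(n) if not finish[i]]
```

On the snapshot in the example on the next slide this returns an empty list (no deadlock), and it can equally be used to answer the slide's follow-up question about P2's additional request.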
AKN/OSII.106Introduction to Operating Systems
Example of Detection Algorithm
 Five processes P0 through P4; three resource types: A (7 instances), B (2 instances), and C (6 instances)
 Snapshot at time T0:

        Allocation   Request    Available
        A  B  C      A  B  C    A  B  C
    P0  0  1  0      0  0  0    0  0  0
    P1  2  0  0      2  0  2
    P2  3  0  3      0  0  0
    P3  2  1  1      1  0  0
    P4  0  0  2      0  0  2

 Is there a deadlock? If not, find the sequence
 The sequence <P0, P2, P3, P1, P4> results in Finish[i] == true for all i
 Suppose P2 requests an additional instance of type C; then find the system status
 i.e. no deadlock => a process execution sequence; deadlock => the list of deadlocked processes
AKN/OSII.107Introduction to Operating Systems
Detection-Algorithm Usage
 When, and how often, to invoke depends on:
 How often is a deadlock likely to occur?
 How many processes will be affected?
 Frequent invocation => overhead
 Infrequent invocation => many deadlocks may accumulate, making it difficult to detect which process originated them
 Invoke either at a defined interval (say, once per hour) or when CPU utilization drops below a certain point (say, 40%)
 Recovery from deadlock:
 Process termination
 Resource preemption
AKN/OSII.108Introduction to Operating Systems
Recovery from Deadlock: Process Termination
 Abort all deadlocked processes
 Very expensive w.r.t. CPU time, i.e. the partial computation of all the processes is lost
 Abort one process at a time until the deadlock cycle is eliminated
 Very expensive w.r.t. overhead, as after each abort the deadlock-detection algorithm must be executed again
 Factors to consider when choosing a process to abort:
 1. Priority of the process
 2. How long the process has computed, and how much longer until completion
 3. Resources the process has used
 4. Resources the process needs to complete
 5. How many processes will need to be terminated
 6. Is the process interactive or batch?
AKN/OSII.109Introduction to Operating Systems
Recovery from Deadlock: Resource Preemption
 Selecting a victim: minimize cost
 e.g. the number of resources a process holds, the amount of time it has already executed
 Rollback: return to some safe state, restart the process from that state
 As it is difficult to determine a safe state, total rollback is preferred
 Starvation: the same process may always be picked as victim
 Include the number of rollbacks in the cost factor