CPU SCHEDULING
Scheduling is the process of allowing one process
to use the CPU while keeping the execution of
another process on hold due to the unavailability
of the CPU.
Concepts of CPU Scheduling
CPU–I/O Burst Cycle
CPU Scheduler
Preemptive Scheduling
Dispatcher
CPU–I/O Burst Cycle
Process execution consists of a cycle of CPU
execution and I/O wait.
Process execution begins with a CPU burst.
That is followed by an I/O burst.
Processes alternate between these two states.
The final CPU burst ends with a system request
to terminate execution.
CPU Scheduler
Whenever the CPU becomes idle, the
operating system must select one of the
processes in the ready queue to be
executed.
The selection process is carried out by
the Short-Term Scheduler or CPU
scheduler.
Preemptive Scheduling
Preemptive scheduling is used when a process
switches from the running state to the ready state or
from the waiting state to the ready state.
The resources (mainly CPU cycles) are allocated to the
process for a limited amount of time and then taken
away, and the process is again placed back in the ready
queue if that process still has CPU burst time
remaining.
That process stays in the ready queue till it gets its
next chance to execute.
Non-Preemptive Scheduling
Non-preemptive Scheduling is used when a
process terminates, or a process switches
from running to the waiting state.
In this scheduling, once the resources (CPU
cycles) are allocated to a process, the
process holds the CPU till it gets terminated
or reaches a waiting state.
Dispatcher
The dispatcher is the module that gives
control of the CPU to the process selected
by the short-term scheduler. The dispatcher
function involves:
Switching context
Switching to user mode
Jumping to the proper location in the user
program to restart that program.
The dispatcher should be as fast as possible,
since it is invoked during every process
switch.
The time it takes for the dispatcher to stop
one process and start another process
running is known as the Dispatch Latency.
The aim of a scheduling algorithm is to maximize or
minimize the following:
Maximize:
CPU utilization - It makes sure that the CPU is kept as
busy as possible.
Throughput - It is the number of processes that complete
their execution per unit of time.
Minimize:
Waiting time - It is the amount of time a process spends
waiting in the ready queue.
Response time - Time required for producing the first
response after submission.
Turnaround time - It is the amount of time required to
execute a specific process.
Turnaround Time = Completion Time − Arrival Time.
Arrival Time
In CPU Scheduling, the arrival time refers to the moment in
time when a process enters the ready queue and is awaiting
execution by the CPU. In other words, it is the point at which
a process becomes eligible for scheduling.
Burst Time
Burst time, also referred to as “execution time”, is the
amount of CPU time the process requires to complete its
execution. It is the amount of processing time required by a
process to execute a specific task or unit of a job.
Completion Time
Completion time is when a process finishes execution and
is no longer being processed by the CPU.
First Come First Serve (FCFS)
In FCFS, the process that requests the CPU first is allocated the CPU
first.
Once the CPU has been allocated to a process, it keeps the CPU until it
releases the CPU.
FCFS can be implemented by using FIFO queues.
When a process enters the ready queue, its PCB is linked onto the tail of
the queue.
When the CPU is free, it is allocated to the process at the head of the
queue.
The running process is then removed from the queue.
A Gantt chart (named after Henry Gantt) is a bar
chart that is used to illustrate a particular schedule,
including the start and finish times of each of the
participating processes.
Process | Arrival Time (T0) | Burst Time (ΔT) | Finish Time (T1) | Turnaround Time (TAT = T1 - T0) | Waiting Time (WT = TAT - ΔT)
P0 | 0 | 10 | 10 | 10 | 0
P1 | 1 | 6 | 16 | 15 | 9
P2 | 3 | 2 | 18 | 15 | 13
P3 | 5 | 4 | 22 | 17 | 13
Gantt Chart
Average Turnaround Time: (10+15+15+17)/4 = 14.25 ms.
Average Waiting Time: (0+9+13+13)/4 = 8.75 ms.
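The table's figures can be reproduced mechanically. Below is a minimal C sketch (not part of the original slides) that runs the four processes in arrival order and computes each finish time, TAT, and WT:

```c
#include <stdio.h>

#define N 4

int main(void) {
    int arrival[N] = {0, 1, 3, 5};      /* T0 from the table */
    int burst[N]   = {10, 6, 2, 4};     /* ΔT from the table */
    int time = 0;
    float tat_sum = 0, wt_sum = 0;

    for (int i = 0; i < N; i++) {       /* processes already sorted by arrival */
        if (time < arrival[i]) time = arrival[i];   /* CPU idles until arrival */
        time += burst[i];               /* finish time T1 */
        int tat = time - arrival[i];    /* TAT = T1 - T0 */
        int wt  = tat - burst[i];       /* WT = TAT - ΔT */
        printf("P%d: finish=%d TAT=%d WT=%d\n", i, time, tat, wt);
        tat_sum += tat; wt_sum += wt;
    }
    printf("Average TAT = %.2f ms, Average WT = %.2f ms\n",
           tat_sum / N, wt_sum / N);
    return 0;
}
```

Running it prints finish times 10, 16, 18, and 22, and the same averages as above (14.25 ms and 8.75 ms).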
Advantages of FCFS
The first-come, first-serve process scheduling
algorithm is one of the simplest and easiest
process scheduling algorithms.
The process that arrives first is executed first.
Disadvantages of FCFS
This algorithm makes small processes wait until the
currently running program completes.
Short processes have to wait for a long time whenever a
bigger process arrived before them (the convoy effect).
The average waiting time is usually high.
This scheduling algorithm is not ideal for time-sharing
systems.
Example of FCFS
Billing counters in a supermarket are a real-life
example of the FCFS algorithm.
The first person to come to the billing counter is
served first, then the next person, and so on.
The second customer is served only once the first
person's billing is complete.
Shortest-Job-First Scheduling (SJF)
The process that requires the shortest time to
complete execution is served first.
The SJF algorithm is defined as “when the CPU
is available, it is assigned to the process that has
the smallest next CPU burst”.
If the next CPU bursts of two processes are the
same, FCFS scheduling is used to break the tie
between the two processes.
SJF is also called the Shortest-Next CPU-Burst
algorithm, because scheduling depends on the
length of the next CPU burst of a process, rather
than its total length.
SJF scheduling can be used in both preemptive
and non-preemptive mode.
The preemptive mode of Shortest Job First is
called Shortest Remaining Time First (SRTF).
Process | Arrival Time (T0) | Burst Time (ΔT) | Finish Time (T1) | Turnaround Time (TAT = T1 - T0) | Waiting Time (WT = TAT - ΔT)
P0 | 0 | 10 | 10 | 10 | 0
P1 | 1 | 6 | 22 | 21 | 15
P2 | 3 | 2 | 12 | 9 | 7
P3 | 5 | 4 | 16 | 11 | 7
If the CPU scheduling policy is SJF non-preemptive, calculate the
average waiting time and average turnaround time.
Average Turnaround Time: (10+21+9+11)/4 = 12.75 ms.
Average Waiting Time: (0+15+7+7)/4 = 7.25 ms.
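As a cross-check, here is a small C sketch (an illustration, not from the slides) of the non-preemptive SJF selection rule applied to the same process set: at each completion, pick the arrived process with the smallest burst.

```c
#include <stdio.h>

#define N 4

int main(void) {
    int arrival[N] = {0, 1, 3, 5};
    int burst[N]   = {10, 6, 2, 4};
    int done[N]    = {0};
    int time = 0;

    for (int finished = 0; finished < N; finished++) {
        int pick = -1;
        for (int i = 0; i < N; i++) {   /* shortest burst among arrived, unfinished */
            if (done[i] || arrival[i] > time) continue;
            if (pick == -1 || burst[i] < burst[pick]) pick = i;
        }
        if (pick == -1) { time++; finished--; continue; }  /* idle until an arrival */
        time += burst[pick];            /* non-preemptive: run to completion */
        done[pick] = 1;
        printf("P%d: finish=%d TAT=%d WT=%d\n", pick, time,
               time - arrival[pick], time - arrival[pick] - burst[pick]);
    }
    return 0;
}
```

The output matches the table: P0 finishes at 10, then P2 (the shortest arrived job) at 12, P3 at 16, and P1 at 22.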
Under preemptive SJF (SRTF), with a different process set whose
table is not reproduced here: process P1 is started at time 0,
since it is the only process in the queue. Process P2 arrives at
time 1. The remaining time for process P1 (7 milliseconds) is
larger than the time required by process P2 (4 milliseconds), so
process P1 is preempted and process P2 is scheduled.
Advantages-
SRTF is optimal and guarantees the minimum average waiting
time.
It provides a standard for other algorithms, since no other
algorithm performs better than it.
Disadvantages-
It cannot be implemented practically, since the burst times of the
processes cannot be known in advance.
It leads to starvation for processes with larger burst times.
Priorities cannot be set for the processes.
Processes with larger burst times have poor response times.
Problem-03:
Consider the set of 5 processes whose arrival time and burst time are
given below-
If the CPU scheduling policy is SJF non-preemptive,
calculate the average waiting time and average turn
around time.
Problem-04:
Consider the set of 5 processes whose arrival time and burst time are
given below-
If the CPU scheduling policy is SJF preemptive,
calculate the average waiting time and average turn
around time.
Round-Robin Scheduling (RR)
Round-Robin (RR) scheduling algorithm is designed
especially for Time Sharing systems.
Each selected process is assigned a time interval called
time quantum or time slice.
A process is allowed to run only for this time interval.
After the time quantum expires, the running
process is preempted and sent to the ready
queue.
Then, the processor is assigned to the next
process in the ready queue.
It is always preemptive in nature.
If we use a time quantum of 4 milliseconds
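The slide's Gantt chart for this example is not reproduced here. As an illustration, the following C sketch simulates round-robin with a 4 ms quantum on an assumed process set (P1 = 24 ms, P2 = 3 ms, P3 = 3 ms, all arriving at time 0; these burst values are an assumption for demonstration):

```c
#include <stdio.h>

#define N 3

int main(void) {
    int remaining[N] = {24, 3, 3};      /* assumed burst times in ms */
    int quantum = 4, time = 0, done = 0;

    while (done < N) {
        for (int i = 0; i < N; i++) {   /* cycle through the ready processes */
            if (remaining[i] == 0) continue;
            int slice = remaining[i] < quantum ? remaining[i] : quantum;
            time += slice;              /* run for one quantum (or less) */
            remaining[i] -= slice;
            if (remaining[i] == 0) {
                done++;
                printf("P%d finishes at t=%d\n", i + 1, time);
            }
        }
    }
    return 0;
}
```

With these values, P2 finishes at t = 7, P3 at t = 10, and P1 at t = 30.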
Advantages-
It gives the best performance in terms of average response
time.
It is best suited for time-sharing systems, client-server
architectures, and interactive systems.
Disadvantages-
It leads to starvation for processes with larger burst times, as
they have to repeat the cycle many times.
Its performance depends heavily on the time quantum.
Priorities cannot be set for the processes.
Priority Scheduling
A priority is associated with each process and the CPU is
allocated to the process with the highest priority.
Equal-priority processes are scheduled in FCFS order.
An SJF algorithm is a special kind of priority scheduling
algorithm in which smaller CPU bursts have higher priority.
Priorities can be defined based on time limits, memory
requirements, the number of open files etc.
Priority scheduling can be either
Preemptive or Non-preemptive.
A Preemptive Priority Scheduling
algorithm will preempt the CPU if the
priority of the newly arrived process is
higher than the priority of the currently
running process.
Problem: Starvation or Indefinite Blocking
In priority scheduling, when there is a continuous flow
of higher-priority processes into the ready queue, all
the lower-priority processes must wait for the CPU
until every higher-priority process has finished
executing.
This leaves lower-priority processes blocked from
getting the CPU for a long period of time. This
situation is called starvation or indefinite blocking.
Solution: Aging
Aging involves gradually increasing the priority of processes
that wait in the system for a long time.
To prevent starvation of any process, we can use the concept
of aging, where we keep increasing the priority of a low-
priority process based on its waiting time.
For example, suppose we decide the aging factor to be 0.5 for
each day of waiting, and a process with priority 20 (which is
comparatively low, assuming a lower number means higher
priority) enters the ready queue.
After one day of waiting, its priority is raised to 19.5, and
so on.
In this way, we can ensure that no process has to wait
indefinitely to get CPU time.
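The aging rule just described is a one-line update. A tiny sketch (assuming, as in the example, that a lower number means higher priority):

```c
#define AGING_FACTOR 0.5f   /* priority improvement per day of waiting */

/* Run once per day for every process still waiting in the ready queue. */
void age_priority(float *priority) {
    *priority -= AGING_FACTOR;   /* e.g., 20.0 becomes 19.5 after one day */
}
```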
Multilevel Queue Scheduling Algorithm
In the Multilevel Queue Scheduling algorithm the processes
are classified into different groups.
The priority of foreground processes is higher than
that of background processes.
The multilevel queue scheduling algorithm partitions the ready
queue into several separate queues.
Processes are assigned to a queue depending on
some property of the process.
The property may be memory size, process priority,
or process type.
Each queue is associated with its own scheduling
algorithm.
Ex:- The foreground queue might be scheduled by the RR
algorithm, while the background queue is scheduled by
the FCFS algorithm.
The description of the processes in the above diagram is as
follows:
System Process
The operating system has processes of its own to run, and
these are termed system processes.
Interactive Process
An interactive process is one that involves frequent user
interaction (for example, an online game).
Batch Processes
Batch processing is a technique in which the operating
system collects programs and data together in the form of a
batch before processing starts.
Student Process
The system processes always get the highest priority, while
the student processes always get the lowest priority.
No process in the batch queue can run
unless the queues for system processes,
interactive processes, and interactive
editing processes are all empty.
If an interactive editing process enters the
ready queue while a batch process is
running, the batch process is preempted.
The priority of queue 1 is greater than the priority of queue 2.
Here queue 1 uses RR (with time quantum = 2) and queue 2
uses FCFS. Calculate the average TAT and average WT.
Disadvantage: Starvation of lower-level queues
The multilevel queue scheduling algorithm is inflexible.
Processes are permanently assigned to a queue when they
enter the system.
Processes are not allowed to move from one queue to another.
There is a chance that lower-level queues will starve, because
no lower-level queue executes unless the higher-level queues
are empty.
As long as there is a process in a higher-priority queue, a
process in a lower-level queue gets no chance to execute.
Multilevel Feedback Queue Scheduling (MLFQ)
Multilevel feedback queue scheduling algorithm allows a
process to move between queues.
Processes are separated according to the characteristics of
their CPU bursts.
If a process uses too much CPU time, it will be moved to a
lower-priority queue.
A process that waits too long in a lower-priority queue is
moved to a higher-priority queue.
This form of aging prevents starvation.
A process entering the ready queue is put in queue 0.
A process in queue 0 is given a time quantum of 8 ms.
If it does not finish within this time, it is moved to the tail of
queue 1.
If queue 0 is empty, the process at the head of queue 1 is
given a quantum of 16 ms.
If it does not complete, it is preempted and put into queue 2.
Use RR for the first two queues and FCFS for the
third queue. The time quantum of queue 1 is 3 and
that of queue 2 is 5. Calculate the average TAT and
average WT.
It is important to note that a process in a lower-priority
queue can execute only when the higher-priority queues
are empty.
Any running process in a lower-priority queue can be
interrupted by a process arriving in a higher-priority queue.
Advantages of MFQS
This is a flexible scheduling algorithm.
It allows processes to move between different queues.
A process that waits too long in a lower-priority queue may
be moved to a higher-priority queue, which helps prevent
starvation.
Inter Process Communication Mechanisms:
Inter-process communication is the mechanism
provided by the operating system that allows processes
to communicate with each other.
This communication could involve a process letting
another process know that some event has occurred, or
the transfer of data from one process to another.
Processes executing concurrently in the operating
system may be either independent processes or
cooperating processes.
Independent Process: Any process that does not share data
with any other process.
An independent process neither affects nor is affected by the
other processes executing in the system.
Cooperating Process: Any process that shares data with other
processes. A cooperating process can affect or be affected by
the other processes executing in the system.
Cooperating processes require an Inter-Process
Communication (IPC) mechanism that will allow them to
exchange data and information.
Reasons for providing a cooperative process environment:
Information Sharing: Several users may be interested in the
same piece of information (e.g., a shared file), so we must
provide an environment that allows concurrent access to such
information.
Computation Speedup: If we want a particular task to run
faster, we must break it into subtasks, each of which executes
in parallel with the others.
Modularity: Dividing the system functions into separate
processes or threads.
Convenience: Even an individual user may work on many
tasks at the same time.
For example, a user may be editing, listening to music, and
compiling in parallel.
There are two models of IPC:
i. Message passing
ii. Shared memory.
Message-Passing Systems
In the message-passing model, communication takes place
by means of messages exchanged between the cooperating
processes.
Message passing provides a mechanism to allow processes to
communicate and to synchronize their actions without
sharing the same address space.
It is particularly useful in a distributed environment, where
the communicating processes may reside on different
computers connected by a network.
A message-passing facility provides two operations: send
and receive.
Messages sent by a process can be either fixed or variable in
size.
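As one concrete realization of send and receive (an assumption for illustration; the slides do not name a specific facility), a POSIX pipe provides a one-way message channel between a parent and a child process:

```c
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void) {
    int fd[2];
    if (pipe(fd) == -1) return 1;       /* fd[0] = read end, fd[1] = write end */

    if (fork() == 0) {                  /* child acts as the receiver */
        char buf[64];
        ssize_t n = read(fd[0], buf, sizeof(buf) - 1);   /* "receive" */
        if (n > 0) { buf[n] = '\0'; printf("child received: %s\n", buf); }
        return 0;
    }
    const char *msg = "hello";          /* parent acts as the sender */
    write(fd[1], msg, strlen(msg));     /* "send" */
    return 0;
}
```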
Methods for implementing a logical communication link are:
1. Naming
2. Synchronization
3. Buffering
1. Naming
Processes that want to communicate use either
direct or indirect communication.
In Direct communication, each process that
wants to communicate must explicitly
name the recipient or sender of the
communication.
send(P, message) — Send a message to process P.
receive(Q, message) — Receive a message from process Q.
A communication link in direct communication
scheme has the following properties:
A link is established automatically between
every pair of processes that want to
communicate.
The processes need to know only each other’s
identity to communicate.
A link is associated with exactly two processes.
(i.e., between each pair of processes, there
exists exactly one link).
In Indirect communication, the messages are
sent to and received from mailboxes or ports.
A mailbox can be viewed abstractly as an object
into which messages can be placed by
processes and from which messages can be
removed.
Each mailbox has a unique integer identification
value.
A process can communicate with another
process via a number of different mailboxes,
but two processes can communicate only if
they have a shared mailbox.
send(A, message) — Send a message to mailbox A.
receive(A, message) — Receive a message from mailbox A.
In this scheme, a communication link has the
following properties:
A link is established between a pair of processes
only if both members of the pair have a
shared mailbox.
A link may be associated with more than two
processes.
Between each pair of communicating processes,
a number of different links may exist, with
each link corresponding to one mailbox.
2. Synchronization
Message passing is done in two ways:
i. Synchronous or Blocking
ii. Asynchronous or Non-Blocking
Blocking send: The sending process is blocked
until the message is received by the receiving
process or by the mailbox.
Non-blocking send: The sending process sends
the message and resumes operation.
Blocking receive: The receiver blocks until a
message is available.
Non-blocking receive: The receiver retrieves
either a valid message or a null.
3. Buffering
Messages exchanged by communicating
processes reside in a temporary queue.
Those queues can be implemented in three
ways:
Zero Capacity: A zero-capacity queue is a
message system with no buffering.
The sender must block until the recipient
receives the message.
Bounded Capacity:
The queue has finite length n. Hence at most n
messages can reside in it.
If the queue is not full when a new message is
sent, the message is placed in the queue and
the sender can continue execution without
waiting. If the queue is full, the sender must
block until space is available in the queue.
Unbounded Capacity:
The queue’s length is potentially infinite. Hence
any number of messages can wait in it. The
sender never blocks.
ii. Shared Memory:
In the shared-memory model, a region of
memory is shared by cooperating processes.
Processes can exchange information by reading
and writing data to the shared region.
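A minimal POSIX shared-memory sketch (an illustration, not from the slides; the region name /demo_region is hypothetical, and some systems require linking with -lrt):

```c
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void) {
    const char *name = "/demo_region";
    int fd = shm_open(name, O_CREAT | O_RDWR, 0600);   /* create the region */
    if (fd == -1) return 1;
    ftruncate(fd, 4096);                               /* size it */

    char *region = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                        MAP_SHARED, fd, 0);            /* map it into memory */
    if (region == MAP_FAILED) return 1;

    strcpy(region, "data visible to any process that maps this region");
    printf("%s\n", region);

    munmap(region, 4096);
    close(fd);
    shm_unlink(name);                                  /* remove the region */
    return 0;
}
```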
Multi Processor Scheduling
In multiple-processor scheduling, multiple CPUs
are available, and hence load sharing becomes
possible.
It is more complex than scheduling on a
single-processor system.
In a multiprocessor system, we can use any
available processor to run any process in the queue.
Approaches to Multiple-Processor Scheduling
Asymmetric scheduling
All scheduling decisions and I/O processing are
handled by a single processor, called the master
server, while the other processors execute only
user code.
This is simple and reduces the need for data sharing.
This entire scenario is called asymmetric
scheduling.
Symmetric scheduling
All processes may be in a common ready queue
or each processor may have its own private
queue for ready processes.
The scheduling proceeds further by having the
scheduler for each processor examine the
ready queue and select a process to execute.
Processor affinity
The system tries to avoid migrating processes from
one processor to another and tries to keep a
process running on the same processor.
This is known as processor affinity.
Soft Affinity:- The operating system has a
policy of attempting to keep a process
running on the same processor, but it does
not guarantee that it will do so.
Hard Affinity:-
Some systems, such as Linux, also provide
system calls that support hard affinity, which
allows a process to specify the set of processors
on which it may run, so that it does not migrate
to other processors.
Load balancing:-
Load balancing is the technique of keeping the
workload evenly distributed across all
processors.
There are two approaches to load balancing:
Push migration:-
A specific task routinely checks the load on each
processor and, if it finds an imbalance, evenly
distributes the load by moving (pushing) processes
from overloaded processors to idle or less busy
processors.
Pull Migration:-
This occurs when an idle processor pulls a waiting
task from a busy processor for execution.
Multi core processors:-
In multicore processors, multiple processor cores are
placed on the same physical chip.
Each core has a register set to maintain its
architectural state, and thus appears to the operating
system to be a separate physical processor.
Memory Stall:-
When a processor accesses memory, it spends a
significant amount of time waiting for the
data to become available.
This situation is called a memory stall.
To solve this problem, recent hardware designs
implement multithreaded processor cores, in
which two or more hardware threads are
assigned to each core.
If one thread stalls while waiting for memory,
the core can switch to another thread.
Process Management and Synchronization
Process synchronization is the task of coordinating
the execution of processes in such a way that no two
processes can access the same shared data and
resources at the same time.
It is mainly needed in a multi-process system when
multiple processes run together and more than one
process tries to gain access to the same shared
resource or data at the same time.
The Critical-Section Problem
Critical section is a code segment that can be
accessed by only one process at a time. Critical
section contains shared variables which need to
be synchronized to maintain consistency of data
variables.
Consider a system consisting of n processes {P0,
P1, ..., Pn−1}.
Each process has a segment of code, called a
Critical Section, in which the process may be
changing common variables, updating a table,
writing a file, and so on.
When one process is executing in its critical
section, no other process is allowed to execute in
its critical section.
It cannot be executed by more than one process
at a time.
The entry section handles the entry into the
critical section.
It acquires the resources needed for execution by
the process.
The exit section handles the exit from the
critical section. It releases the resources and also
informs the other processes that the critical
section is free.
Solution to the critical section Problem
The critical section problem needs a solution to
synchronize the different processes.
The solution to the critical section problem must
satisfy the following conditions:
Mutual Exclusion
Progress
Bounded waiting
Mutual Exclusion
It implies that only one process can be inside the
critical section at any time.
If any other processes require the critical
section, they must wait until it is free.
Ex:- Like a person waiting in a queue.
Progress
Progress means that if a process is not using the
critical section, then it should not stop any other
process from accessing it.
In other words, any process can enter a critical
section if it is free.
Ex:- Like a person who can enter a telephone
booth if it is free; no one should stop them
when the booth is free.
Bounded waiting
Bounded waiting means that each process must
have a limited waiting time.
It should not wait endlessly to access the critical
section.
Ex:- Like setting a time limit for every
person to talk, so that no person in the queue
has to wait endlessly.
Two general approaches are used to handle
critical sections in operating systems:
1. A Preemptive Kernel allows a process to be
preempted while it is running in kernel mode.
2. A Non-preemptive Kernel does not allow a
process running in kernel mode to be preempted.
Peterson's Solution
This is a widely used, software-based solution
to the critical-section problem.
Peterson's solution was developed by the
computer scientist Gary L. Peterson, which is
why it bears his name.
The solution is notable because it provides a good
algorithmic description of solving the critical-
section problem and illustrates some of the
complexities involved in designing software that
addresses the requirements of mutual exclusion,
progress, and bounded waiting.
The processes are numbered P0 and P1.
Peterson's solution requires the two processes to
share two data items:
int turn;
boolean flag[2];
The variable turn indicates whose turn it is to
enter its critical section.
At any point in time, the turn value will be either
0 or 1, but not both.
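The original slide's figure is not reproduced here; the following is a standard C sketch of that structure for process Pi (with j = 1 - i denoting the other process):

```c
#include <stdbool.h>

int turn;          /* whose turn it is to enter the critical section */
bool flag[2];      /* flag[i] == true: Pi wants to enter */

void process(int i) {
    int j = 1 - i;                  /* the other process */
    while (true) {
        flag[i] = true;             /* announce intent to enter */
        turn = j;                   /* politely give the other process the turn */
        while (flag[j] && turn == j)
            ;                       /* busy-wait while Pj wants in and has the turn */
        /* ----- critical section ----- */
        flag[i] = false;            /* exit section */
        /* ----- remainder section ----- */
    }
}
```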
The above shows the structure of process Pi in
Peterson's solution.
Suppose there are N processes (P1, P2, ...
PN), and at some point every process
requires entry into the critical section.
A FLAG[] array of size N is maintained,
whose entries are false by default.
Whenever a process wants to enter the
critical section, it has to set its flag to true.
Example: If Pi wants to enter, it will
set FLAG[i] = TRUE.
Another variable, called TURN, indicates the
process number whose turn it is to enter the
critical section.
The process that enters the critical section
changes TURN, while exiting, to another
number from the list of processes that
are ready.
Example: If the turn is 3, then P3 enters the
critical section, and while exiting sets turn = 4;
therefore P4 breaks out of its wait loop.
Synchronization Hardware
Many systems provide hardware support for
critical section code.
The critical-section problem could be solved
easily in a single-processor environment if we
could disallow interrupts from occurring while a
shared variable or resource is being modified.
Mutex Locks
As the synchronization hardware solution is not
easy for everyone to implement, a strict software
approach called Mutex Locks was introduced.
In this approach, in the entry section of code, a
LOCK is acquired over the critical resources that
are modified and used inside the critical section,
and in the exit section that LOCK is released.
As the resource is locked while a process
executes its critical section, no other
process can access it.
Test and Set Lock –
Test and Set Lock (TSL) is a hardware solution
to the synchronization problem.
It uses a test-and-set instruction to provide
synchronization among the processes executing
concurrently.
We have a shared lock variable which can take
either of two values, 0 or 1.
If one process is currently executing
a test-and-set, no other process is
allowed to begin another test-and-
set until the first process's test-and-set
is finished.
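A minimal C sketch of this idea (the atomicity of test-and-set is provided by hardware; here the instruction is only modeled as a function):

```c
#include <stdbool.h>

/* Models the atomic instruction: returns the old value of *target and
   sets it to true, all as one indivisible step. */
bool test_and_set(bool *target) {
    bool old = *target;
    *target = true;
    return old;
}

bool lock = false;       /* shared: 0 (false) = free, 1 (true) = held */

void enter_critical(void) {
    while (test_and_set(&lock))
        ;                /* spin while the lock is already held */
}

void exit_critical(void) {
    lock = false;        /* release the lock */
}
```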
Suppose process P0 moves from the queue to
the entry section.
When it enters the entry section,
initially the value of the lock is 0.
Before P0 can enter the critical section,
a register comes into the picture;
the value of the register is set to 1.
In the entry section itself, the values of the
register and the lock are exchanged.
Hence, the value of the lock becomes 1
and the value of the register becomes 0.
With its register now holding 0, process P0
proceeds from the entry section into the
critical section.
Only one process can be in the critical
section at a time.
Now process P1 moves from the queue to the
entry section.
There is now a problem, as only one process can
enter the critical section.
Process P0 is already inside the critical section, so
process P1 has to wait for P0 to finish.
Because process P0 is already inside the critical section,
the exchange leaves the value of P1's register at 1, and
process P1 spins in the while loop without entering the
critical section.
An alert message appears that process P1 needs to wait
for process P0 to leave the critical section.
Process P0 then moves from the critical section to the
exit section.
In the exit section, the value of the lock (as well as the
value of the register) becomes 0.
If process P1 now retries, the exchange sets the lock to
1 again and P1 can enter the critical section.
In a similar way, if process P1 is in the critical
section and process P0 is in the entry section,
process P0 cannot enter the critical section.
The value of P0's register becomes 1 after the exchange,
and process P0 spins in the while loop, which keeps it
from entering the critical section.
An alert message appears that process P0 needs to
wait for process P1 to leave the critical section.
Mutual Exclusion
Mutual exclusion is guaranteed in the TSL mechanism,
since a process can never be preempted between testing
and setting the lock variable.
Only one process can see the lock variable as 0 at a
particular time, and that is why mutual exclusion is
guaranteed.
Progress
According to the definition of progress, a process
that does not want to enter the critical section
should not stop other processes from getting into it.
In the TSL mechanism, a process executes the TSL
instruction only when it wants to enter the critical
section.
The value of the lock is always 0 when no process
wants to enter the critical section, hence progress
is always guaranteed in TSL.
Bounded Waiting
Bounded Waiting is not guaranteed in TSL.
Some process might not get a chance for a
long time.
We cannot guarantee that a process will
definitely get a chance to enter the critical
section within a bounded time.
SWAP instruction
The swap algorithm is a lot like the test-and-set
algorithm.
Instead of directly setting lock to true in the
swap function, key is set to true and then
swapped with lock.
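A minimal C sketch of the swap-based lock (the swap itself is an atomic hardware instruction, modeled here as a function):

```c
#include <stdbool.h>

/* Models the atomic hardware instruction: exchanges *a and *b indivisibly. */
void swap(bool *a, bool *b) {
    bool tmp = *a;
    *a = *b;
    *b = tmp;
}

bool lock = false;           /* shared */

void enter_critical(void) {
    bool key = true;         /* local to each process */
    while (key)
        swap(&lock, &key);   /* loop until a false lock value is swapped in */
    /* critical section */
}

void exit_critical(void) {
    lock = false;
}
```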
The first process executes: in while(key),
since key = true,
the swap takes place, and hence lock = true
and key = false.
On the next iteration of while(key),
key = false,
so the while loop breaks and the first process
enters the critical section.
Now another process tries to enter the
critical section, so again key = true; the
while(key) loop runs and the swap takes
place, leaving lock = true and key = true
(since lock = true from the first process).
On every further iteration while(key) remains
true, so the loop keeps executing and the other
process cannot enter the critical
section.
Therefore, mutual exclusion is ensured.
On leaving the critical section, lock is
changed back to false, so any process finding it
may enter the critical section. Progress is
ensured.
Semaphores
Semaphores are integer variables that are used to solve
the critical section problem.
A semaphore is simply a variable used to solve the
critical-section problem and to achieve process
synchronization in a multiprocessing environment.
A semaphore is accessed through two atomic operations,
wait() and signal().
All modifications to the integer value of the semaphore
in the wait() and signal() operations must be executed
atomically: when one process modifies the semaphore
value, no other process can simultaneously modify that
same semaphore value.
Wait()
The wait operation decrements the value of its argument S if
it is positive. If S is zero or negative, the process waits until
S becomes positive before the decrement is performed.
Signal()
The signal operation increments the value of its argument S.
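A busy-waiting sketch of these classical definitions (named wait_op and signal_op here to avoid clashing with the POSIX wait(); real implementations execute these atomically and usually block rather than spin):

```c
typedef struct { volatile int value; } semaphore;

void wait_op(semaphore *s) {     /* historically also called P() */
    while (s->value <= 0)
        ;                        /* wait until the value becomes positive */
    s->value--;                  /* then decrement */
}

void signal_op(semaphore *s) {   /* historically also called V() */
    s->value++;
}
```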
Types of Semaphores
Semaphores are mainly of two types in operating
systems:
Binary Semaphore:
It is a special form of semaphore used for
implementing mutual exclusion; hence it is often
called a mutex.
A binary semaphore is initialized to 1 and takes only
the values 0 and 1 during the execution of a program.
In a binary semaphore, the wait operation
succeeds only if the value of the semaphore is 1,
and the signal operation succeeds only when the
semaphore is 0.
Binary semaphores are easier to implement
than counting semaphores.
Counting Semaphores:
These are used to implement bounded concurrency.
A counting semaphore can range over an unrestricted
domain.
It can be used to control access to a given resource
that consists of a finite number of instances; the
semaphore count indicates the number of available
resources.
If resources are added, the semaphore count is
incremented, and if resources are removed, the count
is decremented.
A counting semaphore does not by itself provide
mutual exclusion.
Advantages of Semaphores
Semaphores allow only one process into the critical
section.
They follow the mutual-exclusion principle strictly and
are much more efficient than some other methods of
synchronization.
There is no resource wastage due to busy waiting with
semaphores, as processor time is not wasted
unnecessarily checking whether a condition is fulfilled
before allowing a process to access the critical section.
Readers-Writers Problem
The readers-writers problem is another example of a
classic synchronization problem.
The Problem Statement
There is a shared resource that can be
accessed by multiple processes.
There are two types of processes in this context:
readers and writers.
Any number of readers can read from the shared
resource simultaneously, but only
one writer can write to the shared resource at a time.
When a writer is writing data to the resource,
no other process can access the resource.
A writer cannot write to the resource if there is a
non-zero number of readers accessing the resource
at that time.
The Solution
From the above problem statement, it is evident
that readers have higher priority than writers.
If a writer wants to write to the resource, it must
wait until there are no readers currently
accessing that resource.
Here, we use one mutex m and
one semaphore w.
An integer variable read_count is used to
maintain the number of readers currently
accessing the resource; it is initialized to 0.
Both m and w are initially given the value 1.
Instead of having each reader acquire a lock
on the shared resource itself, we use the
mutex m to make a process acquire and
release a lock whenever it updates
the read_count variable.
The code for the writer process looks like
this:
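The slide's code image is not reproduced here; the following is the standard writer for the solution just described, using the wait_op/signal_op sketch from above:

```c
semaphore w = { 1 };    /* guards the shared resource */

void writer(void) {
    wait_op(&w);        /* wait until no reader or writer holds the resource */
    /* ... perform the write ... */
    signal_op(&w);      /* release, letting the next writer (or first reader) in */
}
```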
And, the code for the reader process looks like
this:
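Again as a standard sketch consistent with the description (the slide's own image is not reproduced):

```c
semaphore m = { 1 };    /* protects read_count */
int read_count = 0;

void reader(void) {
    wait_op(&m);
    read_count++;
    if (read_count == 1)
        wait_op(&w);    /* the first reader blocks writers */
    signal_op(&m);

    /* ... perform the read ... */

    wait_op(&m);
    read_count--;
    if (read_count == 0)
        signal_op(&w);  /* the last reader lets writers in */
    signal_op(&m);
}
```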
As seen above in the code for the writer, the
writer just waits on the w semaphore until it
gets a chance to write to the resource.
After performing the write operation, it
signals (increments) w so that the next writer
can access the resource.
On the other hand, in the code for the reader,
the lock is acquired whenever
the read_count is updated by a process.
When a reader wants to access the resource,
first it increments the read_count value,
then accesses the resource, and then
decrements the read_count value.
The semaphore w is used only by the first reader
that enters the critical section and the last
reader that exits it.
The reason for this is that when the first
reader enters the critical section, the
writer is blocked from the resource;
subsequent readers can then access the
resource directly.
Similarly, when the last reader exits the
critical section, it signals the writer using
the w semaphore because there are zero
readers now and a writer can have the
chance to access the resource.
CPU SCHEDULING IN OPERATING SYSTEMS IN DETAILED

More Related Content

Similar to CPU SCHEDULING IN OPERATING SYSTEMS IN DETAILED

Operating Systems Third Unit - Fourth Semester - Engineering
Operating Systems Third Unit  - Fourth Semester - EngineeringOperating Systems Third Unit  - Fourth Semester - Engineering
Operating Systems Third Unit - Fourth Semester - Engineering
Yogesh Santhan
 
Ch6
Ch6Ch6
Ch6
C.U
 

Similar to CPU SCHEDULING IN OPERATING SYSTEMS IN DETAILED (20)

Operating Systems Third Unit - Fourth Semester - Engineering
Operating Systems Third Unit  - Fourth Semester - EngineeringOperating Systems Third Unit  - Fourth Semester - Engineering
Operating Systems Third Unit - Fourth Semester - Engineering
 
Scheduling algo(by HJ)
Scheduling algo(by HJ)Scheduling algo(by HJ)
Scheduling algo(by HJ)
 
LM10,11,12 - CPU SCHEDULING algorithms and its processes
LM10,11,12 - CPU SCHEDULING algorithms and its processesLM10,11,12 - CPU SCHEDULING algorithms and its processes
LM10,11,12 - CPU SCHEDULING algorithms and its processes
 
Process scheduling (CPU Scheduling)
Process scheduling (CPU Scheduling)Process scheduling (CPU Scheduling)
Process scheduling (CPU Scheduling)
 
Cp usched 2
Cp usched  2Cp usched  2
Cp usched 2
 
chapter 5 CPU scheduling.ppt
chapter  5 CPU scheduling.pptchapter  5 CPU scheduling.ppt
chapter 5 CPU scheduling.ppt
 
CPU Scheduling
CPU SchedulingCPU Scheduling
CPU Scheduling
 
Fcfs and sjf
Fcfs and sjfFcfs and sjf
Fcfs and sjf
 
Cpu scheduling
Cpu schedulingCpu scheduling
Cpu scheduling
 
Os unit 2
Os unit 2Os unit 2
Os unit 2
 
Ch6
Ch6Ch6
Ch6
 
Cpu scheduling
Cpu schedulingCpu scheduling
Cpu scheduling
 
Osy ppt - Copy.pptx
Osy ppt - Copy.pptxOsy ppt - Copy.pptx
Osy ppt - Copy.pptx
 
Ch05 cpu-scheduling
Ch05 cpu-schedulingCh05 cpu-scheduling
Ch05 cpu-scheduling
 
Ch5
Ch5Ch5
Ch5
 
CPU Scheduling
CPU SchedulingCPU Scheduling
CPU Scheduling
 
CPU Scheduling
CPU SchedulingCPU Scheduling
CPU Scheduling
 
Cpu scheduling
Cpu schedulingCpu scheduling
Cpu scheduling
 
OS_Ch6
OS_Ch6OS_Ch6
OS_Ch6
 
OSCh6
OSCh6OSCh6
OSCh6
 

Recently uploaded

The basics of sentences session 4pptx.pptx
The basics of sentences session 4pptx.pptxThe basics of sentences session 4pptx.pptx
The basics of sentences session 4pptx.pptx
heathfieldcps1
 
會考英聽會考英聽會考英聽會考英聽會考英聽會考英聽會考英聽會考英聽會考英聽會考英聽
會考英聽會考英聽會考英聽會考英聽會考英聽會考英聽會考英聽會考英聽會考英聽會考英聽會考英聽會考英聽會考英聽會考英聽會考英聽會考英聽會考英聽會考英聽會考英聽會考英聽
會考英聽會考英聽會考英聽會考英聽會考英聽會考英聽會考英聽會考英聽會考英聽會考英聽
中 央社
 
Poster_density_driven_with_fracture_MLMC.pdf
Poster_density_driven_with_fracture_MLMC.pdfPoster_density_driven_with_fracture_MLMC.pdf
Poster_density_driven_with_fracture_MLMC.pdf
Alexander Litvinenko
 

Recently uploaded (20)

Including Mental Health Support in Project Delivery, 14 May.pdf
Including Mental Health Support in Project Delivery, 14 May.pdfIncluding Mental Health Support in Project Delivery, 14 May.pdf
Including Mental Health Support in Project Delivery, 14 May.pdf
 
When Quality Assurance Meets Innovation in Higher Education - Report launch w...
When Quality Assurance Meets Innovation in Higher Education - Report launch w...When Quality Assurance Meets Innovation in Higher Education - Report launch w...
When Quality Assurance Meets Innovation in Higher Education - Report launch w...
 
MOOD STABLIZERS DRUGS.pptx
MOOD     STABLIZERS           DRUGS.pptxMOOD     STABLIZERS           DRUGS.pptx
MOOD STABLIZERS DRUGS.pptx
 
The basics of sentences session 4pptx.pptx
The basics of sentences session 4pptx.pptxThe basics of sentences session 4pptx.pptx
The basics of sentences session 4pptx.pptx
 
Capitol Tech Univ Doctoral Presentation -May 2024
Capitol Tech Univ Doctoral Presentation -May 2024Capitol Tech Univ Doctoral Presentation -May 2024
Capitol Tech Univ Doctoral Presentation -May 2024
 
Dementia (Alzheimer & vasular dementia).
Dementia (Alzheimer & vasular dementia).Dementia (Alzheimer & vasular dementia).
Dementia (Alzheimer & vasular dementia).
 
Championnat de France de Tennis de table/
Championnat de France de Tennis de table/Championnat de France de Tennis de table/
Championnat de France de Tennis de table/
 
Removal Strategy _ FEFO _ Working with Perishable Products in Odoo 17
Removal Strategy _ FEFO _ Working with Perishable Products in Odoo 17Removal Strategy _ FEFO _ Working with Perishable Products in Odoo 17
Removal Strategy _ FEFO _ Working with Perishable Products in Odoo 17
 
Spring gala 2024 photo slideshow - Celebrating School-Community Partnerships
Spring gala 2024 photo slideshow - Celebrating School-Community PartnershipsSpring gala 2024 photo slideshow - Celebrating School-Community Partnerships
Spring gala 2024 photo slideshow - Celebrating School-Community Partnerships
 
會考英聽會考英聽會考英聽會考英聽會考英聽會考英聽會考英聽會考英聽會考英聽會考英聽
會考英聽會考英聽會考英聽會考英聽會考英聽會考英聽會考英聽會考英聽會考英聽會考英聽會考英聽會考英聽會考英聽會考英聽會考英聽會考英聽會考英聽會考英聽會考英聽會考英聽
會考英聽會考英聽會考英聽會考英聽會考英聽會考英聽會考英聽會考英聽會考英聽會考英聽
 
The Ball Poem- John Berryman_20240518_001617_0000.pptx
The Ball Poem- John Berryman_20240518_001617_0000.pptxThe Ball Poem- John Berryman_20240518_001617_0000.pptx
The Ball Poem- John Berryman_20240518_001617_0000.pptx
 
BỘ LUYỆN NGHE TIẾNG ANH 8 GLOBAL SUCCESS CẢ NĂM (GỒM 12 UNITS, MỖI UNIT GỒM 3...
BỘ LUYỆN NGHE TIẾNG ANH 8 GLOBAL SUCCESS CẢ NĂM (GỒM 12 UNITS, MỖI UNIT GỒM 3...BỘ LUYỆN NGHE TIẾNG ANH 8 GLOBAL SUCCESS CẢ NĂM (GỒM 12 UNITS, MỖI UNIT GỒM 3...
BỘ LUYỆN NGHE TIẾNG ANH 8 GLOBAL SUCCESS CẢ NĂM (GỒM 12 UNITS, MỖI UNIT GỒM 3...
 
UChicago CMSC 23320 - The Best Commit Messages of 2024
UChicago CMSC 23320 - The Best Commit Messages of 2024UChicago CMSC 23320 - The Best Commit Messages of 2024
UChicago CMSC 23320 - The Best Commit Messages of 2024
 
Implanted Devices - VP Shunts: EMGuidewire's Radiology Reading Room
Implanted Devices - VP Shunts: EMGuidewire's Radiology Reading RoomImplanted Devices - VP Shunts: EMGuidewire's Radiology Reading Room
Implanted Devices - VP Shunts: EMGuidewire's Radiology Reading Room
 
Poster_density_driven_with_fracture_MLMC.pdf
Poster_density_driven_with_fracture_MLMC.pdfPoster_density_driven_with_fracture_MLMC.pdf
Poster_density_driven_with_fracture_MLMC.pdf
 
“O BEIJO” EM ARTE .
“O BEIJO” EM ARTE                       .“O BEIJO” EM ARTE                       .
“O BEIJO” EM ARTE .
 
philosophy and it's principles based on the life
philosophy and it's principles based on the lifephilosophy and it's principles based on the life
philosophy and it's principles based on the life
 
Andreas Schleicher presents at the launch of What does child empowerment mean...
Andreas Schleicher presents at the launch of What does child empowerment mean...Andreas Schleicher presents at the launch of What does child empowerment mean...
Andreas Schleicher presents at the launch of What does child empowerment mean...
 
Exploring Gemini AI and Integration with MuleSoft | MuleSoft Mysore Meetup #45
Exploring Gemini AI and Integration with MuleSoft | MuleSoft Mysore Meetup #45Exploring Gemini AI and Integration with MuleSoft | MuleSoft Mysore Meetup #45
Exploring Gemini AI and Integration with MuleSoft | MuleSoft Mysore Meetup #45
 
Mattingly "AI and Prompt Design: LLMs with Text Classification and Open Source"
Mattingly "AI and Prompt Design: LLMs with Text Classification and Open Source"Mattingly "AI and Prompt Design: LLMs with Text Classification and Open Source"
Mattingly "AI and Prompt Design: LLMs with Text Classification and Open Source"
 

CPU SCHEDULING IN OPERATING SYSTEMS IN DETAILED

  • 1. CPU SCHEDULING Scheduling is a process of allowing one process to use the CPU resources, keeping on hold the execution of another process due to the unavailability of resources CPU.
  • 2. Concepts of CPU Scheduling CPU–I/O Burst Cycle CPU Scheduler Preemptive Scheduling Dispatcher
  • 3. CPU–I/O Burst Cycle Process execution consists of a cycle of CPU execution and I/O wait. Process execution begins with a CPU burst. That is followed by an I/O burst. Processes alternate between these two states. The final CPU burst ends with a system request to terminate execution.
  • 4.
  • 5. CPU Scheduler Whenever the CPU becomes idle, the operating system must select one of the processes in the ready queue to be executed. The selection process is carried out by the Short-Term Scheduler or CPU scheduler.
  • 6. Preemptive Scheduling Preemptive scheduling is used when a process switches from the running state to the ready state or from the waiting state to the ready state. The resources (mainly CPU cycles) are allocated to the process for a limited amount of time and then taken away, and the process is again placed back in the ready queue if that process still has CPU burst time remaining. That process stays in the ready queue till it gets its next chance to execute.
  • 7. Non-Preemptive Scheduling Non-preemptive Scheduling is used when a process terminates, or a process switches from running to the waiting state. In this scheduling, once the resources (CPU cycles) are allocated to a process, the process holds the CPU till it gets terminated or reaches a waiting state.
  • 8. Dispatcher The dispatcher is the module that gives control of the CPU to the process selected by the short-term scheduler. Dispatcher function involves: Switching context Switching to user mode Jumping to the proper location in the user program to restart that program.
  • 9. The dispatcher should be as fast as possible, since it is invoked during every process switch. The time it takes for the dispatcher to stop one process and start another process running is known as the Dispatch Latency.
  • 10.
  • 11. The aim of the scheduling algorithm is to maximize and minimize the following: Maximize: CPU utilization - It makes sure that the CPU is operating at its peak and is busy. Through output - It is the number of processes that complete their execution per unit of time. Minimize: Waiting time- It is the amount of waiting time in the queue. Response time- Time retired for generating the first request after submission. Turn around time- It is the amount of time required to execute a specific process. TurnAroundTime=Compilationtime−Arrivaltime.
  • 12. Arrival Time In CPU Scheduling, the arrival time refers to the moment in time when a process enters the ready queue and is awaiting execution by the CPU. In other words, it is the point at which a process becomes eligible for scheduling. Burst Time Burst time, also referred to as “execution time”. It is the amount of CPU time the process requires to complete its execution. It is the amount of processing time required by a process to execute a specific task or unit of a job.
  • 13. Completion Time Completion time is when a process finishes execution and is no longer being processed by the CPU.
  • 14. First Come First Serve (FCFS) In FCFS, the process that requests the CPU first is allocated the CPU first. Once the CPU has been allocated to a process, it keeps the CPU until it releases the CPU. FCFS can be implemented by using FIFO queues. When a process enters the ready queue, its PCB is linked onto the tail of the queue. When the CPU is free, it is allocated to the process at the head of the queue. The running process is then removed from the queue.
  • 15. Generalized Activity Normalization Time Table (GANTT) chart is a bar chart that is used to illustrate a particular schedule including the start and finish times of each of the participating processes.
  • 16. Process ArrivalT ime (T0) BurstTi me (ΔT) Finish Time (T1) Turnaround Time (TAT = T1 - T0) Waiting Time (WT = TAT - ΔT) P0 0 10 10 10 0 P1 1 6 16 15 9 P2 3 2 18 15 13 P3 5 4 22 17 13 Gantt Chart 🞂Average Turn around Time: (10+15+15+17)/4 = 14.25ms. 🞂Average Waiting Time: (0+9+13+13)/4 = 8.75ms.
  • 17. Advantages of FCFS The First come first serve process scheduling algorithm is one of the simple and easy processes scheduling algorithms. The process that arrives first will be executed first.
  • 18. Disadvantages of FCFS This algorithm makes other small processes wait until the current program completes. Short processes have to wait for a long time until the bigger process arrives before it. The waiting time is usually high. This scheduling algorithm is not ideal for time- sharing systems.
  • 19. Example of FCFS Billing counters in a supermarket is a real-life example of the FCFS algorithm. The first person to come to the billing counter will be served first, later the next person, and so on. The second customer will only be served if the first person's complete billing is done.
  • 20. Shortest-Job-First Scheduling (SJF) The Process that requires shortest time to complete execution is reserved first. The SJF algorithm is defined as “when the CPU is available, it is assigned to the process that has the smallest next CPU burst”.
  • 21. If the next CPU bursts of two processes are the same, FCFS scheduling is used between two processes. SJF is also called the Shortest-Next CPU-Burst algorithm, because scheduling depends on the length of the next CPU burst of a process, rather than its total length.
  • 22. SJF Scheduling can be used in both preemptive and non-preemptive mode. Preemptive mode of Shortest Job First is called as Shortest Remaining Time First (SRTF).
  • 23. Proce ss ArrivalTi me (T0) BurstTi me (ΔT) Finish Time (T1) Turnaround Time (TAT = T1 - T0) Waiting Time (WT = TAT - ΔT) P0 0 10 10 10 0 P1 1 6 22 21 15 P2 3 2 12 9 7 P3 5 4 16 11 7 🞂AverageTurnaroundTime: (10+21+9+11)/4 = 12.75ms. 🞂AverageWaitingTime: (0+15+7+7)/4 = 7.25ms. If the CPU scheduling policy is SJF non-preemptive, calculate the average waiting time and average turn around time.
  • 24. Process P1 is started at time 0, since it is the only process in the queue. Process P2 arrives at time 1. The remaining time for process P1 (7 milliseconds) is larger than the time required by process P2 (4 milliseconds), so process P1 is preempted, and process P2 is scheduled
  • 25. Advantages- SRTF is optimal and guarantees the minimum average waiting time. It provides a standard for other algorithms since no other algorithm performs better than it. Disadvantages- It can not be implemented practically since burst time of the processes can not be known in advance. It leads to starvation for processes with larger burst time. Priorities can not be set for the processes. Processes with larger burst time have poor response time.
  • 26. Problem-03: Consider the set of 5 processes whose arrival time and burst time are given below- If the CPU scheduling policy is SJF non-preemptive, calculate the average waiting time and average turn around time.
  • 27. Problem-04: Consider the set of 5 processes whose arrival time and burst time are given below- If the CPU scheduling policy is SJF preemptive, calculate the average waiting time and average turn around time.
  • 28. Round-Robin Scheduling (RR) Round-Robin (RR) scheduling algorithm is designed especially for Time Sharing systems. Each selected process is assigned a time interval called time quantum or time slice. Process is allowed to run only for this time interval.
  • 29. After the time quantum expires, the running process is preempted and sent to the ready queue. Then, the processor is assigned to the next arrived process. It is always preemptive in nature.
  • 30.
  • 31. If we use a time quantum of 4 milliseconds
  • 32. Advantages- It gives the best performance in terms of average response time. It is best suited for time sharing system, client server architecture and interactive system. Disadvantages- It leads to starvation for processes with larger burst time as they have to repeat the cycle many times. Its performance heavily depends on time quantum. Priorities can not be set for the processes.
  • 33. Priority Scheduling A priority is associated with each process and the CPU is allocated to the process with the highest priority. Equal-priority processes are scheduled in FCFS order. An SJF algorithm is a special kind of priority scheduling algorithm where small CPU bursts will have higher priority. Priorities can be defined based on time limits, memory requirements, the number of open files etc.
  • 34.
  • 35. Priority scheduling can be either Preemptive or Non-preemptive. A Preemptive Priority Scheduling algorithm will preempt the CPU if the priority of the newly arrived process is higher than the priority of the currently running process.
  • 36. Problem: Starvation or Indefinite Blocking •In priority Scheduling when there is a continuous flow of higher priority processes to the ready queue then all the lower priority processes must wait for the CPU until all the higher priority processes execution completes. •This leads to lower priority processes blocked from getting CPU for a long period of time. This situation is called Starvation or Indefinite blocking.
  • 37. Solution: Aging Aging involves gradually increasing the priority of processes that wait in the system for a long time. To prevent starvation of any process, we can use the concept of aging where we keep on increasing the priority of low- priority process based on the its waiting time. For example, if we decide the aging factor to be 0.5 for each day of waiting, then if a process with priority 20(which is comparitively low priority) comes in the ready queue. After one day of waiting, its priority is increased to 19.5 and so on. Doing so, we can ensure that no process will have to wait for indefinite time for getting CPU time for processing.
  • 38. Multilevel Queue Scheduling Algorithm In the Multilevel Queue Scheduling algorithm the processes are classified into different groups. Priority of foreground processes are higher than background processes. The multilevel queue scheduling algorithm partitions the ready queue into several separate queues.
  • 39. Processes are assigned to the queue depending on some properties of the process. Property may be memory size, process priority or process type. Each queue is associated with its own scheduling algorithm. Ex:- Foreground process might scheduled by RR algorithm and the background queue is scheduled by FCFS algorithm.
  • 40.
  • 41. The Description of the processes in the above diagram is as follows: System Process The Operating system itself has its own process to run and is termed as System Process. Interactive Process The Interactive Process is a process in which there should be the same kind of interaction (basically an online game ). Batch Processes Batch processing is basically a technique in the Operating system that collects the programs and data together in the form of the batch before the processing starts. Student Process The system process always gets the highest priority while the student processes always get the lowest priority.
  • 42. No process in the batch queue could run unless the queues for system processes, interactive processes and interactive editing processes were all empty. If an interactive editing process entered the ready queue while a batch process was running, the batch process will be preempted.
  • 43. Priority of queue 1 is greater than the priority of queue 2. Here Queue 1 uses RR(as Time quantum =2 and queue 2 uses FCFS and Calculate the average TAT and average WT
  • 44.
  • 45.
  • 46.
  • 47.
  • 48.
  • 49.
  • 50.
  • 51.
  • 52.
  • 53. Disadvantage: Starvation of Lower level queue The multilevel queue scheduling algorithm is inflexible. The processes are permanently assigned to a queue when they enter the system. Processes are not allowed to move from one queue to another queue. There is a chance that lower level queues will be in starvation because unless the higher level queues are empty no lower level queues will be executing. If at any instant of time if there is a process in higher priority queue then there is no chance that lower level process can be executed eternally.
  • 54. Multilevel Feedback Queue Scheduling (MLFQ) Multilevel feedback queue scheduling algorithm allows a process to move between queues. Processes are separated according to the characteristics of their CPU bursts. If a process uses too much CPU time, it will be moved to a lower-priority queue. A process that waits too long in a lower-priority queue moved to a higher-priority queue. This form of aging prevents starvation.
  • 55. A process entering the ready queue is put in queue 0. Process in queue 0 has given time quantum 8ms. If it does not finish within this time, it is moved to the tail of queue 1. If queue 0 is empty, the process at the head of the queue 1 is given a quantum of 16ms. If it does not complete, it is preempted and is put into queue 2.
  • 56.
  • 57. Use RR for the first two queues and the FCFS for third queue. Time quantum of queue 1 is 3 and queue 2 is 5. Calculate the average TAT and average WT
  • 58.
  • 59.
  • 60.
  • 61.
  • 62.
  • 63.
  • 64.
  • 65.
  • 66.
  • 67.
  • 68. It is important to note that a process that is in a lower priority queue can only execute only when the higher priority queues are empty. Any running process in the lower priority queue can be interrupted by a process arriving in the higher priority queue. Advantages of MFQS This is a flexible Scheduling Algorithm This scheduling algorithm allows different processes to move between different queues. In this algorithm, A process that waits too long in a lower priority queue may be moved to a higher priority queue which helps in preventing starvation.
  • 69. Inter Process Communication Mechanisms: Inter process communication is the mechanism provided by the operating system that allows processes to communicate with each other. This communication could involve a process letting another process know that some event has occurred or the transferring of data from one process to another. Processes executing concurrently in the operating system may be either independent processes or cooperating processes.
  • 70. Independent Process: any process that does not share data with any other process. An independent process neither affects nor is affected by the other processes executing in the system. Cooperating Process: any process that shares data with other processes. A cooperating process can affect or be affected by the other processes executing in the system. Cooperating processes require an Inter-Process Communication (IPC) mechanism that allows them to exchange data and information.
  • 71. Reasons for providing a cooperative process environment: Information Sharing: several users may be interested in the same piece of information (e.g., a shared file), so the system must provide an environment that allows concurrent access to it. Computation Speedup: to make a particular task run faster, we can break it into subtasks, each executing in parallel with the others. Modularity: dividing the system functions into separate processes or threads.
  • 72. Convenience: even an individual user may work on many tasks at the same time. For example, a user may be editing, listening to music, and compiling in parallel. There are two models of IPC: i. Message passing ii. Shared memory.
  • 73. Message-Passing Systems In the message-passing model, communication takes place by means of messages exchanged between the cooperating processes. Message passing provides a mechanism to allow processes to communicate and to synchronize their actions without sharing the same address space. It is particularly useful in a distributed environment, where the communicating processes may reside on different computers connected by a network. A message-passing facility provides two operations: send, receive. Messages sent by a process can be either fixed or variable in size.
  • 74. Methods for implementing a logical communication link are: 1. Naming 2. Synchronization 3. Buffering
  • 75. 1. Naming Processes that want to communicate use either direct or indirect communication. In direct communication, each process that wants to communicate must explicitly name the recipient or sender of the communication. send(P, message) — Send a message to process P. receive(Q, message) — Receive a message from process Q.
  • 76. A communication link in the direct communication scheme has the following properties: A link is established automatically between every pair of processes that want to communicate; the processes need to know only each other's identity. A link is associated with exactly two processes, i.e., between each pair of processes there exists exactly one link.
  • 77. In Indirect communication, the messages are sent to and received from mailboxes or ports. A mailbox can be viewed abstractly as an object into which messages can be placed by processes and from which messages can be removed. Each mailbox has a unique integer identification value.
  • 78. A process can communicate with another process via a number of different mailboxes, but two processes can communicate only if they have a shared mailbox. send(A, message) — Send a message to mailbox A. receive(A, message) — Receive a message from mailbox A.
  • 79. In this scheme, a communication link has the following properties: A link is established between a pair of processes only if both members of the pair have a shared mailbox. A link may be associated with more than two processes. Between each pair of communicating processes, a number of different links may exist, with each link corresponding to one mailbox.
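One concrete realization of mailbox-style (indirect) communication is a POSIX message queue; the sketch below is illustrative, with the queue name "/demo_mbox" made up for the example (link with -lrt on Linux):

    #include <fcntl.h>
    #include <mqueue.h>
    #include <stdio.h>
    #include <string.h>

    int main(void) {
        /* create/open the mailbox; any process that knows the name may use it */
        mqd_t mq = mq_open("/demo_mbox", O_CREAT | O_RDWR, 0644, NULL);
        if (mq == (mqd_t)-1) { perror("mq_open"); return 1; }

        char out[] = "hello";
        mq_send(mq, out, sizeof out, 0);       /* place a message in the mailbox */

        char in[8192];                         /* must be >= the queue's mq_msgsize */
        mq_receive(mq, in, sizeof in, NULL);   /* remove a message from the mailbox */
        printf("received: %s\n", in);

        mq_close(mq);
        mq_unlink("/demo_mbox");
        return 0;
    }

Opening the queue with the additional O_NONBLOCK flag would make mq_send and mq_receive non-blocking, which connects directly to the synchronization options discussed next.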
  • 80. 2. Synchronization Message passing may be done in two ways: i. Synchronous (blocking) ii. Asynchronous (non-blocking). Blocking send: the sending process is blocked until the message is received by the receiving process or by the mailbox.
  • 81. Non-blocking send: The sending process sends the message and resumes operation. Blocking receive: The receiver blocks until a message is available. Non-blocking receive: The receiver retrieves either a valid message or a null.
  • 82. 3. Buffering Messages exchanged by communicating processes reside in a temporary queue. Such queues can be implemented in three ways: Zero Capacity: a zero-capacity queue is called a message system with no buffering; the sender must block until the recipient receives the message.
  • 83. Bounded Capacity: The queue has finite length n. Hence at most n messages can reside in it. If the queue is not full when a new message is sent, the message is placed in the queue and the sender can continue execution without waiting. If the link is full, the sender must block until space is available in the queue.
  • 84. Unbounded Capacity: the queue's length is potentially infinite, so any number of messages can wait in it and the sender never blocks. ii. Shared memory: In the shared-memory model, a region of memory is shared by cooperating processes, which exchange information by reading and writing data in the shared region.
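As an illustration of the shared-memory model, the following sketch uses POSIX shared memory (the region name "/demo_shm" and the 4096-byte size are made up; in practice the writer and reader would be two separate processes, shown in one program here for brevity):

    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>
    #include <unistd.h>

    int main(void) {
        int fd = shm_open("/demo_shm", O_CREAT | O_RDWR, 0666);
        if (fd == -1) { perror("shm_open"); return 1; }
        ftruncate(fd, 4096);                       /* size the shared region */

        char *region = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                            MAP_SHARED, fd, 0);    /* map it into the address space */

        strcpy(region, "written by producer");     /* one process writes ...  */
        printf("consumer read: %s\n", region);     /* ... another would read  */

        munmap(region, 4096);
        close(fd);
        shm_unlink("/demo_shm");
        return 0;
    }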
  • 86. Multiprocessor Scheduling In multiprocessor scheduling, multiple CPUs are available, so load sharing becomes possible. It is more complex than scheduling on a single-processor system. In a multiprocessor system, any available processor can run any process in the queue.
  • 87. Approaches to Multiple-Processor Scheduling Asymmetric scheduling: all scheduling decisions and I/O processing are handled by a single processor, called the master server, while the other processors execute only user code. This is simple and reduces the need for data sharing. This arrangement is called asymmetric scheduling.
  • 89. Symmetric scheduling: all processes may be in a common ready queue, or each processor may have its own private queue of ready processes. Scheduling proceeds by having each processor's scheduler examine the ready queue and select a process to execute.
  • 91. Processor affinity The system tries to avoid migrating a process from one processor to another, keeping the process running on the same processor. This is known as processor affinity. Soft affinity: the operating system has a policy of attempting to keep a process running on the same processor, but does not guarantee that it will do so.
  • 92. Hard affinity: some systems, such as Linux, also provide system calls that support hard affinity, allowing a process to specify the set of processors on which it may run (and thus not migrate off them); a sketch follows. Load balancing: load balancing keeps the workload evenly distributed across all processors.
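On Linux, hard affinity can be requested with sched_setaffinity(2); a minimal sketch, pinning the calling process to CPU 0 (the CPU number is chosen arbitrarily):

    #define _GNU_SOURCE
    #include <sched.h>
    #include <stdio.h>

    int main(void) {
        cpu_set_t set;
        CPU_ZERO(&set);
        CPU_SET(0, &set);                       /* allow only CPU 0 */
        if (sched_setaffinity(0, sizeof set, &set) != 0) {  /* pid 0 = this process */
            perror("sched_setaffinity");
            return 1;
        }
        printf("process pinned to CPU 0\n");
        return 0;
    }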
  • 93. There are two approaches to load balancing. Push migration: a specific task routinely checks the load on each processor and, if it finds an imbalance, evenly distributes the load by moving processes from overloaded processors to idle or less busy ones.
  • 94. Pull migration: occurs when an idle processor pulls a waiting task from a busy processor and executes it. Multicore processors: multiple processor cores are placed on the same physical chip. Each core has its own register set to maintain its architectural state, so it appears to the operating system as a separate physical processor.
  • 95. Memory stall: when a processor accesses memory, it may spend a significant amount of time waiting for the data to become available. This situation is called a memory stall.
  • 96. To address this problem, recent hardware designs implement multithreaded processor cores, in which two or more hardware threads are assigned to each core. If one thread stalls while waiting for memory, the core can switch to another thread.
  • 97. Process Management and Synchronization Process synchronization is the task of coordinating the execution of processes so that no two processes can access the same shared data and resources at the same time. It is needed mainly in multi-process systems, where multiple processes run together and more than one process tries to gain access to the same shared resource or data at the same time.
  • 98. The Critical-Section Problem A critical section is a code segment that can be accessed by only one process at a time. It contains shared variables that must be synchronized to maintain the consistency of data. Consider a system consisting of n processes {P0, P1, ..., Pn−1}.
  • 99. Each process has a segment of code, called a critical section, in which the process may be changing common variables, updating a table, writing a file, and so on. When one process is executing in its critical section, no other process is allowed to execute in its critical section.
  • 100. The critical section cannot be executed by more than one process at a time. The entry section handles entry into the critical section and acquires the resources needed by the process. The exit section handles the exit from the critical section; it releases the resources and informs the other processes that the critical section is free. A sketch of this structure follows.
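The entry/critical/exit structure can be made concrete with a pthread mutex standing in for the entry and exit sections (an illustration, not the slides' own code; compile with -pthread):

    #include <pthread.h>
    #include <stdio.h>

    static pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;
    static int shared_counter = 0;              /* the shared variable */

    static void *worker(void *arg) {
        for (int i = 0; i < 100000; i++) {
            pthread_mutex_lock(&m);             /* entry section    */
            shared_counter++;                   /* critical section */
            pthread_mutex_unlock(&m);           /* exit section     */
            /* remainder section: non-shared work would go here */
        }
        return NULL;
    }

    int main(void) {
        pthread_t t1, t2;
        pthread_create(&t1, NULL, worker, NULL);
        pthread_create(&t2, NULL, worker, NULL);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        printf("counter = %d\n", shared_counter);  /* always 200000 */
        return 0;
    }

Without the lock around the increment, the two workers would interleave and the final count would usually fall short of 200000.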
  • 102. Solution to the Critical-Section Problem The critical-section problem needs a solution that synchronizes the different processes. The solution must satisfy the following conditions: Mutual Exclusion, Progress, Bounded Waiting.
  • 103. Mutual Exclusion Only one process can be inside the critical section at any time. If any other processes require the critical section, they must wait until it is free. Example: like a person using a telephone booth; everyone else waits in the queue.
  • 104. Progress Progress means that a process that is not using the critical section should not stop any other process from accessing it. In other words, any process may enter the critical section if it is free. Example: anyone may enter the telephone booth if it is free; no one should be held back when the booth is empty.
  • 105. Bounded Waiting Bounded waiting means that each process must have a limited waiting time; it should not wait endlessly to access the critical section. Example: if we set a time limit on every person's call, no one in the queue has to wait endlessly.
  • 106. Two general approaches are used to handle critical sections in operating systems: 1. A preemptive kernel allows a process to be preempted while it is running in kernel mode. 2. A non-preemptive kernel does not allow a process running in kernel mode to be preempted.
  • 107. Peterson's Solution This is a widely used, software-based solution to the critical-section problem. It is named after its developer, the computer scientist Gary L. Peterson.
  • 108. The solution is valued because it provides a good algorithmic description of solving the critical-section problem and illustrates some of the complexities involved in designing software that satisfies the requirements of mutual exclusion, progress, and bounded waiting.
  • 109. The processes are numbered P0 and P1. Peterson's solution requires the two processes to share two data items: int turn; boolean flag[2]; The variable turn indicates whose turn it is to enter the critical section. At any point in time, turn is either 0 or 1, never both.
  • 111. In Peterson's solution, each process Pi sets its flag, yields the turn, and busy-waits before entering its critical section. More generally, suppose there are N processes (P1, P2, ... PN) and at some point every process needs to enter the critical section. A FLAG[] array of size N is maintained, with every entry false by default. Whenever a process wants to enter the critical section, it must set its flag to true.
  • 112. Example: if Pi wants to enter, it sets FLAG[i]=TRUE. Another variable, TURN, indicates the process number whose turn it is to enter the critical section. The process that enters the critical section changes TURN, while exiting, to another number from the list of ready processes. A two-process C sketch is given below.
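For the two-process case, Peterson's algorithm can be sketched as follows (an illustration with POSIX threads; the _Atomic qualifiers stand in for the textbook's plain shared variables, which modern CPUs would otherwise reorder; compile with -pthread):

    #include <pthread.h>
    #include <stdatomic.h>
    #include <stdbool.h>
    #include <stdio.h>

    static _Atomic bool flag[2];              /* flag[i]: Pi wants to enter */
    static _Atomic int turn;                  /* whose turn it is           */
    static int counter;                       /* shared data to protect     */

    static void *proc(void *arg) {
        int i = (int)(long)arg, j = 1 - i;
        for (int k = 0; k < 100000; k++) {
            flag[i] = true;                   /* I want to enter            */
            turn = j;                         /* but let the other go first */
            while (flag[j] && turn == j)
                ;                             /* busy wait                  */
            counter++;                        /* critical section           */
            flag[i] = false;                  /* exit section               */
        }
        return NULL;
    }

    int main(void) {
        pthread_t t0, t1;
        pthread_create(&t0, NULL, proc, (void *)0L);
        pthread_create(&t1, NULL, proc, (void *)1L);
        pthread_join(t0, NULL);
        pthread_join(t1, NULL);
        printf("counter = %d (expected 200000)\n", counter);
        return 0;
    }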
  • 113. Example: if TURN is 3, then P3 enters the critical section and, while exiting, sets TURN = 4, so P4 breaks out of its wait loop. Synchronization Hardware Many systems provide hardware support for critical-section code. In a single-processor environment, the critical-section problem could be solved easily by disallowing interrupts while a shared variable or resource is being modified.
  • 114. Mutex Locks As the synchronization hardware solution is not easy for everyone to implement, a strict software approach called mutex locks was introduced. In this approach, in the entry section of the code a LOCK is acquired over the critical resources used and modified inside the critical section, and in the exit section that LOCK is released. Because the resource is locked while a process executes its critical section, no other process can access it; a sketch follows.
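A minimal sketch of such a lock in C (illustrative; the names acquire/release are conventional, and C11's atomic_flag supplies the atomicity that a plain boolean flag could not):

    #include <stdatomic.h>

    static atomic_flag lock_taken = ATOMIC_FLAG_INIT;

    void acquire(void) {
        while (atomic_flag_test_and_set(&lock_taken))
            ;                             /* busy wait: lock is unavailable */
    }

    void release(void) {
        atomic_flag_clear(&lock_taken);   /* lock becomes available */
    }

A process calls acquire() in its entry section and release() in its exit section. Note that acquire() spins on an atomic test-and-set, which is exactly the hardware primitive described next.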
  • 115. Test and Set Lock Test and Set Lock (TSL) is a hardware-based synchronization mechanism. It uses a test-and-set instruction to provide synchronization among processes executing concurrently. There is a shared lock variable which can take either of two values, 0 or 1.
  • 116. If one process is currently executing a test-and-set, no other process is allowed to begin another test-and-set until the first one has finished. A sketch of the instruction and its use follows.
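The instruction's behavior can be sketched in C as below; note that the body of test_and_set must execute atomically in hardware, which plain C cannot itself guarantee, so this is pseudocode for the semantics:

    #include <stdbool.h>

    bool lock = false;          /* shared: false means the lock is free */

    bool test_and_set(bool *target) {
        bool rv = *target;      /* remember the old value        */
        *target = true;         /* take the lock unconditionally */
        return rv;              /* false => the caller got it    */
    }

    void enter_critical(void) {
        while (test_and_set(&lock))
            ;                   /* spin while another process holds it */
    }

    void exit_critical(void) {
        lock = false;           /* release */
    }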
  • 117. Suppose process P0 goes from the queue to the entry section. When it enters the entry section, the value of the lock is initially 0. A register now comes into the picture, and its value is 1.
  • 118. In the entry section itself, the values of the register and the lock are exchanged. Hence the value of the lock becomes 1 and the value of the register becomes 0.
  • 119. Process P0 then proceeds from the entry section into the critical section, where only one process can be at a time.
  • 120. Now process P1 moves from the queue to the entry section. Since only one process can be inside the critical section and P0 is already there, P1 has to wait for P0 to finish: the exchange leaves P1's register value at 1, so P1 spins in the while loop and cannot enter the critical section.
  • 121. In effect, P1 waits for P0 to leave the critical section. Eventually P0 proceeds from the critical section to the exit section.
  • 122. In the exit section, the value of the lock as well as the value of the register becomes 0. If P1 now retries its exchange, the lock becomes 1 again and P1 may enter. In the same way, if process P1 is in the critical section and process P0 is in the entry section, P0 cannot enter the critical section.
  • 123. P0's register value becomes 1 instead of 0, so P0 spins in the while loop, which keeps it from entering the critical section until P1 leaves.
  • 124. Mutual Exclusion Mutual exclusion is guaranteed by the TSL mechanism, since a process can never be preempted just before setting the lock variable: only one process can see the lock variable as 0 at any particular time.
  • 125. Progress By the definition of progress, a process that does not want to enter the critical section must not stop other processes from getting into it. In the TSL mechanism, a process executes the TSL instruction only when it wants to enter the critical section, and the lock value remains 0 whenever no process wants to enter; hence progress is always guaranteed in TSL.
  • 126. Bounded Waiting Bounded waiting is not guaranteed in TSL: a process might not get a chance for a long time, and we cannot guarantee that any given process will enter the critical section within a bounded time.
  • 128. SWAP instruction The swap algorithm is much like the test-and-set algorithm. Instead of directly setting lock to true inside the function, key is set to true and then swapped with lock, as sketched below.
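A C sketch of the idea (again, the swap itself must execute atomically in hardware; the helper names are illustrative):

    #include <stdbool.h>

    bool lock = false;              /* shared */

    void swap(bool *a, bool *b) {   /* atomic in hardware */
        bool tmp = *a;
        *a = *b;
        *b = tmp;
    }

    void enter_critical(void) {
        bool key = true;            /* local to each process */
        while (key)
            swap(&lock, &key);      /* key comes back false once we hold the lock */
    }

    void exit_critical(void) {
        lock = false;
    }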
  • 129. The first process executes while(key): since key=true, the swap takes place, making lock=true and key=false. On the next iteration, while(key) fails because key=false, so the loop breaks and the first process enters the critical section.
  • 130. Now another process tries to enter the critical section, so again key=true and the while(key) loop runs; the swap yields lock=true and key=true (since lock was already true from the first process). On every subsequent iteration while(key) remains true, so the loop keeps executing and the second process cannot enter the critical section.
  • 131. Therefore mutual exclusion is ensured. On leaving the critical section, the process sets lock back to false, so any process that finds it false may enter the critical section. Progress is ensured as well.
  • 133. Semaphores A semaphore is an integer variable used to solve the critical-section problem and to achieve process synchronization in a multiprocessing environment. A semaphore is accessed only through the two atomic operations wait() and signal().
  • 134. All modifications to the integer value of the semaphore in the wait( ) and signal( ) operations must be executed all at once. When one process modifies the semaphore value, no other process can simultaneously modify that same semaphore value.
  • 136. Wait() The wait operation decrements the value of its argument S when S is positive. If S is zero (or negative), no decrement is performed and the caller waits until S becomes positive.
  • 137. Signal() The signal operation increments the value of its argument S. Classical busy-wait definitions of both operations are sketched below.
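The classical busy-waiting definitions, shown as C-style pseudocode (each operation must execute indivisibly, as noted above; plain C cannot enforce that by itself):

    void wait(int *S) {
        while (*S <= 0)
            ;          /* busy wait until S becomes positive */
        (*S)--;        /* consume one unit                   */
    }

    void signal(int *S) {
        (*S)++;        /* release one unit                   */
    }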
  • 138. Types of Semaphores Semaphores are mainly of two types in Operating system Binary Semaphore: It is a special form of semaphore used for implementing mutual exclusion, hence it is often called a Mutex. A binary semaphore is initialized to 1 and only takes the values 0 and 1 during the execution of a program.
  • 139. In a binary semaphore, the wait operation proceeds only when the value of the semaphore is 1, and the signal operation succeeds only when the semaphore is 0. Binary semaphores are easier to implement than counting semaphores.
  • 140. Counting Semaphores: these are used to implement bounded concurrency. A counting semaphore can range over an unrestricted domain and can control access to a resource that consists of a finite number of instances; the semaphore count indicates the number of available instances.
  • 141. If resources are added, the semaphore count is incremented; if resources are removed, the count is decremented. A counting semaphore does not by itself provide mutual exclusion. A short example follows.
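A sketch with POSIX semaphores, where a count of 3 models three identical resource instances shared by five threads (the numbers are illustrative; compile with -pthread on Linux):

    #include <pthread.h>
    #include <semaphore.h>
    #include <stdio.h>

    static sem_t slots;                  /* counting semaphore */

    static void *user(void *arg) {
        sem_wait(&slots);                /* acquire an instance (count--) */
        printf("thread %ld is using a resource instance\n", (long)arg);
        sem_post(&slots);                /* release it (count++)          */
        return NULL;
    }

    int main(void) {
        sem_init(&slots, 0, 3);          /* 3 instances available */
        pthread_t t[5];
        for (long i = 0; i < 5; i++)
            pthread_create(&t[i], NULL, user, (void *)i);
        for (int i = 0; i < 5; i++)
            pthread_join(t[i], NULL);
        sem_destroy(&slots);
        return 0;
    }

At most three threads can be past sem_wait at once; the others block until an instance is released.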
  • 142. Advantages of Semaphores Semaphores allow only one process at a time into the critical section. They follow the mutual-exclusion principle strictly and are more efficient than several other synchronization methods. When semaphores are implemented with blocking rather than busy waiting, no processor time is wasted repeatedly checking whether a process may enter the critical section.
  • 143. Readers-Writers Problem The readers-writers problem is another classic synchronization problem. The Problem Statement: there is a shared resource that is accessed by multiple processes of two types, readers and writers.
  • 144. Any number of readers can read from the shared resource simultaneously, but only one writer can write to it at a time. When a writer is writing data to the resource, no other process can access the resource. A writer cannot write to the resource while a non-zero number of readers is accessing it.
  • 145. The Solution From the problem statement it is evident that readers have higher priority than writers: if a writer wants to write to the resource, it must wait until no readers are currently accessing that resource.
  • 146. Here we use one mutex m and a semaphore w. An integer variable read_count maintains the number of readers currently accessing the resource; it is initialized to 0. Both m and w are initially given the value 1.
  • 147. Rather than having each process lock the shared resource directly, we use the mutex m so that a process acquires and releases a lock only while it is updating the read_count variable.
  • 148. The code for the writer process looks like this:
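(The original slide shows the code as an image; the following is the standard sketch, using the wait()/signal() operations defined earlier.)

    /* writer process */
    while (true) {
        wait(w);      /* blocks while readers or another writer hold w */
        /* ... perform the write to the shared resource ... */
        signal(w);    /* let the next writer, or the first reader, in  */
    }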
  • 149. And, the code for the reader process looks like this:
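(Again the slide's image is not reproduced; this is the standard sketch.)

    /* reader process */
    while (true) {
        wait(m);                 /* protect read_count             */
        read_count++;
        if (read_count == 1)
            wait(w);             /* first reader locks out writers */
        signal(m);

        /* ... read from the shared resource ... */

        wait(m);
        read_count--;
        if (read_count == 0)
            signal(w);           /* last reader lets writers in    */
        signal(m);
    }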
  • 150. As seen in the writer's code, the writer simply waits on the w semaphore until it gets a chance to write to the resource. After performing the write, it signals w so that the next writer can access the resource. In the reader's code, the lock is acquired whenever read_count is updated by a process.
  • 151. When a reader wants to access the resource, it first increments the read_count value, then accesses the resource, and finally decrements read_count. The semaphore w is used only by the first reader that enters the critical section and by the last reader that exits it.
  • 152. The reason is that when the first reader enters the critical section, the writer is blocked from the resource; from then on, only new readers can access it.
  • 153. Similarly, when the last reader exits the critical section, it signals the writer via the w semaphore, because there are now zero readers and a writer can get the chance to access the resource.