Course Title: Operating Systems & Systems Programming
Course No.: CSE - 3201 / CSE - 2205
What is Scheduling?
Process scheduling is the activity by which the process manager removes the running process from the CPU and selects another process on the basis of a particular strategy or set of criteria.
Scheduler
A scheduler is system software (or a part of the operating system) that handles process scheduling.
Preemptive Scheduling
Nonpreemptive Scheduling
Scheduling Algorithms
First-Come, First-Served Scheduling
By far the simplest CPU-scheduling
algorithm is the first-come, first-served
(FCFS) scheduling algorithm.
The process that requests the CPU first is
allocated the CPU first. The
implementation of the FCFS policy is
easily managed with a FIFO queue.
When a process enters the ready queue,
its PCB is linked onto the tail of the
queue. When the CPU is free, it is
allocated to the process at the head of the
queue.
FCFS scheduling algorithm is
nonpreemptive
On the negative side, the average
waiting time under the FCFS policy is
often quite long.
Consider the following set of processes that arrive at time 0, with the length of the CPU burst given in milliseconds (the burst lengths follow from the waiting times computed below):
Process  Burst Time (ms)
P1       24
P2       3
P3       3
If the processes arrive in the order P1, P2, P3, and are served in FCFS order, we get the result shown in the following Gantt chart, which is a bar chart that illustrates a particular schedule, including the start and finish times of each of the participating processes:
| P1 (0-24) | P2 (24-27) | P3 (27-30) |
Waiting time for P1 = 0 ms; P2 = 24 ms; P3 = 27 ms
Average waiting time: (0 + 24 + 27)/3 = 17 ms
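The waiting-time arithmetic above can be sketched in a few lines of Python (the helper name and list layout are ours; bursts of 24, 3, and 3 ms reproduce the example):

```python
def fcfs_waiting_times(bursts):
    """FCFS: each process waits for the total burst time of everything
    queued ahead of it (all processes arrive at time 0)."""
    waits, elapsed = [], 0
    for burst in bursts:
        waits.append(elapsed)
        elapsed += burst
    return waits

# P1, P2, P3 with CPU bursts of 24, 3, and 3 ms, served in arrival order.
waits = fcfs_waiting_times([24, 3, 3])
print(waits)                    # [0, 24, 27]
print(sum(waits) / len(waits))  # 17.0
```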
Shortest-Job-First (SJF) Scheduling
A different approach to CPU scheduling is
the shortest-job-first (SJF) scheduling
algorithm.
This algorithm associates with each process the length of the process's next CPU burst. When the CPU is available, it is assigned to the process that has the smallest next CPU burst.
If the next CPU bursts of two processes
are the same, FCFS scheduling is used to
break the tie. A more appropriate term for this scheduling method would be the
shortest-next-CPU-burst algorithm,
because scheduling depends on the length
of the next CPU burst of a process, rather
than its total length.
The SJF algorithm can be either
preemptive or nonpreemptive.
The choice arises when a new process
arrives at the ready queue while a
previous process is still executing. The
next CPU burst of the newly arrived
process may be shorter than what is left
of the currently executing process.
A preemptive SJF algorithm will preempt
the currently executing process, whereas
a nonpreemptive SJF algorithm will allow
the currently running process to finish its
CPU burst.
Preemptive SJF scheduling is sometimes
called shortest-remaining-time-first
scheduling.
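Nonpreemptive SJF can be sketched as follows (process names and burst values are illustrative; ties fall back to FCFS as described above):

```python
def sjf_schedule(processes):
    """Nonpreemptive SJF: processes is a list of (name, arrival, burst).
    Returns the order of execution and each process's waiting time."""
    remaining = sorted(processes, key=lambda p: p[1])  # by arrival time
    time, order, waits = 0, [], {}
    while remaining:
        ready = [p for p in remaining if p[1] <= time]
        if not ready:                      # CPU idle until the next arrival
            time = min(p[1] for p in remaining)
            continue
        # Smallest next burst wins; earlier arrival (FCFS) breaks ties.
        name, arrival, burst = min(ready, key=lambda p: (p[2], p[1]))
        waits[name] = time - arrival
        order.append(name)
        time += burst
        remaining.remove((name, arrival, burst))
    return order, waits

order, waits = sjf_schedule([("P1", 0, 6), ("P2", 0, 8), ("P3", 0, 7), ("P4", 0, 3)])
print(order)   # ['P4', 'P1', 'P3', 'P2']
print(waits)   # {'P4': 0, 'P1': 3, 'P3': 9, 'P2': 16}
```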
Priority Scheduling
The SJF algorithm is a special case of
the general priority scheduling algorithm.
A priority is associated with each
process, and the CPU is allocated to the
process with the highest priority.
Equal-priority processes are scheduled in
FCFS order.
An SJF algorithm is simply a priority
algorithm where the priority (p) is the
inverse of the (predicted) next CPU burst.
The larger the CPU burst, the lower the
priority, and vice versa.
Note that we discuss scheduling in terms of
high priority and low priority. Priorities are
generally indicated by some fixed range of
numbers, such as 0 to 7 or 0 to 4,095.
However, there is no general agreement on
whether 0 is the highest or lowest priority.
Some systems use low numbers to represent
low priority; others use low numbers for
high priority.
In this text, we assume that low numbers represent high priority.
As an example, consider the following set of processes, assumed to have arrived at time 0 in the order P1, P2, · · ·, P5, with the length of the CPU burst given in milliseconds (values consistent with the 8.2 ms average computed below):
Process  Burst Time (ms)  Priority
P1       10               3
P2       1                1
P3       2                4
P4       1                5
P5       5                2
Using priority scheduling, we would schedule these processes according to the following Gantt chart:
| P2 (0-1) | P5 (1-6) | P1 (6-16) | P3 (16-18) | P4 (18-19) |
The average waiting time is (0 + 1 + 6 + 16 + 18)/5 = 8.2 milliseconds.
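The average above can be reproduced with a short sketch (the burst and priority values are our assumption, chosen to yield the stated 8.2 ms average; low number means high priority):

```python
def priority_schedule(processes):
    """Nonpreemptive priority scheduling, all processes arriving at time 0.
    processes: list of (name, burst, priority); lower number = higher priority.
    Ties are broken in FCFS (list) order, since Python's sort is stable."""
    waits, elapsed = {}, 0
    for name, burst, _prio in sorted(processes, key=lambda p: p[2]):
        waits[name] = elapsed
        elapsed += burst
    return waits

procs = [("P1", 10, 3), ("P2", 1, 1), ("P3", 2, 4), ("P4", 1, 5), ("P5", 5, 2)]
waits = priority_schedule(procs)
print(waits)                             # {'P2': 0, 'P5': 1, 'P1': 6, 'P3': 16, 'P4': 18}
print(sum(waits.values()) / len(waits))  # 8.2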
Priorities can be defined either internally or
externally. Internally defined priorities use
some measurable quantity or quantities to
compute the priority of a process. For example:
Time limits
Memory requirements
The number of open files, and
The ratio of average I/O burst to average CPU burst.
External priorities are set by criteria outside the operating system, such as:
The importance of the process
The type and amount of funds being paid for
computer use,
The department sponsoring the work, and
other, often political factors.
Priority scheduling can be either preemptive
or nonpreemptive. When a process arrives at
the ready queue, its priority is compared
with the priority of the currently running
process. A preemptive priority scheduling
algorithm will preempt the CPU if the
priority of the newly arrived process is
higher than the priority of the currently
running process.
A nonpreemptive priority scheduling algorithm will simply put the new process at the head of the ready queue.
Problem: indefinite blocking, or starvation. A low-priority process can be left waiting indefinitely for the CPU.
Solution: aging, which gradually increases the priority of processes that wait in the system for a long time.
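A minimal sketch of aging (the boost amount and the number of passes are our assumptions): periodically raise the priority of every waiting process, so that even a starved process eventually reaches the top.

```python
def age(ready_queue, boost=1):
    """One aging pass: raise (numerically lower) the priority of every
    waiting process. Lower number = higher priority; floor at 0."""
    for proc in ready_queue:
        proc["priority"] = max(0, proc["priority"] - boost)

queue = [{"name": "P1", "priority": 127}, {"name": "P2", "priority": 3}]
for _ in range(127):          # e.g. one aging pass per second
    age(queue)
print(queue[0]["priority"])   # 0 -- the starved process has reached top priority
```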
Round-Robin Scheduling
The round-robin (RR) scheduling algorithm is
designed especially for time-sharing
systems. It is similar to FCFS scheduling,
but preemption is added to enable the
system to switch between processes.
A small unit of time, called a time quantum or time slice, is defined. A time quantum is generally from 10 to 100 milliseconds in length.
The ready queue is treated as a circular
queue. The CPU scheduler goes around the
ready queue, allocating the CPU to each
process for a time interval of up to 1 time
quantum.
To implement RR scheduling, we keep the
ready queue as a FIFO queue of processes.
New processes are added to the tail of the
ready queue. The CPU scheduler picks the
first process from the ready queue, sets a timer to interrupt after 1 time quantum, and dispatches the process.
One of two things will then happen. The
process may have a CPU burst of less than 1
time quantum. In this case, the process itself
will release the CPU voluntarily. The
scheduler will then proceed to the next
process in the ready queue.
Otherwise, if the CPU burst of the currently
running process is longer than 1 time
quantum, the timer will go off and will cause
an interrupt to the operating system. A
context switch will be executed, and the
process will be put at the tail of the ready
queue. The CPU scheduler will then select
the next process in the ready queue.
The average waiting time under the RR policy is often long.
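The mechanism above can be sketched with a queue simulation (the bursts of 24, 3, and 3 ms and the 4 ms quantum are illustrative):

```python
from collections import deque

def round_robin(processes, quantum):
    """Round-robin for processes arriving at time 0.
    processes: list of (name, burst). Returns each process's waiting time."""
    queue = deque(processes)
    burst_of = dict(processes)
    time, done = 0, {}                     # done: name -> completion time
    while queue:
        name, remaining = queue.popleft()
        run = min(quantum, remaining)
        time += run
        if remaining > run:
            queue.append((name, remaining - run))  # preempted: back to the tail
        else:
            done[name] = time
    # waiting time = completion time - burst time (all arrivals at time 0)
    return {name: done[name] - burst_of[name] for name in burst_of}

print(round_robin([("P1", 24), ("P2", 3), ("P3", 3)], quantum=4))
# {'P1': 6, 'P2': 4, 'P3': 7}
```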
Multilevel Queue Scheduling
Another class of scheduling algorithms
has been created for situations in which
processes are easily classified into
different groups. For example, a common
division is made between foreground
(interactive) processes and background
(batch) processes. These two types of
processes have different response-time
requirements and so may have different
scheduling needs.
In addition, foreground processes may have
priority (externally defined) over
background processes. A multilevel queue
scheduling algorithm partitions the ready
queue into several separate queues.
The processes are permanently assigned to
one queue, generally based on some property
of the process, such as memory size,
process priority, or process type.
Each queue has its own scheduling
algorithm. For example, separate queues
might be used for foreground and
background processes.
The foreground queue might be scheduled
by an RR algorithm, while the background
queue is scheduled by an FCFS algorithm.
In addition, there must be scheduling among
the queues, which is commonly
implemented as fixed-priority preemptive
scheduling. For example, the foreground
queue may have absolute priority over the
background queue.
Let's look at an example of a multilevel queue
scheduling algorithm with five queues, listed
below in order of priority:
1. System processes
2. Interactive processes
3. Interactive editing processes
4. Batch processes
5. Student processes
Another possibility is to time-slice among the queues. Here, each queue gets a certain portion of the CPU time, which it can then schedule among its various processes. For instance, in the foreground-background queue example, the foreground queue can be given 80 percent of the CPU time for RR scheduling among its processes, whereas the background queue receives 20 percent of the CPU to give to its processes on an FCFS basis.
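One way to sketch the 80/20 split (the deterministic tick-to-queue mapping is our simplification; a real scheduler would account for CPU time actually consumed):

```python
def pick_queue(tick, shares):
    """Weighted time-slicing among queues: map each scheduler tick to a
    queue in proportion to its CPU share.
    shares: list of (queue_name, percent); the percents sum to 100."""
    slot = tick % 100
    for name, percent in shares:
        if slot < percent:
            return name
        slot -= percent

shares = [("foreground", 80), ("background", 20)]
counts = {"foreground": 0, "background": 0}
for tick in range(1000):
    counts[pick_queue(tick, shares)] += 1
print(counts)  # {'foreground': 800, 'background': 200}
```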
Multilevel Feedback Queue Scheduling
In a multilevel queue scheduling algorithm, processes are permanently assigned to a queue when they enter the system.
If there are separate queues for foreground
and background processes, for example,
processes do not move from one queue to
the other, since processes do not change
their foreground or background nature.
In contrast, the multilevel feedback queue scheduling algorithm allows a process to move between queues.
The idea is to separate processes according
to the characteristics of their CPU bursts. If
a process uses too much CPU time, it will be
moved to a lower-priority queue. This scheme
leaves I/O-bound and interactive processes in
the higher-priority queues.
In addition, a process that waits too long in
a lower-priority queue may be moved to a
higher-priority queue. This form of aging
prevents starvation.
For example, consider a multilevel feedback
queue scheduler with three queues,
numbered from 0 to 2 (Next Slide). The
scheduler first executes all processes in
queue 0. Only when queue 0 is empty will it
execute processes in queue 1. Similarly, processes in queue 2 will be executed only if queues 0 and 1 are empty.
Figure: multilevel feedback queues.
A process that arrives for queue 1 will
preempt a process in queue 2. A process in
queue 1 will in turn be preempted by a
process arriving for queue 0.
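The three-queue scheme can be sketched as follows (the quanta of 8 and 16 ms are illustrative; for simplicity all processes arrive at time 0, so preemption by new arrivals does not come into play):

```python
from collections import deque

def mlfq_run(processes, quanta=(8, 16)):
    """Three-queue MLFQ sketch: queues 0 and 1 are RR with the given
    quanta; queue 2 is FCFS (run to completion). A process that does not
    finish within its quantum is demoted to the next queue.
    processes: list of (name, burst), all arriving at time 0.
    Returns the order in which processes complete."""
    queues = [deque(processes), deque(), deque()]
    finished = []
    while any(queues):
        level = next(i for i, q in enumerate(queues) if q)  # highest nonempty
        name, remaining = queues[level].popleft()
        quantum = quanta[level] if level < 2 else remaining  # queue 2: FCFS
        if remaining > quantum:
            queues[level + 1].append((name, remaining - quantum))  # demote
        else:
            finished.append(name)
    return finished

# A short job finishes in queue 0; a long one sinks down to queue 2.
print(mlfq_run([("short", 5), ("long", 40)]))  # ['short', 'long']
```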
In general, a multilevel feedback queue
scheduler is defined by the following
parameters:
1. The number of queues
2. The scheduling algorithm for each queue
3. The method used to determine when to
upgrade a process to a higher priority
queue
4. The method used to determine when to demote a process to a lower priority queue
5. The method used to determine which queue a process will enter when that process needs service
Guaranteed Scheduling
If n users are logged in while you are
working, you will receive about 1/n of the
CPU power. Similarly, on a single-user system
with n processes running, all things being
equal, each one should get 1/n of the CPU
cycles. That seems fair enough.
To make good on this promise, the system
must keep track of how much CPU each
process has had since its creation. It then
computes the amount of CPU each one is
entitled to, namely the time since creation
divided by n. Since the amount of CPU time
each process has actually had is also known,
it is fairly straightforward to compute the
ratio of actual CPU time consumed to CPU
time entitled.
Rule:
For each process, calculate the ratio of the CPU time it has actually consumed to the CPU time it is entitled to:
ratio = actual CPU time consumed / CPU time entitled
The algorithm then runs the process with the lowest ratio until its ratio has moved above that of its closest competitor.
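The bookkeeping above can be sketched in a few lines (process names and times are illustrative):

```python
def guaranteed_pick(processes, now):
    """Guaranteed scheduling sketch: run the process with the lowest ratio
    of CPU time actually consumed to CPU time entitled.
    processes: list of (name, created_at, cpu_consumed); entitlement is
    time since creation divided by the number of processes."""
    n = len(processes)
    def ratio(p):
        _name, created_at, consumed = p
        entitled = (now - created_at) / n
        return consumed / entitled
    return min(processes, key=ratio)[0]

# Three processes created at t=0, observed at t=30: each is entitled to
# 30/3 = 10 time units; B has consumed the least relative to that.
procs = [("A", 0, 12), ("B", 0, 4), ("C", 0, 9)]
print(guaranteed_pick(procs, now=30))  # B
```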
Lottery Scheduling
The basic idea of lottery scheduling is to give
processes lottery tickets for various system
resources, such as CPU time. Whenever a
scheduling decision has to be made, a lottery
ticket is chosen at random, and the process
holding that ticket gets the resource.
More important processes can be given extra
tickets, to increase their odds of winning. If
there are 100 tickets outstanding, and one
process holds 20 of them, it will have a 20%
chance of winning each lottery. In the long
run, it will get about 20% of the CPU.
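A lottery draw can be sketched directly (the ticket counts are illustrative; over many draws, a process holding 20 of 100 tickets wins about 20% of the time):

```python
import random

def lottery_pick(tickets, rng=random):
    """Lottery scheduling sketch: tickets maps process -> ticket count.
    Draw one winning ticket uniformly at random; its holder runs next."""
    total = sum(tickets.values())
    draw = rng.randrange(total)
    for name, count in tickets.items():
        if draw < count:
            return name
        draw -= count

random.seed(1)
tickets = {"A": 20, "B": 50, "C": 30}
wins = sum(lottery_pick(tickets) == "A" for _ in range(10_000))
print(wins / 10_000)  # roughly 0.2
```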
Lottery scheduling has several interesting
properties. For example, if a new process
shows up and is granted some tickets, at the
very next lottery it will have a chance of
winning in proportion to the number of
tickets it holds. In other words, lottery
scheduling is highly responsive.
Cooperating processes may exchange tickets
if they wish. For example, when a client
process sends a message to a server process
and then blocks, it may give all of its tickets
to the server, to increase the chance of the
server running next. When the server is
finished, it returns the tickets so that the
client can run again.
Fair-Share Scheduling
So far we have assumed that each process is
scheduled on its own, without regard to who
its owner is. As a result, if user 1 starts up
nine processes and user 2 starts up one
process, with round robin or equal priorities,
user 1 will get 90% of the CPU and user 2
only 10% of it.
To prevent this situation, some systems take
into account which user owns a process
before scheduling it. In this model, each user
is allocated some fraction of the CPU and the
scheduler picks processes in such a way as to
enforce it. Thus if two users have each been
promised 50% of the CPU, they will each get
that, no matter how many processes they
have in existence.
As an example, consider a system with two
users, each of which has been promised 50%
of the CPU. User 1 has four processes, A, B,
C, and D, and user 2 has only one process, E.
If round-robin scheduling is used, a possible
scheduling sequence that meets all the
constraints is this one:
A E B E C E D E A E B E C E D
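The alternation above can be reproduced with a small sketch (the equal 50/50 split is assumed; user and process names are from the example):

```python
from itertools import cycle

def fair_share_sequence(users, length):
    """Fair-share sketch with equal per-user CPU shares: alternate among
    users round-robin, and within each user round-robin over that user's
    own processes. users: dict mapping user -> list of process names."""
    per_user = {u: cycle(procs) for u, procs in users.items()}
    turn = cycle(users)
    return [next(per_user[next(turn)]) for _ in range(length)]

# User 1 owns A, B, C, D; user 2 owns only E -- each user gets 50% of the CPU.
seq = fair_share_sequence({"user1": ["A", "B", "C", "D"], "user2": ["E"]}, 8)
print(" ".join(seq))  # A E B E C E D E
```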
On the other hand, if user 1 is entitled to
twice as much CPU time as user 2, we might
get
A B E C D E A B E C D E...
Numerous other possibilities exist, of course,
and can be exploited, depending on what the
notion of fairness is.

SchedulingAlgorithm_4.pdf

  • 1.
    Course Title Operating Systems& Systems Programming Course No. CSE - 3201 / CSE - 2205
  • 2.
    Operating Systems &Systems Programming What is Scheduling Process Scheduling is the method of the process manager handling the removal of an active process from the CPU and selecting another process based on a specific strategy/criteria. Scheduler A scheduler is a type of system software (or part of operating system) that allows to handle process scheduling.
  • 3.
    Operating Systems &Systems Programming Preemtive Scheduling Nonpreemtive Scheduling
  • 4.
    Operating Systems &Systems Programming Scheduling Algorithms First-Come, First-Served Scheduling By far the simplest CPU-scheduling algorithm is the first-come, first-served (FCFS) scheduling algorithm. The process that requests the CPU first is allocated the CPU first. The implementation of the FCFS policy is easily managed with a FIFO queue.
  • 5.
    Operating Systems &Systems Programming Scheduling Algorithms First-Come, First-Served Scheduling When a process enters the ready queue, its PCB is linked onto the tail of the queue. When the CPU is free, it is allocated to the process at the head of the queue. FCFS scheduling algorithm is nonpreemptive
  • 6.
    Operating Systems &Systems Programming Scheduling Algorithms First-Come, First-Served Scheduling On the negative side, the average waiting time under the FCFS policy is often quite long. Consider the following set of processes that arrive at time 0, with the length of the CPU burst given in milliseconds:
  • 7.
    Operating Systems &Systems Programming Scheduling Algorithms First-Come, First-Served Scheduling If the processes arrive in the order P1, P2, P3, and are served in FCFS order, we get the result shown in the following Gantt chart, which is a bar chart that illustrates a particular schedule, including the start and finish times of each of the participating processes:
  • 8.
    Operating Systems &Systems Programming Scheduling Algorithms First-Come, First-Served Scheduling Waiting time for P1 = 0; P2 = 24; P3 = 27 Average waiting time: (0 + 24 + 27)/3 = 17
  • 9.
    Operating Systems &Systems Programming Scheduling Algorithms First-Come, First-Served Scheduling
  • 10.
    Operating Systems &Systems Programming Scheduling Algorithms Shortest-Job-First (SJF) Scheduling A different approach to CPU scheduling is the shortest-job-first (SJF) scheduling algorithm. This algorithm associates with each process the length of the process's next CPU burst. When the CPU is available, it is assigned to the process that has the
  • 11.
    Operating Systems &Systems Programming Scheduling Algorithms Shortest-Job-First (SJF) Scheduling If the next CPU bursts of two processes are the same, FCFS scheduling is used to break the tie. More appropriate term for this scheduling method would be the shortest-next-CPU-burst algorithm, because scheduling depends on the length of the next CPU burst of a process, rather than its total length.
  • 12.
    Operating Systems &Systems Programming Scheduling Algorithms Shortest-Job-First (SJF) Scheduling A different approach to CPU scheduling is the shortest-job-first (SJF) scheduling algorithm. This algorithm associates with each process the length of the process's next CPU burst. When the CPU is available, it is assigned to the process The SJF algorithm can be either
  • 13.
    Operating Systems &Systems Programming Scheduling Algorithms Shortest-Job-First (SJF) Scheduling The SJF algorithm can be either preemptive or nonpreemptive. The choice arises when a new process arrives at the ready queue while a previous process is still executing. The next CPU burst of the newly arrived process may be shorter than what is left of the currently executing process.
  • 14.
    Operating Systems &Systems Programming Scheduling Algorithms Shortest-Job-First (SJF) Scheduling A preemptive SJF algorithm will preempt the currently executing process, whereas a nonpreemptive SJF algorithm will allow the currently running process to finish its CPU burst. Preemptive SJF scheduling is sometimes called shortest-remaining-time-first scheduling.
  • 15.
    Operating Systems &Systems Programming Scheduling Algorithms Shortest-Job-First (SJF) Scheduling
  • 16.
    Operating Systems &Systems Programming Scheduling Algorithms Shortest-Job-First (SJF) Scheduling
  • 17.
    Operating Systems &Systems Programming Scheduling Algorithms Priority Scheduling The SJF algorithm is a special case of the general priority scheduling algorithm. A priority is associated with each process, and the CPU is allocated to the process with the highest priority. Equal-priority processes are scheduled in FCFS order.
  • 18.
    Operating Systems &Systems Programming Scheduling Algorithms Priority Scheduling An SJF algorithm is simply a priority algorithm where the priority (p) is the inverse of the (predicted) next CPU burst. The larger the CPU burst, the lower the priority, and vice versa.
  • 19.
    Operating Systems &Systems Programming Scheduling Algorithms Priority Scheduling Note that we discuss scheduling in terms of high priority and low priority. Priorities are generally indicated by some fixed range of numbers, such as 0 to 7 or 0 to 4,095. However, there is no general agreement on whether 0 is the highest or lowest priority. Some systems use low numbers to represent low priority; others use low numbers for high priority. In this text, we assume that low numbers
  • 20.
    Operating Systems &Systems Programming Scheduling Algorithms Priority Scheduling As an example, consider the following set of processes, assumed to have arrived at time 0 in the order P1, P2, · · ·, P5, with the length of the CPU burst given in milliseconds:
  • 21.
    Operating Systems &Systems Programming Scheduling Algorithms Priority Scheduling Using priority scheduling, we would schedule these processes according to the following Gantt chart: The average waiting time is (0+1+6+16+18)/5= 8.2 milliseconds.
  • 22.
    Operating Systems &Systems Programming Scheduling Algorithms Priority Scheduling Priorities can be defined either internally or externally. Internally defined priorities use some measurable quantity or quantities to compute the priority of a process. For example Time limits Memory requirements The number of open files and The ratio of average I/0 burst to average CPU burst.
  • 23.
    Operating Systems &Systems Programming Scheduling Algorithms Priority Scheduling External priorities are set by criteria outside the operating system, Such as The importance of the process The type and amount of funds being paid for computer use, The department sponsoring the work, and other, often political factors.
  • 24.
    Operating Systems &Systems Programming Scheduling Algorithms Priority Scheduling Priority scheduling can be either preemptive or nonpreemptive. When a process arrives at the ready queue, its priority is compared with the priority of the currently running process. A preemptive priority scheduling algorithm will preempt the CPU if the priority of the newly arrived process is higher than the priority of the currently running process. A nonpreemptive priority scheduling
  • 25.
    Operating Systems &Systems Programming Scheduling Algorithms Priority Scheduling Priority scheduling can be either preemptive or nonpreemptive. When a process arrives at the ready queue, its priority is compared with the priority of the currently running process. A preemptive priority scheduling algorithm will preempt the CPU if the priority of the newly arrived process is higher than the priority of the currently running process. A nonpreemptive priority scheduling
  • 26.
    Operating Systems &Systems Programming Scheduling Algorithms Priority Scheduling Problem Indefinite blocking or Starvation. Solution Aging
  • 27.
    Operating Systems &Systems Programming Scheduling Algorithms Round-Robin Scheduling The round-robin (RR) scheduling algorithm is designed especially for time-sharing systems. It is similar to FCFS scheduling, but preemption is added to enable the system to switch between processes. A small unit of time, called a time quantum or time slice, is defined. A time quantum is generally from 10 to 100 milliseconds in length.
  • 28.
    Operating Systems &Systems Programming Scheduling Algorithms Round-Robin Scheduling The ready queue is treated as a circular queue. The CPU scheduler goes around the ready queue, allocating the CPU to each process for a time interval of up to 1 time quantum.
  • 29.
    Operating Systems &Systems Programming Scheduling Algorithms Round-Robin Scheduling To implement RR scheduling, we keep the ready queue as a FIFO queue of processes. New processes are added to the tail of the ready queue. The CPU scheduler picks the first process from the ready queue, sets a timer to interrupt after 1 time quantum
  • 30.
    Operating Systems &Systems Programming Scheduling Algorithms Round-Robin Scheduling One of two things will then happen. The process may have a CPU burst of less than 1 time quantum. In this case, the process itself will release the CPU voluntarily. The scheduler will then proceed to the next process in the ready queue. .
  • 31.
    Operating Systems &Systems Programming Scheduling Algorithms Round-Robin Scheduling Otherwise, if the CPU burst of the currently running process is longer than 1 time quantum, the timer will go off and will cause an interrupt to the operating system. A context switch will be executed, and the process will be put at the tail of the ready queue. The CPU scheduler will then select the next process in the ready queue. The average waiting time under the RR
  • 32.
    Operating Systems &Systems Programming Scheduling Algorithms Round-Robin Scheduling
  • 33.
    Operating Systems &Systems Programming Scheduling Algorithms Round-Robin Scheduling
  • 34.
    Operating Systems &Systems Programming Scheduling Algorithms Round-Robin Scheduling
  • 35.
    Operating Systems &Systems Programming Scheduling Algorithms Multilevel Queue Scheduling Another class of scheduling algorithms has been created for situations in which processes are easily classified into different groups. For example, a common division is made between foreground (interactive) processes and background (batch) processes. These two types of processes have different response-time requirements and so may have different scheduling needs.
  • 36.
    Operating Systems &Systems Programming Scheduling Algorithms Multilevel Queue Scheduling In addition, foreground processes may have priority (externally defined) over background processes. A multilevel queue scheduling algorithm partitions the ready queue into several separate queues. The processes are permanently assigned to one queue, generally based on some property of the process, such as memory size, process priority, or process type.
  • 37.
    Operating Systems &Systems Programming Scheduling Algorithms Multilevel Queue Scheduling Each queue has its own scheduling algorithm. For example, separate queues might be used for foreground and background processes. The foreground queue might be scheduled by an RR algorithm, while the background queue is scheduled by an FCFS algorithm.
  • 38.
    Operating Systems &Systems Programming Scheduling Algorithms Multilevel Queue Scheduling In addition, there must be scheduling among the queues, which is commonly implemented as fixed-priority preemptive scheduling. For example, the foreground queue may have absolute priority over the background queue.
  • 39.
    Operating Systems &Systems Programming Scheduling Algorithms Multilevel Queue Scheduling Let's look at an example of a multilevel queue scheduling algorithm with five queues, listed below in order of priority: 1. System processes 2. Interactive processes 3. Interactive editing processes 4. Batch processes 5. Student processes
  • 40.
    Operating Systems &Systems Programming Scheduling Algorithms Multilevel Queue Scheduling
  • 41.
    Operating Systems &Systems Programming Scheduling Algorithms Multilevel Queue Scheduling Issues about time-slice among the queues. Here, each queue gets a certain portion of the CPU time, which it can then schedule among its various processes. For instance, in the foreground-background queue example, the foreground queue can be given 80 percent of the CPU time for RR scheduling among its processes, whereas the background queue receives 20 percent of the CPU to give to its processes on an FCFS
  • 42.
    Operating Systems &Systems Programming Scheduling Algorithms Multilevel Feedback Queue Scheduling In multilevel queue scheduling algorithm, processes are permanently assigned to a queue when they enter the system. If there are separate queues for foreground and background processes, for example, processes do not move from one queue to the other, since processes do not change their foreground or background nature.
  • 43.
    Operating Systems & Systems Programming Scheduling Algorithms Multilevel Feedback Queue Scheduling In contrast, the multilevel feedback queue scheduling algorithm allows a process to move between queues. The idea is to separate processes according to the characteristics of their CPU bursts. If a process uses too much CPU time, it will be moved to a lower-priority queue. This scheme leaves I/O-bound and interactive processes in the higher-priority queues.
  • 44.
    Operating Systems & Systems Programming Scheduling Algorithms Multilevel Feedback Queue Scheduling In addition, a process that waits too long in a lower-priority queue may be moved to a higher-priority queue. This form of aging prevents starvation.
  • 45.
    Operating Systems & Systems Programming Scheduling Algorithms Multilevel Feedback Queue Scheduling For example, consider a multilevel feedback queue scheduler with three queues, numbered from 0 to 2 (Next Slide). The scheduler first executes all processes in queue 0. Only when queue 0 is empty will it execute processes in queue 1. Similarly, processes in queue 2 will be executed only if queues 0 and 1 are empty.
  • 46.
    Operating Systems & Systems Programming Scheduling Algorithms Multilevel Feedback Queue Scheduling Multilevel feedback queues.
  • 47.
    Operating Systems & Systems Programming Scheduling Algorithms Multilevel Feedback Queue Scheduling A process that arrives for queue 1 will preempt a process in queue 2. A process in queue 1 will in turn be preempted by a process arriving for queue 0.
  • 48.
    Operating Systems & Systems Programming Scheduling Algorithms Multilevel Feedback Queue Scheduling In general, a multilevel feedback queue scheduler is defined by the following parameters: 1. The number of queues 2. The scheduling algorithm for each queue 3. The method used to determine when to upgrade a process to a higher-priority queue 4. The method used to determine when to demote a process to a lower-priority queue
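The three-queue example can be sketched with those parameters made explicit. This is a simplified model: the quantum values (8 and 16) and demote-on-full-quantum rule follow the usual textbook example, the burst lengths are invented, and preemption by new arrivals and aging are not modeled.

```python
from collections import deque

# Sketch of a three-queue multilevel feedback scheduler:
# queue 0 (quantum 8), queue 1 (quantum 16), queue 2 (FCFS).
# A process that uses its whole quantum is demoted one level.
# Burst lengths are illustrative.

queues = [deque(), deque(), deque()]
quanta = [8, 16, None]          # None = FCFS (run to completion)

def admit(name, burst):
    queues[0].append([name, burst])   # new work always enters queue 0

def run():
    """Run until all queues are empty; return the execution trace."""
    trace = []
    while any(queues):
        level = next(i for i, q in enumerate(queues) if q)
        name, burst = queues[level].popleft()
        q = quanta[level]
        slice_ = burst if q is None else min(q, burst)
        trace.append((name, level, slice_))
        burst -= slice_
        if burst > 0:                                    # quantum expired:
            queues[min(level + 1, 2)].append([name, burst])  # demote
    return trace

admit("A", 5)     # short job: finishes within queue 0's quantum
admit("B", 30)    # long job: demoted to queue 1, then queue 2
trace = run()
print(trace)
```

The short job A finishes in queue 0, while the CPU-hungry job B drifts down to the FCFS queue, which is exactly the separation by CPU-burst behavior described above.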
  • 49.
    Operating Systems & Systems Programming Scheduling Algorithms Guaranteed Scheduling If n users are logged in while you are working, you will receive about 1/n of the CPU power. Similarly, on a single-user system with n processes running, all things being equal, each one should get 1/n of the CPU cycles. That seems fair enough.
  • 50.
    Operating Systems & Systems Programming Scheduling Algorithms Guaranteed Scheduling To make good on this promise, the system must keep track of how much CPU each process has had since its creation. It then computes the amount of CPU each one is entitled to, namely the time since creation divided by n. Since the amount of CPU time each process has actually had is also known, it is fairly straightforward to compute the ratio of actual CPU time consumed to CPU time entitled.
  • 51.
    Operating Systems & Systems Programming Scheduling Algorithms Guaranteed Scheduling Rule: for each process, compute the ratio of actual CPU time consumed to the CPU time it is entitled to, i.e. actual CPU time consumed / CPU time entitled. The process with the lowest ratio is run next, until its ratio has moved above that of its closest competitor.
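The rule above reduces to a few lines of arithmetic. A minimal sketch (process names, CPU times, and the assumption that all processes were created at the same moment are illustrative, not from the slides):

```python
# Guaranteed scheduling sketch: compute each process's ratio of
# actual CPU time consumed to CPU time entitled, and pick the
# process with the lowest ratio to run next.

def pick_next(consumed, elapsed, n):
    """consumed: {name: cpu_seconds_used}; elapsed: time since
    creation (assumed equal for all processes here); n: process count."""
    entitled = elapsed / n                       # each is due 1/n of elapsed
    ratios = {name: used / entitled for name, used in consumed.items()}
    return min(ratios, key=ratios.get), ratios

consumed = {"A": 6.0, "B": 3.0, "C": 1.5}        # CPU seconds used so far
winner, ratios = pick_next(consumed, elapsed=9.0, n=3)
print(winner, ratios)
```

Here each process is entitled to 3.0 seconds; C has consumed the smallest fraction of its entitlement (ratio 0.5), so it runs next.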
  • 52.
    Operating Systems & Systems Programming Guaranteed Scheduling (worked-example slides; figures not reproduced)
  • 62.
    Operating Systems & Systems Programming Scheduling Algorithms Lottery Scheduling The basic idea of lottery scheduling is to give processes lottery tickets for various system resources, such as CPU time. Whenever a scheduling decision has to be made, a lottery ticket is chosen at random, and the process holding that ticket gets the resource.
  • 63.
    Operating Systems & Systems Programming Scheduling Algorithms Lottery Scheduling More important processes can be given extra tickets, to increase their odds of winning. If there are 100 tickets outstanding, and one process holds 20 of them, it will have a 20% chance of winning each lottery. In the long run, it will get about 20% of the CPU.
  • 64.
    Operating Systems & Systems Programming Scheduling Algorithms Lottery Scheduling Lottery scheduling has several interesting properties. For example, if a new process shows up and is granted some tickets, at the very next lottery it will have a chance of winning in proportion to the number of tickets it holds. In other words, lottery scheduling is highly responsive.
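A single lottery draw is easy to sketch. The 100-tickets/20-tickets numbers echo the slide's example; the process names and ticket split are made up, and repeating the draw shows the long-run proportionality claimed above.

```python
import random

# Lottery scheduling sketch: each process holds some tickets, and the
# holder of a uniformly random winning ticket gets the CPU. Ticket
# counts here are illustrative (100 tickets outstanding in total).

tickets = {"A": 20, "B": 30, "C": 50}

def hold_lottery(tickets, rng=random):
    draw = rng.randrange(sum(tickets.values()))   # winning ticket 0..99
    for proc, count in tickets.items():
        if draw < count:
            return proc
        draw -= count                             # skip this holder's block

random.seed(1)                                    # reproducible demo
wins = {p: 0 for p in tickets}
for _ in range(10_000):
    wins[hold_lottery(tickets)] += 1
print(wins)   # win counts roughly proportional to ticket counts
```

Over many draws, A (20 of 100 tickets) wins close to 20% of the lotteries, B close to 30%, and C close to 50%, without the scheduler ever computing priorities explicitly.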
  • 65.
    Operating Systems & Systems Programming Scheduling Algorithms Lottery Scheduling Cooperating processes may exchange tickets if they wish. For example, when a client process sends a message to a server process and then blocks, it may give all of its tickets to the server, to increase the chance of the server running next. When the server is finished, it returns the tickets so that the client can run again.
  • 66.
    Operating Systems & Systems Programming Scheduling Algorithms Fair-Share Scheduling So far we have assumed that each process is scheduled on its own, without regard to who its owner is. As a result, if user 1 starts up nine processes and user 2 starts up one process, with round robin or equal priorities, user 1 will get 90% of the CPU and user 2 only 10% of it.
  • 67.
    Operating Systems & Systems Programming Scheduling Algorithms Fair-Share Scheduling To prevent this situation, some systems take into account which user owns a process before scheduling it. In this model, each user is allocated some fraction of the CPU and the scheduler picks processes in such a way as to enforce it. Thus if two users have each been promised 50% of the CPU, they will each get that, no matter how many processes they have in existence.
  • 68.
    Operating Systems & Systems Programming Scheduling Algorithms Fair-Share Scheduling As an example, consider a system with two users, each of which has been promised 50% of the CPU. User 1 has four processes, A, B, C, and D, and user 2 has only one process, E. If round-robin scheduling is used, a possible scheduling sequence that meets all the constraints is this one: A E B E C E D E A E B E C E D
  • 69.
    Operating Systems & Systems Programming Scheduling Algorithms Fair-Share Scheduling On the other hand, if user 1 is entitled to twice as much CPU time as user 2, we might get A B E C D E A B E C D E... Numerous other possibilities exist, of course, and can be exploited, depending on what the notion of fairness is.
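The 50/50 example above can be reproduced with a small sketch: the scheduler alternates between the two users (enforcing the promised shares) and round-robins within each user's own process list. The process names come from the slides; the generator structure is just one way to express it.

```python
from collections import deque
from itertools import islice

# Fair-share scheduling sketch for the example above: user 1 owns
# A, B, C, D and user 2 owns only E; each user is promised 50% of
# the CPU, so users alternate and each round-robins its own processes.

users = [deque(["A", "B", "C", "D"]),   # user 1's processes
         deque(["E"])]                  # user 2's processes

def fair_share():
    while True:
        for procs in users:             # each user gets one slot in turn
            p = procs.popleft()
            procs.append(p)             # round robin within the user
            yield p

sequence = list(islice(fair_share(), 15))
print(" ".join(sequence))
```

This yields exactly the sequence shown on the slide: user 2's single process E runs every other slot, so each user receives 50% of the CPU regardless of how many processes it owns. Giving user 1 two slots per round instead of one would produce the 2:1 sequence from the second example.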