Uniprocessor
Scheduling
CSci 430, Spring 2018
Texas A&M University – Commerce
Derek Harter
Introduction
to Operating
System
Concepts
Types of Processor
Scheduling
Objective: Be able to explain the
differences between long-, medium- and
short-term scheduling policies.
Processor Scheduling
The aim of processor scheduling is to assign processes to be executed by
the processor in a way that meets system objectives, such as response time,
throughput, and processor efficiency
Scheduling is broken down into three separate functions:
– Long-term
– Medium-term
– Short-term
Table 9.1
Types of Scheduling
Scheduling and Process State
Transitions
Figure 9.2
Nesting of
Scheduling
Functions
(Referencing figure 3.9b)
Queuing
Diagram
Long-Term Scheduler
Determines which programs are admitted to the system for processing
Controls the degree of multiprogramming: the more processes that are
created, the smaller the percentage of time that each process can be executed
May limit the degree of multiprogramming to provide satisfactory service to
the current set of processes
Medium-Term
Scheduling
Part of the swapping function
Swapping-in decisions are based on the need to manage the degree of
multiprogramming and on the memory requirements of the
swapped-out processes
Short-Term Scheduling
Known as the dispatcher
Makes the fine-grained decision of which process to execute next
Invoked when an event occurs that may lead to the blocking of the current
process or that may provide an opportunity to preempt a currently running
process in favor of another
Scheduling Algorithms
Objective: Be able to assess the
performance of different scheduling policies
(for the short term scheduler or dispatcher).
Short Term Scheduling
Criteria
The main objective of short-term scheduling is to allocate processor time
to optimize certain aspects of system behavior
A set of criteria is needed to evaluate the scheduling policy
– User vs. system oriented criteria
Short-Term Scheduling
Criteria: Performance
- Performance-related criteria are quantitative (can be measured)
- Examples: response time and throughput
- Criteria not related to performance are qualitative or not readily measured
- Example: predictability
Table 9.2
Scheduling
Criteria
Priority
Queuing
Alternative Scheduling
Policies
Selection Function
Determines which process, among ready processes, is selected next for
execution
May be based on priority, resource requirements, or the execution
characteristics of the process
If based on execution characteristics, then important quantities are:
– w = time spent in system so far, waiting
– e = time spent in execution so far
– s = total service time required by the process, including e;
generally, this quantity must be estimated or supplied by the user
Decision Mode
Specifies the instants in time at which the selection function is exercised
Two categories: nonpreemptive and preemptive
Nonpreemptive vs
Preemptive
Nonpreemptive: once a process is in the running state, it will continue
until it terminates or blocks itself for I/O
Preemptive: the currently running process may be interrupted and moved to
the ready state by the OS; preemption may occur when a new process
arrives, on an interrupt, or periodically
Alternative Scheduling
Policies
Process Scheduling
Example
Comparison of
Scheduling
Policies
Table 9.5
Comparison of
Scheduling
Policies
First-Come-First-
Served (FCFS)
Also known as first-in-first-out (FIFO) or a strict queuing scheme
When the current process ceases to execute, the longest-waiting process in
the Ready queue is selected
Performs much better for long processes than short ones
Tends to favor processor-bound processes over I/O-bound processes
FCFS: Penalizes Short
Jobs
● FCFS performs much better for long processes
than short ones.
Round Robin
Uses preemption based on a clock
Also known as time slicing, because each process is given a slice of time
before being preempted
The principal design issue is the length of the time quantum, or slice, to
be used
Particularly effective in a general-purpose time-sharing system or
transaction processing system
One drawback is its relative treatment of processor-bound and I/O-bound
processes
Round Robin
Effect of Size of Preemption Time Quantum
Figure 9.6a
Figure 9.6b
Effect of Size of Preemption Time
Quantum
Virtual Round
Robin (VRR)
Shortest Process Next
(SPN)
A nonpreemptive policy in which the process with the shortest expected
processing time is selected next
A short process will jump to the head of the queue
Possibility of starvation for longer processes
One difficulty is the need to know, or at least estimate, the required
processing time of each process
If the programmer's estimate is substantially under the actual running
time, the system may abort the job
Calculating Service Time Interactively
● One difficulty with SPN is the need to know or estimate the required
processing time (service time) of each process
● For a batch job, this can be supplied by the programmer
● For interactive jobs, the OS can keep a running average of each "burst":
S_{n+1} = (1/n) * (T_1 + T_2 + ... + T_n)
● Rewrite summation (so only the previous average needs to be stored):
S_{n+1} = (1/n) * T_n + ((n-1)/n) * S_n
● Exponential averaging (gives more weight to recent observations):
S_{n+1} = alpha * T_n + (1 - alpha) * S_n, with 0 < alpha < 1
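The exponential average above only takes a couple of lines of bookkeeping per process. The following is a small stand-alone sketch of that update (my own illustration; the struct and variable names are made up, not from the textbook or the course starter code):

#include <iostream>

// Hypothetical per-process bookkeeping for an SPN-style burst estimate.
struct BurstEstimate
{
  double alpha;     // weight given to the most recent observation, 0 < alpha < 1
  double estimate;  // S_{n+1}, the predicted length of the next CPU burst
};

// Fold one observed burst length T_n into the running estimate:
// S_{n+1} = alpha * T_n + (1 - alpha) * S_n
void updateEstimate(BurstEstimate& b, double observedBurst)
{
  b.estimate = b.alpha * observedBurst + (1.0 - b.alpha) * b.estimate;
}

int main()
{
  BurstEstimate b = {0.8, 10.0};          // initial guess S_1 = 10
  double bursts[] = {6.0, 4.0, 6.0, 4.0}; // observed burst lengths
  for (double t : bursts)
  {
    updateEstimate(b, t);
    std::cout << "next burst estimate: " << b.estimate << std::endl;
  }
  return 0;
}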
Exponential Smoothing
Coefficients
Use Of Exponential Averaging
Use Of Exponential Averaging
Shortest Remaining
Time (SRT)
A preemptive version of SPN: the scheduler always chooses the process
that has the shortest expected remaining processing time
Risk of starvation of longer processes
Should give superior turnaround time performance to SPN because a short
job is given immediate preference over a running longer job
Highest Response
Ratio Next (HRRN)
Chooses the next process with the greatest response ratio, R = (w + s) / s
Attractive because it accounts for the age of the process
While shorter jobs are favored, aging without service increases the ratio
so that a longer process will eventually get past competing shorter jobs
Feedback Scheduling
If we have no knowledge of the relative length (service time) of
processes, we can't use SPN, SRT or HRRN.
Instead, we can penalize jobs that run longer, basing decisions on the
execution time spent so far.
Feedback does this using preemptive (time-quantum) scheduling with a
dynamic priority mechanism.
Each time a process is preempted it is demoted to a lower
priority. Thus longer running processes "age" and become low
priority, and new short processes don't age enough before they
finish execution.
Feedback
Scheduling
Feedback
Performance
Performance Comparison
Objective: Introduction to queuing theory and modeling for comparative
analysis of scheduling (and other) algorithms.
Performance
Comparison
Any scheduling discipline that chooses the next item to be served
independent of service time obeys the relationship: Tr/Ts = 1 / (1 − ρ),
where ρ is processor utilization
Normalized Turnaround Time as a
Function of Processor Utilization
Table 9.6
Formulas for Single-Server Queues with Two Priority Categories
Overall Normalized Response
Time
Normalized Response Time for
Shorter Processes
Normalized Response
Time for Longer Processes
Results
Simulation
Alternative Scheduling
Policies
Fair-Share Scheduling
Objective: Look at scheduling pools
(processes organized as threads).
Fair-Share Scheduling
Scheduling decisions are based on the process sets (groups) that processes
belong to. The scheduler grants fewer processor resources to users who have
had more than their fair share and more to those who have had less than
their fair share
Fair-Share Scheduling
Fair-Share
Scheduler
Traditional UNIX
Scheduling
Objective: Understand the scheduling
technique used in traditional UNIX systems.
Traditional UNIX
Scheduling
Designed to provide good response time for interactive users in a time-sharing
interactive environment, while ensuring that low-priority background jobs do
not starve
Employs multilevel feedback using round robin within each of the priority queues
Uses one-second preemption
Scheduling Formula
Bands
Priorities are grouped into bands, used to optimize access to block devices
and to allow the operating system to respond quickly to system calls
In decreasing order of priority, the bands are: swapper, block I/O device
control, file manipulation, character I/O device control, user processes
Example of
Traditional
UNIX Process
Scheduling
Summary
The OS must make three types of scheduling decisions with respect to the
execution of processes:
– Long-term – determines when new processes are admitted to the system
– Medium-term – part of the swapping function and determines when a
program is brought into main memory so that it may be executed
– Short-term – determines which ready process will be executed next by
the processor
From the user's point of view, response time is generally the most important
characteristic of a system; from a system point of view, throughput or
processor utilization is important
Chapter 10
Multiprocessor
and Real-Time
Scheduling
Seventh Edition
By William Stallings
Operating
Systems:
Internals
and Design
Principles
Classifications of
Multiprocessor Systems
● Loosely coupled or distributed multiprocessor,
or cluster
● Functionally specialized processors
● Tightly coupled multiprocessor
Synchronization Granularity
and Processes
Independent
Parallelism
No explicit synchronization among processes
Each represents a separate, independent application or job
Typical use: a time-sharing system
Coarse and Very
Coarse-Grained
Parallelism
Synchronization among processes, but at a very gross level
Good for concurrent processes running on a multiprogrammed uniprocessor;
can be supported on a multiprocessor with little or no change to user software
Medium-Grained
Parallelism
A single application can be implemented as a collection of threads within a
single process
The programmer must explicitly specify the potential parallelism of an
application
There needs to be a high degree of coordination and interaction among the
threads of an application, leading to a medium-grain level of synchronization
Because the threads of an application interact so frequently, scheduling
decisions concerning one thread may affect the performance of the entire
application
Fine-Grained
Parallelism
Represents a much more complex use of parallelism than is found in the use
of threads
A highly specialized and fragmented area, with many different approaches
Design Issues
● Assignment of
processes to
processors
● Use of
multiprogramming on
individual processors
● Actual dispatching of
a process
How these issues are resolved depends on the degree of granularity of
applications and the number of processors available
Assignment of
Processes to Processors
Treat processors as a pooled resource and assign processes to processors on
demand (assume a uniform/symmetric architecture)
– Static assignment: a process stays on the processor it is assigned to for
its lifetime
– Dynamic assignment: a process can change processors
Advantages of static assignment: less overhead, local caching, gang scheduling
Disadvantage of static assignment: one processor can be idle, with an empty
queue, while another processor has a backlog
A solution is dynamic load balancing
Assignment of
Processes to
Processors
Both dynamic and static methods require some way of assigning a process to
a processor
Master/Slave
Architecture
Key kernel functions always run on a particular processor (the master)
Simple, requiring little enhancement to a uniprocessor multiprogramming
operating system
Disadvantages: failure of the master brings down the whole system, and the
master can become a performance bottleneck because one processor has
control of all memory and I/O resources
Peer Architecture
The kernel can execute on any processor; each processor does
self-scheduling from the pool of available processes
Process Scheduling
Usually processes are not dedicated to processors
A single queue is used for all processors, or multiple queues based on priority
Effectively a multi-server queuing architecture
Thread Scheduling
With threads, the unit of execution is separated from the rest of the
definition of a process
An application can be a set of threads that cooperate and execute
concurrently in the same address space
On a uniprocessor, threads can be used as a program structuring aid and to
overlap I/O with processing
On a multiprocessor, threads can be used to exploit true parallelism in an
application
Dramatic gains in performance are possible in multi-processor systems
Small differences in thread management and scheduling can have an impact on
applications that require significant interaction among threads
Approaches to
Thread Scheduling
Four general approaches are used (listed below):
– In load sharing, processes are not assigned to a particular processor
– In gang scheduling, a set of related threads is scheduled to run on a set
of processors at the same time, on a one-to-one basis
– Dedicated processor assignment provides implicit scheduling defined by
the assignment of threads to processors
– With dynamic scheduling, the number of threads in a process can be
altered during the course of execution
● Load Sharing
● Gang Scheduling
● Dedicated (static) processor assignment
● Dynamic scheduling
Load Sharing
Simplest approach; carries over most directly from a uniprocessor environment
– Load is distributed evenly across processors
– No centralized scheduler is required
– The global queue can be organized and accessed using any scheduling
scheme (ch 9), most commonly first-come-first-served
Disadvantages of
Load Sharing
The central queue occupies a region of memory that must be accessed in a
manner that enforces mutual exclusion, and can become a bottleneck
Preempted threads are unlikely to resume execution on the same processor,
so caching becomes less efficient
If all threads are treated as a common pool of threads, it is unlikely that
all of the threads of a program will gain access to processors at the same
time, which can hurt performance when the threads must interact closely
Gang Scheduling
Simultaneous scheduling of the threads that make up a single process
– If closely related processes execute in parallel, synchronization blocking
may be reduced
– Scheduling overhead may be reduced since a single scheduling decision
affects a number of processors and processes/threads
Useful for medium-grained to fine-grained parallel applications whose
performance severely degrades when any part of the application is not
running while other parts are ready to run
Figure 10.2
Example of Scheduling Groups
With Four and One Threads
Dedicated Processor
Assignment
When an application is scheduled, each of its threads is assigned to a
processor that remains dedicated to that thread until the application runs
to completion
If a thread of an application is blocked waiting for I/O or for
synchronization with another thread, then that thread's processor remains idle
In a highly parallel system, with tens or hundreds of processors, processor
utilization is no longer so important as a metric for effectiveness or
performance
The total avoidance of process switching during the lifetime of a program
should result in a substantial speedup of that program
Figure 10.3
Application Speedup as a Function of Number of Threads
Dynamic Scheduling
For some applications, it is possible to provide language and system tools
that permit the number of threads in the process to be altered dynamically
This would allow the operating system to adjust the load to improve utilization
Both the operating system and the application are involved in making
scheduling decisions; the OS's scheduling responsibility is primarily
limited to processor allocation
This approach is superior to gang scheduling or dedicated processor
assignment for applications that can take advantage of it
Real-Time Systems
The operating system, and in particular the scheduler, is perhaps the most
important component
Correctness depends not only on the logical result of the computation but
also on the time at which the results are produced
Tasks or processes attempt to control or react to events that take place
in the outside world
These events occur in "real time" and tasks must be able to keep up with them
Hard and Soft
Real-Time Tasks
Hard real-time task: one that must meet its deadline; otherwise it will
cause unacceptable damage or a fatal error to the system
Soft real-time task: has an associated deadline that is desirable but not
mandatory; it still makes sense to schedule and complete the task even if
it has passed its deadline
Periodic and
Aperiodic
Tasks
A periodic task's requirement may be stated as "once per period T" or
"exactly T units apart"
An aperiodic task has a deadline by which it must finish or start, or may
have a constraint on both start and finish time
Characteristics of
Real Time Systems
Determinism
Concerned with how long an operating system delays before acknowledging an
interrupt
Operations are performed at fixed, predetermined times or within
predetermined time intervals
When multiple processes are competing for resources and processor time, no
system will be fully deterministic
Responsiveness
Concerned with how long, after acknowledgment, it takes an operating system
to service the interrupt
Together with determinism, makes up the response time to external events,
which is critical for real-time systems that must meet timing requirements
imposed by individuals, devices, and data flows external to the system
User Control
The user is given much broader control in a real-time operating system than
in ordinary operating systems, including fine-grained control over task
priority and the ability to distinguish between hard and soft real-time
tasks and to specify relative priorities within each class
Reliability
More important in real-time systems than in non-real-time systems
Real-time systems respond to and control events in real time, so loss or
degradation of performance may have catastrophic consequences such as
financial loss, major equipment damage, and even loss of life
Fail-Soft Operation
The ability of a system to fail in such a way as to preserve as much
capability and data as possible
A real-time system is stable if the system will meet the deadlines of its
most critical, highest-priority tasks even if some less critical task
deadlines are not always met
Real-Time Scheduling of Processes
Real-Time Scheduling
Classes of Real-Time
Scheduling Algorithms
Deadline Scheduling
Real-time operating systems are designed with the objective of starting
real-time tasks as rapidly as possible, and emphasize rapid interrupt
handling and task dispatching
Real-time applications are generally not concerned with sheer speed but
rather with completing (or starting) tasks at the most valuable times
Priorities alone provide only a crude tool and do not capture the
requirement of completion (or initiation) at the most valuable time
Information Used for
Deadline Scheduling
Table 10.2
Execution Profile of Two Periodic
Tasks
Figure 10.5 Scheduling of Periodic Real-Time Tasks With
Completion Deadlines (Based on Table 10.2)
Figure 10.6 Scheduling of Aperiodic Real-Time Tasks With
Starting Deadlines
Table 10.3
Execution Profile of Five Aperiodic
Tasks
Figure 10.7
Rate Monotonic Scheduling
Periodic Task Timing Diagram
Figure 10.8
Value of the RMS Upper Bound
Table 10.4
Priority Inversion
Can occur in any priority-based preemptive scheduling scheme
Particularly relevant in the context of real-time scheduling
The best-known instance involved the Mars Pathfinder mission
Occurs when circumstances within the system force a higher priority task to
wait for a lower priority task
Unbounded Priority
Inversion
Priority Inheritance
Linux Scheduling
Three scheduling classes:
– SCHED_FIFO: first-in-first-out real-time threads
– SCHED_RR: round-robin real-time threads
– SCHED_OTHER: other, non-real-time threads
Within each class, multiple priorities may be used
Linux
Real-Time
Scheduling
Non-Real-Time
Scheduling
The Linux 2.4 scheduler for the SCHED_OTHER class did not scale well with
an increasing number of processors and processes
Linux 2.6 uses a new priority scheduler known as the O(1) scheduler, with
separate scheduling data structures for each processor in the system
The time to select the appropriate process and assign it to a processor is
constant, regardless of the load on the system or number of processors
Linux Scheduling Data Structures
Figure 10.11
UNIX SVR4
Scheduling
A complete overhaul of the scheduling algorithm used in earlier UNIX
systems, with the introduction of a set of 160 priority levels divided into
three priority classes
SVR4 Priority Classes
Figure 10.12
SVR4 Priority Classes
SVR4 Dispatch Queues
Figure 10.13
UNIX FreeBSD
Scheduler
SMP and Multicore
Support
Design goals for scheduling on an SMP or multicore system:
– Address the processor affinity issue on multicore systems: processor
affinity means a scheduler that only migrates a thread when necessary to
avoid having an idle processor
– Provide better support for multithreading on multicore systems
– Improve the performance of the scheduling algorithm so that it is no
longer a function of the number of threads in the system
Windows
Thread
Dispatching
Priorities
Figure 10.14
Interactivity Scoring
A thread is considered to be interactive if the ratio of its voluntary
sleep time versus its runtime is below a certain threshold
The threshold is defined in the scheduler code and is not configurable
Threads whose sleep time exceeds their run time score in the lower half of
the range of interactivity scores; threads whose run time exceeds their
sleep time score in the upper half of the range of interactivity scores
Thread Migration
When possible, a ready thread is scheduled onto the last processor that it
ran on, to take advantage of data still in that processor's local cache
Windows
Scheduling
Priorities in Windows are organized into two bands or classes: the
real-time priority class and the variable priority class
Threads requiring immediate attention, such as communications functions and
real-time tasks, are in the real-time class
Windows Priority Relationship
Figure 10.15
Linux Virtual Machine Process Scheduling
Summary
With a tightly coupled multiprocessor, multiple processors have access to
the same main memory; in this environment, the performance differences among
the various scheduling algorithms are less significant than on a uniprocessor
A real-time process is one that is executed in connection with some process
or function or set of events external to the computer system and that must
meet one or more deadlines to interact effectively and correctly with the
external environment
A real-time operating system is one that is capable of managing real-time
processes; scheduling algorithms that operate according to relative
deadlines are appropriate in this context
Operating Systems, Stallings
Chapter 9 Notes
CSci 430 Spring 2019
Overview
In a multiprogramming system, multiple processes exist
concurrently in main
memory. As we have seen and discussed many times, it is
imperative that
we support multiple concurrent processes in main memory, so
that we can
maximize the utilization of the CPU and other peripherals and
devices in the
computing system. However, when we have (many) more
processes than we
have CPUs to execute them, this necessitates that we need to
periodically
switch between the processes in some manner. This is especially
true because
many processes perform operations that will cause them to have
to wait long
periods (from the CPU's point of view), and if the CPU does
nothing while
the process is waiting, the resource is wasted, and useful work
that could
be done will not be performed. We need advanced memory
management
techniques, as we studied in the previous 2 chapters, to ensure
that we have
a sufficiently large number of processes available and able to be
chosen from.
But, until now, we have not looked at the issues and methods we
should use
in order to select among available processes in an efficient
manner in order
to utilize CPU and other resources wisely.
In this chapter we will begin examining processor scheduling
and management issues. In particular, we will break the issue up into 3 different time
frames, short, medium and long-term scheduling concerns and
methods. In
chapter 9, which you are to read through this week and next, we
restrict our
discussion to systems with a single CPU in terms of the process
scheduling
issues. In chapter 10, we will look at some of the additional
complications
we need to consider when we are dealing with systems that have
2 or more
available CPUs for scheduling processes in the operating
system.
Learning Objectives
After studying this chapter, you should be able to:
• Explain the differences among long, medium and short-term scheduling.
• Assess the performance of different scheduling policies.
• Be able to simulate different preemptive and non-preemptive short-term
scheduling policies by hand, and understand the issues and mechanisms
involved with their implementation.
9 Uniprocessor Scheduling
9.1 Types of Processor Scheduling
The aim of processor scheduling is to assign processes to be
executed by the
processor(s) over time, in a way that meets system objectives.
We may have
many different objectives in mind that we want to try and
optimize, and
as always in such design decisions, some are mutually exclusive
such that
optimizing one necessarily means performing worse in some
other measure.
Objectives might include response time or overall system
throughput, or
simply trying to maximize CPU utilization if there are processes
waiting
to execute. In many systems, the scheduling of processes is
broken down
into three separate functions: long, medium and short-term
scheduling. The
relative time scales of the activities performed change what the
system might
focus on, and what measures or features it can successfully try
and optimize.
Figure 9.2 is important in understanding the different time frames, and
how the scheduling functions relate to each other on these
varying time
frames. We will discuss in a little more detail next, but
basically short-
term scheduling deals with the most basic of our process state model
concepts (from chapter 3): the ready/running/blocked transitions.
Medium-term scheduling involves decisions about if and when to suspend
(swap) some processes out of main memory. And generally, long-term
scheduling has to do with decisions in large systems of when to allow
processes to begin
running, and
possibly terminating processes early if needed.
Long-Term Scheduling
The long-term scheduler determines which programs are
admitted to the
system for processing. Thus it controls the degree of multi-
programming in
the system. When the long-term scheduler decides to admit a
new job to
the system, it can add it to the queue for the short-term scheduler,
which causes
the program to immediately be loaded into memory once it is
scheduled. Or
it can start the process in a swapped-out condition, which might
allow the
system to create some structures for the process on disk, but
keeps memory
free for actively running processes. Most general-purpose OSes
will simply
start processes immediately. In big batching systems or HPC
computers,
a batching system may hold jobs as needed, and only create
them when
sufficient resources are available. In such a case, the long-term
scheduler
might admit/create new jobs on a first-come-first-served
(FCFS) basis, or it
could use information about the requirements of the jobs to
schedule them,
for example trying to keep a mix of processor-bound and I/O-
bound jobs in
the system.
Medium-Term Scheduling
Medium-term scheduling is part of the swapping function. We
have discussed
many of the issues involved with the swapping decision already.
Basically,
swapping in/out processes can be used to manage the
availability of primary
memory on the system. So if memory is tight, the swapper can
move some
processes completely out of memory for a short period. But if
CPU utilization is becoming low and processes are swapped out, then the
medium-term
scheduler might attempt to swap back in some processes in
order to increase
the degree of multiprogramming and thus increase CPU and
resource utilization.
Short-Term Scheduling
In terms of the frequency of execution, the long-term scheduler
usually executes rather infrequently, and the medium-term
scheduling/swapping decisions are also going to be somewhat infrequent. The short-term
scheduler,
however, often needs to be invoked very frequently. For
example, every time
a process becomes blocked because of read or write activities,
the short-term
scheduler will be run to select the next process to get the CPU.
Also we
often want to allow the system to preemptively cause running
processes to
be switched, so that we can support the illusion of simultaneous execution
(multitasking) in the system. Without preemption, the system appears to hang
whenever it
executes a program that is CPU bound and runs for a long time
without
returning for I/O.
9.2 Scheduling Algorithms
The purpose of process scheduling is to try and maximize
performance. In
order to successfully maximize performance, we have to have
some idea of the
criteria we want to measure, and thus try and improve by
selecting processes
in some specific manner. As the text mentions, there are many different
criteria we might try and maximize. The textbook breaks these
criteria into
two broad classes, user-oriented criteria and system-oriented
criteria. The
names should be fairly self-explanatory. User-oriented criteria are those
measures that are visible to, and most affect, user perceptions of
how the system
is performing. A classic example would be the response time of
the system,
how fast it appears to begin working on any command you ask it
to perform.
User-oriented criteria can be viewed from the perspective of a
human user of
the system, or from the view of a single process executing in the
system. Ex-
amples of system-oriented criteria would be throughput and
CPU utilization.
A user might just want their jobs or process to be completed as
quickly as
possible. But overall, from the system perspective, we want to
complete as
many jobs as possible in the shortest time possible. And, one
way to achieve
this, is to keep the CPU (and other resources) as busy as
possible, given the
current set of jobs we are managing. On single user type
systems, such as
your personal computer or smart phone, system level criteria are
probably
not important; it is only important that the user is getting the
performance
they expect from the system. System level performance criteria
are more
important on large shared-user systems, such as HPC
supercomputers and
large batching systems and server systems. Table 9.2 of our
textbook gives a
summary of some of the types of performance criteria you
should be familiar
with, in the context of the scheduling of processes.
Priorities
While it is fine in theory to create scheduling algorithms that only look
at particular criteria in order to try and fine-tune the scheduling, in
practice we often need a more explicit and finer level of control to be
available to the
system operators and designers to explicitly tell the system
which processes
to prefer over which others when scheduling. Priorities are used
in many
systems to allow this fine-grained level of control. Priorities
can be explicitly
assigned to the entities being scheduled (e.g. processes), and the
level of the
priority can be taken into account when making scheduling
decisions. For
example, we could implement a strict priority based scheduler,
with round-
robin scheduling among processes of equal priority. In a strict
priority based
mechanism, we would always choose among the highest priority
processes
currently in the system, and schedule them first. As shown in figure 9.4 of
our textbook, one way this could be achieved is by using
separate priority
queues for each level in our priority scheme. Then, when a
process needs to be
scheduled, we would first see if any processes are in the highest
priority queue
(and select the one at the head of this queue, if using round-
robin scheduling
within the priority queuing scheme). Only if the highest priority
queue is
empty, would we then check the next highest queue for ready
processes.
One problem with this pure priority scheduling scheme we have
just described is that lower priority processes may suffer starvation
(and thus this
scheme can be unfair in its scheduling policy). If there are
always high priority processes running or continually entering the system,
low priority processes
will never be scheduled to run. Thus strict priority scheduling
policies are
rarely used. However, by adding a concept of aging, where the
priority of
a process can change dynamically, we can fix this problem. For
example,
the longer a process is in the system, the higher its priority
might become.
And conversely, the more a process executes, the lower we
might set its priority, for some time. Such modified priority schemes are very
common in
modern operating systems. For example, UNIX and Windows
variants use
modifications of process priority, with dynamically changing
priorities, in
their implementations of their process scheduling mechanisms.
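As a rough sketch of the strict-priority-with-round-robin idea just described (my own illustration, not code from the textbook or this course; the number of levels and the pid values are made up), we can keep one FIFO queue per priority level and always dispatch from the highest non-empty queue:

#include <deque>
#include <vector>
#include <iostream>

const int NUM_PRIORITIES = 4; // level 0 is assumed to be the highest priority

// One ready queue per priority level; entries are hypothetical process ids.
std::vector<std::deque<int>> readyQueues(NUM_PRIORITIES);

// Select the next process to run: scan from the highest priority level down
// and take the process at the head of the first non-empty queue. Round robin
// within a level falls out of pushing preempted processes onto the back.
int selectNext()
{
  for (int level = 0; level < NUM_PRIORITIES; level++)
  {
    if (!readyQueues[level].empty())
    {
      int pid = readyQueues[level].front();
      readyQueues[level].pop_front();
      return pid;
    }
  }
  return -1; // no ready process
}

int main()
{
  readyQueues[2].push_back(7); // a lower priority process
  readyQueues[0].push_back(3); // a higher priority process
  std::cout << "dispatch pid " << selectNext() << std::endl; // picks 3
  std::cout << "dispatch pid " << selectNext() << std::endl; // then 7
  return 0;
}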
Alternative Scheduling Policies
In order to better understand the trade-offs we need to make when
thinking
about process scheduling mechanisms, our textbook presents
and compares
several short-term scheduling policies in this section. I will
summarize some
of the important points made in this section, but you should
make sure you
understand all of the policies discussed in this section, how they
work, and
their effects on various criteria, such as throughput, response
time, etc. Table
9.3 in our text is an important summary of the basic
conclusions about the
policies we discuss in this section.
First I will talk about the rows on this table, and then we will
summarize
briefly the various scheduling policies. The selection function
determines
how the policy decides which process will be selected next from
among the
pool of available ready processes. For example, for the first-come-first-served
(FCFS) policy, we will select the process that has been waiting
the longest
as the next to be scheduled to run. The process that has been
waiting
the longest must have arrived first, before any other process
that might
be waiting. Thus, by selecting the process by max(w), the one
that has
waited the longest, we are selecting the first process to arrive
among the
currently waiting processes (FCFS). The selection functions for
all of the
other policies should make sense to you, if you understand what
the w (wait
time), e (execution time) and s (total service time needed)
parameters mean.
The decision mode is an important characteristic of process
scheduling
policies. Basically, non-preemptive policies are not really
suitable for multi-
user interactive systems. When the system schedules processes
in a non-
preemptive manner, this means that once the process is running,
it will
continue running until it is finished. This is fine in a batching
system, but in
an interactive user-based system, the system cannot fail to
respond to user
input if there is a long-running process currently scheduled and
running.
Thus we need to use preemptive scheduling policies for
interactive user
systems. In a preemptive system, the running process can be
halted and
returned back to the ready queue (or a wait queue), based on
some events.
As discussed in the text, we can support different types of preemption,
such as only preempting if needed when a new process arrives (as in SRT).
However,
again if we are talking about a system that supports interactive
users, we
will need to support time quantum (and I/O blocking based)
preemption.
Throughput and response time represent two of the scheduling
criteria
that we might be interested in influencing. As we have
discussed, throughput
is a system-level criterion, while response time is more user-
level focused.
Overhead in this table is a judgment on how costly the
algorithm is to
implement. If the overhead is high for the policy, it may take a
lot of memory
and/or CPU cycles in order to implement the data structures and
calculate
the selection function of the given policy.
The effect on processes row basically gives some ideas
about if the
policy is fair or not, and if it is not fair, which types of
processes are likely
to be favored or penalized by the policy. I/O bound processes
are those that
need to do a lot of I/O, such as reading or writing large amounts
of data
to/from disk. The opposite of an I/O bound process is a compute
(CPU)
bound process, which once loaded into memory does not need
additional
I/O, and will effectively crunch data at a high speed (and not
relinquish the
CPU voluntarily).
And finally, starvation can be a consideration for some process
selection
policies. As we mentioned, starvation is possible for strict
priority based
schedulers, as low priority processes may never be selected to
run if high
priority processes are always present. We have talked about
starvation in
previous chapters on concurrency, and starvation is always a
possible problem
when running concurrent processes. As we see in this table,
some process
selection policies can be created that are guaranteed to avoid
starvation,
such as the simple round robin policy.
I will now briefly mention each of the short term scheduling
policies that
the textbook presents and analyzes in this section. Make sure
you understand
how these policies work and are implemented, and the trade-offs they make in
terms of the parameters they are optimizing. Also, we will
mention and use
the process finish time, turnaround time (Tr) and the normalized
turnaround
ratio (Tr/Ts) when we discuss these policies. The start and
finish times should be obvious: the start time is when the process first
enters the system (not the first time step when it is run), and the finish
time is the time step in which the process finishes execution. The
turnaround time,
also known
as the residence time Tr, is simply the total time the process
spent in the
system (e.g. finish time minus start time). The turnaround time by itself
is not very useful, as it can vary widely as a function of the
service time of
the process. The service time is how many time steps the
process needs to
execute in order to complete its task. Turnaround time alone is not useful in
comparing the
relative performance of different policies, because longer
processes will have
long turnaround times and short processes may have short (or not)
turnaround
times. By dividing the turnaround time by the process's service
time, we get
a better understanding of the magnitude of time the process
spent waiting
(Tr/Ts), and we can better average and compare such ratios to
judge the
relative effects of scheduling policies on system responsiveness. For
example, a process that arrives at time 0, needs Ts = 3 time units of
service, and finishes at time 9 has Tr = 9 and a normalized turnaround of
Tr/Ts = 3.0.
First-Come-First-Served (FCFS)
FCFS is the simplest scheduling policy, and thus has (almost)
no overhead
in order to implement it on a system. FCFS can also be
described as (and
called) a strict first-in-first-out (FIFO) queuing scheme. FCFS
just requires
a simple queue, and it is non-preemptive in nature. When a
process arrives,
we just put it on the back of the queue. And when a process finishes or we
need to schedule a new process to run, we take the one at the
head of the
queue and it will execute until it completes (or if we have a
blocked/wait
state, it executes until its next I/O request). Basic batching
systems use this
scheduling policy.
FCFS is not particularly good in terms of response time and
other characteristics, and as mentioned it can be unfair to I/O bound
processes, as I/O
bound processes will get returned back to the end of the queue,
but compute
bound processes, once they get scheduled, since there is no
preemption, will
monopolize the CPU and finish up, leaving the I/O bound
processes waiting
around longer than necessary.
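As a small made-up example of this penalty (not one of the textbook's tables), consider three processes run under FCFS:

Process  Arrival  Service  Finish  Tr   Tr/Ts
A        0        3        3       3    1.00
B        1        5        8       7    1.40
C        2        2        10      8    4.00

The short process C spends most of its time in the system waiting behind the long job B, so its normalized turnaround time (Tr/Ts = 4.0) is by far the worst of the three.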
Round Robin
Round robin is basically FCFS, but where we add a preemption mechanism. We
have 1 queue, and processes that block for I/O are
returned to the
queue as before. But we also cause the running process to be
periodically
preempted when it exceeds its time slice (called the time slice quantum).
Because of this, the RR policy is also sometimes known
as a time
slicing policy.
By adding the ability to preempt, RR addresses the unfairness of
FCFS
for I/O bound processes. However, throughput and response
time may not
be very good with a strict RR preemptive scheduler.
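As a minimal sketch of the time-slicing mechanism (my own illustration, not the assignment's required implementation; the pids, the global state, and the tick() helper are all made up), the dispatcher can simply count the steps the running process has used in its current slice:

#include <deque>
#include <iostream>

// Hypothetical round robin state: a FIFO ready queue of process ids plus a
// counter of how much of the current time slice has been used.
std::deque<int> readyQueue;
int quantum = 3;    // time slice length, in simulated time steps
int sliceUsed = 0;  // steps the current process has run in this slice

// Called once per simulated time step; returns the pid that should run
// during the next step.
int tick(int runningPid)
{
  sliceUsed++;
  if (sliceUsed >= quantum && !readyQueue.empty())
  {
    // quantum expired: preempt and put the process on the back of the queue
    readyQueue.push_back(runningPid);
    runningPid = readyQueue.front();
    readyQueue.pop_front();
    sliceUsed = 0;
  }
  return runningPid;
}

int main()
{
  readyQueue.push_back(2); // another ready process is waiting
  int running = 1;
  for (int t = 0; t < 8; t++)
  {
    std::cout << "t=" << t << " running pid " << running << std::endl;
    running = tick(running);
  }
  return 0;
}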
Shortest Process Next (SPN)
The SPN policy is another nonpreemptive policy. Further, in
order to implement SPN, each process must state up front how long it will
need to execute.
The selection function for SPN is to take the process that is the
shortest that
is currently waiting to be executed as the next process. There
are several
problems with this policy in regards to implementation in real
systems. As
we have already mentioned, since it is nonpreemptive, it is not
suitable for
most interactive systems. Also for most real systems we don't
often really
know how long a process will need to execute. Thus we can
require that
an estimate be given for the length of the process when it is
created and
submitted. However, then we need to decide what we want to do
with processes that have exceeded their stated execution service time;
should they be
terminated? All of these considerations need to be tracked in a
real implementation, which can mean that SPN needs a fair amount of
overhead in
order to be implemented.
SPN is actually fairly good in terms of its throughput and
response time,
at least for short processes. However, SPN can penalize long
processes. This
should be intuitively obvious, as in a system where lots of short
processes
continually enter the system, longer processes may have to wait
a long time
before they get selected to be run. In extreme cases starvation is
possible,
where a long process is never selected because short processes
are continually available. So SPN can be unfair to long processes.
Shortest Remaining Time (SRT)
In order to implement the SRT policy, we need to keep track of
how much
execution time each process still has to do. In the version
discussed in our
textbook, the SRT policy is preemptive, but only at the time of
the creation/entry of a new process. Whenever a new process becomes
available,
the currently running process may be preempted if the new
process is shorter
than the remaining time on the currently running process. In
order to calculate the remaining time of processes, we of course need to
know how much
total time the process needs to run; thus the shortcomings for real
implementations that we discussed for SPN are also present for SRT, and
it can also
need more overhead than other schedulers as well.
SRT can provide good throughput and response times, though it
still
might penalize long processes, for the same reasons as SPN
previously.
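A hedged sketch of just the SRT selection function is shown below; the Proc struct here is a made-up stand-in whose fields only roughly mirror the bookkeeping kept by the p5-start.cpp starter code that appears later in this document, and this is not the full simulator:

#include <iostream>
#include <vector>

// Minimal stand-in for a process entry (hypothetical field names).
struct Proc
{
  int arrivalTime;  // time step at which the process enters the system
  int serviceTime;  // total time steps of CPU the process needs
  int totalTime;    // time steps it has executed so far
  bool finished;    // true once totalTime == serviceTime
};

// SRT selection: among arrived, unfinished processes, pick the one with the
// smallest remaining time (serviceTime - totalTime). Returns -1 if none.
int selectShortestRemaining(const std::vector<Proc>& procs, int now)
{
  int best = -1;
  int bestRemaining = 0;
  for (size_t i = 0; i < procs.size(); i++)
  {
    const Proc& p = procs[i];
    if (p.finished || p.arrivalTime > now)
      continue;
    int remaining = p.serviceTime - p.totalTime;
    if (best == -1 || remaining < bestRemaining)
    {
      best = (int) i;
      bestRemaining = remaining;
    }
  }
  return best;
}

int main()
{
  std::vector<Proc> procs = {
    {0, 8, 3, false}, // arrived, 5 time steps remaining
    {1, 4, 0, false}, // arrived, 4 time steps remaining (selected)
    {5, 2, 0, false}  // has not arrived yet at t = 2
  };
  std::cout << "selected index " << selectShortestRemaining(procs, 2) << std::endl;
  return 0;
}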
Highest Response Ratio Next (HRRN)
In order to understand HRRN, you must understand what the
normalized
turnaround time is and how it is calculated. If we can calculate
the current
normalized turnaround time for all of the processes currently in
our system,
HRRN simply says that we should select the process that has the
highest
(e.g. the worst) such measure to be the next to execute. This is
intuitive if
we are trying to minimize the normalized turnaround time (and thus improve
the throughput) of our system. By selecting the process that is doing badly on
this measure
next, we have a chance to improve its turnaround time, and thus
improve
our system's overall average. Basically, by calculating and
choosing processes
based on their turnaround time, we are adding a type of aging
mechanism.
HRRN is nonpreemptive. And as with the previous 2 policies,
we need to
know the expected service time in order to calculate the
response ratio. This
coupled with its nonpreemptive nature makes it unsuitable for
interactive
systems. However, HRRN does �x some of the de�ciencies that
SRT and
SPN have in their unfairness towards long processes, as long
processes will
have increasingly larger response ratios the longer they are in
the system,
and thus will be guaranteed to be selected at some point to be
executed,
thus avoiding starvation.
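For a concrete, made-up example of the response ratio R = (w + s) / s (where w is the time spent waiting and s is the expected service time): a long process with s = 20 that has waited w = 8 has R = 28/20 = 1.4, while a short process with s = 3 that has waited only w = 2 has R = 5/3 ≈ 1.67, so the short process is selected first. If the long process is still waiting after w = 30, however, its ratio has grown to 50/20 = 2.5, and it will eventually outrank newly arriving short jobs.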
Feedback
The feedback policy as described in our book is basically a type
of priority
scheme. As discussed in this section, the feedback policy uses a
preemptive
(time-sliced) based mechanism, coupled with a dynamic priority
mechanism.
So basically, this is not a strict priority based policy, but a
priority scheduler with a dynamic priority mechanism, as we mentioned before.
Also as we
said previously, modern interactive OS systems basically use
some form of
dynamic priority based time-sliced scheduler, so this policy is
the most important to understand in order to comprehend how real OS
schedulers work.
The feedback policy schedules processes on a preemptive (at a
time quantum) basis. Processes when they first enter the system have a
particular
priority assigned to them, usually of the highest priority. After
each preemption of the process, its priority is lowered by 1 level (down
to some
minimum priority).
A short process will complete quickly, without migrating very far
down the
priority hierarchy. A longer process will gradually drift
downward. Thus
newer, shorter processes are favored over older, longer
processes.
The performance characteristics of a feedback/priority scheduler
will vary
with the time quantum parameter (as will RR). Also, the details
of the
dynamic priorities will have large effects on the performance
characteristics
as well. However, since this policy is preemptive it is suitable
for interactive
systems. And because of the priorities, it can be tuned in
various ways, to
try and avoid starvation, and to give fair treatment to different
types of short
and long and I/O and compute bound processes, as needed.
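A tiny sketch of the demotion step is shown below (my own illustration; real feedback schedulers add many refinements, such as per-level quanta and ways to boost processes that have waited too long):

#include <algorithm>
#include <iostream>

const int LOWEST_PRIORITY = 3; // levels 0 (highest) through 3 (lowest), an assumed layout

// Hypothetical per-process scheduling state for a feedback scheduler.
struct SchedState
{
  int priority; // current dynamic priority level
};

// Called whenever the running process is preempted at the end of its
// quantum: demote it one level, but never below the lowest level.
void onQuantumExpired(SchedState& s)
{
  s.priority = std::min(s.priority + 1, LOWEST_PRIORITY);
}

int main()
{
  SchedState p = {0}; // a new process starts at the highest priority
  for (int i = 0; i < 5; i++)
  {
    onQuantumExpired(p);
    std::cout << "after preemption " << i + 1 << ": priority " << p.priority << std::endl;
  }
  return 0;
}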
Operating Systems, Stallings
Chapter 10 Notes
CSci 430 Spring 2019
Overview
Though the addition of more than 1 CPU into a system does
introduce some
new issues to be considered when scheduling processes, the
basic mechanisms
and analysis of process scheduling are not so different as you
might think.
Section 10.1 of our textbook covers 3 main categories of
multiprocessor systems:
1. Loosely coupled or distributed memory cluster systems
2. Functionally specialized processors
3. Tightly coupled, shared memory multiprocessors.
Of the three categories, the third category is the most familiar
one to
most of the students of this course. Modern general purpose
computing systems often have multi-core chips and/or 2 or more CPU chips as
part of the
system. However, all of the CPUs on such multi-core chips are
functionally
equivalent. Modern general purpose personal computers actually
often do
have a specialized CPU, different from the general purpose
CPUs. This is
the CPU in your graphics processor or graphics card. However,
usually the
OS does not schedule work or processes to be run on this CPU; instead, the
video graphics drivers use this specialized CPU to perform
rendering and
display driver tasks. The first category, scheduling on a
distributed memory
cluster system, is an issue very important to modern
supercomputing cluster systems. Most supercomputers used in scientific research are
what are
known as commodity computing clusters. They are basically a
collection of
large numbers of standard computers, connected together using
high-speed
networking connections. Unlike your personal computer or
laptop, all of the
computers in such a cluster computer have their own set of
memory, thus
they are often referred to as distributed memory systems, in
contrast to your
typical personal computer that has multiple CPUs sharing a
common pool of
RAM memory. In this course, we will concentrate on the issues
of processor
scheduling for shared memory multiprocessor systems, but you
should read
the materials about the other 2 categories of systems as well in
section 10.1
of our textbook.
One important consideration when scheduling processes on a
multi-processor
system is how to assign processes to available processors. The simplest
approach is to treat all of the CPUs as an available pool of
processors, and
assign processes to processors on demand. The question arises
at this point
of whether, when a process starts running on a CPU, the
assignment should
be static or dynamic.
The static vs. dynamic decision can have consequences. In a
multi-core
CPU chip, the cores have local cache memory that is often
separate, or not
shared, among the cores (the L1 and/or L2 cache
levels). When a
process runs on a CPU, it begins using the cache memory local
to that CPU
core. So, if we use dynamic scheduling, the process could be
assigned and run
on a different core at a later time. However, this has the
disadvantage that
all of the loaded cache data becomes invalid, once the process is
switched to
another core. This can be inefficient, as all of the cached data will end
up needing to be reloaded once again into the new core's L1 cache. However,
static assignment has its own problems. In static assignment,
once a process
begins using a particular core, we don't switch it to use another
core. The
obvious problem occurs if you have 2 very long running
processes that get
assigned to the same core: they could end up competing to be
scheduled on
the same core, while other cores go idle, with no statically
assigned processes
running on them.
Thus, in a modern OS that handles multi-core scheduling, it is
common
to see a dynamic allocation implementation. However, normally
the processor scheduler has built-in preferences that cause the
processes to greatly
prefer to keep being scheduled on the same CPU (known as processor affinity).
Some parameters can be set or tweaked for a running process, to
make this
affinity for a particular core stronger or weaker. Thus, with
this concept
of processor affinity, processes tend to try to execute on the
same core they
begin working on, but can be switched over to a new core when
necessary.
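As an aside, on Linux a program can also pin itself to a particular core with the sched_setaffinity() system call (the kernel's load balancer then respects that mask). A minimal, Linux-only sketch, with error handling mostly omitted:

#ifndef _GNU_SOURCE
#define _GNU_SOURCE   /* needed for CPU_SET / sched_setaffinity on glibc */
#endif
#include <sched.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
  cpu_set_t set;
  CPU_ZERO(&set);
  CPU_SET(0, &set); /* ask to run only on CPU core 0 */

  /* pid 0 means "the calling process/thread" */
  if (sched_setaffinity(0, sizeof(set), &set) != 0)
  {
    perror("sched_setaffinity");
    return 1;
  }
  printf("pid %d is now restricted to core 0\n", (int) getpid());
  return 0;
}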
p5-start.cpp

/**
 * @author Jane Student
 * @cwid 123 45 678
 * @class CSci 430, Spring 2018
 * @ide Visual Studio Express 2010
 * @date November 15, 2018
 * @assg prog-04
 *
 * @description This program implements a simulation of process
 *   scheduling policies. In this program, we implement round-robin
 *   scheduling, where the time slice quantum can be specified as a
 *   command line parameter. And we also implement the shortest
 *   remaining time (SRT) scheduling policy.
 */
#include <stdlib.h>
#include <iostream>
#include <iomanip>
#include <fstream>
#include <string>
#include <list>
using namespace std;

// global constants
// I won't test your round robin implementation with more than 20 processes
const int MAX_PROCESSES = 20;
const int NO_PROCESS = 0;
// Simple structure, holds all of the information about processes, their
// names, arrival and service times, that we are to simulate.
typedef struct
{
  string processName;
  int arrivalTime;
  int serviceTime;

  // holds running count of time steps in the current time quantum; when
  // sliceTime == quantum, the time slice is up
  int sliceTime;

  // holds total number of time steps currently run; when == to
  // serviceTime the process is done
  int totalTime;

  // holds time when process finishes, used to calculate final stats,
  // like T_r, T_r/T_s
  int finishTime;

  // a boolean flag, we will set this to true when the process is complete
  bool finished;
} Process;

// Process table, holds table of information about processes we are simulating
typedef struct
{
  int numProcesses;
  Process* process[MAX_PROCESSES];
} ProcessTable;
/** Create process table
 * Allocate memory for a new process table. Load the process
 * information from the simulation file into a table with the process
 * information needed to perform the simulation. At the same time we
 * initialize other information in the process table for use in the
 * simulation. Return the newly created ProcessTable.
 *
 * @param processFilename The name (char*) of the file to open and read
 *   the process information from.
 *
 * @returns ProcessTable* The newly allocated and initialized ProcessTable
 *   structure.
 */
ProcessTable* createProcessTable(char* processFilename)
{
  ifstream simprocessfile(processFilename);
  ProcessTable* processTable;
  int pid;
  string processName;
  int arrivalTime;
  int serviceTime;

  // If we can't open file, abort and let the user know problem
  if (!simprocessfile.is_open())
  {
    cout << "Error: could not open process simulation file: "
         << processFilename << endl;
    exit(1);
  }

  // Format of file is
  //   ProcessName1 ArrivalTime1 ServiceTime1
  //   ProcessName2 ArrivalTime2 ServiceTime2
  //   ...
  //   ProcessNameN ArrivalTimeN ServiceTimeN
  //
  // Where the name is any arbitrary string identifier, and ArrivalTime
  // and ServiceTime are integer values
  pid = 0;
  processTable = new (ProcessTable);
  while (simprocessfile >> processName >> arrivalTime >> serviceTime)
  {
    // allocate a new process to hold information
    Process* process = new (Process);
    processTable->process[pid] = process;

    // load information into process read from simulation file
    process->processName = processName;
    process->arrivalTime = arrivalTime;
    process->serviceTime = serviceTime;

    // initialize other process information for the simulation
    process->sliceTime = 0;
    process->totalTime = 0;
    process->finishTime = 0;
    process->finished = false;

    pid++;
  }

  // Set the number of processes we need to simulate in the process table
  processTable->numProcesses = pid;

  return processTable;
}
/** Display process table
 * Convenience method, dump all of the information about the processes
 * in a process table to stdout.
 *
 * @param processTable The table, a pointer to type ProcessTable
 *   struct, with the information we are to display
 */
void displayProcessTable(ProcessTable* processTable)
{
  cout << "Process Table num = " << processTable->numProcesses << endl;
  cout << "PID Name Arrv Srvc" << endl;
  cout << "------------------" << endl;
  for (int pid = 0; pid < processTable->numProcesses; pid++)
  {
    Process* p = processTable->process[pid];
    cout << setw(2) << right << pid << ") ";
    cout << setw(4) << left << p->processName << " ";
    cout << setw(4) << right << p->arrivalTime << " ";
    cout << setw(4) << right << p->serviceTime << " ";
    cout << endl;
  }
}
/** Round robin scheduler simulator
 * The main routine for performing the round robin preemptive
 * scheduler simulator. We expect the time quantum to already be
 * specified and given to us as the first parameter. The file name
 * with the process arrival and service time information is given as
 * the second parameter. We simulate preemptive round robin
 * scheduling of all of the processes until there are no longer any
 * processes left in the system (all processes have exceeded their
 * service time and have exited).
 *
 * @param processTable A pointer to a ProcessTable structure holding
 *   information about the processes, arrival times and durations
 *   that we are simulating execution of.
 * @param quantum An integer value holding the time slice quantum we
 *   are using for this simulation.
 */
void roundRobinScheduler(ProcessTable* processTable, int quantum)
{
  // Implement the round robin scheduler here
  cout << "<roundRobinScheduler> entered, quantum: " << quantum << endl;
}
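For reference, one way the body of the round robin simulation could be organized is sketched below. This is not the assignment's reference solution; it assumes unit time steps, a FIFO ready queue of process table indices (which would require #include <queue> at the top of the file), and the convention that a process arriving at a given instant is queued ahead of a process preempted at that same instant. The helper name roundRobinSketch, and the reuse of the totalTime field to accumulate executed time, are illustrative assumptions.

// Sketch only: assumes #include <queue> and the Process/ProcessTable
// structures defined above; ties and formatting may need adjusting to
// match the provided .res files exactly.
void roundRobinSketch(ProcessTable* pt, int quantum)
{
  queue<int> ready;   // indices of ready processes, FIFO order
  int time = 0;       // current simulated time step
  int finished = 0;   // number of processes that have completed
  int current = -1;   // index of running process, -1 means CPU idle
  int used = 0;       // time used so far in the current quantum

  while (finished < pt->numProcesses)
  {
    // admit any processes that arrive at this time step
    for (int pid = 0; pid < pt->numProcesses; pid++)
      if (pt->process[pid]->arrivalTime == time)
        ready.push(pid);

    // preempt the running process when its quantum expires
    if (current != -1 && used == quantum)
    {
      ready.push(current);
      current = -1;
    }

    // dispatch the next ready process if the CPU is idle
    if (current == -1 && !ready.empty())
    {
      current = ready.front();
      ready.pop();
      used = 0;
    }

    // run the current process for one time unit
    if (current != -1)
    {
      Process* p = pt->process[current];
      cout << p->processName << " ";
      p->totalTime++;
      used++;
      if (p->totalTime == p->serviceTime)
      {
        p->finished = true;
        p->finishTime = time + 1;
        finished++;
        current = -1;
      }
    }
    time++;
  }
  cout << endl;
}

With the process-01.sim data and a quantum of 4, this ordering of arrival, preemption, and dispatch reproduces the run sequence shown in the assignment example, but the tie-breaking conventions are still an assumption rather than a specification.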
/** Shortest remaining time simulator
 * The main routine for performing the shortest remaining time
 * preemptive scheduler simulator. The file name with the process
 * arrival and service time information is given as the first
 * parameter. We simulate preemptive shortest remaining time
 * scheduling of all of the processes until there are no longer any
 * processes left in the system (all processes have exceeded their
 * service time and have exited).
 *
 * @param processTable A pointer to a ProcessTable structure holding
 *   information about the processes, arrival times and durations
 *   that we are simulating execution of.
 */
void shortestRemainingTime(ProcessTable* processTable)
{
  // Implement the shortest remaining time policy here
  cout << "<shortestRemainingTime> entered" << endl;
}
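A comparable sketch for the shortest remaining time policy follows. It re-evaluates the remaining time of every arrived, unfinished process at each unit time step and breaks ties by lowest process index; the name shortestRemainingTimeSketch and these conventions are illustrative assumptions, not the required implementation.

// Sketch only: selects, at each time step, the arrived process with the
// least remaining service time; ties go to the lowest pid.
void shortestRemainingTimeSketch(ProcessTable* pt)
{
  int time = 0;
  int finished = 0;

  while (finished < pt->numProcesses)
  {
    // choose the arrived, unfinished process with least remaining time
    int best = -1;
    int bestRemaining = 0;
    for (int pid = 0; pid < pt->numProcesses; pid++)
    {
      Process* p = pt->process[pid];
      if (p->finished || p->arrivalTime > time)
        continue;
      int remaining = p->serviceTime - p->totalTime;
      if (best == -1 || remaining < bestRemaining)
      {
        best = pid;
        bestRemaining = remaining;
      }
    }

    // run the selected process for one time unit, if any is ready
    if (best != -1)
    {
      Process* p = pt->process[best];
      cout << p->processName << " ";
      p->totalTime++;
      if (p->totalTime == p->serviceTime)
      {
        p->finished = true;
        p->finishTime = time + 1;
        finished++;
      }
    }
    time++;
  }
  cout << endl;
}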
/** Main entry point of the scheduler simulator
 * The main entry point of the scheduler simulator. The main function
 * checks the command line arguments, and calls the simulation function
 * if correct arguments were supplied. We expect the name of the
 * simulation file holding the process arrival and service time
 * information, the scheduling policy to simulate, and, for round robin
 * scheduling, the time slice quantum value to use for the preemptive
 * scheduler simulation.
 *
 * @param argc The argument count
 * @param argv The command line argument values. We expect argv[1] to be
 *   the name of the process simulation file (character string), argv[2]
 *   to be the scheduling policy (rr or srt), and argv[3] to be the time
 *   slice quantum parameter (int format) when round robin is selected.
 */
int main(int argc, char** argv)
{
  string policy;
  ProcessTable* processTable;
  int quantum = 0;

  // If not all parameters provided, abort and let user know of problem
  if (argc < 3 || argc > 4)
  {
    cout << "Error: expecting process simulation file and scheduling policy as command line parameters" << endl;
    cout << "Usage: " << argv[0] << " process-file.sim [rr|srt] [quantum]" << endl;
    exit(1);
  }

  // load process table and parse command line arguments
  processTable = createProcessTable(argv[1]);

  // just to confirm that the process table loaded correctly. You should
  // comment out or remove this as it is not asked for as part of the
  // output for the assignment simulation
  displayProcessTable(processTable);

  // determine policy to simulate
  policy.assign(argv[2]);

  // perform simulation of indicated scheduling policy
  if (policy == "rr")
  {
    if (argc != 4)
    {
      cout << "Error: time quantum must be provided for round robin `rr` scheduling policy" << endl;
      exit(1);
    }
    quantum = atoi(argv[3]);
    if ((quantum <= 0) || (quantum > 1000))
    {
      cout << "Error: received bad time slice quantum parameter: " << argv[3] << endl;
      cout << "       valid values are integers in range from 1 to 1000" << endl;
      exit(1);
    }
    roundRobinScheduler(processTable, quantum);
  }
  else if (policy == "srt")
  {
    shortestRemainingTime(processTable);
  }
  else
  {
    cout << "Error: unknown process scheduling policy: " << policy << endl;
  }

  return 0;
}
prog-05.pdf
Programming Assignment #5
CSci 430, Spring 2019
Dates:
Assigned: Monday April 15, 2019
Due: Wednesday May 1, 2019 (before Midnight)
Objectives:
• Understand short-term process scheduling.
• Work with data structures to implement a round-robin scheduler.
• Look at effects of different time slice quantum sizes on the round-robin scheduling algorithm.
• Use C/C++ to implement vector and matrix data structures, get practice in creating and using such data structures in C/C++.
Description:
Our textbook's chapter 9 discusses several possible short-term process scheduling policies. In this programming assignment exercise we will implement two of the preemptive policies, the simple shortest remaining time policy (SRT) and the round-robin scheduler with preemptive time slicing. Your program will be given a simple input file, indicating the process name, its arrival time and its total service time, the same as the process scheduling examples from our textbook in Table 9.4 and Figure 9.5. You will simulate the execution of the required schedulers. As in previous assignments, your program will need to work non-interactively and be callable from the command line. The program will be provided with the file name of a file with process information, in the format discussed below. Your program will also be given the time slicing quantum parameter it is to use for the simulation, if round-robin scheduling is selected. Your program will need to output the results of running the set of simulated processes using the selected scheduling policy with the indicated time slice for the round-robin scheduler. Your program will have to output its results exactly as shown below in the required output format. Your program will also need to calculate some summary statistics for the simulated processes, including the turnaround time and Tr/Ts ratio for each process, and the mean Tr and Tr/Ts values for the given simulation.
Process simulation file formats
The files with the information about the processes to be simulated are fairly simple, and have the same information that our textbook uses to illustrate the process scheduling examples. Each simulation file contains multiple rows of data, where each row consists of the process name, its arrival time, and its service time. Here is an example:

A 0 3
B 2 6
C 4 4
D 6 5
E 8 2

This file is named process-01.sim in the zip archive of files I have given you to get started on this assignment. This is also the same set of processes and start/service times used for all of the examples in Table 9.4 and Figure 9.5.
Running Simulations
As with previous assignments you are required to support using your simulation from the command line. Your program will take the name of the file containing the process information first. The next parameter will be either 'rr' to perform round-robin scheduling, or 'srt' if shortest remaining time policy is to be simulated. Finally, a 3rd parameter will be supplied for the round-robin scheduler, the time slice quantum to use. An example of running your finished program should look like this:

$ ./p5 process-01.sim rr 4
A A A B B B B C C C C D D D D B B E E D
Name Fnsh T_r T_r/T_s
----------------------
A 3 3 1
B 17 15 2.5
C 11 7 1.75
D 20 14 2.8
E 19 11 5.5

Here we are running the simulation using the set of process information given in the previous section and with a time slice quantum of 4.
Required Output
As shown above, your program must generate two pieces of output. First of all, while running the simulation of the selected scheduling policy, you should display the process names in the order they are run. In the previous example, the sequence of scheduled/run processes was:

A A A B B B B C C C C D D D D B B E E D

This indicates that process A ran first (times 0, 1 and 2), followed by B running 4 times (times 3 to 7), etc. You are required to output the sequence of process runs as the first line of output, with a single space in between each process name as shown.

After the processes have run, you need to calculate and display the statistics for the processes that you just simulated. In our previous example, the statistics for our round-robin simulation with a time quantum of 4 time slices were:

Name Fnsh T_r T_r/T_s
----------------------
A 3 3 1
B 17 15 2.5
C 11 7 1.75
D 20 14 2.8
E 19 11 5.5

For each process, you need to output the time when it finished, the turnaround time (Tr) and the ratio of the turnaround time to the service time (Tr/Ts).
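The fields already present in the Process structure from p5-start.cpp are enough for these statistics: Tr is the finish time minus the arrival time, and Tr/Ts divides that by the service time. A rough sketch follows; the helper name displayStatistics, the exact column formatting, and how the means are reported are illustrative assumptions, and the final output must match the provided .res files.

// Sketch: compute and print per-process statistics after a simulation run.
// Assumes finishTime has been filled in by the scheduler simulation.
void displayStatistics(ProcessTable* processTable)
{
  double sumTr = 0.0;
  double sumRatio = 0.0;

  cout << "Name Fnsh T_r T_r/T_s" << endl;
  cout << "----------------------" << endl;
  for (int pid = 0; pid < processTable->numProcesses; pid++)
  {
    Process* p = processTable->process[pid];
    int turnaround = p->finishTime - p->arrivalTime;    // T_r
    double ratio = (double)turnaround / p->serviceTime; // T_r / T_s
    sumTr += turnaround;
    sumRatio += ratio;
    cout << p->processName << " " << p->finishTime << " "
         << turnaround << " " << ratio << endl;
  }
  // the assignment also asks for the mean T_r and mean T_r/T_s values
  cout << "Mean T_r = " << sumTr / processTable->numProcesses
       << "  Mean T_r/T_s = " << sumRatio / processTable->numProcesses << endl;
}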
I have provided a zip file with a file named p5-start.cpp as a template to get you started. In addition, I have provided you with two process simulation files, named process-01.sim and process-02.sim, with 2 sets of process information you can simulate. There are several examples of correct results generated for the two sets of inputs, named things like process-01-q1.res, process-01-q4.res, process-01-srt.res, etc. These are the correct results you should get for running your simulation with round-robin scheduling for various time quanta or for shortest remaining time scheduling.
processtable-01.sim
A 0 3
B 2 6
C 4 4
D 6 5
E 8 2
processtable-02.sim
A 0 4
B 1 7
C 4 5
D 4 5
E 7 2
F 8 5
G 10 1
H 10 4
I 12 6
processtable-03.sim
A 0 3
B 2 4
C 3 5
D 3 8
E 3 2
F 5 6
G 7 9
H 7 4
I 8 3
J 8 5
K 8 4
L 10 6
Makefile
all: p5sol
p5: p5-start.cpp
g++ -g $< -o $@
p5sol: p5-solution.cpp
g++ -g $< -o p5
debug: p5-solution.cpp
g++ -DDEBUG_BUILD=1 -g $< -o p5
p5test:
./p5 processtable-01.sim rr 1 > sim-01-rr-1.tst
@diff -s -q sim-01-rr-1.tst sim-01-rr-1.res
./p5 processtable-01.sim rr 4 > sim-01-rr-4.tst
@diff -s -q sim-01-rr-4.tst sim-01-rr-4.res
./p5 processtable-01.sim srt > sim-01-srt.tst
@diff -s -q sim-01-srt.tst sim-01-srt.res
./p5 processtable-02.sim rr 1 > sim-02-rr-1.tst
@diff -s -q sim-02-rr-1.tst sim-02-rr-1.res
./p5 processtable-02.sim rr 4 > sim-02-rr-4.tst
@diff -s -q sim-02-rr-4.tst sim-02-rr-4.res
./p5 processtable-02.sim srt > sim-02-srt.tst
@diff -s -q sim-02-srt.tst sim-02-srt.res
./p5 processtable-03.sim rr 1 > sim-03-rr-1.tst
@diff -s -q sim-03-rr-1.tst sim-03-rr-1.res
./p5 processtable-03.sim rr 5 > sim-03-rr-5.tst
@diff -s -q sim-03-rr-5.tst sim-03-rr-5.res
./p5 processtable-03.sim srt > sim-03-srt.tst
@diff -s -q sim-03-srt.tst sim-03-srt.res
@rm sim-01-rr-1.tst sim-01-rr-4.tst sim-01-srt.tst sim-02-rr-1.tst sim-02-rr-4.tst sim-02-srt.tst sim-03-rr-1.tst sim-03-rr-5.tst sim-03-srt.tst
p5zip:
zip ../prog-05.zip p5-start.cpp prog-05.pdf processtable-01.sim processtable-02.sim processtable-03.sim Makefile *.res
p5solzip:
zip ../prog-05-sol.zip p5-solution.cpp processtable-*.sim sim-*.res
clean:
rm -f p5 sim-*.tst core* *~
sim-01-rr-1.res
sim-01-rr-4.res
sim-01-srt.res
sim-02-rr-1.res
sim-02-rr-4.res
sim-02-srt.res
sim-03-rr-1.res
sim-03-rr-5.res
sim-03-srt.res
Uniprocessor Scheduling, Csci 430, Spring 2018, Texas A&M University – Commerce (slide transcript, continued)
  • 11. Calculating Service Time Interactively: one difficulty with SPN is the need to know or estimate the required processing time (service time) of each process; if the estimate is substantially under the actual running time, the system may abort the job. For a batch job, the service time can be supplied by the programmer. For interactive jobs, the OS can keep a running average of each CPU "burst". Rewriting the summation gives exponential averaging (Exponential Smoothing Coefficients, Use of Exponential Averaging).
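The exponential averaging formulas referenced on this slide were figures in the original deck and were lost in extraction. The standard recurrence, as presented in Stallings for estimating the length of the next processor burst from past observed bursts, is (in LaTeX notation):

S_{n+1} = \alpha T_n + (1 - \alpha) S_n, \qquad 0 < \alpha < 1

which, rewriting the summation, expands to

S_{n+1} = \alpha T_n + (1 - \alpha)\alpha T_{n-1} + \cdots + (1 - \alpha)^{i}\alpha T_{n-i} + \cdots + (1 - \alpha)^{n} S_1

where T_i is the observed length of the i-th burst, S_i is the predicted value, and larger values of the smoothing coefficient \alpha weight recent observations more heavily.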
  • 12. Shortest Remaining Time (SRT) the process that has the shortest expected remaining processing time processes ld give superior turnaround time performance to SPN because a short job is given immediate preference to a running longer job Highest Response Ratio Next (HRRN) the greatest ratio accounts for the age of the process
  • 13. favored, aging without service increases the ratio so that a longer process will eventually get past competing shorter jobs Feedback Scheduling ) of processes, we can’t use SPN, SRT or HRRN. jobs that run longer. spent so far. duling with a dynamic priority mechanism. priority. Thus longer running processes “age” and become low priority, and new short processes don’t age enough before they finish execution. Feedback Scheduling
  • 14. Feedback Performance Performance Comparison Objective: Introduction to queuing theory and modeling to comparative analysis of scheduling (and other) algorithms. Performance Comparison oses the next item to be served independent of service time obeys the relationship: Normalized Turnaround Time as a Function of Processor Utilization Table 9.6 Formulas for Single-Server Queues with Two Priority Categories
  • 15. Overall Normalized Response Time Normalized Response Time for Shorter Processes Normalized Response Time for Longer Processes Results Simulation Alternative Scheduling Policies Fair-Share Scheduling Objective: Look at scheduling pools (processes organized as threads). Fair-Share Scheduling
  • 16. sets processor resources to users who have had more than their fair share and more to those who have had less than their fair share Fair-Share Scheduling Fair-Share Scheduler Traditional UNIX Scheduling Objective: Understand the scheduling technique used in traditional UNIX systems. Traditional UNIX Scheduling -sharing
  • 17. interactive environment users while ensuring that low-priority background jobs do not starve Employs multilevel feedback using round robin within each of the priority queues -second preemption Scheduling Formula Bands access to block devices and to allow the operating system to respond quickly to system calls of priority, the bands are: Example of
  • 18. Traditional UNIX Process Scheduling Summary decisions with respect to the execution of processes: -term – determines when new processes are admitted to the system -term – part of the swapping function and determines when a program is brought into main memory so that it may be executed -term – determines which ready process will be executed next by the processor most important characteristic of a system; from a system point of view, throughput or processor utilization is important Chapter 9 Uniprocessor SchedulingSlide 3Processor SchedulingTable 9.1 Types of SchedulingScheduling and Process State TransitionsFigure 9.2 Nesting of Scheduling FunctionsQueuing DiagramLong-Term SchedulerMedium-Term SchedulingShort-Term SchedulingSlide 12Short Term
  • 19. Scheduling CriteriaShort-Term Scheduling Criteria: PerformanceTable 9.2 Scheduling CriteriaPriority QueuingAlternative Scheduling PoliciesSelection FunctionDecision ModeNonpreemptive vs PreemptiveSlide 21Table 9.4 Process Scheduling ExampleSlide 23Slide 24First- Come-First-Served (FCFS)Slide 26Round RobinSlide 28Effect of Size of Preemption Time QuantumFigure 9.6b Effect of Size of Preemption Time QuantumVirtual Round Robin (VRR)Shortest Process Next (SPN)Slide 33Exponential Smoothing CoefficientsUse Of Exponential AveragingUse Of Exponential AveragingShortest Remaining Time (SRT)Highest Response Ratio Next (HRRN)Slide 39Feedback SchedulingFeedback PerformanceSlide 42Performance ComparisonSlide 44Slide 45Overall Normalized Response TimeNormalized Response Time for Shorter ProcessesNormalized Response Time for Longer ProcessesResultsSlide 50Slide 51Fair-Share SchedulingSlide 53Fair-Share SchedulerSlide 55Traditional UNIX SchedulingScheduling FormulaBandsExample of Traditional UNIX Process SchedulingSummary Chapter 10 Multiprocessor and Real-Time Scheduling Seventh Edition By William Stallings Operating Systems: Internals and Design
  • 20. Principles Classifications of Multiprocessor Systems ● Loosely coupled or distributed multiprocessor, or cluster ● Functionally specialized processors ● Tightly coupled multiprocessor Synchronization Granularity and Processes Independent Parallelism synchronization among processes separate, independent application or job - sharing system
  • 21. Coarse and Very Coarse-Grained Parallelism level for concurrent processes running on a multiprogrammed uniprocessor change to user software Medium-Grained Parallelism collection of threads within a single process of an application interaction among the threads of an application, leading to a medium-grain level of synchronization frequently, scheduling decisions concerning one thread may affect the performance of the entire application Fine-Grained
  • 22. Parallelism parallelism than is found in the use of threads approaches Design Issues ● Assignment of processes to processors ● Use of multiprogramming on individual processors ● Actual dispatching of a process depend on the degree of granularity of applications and the number of processors available Assignment of Processes to Processors
  • 23. processes on demand (assume uniform/symmetric architecture) – Static assignment: process stays on processor it is assigned for lifetime – Dynamic assignment: process can change processor. t: less overhead, local caching, gang scheduling be idle, with an empty queue, while another processor has a backlog n is dynamic load balancing Assignment of Processes to Processors require some way of assigning a process to a processor Master/Slave
  • 24. Architecture ays run on a particular processor uniprocessor multiprogramming operating system cause one processor has control of all memory and I/O resources Peer Architecture -scheduling from the pool of available processes Process Scheduling cated to processors queues based on priority -server queuing architecture
  • 25. Thread Scheduling on is separated from the rest of the definition of a process execute concurrently in the same address space structuring aid and to overlap I/O with processing parallelism in an application - processor systems have an impact on applications that require significant interaction among threads Approaches to Thread Scheduling a set of related thread scheduled to run on a set of processors at the same time, on a one-to-one basis
  • 26. processes are not assigned to a particular processor provides implicit scheduling defined by the assignment of threads to processors the number of threads in a process can be altered during the course of execution ● Load Sharing ● Gang Scheduling ● Dedicated (static) processor assignment ● Dynamic scheduling Load Sharing uniprocessor environment – Load distributed evenly across processors – No centralized schedule required – Global queue can be organized and accessed using any scheduling scheme (ch 9) -come-first-served
  • 27. Disadvantages of Load Sharing n of memory that must be accessed in a manner that enforces mutual exclusion same processor ommon pool of threads, it is unlikely that all of the threads of a program will gain access to processors at the same time performance Gang Scheduling that make up a single process – If closely related processes execute in parallel, synchronization blocking may be reduced – Scheduling overhead may be reduced since single scheduling decision affects number of processors and processes/threads
  • 28. ful for medium-grained to fine-grained parallel applications whose performance severely degrades when any part of the application is not running while other parts are ready to run Figure 10.2 Example of Scheduling Groups With Four and One Threads Dedicated Processor Assignment assigned to a processor that remains dedicated to that thread until the application runs to completion thread of an application is blocked waiting for I/O or for synchronization with another thread, then that thread’s processor remains idle or hundreds of processors, processor utilization is no longer so important as a metric for effectiveness or performance
  • 29. of a program should result in a substantial speedup of that program Figure 10.3 Application Speedup as a Function of Number of Threads Dynamic Scheduling system tools that permit the number of threads in the process to be altered dynamically uld allow the operating system to adjust the load to improve utilization making scheduling decisions primarily limited to processor allocation processor assignment for applications that can take advantage of it Real-Time Systems perhaps the
  • 30. most important component result of the computation but also on the time at which the results are produced take place in the outside world se events occur in “real time” and tasks must be able to keep up with them Hard and Soft Real-Time Tasks Hard real-time task deadline unacceptable damage or a fatal error to the system Soft real-time task
  • 31. that is desirable but not mandatory schedule and complete the task even if it has passed its deadline Periodic and Aperiodic Tasks ed as: finish time Characteristics of Real Time Systems Determinism ong an operating system delays
  • 32. before acknowledging an interrupt or within predetermined time intervals processor time, no system will be fully deterministic Responsiveness to external events -time systems that must meet timing requirements imposed by individuals, devices, and data flows external to the system takes an operating system to service the interrupt User Control -time operating system than in ordinary operating systems ne-grained control over task priority tasks and to specify relative priorities within each class
  • 33. Reliability -time systems than non-real time systems -time systems respond to and control events in real time so loss or degradation of performance may have catastrophic consequences such as: Fail-Soft Operation such a way as to preserve as much capability and data as possible -time system is stable if the system will meet the deadlines of its most critical, highest-priority tasks even if some less critical task deadlines are not always met Real-Time Scheduling
  • 34. of Process Real-Time Scheduling Classes of Real-Time Scheduling Algorithms Deadline Scheduling -time operating systems are designed with the objective of starting real-time tasks as rapidly as possible and emphasize rapid interrupt handling and task dispatching -time applications are generally not concerned with sheer speed but rather with completing (or starting) tasks at the most valuable times requirement of completion (or initiation) at the most valuable time Information Used for Deadline Scheduling
  • 35. Table 10.2 Execution Profile of Two Periodic Tasks Figure 10.5 Scheduling of Periodic Real-Time Tasks With Completion Deadlines (Based on Table 10.2) Figure 10.6 Scheduling of Aperiodic Real-Time Tasks With Starting Deadlines Table 10.3 Execution Profile of Five Aperiodic Tasks Figure 10.7 Rate Monoton ic Scheduli ng
  • 36. Periodic Task Timing Diagram Figure 10.8 Value of the RMS Upper Bound Table 10.4 Priority Inversion -based preemptive scheduling scheme y relevant in the context of real-time scheduling -known instance involved the Mars Pathfinder mission higher priority task to wait for a lower priority task Unbounded Priority Inversion
  • 37. Priority Inheritance Linux Scheduling -in-first-out real-time threads -robin real-time threads -real-time threads e used Linux Real-Time Scheduling Non-Real-Time Scheduling the SCHED_OTHER class did not scale well with
  • 38. increasing number of processors and processes scheduling data structures for each processor in the system priority scheduler known as the O(1) scheduler appropriate process and assign it to a processor is constant regardless of the load on the system or number of processors Linux Schedul ing Data Structur es Figure 10.11 UNIX SVR4 Scheduling
  • 39. earlier UNIX systems and the introduction of a set of 160 priority levels divided into three priority classes SVR Priori ty Class es Figure 10.12 SVR Priority Classes SVR4 Dispatch Queues Figure 10.13 UNIX FreeBSD
  • 40. Scheduler SMP and Multicore Support scheduling for a SMP or multicore system multicore systems – a scheduler that only migrates a thread when necessary to avoid having an idle processor systems so that it is no longer a function of the number of threads in the system Windows Thread Dispatching Priorities Figure 10.14
  • 41. Interactivity Scoring voluntary sleep time versus its runtime is below a certain threshold reshold is defined in the scheduler code and is not configurable lower half of the range of interactivity scores upper half of the range of interactivity scores Thread Migration the last processor that it ran on processor Windows Scheduling ities in Windows are organized into two bands or classes:
  • 43. scheduling algorithms are less significant in a multiprocessor system -time process is one that is executed in connection with some process or function or set of events external to the computer system and that must meet one or more deadlines to interact effectively and correctly with the external environment -time operating system is one that is capable of managing real- time processes ing to relative deadlines are appropriate in this context Chapter 10 Multiprocessor and Real-Time SchedulingClassifications of Multiprocessor SystemsSynchronization Granularity and ProcessesIndependent ParallelismCoarse and Very Coarse-Grained ParallelismMedium-Grained ParallelismFine-Grained ParallelismDesign IssuesAssignment of Processes to ProcessorsAssignment of Processes to ProcessorsMaster/Slave ArchitecturePeer ArchitectureProcess SchedulingThread SchedulingApproaches to Thread SchedulingLoad SharingDisadvantages of Load SharingGang SchedulingSlide 21Dedicated Processor AssignmentSlide 23Dynamic SchedulingReal-Time SystemsHard and Soft Real-Time TasksPeriodic and Aperiodic TasksCharacteristics of Real Time SystemsDeterminismResponsivenessUser ControlReliabilityFail-Soft OperationReal-TimeReal-Time SchedulingClasses of Real-Time Scheduling AlgorithmsDeadline SchedulingInformation Used for Deadline
Operating Systems, Stallings Chapter 9 Notes
CSci 430 Spring 2019
Overview
In a multiprogramming system, multiple processes exist concurrently in main memory. As we have seen and discussed many times, it is imperative that we support multiple concurrent processes in main memory, so that we can maximize the utilization of the CPU and other peripherals and devices in the computing system. However, when we have (many) more processes than we have CPUs to execute them, we need to periodically
switch between the processes in some manner. This is especially true because many processes perform operations that will cause them to have to wait long periods (from the CPU's point of view), and if the CPU does nothing while the process is waiting, the resource is wasted, and useful work that could be done will not be performed. We need advanced memory management techniques, as we studied in the previous two chapters, to ensure that we have a sufficiently large number of processes available and able to be chosen from. But until now we have not looked at the issues and methods we should use in order to select among available processes in an efficient manner, so as to utilize the CPU and other resources wisely. In this chapter we will begin examining processor scheduling and management issues. In particular, we will break the issue up into three different time frames: short-, medium- and long-term scheduling concerns and methods. In chapter 9, which you are to read through this week and next, we restrict our discussion to systems with a single CPU in terms of the process scheduling issues. In chapter 10, we will look at some of the additional complications we need to consider when we are dealing with systems that have two or more available CPUs for scheduling processes in the operating system.
Learning Objectives
After studying this chapter, you should be able to:
- Explain the differences among long-, medium- and short-term scheduling.
- Assess the performance of different scheduling policies.
- Simulate different preemptive and non-preemptive short-term scheduling policies by hand, and understand the issues and mechanisms involved with their implementation.
9 Uniprocessor Scheduling
9.1 Types of Processor Scheduling
The aim of processor scheduling is to assign processes to be executed by the processor(s) over time, in a way that meets system objectives. We may have many different objectives in mind that we want to try and optimize, and as always in such design decisions, some are mutually exclusive, such that optimizing one necessarily means performing worse on some other measure. Objectives might include response time or overall system
throughput, or simply trying to maximize CPU utilization if there are processes waiting to execute. In many systems, the scheduling of processes is broken down into three separate functions: long-, medium- and short-term scheduling. The relative time scales of the activities performed change what the system might focus on, and what measures or features it can successfully try and optimize. Figure 9.2 is important in understanding the different time frames, and how the scheduling of processes relates to each other on these varying time frames. We will discuss this in a little more detail next, but basically short-term scheduling deals with the most basic of our process state model concepts (from chapter 3), the ready/running/blocked transitions. Medium-term scheduling involves decisions about if and when to suspend some processes out of main memory. And generally, long-term scheduling has to do with decisions in large systems of when to allow processes to begin running, and possibly terminating processes early if needed.
Long-Term Scheduling
The long-term scheduler determines which programs are admitted to the system for processing. Thus it controls the degree of multiprogramming in the system. When the long-term scheduler decides to admit a new job to the system, it can add it to the queue for the short-term scheduler, which causes the program to be loaded into memory as soon as it is scheduled. Or it can start the process in a swapped-out condition, which might allow the system to create some structures for the process on disk, but keeps memory free for actively running processes. Most general purpose OSes will simply start processes immediately. On big batching systems or HPC computers, a batching system may hold jobs as needed, and only create them when sufficient resources are available. In such a case, the long-term scheduler might admit/create new jobs on a first-come-first-served (FCFS) basis, or it could use information about the requirements of the jobs to schedule them, for example trying to keep a mix of processor-bound and I/O-bound jobs in the system.
Medium-Term Scheduling
Medium-term scheduling is part of the swapping function. We have discussed many of the issues involved with the swapping decision already. Basically, swapping processes in/out can be used to manage the availability of primary memory on the system. So if memory is tight, the swapper can move some processes completely out of memory for a short period. But if CPU utilization is becoming low and processes are swapped out, then the medium-term scheduler might attempt to swap back in some processes in order to increase the degree of multiprogramming and thus increase CPU and resource utilization.
Short-Term Scheduling
In terms of the frequency of execution, the long-term scheduler usually executes rather infrequently, and the medium-term scheduling/swapping decisions are also going to be somewhat infrequent. The short-term scheduler, however, often needs to be invoked very frequently. For example, every time a process becomes blocked because of read or write activities, the short-term scheduler will be run to select the next process to get the CPU. Also, we often want to allow the system to preemptively cause running processes to be switched, so that we can support the illusion of multiprogramming in
the system. Without preemption, the system appears to hang whenever it executes a program that is CPU bound and runs for a long time without returning for I/O.
9.2 Scheduling Algorithms
The purpose of process scheduling is to try and maximize performance. In order to successfully maximize performance, we have to have some idea of the criteria we want to measure, and thus try and improve, by selecting processes in some specific manner. As the text mentions, there are many different criteria we might try and maximize. The textbook breaks these criteria into two broad classes, user-oriented criteria and system-oriented criteria. The names should be fairly self-explanatory. User-oriented criteria are those measures that are visible to, and most affect, user perceptions of how the system is performing. A classic example would be the response time of the system: how fast it appears to begin working on any command you ask it to perform. User-oriented criteria can be viewed from the perspective of a human user of the system, or from the view of a single process executing in the
system. Examples of system-oriented criteria would be throughput and CPU utilization. A user might just want their jobs or processes to be completed as quickly as possible. But overall, from the system perspective, we want to complete as many jobs as possible in the shortest time possible. And one way to achieve this is to keep the CPU (and other resources) as busy as possible, given the current set of jobs we are managing. On single-user type systems, such as your personal computer or smart phone, system-level criteria are probably not important; it is only important that the user is getting the performance they expect from the system. System-level performance criteria are more important on large shared-user systems, such as HPC supercomputers and large batching systems and server systems. Table 9.2 of our textbook gives a summary of some of the types of performance criteria you should be familiar with, in the context of the scheduling of processes.
Priorities
While it is fine in theory to create scheduling algorithms that only look at particular criteria in order to try and fine-tune the scheduling, in practice we often need a more explicit and finer level of control to be available to the system operators and designers to explicitly tell the system
which processes to prefer over which others when scheduling. Priorities are used in many systems to allow this fine-grained level of control. Priorities can be explicitly assigned to the entities being scheduled (e.g. processes), and the level of the priority can be taken into account when making scheduling decisions. For example, we could implement a strict priority-based scheduler, with round-robin scheduling among processes of equal priority. In a strict priority-based mechanism, we would always choose among the highest priority processes currently in the system, and schedule them first. As shown in figure 9.4 of our textbook, one way this could be achieved is by using separate priority queues for each level in our priority scheme. Then, when a process needs to be scheduled, we would first see if any processes are in the highest priority queue (and select the one at the head of this queue, if using round-robin scheduling within the priority queuing scheme). Only if the highest priority queue is empty would we then check the next highest queue for ready processes. A small sketch of this selection logic appears below.
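To make the per-level queue idea concrete, here is a minimal C++ sketch of that selection step, assuming a fixed number of priority levels and simple integer process ids; the names (readyQueues, selectNextPid) are illustrative only and are not taken from the textbook or from any assignment code.

#include <deque>
#include <vector>

const int NUM_PRIORITY_LEVELS = 4;   // level 0 is the highest priority
const int NO_READY_PROCESS = -1;

// One FIFO ready queue per priority level.  Round robin within a level
// falls out naturally because a preempted process is pushed on the back.
std::vector<std::deque<int>> readyQueues(NUM_PRIORITY_LEVELS);

// Scan from the highest priority level down and dispatch the process at
// the head of the first non-empty queue.
int selectNextPid()
{
  for (int level = 0; level < NUM_PRIORITY_LEVELS; level++)
  {
    if (!readyQueues[level].empty())
    {
      int pid = readyQueues[level].front();
      readyQueues[level].pop_front();
      return pid;
    }
  }
  return NO_READY_PROCESS;  // nothing is ready at any priority level
}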
One problem with this pure priority scheduling scheme we have just described is that lower priority processes may suffer starvation (and thus this scheme can be unfair in its scheduling policy). If there are always high priority processes running or continually entering the system, low priority processes will never be scheduled to run. Thus strict priority scheduling policies are rarely used. However, by adding a concept of aging, where the priority of a process can change dynamically, we can fix this problem. For example, the longer a process is in the system, the higher its priority might become. And conversely, the more a process executes, the lower we might set its priority, for some time. Such modified priority schemes are very common in modern operating systems. For example, UNIX and Windows variants use modifications of process priority, with dynamically changing priorities, in their implementations of their process scheduling mechanisms.
Alternative Scheduling Policies
In order to better understand the trade-offs we need to make when thinking about process scheduling mechanisms, our textbook presents and compares several short-term scheduling policies in this section. I will summarize some of the important points made in this section, but you should make sure you understand all of the policies discussed in this section, how they
work, and their effects on various criteria, such as throughput, response time, etc. Table 9.3 in our text is an important summary of the basic conclusions about the policies we discuss in this section. First I will talk about the rows of this table, and then we will summarize briefly the various scheduling policies. The selection function determines how the policy decides which process will be selected next from among the pool of available ready processes. For example, for the first-come-first-served (FCFS) policy, we will select the process that has been waiting the longest as the next to be scheduled to run. The process that has been waiting the longest must have arrived first, before any other process that might be waiting. Thus, by selecting the process by max(w), the one that has waited the longest, we are selecting the first process to arrive among the currently waiting processes (FCFS). The selection functions for all of the other policies should make sense to you, if you understand what the w (wait time), e (execution time) and s (total service time needed) parameters mean.
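As a quick recap in that notation (stated here for convenience, and consistent with the policy descriptions that follow): FCFS selects by max[w], SPN by min[s], SRT by min[s - e], and HRRN by max[(w + s) / s]; round robin and feedback do not really use a selection function in this sense, since they simply cycle through their ready queue(s) at each time quantum.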
The decision mode is an important characteristic of process scheduling policies. Basically, non-preemptive policies are not really suitable for multi-user interactive systems. When the system schedules processes in a non-preemptive manner, this means that once the process is running, it will continue running until it is finished. This is fine in a batching system, but in an interactive user-based system, the system cannot afford to stop responding to user input just because there is a long-running process currently scheduled and running. Thus we need to use preemptive scheduling policies for interactive user systems. In a preemptive system, the running process can be halted and returned back to the ready queue (or a wait queue), based on some events. As discussed in the text, we can support different types of preemption, such as only preempting, if needed, when a new process arrives (as SRT does). However, again, if we are talking about a system that supports interactive users, we will need to support time quantum (and I/O blocking based) preemption. Throughput and response time represent two of the scheduling criteria
that we might be interested in influencing. As we have discussed, throughput is a system-level criterion, whereas response time is more user-level focused. Overhead in this table is a judgment on how costly the algorithm is to implement. If the overhead is high for the policy, it may take a lot of memory and/or CPU cycles in order to implement the data structures and calculate the selection function of the given policy. The effect on processes basically gives some idea about whether the policy is fair or not, and if it is not fair, which types of processes are likely to be favored or penalized by the policy. I/O bound processes are those that need to do a lot of I/O, such as reading or writing large amounts of data to/from disk. The opposite of an I/O bound process is a compute (CPU) bound process, which once loaded into memory does not need additional I/O, and will effectively crunch data at a high speed (and not relinquish the CPU voluntarily). And finally, starvation can be a consideration for some process selection policies. As we mentioned, starvation is possible for strict priority-based schedulers, as low priority processes may never be selected to run if high priority processes are always present. We have talked about
starvation in previous chapters on concurrency, and starvation is always a possible problem when running concurrent processes. As we see in this table, some process selection policies can be created that are guaranteed to avoid starvation, such as the simple round robin policy. I will now briefly mention each of the short-term scheduling policies that the textbook presents and analyzes in this section. Make sure you understand how these policies work and are implemented, and the trade-offs they make in terms of the parameters they are optimizing. Also, we will mention and use the process finish time, turnaround time (Tr) and the normalized turnaround ratio (Tr/Ts) when we discuss these policies. The start and finish times should be obvious: the start time is when the process first enters the system (not the first time step when it is run), and the finish time is the time step in which the process finishes execution. The turnaround time, also known as the residence time Tr, is simply the total time the process spent in the system (i.e., the finish time minus the start time). The turnaround time by itself
is not very useful, as it can vary widely as a function of the service time of the process. The service time Ts is how many time steps the process needs to execute in order to complete its task. Raw turnaround time is not useful for comparing the relative performance of different policies, because longer processes will have long turnaround times and short processes may have short (or not so short) turnaround times. By dividing the turnaround time by the process's service time, we get a better sense of the magnitude of time the process spent waiting (Tr/Ts), and we can better average and compare such ratios to judge the relative effects of scheduling policies on system responsiveness.
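As a small made-up example of these measures: a process that enters the system at time 2, needs Ts = 5 time units of service, and finishes at time 17 has a turnaround time of Tr = 17 - 2 = 15 and a normalized turnaround of Tr/Ts = 15/5 = 3, meaning it spent three times its actual service requirement in the system. A process with Tr/Ts = 1 experienced no waiting at all, so values close to 1 indicate a responsive schedule for that process.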
First-Come-First-Served (FCFS)
FCFS is the simplest scheduling policy, and thus has (almost) no overhead to implement it on a system. FCFS can also be described as (and called) a strict first-in-first-out (FIFO) queuing scheme. FCFS just requires a simple queue, and it is non-preemptive in nature. When a process arrives, we just put it on the back of the queue. And when a process finishes, or we need to schedule a new process to run, we take the one at the head of the queue and it will execute until it completes (or, if we have a blocked/wait state, it executes until its next I/O request). Basic batching systems use this scheduling policy. FCFS is not particularly good in terms of response time and other characteristics, and as mentioned it can be unfair to I/O bound processes: I/O bound processes will get returned back to the end of the queue, while compute bound processes, once they get scheduled, will monopolize the CPU (since there is no preemption) and finish up, leaving the I/O bound processes waiting around longer than necessary.
Round Robin
Round robin is basically FCFS, but where we add a preemption mechanism. We have one queue, and processes that block for I/O are returned to the queue as before. But we also cause the running process to be periodically preempted when it exceeds its time slice (called the time slice quantum). Because of this, the RR policy is also sometimes known as a time slicing policy. By adding the ability to preempt, RR addresses the unfairness of FCFS toward I/O bound processes. However, throughput and response time may not be very good with a strict RR preemptive scheduler. A minimal sketch of one RR time step follows.
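Here is a minimal C++ sketch of a single round robin time step, assuming one FIFO queue of integer process ids plus simple per-process bookkeeping arrays; these names (readyQueue, remainingTime, sliceUsed, runOneTimeStep) are illustrative and deliberately do not match the assignment starter code included later in these materials.

#include <deque>

std::deque<int> readyQueue;   // pids of ready processes, head runs next
int remainingTime[20];        // service time still needed, per pid
int sliceUsed[20];            // time used in the current quantum, per pid

// Run the process at the head of the ready queue for one time unit.
// Returns the pid that ran, or -1 if no process was ready.
int runOneTimeStep(int quantum)
{
  if (readyQueue.empty())
    return -1;

  int pid = readyQueue.front();
  remainingTime[pid]--;
  sliceUsed[pid]++;

  if (remainingTime[pid] == 0)
  {
    readyQueue.pop_front();     // process finished and leaves the system
  }
  else if (sliceUsed[pid] == quantum)
  {
    readyQueue.pop_front();     // quantum expired: preempt it,
    sliceUsed[pid] = 0;         // reset its slice counter, and rotate it
    readyQueue.push_back(pid);  // to the back of the queue
  }
  return pid;
}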
Shortest Process Next (SPN)
The SPN policy is another nonpreemptive policy. Further, in order to implement SPN, each process must state up front how long it will need to execute. The selection function for SPN is to take the currently waiting process with the shortest expected service time as the next process to run. There are several problems with this policy with regard to implementation in real systems. As we have already mentioned, since it is nonpreemptive, it is not suitable for most interactive systems. Also, for most real systems we don't often really know how long a process will need to execute. Thus we can require that an estimate be given for the length of the process when it is created and submitted. However, then we need to decide what we want to do with processes that have exceeded their stated execution service time: should they be terminated? All of these considerations need to be tracked in a real implementation, which can mean that SPN needs a fair amount of overhead in order to be implemented. SPN is actually fairly good in terms of its throughput and response time, at least for short processes. However, SPN can penalize long
processes. This should be intuitively obvious: in a system where lots of short processes continually enter the system, longer processes may have to wait a long time before they get selected to be run. In extreme cases starvation is possible, where a long process is never selected because short processes are continually available. So SPN can be unfair to long processes.
Shortest Remaining Time (SRT)
In order to implement the SRT policy, we need to keep track of how much execution time each process still has to do. In the version discussed in our textbook, the SRT policy is preemptive, but only at the time of the creation/entry of a new process. Whenever a new process becomes available, the currently running process may be preempted if the new process is shorter than the remaining time on the currently running process. In order to calculate the remaining time of processes, we of course need to know how much total time the process needs to run; thus the shortcomings for real implementations that we discussed for SPN are also present for SRT, and it can also need more overhead than other schedulers as well. SRT can provide good throughput and response times, though it still might penalize long processes, for the same reasons as SPN discussed previously.
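A minimal C++ sketch of the SRT selection step is shown here, assuming each simulated process records its total service time and the time it has executed so far; the struct and function names are illustrative only (with executedTime always zero and no preemption, the same selection gives SPN).

#include <vector>

struct SimProcess
{
  int serviceTime;    // total time the process needs
  int executedTime;   // time it has run so far
  bool arrived;       // has it entered the system yet?
  bool finished;      // has it completed?
};

// Select the ready process with the least remaining time (s - e).
// Returns an index into the vector, or -1 if nothing is ready.
int selectShortestRemaining(const std::vector<SimProcess>& procs)
{
  int best = -1;
  int bestRemaining = 0;
  for (int i = 0; i < (int) procs.size(); i++)
  {
    if (!procs[i].arrived || procs[i].finished)
      continue;
    int remaining = procs[i].serviceTime - procs[i].executedTime;
    if (best == -1 || remaining < bestRemaining)
    {
      best = i;
      bestRemaining = remaining;
    }
  }
  return best;
}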
Highest Response Ratio Next (HRRN)
In order to understand HRRN, you must understand what the normalized turnaround time is and how it is calculated. If we can calculate the current (projected) normalized turnaround time for all of the processes currently in our system, HRRN simply says that we should select the process that has the highest (i.e. the worst) such measure to be the next to execute. This is intuitive if we are trying to minimize turnaround times (and thus improve the throughput) of our system. By selecting the process that is doing badly on this measure next, we have a chance to improve its turnaround time, and thus improve our system's overall average. Basically, by calculating and choosing processes based on their turnaround time, we are adding a type of aging mechanism. HRRN is nonpreemptive. And as with the previous two policies, we need to know the expected service time in order to calculate the response ratio. This, coupled with its nonpreemptive nature, makes it unsuitable for interactive systems. However, HRRN does fix some of the deficiencies that SRT and SPN have in their unfairness towards long processes: long processes will have increasingly larger response ratios the longer they are in the system, and thus will be guaranteed to be selected at some point to be executed, thus avoiding starvation.
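As a small made-up numeric example of the response ratio (time spent waiting plus expected service time, divided by expected service time): a short process that has waited w = 6 time units and needs s = 2 has ratio (6 + 2) / 2 = 4, while a long process that has also waited 6 units but needs s = 12 has ratio (6 + 12) / 12 = 1.5, so the short process is chosen first. But if the long process keeps waiting, its ratio keeps growing, so it is eventually guaranteed to have the highest ratio and be selected.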
Feedback
The feedback policy as described in our book is basically a type of priority scheme. As discussed in this section, the feedback policy uses a preemptive (time-sliced) mechanism, coupled with a dynamic priority mechanism. So basically, this is not a strict priority-based policy, but a priority scheduler with a dynamic priority mechanism, as we mentioned before. Also, as we said previously, modern interactive OS systems basically use some form of dynamic-priority, time-sliced scheduler, so this policy is the most important one to understand in order to comprehend how real OS schedulers work. The feedback policy schedules processes on a preemptive (at a time quantum) basis. Processes, when they first enter the system, have a particular priority assigned to them, usually the highest priority. After each preemption of the process, its priority is lowered by one level (down to some minimum priority).
A short process will complete quickly, without migrating very far down the priority hierarchy. A longer process will gradually drift downward. Thus newer, shorter processes are favored over older, longer processes. The performance characteristics of a feedback/priority scheduler will vary with the time quantum parameter (as will RR). Also, the details of the dynamic priorities will have large effects on the performance characteristics as well. However, since this policy is preemptive, it is suitable for interactive systems. And because of the priorities, it can be tuned in various ways to try and avoid starvation, and to give fair treatment to different types of short and long and I/O and compute bound processes, as needed.
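Reusing the multi-level queue idea sketched earlier, the core of a feedback scheduler is just the demotion step on each quantum expiry. The following minimal C++ sketch is illustrative only; real schedulers add many refinements such as per-level quanta and periodic priority boosting to avoid starvation.

#include <deque>
#include <vector>

const int NUM_LEVELS = 4;                        // level 0 is the highest priority
std::vector<std::deque<int>> feedbackQueues(NUM_LEVELS);
int currentLevel[20];                            // current priority level, per pid

// A newly arriving process always starts at the highest priority level.
void admitNewProcess(int pid)
{
  currentLevel[pid] = 0;
  feedbackQueues[0].push_back(pid);
}

// Called when a running process uses up its time quantum: demote it one
// level (down to the lowest level) and place it at the back of that queue.
void preemptAndDemote(int pid)
{
  int level = currentLevel[pid];
  if (level < NUM_LEVELS - 1)
    level++;
  currentLevel[pid] = level;
  feedbackQueues[level].push_back(pid);
}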
Operating Systems, Stallings Chapter 10 Notes
CSci 430 Spring 2019
Overview
Though the addition of more than one CPU into a system does introduce some new issues to be considered when scheduling processes, the basic mechanisms and analysis of process scheduling are not as different as you might think. Section 10.1 of our textbook covers three main categories of multiprocessor systems:
1. Loosely coupled or distributed memory cluster systems
2. Functionally specialized processors
3. Tightly coupled, shared memory multiprocessors
Of the three categories, the third category is the most familiar one to most of the students of this course. Modern general purpose computing systems often have multi-core chips and/or two or more CPU chips as part of the system. However, all of the CPUs on such multi-core chips are functionally equivalent. Modern general purpose personal computers actually often do have a specialized CPU, different from the general purpose CPUs: this is the CPU in your graphical processor or graphics card. However, usually the OS does not schedule work or processes to be run on this CPU; instead, the
video graphics drivers use this specialized CPU to perform rendering and display driver tasks. The first category, scheduling on a distributed memory cluster system, is an issue very important to modern supercomputing cluster systems. Most supercomputers used in scientific research are what are known as commodity computing clusters. They are basically a collection of large numbers of standard computers, connected together using high-speed networking connections. Unlike your personal computer or laptop, all of the computers in such a cluster have their own set of memory, thus they are often referred to as distributed memory systems, in contrast to your typical personal computer that has multiple CPUs sharing a common pool of RAM memory. In this course, we will concentrate on the issues of processor scheduling for shared memory multiprocessor systems, but you should read the materials about the other two categories of systems as well in section 10.1 of our textbook. One important consideration when scheduling processes on a multiprocessor system is how to assign processes to available processors. The
simplest approach is to treat all of the CPUs as an available pool of processors, and assign processes to processors on demand. The question arises at this point of whether, when a process starts running on a CPU, the assignment should be static or dynamic. The static vs. dynamic decision can have consequences. In a multi-core CPU chip, the cores have local cache memory that is often separate, or not shared, among the cores (the Level 1 (L1) and/or L2 cache levels). When a process runs on a CPU, it begins using the cache memory local to that CPU core. So, if we use dynamic scheduling, the process could be assigned and run on a different core at a later time. However, this has the disadvantage that all of the loaded cache data becomes invalid once the process is switched to another core. This can be inefficient, as all of the cached data will end up needing to be reloaded once again into the new core's L1 cache. However, static assignment has its own problems. In static assignment, once a process begins using a particular core, we don't switch it to use another core. The obvious problem occurs if you have two very long-running processes that get assigned to the same core: they could end up competing to be scheduled on the same core, while other cores go idle, with no statically
assigned processes running on them. Thus, in a modern OS that handles multi-core scheduling, it is common to see a dynamic allocation implementation. However, normally the processor scheduler has built-in preferences that cause processes to greatly prefer to keep being scheduled on the same CPU (known as processor affinity). Some parameters can be set or tweaked for a running process, to make this affinity for a particular core stronger or weaker. Thus, with this concept of processor affinity, processes tend to try to execute on the same core they begin working on, but can be switched over to a new core when necessary.
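As a concrete illustration of such a tweakable parameter on one common platform, Linux exposes a per-process CPU affinity mask through the sched_setaffinity system call. This is a Linux-specific sketch added here for illustration; it is not part of the textbook or the course assignments.

// Linux only: _GNU_SOURCE is needed for sched_setaffinity and the CPU_* macros.
#ifndef _GNU_SOURCE
#define _GNU_SOURCE
#endif
#include <sched.h>
#include <cstdio>
#include <iostream>

int main()
{
  cpu_set_t mask;
  CPU_ZERO(&mask);     // start from an empty CPU set
  CPU_SET(0, &mask);   // allow core 0
  CPU_SET(1, &mask);   // allow core 1

  // pid 0 means "the calling process"; restrict it to cores 0 and 1.
  if (sched_setaffinity(0, sizeof(mask), &mask) != 0)
  {
    std::perror("sched_setaffinity");
    return 1;
  }
  std::cout << "Process restricted to CPUs 0 and 1" << std::endl;
  return 0;
}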
p5-start.cpp

/**
 * @author Jane Student
 * @cwid   123 45 678
 * @class  CSci 430, Spring 2018
 * @ide    Visual Studio Express 2010
 * @date   November 15, 2018
 * @assg   prog-04
 *
 * @description This program implements a simulation of process
 *   scheduling policies.  In this program, we implement round-robin
 *   scheduling, where the time slice quantum can be specified as a
 *   command line parameter.  And we also implement the shortest
 *   remaining time (SRT) scheduling policy.
 */
#include <stdlib.h>
#include <iostream>
#include <iomanip>
#include <fstream>
#include <string>
#include <list>
using namespace std;

// global constants
// I won't test your round robin implementation with more than 20 processes
const int MAX_PROCESSES = 20;
const int NO_PROCESS = 0;

// Simple structure, holds all of the information about processes, their names,
// arrival and service times, that we are to simulate.
typedef struct
{
  string processName;
  int arrivalTime;
  int serviceTime;
  // holds running count of time slices for current time quantum; when
  // sliceTime == quantum, the time slice is up
  int sliceTime;
  // holds total number of time steps currently run; when == to
  // serviceTime the process is done
  int totalTime;
  // holds time when process finishes, used to calculate final stats,
  // like T_r, T_r/T_s
  int finishTime;
  // a boolean flag, we will set this to true when the process is complete
  bool finished;
} Process;

// Process table, holds table of information about processes we are simulating
typedef struct
{
  int numProcesses;
  Process* process[MAX_PROCESSES];
} ProcessTable;

/** Create process table
 * Allocate memory for a new process table.  Load the process
 * information from the simulation file into a table with the process
 * information needed to perform the simulation.  At the same time we
 * initialize other information in the process table for use in the
 * simulation.  Return the newly created ProcessTable.
 *
 * @param processFilename The name (char*) of the file to open and read
 *   the process information from.
 *
 * @returns ProcessTable* The newly allocated and initialized ProcessTable
 *   structure.
 */
ProcessTable* createProcessTable(char* processFilename)
{
  ifstream simprocessfile(processFilename);
  ProcessTable* processTable;
  int pid;
  string processName;
  int arrivalTime;
  int serviceTime;

  // If we can't open file, abort and let the user know problem
  if (!simprocessfile.is_open())
  {
    cout << "Error: could not open process simulation file: "
         << processFilename << endl;
    exit(1);
  }

  // Format of file is
  //   ProcessName1 ArrivalTime1 ServiceTime1
  //   ProcessName2 ArrivalTime2 ServiceTime2
  //   ...
  //   ProcessNameN ArrivalTimeN ServiceTimeN
  // Where the name is any arbitrary string identifier, and ArrivalTime
  // and ServiceTime are integer values
  pid = 0;
  processTable = new (ProcessTable);
  while (simprocessfile >> processName >> arrivalTime >> serviceTime)
  {
    // allocate a new process to hold information
    Process* process = new (Process);
    processTable->process[pid] = process;

    // load information into process read from simulation file
    process->processName = processName;
    process->arrivalTime = arrivalTime;
    process->serviceTime = serviceTime;

    // initialize other process information for the simulation
    process->sliceTime = 0;
    process->totalTime = 0;
    process->finishTime = 0;
    process->finished = false;

    pid++;
  }

  // Set the number of processes we need to simulate in the process table
  processTable->numProcesses = pid;

  return processTable;
}

/** Display process table
 * Convenience method, dump all of the information about the processes
 * in a process table to stdout.
 *
 * @param processTable The table, a pointer to type ProcessTable
 *   struct, with the information we are to display
 */
void displayProcessTable(ProcessTable* processTable)
{
  cout << "Process Table num = " << processTable->numProcesses << endl;
  cout << "PID Name Arrv Srvc" << endl;
  cout << "------------------" << endl;
  for (int pid = 0; pid < processTable->numProcesses; pid++)
  {
    Process* p = processTable->process[pid];
    cout << setw(2) << right << pid << ") ";
    cout << setw(4) << left << p->processName << " ";
    cout << setw(4) << right << p->arrivalTime << " ";
    cout << setw(4) << right << p->serviceTime << " ";
    cout << endl;
  }
}

/** Round robin scheduler simulator
 * The main routine for performing the round robin preemptive scheduler
 * simulation.  We expect the time quantum to already be specified and
 * given to us as a parameter.  We simulate preemptive round robin
 * scheduling of all of the processes until there are no longer any
 * processes left in the system (all processes have exceeded their
 * service time and have exited).
 *
 * @param processTable A pointer to a ProcessTable structure holding
 *   information about the processes, arrival times and durations
 *   that we are simulating execution of.
 * @param quantum An integer value holding the time slice quantum we
 *   are using for this simulation.
 */
void roundRobinScheduler(ProcessTable* processTable, int quantum)
{
  // Implement the round robin scheduler here
  cout << "<roundRobinScheduler> entered, quantum: " << quantum << endl;
}

/** Shortest remaining time simulator
 * The main routine for performing the shortest remaining time
 * preemptive scheduler simulation.  We simulate preemptive shortest
 * remaining time scheduling of all of the processes until there are no
 * longer any processes left in the system (all processes have exceeded
 * their service time and have exited).
 *
 * @param processTable A pointer to a ProcessTable structure holding
 *   information about the processes, arrival times and durations
 *   that we are simulating execution of.
 */
void shortestRemainingTime(ProcessTable* processTable)
{
  // Implement the shortest remaining time policy here
  cout << "<shortestRemainingTime> entered" << endl;
}

/** Main entry point of the scheduler simulator
 * The main function checks the command line arguments, and calls the
 * appropriate simulation function if correct arguments were supplied.
 * We expect argv[1] to be the name of the process simulation file
 * (character string), argv[2] to be the scheduling policy (rr or srt),
 * and argv[3] to be the time slice quantum parameter (int format) when
 * round robin scheduling is selected.
 *
 * @param argc The argument count
 * @param argv The command line argument values
 */
int main(int argc, char** argv)
{
  string policy;
  ProcessTable* processTable;
  int quantum = 0;

  // If not all parameters provided, abort and let user know of problem
  if (argc < 3 || argc > 4)
  {
    cout << "Error: expecting process simulation file and scheduling policy as command line parameters" << endl;
    cout << "Usage: " << argv[0] << " process-file.sim [rr|srt] [quantum]" << endl;
    exit(1);
  }

  // load process table and parse command line arguments
  processTable = createProcessTable(argv[1]);

  // just to confirm that process table loaded correctly.  You should
  // comment out or remove this as it is not asked for as part of the
  // output for the assignment simulation
  displayProcessTable(processTable);

  // determine policy to simulate
  policy.assign(argv[2]);

  // perform simulation of indicated scheduling policy
  if (policy == "rr")
  {
    if (argc != 4)
    {
      cout << "Error: time quantum must be provided for round robin `rr` scheduling policy" << endl;
      exit(1);
    }
    quantum = atoi(argv[3]);
    if ((quantum <= 0) || (quantum > 1000))
    {
      cout << "Error: received bad time slice quantum parameter: " << argv[3] << endl;
      cout << "       valid values are integers in range from 1 to 1000" << endl;
      exit(1);
    }
    roundRobinScheduler(processTable, quantum);
  }
  else if (policy == "srt")
  {
    shortestRemainingTime(processTable);
  }
  else
  {
    cout << "Error: unknown process scheduling policy: " << policy << endl;
  }
}

prog-05.pdf

Programming Assignment #5
CSci 430, Spring 2019
Dates:
Assigned: Monday April 15, 2019
Due: Wednesday May 1, 2019 (before Midnight)
Objectives:
- Understand short-term process scheduling.
- Work with data structures to implement a round-robin scheduler.
- Look at effects of different time slice quantum sizes on the round-robin scheduling algorithm.
- Use C/C++ to implement vector and matrix data structures, and get practice in creating and using such data structures in C/C++.
Description:
Our textbook's chapter 9 discusses several possible short-term process scheduling policies. In this programming assignment exercise we will implement two of the preemptive policies, the simple shortest remaining time policy (SRT) and the round-robin scheduler with preemptive time slicing. Your program will be given a simple input file, indicating the process name, its arrival time and its total service time, the same as the process scheduling examples from our textbook in Table 9.4 and Figure 9.5. You will simulate the execution of the required schedulers. As in previous assignments, your program will need to work non-interactively and be callable from the command line. The program will be provided with the file name of a file with process information, in the format discussed below. Your program will also be given the time slicing quantum parameter it is to use for the simulation, if round-robin scheduling is selected. Your program will need to output the results of running the set of simulated processes using the selected scheduling policy with the indicated time slice for the round-robin scheduler. Your program will have to output its results exactly as shown below in the required output format. Your program will also need to calculate some summary statistics for the simulated processes, including the turnaround time and Tr/Ts ratio for each process, and the mean Tr and Tr/Ts values for the given simulation.
Process simulation file formats
The files with the information about the processes to be simulated are fairly simple, and have the same information that our textbook uses to illustrate the process scheduling examples. Each simulation file contains multiple rows of data, where each row consists of the process name, its arrival time, and its service time. Here is an example:
A 0 3
B 2 6
C 4 4
D 6 5
E 8 2
  • 91. parameter will be either 'rr' to perform round-robin scheduling, or 'srt' if shortest remaining time policy is to be simulated. Finally, a 3rd parameter will be supplied for the round-robin scheduler, the time slice quantum to use. An example of running your �nished program should look like this: $ ./p3 process-01.sim rr 4 A A A B B B B C C C C D D D D B B E E D Name Fnsh T_r T_r/T_s ---------------------- A 3 3 1 B 17 15 2.5 C 11 7 1.75 D 20 14 2.8 E 19 11 5.5 Here we are running the simulation using the set of process information given in the previous section and with a time slice quantum of 4. Required Output As shown above, your program must generate 2 bits of output. First of all, while running the simulation of the selected scheduling policy, you should display the process names in the order they are run. In the previous example, the sequence of scheduled/run processes
  • 92. was: A A A B B B B C C C C D D D D B B E E D This indicates that process A ran �rst (times 0, 1 and 2), followed by B running 4 times (times 3 to 7), etc. You are required to output the sequence of process runs as the �rst line of output, with a single space in between each process name as shown. After the processes have run, you need to calculate and display the statistics for the processes that you just simulated. In our previous example, the statistics for our round-robin simulation with a time quantum of 4 time slices were: Name Fnsh T_r T_r/T_s ---------------------- A 3 3 1 B 17 15 2.5 C 11 7 1.75 2 D 20 14 2.8 E 19 11 5.5 For each process, you need to output the time when it �nished, the turnaround time (Tr) and the
  • 93. ratio of the turnaround time to the service time (Tr/Ts). I have provided a zip �le with a �le named p3-start.cpp as a template to get you started. In addition, I have provided you with two process simulation �les, named process-01.sim and process-02.sim, with 2 sets of process information you can simulate. There are several examples of correct results generated for the two sets of inputs, named things like process-01-q1.res, process-01-q4.res, process-01-srt.res, etc. These are the correct results you should get for running your simulation with round-robin scheduling for various time quantums or for shortest remaining time scheduling. 3 processtable-01.sim A 0 3 B 2 6 C 4 4 D 6 5 E 8 2 processtable-02.sim A 0 4 B 1 7 C 4 5 D 4 5 E 7 2 F 8 5 G 10 1 H 10 4 I 12 6
  • 94. processtable-03.sim A 0 3 B 2 4 C 3 5 D 3 8 E 3 2 F 5 6 G 7 9 H 7 4 I 8 3 J 8 5 K 8 4 L 10 6 Makefile all: p5sol p5: p5-start.cpp g++ -g $< -o [email protected] p5sol: p5-solution.cpp g++ -g $< -o p5 debug: p5-solution.cpp g++ -DDEBUG_BUILD=1 -g $< -o p5 p5test: ./p5 processtable-01.sim rr 1 > sim-01-rr-1.tst @diff -s -q sim-01-rr-1.tst sim-01-rr-1.res ./p5 processtable-01.sim rr 4 > sim-01-rr-4.tst @diff -s -q sim-01-rr-4.tst sim-01-rr-4.res ./p5 processtable-01.sim srt > sim-01-srt.tst
  • 95. @diff -s -q sim-01-srt.tst sim-01-srt.res ./p5 processtable-02.sim rr 1 > sim-02-rr-1.tst @diff -s -q sim-02-rr-1.tst sim-02-rr-1.res ./p5 processtable-02.sim rr 4 > sim-02-rr-4.tst @diff -s -q sim-02-rr-4.tst sim-02-rr-4.res ./p5 processtable-02.sim srt > sim-02-srt.tst @diff -s -q sim-02-srt.tst sim-02-srt.res ./p5 processtable-03.sim rr 1 > sim-03-rr-1.tst @diff -s -q sim-03-rr-1.tst sim-03-rr-1.res ./p5 processtable-03.sim rr 5 > sim-03-rr-5.tst @diff -s -q sim-03-rr-5.tst sim-03-rr-5.res ./p5 processtable-03.sim srt > sim-03-srt.tst @diff -s -q sim-03-srt.tst sim-03-srt.res @rm sim-01-rr-1.tst sim-01-rr-4.tst sim-01-srt.tst sim-02- rr-1.tst sim-02-rr-4.tst sim-02-srt.tst sim-03-rr-1.tst sim-03-rr- 5.tst sim-03-srt.tst p5zip: zip ../prog-05.zip p5-start.cpp prog-05.pdf processtable- 01.sim processtable-02.sim processtable-03.sim Makefile *.res p5solzip: zip ../prog-05-sol.zip p5-solution.cpp processtable-*.sim sim-*.res clean: rm -f p5 sim-*.tst core* *~ sim-01-rr-1.res sim-01-rr-4.res