Principles of operating system
1. 1
Principles of Operating Systems
Text Book:
Operating System Concepts -
Avi Silberschatz,
Peter Baer Galvin
and Greg Gagne
Anilsatyadas Dharmapuri
Department of Computer Sciences
Pondicherry University
anilsatyadas@gmail.com
2. 2
Why Study Operating Systems?
Because OS hacking will make you a better
programmer and a better thinker.
The OS is really large (Windows Vista is 50 million lines+).
The OS manages concurrency.
Concurrency leads to interesting programming challenges.
(Interesting programming challenges can lead to wisdom.)
OS code manages raw hardware.
Programming raw hardware is challenging: timing dependent
behavior, undocumented behavior, HW bugs.
OS code must be efficient, low CPU, memory, disk use.
OS fails ⇒ machine fails. The OS must fail less often than user
programs.
OS provides the services that application programs build on;
knowledge of the OS will make you a better computer user
OS basis of system security.
3. 3
Why Won’t Operating Systems Go Away?
Hardware needs an OS to be useful.
Multiplex resources for efficiency and security.
OS is cornerstone of what makes computing fun.
The design of an MP3 player involves many OS
issues:
Its OS implements a file system which is specially
designed to store music files.
The OS manages communication with your PC.
The OS reads music from the disc and buffers it in
memory.
The OS controls the volume of the output device.
The OS controls the display.
The OS controls wireless network access.
Cell phone, mp3 player, DVD player, PDA, iPhone,
peer-to-peer file sharing
5. 5
Introduction to Operating Systems
An operating system is the interface between the user and
the architecture.
OS as juggler: providing the illusion of a dedicated machine
with infinite memory and CPU.
OS as government: protecting users from each other,
allocating resources efficiently and fairly, and providing
secure and safe communication
OS as complex system: keeping OS design and
implementation as simple as possible is the key to getting the
OS to work
User Applications
Operating System
Hardware
Virtual Machine Interface
Physical Machine Interface
6. 6
What is an Operating System?
Any code that runs with the hardware kernel bit set
An abstract virtual machine
A set of abstractions that simplify application design
Files instead of “bytes on a disk”
Core OS services, written by “pros”
Processes, process scheduling
Address spaces
Device control
~30% of Linux source code. Basis of stability and security
Device drivers written by “whoever”
Software run in the kernel to manage a particular vendor’s
hardware
E.g. Homer Simpson doll with USB
~70% of Linux source code
OS is extensible
Drivers are the biggest source of OS instability
7. 7
What is an Operating System?
For any OS area (CPU scheduling, file systems, memory
management), begin by asking two questions
What’s the hardware interface? (The Physical Reality)
What is the application interface? (The Nicer Interface for
programmer productivity)
Key questions:
Why is the application interface defined the way it is?
Should we push more functionality into applications, the OS, or
the hardware?
What are the tradeoffs between programmability, complexity,
and flexibility?
8. 8
Operating System Functions
Service provider
Provide standard facilities
File system
Standard libraries
Window system
…
Coordinator: three aspects
Protection: prevent jobs from interfering with each other
Communication: enable jobs to interact with each other
Resource management: facilitate sharing of resources across
jobs.
Operating systems are everywhere
Single-function devices (embedded controllers, Nintendo, …)
OS provides a collection of standard services
Sometimes OS/middleware distinction is blurry
Multi-function/application devices (workstations and servers)
OS manages application interactions
9. 9
Why do we need operating systems?
Convenience
Provide a high-level abstraction of physical resources.
Make hardware usable by getting rid of warts & specifics.
Enable the construction of more complex software systems
Enable portable code.
MS-DOS version 1 boots on the latest 3+ GHz Pentium.
Would games that ran on MS-DOSv1 work well today?
Efficiency
Share limited or expensive physical resources.
Provide protection.
10. 10
Computer Architecture & Processes
CPU - the processor that performs the actual computation
I/O devices - terminal, disks, video board, printer, etc.
Memory - RAM containing data and programs used by the CPU
System bus - the communication medium between the CPU,
memory, and peripherals
11. 11
Evolution of Operating Systems
Why do operating systems change?
Key functions: hardware abstraction and coordination
Principle: Design tradeoffs change as technology changes.
Comparing computing systems from 1981 and 2007
1981 2007 Factor
MIPS 1 57,000 57,000
$/SPECInt $100K $2 50,000
DRAM size 128KB 2GB 16,000
Disk size 10MB 1TB 100,000
Net BW 9600 bps 100 Mb/s 10,000
Address bits 16 64 4
Users/machine 100 <1 100
Energy efficiency and parallelism loom on the horizon.
Data centers projected to consume 3% of US energy by next year
No more single-core CPUs
12. 12
From Architecture to OS to Application, and Back
Hardware            | Example OS Services                                                          | User Abstraction
Processor           | Process management, scheduling, traps, protection, billing, synchronization | Process
Memory              | Management, protection, virtual memory                                       | Address space
I/O devices         | Concurrency with CPU, interrupt handling                                     | Terminal, mouse, printer (system calls)
File system         | Management, persistence                                                      | Files
Distributed systems | Network security, distributed file system                                    | RPC system calls, transparent file sharing
13. 13
From Architectural to OS to Application, and Back
OS Service                          | Hardware Support
Protection                          | Kernel/user mode, protected instructions, base and limit registers
Interrupts                          | Interrupt vectors
System calls                        | Trap instructions and trap vectors
I/O                                 | Interrupts or memory-mapping
Scheduling, error recovery, billing | Timer
Synchronization                     | Atomic instructions
Virtual memory                      | Translation look-aside buffers, register pointing to base of page table
14. 14
Interrupts - Moving from Kernel to User Mode
User processes may not:
address I/O directly
use instructions that manipulate OS memory (e.g., page tables)
set the mode bits that determine user or kernel mode
disable and enable interrupts
halt the machine
But in kernel mode, the OS does all these things
A status bit in a protected processor register indicates the mode
Protected instructions can only be executed in kernel mode
The CPU enters kernel mode on interrupts (e.g., time slice) or system calls
15. 15
History of Operating Systems: Phases
Phase 1: Hardware is expensive, humans are cheap
User at console: single-user systems
Batching systems
Multi-programming systems
Phase 2: Hardware is cheap, humans are expensive
Time sharing: Users use cheap terminals and share servers
Phase 3: Hardware is very cheap, humans are very expensive
Personal computing: One system per user
Distributed computing: lots of systems per user
Phase 4: Ubiquitous computing
Computers will be sewn into our clothes, but I hope not
implanted in our skin
Cell phone, mp3 player, DVD player, TIVO, PDA, iPhone
16. 16
History of Operating Systems: Phases
Phase 1: Hardware is expensive, humans are cheap
User at console: single-user systems
Batching systems
Multi-programming systems
Phase 2: Hardware is cheap, humans are expensive
Time sharing: Users use cheap terminals and share servers
Phase 3: Hardware is very cheap, humans are very expensive
Personal computing: One system per user
Distributed computing: lots of systems per user
Phase 4: Ubiquitous computing
17. 17
A Brief History of Operating Systems
Hand programmed machines (‘45-‘55)
Single user systems
OS = loader + libraries of common subroutines
Problem: low utilization of expensive components
% utilization = Execution time / (Execution time + Card reader time)
19. 19
Batch processing (‘55-‘65)
Operating system = loader + sequencer + output processor
[Diagram: card reader → input tape → compute → output tape → printer; memory layout: operating system (“system software”), user program, user data]
20. 20
Multiprogramming (‘65-‘80)
Keep several jobs in memory and multiplex CPU
between jobs
[Memory layout: operating system (“system software”) plus user programs 1 through n]
Simple, “synchronous” input:

    program P
    begin
      :
      Read(var)
      :
    end P

    system call Read()
    begin
      StartIO(input device)
      WaitIO(interrupt)
      EndIO(input device)
      :
    end Read

What to do while we wait for the I/O device?
21. 21
Multiprogramming (‘65-‘80)
Keep several jobs in memory and multiplex CPU
between jobs
[Memory layout: operating system (“system software”) plus user programs 1 through n. Diagram: Program 1 issues read() at instruction k; the OS runs startIO() and waitIO(); when the I/O device raises an interrupt, endio() runs and Program 1 resumes at k+1.]
22. 22
Multiprogramming (‘65-‘80)
Keep several jobs in memory and multiplex CPU
between jobs
[Memory layout as above. Diagram: Program 1 issues read() at instruction k; the OS runs startIO() and then schedule(), which lets Program 2 run during the I/O; on the device interrupt, endio() completes the read, schedule() runs again, and Program 1 resumes at k+1.]
23. 23
History of Operating Systems: Phases
Phase 1: Hardware is expensive, humans are cheap
User at console: single-user systems
Batching systems
Multi-programming systems
Phase 2: Hardware is cheap, humans are expensive
Time sharing: Users use cheap terminals and share servers
Phase 3: Hardware is very cheap, humans are very expensive
Personal computing: One system per user
Distributed computing: lots of systems per user
Phase 4: Ubiquitous computing
24. 24
Timesharing (‘70- )
A timer interrupt is used to multiplex CPU among
jobs
[Memory layout: operating system (“system software”) plus user programs 1 through n. Diagram: timer interrupts invoke schedule(), which alternates the CPU between Program 1 (at instruction k) and Program 2 (at k+1).]
25. 25
History of Operating Systems: Phases
Phase 1: Hardware is expensive, humans are cheap
User at console: single-user systems
Batching systems
Multi-programming systems
Phase 2: Hardware is cheap, humans are expensive
Time sharing: Users use cheap terminals and share servers
Phase 3: Hardware is very cheap, humans are very expensive
Personal computing: One system per user
Distributed computing: lots of systems per user
Phase 4: Ubiquitous computing
26. 26
Operating Systems for PCs
Personal computing systems
Single user
Utilization is no longer a concern
Emphasis is on user interface and API
Many services & features not present
Evolution
Initially: OS as a simple service provider
(simple libraries)
Now: Multi-application systems with
support for coordination and
communication
Growing security issues (e.g., online
commerce, medical records)
27. 27
Distributed Operating Systems
Typically support distributed services
Sharing of data and coordination across multiple systems
Possibly employ multiple processors
Loosely coupled v. tightly coupled systems
High availability & reliability requirements
Amazon, CNN
[Diagram: several machines connected by a LAN/WAN; each runs an OS providing services such as process management, memory management, a file system, name services, and mail services for its local user programs.]
28. 28
History of Operating Systems: Phases
Phase 1: Hardware is expensive, humans are cheap
User at console: single-user systems
Batching systems
Multi-programming systems
Phase 2: Hardware is cheap, humans are expensive
Time sharing: Users use cheap terminals and share servers
Phase 3: Hardware is very cheap, humans are very expensive
Personal computing: One system per user
Distributed computing: lots of systems per user
Phase 4: Connecting people and their machines
Intellectual property issues
Information organization
29. 29
Course Overview
OS Structure, Processes and Process Management
CPU scheduling
Threads and concurrent programming
Thread coordination, mutual exclusion, monitors
Deadlocks
Virtual memory & Memory management
Disks & file systems
Distributed file systems
Security
30. 30
What is an Operating System?
A program that acts as an intermediary between a
user of a computer and the computer hardware.
Operating system goals:
Execute user programs
Make the computer system convenient to use.
Use the computer hardware in an efficient manner.
31. 31
Computer System Structure
Computer system can be divided into four components
Hardware – provides basic computing resources
CPU, memory, I/O devices
Operating system
Controls and coordinates use of hardware among
various applications and users
Application programs – define the ways in which the
system resources are used to solve the computing
problems of the users
Word processors, compilers, web browsers,
database systems, video games
Users
People, machines, other computers
33. 33
Operating System Definition
OS is a resource allocator
Manages all resources
Decides between conflicting requests for efficient and fair
resource use
OS is a control program
Controls execution of programs to prevent errors and
improper use of the computer
34. 34
Operating System Structure
Multiprogramming needed for efficiency
Single user cannot keep CPU and I/O devices busy at all times
Multiprogramming organizes jobs (code and data) so CPU always has
one to execute
A subset of total jobs in system is kept in memory
One job selected and run via job scheduling
When it has to wait (for I/O for example), OS switches to another
job
Timesharing (multitasking) is logical extension in which CPU switches
jobs so frequently that users can interact with each job while it is
running, creating interactive computing
Response time should be < 1 second
Each user has at least one program executing in memory ⇒ process
If several jobs are ready to run at the same time ⇒ CPU scheduling
If processes don’t fit in memory, swapping moves them in and out to run
Virtual memory allows execution of processes not completely in memory
36. 36
Operating-System Operations
Interrupt driven by hardware
Software error or request creates exception or trap
Division by zero, request for operating system service
Other process problems include infinite loop, processes modifying
each other or the operating system
Dual-mode operation allows OS to protect itself and other system
components
User mode and kernel mode
Mode bit provided by hardware
Provides ability to distinguish when system is running user
code or kernel code
Some instructions designated as privileged, only executable
in kernel mode
System call changes mode to kernel, return from call resets
it to user
37. 37
Chapter 3: Processes
Process Concept
Process Scheduling
Operations on Processes
Cooperating Processes
Interprocess Communication
Communication in Client-Server Systems
38. 38
Process Concept
An operating system executes a variety of
programs:
Batch system – jobs
Time-shared systems – user programs or tasks
Textbook uses the terms job and process
almost interchangeably
Process – a program in execution; process
execution must progress in sequential
fashion
A process includes:
program counter
stack
data section
40. 40
Process State
As a process executes, it changes state
new: The process is being created
running: Instructions are being executed
waiting: The process is waiting for some
event to occur
ready: The process is waiting to be
assigned to a processor
terminated: The process has finished
execution
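The five-state model above can be captured as a small sketch; the enum and the transition check are illustrative (names are hypothetical, not any particular kernel's API), and they encode the rule that a waiting process goes back to ready, never straight to running:

```c
#include <stdbool.h>

typedef enum { STATE_NEW, STATE_READY, STATE_RUNNING, STATE_WAITING, STATE_TERMINATED } state_t;

/* Which transitions the five-state model allows. */
bool can_transition(state_t from, state_t to) {
    switch (from) {
    case STATE_NEW:     return to == STATE_READY;        /* admitted by the OS */
    case STATE_READY:   return to == STATE_RUNNING;      /* dispatched */
    case STATE_RUNNING: return to == STATE_READY         /* preempted */
                            || to == STATE_WAITING       /* blocks on I/O or an event */
                            || to == STATE_TERMINATED;   /* exits */
    case STATE_WAITING: return to == STATE_READY;        /* awaited event occurred */
    default:            return false;                    /* terminated is final */
    }
}
```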
42. 42
Process Control Block (PCB)
Information associated with each process
Process state
Program counter
CPU registers
CPU scheduling information
Memory-management information
Accounting information
I/O status information
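A PCB with one field per category listed above might look like the following; field names and sizes are illustrative only, since a real kernel's PCB (e.g. Linux's task_struct) carries far more:

```c
/* Minimal sketch of a Process Control Block (fields are hypothetical). */
typedef struct pcb {
    int           pid;               /* process identifier */
    int           state;             /* process state (new/ready/running/...) */
    unsigned long program_counter;   /* where to resume execution */
    unsigned long registers[16];     /* CPU registers saved on a context switch */
    int           priority;          /* CPU-scheduling information */
    void         *page_table;        /* memory-management information */
    unsigned long cpu_time_used;     /* accounting information */
    int           open_fds[16];      /* I/O status information */
} pcb_t;
```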
45. 45
Process Scheduling Queues
Job queue – set of all processes in the
system
Ready queue – set of all processes residing
in main memory, ready and waiting to
execute
Device queues – set of processes waiting
for an I/O device
Processes migrate among the various
queues
49. 49
Schedulers
Long-term scheduler (or job scheduler)
– selects which processes should be
brought into the ready queue
Short-term scheduler (or CPU
scheduler) – selects which process should
be executed next and allocates CPU
51. 51
Schedulers (Cont.)
Short-term scheduler is invoked very frequently
(milliseconds) (must be fast)
Long-term scheduler is invoked very infrequently
(seconds, minutes) (may be slow)
The long-term scheduler controls the degree of
multiprogramming
Processes can be described as either:
I/O-bound process – spends more time doing I/O than
computations, many short CPU bursts
CPU-bound process – spends more time doing
computations; few very long CPU bursts
52. 52
Context Switch
When CPU switches to another process, the
system must save the state of the old process
and load the saved state for the new process
Context-switch time is overhead; the system
does no useful work while switching
Time dependent on hardware support
53. 53
Process Creation
Parent process create children processes, which,
in turn create other processes, forming a tree of
processes
Resource sharing
Parent and children share all resources
Children share subset of parent’s resources
Parent and child share no resources
Execution
Parent and children execute concurrently
Parent waits until children terminate
54. 54
Process Creation (Cont.)
Address space
Child duplicate of parent
Child has a program loaded into it
UNIX examples
fork system call creates new process
exec system call used after a fork to replace the
process’ memory space with a new program
56. 56
C Program Forking Separate Process
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/wait.h>

int main()
{
    pid_t pid;

    /* fork another process */
    pid = fork();
    if (pid < 0) { /* error occurred */
        fprintf(stderr, "Fork Failed");
        exit(-1);
    }
    else if (pid == 0) { /* child process */
        execlp("/bin/ls", "ls", NULL);
    }
    else { /* parent process */
        /* parent will wait for the child to complete */
        wait(NULL);
        printf("Child Complete");
        exit(0);
    }
}
58. 58
Process Termination
Process executes last statement and asks the
operating system to delete it (exit)
Output data from child to parent (via wait)
Process’ resources are deallocated by operating system
Parent may terminate execution of children
processes (abort)
Child has exceeded allocated resources
Task assigned to child is no longer required
If parent is exiting
Some operating system do not allow child to
continue if its parent terminates
All children terminated - cascading termination
59. 59
Cooperating Processes
Independent process cannot affect or be
affected by the execution of another process
Cooperating process can affect or be affected by
the execution of another process
Advantages of process cooperation
Information sharing
Computation speed-up
Modularity
Convenience
60. 60
Producer-Consumer Problem
Paradigm for cooperating processes,
producer process produces information
that is consumed by a consumer process
unbounded-buffer places no practical limit on
the size of the buffer
bounded-buffer assumes that there is a fixed
buffer size
61. 61
Bounded-Buffer – Shared-Memory Solution
Shared data
#define BUFFER_SIZE 10
typedef struct {
. . .
} item;
item buffer[BUFFER_SIZE];
int in = 0;
int out = 0;
Solution is correct, but can only use
BUFFER_SIZE-1 elements
62. 62
Bounded-Buffer – Insert() Method
while (true) {
    /* produce an item */
    while (((in + 1) % BUFFER_SIZE) == out)
        ; /* do nothing -- no free buffers */
    buffer[in] = item;
    in = (in + 1) % BUFFER_SIZE;
}
63. 63
Bounded Buffer – Remove() Method
while (true) {
    while (in == out)
        ; /* do nothing -- nothing to consume */
    /* remove an item from the buffer */
    item = buffer[out];
    out = (out + 1) % BUFFER_SIZE;
    return item;
}
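The two methods above can be combined into a compilable sketch. To keep the logic testable single-threaded, these hypothetical variants return false instead of busy-waiting; the in/out indexing is the same, so only BUFFER_SIZE−1 slots are usable, as the slides note:

```c
#include <stdbool.h>
#define BUFFER_SIZE 10

typedef int item;            /* stand-in for the struct in the slides */
static item buffer[BUFFER_SIZE];
static int in = 0, out = 0;  /* in == out means empty */

/* Non-blocking insert: fails when the buffer is full. */
bool buffer_insert(item it) {
    if ((in + 1) % BUFFER_SIZE == out) return false;  /* full */
    buffer[in] = it;
    in = (in + 1) % BUFFER_SIZE;
    return true;
}

/* Non-blocking remove: fails when the buffer is empty. */
bool buffer_remove(item *it) {
    if (in == out) return false;                      /* empty */
    *it = buffer[out];
    out = (out + 1) % BUFFER_SIZE;
    return true;
}
```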
64. 64
Interprocess Communication (IPC)
Mechanism for processes to communicate and to
synchronize their actions
Message system – processes communicate with
each other without resorting to shared variables
IPC facility provides two operations:
send(message) – message size fixed or variable
receive(message)
If P and Q wish to communicate, they need to:
establish a communication link between them
exchange messages via send/receive
Implementation of communication link
physical (e.g., shared memory, hardware bus)
logical (e.g., logical properties)
65. 65
Implementation Questions
How are links established?
Can a link be associated with more than two
processes?
How many links can there be between every pair of
communicating processes?
What is the capacity of a link?
Is the size of a message that the link can
accommodate fixed or variable?
Is a link unidirectional or bi-directional?
67. 67
Indirect Communication
Messages are directed and received from
mailboxes (also referred to as ports)
Each mailbox has a unique id
Processes can communicate only if they share a
mailbox
Properties of communication link
Link established only if processes share a common
mailbox
A link may be associated with many processes
Each pair of processes may share several
communication links
Link may be unidirectional or bi-directional
68. 68
Indirect Communication
Operations
create a new mailbox
send and receive messages through mailbox
destroy a mailbox
Primitives are defined as:
send(A, message) – send a message to
mailbox A
receive(A, message) – receive a message
from mailbox A
69. 69
Indirect Communication
Mailbox sharing
P1, P2, and P3 share mailbox A
P1, sends; P2 and P3 receive
Who gets the message?
Solutions
Allow a link to be associated with at most two processes
Allow only one process at a time to execute a receive
operation
Allow the system to select arbitrarily the receiver.
Sender is notified who the receiver was.
70. 70
Synchronization
Message passing may be either blocking or non-
blocking
Blocking is considered synchronous
Blocking send has the sender block until the message is
received
Blocking receive has the receiver block until a message is
available
Non-blocking is considered asynchronous
Non-blocking send has the sender send the message and
continue
Non-blocking receive has the receiver receive a valid
message or null
73. 73
Buffering
Queue of messages attached to the link;
implemented in one of three ways
1. Zero capacity – 0 messages
Sender must wait for receiver (rendezvous)
2. Bounded capacity – finite length of n messages
Sender must wait if link full
3. Unbounded capacity – infinite length
Sender never waits
76. 76
Basic Concepts
Maximum CPU utilization obtained with
multiprogramming
CPU–I/O Burst Cycle – Process execution
consists of a cycle of CPU execution and I/O
wait
CPU burst distribution
79. 79
CPU Scheduler
Selects from among the processes in memory that
are ready to execute, and allocates the CPU to one
of them
CPU scheduling decisions may take place when a
process:
1. Switches from running to waiting state
2. Switches from running to ready state
3. Switches from waiting to ready
4. Terminates
Scheduling under 1 and 4 is nonpreemptive
All other scheduling is preemptive
80. 80
Dispatcher
Dispatcher module gives control of the CPU to
the process selected by the short-term
scheduler; this involves:
switching context
switching to user mode
jumping to the proper location in the user program to
restart that program
Dispatch latency – time it takes for the
dispatcher to stop one process and start
another running
81. 81
Scheduling Criteria
CPU utilization – keep the CPU as busy as
possible
Throughput – # of processes that
complete their execution per time unit
Turnaround time – amount of time to
execute a particular process
Waiting time – amount of time a process
has been waiting in the ready queue
Response time – amount of time it takes
from when a request was submitted until
the first response is produced, not output
(for time-sharing environment)
83. 83
First-Come, First-Served (FCFS) Scheduling
Process Burst Time
P1 24
P2 3
P3 3
Suppose that the processes arrive in the order: P1 , P2
, P3
The Gantt chart for the schedule is:
| P1 (0–24) | P2 (24–27) | P3 (27–30) |
Waiting time for P1 = 0; P2 = 24; P3 = 27
Average waiting time: (0 + 24 + 27)/3 = 17
84. 84
FCFS Scheduling (Cont.)
Suppose that the processes arrive in the order
P2 , P3 , P1
The Gantt chart for the schedule is:
| P2 (0–3) | P3 (3–6) | P1 (6–30) |
Waiting time for P1 = 6; P2 = 0; P3 = 3
Average waiting time: (6 + 0 + 3)/3 = 3
Much better than previous case
Convoy effect: short processes stuck behind a long process
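With all processes arriving at time 0, each process's FCFS waiting time is just the sum of the bursts ahead of it, so both averages above follow directly from the arrival order. A small helper (hypothetical, not from the textbook) reproduces them:

```c
/* Average FCFS waiting time for processes submitted in the given
   order, all arriving at time 0. */
double fcfs_avg_wait(const int *burst, int n) {
    double total_wait = 0;
    int t = 0;                 /* time at which process i starts */
    for (int i = 0; i < n; i++) {
        total_wait += t;       /* process i waits for all earlier bursts */
        t += burst[i];
    }
    return total_wait / n;
}
```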
85. 85
Shortest-Job-First (SJF) Scheduling
Associate with each process the length of its next
CPU burst. Use these lengths to schedule the
process with the shortest time
Two schemes:
nonpreemptive – once the CPU is given to the process, it cannot
be preempted until it completes its CPU burst
preemptive – if a new process arrives with CPU burst
length less than the remaining time of the currently executing
process, preempt. This scheme is known as
Shortest-Remaining-Time-First (SRTF)
SJF is optimal – gives minimum average waiting
time for a given set of processes
86. 86
Example of Non-Preemptive SJF
Process  Arrival Time  Burst Time
P1       0.0           7
P2       2.0           4
P3       4.0           1
P4       5.0           4
SJF (non-preemptive) Gantt chart:
| P1 (0–7) | P3 (7–8) | P2 (8–12) | P4 (12–16) |
Average waiting time = (0 + 6 + 3 + 7)/4 = 4
87. 87
Example of Preemptive SJF
Process  Arrival Time  Burst Time
P1       0.0           7
P2       2.0           4
P3       4.0           1
P4       5.0           4
SJF (preemptive, SRTF) Gantt chart:
| P1 (0–2) | P2 (2–4) | P3 (4–5) | P2 (5–7) | P4 (7–11) | P1 (11–16) |
Average waiting time = (9 + 1 + 0 + 2)/4 = 3
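The preemptive (SRTF) schedule above can be reproduced with a short tick-by-tick simulation. This is an illustrative sketch, assuming 1-time-unit ticks, ties broken by lowest process index, and at most 16 processes:

```c
/* Shortest-Remaining-Time-First simulation; completion[i] receives
   process i's finish time. Returns the average waiting time, where
   waiting = completion - arrival - burst. */
double srtf_avg_wait(const int *arrival, const int *burst, int n, int *completion) {
    int rem[16];                       /* remaining burst per process */
    int done = 0, t = 0;
    for (int i = 0; i < n; i++) rem[i] = burst[i];
    while (done < n) {
        int pick = -1;                 /* arrived process with least remaining time */
        for (int i = 0; i < n; i++)
            if (arrival[i] <= t && rem[i] > 0 && (pick < 0 || rem[i] < rem[pick]))
                pick = i;
        if (pick >= 0 && --rem[pick] == 0) { completion[pick] = t + 1; done++; }
        t++;                           /* advance one time unit */
    }
    double wait = 0;
    for (int i = 0; i < n; i++) wait += completion[i] - arrival[i] - burst[i];
    return wait / n;
}
```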
88. 88
Determining Length of Next CPU Burst
Can only estimate the length
Can be done by using the length of previous CPU bursts, using exponential averaging:
1. t_n = actual length of the n-th CPU burst
2. τ_{n+1} = predicted value for the next CPU burst
3. α, 0 ≤ α ≤ 1
4. Define: τ_{n+1} = α·t_n + (1 − α)·τ_n
89. 89
Examples of Exponential Averaging
α = 0 gives τ_{n+1} = τ_n: recent history does not count
α = 1 gives τ_{n+1} = t_n: only the actual last CPU burst counts
If we expand the formula, we get:
τ_{n+1} = α·t_n + (1 − α)·α·t_{n−1} + … + (1 − α)^j·α·t_{n−j} + … + (1 − α)^{n+1}·τ_0
Since both α and (1 − α) are less than or equal to 1, each successive
term has less weight than its predecessor
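The recurrence τ_{n+1} = α·t_n + (1 − α)·τ_n can be checked numerically; `next_prediction` is a hypothetical helper, and the burst values and starting guess below are made-up inputs chosen so the arithmetic stays exact:

```c
/* One step of exponential averaging for CPU-burst prediction:
   tau_next = alpha * t_n + (1 - alpha) * tau_n */
double next_prediction(double alpha, double t_n, double tau_n) {
    return alpha * t_n + (1.0 - alpha) * tau_n;
}
```

With α = 0.5 and τ_0 = 10, observed bursts 6, 4, 6, 4 give predictions 8, 6, 6, 5: the estimate tracks recent behavior while damping fluctuations.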
90. 90
Priority Scheduling
A priority number (integer) is associated with each
process
The CPU is allocated to the process with the highest
priority (smallest integer ⇒ highest priority)
Priority scheduling may be preemptive or nonpreemptive
SJF is priority scheduling where priority is the
predicted next CPU burst time
Problem: Starvation – low-priority processes may
never execute
Solution: Aging – as time progresses, increase the
priority of the process
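Aging can be illustrated with made-up numbers: a job of priority value 10 (smaller value = higher priority) competes against a fresh priority-5 arrival every tick, and each tick it waits, aging improves its value by 1, so it cannot starve. This is a sketch of the idea, not any real scheduler's policy:

```c
/* Ticks until a waiting low-priority job beats a stream of fresh
   arrivals, given that aging improves its priority by 1 per tick.
   Smaller priority value wins; a fresh arrival wins ties. */
int ticks_until_scheduled(int low_priority, int arrival_priority) {
    int ticks = 0;
    while (low_priority >= arrival_priority) {
        ticks++;          /* the fresh arrival runs this tick */
        low_priority--;   /* aging: the waiting job's priority improves */
    }
    return ticks;         /* now the aged job beats new arrivals */
}
```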
91. 91
Round Robin (RR)
Each process gets a small unit of CPU time (time
quantum), usually 10-100 milliseconds. After this
time has elapsed, the process is preempted and
added to the end of the ready queue.
If there are n processes in the ready queue and the
time quantum is q, then each process gets 1/n of the
CPU time in chunks of at most q time units at once.
No process waits more than (n-1)q time units.
Performance
q large ⇒ behaves like FIFO
q small ⇒ q must be large with respect to context-switch time,
otherwise overhead is too high
92. 92
Example of RR with Time Quantum = 20
Process Burst Time
P1 53
P2 17
P3 68
P4 24
The Gantt chart is:
| P1 (0–20) | P2 (20–37) | P3 (37–57) | P4 (57–77) | P1 (77–97) | P3 (97–117) | P4 (117–121) | P1 (121–134) | P3 (134–154) | P3 (154–162) |
Typically, higher average turnaround than SJF, but
better response
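The RR schedule above can be reproduced with a FIFO ready queue: run the head for at most one quantum, then requeue it if it still has work. A sketch under the example's assumptions (all processes arrive at time 0, small fixed array sizes):

```c
/* Round-robin with a fixed quantum; completion[i] receives the
   finish time of process i. Assumes <= 16 processes and that the
   queue never holds more than 256 entries over the whole run. */
void rr_schedule(const int *burst, int n, int quantum, int *completion) {
    int rem[16], queue[256], head = 0, tail = 0, t = 0;
    for (int i = 0; i < n; i++) { rem[i] = burst[i]; queue[tail++] = i; }
    while (head < tail) {
        int p = queue[head++];                       /* dequeue next process */
        int run = rem[p] < quantum ? rem[p] : quantum;
        t += run;
        rem[p] -= run;
        if (rem[p] > 0) queue[tail++] = p;           /* preempted: requeue */
        else completion[p] = t;                      /* finished */
    }
}
```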
94. 94
Multilevel Queue
Ready queue is partitioned into separate queues:
foreground (interactive)
background (batch)
Each queue has its own scheduling algorithm
foreground – RR
background – FCFS
Scheduling must be done between the queues
Fixed priority scheduling; (i.e., serve all from foreground
then from background). Possibility of starvation.
Time slice – each queue gets a certain amount of CPU time
which it can schedule amongst its processes; i.e., 80% to
foreground in RR
20% to background in FCFS
96. 96
Multilevel Feedback Queue
A process can move between the various
queues; aging can be implemented this way
Multilevel-feedback-queue scheduler defined
by the following parameters:
number of queues
scheduling algorithms for each queue
method used to determine when to upgrade a process
method used to determine when to demote a process
method used to determine which queue a process will
enter when that process needs service
97. 97
Example of Multilevel Feedback Queue
Three queues:
Q0 – RR with time quantum 8 milliseconds
Q1 – RR time quantum 16 milliseconds
Q2 – FCFS
Scheduling
A new job enters queue Q0 which is served FCFS. When
it gains CPU, job receives 8 milliseconds. If it does not
finish in 8 milliseconds, job is moved to queue Q1.
At Q1 job is again served FCFS and receives 16 additional
milliseconds. If it still does not complete, it is
preempted and moved to queue Q2.
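Under the demotion rule above, which queue a CPU burst finishes in depends only on its length: up to 8 ms completes in Q0, up to 8 + 16 = 24 ms completes in Q1, and anything longer ends up in Q2. A tiny hypothetical helper captures just this rule (ignoring interleaving with other jobs):

```c
/* Queue in which a burst of the given length completes, under the
   example's rule: Q0 grants 8 ms, Q1 grants 16 more, Q2 is FCFS. */
int finishing_queue(int burst_ms) {
    if (burst_ms <= 8)  return 0;   /* done within the Q0 quantum */
    if (burst_ms <= 24) return 1;   /* 8 ms in Q0 + up to 16 ms in Q1 */
    return 2;                       /* demoted twice, finishes in Q2 */
}
```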
99. 99
Multiple-Processor Scheduling
CPU scheduling more complex when
multiple CPUs are available
Homogeneous processors within a
multiprocessor
Load sharing
Asymmetric multiprocessing – only one
processor accesses the system data
structures, alleviating the need for data
sharing