Process And Process Scheduling
Bhargavi Varala
"PROCESS AND PROCESS SCHEDULING" BY Bhargavi Varala 1
MODULE OBJECTIVES
◾ To introduce the notion of a process
◾ To describe the various features of processes
◾ To describe various scheduling algorithms
◾ To understand the concept of threads
Analogy: a recipe is a program; the recipe being made is a process, i.e., a program in execution.
What Is A Process?
◾ Process is a program in execution
◾ Process contains
◾ Program code
◾ Program counter and registers
◾ Stack
◾ Data section
◾ Heap
Stack
Heap
Data
Text
Fig. Process in memory
◾ Program is a passive entity while process is an active entity.
Main Memory
Program Process
◾ Two or more processes of the same program might run at the same time, but each of them is a separate process.
Process States
Process Description: Process Control Block (PCB)
◾ Each process is represented by a Process Control Block (PCB)
◾ Also known as task control block
◾ PCB contains
◾ Process state
◾ Program counter
◾ CPU registers
◾ CPU Scheduling information
◾ Memory management information
◾ Accounting information
◾ I/O status information
◾ The PCB is a repository for any information that varies from process to process
Process State
Process Number
Program Counter
Registers
Memory Limits
List of Open Files
. . . .
Fig. Process Control Block
Scheduling Queues
◾ Job Queue — contains all processes in the system
◾ Ready Queue — processes that are ready to execute and are waiting for the CPU
◾ Device Queue — processes waiting for a particular I/O device
Fig. Different Scheduling Queues
How A Process Is Scheduled?
◾ A newly admitted process enters the ready queue, where it waits until it is selected (dispatched) to the CPU
◾ Once running, one of several events may occur:
◾ The process issues an I/O request and is placed in an I/O queue until the I/O completes
◾ The process's time slice expires and it is moved back to the ready queue
◾ The process forks a child and waits for the child to execute
◾ The process waits for an interrupt; when the interrupt occurs, it is moved back to the ready queue
Fig. Queueing diagram of process scheduling
Who Schedules The Process?
◾ Scheduler
◾ Selects processes from different queues for scheduling purpose
◾ Types of schedulers
◾ Long-term Scheduler (Job Scheduler)
◾ Selects processes from job pool (job queue) and loads them into main memory for execution
◾ Less frequently executed
◾ Controls the degree of multiprogramming (no. of processes in main memory)
◾ Short-term Scheduler (CPU Scheduler)
◾ Selects processes from ready queue and allocates CPU to one of them.
◾ Most frequently executed
◾ Medium-term Scheduler (Memory Scheduler)
◾ The medium-term scheduler is responsible for swapping a process out of main memory to secondary memory and back.
◾ It helps maintain a good balance between I/O-bound and CPU-bound processes.
Context Switch
◾ When an interrupt occurs, the system saves the current context of the process running on the CPU
◾ The context is represented in the PCB
◾ We perform
◾ State save of the current state of the process
◾ State restore to resume operations
◾ Context switch
◾ Switching the CPU to another process requires performing a state save of the current
process and a state restore of a different process
◾ Context-switch time is pure overhead
◾ Highly dependent on hardware support
Context Switch
Fig. CPU Switching from Process to Process
Process Scheduling
Basic Concept of Scheduling
◾ Multiprogramming concept
◾ Properties of a process/program
◾ CPU-bound program
◾ I/O-bound program
◾ CPU Scheduler selects process to execute
◾ Scheduling
◾ Non-preemptive or cooperative
◾ Preemptive
◾ Incurs a cost associated with access to shared data
◾ Affects the design of the operating-system kernel
When Is CPU Scheduling Done?
When a process:
1. Switches from the running state to the waiting state
2. Switches from the running state to the ready state
3. Switches from the waiting state to the ready state
4. Terminates
• Non-Preemptive:
• The CPU cannot be taken away from a process before the process finishes its CPU burst.
• The CPU is reallocated only when the running process terminates or switches to the waiting state.
• Preemptive:
• The OS assigns the CPU to a process for a limited period.
• The CPU can be taken away when the process switches from the running state to the ready state, or when another process moves from the waiting state to the ready state.
• This switching happens because the CPU may be given to a higher-priority process, replacing the currently running one.
Basic Concept of Scheduling
◾ Dispatcher
◾ Gives control of the CPU to the process selected by short-term scheduler
◾ Involves
◾ Switching context
◾ Switching to user mode
◾ Jumping to the proper location in the user program to restart that program
◾ Dispatch latency
◾ The time it takes for the dispatcher to stop one process and start another running
Scheduling Criteria
◾ CPU Utilization
◾ To keep the CPU as busy as possible
◾ Throughput
◾ The number of processes that are completed per unit time
◾ Turnaround time
◾ The interval from the time of submission of a process to the time of completion
◾ Turnaround time is the sum of the periods spent waiting to get into memory, waiting in the ready queue, executing on the CPU, and doing I/O
Scheduling Criteria
◾ Waiting time
◾ The sum of the periods spent waiting in the ready queue
◾ Response time
◾ The time from the submission of a request until the first response is produced
Scheduling Algorithms
First-Come First-Served Scheduling (FCFS)
Shortest Job First Scheduling (SJF)
Priority Scheduling
Round Robin Scheduling (RR)
Multilevel Queue Scheduling
Multilevel Feedback Queue Scheduling
Scenario 1
◾ How movie tickets are issued?
First-Come First-Served Scheduling (FCFS)
First-Come First-Served Scheduling (FCFS)
◾ Managed with FIFO queue
◾ Working of an algorithm
◾ When a process enters the ready queue, its PCB is linked to the tail of the queue
◾ When the CPU is free, it is allocated to the process at the head of the queue
◾ The average waiting time is often quite long
First-Come First-Served Scheduling (FCFS)
◾ Average waiting time = (sum of waiting times of all processes) / (no. of processes)
◾ Total turnaround time = Σ (execution time + waiting time) over all processes
◾ Average turnaround time = (total turnaround time) / (no. of processes)
◾ Schedules processes in the order of their arrival
◾ The algorithm is non-preemptive
Remember …
◾ Process Times
◾ Burst Time (BT)
◾ Arrival Time (AT)
◾ Completion Time (CT)
◾ Turnaround Time (TT)
◾ Avg. Turnaround Time (ATT)
◾ Waiting Time (WT)
◾ Avg. Waiting Time (AWT)
TT = CT − AT
WT = TT − BT
ATT = (TT₁ + TT₂ + … + TTₙ) / n
AWT = (WT₁ + WT₂ + … + WTₙ) / n     (n = no. of processes)
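The relations above can be checked with a tiny helper. This is only an illustrative sketch: the function names and the (AT, BT, CT) tuple layout are my own choices, not from the slides.

```python
def turnaround_time(ct, at):
    """Turnaround time: completion time minus arrival time."""
    return ct - at

def waiting_time(tt, bt):
    """Waiting time: turnaround time minus burst time."""
    return tt - bt

def averages(processes):
    """processes: list of (AT, BT, CT) tuples; returns (ATT, AWT)."""
    tts = [turnaround_time(ct, at) for at, bt, ct in processes]
    wts = [waiting_time(tt, bt) for tt, (at, bt, ct) in zip(tts, processes)]
    n = len(processes)
    return sum(tts) / n, sum(wts) / n
```

For instance, feeding in the four processes of Example 1 (all arriving at time 0) returns the averages computed on that slide.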
Example 1
Suppose the following set of processes arrives in the system at time 0, in the given order.

Process  BT
P1       7
P2       10
P3       23
P4       4

Gantt Chart:
| P1 | P2 | P3 | P4 |
0    7    17   40   44

Process  BT  WT  TT
P1       7   0   7
P2       10  7   17
P3       23  17  40
P4       4   40  44
Average      16  27
Example 2
Suppose the following set of five processes arrives at different times.

Process  BT  AT  CT  TT  WT
P1       4   0   4   4   0
P2       5   1   9   8   3
P3       3   2   12  10  7
P4       5   3   17  14  9
P5       6   4   23  19  13

Gantt Chart:
| P1 | P2 | P3 | P4 | P5 |
0    4    9    12   17   23

ATT = 11
AWT = 6.4
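A minimal FCFS simulator reproduces the table above. This is a sketch; the function names and the (name, arrival, burst) tuple layout are assumptions for illustration, not part of the slides.

```python
def fcfs(processes):
    """First-Come First-Served. processes: (name, AT, BT) tuples.
    Returns rows of (name, CT, TT, WT); the CPU idles if no process
    has arrived yet."""
    time, rows = 0, []
    for name, at, bt in sorted(processes, key=lambda p: p[1]):
        time = max(time, at)        # wait for the process to arrive
        time += bt                  # run it to completion
        ct = time
        tt, wt = ct - at, ct - at - bt
        rows.append((name, ct, tt, wt))
    return rows

def avg(rows, idx):
    """Average of one column (2 = TT, 3 = WT)."""
    return sum(r[idx] for r in rows) / len(rows)
```

Running it on the five processes of Example 2 yields ATT = 11 and AWT = 6.4, matching the slide.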
First-Come First-Served Scheduling (FCFS)
• Suppose we have one CPU-bound process and many I/O-bound processes. What will be the situation?
• While the I/O-bound processes wait in the ready queue behind the CPU-bound process, the I/O devices are idle
• While all processes wait in the I/O queues, the CPU sits idle.
• Convoy Effect
• All the other processes wait for one big process to release the CPU
Scenario 2
Four users are waiting for the same machine: one just wants to book a flight ticket, one wants to place a grocery order, one wants to play Counter Strike with friends who are waiting, and one wants to watch cartoons.
Shortest Job First (SJF)
◾ Each process is associated with the length of the process's next CPU burst
◾ Shortest-next-CPU-burst algorithm
◾ The CPU is assigned to the process that has the smallest next CPU burst.
◾ If two processes have the same next CPU burst, FCFS is used to break the tie.
◾ Algorithm can be either preemptive or non-preemptive
◾ The choice arises when a new process arrives at the ready queue while a previous process
is still running
◾ Preemptive algorithm : Shortest-Remaining-Time-First (SRTF)
Example 4
◾ Suppose processes arrive in the system at different times. Find the average turnaround time and average waiting time using SJF scheduling (non-preemptive).
Process AT BT
A 0 8
B 2 5
C 1 9
D 3 2
Example 4
◾ Suppose processes arrive in the system at different times. Find the average turnaround time and average waiting time using SJF scheduling (non-preemptive).

Gantt Chart:
| A | D | B | C |
0   8   10   15   24

Process  AT  BT  CT  TT  WT
A        0   8   8   8   0
B        2   5   15  13  8
C        1   9   24  23  14
D        3   2   10  7   5

Avg. TT = 51 / 4 = 12.75
Avg. WT = 27 / 4 = 6.75
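The non-preemptive SJF schedule above can be sketched as follows; `sjf_nonpreemptive` and its dict layout are hypothetical names chosen for illustration.

```python
def sjf_nonpreemptive(processes):
    """Non-preemptive SJF. processes: dict name -> (AT, BT).
    Among the processes that have arrived, run the one with the
    shortest burst to completion (ties broken by arrival time).
    Returns dict name -> CT."""
    remaining = dict(processes)
    time, ct = 0, {}
    while remaining:
        ready = [(bt, at, n) for n, (at, bt) in remaining.items() if at <= time]
        if not ready:                       # CPU idle until the next arrival
            time = min(at for at, bt in remaining.values())
            continue
        bt, at, n = min(ready)              # shortest burst first
        time += bt
        ct[n] = time
        del remaining[n]
    return ct
```

On the four processes of Example 4 it produces the completion times in the table (A = 8, D = 10, B = 15, C = 24).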
Example 4
◾ Suppose processes arrive in the system at different times. Find the average turnaround time and average waiting time using SJF scheduling (preemptive), i.e. Shortest-Remaining-Time-First (SRTF).

Gantt Chart:
| A | B | D | B | A | C |
0   2   3   5   9   15   24

Process  AT  BT  CT  TT  WT
A        0   8   15  15  7
B        2   5   9   7   2
C        1   9   24  23  14
D        3   2   5   2   0

Avg. TT = 47 / 4 = 11.75
Avg. WT = 23 / 4 = 5.75
Example 5
An operating system uses shortest remaining time first scheduling algorithm for pre-emptive scheduling
of processes. Consider the following set of processes with their arrival times and CPU burst times (in
milliseconds):
Find the average waiting time of processes.
Process AT BT
P1 0 12
P2 2 4
P3 3 8
P4 8 4
Gantt Chart:
| P1 | P2 | P3 | P4 | P3 | P1 |
0    2    6    8    12   18   28

Process  AT  BT  CT  TT  WT
P1       0   12  28  28  16
P2       2   4   6   4   0
P3       3   8   18  15  7
P4       8   4   12  4   0

Avg. WT = (16 + 0 + 7 + 0) / 4 = 5.75
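SRTF can be simulated one time unit at a time. The sketch below (names and dict layout assumed, not from the slides) reproduces the completion times of Example 5.

```python
def srtf(processes):
    """Shortest-Remaining-Time-First, simulated one time unit at a time.
    processes: dict name -> (AT, BT); returns dict name -> CT."""
    rem = {n: bt for n, (at, bt) in processes.items()}   # remaining burst
    arr = {n: at for n, (at, bt) in processes.items()}
    time, ct = 0, {}
    while rem:
        ready = [n for n in rem if arr[n] <= time]
        if not ready:                                    # CPU idle
            time += 1
            continue
        # run the arrived process with the least remaining time
        n = min(ready, key=lambda p: (rem[p], arr[p]))
        rem[n] -= 1
        time += 1
        if rem[n] == 0:
            ct[n] = time
            del rem[n]
    return ct
```

The arrival time is used as a tie-break, which matches the FCFS convention the slides use for equal bursts.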
Scenario 3
How to share the bicycle among all four friends?
Round Robin Scheduling
◾ Designed especially for time sharing systems.
◾ FCFS + Preemption
◾ Time quantum or time slice
◾ The ready queue is treated as a circular FIFO queue: new processes are added to the tail, and the scheduler always picks the process at the head
Round Robin Scheduling
◾ Working of the Algorithm
◾ The scheduler picks the first process from the ready queue
◾ Sets a timer to interrupt after 1 time quantum
◾ Dispatches the process
◾ If the remaining CPU burst is less than one time quantum
◾ The running process releases the CPU voluntarily when it finishes
◾ The scheduler selects the next process in the ready queue
◾ If the remaining CPU burst is longer than one time quantum
◾ The timer goes off, interrupts the OS, and the currently running process is preempted
◾ A context switch is executed
◾ The preempted process is put at the tail of the ready queue
◾ The scheduler selects the next process in the ready queue
Round Robin Scheduling
Time quantum = 5 ms

Process  Burst Time
P1       13
P2       6
P3       4
P4       10

Trace (remaining time after each turn on the CPU):
P1 runs 5 ms (8 left) → P2 runs 5 ms (1 left) → P3 runs 4 ms (done) →
P4 runs 5 ms (5 left) → P1 runs 5 ms (3 left) → P2 runs 1 ms (done) →
P4 runs 5 ms (done) → P1 runs 3 ms (done)
Example
Process Burst Time
P1 18
P2 3
P3 5
Time quantum = 3 ms
P1 P2 P3 P1 P3 P1 P1 P1 P1
0 3 6 9 12 14 17 20 23 26
Avg. waiting time: [(0 + 6 + 2) + 3 + (6 + 3)] / 3 = 20 / 3 = 6.67
Total turnaround time: (8 + 18) + (3 + 3) + (9 + 5) = 46
Avg. turnaround time: 46 / 3 = 15.33
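The RR mechanics described above can be sketched with a FIFO queue. This toy simulator (assumed names; all processes arriving at time 0 in the given order) reproduces the example.

```python
from collections import deque

def round_robin(bursts, quantum):
    """Round Robin. bursts: list of (name, BT) tuples, all arriving at
    time 0 in the given order. Returns dict name -> CT.
    A preempted process rejoins at the tail of the queue."""
    queue = deque(bursts)
    time, ct = 0, {}
    while queue:
        name, rem = queue.popleft()
        run = min(quantum, rem)
        time += run
        if rem > run:
            queue.append((name, rem - run))   # preempted, back to the tail
        else:
            ct[name] = time                   # finished within its slice
    return ct
```

With P1 = 18, P2 = 3, P3 = 5 and a 3 ms quantum it gives the completion times 26, 6 and 14 used in the averages above.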
Round Robin Scheduling
◾ Average waiting time under the RR policy is often long
◾ Algorithm is preemptive
◾ The performance depends heavily on the size of the time quantum
◾ If the time quantum is made extremely small, the approach is called processor sharing: each of n processes appears to have its own processor running at 1/n the real speed
◾ The effect of context switching on the performance of RR scheduling
◾ The time quantum has to be large with respect to the context switch time, but it should
not be too large
◾ If the time quantum is too large, RR scheduling degenerates to FCFS policy.
◾ Turnaround time also depends on the size of the time quantum.
Example
◾ Suppose 6 processes are sharing the CPU in FCFS fashion. If a context switch requires 1 unit of time, calculate the average waiting time. (All times are in milliseconds.)
Process  Arrival Time  Burst Time
A        0             3
B        1             2
C        2             1
D        3             4
E        4             5
F        5             2
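One way to attack the exercise is to simulate FCFS with switch overhead. Note that where the context switches are charged is my assumption about the intended model (one switch between each pair of consecutive processes, none before the first); the slides do not pin this down.

```python
def fcfs_with_switch(processes, cs=1):
    """FCFS where each switch from one process to the next costs `cs`
    time units. processes: (name, AT, BT) tuples.
    Returns dict name -> WT (time from arrival until dispatch)."""
    time, wt = 0, {}
    first = True
    for name, at, bt in sorted(processes, key=lambda p: p[1]):
        if not first:
            time += cs              # context-switch overhead between processes
        first = False
        time = max(time, at)        # CPU idles if the process has not arrived
        wt[name] = time - at        # waited from arrival until dispatch
        time += bt
    return wt
```

Under this model the six processes wait 0, 3, 5, 6, 10 and 15 ms respectively, for an average of 6.5 ms.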
Scenario 4
What will Andy do first??
Priority Scheduling
◾ A priority is associated with each process
◾ Priority is a number assigned to the process
◾ CPU is allocated to the process with the highest priority
◾ Equal-priority processes are scheduled in FCFS manner
◾ Which number counts as the higher priority?
◾ Convention: lower numbers represent higher priority
◾ Can be either preemptive or non-preemptive
Priority Scheduling
◾ When a newly arrived process has higher priority than the priority of the currently
running process then
◾ Put the newly arrived process at the head of the ready queue and let the current process
continue. (Non-Preemptive)
OR
◾ Preempt the CPU from the currently running process and allocate it to the newly arrived process. (Preemptive)
Example 6
Assume the following set of processes arrives in the system (all at time 0). What will be the average waiting time when priority scheduling is applied?

Process  BT  Priority  CT  WT
P1       8   3         12  4
P2       3   1         3   0
P3       5   4         17  12
P4       2   5         19  17
P5       1   2         4   3

Gantt Chart:
| P2 | P5 | P1 | P3 | P4 |
0    3    4    12   17   19

Avg. Waiting Time = (4 + 0 + 12 + 17 + 3) / 5 = 36 / 5 = 7.2
Example 6
Assume the following set of processes arrives in the system. What will be the average waiting time when priority scheduling (preemptive approach) is applied?

Process  BT  AT  Priority  CT  WT
P1       8   0   3         12  4
P2       3   1   1         4   0
P3       5   2   4         17  10
P4       2   3   5         19  14
P5       1   4   2         5   0

Gantt Chart:
| P1 | P2 | P5 | P1 | P3 | P4 |
0    1    4    5    12   17   19

Avg. Waiting Time = (4 + 0 + 10 + 14 + 0) / 5 = 28 / 5 = 5.6
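Preemptive priority scheduling can be simulated unit by unit, just like SRTF but ordering by priority number. The sketch below (assumed names) reproduces the preemptive Example 6 table.

```python
def priority_preemptive(processes):
    """Preemptive priority scheduling, lower number = higher priority.
    processes: dict name -> (AT, BT, priority); simulated one time unit
    at a time. Returns dict name -> CT."""
    rem = {n: bt for n, (at, bt, pr) in processes.items()}
    time, ct = 0, {}
    while rem:
        ready = [n for n in rem if processes[n][0] <= time]
        if not ready:                        # CPU idle until an arrival
            time += 1
            continue
        # highest-priority ready process; FCFS tie-break on arrival time
        n = min(ready, key=lambda p: (processes[p][2], processes[p][0]))
        rem[n] -= 1
        time += 1
        if rem[n] == 0:
            ct[n] = time
            del rem[n]
    return ct
```

It yields the completion times 12, 4, 17, 19, 5 shown in the table above.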
Priority Scheduling
◾ Indefinite blocking or starvation
◾ Low-priority processes may be kept waiting indefinitely
◾ Solution: aging
◾ Gradually increase the priority of processes that have been waiting in the system for a long time
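Aging can be sketched as a periodic priority boost. The rule below (drop the priority number by one for every 10 time units waited, never below 0) is purely illustrative; the slides do not prescribe a specific formula.

```python
def age_priorities(waiting, boost_after=10):
    """Toy aging rule: for every `boost_after` time units a process has
    waited, its priority number drops by one (i.e. its priority rises),
    but never below 0.
    waiting: dict name -> (priority, wait_time); returns name -> priority."""
    return {n: max(0, pr - wait // boost_after)
            for n, (pr, wait) in waiting.items()}
```

A process that has waited 25 units under this rule gains two priority levels, so it eventually outranks newer arrivals and cannot starve forever.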
Threads
Threads
◾ What is a thread?
◾ A thread is a basic unit of CPU utilization, consisting of a program counter, a stack, a set of registers, and a thread ID.
◾ Traditional (heavyweight) processes have a single thread of control.
◾ A thread is also called a lightweight process.
Threads
◾ Multithreading is an ability of an OS to support multiple, concurrent paths of
execution within a single process.
◾ One process, one thread
◾ Multiple processes, one thread per process
◾ One process, multiple threads
◾ Multiple processes, multiple threads per process
Fig. Single-Threaded Approach; Fig. Multithreaded Approaches
Threads
◾ In single threaded process model, a process includes
◾ Its PCB,
◾ User address space,
◾ User and kernel stack
◾ To manage call/return behavior of the execution of the process
◾ While the process is running, it controls the processor
registers.
◾ The contents of these registers are saved when the process
is not running
Fig. Single Threaded Process Model
Threads
◾ In multithreaded process model, there is
◾ A single PCB
◾ User Address space
◾ Separate stack for each thread
◾ Separate control block of each thread
◾ All threads share the state and resources of
that process
◾ All threads reside in same address space and
have access to all data
Fig. Multithreaded Process Model
Process
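That all threads share the process's address space can be seen in a short Python sketch: two threads append into the very same list object.

```python
import threading

# Two threads of the same process share its data section,
# so both can update the same list object.
shared = []

def worker(tag, count):
    for i in range(count):
        shared.append((tag, i))   # writes into memory shared by all threads

t1 = threading.Thread(target=worker, args=("A", 3))
t2 = threading.Thread(target=worker, args=("B", 3))
t1.start(); t2.start()
t1.join(); t2.join()
# After both joins, `shared` holds all six entries, in some interleaved order.
```

Separate processes would each get their own copy of `shared`; threads do not, which is exactly the point of the model above.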
Threads
◾ Benefits of threads:
◾ It takes less time to create a new thread in an existing process than to create a brand-new process.
◾ It takes less time to terminate a thread than a process.
◾ It takes less time to switch between two threads within the same process than to switch between two processes.
◾ Threads make communication between executing entities more efficient, since threads of the same process share memory.
Types of Threads
◾ User Level Threads
◾ All of the work of thread management is carried out by the application; the kernel is not aware of the existence of the threads.
◾ Any application can be programmed to be multithreaded by using a thread library.
◾ By default, an application starts with a single thread.
◾ While the application is running, it may at any time spawn a new thread to run within the same process.
◾ Types of threads: User-Level Threads and Kernel-Level Threads
Fig. Pure User Level Thread
Types of Threads
◾ User Level Threads
◾ Advantages
◾ Thread switching does not require kernel mode privileges.
◾ Scheduling can be application specific.
◾ User level threads can run on any operating system.
◾ Disadvantages
◾ When a user level thread executes a system call, not only is that thread blocked, but also all of
the threads within the process are blocked.
◾ In a pure user-level thread strategy, a multithreaded application cannot take advantage of multiprocessing.
Types of Threads
◾ Kernel Level Threads
◾ All of the work of thread management is done by the kernel.
◾ There is no thread management code in the application level.
◾ The kernel saves context information for the process as a whole and for individual threads within the process.
◾ Scheduling is done by kernel on thread basis.
◾ Kernel can schedule multiple threads on multiple processors
◾ If one thread is blocked, kernel schedules another thread within the
same process.
◾ Transferring control from one thread to another within the same process requires a mode switch to the kernel.
Fig. Pure Kernel Level Thread
Types of Threads
◾ Combined Approach
◾ Thread creation is done completely in user space.
◾ Multiple user level threads are mapped onto some (smaller or
equal) number of kernel level threads
◾ Multiple threads within the same application can run in parallel on
multiple processors
◾ Blocking system calls need not block the entire process.
Fig. Combined Approach
Multi-Threading Models
One-to-One Model
Many-to-One Model Many-to-Many Model
Concurrency in OS
• In the world of modern computing, operating systems (OS) play a critical role in
ensuring that a computer can perform multiple tasks simultaneously.
• One of the key techniques used to achieve this is concurrency.
• Concurrency in OS allows multiple tasks or processes to run concurrently, providing
simultaneous execution and significantly improving system efficiency.
• However, the implementation of concurrency in operating systems brings its own
set of challenges and complexities.
• In this lecture, we will explore the concept of concurrency in OS, covering its principles, advantages, limitations, and the problems it presents.
What is Concurrency in OS?
• Concurrency in operating systems refers to the ability of an OS to manage and execute multiple
tasks or processes simultaneously.
• It allows multiple tasks to overlap in execution, giving the appearance of parallelism even on single-core processors.
• Concurrency is achieved through various techniques such as multitasking, multithreading, and multiprocessing.
• Multitasking involves the execution of multiple tasks by rapidly switching between them. Each task
gets a time slot, and the OS switches between them so quickly that it seems as if they are running
simultaneously.
• Multithreading takes advantage of modern processors with multiple cores. It allows different
threads of a process to run on separate cores, enabling true parallelism within a single process.
• Multiprocessing goes a step further by distributing multiple processes across multiple physical
processors or cores, achieving parallel execution at a higher level.
Why Allow Concurrent Execution?
The need for concurrent execution arises from the desire to utilize computer
resources efficiently. Here are some key reasons why concurrent execution is
essential:
• Resource Utilization:
• Concurrency ensures that the CPU, memory, and other resources are used optimally. Without
concurrency, a CPU might remain idle while waiting for I/O operations to complete, leading to inefficient
resource utilization.
• Responsiveness:
• Concurrent systems are more responsive. Users can interact with multiple applications simultaneously,
and the OS can switch between them quickly, providing a smoother user experience.
• Throughput:
• Concurrency increases the overall throughput of the system. Multiple tasks can progress simultaneously,
allowing more work to be done in a given time frame.
• Real-Time Processing:
• Certain applications, such as multimedia playback and gaming, require real-time processing. Concurrency
ensures that these applications can run without interruptions, delivering a seamless experience.
Principles of Concurrency in Operating Systems
To effectively implement concurrency, OS designers adhere to several key principles:
•Process Isolation:
• Each process should have its own memory space and resources to prevent interference between processes. This isolation is
critical to maintain system stability.
•Synchronization:
• Concurrency introduces the possibility of data races and conflicts. Synchronization mechanisms like locks, semaphores,
and mutexes are used to coordinate access to shared resources and ensure data consistency.
•Deadlock Avoidance:
• OSs implement algorithms to detect and avoid deadlock situations where processes are stuck waiting for resources
indefinitely. Deadlocks can halt the entire system.
•Fairness:
• The OS should allocate CPU time fairly among processes to prevent any single process from monopolizing system resources.
Problems in Concurrency
While concurrency offers numerous benefits, it also introduces a range of challenges and problems:
•Race Conditions:
• They occur when multiple threads or processes access shared resources simultaneously without proper synchronization. In the absence of synchronization mechanisms, race conditions can lead to unpredictable behavior and data corruption. This can result in data inconsistencies, application crashes, or even security vulnerabilities if sensitive data is involved.
•Deadlocks:
• A deadlock arises when two or more processes or threads become unable to progress as they are mutually waiting for resources that are
currently held by each other. This situation can bring the entire system to a standstill, causing disruptions and frustration for users.
•Priority Inversion:
• Priority inversion occurs when a lower-priority task temporarily holds a resource that a higher-priority task needs. This can lead to delays in
the execution of high-priority tasks, reducing system efficiency and responsiveness.
•Resource Starvation:
• Resource starvation occurs when some processes are unable to obtain the resources they need, leading to poor performance and
responsiveness for those processes. This can happen if the OS does not manage resource allocation effectively or if certain processes
monopolize resources.
Advantages of Concurrency
Concurrency in operating systems offers several distinct advantages:
•Improved Performance:
Concurrency significantly enhances system performance by effectively utilizing available resources. With multiple tasks running concurrently,
the CPU, memory, and I/O devices are continuously engaged, reducing idle time and maximizing overall throughput.
•Responsiveness:
Concurrency ensures that users enjoy fast response times, even when juggling multiple applications. The ability of the operating system to
swiftly switch between tasks gives the impression of seamless multitasking and enhances the user experience.
•Scalability:
Concurrency allows systems to scale horizontally by adding more processors or cores, making it suitable for both single-core and multi-core
environments.
•Fault Tolerance:
Concurrency contributes to fault tolerance, a critical aspect of system reliability. In multiprocessor systems, if one processor encounters a
failure, the remaining processors can continue processing tasks. This redundancy minimizes downtime and ensures uninterrupted system
operation.
Limitations of Concurrency
Despite its advantages, concurrency has its limitations:
• Complexity:
• Debugging and testing concurrent code is often more challenging than sequential code. The potential for hard-to-reproduce bugs
necessitates careful design and thorough testing.
• Overhead:
• Synchronization mechanisms introduce overhead, which can slow down the execution of individual tasks, especially in scenarios where
synchronization is excessive.
• Race Conditions:
• Dealing with race conditions requires careful consideration during design and rigorous testing to prevent data corruption and erratic
behavior.
• Resource Management:
• Balancing resource usage to prevent both resource starvation and excessive contention is a critical task. Careful resource management is
vital to maintain system stability.
Issues of Concurrency
Concurrency introduces several critical issues that OS designers and developers must address:
•Security:
• Concurrent execution may inadvertently expose data to unauthorized access or data leaks. Managing access control and data security in a concurrent environment is a non-trivial task that demands thorough consideration.
•Compatibility:
• Compatibility issues can arise when integrating legacy software into concurrent environments, potentially limiting their
performance.
•Testing and Debugging:
• Debugging concurrent code is a tough task. Identifying and reproducing race conditions and other concurrency-related
bugs can be difficult.
•Scalability:
• While concurrency can improve performance, not all applications can be easily parallelized. Identifying tasks that can be
parallelized and those that cannot is crucial in optimizing system performance.
Mutual Exclusion In OS
• Mutual exclusion locks are a frequently used OS mechanism for synchronizing processes or threads that want to access a shared resource.
• The mechanism lives up to its name: if one thread is operating on a resource, any other thread that wants to work on that resource must wait until the first thread is done.
What is Mutual Exclusion in OS?
• Mutual exclusion is the condition that no two concurrent threads of execution are ever inside the same critical section at the same time.
• A critical section is the period during which a thread uses a shared resource: a data object that several concurrent threads may attempt to alter. Concurrent read operations can be allowed, but two simultaneous writes, or a read overlapping a write, cannot, as that may lead to data inconsistency.
• Mutual exclusion in an OS is therefore designed so that while a write operation is in progress, no other thread is granted access to the object until the writer has finished its critical section and released the object, after which the remaining threads may read and write it.
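In Python, a `threading.Lock` provides exactly this kind of mutual exclusion. The counter example below is a standard illustration, not taken from the slides: without the lock, the read-modify-write on `counter` would be a race condition.

```python
import threading

counter = 0
lock = threading.Lock()

def increment(times):
    global counter
    for _ in range(times):
        with lock:                 # critical section: one thread at a time
            counter += 1

threads = [threading.Thread(target=increment, args=(10_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
# With the lock held around each update, the final count is exactly 4 * 10_000.
```

The `with lock:` block is the critical section: a thread that reaches it while another thread holds the lock blocks until the lock is released.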
Why is Mutual Exclusion Required?
• A simple example of the importance of mutual exclusion is removing nodes from a linked list shared by multiple threads.
• Deleting a node that sits between two other nodes is done by modifying the previous node's next reference to point at the succeeding node: to remove node i, node i−1's next reference is redirected to node i+1.
• When the list is shared among many threads, two threads may try to remove two adjacent nodes at the same time: the first thread sets node i−1's next reference to node i+1, while at the same moment the second thread sets node i's next reference to node i+2.
• Although both removal operations complete, the list is not in the required state: node i+1 still remains in the list, because node i−1's next reference still points at it.
• This situation is called a race condition. Race conditions can be prevented by mutual exclusion, so that simultaneous updates to the same part of the list cannot happen.
Necessary Conditions for Mutual Exclusion
Four conditions apply to mutual exclusion:
• Mutual exclusion must be ensured among processes accessing shared resources: no two processes may be inside their critical sections at the same time.
• No assumptions should be made about the relative speeds of the processes.
• A process outside its critical section must not interfere with another process's access to the critical section.
• When multiple processes request access to their critical sections, they must be granted access within finite time, i.e. they should never be kept waiting indefinitely.
Example of Mutual Exclusion
There are many mutual-exclusion mechanisms; some of them are mentioned below:
•Locks :
• It is a mechanism that applies restrictions on access to a resource when multiple
threads of execution exist.
•Recursive lock :
• A recursive lock (recursive mutex) is a mutual-exclusion device that can be locked several times by the very same process/thread without causing a deadlock. Whereas a lock operation on an ordinary mutex fails or blocks when the mutex is already locked, on a recursive mutex the operation succeeds if and only if the locking thread is the one that already holds the lock.
•Semaphore :
• A semaphore is an abstract data type designed to control access to a shared resource by
multiple threads and to avoid critical-section problems in a concurrent system such
as a multitasking operating system. Semaphores are a kind of synchronization primitive.
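The three primitives above map directly onto Python's `threading` module (`Lock`, `RLock`, `Semaphore`); the following minimal sketch, not taken from the slides, shows each in turn:

```python
import threading

# Lock: plain mutual exclusion around a shared counter.
lock = threading.Lock()
counter = 0

def increment(n):
    global counter
    for _ in range(n):
        with lock:                 # only one thread in the critical section
            counter += 1

threads = [threading.Thread(target=increment, args=(10_000,)) for _ in range(4)]
for t in threads: t.start()
for t in threads: t.join()
print(counter)  # 40000 -- no lost updates

# Recursive lock: the same thread may re-acquire it without deadlocking.
rlock = threading.RLock()

def inner():
    with rlock:
        pass

def outer():
    with rlock:
        inner()                    # re-enters the lock it already holds

outer()                            # completes; a plain Lock would deadlock here

# Semaphore: at most N threads inside the guarded region at once.
sem = threading.Semaphore(2)       # e.g. a resource with two identical units
with sem:
    pass                           # acquire one unit, release it on exit
```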
Readers-writer (RW) lock :
• A synchronization primitive that solves readers-writer problems.
• It grants concurrent access to read-only operations, while write operations require
exclusive access.
• In other words, multiple threads can read the data in parallel, but an exclusive lock is
required to write to or modify the data.
• It is typically used to control access to a data structure in memory.
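Python's standard library has no readers-writer lock, but the behaviour described above can be sketched with a condition variable (an illustrative implementation with no writer preference, so writers can starve under heavy read load):

```python
import threading

class RWLock:
    """Readers share access; a writer gets exclusive access."""
    def __init__(self):
        self._cond = threading.Condition()
        self._readers = 0
        self._writing = False

    def acquire_read(self):
        with self._cond:
            while self._writing:          # readers wait only for a writer
                self._cond.wait()
            self._readers += 1

    def release_read(self):
        with self._cond:
            self._readers -= 1
            if self._readers == 0:
                self._cond.notify_all()   # a waiting writer may proceed

    def acquire_write(self):
        with self._cond:
            while self._writing or self._readers > 0:
                self._cond.wait()         # writer needs the lock to itself
            self._writing = True

    def release_write(self):
        with self._cond:
            self._writing = False
            self._cond.notify_all()

rw = RWLock()
shared = {"value": 0}

def writer():
    rw.acquire_write()
    try:
        shared["value"] += 1              # exclusive: no half-done update is visible
    finally:
        rw.release_write()

def reader(results):
    rw.acquire_read()
    try:
        results.append(shared["value"])   # many readers may be here in parallel
    finally:
        rw.release_read()

results = []
workers = [threading.Thread(target=writer) for _ in range(5)]
workers += [threading.Thread(target=reader, args=(results,)) for _ in range(5)]
for t in workers: t.start()
for t in workers: t.join()
print(shared["value"])  # 5
```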
Thank You

Operating Systems - scheduling Details.pptx

  • 1.
    Process And ProcessScheduling Bhargavi Varala "PROCESS AND PROCESS SCHEDULING" BY Bhargavi Varala 1
  • 2.
    MODULE OBJECTIVES "PROCESS ANDPROCESS SCHEDULING" BY Bhargavi Varala 2 ◾ To introduce the notion of a process ◾ To describe the various features of processes ◾ To describe various scheduling algorithms ◾ To understand the concept of threads
  • 3.
    Recipe (Program) Recipe beingmade (Program in execution) (Process) "PROCESS AND PROCESS SCHEDULING" BY Bhargavi Varala 3
  • 4.
    What Is AProcess? ◾ Process is a program in execution ◾ Process contains ◾ Program code ◾ Program counter and registers ◾ Stack ◾ Data section ◾ Heap Stack Heap Data Text Fig. Process in memory ◾ Program is a passive entity while process is an active entity. Main Memory Program Process ◾ Two or more processes of same program might run at the same time but each of them is a separate process. "PROCESS AND PROCESS SCHEDULING" BY Bhargavi Varala 4
  • 5.
    Process States "PROCESS ANDPROCESS SCHEDULING" BY Bhargavi Varala 5
  • 6.
    Process Description: ProcessControl Block (PCB) "PROCESS AND PROCESS SCHEDULING" BY Bhargavi Varala 6 ◾ Each is represented by a Process Control Block (PCB) ◾ Also known as task control block ◾ PCB contains ◾ Process state ◾ Program counter ◾ CPU registers ◾ CPU Scheduling information ◾ Memory management information ◾ Accounting information ◾ I/O status information ◾ PCB is a repository for any information that vary from process to process Process State Process Number Program Counter Registers Memory Limits List of Open Files . . . . Fig. Process Control Block
  • 7.
    Scheduling Queues • Containsall processes Job Queue • Ready to execute • Waiting for CPU Ready Queue • Waiting for a particular device I/O device Device Queue Fig. Different Scheduling Queues "PROCESS AND PROCESS SCHEDULING" BY Bhargavi Varala 7
  • 8.
    How A ProcessIs Scheduled? "PROCESS AND PROCESS SCHEDULING" BY MS. RASHMI BHAT 8 ready queue CPU I/O I/O queue I/O request Time slice expired fork a child Wait for an interrupt Child executes Interrupt occurs Newly admitted process Dispatched Fig. Queueing diagram of process scheduling "PROCESS AND PROCESS SCHEDULING" BY Bhargavi Varala 8
  • 9.
    Who Schedules TheProcess? "PROCESS AND PROCESS SCHEDULING" BY Bhargavi Varala 9 ◾ Scheduler ◾ Selects processes from different queues for scheduling purpose ◾ Types of schedulers ◾ Long-term Scheduler (Job Scheduler) ◾ Selects processes from job pool (job queue) and loads them into main memory for execution ◾ Less frequently executed ◾ Controls the degree of multiprogramming (no. of processes in main memory) ◾ Short-term Scheduler (CPU Scheduler) ◾ Selects processes from ready queue and allocates CPU to one of them. ◾ Most frequently executed ◾ Medium-term Scheduler (Memory Scheduler) ◾ Medium-term schedulers are responsible for swapping of a process from the Main Memory to Secondary Memory and vice-versa. ◾ It is helpful in maintaining a perfect balance between the I/O bound and the CPU bound.
  • 11.
    Context Switch "PROCESS ANDPROCESS SCHEDULING" BY Bhargavi Varala 11 ◾ When interrupt occurs, the system saves the current context of a process running on the CPU ◾ The context is represented in the PCB ◾ We perform ◾ State save of the current state of the process ◾ State restore to resume operations ◾ Context switch ◾ Switching the CPU to another process requires performing a state save of the current process and a state restore of a different process ◾ Context-switch time is pure overhead ◾ Highly dependent on hardware support
  • 12.
    Context Switch Fig. CPUSwitching from Process to Process "PROCESS AND PROCESS SCHEDULING" BY Bhargavi Varala 12
  • 13.
    Process Scheduling "PROCESS ANDPROCESS SCHEDULING" BY Bhargavi Varala 13
  • 14.
    Basic Concept ofScheduling "PROCESS AND PROCESS SCHEDULING" BY Bhargavi Varala 14 ◾ Multiprogramming concept ◾ Properties of a process/program ◾ CPU-bound program ◾ I/O-bound program ◾ CPU Scheduler selects process to execute ◾ Scheduling ◾ Non-preemptive or cooperative ◾ Preemptive ◾ Incurs a cost associated with access to shared data ◾ Affects the design of the operating-system kernel When CPU Scheduling is to be done? When a process moves from 1. Running state  waiting state 2. Running state  ready state 3. Waiting state  ready state 4. Terminates
  • 15.
    • Non-Preemptive: • Inthis case, a process’s resource cannot be taken before the process has finished running. • When a running process finishes and transitions to a waiting state, resources are switched. • Preemptive: • In this case, the OS assigns resources to a process for a predetermined period. • The process switches from running state to ready state or from waiting state to ready state during resource allocation. • This switching happens because the CPU may give other processes priority and substitute the currently active process for the higher priority process. "PROCESS AND PROCESS SCHEDULING" BY Bhargavi Varala 15
  • 17.
    Basic Concept ofScheduling "PROCESS AND PROCESS SCHEDULING" BY Bhargavi Varala 17 ◾ Dispatcher ◾ Gives control of the CPU to the process selected by short-term scheduler ◾ Involves ◾ Switching context ◾ Switching to user mode ◾ Jumping to the proper location in the user program to restart that program ◾ Dispatch latency ◾ The time it takes for the dispatcher to stop one process and start another running
  • 18.
    Scheduling Criteria ◾ CPUUtilization ◾ To keep the CPU as busy as possible ◾ Throughput ◾ The number of processes that are completed per unit time ◾ Turnaround time ◾ The interval from the time of submission of a process to the time of completion Waiting to get in memory Waiting to ready queue Executing on the CPU Doing I/O Turnaround Time "PROCESS AND PROCESS SCHEDULING" BY Bhargavi Varala 18
  • 19.
    Scheduling Criteria ◾ Waitingtime ◾ The sum of the periods spent waiting in the ready queue ◾ Response time ◾ The time from the submission of a request until the first response is produced "PROCESS AND PROCESS SCHEDULING" BY Bhargavi Varala 19
  • 20.
    Scheduling Algorithms First-Come First-ServedScheduling (FCFS) Shortest Job First Scheduling (SJF) Priority Scheduling Round Robin Scheduling (RR) Multilevel Queue Scheduling Multilevel Feedback Queue Scheduling "PROCESS AND PROCESS SCHEDULING" BY Bhargavi Varala 20
  • 21.
    Scenario 1 ◾ Howmovie tickets are issued? First-Come First-Served Scheduling (FCFS) "PROCESS AND PROCESS SCHEDULING" BY Bhargavi Varala 21
  • 22.
    First-Come First-Served Scheduling(FCFS) ◾ Managed with FIFO queue ◾ Working of an algorithm ◾ When a process enters in ready queue, its PCB is linked to the tail of the queue ◾ When CPU is free, it is allocated to the process at the head of the queue ◾ Average waiting time is quite long 𝑷𝒙 . . . . . . . . . . . . . . . 𝑷𝒃 𝑷𝒂 𝑷𝒃 Busy Free Tail "PROCESS AND PROCESS SCHEDULING" BY Bhargavi Varala 22
  • 23.
    First-Come First-Served Scheduling(FCFS) ◾ Average Waiting time: 𝒔𝒖𝒎 𝒐𝒇 𝒘𝒂𝒊𝒕𝒊𝒏𝒈 𝒕𝒊𝒎𝒆 𝒐𝒇 𝒂𝒍𝒍 𝒑𝒓𝒐𝒄𝒆𝒔𝒔𝒆𝒔 𝒏𝒐. 𝒐𝒇 𝒑𝒓𝒐𝒄𝒆𝒔𝒔𝒆𝒔 ◾ Total Turnaround time: 𝒔𝒖𝒎 𝒐𝒇 𝒕𝒐𝒕𝒂𝒍 𝒆𝒙𝒆𝒄𝒖𝒕𝒊𝒐𝒏 𝒕𝒊𝒎𝒆 + 𝒘𝒂𝒊𝒕𝒊𝒏𝒈 𝒕𝒊𝒎𝒆 𝒓𝒆𝒒𝒖𝒊𝒓𝒆𝒅 𝒇𝒐𝒓 𝒑𝒓𝒐𝒄𝒆𝒔𝒔 ◾ Average Turnaround time: 𝒕𝒖𝒓𝒏𝒂𝒓𝒐𝒖𝒏𝒅 𝒕𝒊𝒎𝒆 𝒏𝒐. 𝒐𝒇 𝒑𝒓𝒐𝒄𝒆𝒔𝒔𝒆𝒔 ◾ Schedules in order of arrival of processes ◾ Algorithm is non-preemptive "PROCESS AND PROCESS SCHEDULING" BY Bhargavi Varala 23
  • 24.
    Remember … ◾ ProcessTimes ◾ Burst Time (BT) ◾ Arrival Time (AT) ◾ Completion Time (CT) ◾ Turnaround Time (TT) ◾ Avg. Turnaround Time (ATT) ◾ Waiting Time (WT) ◾ Avg. Waiting Time (AWT) 𝑻𝑻 = 𝑪𝑻 − 𝑨𝑻 𝑨𝑻𝑻 = 𝒊= 𝟏 "PROCESS AND PROCESS SCHEDULING" BY Bhargavi Varala 24 σ𝒏 𝑻𝑻𝒊 𝒏𝒐. 𝒐𝒇 𝒑𝒓𝒐𝒄𝒆𝒔𝒔𝒆𝒔 𝑾𝑻 = 𝑻𝑻 − 𝑩𝑻 𝑨𝑾𝑻 = 𝒊 = σ𝒏 𝟏 𝑾𝑻𝒊 𝒏𝒐. 𝒐𝒇 𝒑𝒓𝒐𝒄𝒆𝒔𝒔𝒆𝒔
  • 25.
    Example 1 "PROCESS ANDPROCESS SCHEDULING" BY Bhargavi Varala 25 P1 P2 P3 P4 0 7 17 40 44 Suppose following set of processes arrived in system at time 0 in given order. Process BT WT TT P1 P2 P3 P4 Average Process BT WT TT P1 7 0 7 P2 10 7 17 P3 23 17 40 P4 4 40 44 Average 16 27 Process BT P1 7 P2 10 P3 23 P4 4 Gantt Chart
  • 26.
    Example 2 Process BTAT CT TT WT P1 4 0 P2 5 1 P3 3 2 P4 5 3 P5 6 4 P1 P2 P3 P4 P5 "PROCESS AND PROCESS SCHEDULING" BY Bhargavi Varala 26 Suppose following set of five processes arriving at different times. Ready queue
  • 27.
    Example 2 CT TTWT Process BT AT CT TT WT P1 4 0 4 4 0 P2 5 1 9 8 3 P3 3 2 12 10 7 P4 5 3 17 14 9 P5 6 4 23 19 13 P1 P2 P3 P4 P5 "PROCESS AND PROCESS SCHEDULING" BY Bhargavi Varala 27 Suppose following set of five processes arriving at different times. Ready queue ATT = 11 AWT = 6.4 P1 P2 P3 P4 P5 0 4 9 12 17 23
  • 28.
    First-Come First-Served Scheduling(FCFS) "PROCESS AND PROCESS SCHEDULING" BY Bhargavi Varala 27 • Suppose, we have one CPU-bound process and many I/O-bound processes. What will be the situation?? • When all I/O-bound processes wait in ready queue, I/O devices are idle • When all processes wait in I/O queue, CPU sits idle. • Convoy Effect • All the other processes wait for one big process to release the CPU
  • 29.
    Scenario 2 I justwant to book a flight ticket I want to place an order for grocery I want to play Counter Strike. My friends are waiting I want to see cartoon "PROCESS AND PROCESS SCHEDULING" BY Bhargavi Varala 28
  • 30.
    Shortest Job First(SJF) "PROCESS AND PROCESS SCHEDULING" BY Bhargavi Varala 29 ◾ Each process is associated with the length of the process's next CPU burst ◾ Shortest-next-CPU-burst algorithm ◾ CPU is assigned to process that has the smallest next CPU burst. ◾ If two processes have same CPU burst, FCFS is used. ◾ Algorithm can be either preemptive or non-preemptive ◾ The choice arises when a new process arrives at the ready queue while a previous process is still running ◾ Preemptive algorithm : Shortest-Remaining-Time-First (SRTF)
  • 31.
    Example 4 "PROCESS ANDPROCESS SCHEDULING" BY Bhargavi Varala 30 ◾ Suppose process are arrived in system at different times, Find average turnaround time and average waiting time using SJF scheduling (Non-preemptive). Process AT BT A 0 8 B 2 5 C 1 9 D 3 2
  • 32.
    Example 4 ◾ Supposeprocess are arrived in system at different times, Find average turnaround time and average waiting time using SJF scheduling (Non-preemptive). (SRTN) A D B C 0 8 10 15 24 CT TT WT Process AT BT CT TT WT A 0 8 8 8 0 B 2 5 15 13 8 C 1 9 24 23 14 D 3 2 10 7 5 � � 𝟓𝟏 𝑨𝒗𝒈. 𝑻𝑻 = = 𝟏𝟐. 𝟕𝟓 𝟐𝟕 𝑨𝒗𝒈. 𝑾𝑻 = = 𝟔. 𝟕𝟓 𝟒 "PROCESS AND PROCESS SCHEDULING" BY Bhargavi Varala 31
  • 33.
    Example 4 ◾ Supposeprocess are arrived in system at different times, Find average turnaround time and average waiting time using SJF scheduling (preemptive). (SRTF) 0 9 15 24 CT TT WT Process AT BT CT TT WT A 0 8 15 15 7 B 2 5 9 7 2 C 1 9 24 23 14 D 3 2 5 2 0 � � 𝟓𝟏 𝑨𝒗𝒈. 𝑻𝑻 = = 𝟏𝟏. 𝟕𝟓 𝟐𝟕 𝑨𝒗𝒈. 𝑾𝑻 = = 𝟓. 𝟕𝟓 𝟒 "PROCESS AND PROCESS SCHEDULING" BY Bhargavi Varala 32 A B D B A C 2 3 5
  • 34.
    Example 5 "PROCESS ANDPROCESS SCHEDULING" BY Bhargavi Varala 33 An operating system uses shortest remaining time first scheduling algorithm for pre-emptive scheduling of processes. Consider the following set of processes with their arrival times and CPU burst times (in milliseconds): Find the average waiting time of processes. Process AT BT P1 0 12 P2 2 4 P3 3 8 P4 8 4
  • 35.
    Example 5 An operatingsystem uses shortest remaining time first scheduling algorithm for pre-emptive scheduling of processes. Consider the following set of processes with their arrival times and CPU burst times (in milliseconds): Find the average waiting time of processes. P1 P2 P3 P4 P3 P1 "PROCESS AND PROCESS SCHEDULING" BY Bhargavi Varala 34 0 2 6 8 12 18 28 CT TT WT Process AT BT CT TT WT P1 0 12 28 28 16 P2 2 4 6 4 0 P3 3 8 18 15 7 P4 8 4 12 4 0 𝑨𝒗𝒈. 𝑾𝑻 = 𝟓. 𝟕𝟓
  • 36.
    Scenario 3 How toshare the bicycle among all four friends? "PROCESS AND PROCESS SCHEDULING" BY Bhargavi Varala 35
  • 37.
    Round Robin Scheduling ◾Designed especially for time sharing systems. ◾ FCFS + Preemption ◾ Time quantum or time slice ◾ The ready queue is treated as a circular queue ◾ The ready queue is treated as a FIFO queue of processes P1 P2 Pn . . . P3 Circular Ready Queue P1 P2 … Pn "PROCESS AND PROCESS SCHEDULING" BY Bhargavi Varala 36 Ready Queue
  • 38.
    Round Robin Scheduling ◾Working of the Algorithm ◾ The scheduler picks the first process from the ready queue ◾ Sets a timer to interrupt after 1 time quantum ◾ Dispatches the process ◾ If 𝑏𝑢𝑟𝑠𝑡_𝑡𝑖𝑚𝑒 < 1 𝑡𝑖𝑚𝑒 𝑞𝑢𝑎𝑛𝑡𝑢𝑚 ◾ Currently running process releases the CPU voluntarily ◾ Scheduler selects the next process in the ready queue. ◾ If 𝑏𝑢𝑟𝑠𝑡_𝑡𝑖𝑚𝑒 > 1 𝑡𝑖𝑚𝑒 𝑞𝑢𝑎𝑛𝑡𝑢𝑚 ◾ Timer goes off and causes interrupt to OS and preempts the currently running process ◾ On interrupt, context switch will be executed ◾ The process will be put at the tail of the ready queue ◾ Selects the next process in the ready queue. "PROCESS AND PROCESS SCHEDULING" BY Bhargavi Varala 37
  • 39.
    Round Robin Scheduling P3=4 P2=1 P1=8 P4=10 CPU P26 P3 4 P4 10 Process Burst Time P1 13 Time quantum = 5 ms P4=10 P2=1 P1=8 CPU P4=5 P1=3 CPU P2=1 P1=3 P4=5 CPU P2=6 P1=8 P4=10 P3=4 CPU P1=8 P4=5 P2=1 CPU P1=13 P4=10 P3=4 P2=6 CPU P1=3 CPU "PROCESS AND PROCESS SCHEDULING" BY Bhargavi Varala 38
  • 40.
    Example Process Burst Time P118 P2 3 P3 5 Time quantum = 3 ms P1 P2 P3 P1 P3 P1 P1 P1 P1 0 3 6 9 12 14 17 20 23 26 Avg. waiting time: (0+6+2)+(3)+(6+3) = 20/3 = 6.67 Total turnaround time: (8+18)+(3+3)+(9+5)= 46 Avg. turnaround time: 46 / 3 = 15.33 "PROCESS AND PROCESS SCHEDULING" BY Bhargavi Varala 39
  • 41.
    Round Robin Scheduling ◾Average waiting time under the RR policy is often long ◾ Algorithm is preemptive ◾ The performance depends heavily on the size of the time quantum ◾ Approach is called processor sharing ◾ The effect of context switching on the performance of RR scheduling ◾ The time quantum has to be large with respect to the context switch time, but it should not be too large ◾ If the time quantum is too large, RR scheduling degenerates to FCFS policy. ◾ Turnaround time also depends on the size of the time quantum. "PROCESS AND PROCESS SCHEDULING" BY Bhargavi Varala 40
  • 42.
    Example "PROCESS AND PROCESSSCHEDULING" BY Bhargavi Varala 41 ◾ Suppose 6 processes are sharing the CPU in FCFS fashion. If the context switch requires 1 unit time, calculate the average turn waiting time. (All times are in milliseconds) Process Name Arrival Time Burst Time A 0 3 B 1 2 C 2 1 D 3 4 E 4 5 F 5 2
  • 43.
    Scenario 4 What willAndy do first?? "PROCESS AND PROCESS SCHEDULING" BY Bhargavi Varala 42
  • 44.
    Priority Scheduling "PROCESS ANDPROCESS SCHEDULING" BY Bhargavi Varala 43 ◾ A priority is associated with each process ◾ Priority is a number assigned to the process ◾ CPU is allocated to the process with the highest priority ◾ Equal-priority processes are scheduled in FCFS manner ◾ What number is to consider as higher priority? ◾ Convention: Low numbers represent high priority order ◾ Can be either preemptive or non-preemptive
  • 45.
    Priority Scheduling "PROCESS ANDPROCESS SCHEDULING" BY Bhargavi Varala 44 ◾ When a newly arrived process has higher priority than the priority of the currently running process then ◾ Put the newly arrived process at the head of the ready queue and let the current process continue. (Non-Preemptive) OR ◾ Preempt the CPU from currently process, allocate it to the newly arrived process. (Preemptive)
  • 46.
    Example 6 P2 P5P1 P3 P4 0 3 4 12 17 19 Process BT Priority CT WT P1 8 3 12 4 P2 3 1 3 0 P3 5 4 17 12 P4 2 5 19 17 P5 1 2 4 3 Assume following set of processes are arrived in system. What will be the average waiting time when priority scheduling is applied? 4 + 0 + 12 + 17 + 3 36 Avg. Waiting Time = 5 = 5 = 7.2 "PROCESS AND PROCESS SCHEDULING" BY Bhargavi Varala 45
  • 47.
    Example 6 0 Process BTAT Priority CT WT P1 8 0 3 12 4 P2 3 1 1 4 0 P3 5 2 4 17 10 P4 2 3 5 19 14 P5 1 4 2 5 0 Assume following set of processes are arrived in system. What will be the average waiting time when priority scheduling (preemptive approach) is applied? 4 + 0 + 10 + 14 + 0 28 Avg. Waiting Time = 5 = 5 = 5.6 "PROCESS AND PROCESS SCHEDULING" BY Bhargavi Varala 46 P1 P2 P5 P1 P3 P4 1 4 5 12 17 19
  • 48.
    Priority Scheduling "PROCESS ANDPROCESS SCHEDULING" BY Bhargavi Varala 47 ◾ Indefinite blocking or starvation ◾ Keeps a low priority processes waiting indefinitely ◾ Solution : aging ◾ Gradually increasing the priority of processes that wait in the system for a long time
  • 49.
    Threads "PROCESS AND PROCESSSCHEDULING" BY Bhargavi Varala 48
  • 50.
    Threads ◾ What isthread? ◾ A thread is a basic unit of CPU utilization, consisting of a program counter, a stack, and a set of registers, ( and a thread ID.) ◾ Processes (heavyweight) have a single thread of control. ◾ Also called as lightweight process. Process Threads "PROCESS AND PROCESS SCHEDULING" BY Bhargavi Varala 49
  • 51.
    Threads ◾ Multithreading isan ability of an OS to support multiple, concurrent paths of execution within a single process. One Process One Thread Multiple Processes One Thread per process One Process Multiple Threads Multiple Processes Multiple Thread per process Fig. Single Threaded Approach Fig. Multithreaded Approaches "PROCESS AND PROCESS SCHEDULING" BY Bhargavi Varala 50
  • 52.
    Threads ◾ In singlethreaded process model, a process includes ◾ Its PCB, ◾ User address space, ◾ User and kernel stack ◾ To manage call/return behavior of the execution of the process ◾ While the process is running, it controls the processor registers. ◾ The contents of these registers are saved when the process is not running Fig. Single Threaded Process Model "PROCESS AND PROCESS SCHEDULING" BY Bhargavi Varala 51
  • 53.
    Threads ◾ In multithreadedprocess model, there is ◾ A single PCB ◾ User Address space ◾ Separate stack for each thread ◾ Separate control block of each thread ◾ All threads share the state and resources of that process ◾ All threads reside in same address space and have access to all data Fig. Multithreaded Process Model Process "PROCESS AND PROCESS SCHEDULING" BY Bhargavi Varala 52
  • 54.
    Threads "PROCESS AND PROCESSSCHEDULING" BY Bhargavi Varala 53 ◾ Benefits of threads: ◾ Takesless time to create a new thread in an existing process than to create a brand new process. ◾ Takes less time to terminate than the process. ◾ Takes less time to switch between two threads within the same process than to switch between two processes. ◾ Enhance efficient in communication between different executing programs.
  • 55.
    Types of Threads ◾User Level Threads ◾ All of the work of thread management is carried out by the application and the kernel is not aware about the existence of the threads. ◾ Any application can be programmed to be multithreaded by using thread libraries. ◾ By default an application starts with single thread. ◾ While application is running, at any time, the application may spawn a new thread to run within the same process. Threads User Level Threads Kernel Level Threads Types of Threads Fig. Pure User Level Thread "PROCESS AND PROCESS SCHEDULING" BY Bhargavi Varala 54
  • 56.
    Types of Threads "PROCESSAND PROCESS SCHEDULING" BY Bhargavi Varala 55 ◾ User Level Threads ◾ Advantages ◾ Thread switching does not require kernel mode privileges. ◾ Scheduling can be application specific. ◾ User level threads can run on any operating system. ◾ Disadvantages ◾ When a user level thread executes a system call, not only is that thread blocked, but also all of the threads within the process are blocked. ◾ In pure user level thread strategy, a multithreaded application cannot take advantage of multiprocessing OS.
  • 57.
    Types of Threads ◾Kernel Level Threads ◾ All of the work of thread management is done by the kernel. ◾ There is no thread management code in the application level. ◾ Kernel saves context information for process as a whole and for individual threads within the process. ◾ Scheduling is done by kernel on thread basis. ◾ Kernel can schedule multiple threads on multiple processors ◾ If one thread is blocked, kernel schedules another thread within the same process. ◾ Transferring control from one thread to another thread in same process requires a mode switch to the kernel. Fig. Pure Kernel Level Thread "PROCESS AND PROCESS SCHEDULING" BY Bhargavi Varala 56
  • 58.
    Types of Threads ◾Combined Approach ◾ Thread creation is done completely in user space. ◾ Multiple user level threads are mapped onto some (smaller or equal) number of kernel level threads ◾ Multiple threads within the same application can run in parallel on multiple processors ◾ Blocking system calls need not block the entire process. Fig. Combined Approach "PROCESS AND PROCESS SCHEDULING" BY Bhargavi Varala 57
  • 59.
    Multi-Threading Models One-to-One Model "PROCESSAND PROCESS SCHEDULING" BY Bhargavi Varala 58 Many-to-One Model Many-to-Many Model
  • 60.
    Concurrency in OS •In the world of modern computing, operating systems (OS) play a critical role in ensuring that a computer can perform multiple tasks simultaneously. • One of the key techniques used to achieve this is concurrency. • Concurrency in OS allows multiple tasks or processes to run concurrently, providing simultaneous execution and significantly improving system efficiency. • However, the implementation of concurrency in operating systems brings its own set of challenges and complexities. • In this lecture, we will explore the concept of concurrency in OS, exploring its principles, advantages, limitations, and the problems it presents. "PROCESS AND PROCESS SCHEDULING" BY Bhargavi Varala 60
  • 61.
    What is Concurrencyin OS? • Concurrency in operating systems refers to the ability of an OS to manage and execute multiple tasks or processes simultaneously. • It allows multiple tasks to overlap in execution, giving the appearance of parallelism even on single-core processors. • Concurrency is achieved through various techniques such as multitasking, multithreading, and multiprocessing. • Multitasking involves the execution of multiple tasks by rapidly switching between them. Each task gets a time slot, and the OS switches between them so quickly that it seems as if they are running simultaneously. • Multithreading takes advantage of modern processors with multiple cores. It allows different threads of a process to run on separate cores, enabling true parallelism within a single process. • Multiprocessing goes a step further by distributing multiple processes across multiple physical processors or cores, achieving parallel execution at a higher level. "PROCESS AND PROCESS SCHEDULING" BY Bhargavi Varala 61
  • 63.
    Why Allow ConcurrentExecution? The need for concurrent execution arises from the desire to utilize computer resources efficiently. Here are some key reasons why concurrent execution is essential: • Resource Utilization: • Concurrency ensures that the CPU, memory, and other resources are used optimally. Without concurrency, a CPU might remain idle while waiting for I/O operations to complete, leading to inefficient resource utilization. • Responsiveness: • Concurrent systems are more responsive. Users can interact with multiple applications simultaneously, and the OS can switch between them quickly, providing a smoother user experience. • Throughput: • Concurrency increases the overall throughput of the system. Multiple tasks can progress simultaneously, allowing more work to be done in a given time frame. • Real-Time Processing: • Certain applications, such as multimedia playback and gaming, require real-time processing. Concurrency ensures that these applications can run without interruptions, delivering a seamless experience. "PROCESS AND PROCESS SCHEDULING" BY Bhargavi Varala 63
  • 64.
    Principles of Concurrencyin Operating Systems To effectively implement concurrency, OS designers adhere to several key principles: •Process Isolation: • Each process should have its own memory space and resources to prevent interference between processes. This isolation is critical to maintain system stability. •Synchronization: • Concurrency introduces the possibility of data races and conflicts. Synchronization mechanisms like locks, semaphores, and mutexes are used to coordinate access to shared resources and ensure data consistency. •Deadlock Avoidance: • OSs implement algorithms to detect and avoid deadlock situations where processes are stuck waiting for resources indefinitely. Deadlocks can halt the entire system. •Fairness: • The OS should allocate CPU time fairly among processes to prevent any single process from monopolizing system resources. "PROCESS AND PROCESS SCHEDULING" BY Bhargavi Varala 64
  • 65.
    Problems in Concurrency Whileconcurrency offers numerous benefits, it also introduces a range of challenges and problems: •Race Conditions: • They occur when multiple threads or processes access shared resources simultaneously without proper synchronization. In the absence of synchronization mechanisms, race conditions can lead to unpredictable behavior and data corruption. This can result into data inconsistencies, application crashes, or even security vulnerabilities if sensitive data is involved. •Deadlocks: • A deadlock arises when two or more processes or threads become unable to progress as they are mutually waiting for resources that are currently held by each other. This situation can bring the entire system to a standstill, causing disruptions and frustration for users. •Priority Inversion: • Priority inversion occurs when a lower-priority task temporarily holds a resource that a higher-priority task needs. This can lead to delays in the execution of high-priority tasks, reducing system efficiency and responsiveness. •Resource Starvation: • Resource starvation occurs when some processes are unable to obtain the resources they need, leading to poor performance and responsiveness for those processes. This can happen if the OS does not manage resource allocation effectively or if certain processes monopolize resources. "PROCESS AND PROCESS SCHEDULING" BY Bhargavi Varala 65
  • 67.
    Advantages of Concurrency Concurrencyin operating systems offers several distinct advantages: •Improved Performance: Concurrency significantly enhances system performance by effectively utilizing available resources. With multiple tasks running concurrently, the CPU, memory, and I/O devices are continuously engaged, reducing idle time and maximizing overall throughput. •Responsiveness: Concurrency ensures that users enjoy fast response times, even when juggling multiple applications. The ability of the operating system to swiftly switch between tasks gives the impression of seamless multitasking and enhances the user experience. •Scalability: Concurrency allows systems to scale horizontally by adding more processors or cores, making it suitable for both single-core and multi-core environments. •Fault Tolerance: Concurrency contributes to fault tolerance, a critical aspect of system reliability. In multiprocessor systems, if one processor encounters a failure, the remaining processors can continue processing tasks. This redundancy minimizes downtime and ensures uninterrupted system operation. "PROCESS AND PROCESS SCHEDULING" BY Bhargavi Varala 67
  • 68.
    Limitations of Concurrency Despiteits advantages, concurrency has its limitations: • Complexity: • Debugging and testing concurrent code is often more challenging than sequential code. The potential for hard-to-reproduce bugs necessitates careful design and thorough testing. • Overhead: • Synchronization mechanisms introduce overhead, which can slow down the execution of individual tasks, especially in scenarios where synchronization is excessive. • Race Conditions: • Dealing with race conditions requires careful consideration during design and rigorous testing to prevent data corruption and erratic behavior. • Resource Management: • Balancing resource usage to prevent both resource starvation and excessive contention is a critical task. Careful resource management is vital to maintain system stability. "PROCESS AND PROCESS SCHEDULING" BY Bhargavi Varala 68
  • 69.
    Issues of Concurrency Concurrencyintroduces several critical issues that OS designers and developers must address: •Security: • Concurrent execution may inadvertently expose data to unauthorized access or data leaks. Managing access control and data security in a concurrent environment is a non-trivial task, that demands thorough consideration. •Compatibility: • Compatibility issues can arise when integrating legacy software into concurrent environments, potentially limiting their performance. •Testing and Debugging: • Debugging concurrent code is a tough task. Identifying and reproducing race conditions and other concurrency-related bugs can be difficult. •Scalability: • While concurrency can improve performance, not all applications can be easily parallelized. Identifying tasks that can be parallelized and those that cannot is crucial in optimizing system performance. "PROCESS AND PROCESS SCHEDULING" BY Bhargavi Varala 69
  • 70.
    Mutual Exclusion InOS • Mutual exclusion in OS locks is a frequently used method for synchronizing processes or threads that want to access some shared resource. • Their work justifies their name, if a thread operates on a resource, another thread that wants to do tasks on it must wait until the first one is done with its process. "PROCESS AND PROCESS SCHEDULING" BY Bhargavi Varala 70
  • 71.
    What is MutualExclusion in OS? • It’s a condition in which a thread of execution does not ever get involved in a critical section at the same time as a concurrent thread of execution so far using the critical section. • This critical section can be a period for which the thread of execution uses the shared resource which can be defined as a data object, that different concurrent threads may attempt to alter (where the number of concurrent read operations allowed is two but on the other hand two write or one read and write is not allowed, as it may guide it to data instability). • Mutual exclusion in OS is designed so that when a write operation is in the process then another thread is not granted to use the very object before the first one has done writing on the critical section after that releases the object because the rest of the processes have to read and write it. "PROCESS AND PROCESS SCHEDULING" BY Bhargavi Varala 71
  • 72.
    Why is MutualExclusion Required? • An easy example of the importance of Mutual Exclusion can be envisioned by implementing a linked list of multiple items, considering the fourth and fifth need removal. • The deletion of the node that sits between the other two nodes is done by modifying the previous node’s next reference directing the succeeding node. • In a simple explanation, whenever node “i” wants to be removed, at that moment node “ith - 1” 's next reference is modified, directing towards the node “ith + 1”. • Whenever a shared linked list is in the middle of many threads, two separate nodes can be removed by two threads at the same time meaning the first thread modifies node “ith - 1” next reference, directing towards the node “ith + 1”, at the same time second thread modifies node “ith” next reference, directing towards the node “ith + 2”. • Despite the removal of both achieved, linked lists required state is not yet attained because node “i + 1” still exists in the list, due to node “ith - 1” next reference still directing towards the node “i + 1”. • Now, this situation is called a race condition. Race conditions can be prevented by mutual exclusion so that updates at the same time cannot happen to the very bit about the list. "PROCESS AND PROCESS SCHEDULING" BY Bhargavi Varala 72
Necessary Conditions for Mutual Exclusion There are four conditions that any mutual exclusion solution must satisfy, mentioned below: • Mutual exclusion must be enforced among the processes accessing the shared resource: no two processes may be inside their critical sections at the same time. • No assumptions may be made about the relative speeds of the competing processes. • A process executing outside its critical section must not prevent another process from entering the critical section. • A process requesting entry to its critical section must be granted access within finite time, i.e. it should never be kept waiting indefinitely (bounded waiting).
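A classic algorithm that satisfies all four conditions for exactly two processes is Peterson's algorithm (not covered on this slide, shown here as an illustration). The sketch below runs it with two Python threads; it relies on CPython's GIL making the individual reads and writes appear sequentially consistent, which would not be a safe assumption for real shared-memory hardware without memory barriers.

```python
import threading

flag = [False, False]   # flag[i]: thread i wants to enter its critical section
turn = 0                # which thread must yield when both want in
counter = 0
N = 2000

def worker(i):
    global turn, counter
    other = 1 - i
    for _ in range(N):
        flag[i] = True          # announce intent to enter
        turn = other            # politely give the other thread priority
        while flag[other] and turn == other:
            pass                # busy-wait; bounded, since the other yields the turn
        counter += 1            # critical section (safe only inside the protocol)
        flag[i] = False         # exit section

t0 = threading.Thread(target=worker, args=(0,))
t1 = threading.Thread(target=worker, args=(1,))
t0.start(); t1.start()
t0.join(); t1.join()
print(counter)  # 2 * N = 4000: no increment is ever lost
```

Each condition maps onto the code: only one thread can pass the busy-wait at a time (mutual exclusion), no timing assumptions are made, a thread with `flag[i] == False` never blocks the other (no interference), and the `turn` variable guarantees a waiting thread gets in after at most one entry by its peer (bounded waiting).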
Examples of Mutual Exclusion There are many mechanisms for mutual exclusion, some of them mentioned below: • Lock : • A mechanism that restricts access to a resource when multiple threads of execution exist; a thread must acquire the lock before entering the critical section and release it on exit. • Recursive lock : • A type of mutual exclusion (mutex) device that can be locked several times by the same process/thread without causing a deadlock. A "lock" operation on an ordinary mutex fails or blocks when the mutex is already locked, whereas on a recursive mutex the operation succeeds precisely when the locking thread is the one that already holds the lock. • Semaphore : • An abstract data type designed to control access to a shared resource by multiple threads and prevent critical section problems in a concurrent system such as a multitasking operating system. Semaphores are a kind of synchronization primitive.
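The recursive-lock behaviour described above can be seen directly with Python's `threading.RLock` (a counting semaphore is similarly available as `threading.Semaphore`). In this sketch, `outer` calls `inner` while already holding the lock; with a plain `Lock` the second acquire would deadlock, but the `RLock` lets the owning thread re-enter.

```python
import threading

rlock = threading.RLock()
trace = []

def outer():
    with rlock:                # first acquisition by this thread
        trace.append("outer")
        inner()                # nested call re-acquires the same lock

def inner():
    with rlock:                # same thread already holds it: succeeds
        trace.append("inner")

outer()
print(trace)  # ['outer', 'inner']
```

The lock keeps an ownership count: it is only truly released when the owning thread has released it as many times as it acquired it.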
Readers-Writer (RW) Lock : • A synchronization primitive that solves the readers-writers problem. • It grants concurrent access to read-only operations, while write operations require exclusive access. • This means multiple threads can read the data in parallel, but an exclusive lock is required to write or modify the data. • It is commonly used to control access to a data structure in memory.
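Python's standard library has no readers-writer lock, so the sketch below builds a minimal one from a condition variable (the `RWLock` class and its method names are invented for this example; fairness and writer preference are deliberately ignored). Readers share access; a writer waits until no readers or writer holds the lock.

```python
import threading

class RWLock:
    def __init__(self):
        self._cond = threading.Condition()
        self._readers = 0          # number of active readers
        self._writing = False      # True while a writer holds the lock

    def acquire_read(self):
        with self._cond:
            while self._writing:
                self._cond.wait()
            self._readers += 1     # readers may share the lock

    def release_read(self):
        with self._cond:
            self._readers -= 1
            if self._readers == 0:
                self._cond.notify_all()   # last reader wakes a waiting writer

    def acquire_write(self):
        with self._cond:
            while self._writing or self._readers > 0:
                self._cond.wait()  # writers need exclusive access
            self._writing = True

    def release_write(self):
        with self._cond:
            self._writing = False
            self._cond.notify_all()

rw = RWLock()
data = {"value": 0}
seen = []

def writer(times):
    for _ in range(times):
        rw.acquire_write()
        data["value"] += 1         # exclusive: no lost updates
        rw.release_write()

def reader(times):
    for _ in range(times):
        rw.acquire_read()
        seen.append(data["value"]) # shared: reads run in parallel
        rw.release_read()

threads = [threading.Thread(target=writer, args=(100,)),
           threading.Thread(target=writer, args=(100,)),
           threading.Thread(target=reader, args=(100,))]
for t in threads: t.start()
for t in threads: t.join()
print(data["value"])  # 200: every write happened under exclusive access
```

Because writes are exclusive, no increment is lost, and every value a reader observes lies between 0 and the final total.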
Thank You