Memory Allocation for a Process
• Code- contains the program code; also called the text segment
• Data- contains global variables
• Heap- used for dynamic allocation during runtime
• Stack- contains temporary data such as function parameters, return addresses, and local variables
Scheduling Queues
• Job Queue
• A newly created process joins the job queue
• Ready Queue
• Ready to run processes are kept in the ready queue
• Scheduler dispatches processes from this to the CPU in accordance with the scheduling
algorithm
• Device Queue
• Each I/O device has its own queue. Processes must wait for access if another process is using
the device
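The three queue types above can be modeled with ordinary FIFO queues. A minimal sketch (the process and device names are illustrative, not from the slides):

```python
from collections import deque

# One queue per role: job queue for new processes, ready queue for
# runnable ones, and a separate queue per I/O device.
job_queue = deque()
ready_queue = deque()
device_queues = {"disk": deque(), "printer": deque()}

# A new process first joins the job queue ...
job_queue.append("P1")
# ... is later admitted to the ready queue by the scheduler ...
ready_queue.append(job_queue.popleft())
# ... and moves to a device queue when it requests busy I/O.
device_queues["disk"].append(ready_queue.popleft())

print(list(device_queues["disk"]))  # P1 now waits for the disk
```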
Life Cycle of a Process
• States- New, Ready, Running, Waiting, Terminated
• New → Ready- process is allotted memory & resources (leaves the Job Queue)
• Ready → Running- processor is allotted (dispatched from the Ready Queue)
• Running → Ready- time out
• Running → Waiting- waiting for an I/O event (joins the Waiting Queue/Device Queue)
• Waiting → Ready- I/O event completed
• Running → Terminated- execution completed
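The state diagram can be encoded as a small transition table; the event names below are paraphrased from the diagram labels, not an OS API:

```python
# (current state, event) -> next state, following the life-cycle diagram.
TRANSITIONS = {
    ("New", "admitted"): "Ready",
    ("Ready", "dispatched"): "Running",
    ("Running", "time out"): "Ready",
    ("Running", "I/O wait"): "Waiting",
    ("Waiting", "I/O completed"): "Ready",
    ("Running", "completed"): "Terminated",
}

def step(state, event):
    """Apply one transition; raises KeyError on an illegal move."""
    return TRANSITIONS[(state, event)]

# A process that times out once, then does I/O, then finishes:
s = "New"
for e in ["admitted", "dispatched", "time out", "dispatched",
          "I/O wait", "I/O completed", "dispatched", "completed"]:
    s = step(s, e)
print(s)  # Terminated
```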
Process Control Block
• Process State- current state of the process
• Process Privileges- access privileges
• Process ID- ID of the process
• Program Counter- address of the next instruction to be executed
• CPU Registers- values of the CPU registers
• Scheduling Information- priority value, pointer to the scheduling queue
• Memory Management Information- values of base & limit registers, paging/segmentation information
• Accounting Information- CPU or real time used, time limits, job/process numbers
• I/O Information- list of allocated I/O devices, list of open files
• Any other information
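The PCB fields listed above can be sketched as a plain record type. This is only an illustration of the structure; a real kernel stores this in its own format (e.g. `task_struct` in Linux), and the field defaults here are assumptions:

```python
from dataclasses import dataclass, field

@dataclass
class ProcessControlBlock:
    pid: int                       # Process ID
    state: str = "New"             # Process State
    program_counter: int = 0       # address of next instruction
    registers: dict = field(default_factory=dict)   # CPU register values
    priority: int = 0              # Scheduling Information
    base: int = 0                  # Memory Management Information
    limit: int = 0
    cpu_time_used: int = 0         # Accounting Information
    open_files: list = field(default_factory=list)  # I/O Information

pcb = ProcessControlBlock(pid=42, priority=1)
print(pcb.state)  # New
```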
Types of Scheduling
• Combines long-term and short-term scheduling for efficient resource management
• The long-term scheduler selects processes to bring into main memory from secondary storage
• The short-term scheduler handles execution of processes already in memory
• Sometimes a medium-term scheduler is also used to temporarily suspend or swap out
processes to manage limited RAM when the system is under memory pressure
• Reduces memory contention by limiting the number of active processes in the system
• Ideal for systems with heavy I/O or limited physical memory, such as-
• Cloud and Virtualized Environments
• High-Performance Computing (HPC) Systems
• Embedded Systems
Process Scheduler
• The process scheduler decides which processes to admit into the ready queue
• It is also called the long-term scheduler, as it is invoked only to bring a process into
main memory
• It needs to be invoked less frequently for the same reason; hence, it can be slower
than the CPU scheduler
• The process scheduler decides how many processes are admitted into the system
• This is called the 'degree of multiprogramming'
• The admission of a process may depend on the availability of the needed
resources
CPU Scheduler
• The CPU scheduler decides which process gets the CPU next
• It is called the short-term scheduler, as it only schedules the next CPU burst
• It needs to be invoked more frequently for the same reason; hence, it must be
faster than the process scheduler
• The next process to be scheduled is decided by the scheduling policy of the
operating system
• Examples- First-Come-First-Served (FCFS), Shortest Job First (SJF), Shortest Remaining Time
First (SRTF), Round Robin (RR), Priority Scheduling, etc.
• Some systems also have a medium-term scheduler, which swaps processes in and
out of main memory to decrease/increase the degree of multiprogramming
Preemptive vs. Non-preemptive Scheduling
• Each process that has been allotted CPU time is given some resources that are
required to run that process
• An operating system may adopt a resource-allocation policy that is-
• Preemptive- the allocated resources (including the CPU) can be revoked from a process
Example- RR, SRTF, preemptive priority scheduling
• Non-preemptive- the allocated resources cannot be revoked until the process releases them
Example- FCFS, SJF and non-preemptive priority scheduling
• Preemptive scheduling ensures that no process monopolizes the CPU and improves
average response time
• Non-preemptive scheduling avoids starvation due to priority
preferences and incurs fewer context switches
Non-Preemptive Scheduling Algorithms
• First-Come-First-Served (FCFS)
• Processes are scheduled for the CPU in the order in which they arrive
• Each process, regardless of its required time, completes its execution and then leaves the CPU
• Only after the current process has completed its execution can the next process be admitted
• The convoy effect in FCFS occurs when shorter processes wait behind a long-running process,
leading to poor CPU utilization
• Shortest Job First (SJF)
• Another simple non-preemptive scheduling algorithm
• At the start, or after the completion of one process, the CPU scheduler compares the
time required by all the processes in the ready queue and admits the process that needs the
least amount of CPU time
• Non-Preemptive Priority Scheduling (NPS)
• Processes are selected based on their priority values- smaller numbers have higher priority
• A process, once dispatched to the CPU, cannot be interrupted even if a higher-priority
process arrives
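FCFS is simple enough to simulate in a few lines. A hedged sketch using the standard metrics (CT = finish time, TAT = CT − AT, WT = TAT − BT); the function and field names are illustrative:

```python
def fcfs(processes):
    """FCFS scheduling. processes: list of (pid, arrival_time, burst_time)."""
    time, results = 0, {}
    for pid, at, bt in sorted(processes, key=lambda p: p[1]):
        time = max(time, at) + bt    # wait for arrival if idle, then run to completion
        results[pid] = {"CT": time, "TAT": time - at, "WT": time - at - bt}
    return results

# (AT, BT) values taken from the exercise table:
print(fcfs([("P1", 0, 7), ("P2", 1, 6), ("P3", 2, 2), ("P4", 4, 3)]))
```

Because FCFS is non-preemptive, each process runs to completion in arrival order, so a single pass over the sorted list suffices.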
Exercise
• Apply all the scheduling algorithms on the following processes and compute the
following-

| PID | AT | BT | Priority | ST | CT | RT | WT | TAT |
|-----|----|----|----------|----|----|----|----|-----|
| P1  | 0  | 7  | 1        |    |    |    |    |     |
| P2  | 1  | 6  | 0        |    |    |    |    |     |
| P3  | 2  | 2  | 1        |    |    |    |    |     |
| P4  | 4  | 3  | 0        |    |    |    |    |     |
| Avg | -  | -  | -        |    |    |    |    |     |
Preemptive Scheduling Algorithms
• Shortest Remaining Time First (SRTF)
• A preemptive version of SJF- whenever a new process enters the ready queue, the
remaining time of all the processes is compared, and the process with the shortest remaining
time is granted CPU access
• If a shorter process enters the ready queue, CPU access of the currently running longer process
is revoked
• Round Robin (RR)
• Each process gets to access the CPU for a predetermined amount of time (Time Quantum, TQ)
• A process exits the CPU once its TQ is over or it has completed its execution (whichever comes
first); then the next process in the queue gets CPU access for TQ amount of time
• This procedure loops over all the processes in the ready queue
• Preemptive Priority Scheduling (PPS)
• A preemptive version of NPS- here, a process once dispatched to the CPU can be
interrupted when a higher-priority process joins the ready queue
Round Robin (RR): Example (Time Quantum = 3)

| PID | Arrival Time (AT) | Burst Time (BT) | Priority |
|-----|-------------------|-----------------|----------|
| P1  | 0                 | 8               | 3        |
| P2  | 1                 | 5               | 1        |
| P3  | 2                 | 5               | 2        |
| P4  | 3                 | 2               | 5        |
| P5  | 5                 | 6               | 4        |

Gantt chart-
| P1 | P2 | P3 | P4 | P1 | P5 | P2 | P3 | P1 | P5 |
0    3    6    9    11   14   17   19   21   23   26

| PID | Start Time (ST) | Completion Time (CT) = time at termination | Response Time (RT) = ST-AT | Waiting Time (WT) = CT-BT-AT = TAT-BT | Turnaround Time (TAT) = CT-AT or WT+BT |
|-----|-----------------|--------------------------------------------|----------------------------|---------------------------------------|----------------------------------------|
| P1  | 0               | 23                                         | 0                          | 15                                    | 23                                     |
| P2  | 3               | 19                                         | 2                          | 13                                    | 18                                     |
| P3  | 6               | 21                                         | 4                          | 14                                    | 19                                     |
| P4  | 9               | 11                                         | 6                          | 6                                     | 8                                      |
| P5  | 14              | 26                                         | 9                          | 15                                    | 21                                     |
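The completion times in the table can be reproduced with a short simulation. A hedged sketch (one common RR convention: newly arrived processes join the ready queue before a preempted process re-enters it, matching the Gantt chart above):

```python
from collections import deque

def round_robin(processes, tq):
    """RR scheduling. processes: list of (pid, arrival, burst), sorted by arrival."""
    remaining = {pid: bt for pid, at, bt in processes}
    arrivals = deque(processes)
    ready, time, completion = deque(), 0, {}
    while arrivals or ready:
        if not ready:                    # CPU idle until the next arrival
            time = max(time, arrivals[0][1])
        while arrivals and arrivals[0][1] <= time:
            ready.append(arrivals.popleft()[0])
        pid = ready.popleft()
        run = min(tq, remaining[pid])    # run for one quantum or to completion
        time += run
        remaining[pid] -= run
        while arrivals and arrivals[0][1] <= time:  # newcomers queue up before the
            ready.append(arrivals.popleft()[0])     # preempted process re-enters
        if remaining[pid] > 0:
            ready.append(pid)
        else:
            completion[pid] = time
    return completion

procs = [("P1", 0, 8), ("P2", 1, 5), ("P3", 2, 5), ("P4", 3, 2), ("P5", 5, 6)]
print(round_robin(procs, 3))  # {'P4': 11, 'P2': 19, 'P3': 21, 'P1': 23, 'P5': 26}
```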
Shortest Remaining Time First (SRTF): Example

| PID | Arrival Time (AT) | Burst Time (BT) | Priority |
|-----|-------------------|-----------------|----------|
| P1  | 0                 | 8               | 3        |
| P2  | 1                 | 5               | 1        |
| P3  | 2                 | 5               | 2        |
| P4  | 3                 | 2               | 5        |
| P5  | 5                 | 6               | 4        |

Gantt chart-
| P1 | P2 | P4 | P2 | P3 | P5 | P1 |
0    1    3    5    8    13   19   26

| PID | Start Time (ST) | Completion Time (CT) = time at termination | Response Time (RT) = ST-AT | Waiting Time (WT) = CT-BT-AT | Turnaround Time (TAT) = CT-AT or WT+BT |
|-----|-----------------|--------------------------------------------|----------------------------|------------------------------|----------------------------------------|
| P1  | 0               | 26                                         | 0                          | 18                           | 26                                     |
| P2  | 1               | 8                                          | 0                          | 2                            | 7                                      |
| P3  | 8               | 13                                         | 6                          | 6                            | 11                                     |
| P4  | 3               | 5                                          | 0                          | 0                            | 2                                      |
| P5  | 13              | 19                                         | 8                          | 8                            | 14                                     |
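SRTF can be simulated one time unit at a time, re-evaluating the shortest remaining burst at every tick. A hedged sketch reproducing the completion times above (ties broken by arrival time, an assumption consistent with the example):

```python
def srtf(processes):
    """SRTF scheduling. processes: list of (pid, arrival, burst)."""
    remaining = {pid: bt for pid, at, bt in processes}
    time, done, completion = 0, 0, {}
    while done < len(processes):
        # all arrived processes that still need CPU time
        ready = [(remaining[pid], at, pid) for pid, at, bt in processes
                 if at <= time and remaining[pid] > 0]
        if not ready:          # CPU idle: advance the clock
            time += 1
            continue
        _, _, pid = min(ready)  # shortest remaining time wins
        remaining[pid] -= 1
        time += 1
        if remaining[pid] == 0:
            completion[pid] = time
            done += 1
    return completion

procs = [("P1", 0, 8), ("P2", 1, 5), ("P3", 2, 5), ("P4", 3, 2), ("P5", 5, 6)]
print(srtf(procs))
```

Rescanning the ready list every tick is inefficient but makes the preemption decision explicit: the running process keeps the CPU only while no other process has a shorter remaining time.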
Exercise
• Apply all the scheduling algorithms on the following processes and compute the
following-

| PID | AT | BT | Priority | ST | CT | RT | WT | TAT |
|-----|----|----|----------|----|----|----|----|-----|
| P1  | 0  | 7  | 1        |    |    |    |    |     |
| P2  | 1  | 6  | 0        |    |    |    |    |     |
| P3  | 2  | 2  | 1        |    |    |    |    |     |
| P4  | 4  | 3  | 0        |    |    |    |    |     |
| Avg | -  | -  | -        |    |    |    |    |     |
Multiple Queues
• Divides processes into distinct queues based on priority or type
• Each queue can have its own scheduling algorithm, such as-
• RR for foreground processes
• FCFS for background processes
• Priority Scheduling (PS) for system processes
• Offers better resource management by isolating processes with different needs
• Processes are statically or dynamically assigned to queues using predefined criteria
• Helps in
• Preventing Processes from Starving Indefinitely
• Regulating CPU usage
• Fair CPU Time Allocation to All Users in Multi-user Systems
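A minimal sketch of the multilevel-queue idea, under two simplifying assumptions: fixed priority between queues (system > foreground > background) and plain FIFO inside each queue; the queue and process names are illustrative:

```python
from collections import deque

queues = {
    "system": deque(),       # would typically use priority scheduling
    "foreground": deque(),   # would typically use Round Robin
    "background": deque(),   # would typically use FCFS
}
ORDER = ["system", "foreground", "background"]  # fixed priority between queues

def next_process():
    """Pick from the highest-priority non-empty queue."""
    for level in ORDER:
        if queues[level]:
            return queues[level].popleft()
    return None

queues["background"].append("batch_job")
queues["foreground"].append("editor")
print(next_process())  # editor (foreground outranks background)
```

With strictly fixed priority between queues, background processes can starve, which is why real systems often add aging or move processes between queues dynamically.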
Threads
• A thread is a subset of a process that has its own ID, program counter, registers and
stack, and shares the other resources (code, data, files, etc.) with the parent process
• Threads are created through a thread library (e.g., pthread_create() on POSIX systems);
the fork() system call, by contrast, creates a new process
• Advantages
• Responsiveness- by enabling concurrent execution, avoiding application freeze
• Resource sharing- share the same memory and resources of the parent process, reducing
duplication
• Economy- faster than processes to create and switch between, saving computational overhead
• Scalability- leverage multi-core processors for parallelism, improving scalability
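The resource-sharing advantage can be seen directly: threads created by one process all read and write the same memory. A small sketch using Python's standard `threading` module (the variable names are illustrative):

```python
import threading

# All threads share the parent process's data, so every worker can
# append to the same list without any copying or message passing.
totals = []

def worker(n):
    totals.append(n * n)   # writes to memory shared with the main thread

threads = [threading.Thread(target=worker, args=(i,)) for i in range(4)]
for t in threads:
    t.start()
for t in threads:          # wait for all workers to finish
    t.join()

print(sorted(totals))  # [0, 1, 4, 9]
```

Note that shared memory cuts both ways: concurrent writes to the same structure generally need synchronization (locks), which separate processes avoid by not sharing memory at all.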
Threads
• User Threads
• User threads are managed at the application level without kernel involvement
• Switching between user threads is faster as it avoids kernel overhead
• User threads lack direct support for hardware resources like multiple processors
• Kernel Threads
• Kernel threads are managed directly by the operating system's kernel
• They can run on multiple processors, enabling true parallelism in execution
• Kernel thread management incurs higher overhead due to system calls
• Their mappings can be-
• Many-to-One (Many user threads mapped to one kernel thread)
• One-to-One (One user thread mapped to one kernel thread)
• Many-to-Many (Many user threads mapped to many kernel threads)
Mapping of User Threads to Kernel Threads
• One-to-One Mapping
• Each user thread maps to a kernel thread, allowing true parallelism
• Provides efficient use of multiprocessors but incurs higher resource costs
• Used by applications needing high responsiveness and parallelism, like GUI programs or real-time
systems
• Many-to-One Mapping
• Multiple user threads map to a single kernel thread
• Minimizes kernel involvement but can lead to poor performance due to blocking
• Applications requiring lightweight threads without frequent context switching use this model
• Many-to-Many Mapping
• Multiple user threads map to multiple kernel threads dynamically
• Balances parallelism and resource efficiency, ideal for scalable applications
• Applications with high scalability needs, such as database servers or scientific computations,
use this mapping for efficient resource sharing