Rate-monotonic scheduling
Last Updated : 13 May, 2020
Rate-monotonic scheduling is a priority algorithm that belongs to the static-priority scheduling category of real-time operating
systems. It is preemptive in nature. Priority is decided according to the cycle time (period) of the processes involved: the
process with the shortest period gets the highest priority, so a process's priority is inversely proportional to its period.
If a process with higher priority becomes ready while another is running, it preempts the running process.
A set of processes is guaranteed to be schedulable if it satisfies the following condition:
U = C1/T1 + C2/T2 + ... + Cn/Tn <= n( 2^(1/n) - 1 )
where n is the number of processes in the process set, Ci is the computation time of process i, Ti is the time period of
process i, and U is the total processor utilization.
Example:
An example to illustrate the working of the rate-monotonic scheduling algorithm.
Processes Execution Time (C) Time period (T)
P1 3 20
P2 2 5
P3 2 10
n( 2^(1/n) - 1 ) = 3( 2^(1/3) - 1 ) ≈ 0.7798
U = 3/20 + 2/5 + 2/10 = 0.15 + 0.40 + 0.20 = 0.75
Since U = 0.75 does not exceed the bound of 0.7798 (and is well below 100% utilization), the process set satisfies the
schedulability condition above, and so it is schedulable.
1. Scheduling time –
To determine how long the schedule must be examined, take the LCM of the time periods of all the processes (the
hyperperiod). LCM( 20, 5, 10 ) in the above example is 20, so the schedule repeats every 20 time units.
2. Priority –
As discussed above, the priority is highest for the process with the shortest period. Thus P2 has the highest priority,
followed by P3, and lastly P1.
P2 > P3 > P1
3. Representation and flow –
(The timing diagram is not reproduced here.) Process P2 executes for 2 time units in every 5 time units, process P3 executes
for 2 time units in every 10 time units, and process P1 executes for 3 time units in every 20 time units. Keep this in mind
while following the execution of the algorithm below.
Process P2 runs first, for 2 time units, because it has the highest priority. After it completes its two units, P3 gets the
CPU and runs for 2 time units.
Since P2 has now run its 2 units within its 5-unit period and P3 its 2 units within its 10-unit period, both have met their
demands, so process P1, which has the lowest priority, gets the CPU and runs for 1 time unit. At this point the first
interval of five time units is complete. Because of its priority, P2 preempts P1 and runs for 2 units. As P3 has already
completed its 2 units for its 10-unit period, P1 then runs for its remaining 2 units, completing its execution of 3 units
in the 20-unit period.
The interval 9-10 remains idle, as no process needs the CPU. At time 10, process P2 runs for 2 units, meeting its demand for
the third interval ( 10-15 ), and process P3 then runs for 2 units, completing its execution. The interval 14-15 again
remains idle for the same reason. At time 15, process P2 executes for its final 2 units, completing its execution. This is
how rate-monotonic scheduling works.
Conditions :
The analysis of rate-monotonic scheduling assumes a few properties that every process should possess. They are :
1. Processes do not share resources with other processes.
2. Deadlines are equal to the time periods, and deadlines are deterministic.
3. The highest-priority process that is ready to run preempts all other processes.
4. Priorities are assigned to all processes according to the rate-monotonic rule (shorter period means higher priority).
Advantages :
1. It is easy to implement.
2. It is optimal among static-priority algorithms: if any static priority assignment can meet the deadlines, then the
rate-monotonic assignment can as well.
3. It takes the time periods of the processes into account, unlike time-sharing algorithms such as Round Robin, which
neglect the scheduling needs of the processes.
Disadvantages :
1. It is very difficult to support aperiodic and sporadic tasks under RMA.
2. RMA is not optimal when task periods and deadlines differ.
Earliest Deadline First (EDF) CPU scheduling algorithm
Difficulty Level : Medium
Last Updated : 13 May, 2020
Earliest Deadline First (EDF) is an optimal dynamic-priority scheduling algorithm used in real-time systems.
It can be used for both static and dynamic real-time scheduling.
EDF assigns priorities to tasks according to their absolute deadlines: the task whose deadline is closest gets the highest
priority, and priorities are reassigned dynamically as deadlines change. EDF is very efficient compared with other
scheduling algorithms in real-time systems: it can drive CPU utilization up to about 100% while still guaranteeing the
deadlines of all the tasks.
EDF does, however, incur overhead in the kernel. Under EDF, if the CPU usage is at most 100%, then all the tasks meet their
deadlines. EDF finds an optimal feasible schedule, a feasible schedule being one in which all the tasks in the system
execute within their deadlines. If EDF cannot find a feasible schedule for a task set, then no other scheduling algorithm
can produce one either. Every task that is ready for execution must announce its deadline to the scheduler when it becomes
runnable.
The EDF scheduling algorithm does not require tasks or processes to be periodic, nor does it require them to have a fixed
CPU burst time. In EDF, any executing task can be preempted whenever another task instance with an earlier deadline becomes
ready for execution. Preemption is thus allowed in the Earliest Deadline First scheduling algorithm.
Example:
Consider two processes P1 and P2.
Let the period of P1 be p1 = 50 and its processing time t1 = 25.
Let the period of P2 be p2 = 75 and its processing time t2 = 30.
Steps for solution:
1. The deadline of P1 is earlier, so initially the priority of P1 > P2.
2. P1 runs first and completes its execution of 25 time units.
3. At time 25, P2 starts to execute and runs until time 50, when P1 is released again.
4. Now, comparing the deadlines (P1, P2) = (100, 75), P2 has the earlier deadline and continues to execute.
5. P2 completes its processing at time 55.
6. P1 then executes until time 75, when P2 is released again.
7. Now, comparing the deadlines (P1, P2) = (100, 150), P1 continues to execute.
8. The above steps repeat.
9. Finally, at deadline 150 both P1 and P2 have the same deadline, so P2 continues to execute until its processing time is
used up, after which P1 executes.
Limitations of EDF scheduling algorithm:
1. Transient overload problem
2. Resource sharing problem
3. Efficient implementation problem
Thread Scheduling
Difficulty Level : Hard
Last Updated : 03 Feb, 2022
Scheduling of threads involves two boundaries of scheduling:
1. Scheduling of user-level threads (ULTs) onto kernel-level threads (KLTs) via lightweight processes (LWPs), done by the
application developer.
2. Scheduling of kernel-level threads by the system scheduler to perform different OS functions.
Lightweight Process (LWP) :
Lightweight processes are threads in user space that act as an interface for the ULTs to access the physical CPU resources.
The thread library decides which thread of a process runs on which LWP, and for how long. The number of LWPs created by the
thread library depends on the type of application. In an I/O-bound application, the number of LWPs depends on the number of
user-level threads: when an LWP blocks on an I/O operation, the thread library must create and schedule another LWP to run
the other ULTs. Thus, in an I/O-bound application, the number of LWPs equals the number of ULTs. In a CPU-bound application,
it depends only on the application. Each LWP is attached to a separate kernel-level thread.
In real-time systems, the first boundary of thread scheduling goes beyond specifying a scheduling policy and a priority. It
requires two further controls to be specified for the user-level threads: contention scope and allocation domain. These are
explained below.
1. Contention Scope :
The word contention here refers to the competition among the user-level threads for access to kernel resources. This
control defines the extent to which that contention takes place. It is set by the application developer using the thread
library. Depending on the extent of contention, it is classified as Process Contention Scope or System Contention Scope.
1. Process Contention Scope (PCS) –
Contention takes place among threads within the same process. The thread library schedules the highest-priority PCS
thread onto an available LWP (priority as specified by the application developer during thread creation).
2. System Contention Scope (SCS) –
Contention takes place among all threads in the system. In this case, every SCS thread is associated with its own LWP by
the thread library and is scheduled by the system scheduler for access to the kernel resources.
In LINUX and UNIX operating systems, the POSIX Pthreads library provides the function pthread_attr_setscope to set the
contention scope of a thread at creation time:
int pthread_attr_setscope(pthread_attr_t *attr, int scope);
1. The first parameter points to the thread attributes object with which the thread will be created.
2. The second parameter sets the contention scope for threads created with those attributes. It takes one of two values:
PTHREAD_SCOPE_SYSTEM
PTHREAD_SCOPE_PROCESS
3. If the specified scope value is not supported by the system, the function returns ENOTSUP.
2. Allocation Domain :
An allocation domain is a set of one or more resources for which a thread competes. In a multicore system, there may be one
or more allocation domains, each consisting of one or more cores. One ULT can be part of one or more allocation domains.
Because of the high complexity of the hardware and software architectural interfaces involved, this control is usually not
specified explicitly. By default, the multicore system provides an interface that determines the allocation domain of a
thread.
Consider a scenario: an operating system with three processes P1, P2, P3 and 10 user-level threads (T1 to T10) in a single
allocation domain. 100% of the CPU resources are distributed among the three processes. How much CPU is allocated to each
process and to each thread depends on the contention scope, the scheduling policy, and the priority of each thread as set
by the application developer through the thread library, and also on the system scheduler. The user-level threads have
different contention scopes.
In this case, the contention for the allocation domain takes place as follows:
1. Process P1:
All PCS threads T1, T2, T3 of process P1 compete among themselves. PCS threads of the same process can share one or more
LWPs: T1 and T2 share an LWP, while T3 is allocated a separate LWP. Between T1 and T2, allocation of kernel resources via
the LWP is based on preemptive priority scheduling by the thread library: a thread with high priority preempts low-priority
threads. However, a PCS thread of one process cannot preempt a thread of another process, even if its priority is higher.
If priorities are equal, then the allocation of ULTs to the available LWPs is based on the scheduling policy applied by the
system scheduler (not by the thread library, in this case).
2. Process P2:
The SCS threads T4 and T5 of process P2 compete with process P1 as a whole and with the SCS threads T8, T9, T10 of process
P3. The system scheduler schedules the kernel resources among P1, T4, T5, T8, T9, T10, and the PCS threads (T6, T7) of
process P3, treating each as a separate process. Here, the thread library has no control over scheduling the ULTs onto the
kernel resources.
3. Process P3:
A combination of PCS and SCS threads. If the system scheduler allocates 50% of the CPU resources to process P3, then 25% of
the resources go to the process-scoped threads and the remaining 25% to the system-scoped threads. The PCS threads T6 and
T7 share their 25% of the resources based on priority, as scheduled by the thread library. The SCS threads T8, T9, T10
divide their 25% among themselves, each accessing the kernel resources via a separate LWP and KLT; their scheduling is done
by the system scheduler.
Note:
For every system call that accesses the kernel resources, a kernel-level thread is created and associated with a separate
LWP by the system scheduler.
Number of kernel-level threads = total number of LWPs
Total number of LWPs = number of LWPs for SCS + number of LWPs for PCS
Number of LWPs for SCS = number of SCS threads
Number of LWPs for PCS = depends on the application developer
Here,
Number of SCS threads = 5
Number of LWPs for SCS = 5
Number of LWPs for PCS = 3
Total number of LWPs = 8 (5 + 3)
Number of kernel-level threads = 8
Advantages of PCS over SCS :
1. If all threads are PCS, then context switching, synchronization, and scheduling all take place within user space. This
reduces system calls and achieves better performance.
2. PCS is cheaper than SCS.
3. PCS threads share one or more available LWPs, whereas every SCS thread is associated with a separate LWP, and for every
system call a separate KLT is created.
4. The number of KLTs and LWPs created depends heavily on the number of SCS threads, which increases the kernel's
complexity in handling scheduling and synchronization. This effectively limits SCS thread creation: the number of SCS
threads should be smaller than the number of PCS threads.
5. If the system has more than one allocation domain, scheduling and synchronization of resources become more tedious;
issues arise when an SCS thread is part of more than one allocation domain.
The second boundary of thread scheduling involves CPU scheduling by the system scheduler.
More Related Content

Similar to Rate.docx

Scheduling algo(by HJ)
Scheduling algo(by HJ)Scheduling algo(by HJ)
Scheduling algo(by HJ)Harshit Jain
 
Embedded system scheduling Algorithm .pptx
Embedded system scheduling Algorithm .pptxEmbedded system scheduling Algorithm .pptx
Embedded system scheduling Algorithm .pptxSwati Shekapure
 
Operating System Scheduling
Operating System SchedulingOperating System Scheduling
Operating System SchedulingVishnu Prasad
 
Ch6
Ch6Ch6
Ch6C.U
 
3 process scheduling
3 process scheduling3 process scheduling
3 process schedulingahad alam
 
Document 14 (6).pdf
Document 14 (6).pdfDocument 14 (6).pdf
Document 14 (6).pdfRajMantry
 
Clock driven scheduling
Clock driven schedulingClock driven scheduling
Clock driven schedulingKamal Acharya
 
RTAI - Earliest Deadline First
RTAI - Earliest Deadline FirstRTAI - Earliest Deadline First
RTAI - Earliest Deadline FirstStefano Bragaglia
 
Survey of Real Time Scheduling Algorithms
Survey of Real Time Scheduling AlgorithmsSurvey of Real Time Scheduling Algorithms
Survey of Real Time Scheduling AlgorithmsIOSR Journals
 
fggggggggggggggggggggggggggggggfffffffffffffffffff
fggggggggggggggggggggggggggggggffffffffffffffffffffggggggggggggggggggggggggggggggfffffffffffffffffff
fggggggggggggggggggggggggggggggfffffffffffffffffffadugnanegero
 
Process Scheduling Algorithms for Operating Systems
Process Scheduling Algorithms for Operating SystemsProcess Scheduling Algorithms for Operating Systems
Process Scheduling Algorithms for Operating SystemsKathirvelRajan2
 
3_process_scheduling.ppt----------------
3_process_scheduling.ppt----------------3_process_scheduling.ppt----------------
3_process_scheduling.ppt----------------DivyaBorade3
 
Real Time most famous algorithms
Real Time most famous algorithmsReal Time most famous algorithms
Real Time most famous algorithmsAndrea Tino
 

Similar to Rate.docx (20)

task_sched2.ppt
task_sched2.ppttask_sched2.ppt
task_sched2.ppt
 
Scheduling algo(by HJ)
Scheduling algo(by HJ)Scheduling algo(by HJ)
Scheduling algo(by HJ)
 
exp 3.docx
exp 3.docxexp 3.docx
exp 3.docx
 
Embedded system scheduling Algorithm .pptx
Embedded system scheduling Algorithm .pptxEmbedded system scheduling Algorithm .pptx
Embedded system scheduling Algorithm .pptx
 
Operating System Scheduling
Operating System SchedulingOperating System Scheduling
Operating System Scheduling
 
Fcfs and sjf
Fcfs and sjfFcfs and sjf
Fcfs and sjf
 
RTS
RTSRTS
RTS
 
Ch6
Ch6Ch6
Ch6
 
3 process scheduling
3 process scheduling3 process scheduling
3 process scheduling
 
Document 14 (6).pdf
Document 14 (6).pdfDocument 14 (6).pdf
Document 14 (6).pdf
 
Clock driven scheduling
Clock driven schedulingClock driven scheduling
Clock driven scheduling
 
RTAI - Earliest Deadline First
RTAI - Earliest Deadline FirstRTAI - Earliest Deadline First
RTAI - Earliest Deadline First
 
Survey of Real Time Scheduling Algorithms
Survey of Real Time Scheduling AlgorithmsSurvey of Real Time Scheduling Algorithms
Survey of Real Time Scheduling Algorithms
 
3_process_scheduling.ppt
3_process_scheduling.ppt3_process_scheduling.ppt
3_process_scheduling.ppt
 
fggggggggggggggggggggggggggggggfffffffffffffffffff
fggggggggggggggggggggggggggggggffffffffffffffffffffggggggggggggggggggggggggggggggfffffffffffffffffff
fggggggggggggggggggggggggggggggfffffffffffffffffff
 
3_process_scheduling.ppt
3_process_scheduling.ppt3_process_scheduling.ppt
3_process_scheduling.ppt
 
Process Scheduling Algorithms for Operating Systems
Process Scheduling Algorithms for Operating SystemsProcess Scheduling Algorithms for Operating Systems
Process Scheduling Algorithms for Operating Systems
 
3_process_scheduling.ppt----------------
3_process_scheduling.ppt----------------3_process_scheduling.ppt----------------
3_process_scheduling.ppt----------------
 
Real Time most famous algorithms
Real Time most famous algorithmsReal Time most famous algorithms
Real Time most famous algorithms
 
Ch5
Ch5Ch5
Ch5
 

Recently uploaded

UNIT-III FMM. DIMENSIONAL ANALYSIS
UNIT-III FMM.        DIMENSIONAL ANALYSISUNIT-III FMM.        DIMENSIONAL ANALYSIS
UNIT-III FMM. DIMENSIONAL ANALYSISrknatarajan
 
MANUFACTURING PROCESS-II UNIT-2 LATHE MACHINE
MANUFACTURING PROCESS-II UNIT-2 LATHE MACHINEMANUFACTURING PROCESS-II UNIT-2 LATHE MACHINE
MANUFACTURING PROCESS-II UNIT-2 LATHE MACHINESIVASHANKAR N
 
Introduction and different types of Ethernet.pptx
Introduction and different types of Ethernet.pptxIntroduction and different types of Ethernet.pptx
Introduction and different types of Ethernet.pptxupamatechverse
 
High Profile Call Girls Nagpur Meera Call 7001035870 Meet With Nagpur Escorts
High Profile Call Girls Nagpur Meera Call 7001035870 Meet With Nagpur EscortsHigh Profile Call Girls Nagpur Meera Call 7001035870 Meet With Nagpur Escorts
High Profile Call Girls Nagpur Meera Call 7001035870 Meet With Nagpur EscortsCall Girls in Nagpur High Profile
 
High Profile Call Girls Nagpur Isha Call 7001035870 Meet With Nagpur Escorts
High Profile Call Girls Nagpur Isha Call 7001035870 Meet With Nagpur EscortsHigh Profile Call Girls Nagpur Isha Call 7001035870 Meet With Nagpur Escorts
High Profile Call Girls Nagpur Isha Call 7001035870 Meet With Nagpur Escortsranjana rawat
 
Model Call Girl in Narela Delhi reach out to us at 🔝8264348440🔝
Model Call Girl in Narela Delhi reach out to us at 🔝8264348440🔝Model Call Girl in Narela Delhi reach out to us at 🔝8264348440🔝
Model Call Girl in Narela Delhi reach out to us at 🔝8264348440🔝soniya singh
 
Introduction to IEEE STANDARDS and its different types.pptx
Introduction to IEEE STANDARDS and its different types.pptxIntroduction to IEEE STANDARDS and its different types.pptx
Introduction to IEEE STANDARDS and its different types.pptxupamatechverse
 
Decoding Kotlin - Your guide to solving the mysterious in Kotlin.pptx
Decoding Kotlin - Your guide to solving the mysterious in Kotlin.pptxDecoding Kotlin - Your guide to solving the mysterious in Kotlin.pptx
Decoding Kotlin - Your guide to solving the mysterious in Kotlin.pptxJoão Esperancinha
 
Top Rated Pune Call Girls Budhwar Peth ⟟ 6297143586 ⟟ Call Me For Genuine Se...
Top Rated  Pune Call Girls Budhwar Peth ⟟ 6297143586 ⟟ Call Me For Genuine Se...Top Rated  Pune Call Girls Budhwar Peth ⟟ 6297143586 ⟟ Call Me For Genuine Se...
Top Rated Pune Call Girls Budhwar Peth ⟟ 6297143586 ⟟ Call Me For Genuine Se...Call Girls in Nagpur High Profile
 
HARDNESS, FRACTURE TOUGHNESS AND STRENGTH OF CERAMICS
HARDNESS, FRACTURE TOUGHNESS AND STRENGTH OF CERAMICSHARDNESS, FRACTURE TOUGHNESS AND STRENGTH OF CERAMICS
HARDNESS, FRACTURE TOUGHNESS AND STRENGTH OF CERAMICSRajkumarAkumalla
 
(RIA) Call Girls Bhosari ( 7001035870 ) HI-Fi Pune Escorts Service
(RIA) Call Girls Bhosari ( 7001035870 ) HI-Fi Pune Escorts Service(RIA) Call Girls Bhosari ( 7001035870 ) HI-Fi Pune Escorts Service
(RIA) Call Girls Bhosari ( 7001035870 ) HI-Fi Pune Escorts Serviceranjana rawat
 
Software Development Life Cycle By Team Orange (Dept. of Pharmacy)
Software Development Life Cycle By  Team Orange (Dept. of Pharmacy)Software Development Life Cycle By  Team Orange (Dept. of Pharmacy)
Software Development Life Cycle By Team Orange (Dept. of Pharmacy)Suman Mia
 
Extrusion Processes and Their Limitations
Extrusion Processes and Their LimitationsExtrusion Processes and Their Limitations
Extrusion Processes and Their Limitations120cr0395
 
(SHREYA) Chakan Call Girls Just Call 7001035870 [ Cash on Delivery ] Pune Esc...
(SHREYA) Chakan Call Girls Just Call 7001035870 [ Cash on Delivery ] Pune Esc...(SHREYA) Chakan Call Girls Just Call 7001035870 [ Cash on Delivery ] Pune Esc...
(SHREYA) Chakan Call Girls Just Call 7001035870 [ Cash on Delivery ] Pune Esc...ranjana rawat
 
(ANVI) Koregaon Park Call Girls Just Call 7001035870 [ Cash on Delivery ] Pun...
(ANVI) Koregaon Park Call Girls Just Call 7001035870 [ Cash on Delivery ] Pun...(ANVI) Koregaon Park Call Girls Just Call 7001035870 [ Cash on Delivery ] Pun...
(ANVI) Koregaon Park Call Girls Just Call 7001035870 [ Cash on Delivery ] Pun...ranjana rawat
 
SPICE PARK APR2024 ( 6,793 SPICE Models )
SPICE PARK APR2024 ( 6,793 SPICE Models )SPICE PARK APR2024 ( 6,793 SPICE Models )
SPICE PARK APR2024 ( 6,793 SPICE Models )Tsuyoshi Horigome
 
247267395-1-Symmetric-and-distributed-shared-memory-architectures-ppt (1).ppt
247267395-1-Symmetric-and-distributed-shared-memory-architectures-ppt (1).ppt247267395-1-Symmetric-and-distributed-shared-memory-architectures-ppt (1).ppt
247267395-1-Symmetric-and-distributed-shared-memory-architectures-ppt (1).pptssuser5c9d4b1
 
the ladakh protest in leh ladakh 2024 sonam wangchuk.pptx
the ladakh protest in leh ladakh 2024 sonam wangchuk.pptxthe ladakh protest in leh ladakh 2024 sonam wangchuk.pptx
the ladakh protest in leh ladakh 2024 sonam wangchuk.pptxhumanexperienceaaa
 

Recently uploaded (20)

UNIT-III FMM. DIMENSIONAL ANALYSIS
UNIT-III FMM.        DIMENSIONAL ANALYSISUNIT-III FMM.        DIMENSIONAL ANALYSIS
UNIT-III FMM. DIMENSIONAL ANALYSIS
 
MANUFACTURING PROCESS-II UNIT-2 LATHE MACHINE
MANUFACTURING PROCESS-II UNIT-2 LATHE MACHINEMANUFACTURING PROCESS-II UNIT-2 LATHE MACHINE
MANUFACTURING PROCESS-II UNIT-2 LATHE MACHINE
 
Introduction and different types of Ethernet.pptx
Introduction and different types of Ethernet.pptxIntroduction and different types of Ethernet.pptx
Introduction and different types of Ethernet.pptx
 
High Profile Call Girls Nagpur Meera Call 7001035870 Meet With Nagpur Escorts
High Profile Call Girls Nagpur Meera Call 7001035870 Meet With Nagpur EscortsHigh Profile Call Girls Nagpur Meera Call 7001035870 Meet With Nagpur Escorts
High Profile Call Girls Nagpur Meera Call 7001035870 Meet With Nagpur Escorts
 
High Profile Call Girls Nagpur Isha Call 7001035870 Meet With Nagpur Escorts
High Profile Call Girls Nagpur Isha Call 7001035870 Meet With Nagpur EscortsHigh Profile Call Girls Nagpur Isha Call 7001035870 Meet With Nagpur Escorts
High Profile Call Girls Nagpur Isha Call 7001035870 Meet With Nagpur Escorts
 
Model Call Girl in Narela Delhi reach out to us at 🔝8264348440🔝
Model Call Girl in Narela Delhi reach out to us at 🔝8264348440🔝Model Call Girl in Narela Delhi reach out to us at 🔝8264348440🔝
Model Call Girl in Narela Delhi reach out to us at 🔝8264348440🔝
 
Introduction to IEEE STANDARDS and its different types.pptx
Introduction to IEEE STANDARDS and its different types.pptxIntroduction to IEEE STANDARDS and its different types.pptx
Introduction to IEEE STANDARDS and its different types.pptx
 
Decoding Kotlin - Your guide to solving the mysterious in Kotlin.pptx
Decoding Kotlin - Your guide to solving the mysterious in Kotlin.pptxDecoding Kotlin - Your guide to solving the mysterious in Kotlin.pptx
Decoding Kotlin - Your guide to solving the mysterious in Kotlin.pptx
 
Top Rated Pune Call Girls Budhwar Peth ⟟ 6297143586 ⟟ Call Me For Genuine Se...
Top Rated  Pune Call Girls Budhwar Peth ⟟ 6297143586 ⟟ Call Me For Genuine Se...Top Rated  Pune Call Girls Budhwar Peth ⟟ 6297143586 ⟟ Call Me For Genuine Se...
Top Rated Pune Call Girls Budhwar Peth ⟟ 6297143586 ⟟ Call Me For Genuine Se...
 
HARDNESS, FRACTURE TOUGHNESS AND STRENGTH OF CERAMICS
HARDNESS, FRACTURE TOUGHNESS AND STRENGTH OF CERAMICSHARDNESS, FRACTURE TOUGHNESS AND STRENGTH OF CERAMICS
HARDNESS, FRACTURE TOUGHNESS AND STRENGTH OF CERAMICS
 
(RIA) Call Girls Bhosari ( 7001035870 ) HI-Fi Pune Escorts Service
(RIA) Call Girls Bhosari ( 7001035870 ) HI-Fi Pune Escorts Service(RIA) Call Girls Bhosari ( 7001035870 ) HI-Fi Pune Escorts Service
(RIA) Call Girls Bhosari ( 7001035870 ) HI-Fi Pune Escorts Service
 
Software Development Life Cycle By Team Orange (Dept. of Pharmacy)
Software Development Life Cycle By  Team Orange (Dept. of Pharmacy)Software Development Life Cycle By  Team Orange (Dept. of Pharmacy)
Software Development Life Cycle By Team Orange (Dept. of Pharmacy)
 
Extrusion Processes and Their Limitations
Extrusion Processes and Their LimitationsExtrusion Processes and Their Limitations
Extrusion Processes and Their Limitations
 
(SHREYA) Chakan Call Girls Just Call 7001035870 [ Cash on Delivery ] Pune Esc...
(SHREYA) Chakan Call Girls Just Call 7001035870 [ Cash on Delivery ] Pune Esc...(SHREYA) Chakan Call Girls Just Call 7001035870 [ Cash on Delivery ] Pune Esc...
(SHREYA) Chakan Call Girls Just Call 7001035870 [ Cash on Delivery ] Pune Esc...
 
Roadmap to Membership of RICS - Pathways and Routes
Roadmap to Membership of RICS - Pathways and RoutesRoadmap to Membership of RICS - Pathways and Routes
Roadmap to Membership of RICS - Pathways and Routes
 
(ANVI) Koregaon Park Call Girls Just Call 7001035870 [ Cash on Delivery ] Pun...
(ANVI) Koregaon Park Call Girls Just Call 7001035870 [ Cash on Delivery ] Pun...(ANVI) Koregaon Park Call Girls Just Call 7001035870 [ Cash on Delivery ] Pun...
(ANVI) Koregaon Park Call Girls Just Call 7001035870 [ Cash on Delivery ] Pun...
 
SPICE PARK APR2024 ( 6,793 SPICE Models )
SPICE PARK APR2024 ( 6,793 SPICE Models )SPICE PARK APR2024 ( 6,793 SPICE Models )
SPICE PARK APR2024 ( 6,793 SPICE Models )
 
247267395-1-Symmetric-and-distributed-shared-memory-architectures-ppt (1).ppt
247267395-1-Symmetric-and-distributed-shared-memory-architectures-ppt (1).ppt247267395-1-Symmetric-and-distributed-shared-memory-architectures-ppt (1).ppt
247267395-1-Symmetric-and-distributed-shared-memory-architectures-ppt (1).ppt
 
the ladakh protest in leh ladakh 2024 sonam wangchuk.pptx
the ladakh protest in leh ladakh 2024 sonam wangchuk.pptxthe ladakh protest in leh ladakh 2024 sonam wangchuk.pptx
the ladakh protest in leh ladakh 2024 sonam wangchuk.pptx
 
DJARUM4D - SLOT GACOR ONLINE | SLOT DEMO ONLINE
DJARUM4D - SLOT GACOR ONLINE | SLOT DEMO ONLINEDJARUM4D - SLOT GACOR ONLINE | SLOT DEMO ONLINE
DJARUM4D - SLOT GACOR ONLINE | SLOT DEMO ONLINE
 

Rate.docx

  • 1. Rate-monotonic scheduling  Last Updated : 13 May, 2020 Rate monotonic scheduling is a priority algorithm that belongs to the static priority scheduling category of RealTime Operating Systems. It is preemptive in nature. The priority is decided according to the cycle time of the processes that are involved. If the process has a small job duration, then it has the highest priority. Thus if a process with highest priority starts execution, it will preempt the other running processes. The priority of a process is inversely proportional to the period it will run for. A set of processes can be scheduled only if they satisfy the following equation : Where n is the number of processes in the process set,Ci is the computation time of the process,Ti is the Time period for the process to run and U is the processor utilization. Example: An example to understand the working of Rate monotonic scheduling algorithm. Processes Execution Time (C) Time period (T) P1 3 20 P2 2 5 P3 2 10 n( 2^1/n - 1 ) = 3 ( 2^1/3 - 1 ) = 0.7977 U = 3/20 + 2/5 + 2/10 = 0.75 It is less than 1 or 100% utilization. The combined utilization of three processes is less than the threshold of these processes that means the above set of processes are schedulable and thus satisfies the above equation of the algorithm. 1. Scheduling time – For calculating the Scheduling time of algorithm we have to take the LCM of the Time period of all the processes. LCM ( 20, 5, 10 ) of the above example is 20. Thus we can schedule it by 20 time units. 2. Priority – As discussed above, the priority will be the highest for the process which has the least running time period. Thus P2 will have the highest priority, after that P3 and lastly P1. P2 > P3 > P1 3. Representation and flow –
  • 2. Above figure says that, Process P2 will execute two times for every 5 time units, Process P3 will execute two times for every 10 time units and Process P1 will execute three times in 20 time units. This has to be kept in mind for understanding the entire execution of the algorithm below. Process P2 will run first for 2 time units because it has the highest priority. After completing its two units, P3 will get the chance and thus it will run for 2 time units. As we know that process P2 will run 2 times in the interval of 5 time units and process P3 will run 2 times in the interval of 10 time units, they have fulfilled the criteria and thus now process P1 which has the least priority will get the chance and it will run for 1 time. And here the interval of five time units have completed. Because of its priority P2 will preempt P1 and thus will run 2 times. As P3 have completed its 2 time units for its interval of 10 time units, P1 will get chance and it will run for the remaining 2 times, completing its execution which was thrice in 20 time units. Now 9-10 interval remains idle as no process needs it. At 10 time units, process P2 will run for 2 times completing its criteria for the third interval ( 10-15 ). Process P3 will now run for two times completing its execution. Interval 14-15 will again remain idle for the same reason mentioned above. At 15 time unit, process P2 will execute for two times completing its execution.This is how the rate monotonic scheduling works. Conditions : The analysis of Rate monotonic scheduling assumes few properties that every process should possess. They are : 1. Processes involved should not share the resources with other processes. 2. Deadlines must be similar to the time periods. Deadlines are deterministic. 3. Process running with highest priority that needs to run, will preempt all the other processes. 4. Priorities must be assigned to all the processes according to the protocol of Rate monotonic scheduling. Advantages : 1. 
It is easy to implement. 2. If any static priority assignment algorithm can meet the deadlines then rate monotonic scheduling can also do the same. It is optimal. 3. It consists of calculated copy of the time periods unlike other time-sharing algorithms as Round robin which neglects the scheduling needs of the processes.
Disadvantages :
1. It is very difficult to support aperiodic and sporadic tasks under RMA.
2. RMA is not optimal when task periods and deadlines differ.

Earliest Deadline First (EDF) CPU scheduling algorithm
 Last Updated : 13 May, 2020

Earliest Deadline First (EDF) is an optimal dynamic-priority scheduling algorithm used in real-time systems. It can be used for both static and dynamic real-time scheduling. EDF assigns priorities to tasks according to their absolute deadlines: the task whose deadline is closest gets the highest priority. The priorities are assigned and changed in a dynamic fashion. EDF is very efficient compared to other scheduling algorithms in real-time systems: it can drive CPU utilization to about 100% while still guaranteeing the deadlines of all the tasks. In EDF, if the CPU utilization is not more than 100%, then all the tasks meet their deadlines. EDF finds an optimal feasible schedule, one in which all the tasks in the system execute within their deadlines. If EDF cannot find a feasible schedule for all the tasks in a real-time system, then no other task scheduling algorithm can. Every task that becomes runnable should announce its deadline to the EDF scheduler. EDF does not require tasks to be periodic, nor does it require them to have a fixed CPU burst time. In EDF, an executing task is preempted whenever another periodic instance with an earlier deadline becomes ready; preemption is allowed in the Earliest Deadline First scheduling algorithm.

Example: Consider two processes P1 and P2.
Let the period of P1 be p1 = 50 and its processing time be t1 = 25.
Let the period of P2 be p2 = 75 and its processing time be t2 = 30.

Steps for solution:
1. The deadline of P1 is earlier, so initially priority of P1 > P2.
2. P1 runs first and completes its execution of 25 time units.
3. P2 then executes until time 50, when P1 becomes ready again.
4. Now, comparing the deadlines of (P1, P2) = (100, 75), P2 continues to execute.
5. P2 completes its processing at time 55.
6. P1 executes until time 75, when P2 becomes ready again.
7. Now, again comparing the deadlines of (P1, P2) = (100, 150), P1 continues to execute.
8. The above steps repeat.
9. Finally, at time 150 both P1 and P2 have the same deadline, so P2 continues to execute until its processing time is over, after which P1 starts to execute.

Limitations of EDF scheduling algorithm:
 Transient overload problem
 Resource sharing problem
 Efficient implementation problem

Thread Scheduling
 Last Updated : 03 Feb, 2022

Scheduling of threads involves two boundary schedulings:
 Scheduling of user-level threads (ULT) to kernel-level threads (KLT) via lightweight processes (LWP) by the application developer.
 Scheduling of kernel-level threads by the system scheduler to perform different unique OS functions.

Lightweight Process (LWP) :
Lightweight processes are threads in the user space that act as an interface for the ULTs to access the physical CPU resources. The thread library schedules which thread of a process runs on which LWP and for how long. The number of LWPs created by the thread library depends on the type of application. In the case of an I/O-bound application, the number of LWPs depends on the number of user-level threads. This is because when an LWP is blocked on an I/O operation, then to invoke another ULT the thread library needs to create and schedule
another LWP. Thus, in an I/O-bound application, the number of LWPs is equal to the number of ULTs. In the case of a CPU-bound application, it depends only on the application. Each LWP is attached to a separate kernel-level thread.

In real time, the first boundary of thread scheduling goes beyond specifying the scheduling policy and the priority. It requires two controls to be specified for the user-level threads: contention scope and allocation domain. These are explained below.

1. Contention Scope : The word contention here refers to the competition among the user-level threads for access to the kernel resources. This control defines the extent to which contention takes place. It is defined by the application developer using the thread library. Depending upon the extent of contention, it is classified as Process Contention Scope and System Contention Scope.
 Process Contention Scope (PCS) – The contention takes place among threads within the same process. The thread library schedules the highest-priority PCS thread to access the resources via the available LWPs (priority as specified by the application developer during thread creation).
 System Contention Scope (SCS) – The contention takes place among all threads in the system. In this case, every SCS thread is associated with an LWP by the thread library and is scheduled by the system scheduler to access the kernel resources.

In Linux and UNIX operating systems, the POSIX Pthread library provides the function pthread_attr_setscope to define the type of contention scope for a thread during its creation:

int pthread_attr_setscope(pthread_attr_t *attr, int scope)

The first parameter denotes the thread attribute object for which the scope is defined. The second parameter defines the contention scope for the thread and takes one of two values:

PTHREAD_SCOPE_SYSTEM
PTHREAD_SCOPE_PROCESS

If the scope value specified is not supported by the system, then the function returns ENOTSUP.

2.
Allocation Domain : The allocation domain is a set of one or more resources for which a thread is competing. In a multicore system, there may be one or more allocation domains, each consisting of one or more cores. One ULT can be part of one or more allocation domains. Due to the high complexity of dealing with hardware and software architectural interfaces, this control is usually not specified; by default, a multicore system has an interface that affects the allocation domain of a thread.

Consider a scenario: an operating system with three processes P1, P2, P3 and ten user-level threads (T1 to T10) in a single allocation domain. 100% of the CPU resources will be distributed among the three processes. The amount of CPU resources allocated to each process and to each thread depends on the contention scope, the scheduling policy, and the priority of each thread as defined by the application developer using the thread library, and also on the system scheduler. These user-level threads have different contention scopes.
In this case, the contention for the allocation domain takes place as follows:

1. Process P1: All PCS threads T1, T2, T3 of process P1 compete among themselves. The PCS threads of the same process can share one or more LWPs: T1 and T2 share an LWP, and T3 is allocated a separate LWP. Between T1 and T2, allocation of kernel resources via the LWP is based on preemptive priority scheduling by the thread library: a thread with high priority preempts low-priority threads. However, thread T1 of process P1 cannot preempt thread T6 of process P3, even if the priority of T1 is greater than the priority of T6. If the priorities are equal, then the allocation of ULTs to the available LWPs follows the scheduling policy applied by the system scheduler (not by the thread library, in this case).
2. Process P2: Both SCS threads T4 and T5 of process P2 compete with process P1 as a whole and with the SCS threads T8, T9, T10 of process P3. The system scheduler schedules the kernel resources among P1, T4, T5, T8, T9, T10, and the PCS threads (T6, T7) of process P3, considering each as a separate process.
Here, the thread library has no control over scheduling the ULTs to the kernel resources.

3. Process P3: A combination of PCS and SCS threads. If the system scheduler allocates 50% of the CPU resources to process P3, then 25% of the resources go to the process-scoped threads and the remaining 25% to the system-scoped threads. The PCS threads T6 and T7 are allocated their 25% of the resources based on priority, by the thread library. The SCS threads T8, T9, T10 divide their 25% of the resources among themselves and access the kernel resources via separate LWPs and KLTs. SCS scheduling is done by the system scheduler.

Note: For every system call to access the kernel resources, a kernel-level thread is created and associated with a separate LWP by the system scheduler.

Number of kernel-level threads = Total number of LWPs
Total number of LWPs = Number of LWPs for SCS + Number of LWPs for PCS
Number of LWPs for SCS = Number of SCS threads
Number of LWPs for PCS = Depends on the application developer

Here,
Number of SCS threads = 5
Number of LWPs for PCS = 3
Number of LWPs for SCS = 5
Total number of LWPs = 8 (5 + 3)
Number of kernel-level threads = 8

Advantages of PCS over SCS :
 If all threads are PCS, then context switching, synchronization, and scheduling all take place within userspace. This reduces system calls and achieves better performance.
 PCS is cheaper than SCS.
 PCS threads share one or more available LWPs, whereas every SCS thread is associated with a separate LWP, and for every system call a separate KLT is created.
 The number of KLTs and LWPs created depends heavily on the number of SCS threads created. This increases the kernel complexity of handling scheduling and synchronization, and results in a practical limitation on SCS thread creation: the number of SCS threads should be smaller than the number of PCS threads.
 If the system has more than one allocation domain, then scheduling and synchronization of resources become more tedious.
 Issues arise when an SCS thread is a part of more than one allocation domain.

The second boundary of thread scheduling involves CPU scheduling by the system scheduler.