© 2017 VMware Inc. All rights reserved.
Understanding SCHED_DEADLINE
Controlling CPU Bandwidth
Steven Rostedt
26/9/2017
What is SCHED_DEADLINE?
● A new scheduling class (in mainline since v3.14)
– The others are: SCHED_OTHER/NORMAL, SCHED_FIFO, SCHED_RR
● SCHED_IDLE, SCHED_BATCH (out of scope for today)
● Constant Bandwidth Server (CBS)
● Earliest Deadline First (EDF)
Other Schedulers
● SCHED_OTHER / SCHED_NORMAL
– The Completely Fair Scheduler (CFS)
– Uses “nice” priority
– Each task gets a fair share of the CPU bandwidth
● SCHED_FIFO
– First in, first out
– Each task runs until it gives up the CPU or a higher-priority task preempts it
● SCHED_RR
– Like SCHED_FIFO, but tasks of the same priority share time slices of the CPU
Priorities
● You have two programs running on the same CPU
– One runs a nuclear power plant
● Requires 1/2 second out of every second (50% of the CPU)
– The other runs a washing machine
● Requires 50 milliseconds out of every 200 milliseconds (25% of the CPU)
– Which one gets the higher priority?
Priorities
[Timing diagram: the nuclear plant needs ½ sec out of every 1 sec]
Priorities: Nuke > Washing Machine
[Timing diagram: with the nuclear plant at higher priority, the washing machine misses its 50ms windows during the plant's ½ sec bursts]
Priorities: Nuke < Washing Machine
[Timing diagram: with the washing machine at higher priority, both tasks meet their timing requirements]
Rate Monotonic Scheduling (RMS)
● Computation time vs period
● Can be implemented with SCHED_FIFO
● The smallest period gets the highest priority
● Compute each task's computation time (C)
● Compute each task's period (T)

U = \sum_{i=1}^{n} \frac{C_i}{T_i}
Rate Monotonic Scheduling (RMS)
● Add a dishwasher to the mix...
● Nuclear power plant: C = 500ms, T = 1000ms (50% of the CPU)
● Dishwasher: C = 300ms, T = 900ms (33.3333% of the CPU)
● Washing machine: C = 100ms, T = 800ms (12.5% of the CPU)

U = \frac{500}{1000} + \frac{300}{900} + \frac{100}{800} = 0.958333
Rate Monotonic Scheduling (RMS)
[Timeline diagrams, seconds 0–13: stepping through the RMS schedule of the three tasks until a deadline is missed]
FAILED!
Rate Monotonic Scheduling (RMS)
● Computation time vs period
● Can be implemented with SCHED_FIFO
● The smallest period gets the highest priority
● Compute each task's computation time (C)
● Compute each task's period (T)

U = \sum_{i=1}^{n} \frac{C_i}{T_i} \le n(\sqrt[n]{2} - 1)
Rate Monotonic Scheduling (RMS)
● Add a dishwasher to the mix...
● Nuclear power plant: C = 500ms, T = 1000ms (50% of the CPU)
● Dishwasher: C = 300ms, T = 900ms (33.3333% of the CPU)
● Washing machine: C = 100ms, T = 800ms (12.5% of the CPU)

U = \frac{500}{1000} + \frac{300}{900} + \frac{100}{800} = 0.958333

U \le n(\sqrt[n]{2} - 1) = 3(\sqrt[3]{2} - 1) = 0.77976

0.958333 > 0.77976: the task set exceeds the RMS bound, so RMS cannot guarantee it.
Rate Monotonic Scheduling (RMS)

U = \sum_{i=1}^{n} \frac{C_i}{T_i} \le n(\sqrt[n]{2} - 1)

\lim_{n \to \infty} n(\sqrt[n]{2} - 1) = \ln 2 \approx 0.693147
SCHED_DEADLINE
● Utilizes Earliest Deadline First (EDF)
● Dynamic priorities
● The task with the earliest deadline has the highest priority
● On a single CPU, deadlines can be guaranteed all the way up to full utilization:

U = \sum_{i=1}^{n} \frac{C_i}{T_i} \le 1
Earliest Deadline First (EDF)
[Timeline diagrams, seconds 0–13: the same task set scheduled by EDF; every deadline is met]
:) HAPPY :)
Setting a RT priority
sched_setscheduler(pid_t pid, int policy, struct sched_param *param)

struct sched_param {
	u32 sched_priority;
};
Implementing SCHED_DEADLINE in Linux
Two new syscalls:

sched_getattr(pid_t pid, struct sched_attr *attr, unsigned int size, unsigned int flags)
(similar to sched_getparam(pid_t pid, struct sched_param *param))

sched_setattr(pid_t pid, struct sched_attr *attr, unsigned int flags)
(similar to sched_setparam(pid_t pid, struct sched_param *param))
Implementing SCHED_DEADLINE
struct sched_attr {
	u32 size;           /* Size of this structure */
	u32 sched_policy;   /* Policy (SCHED_*) */
	u64 sched_flags;    /* Flags */
	s32 sched_nice;     /* Nice value (SCHED_OTHER, SCHED_BATCH) */
	u32 sched_priority; /* Static priority (SCHED_FIFO, SCHED_RR) */

	/* Remaining fields are for SCHED_DEADLINE */
	u64 sched_runtime;
	u64 sched_deadline;
	u64 sched_period;
};
Implementing SCHED_DEADLINE
struct sched_attr attr;

ret = sched_getattr(0, &attr, sizeof(attr), 0);
if (ret < 0)
	error();

attr.sched_policy = SCHED_DEADLINE;
attr.sched_runtime = runtime_ns;
attr.sched_deadline = deadline_ns;

ret = sched_setattr(0, &attr, 0);
if (ret < 0)
	error();
sched_yield()
● Most use cases are buggy
– Most tasks will not give up the CPU
● SCHED_OTHER
– Gives up the current CPU time slice
● SCHED_FIFO / SCHED_RR
– Gives up the CPU to a task of the SAME PRIORITY
– Voluntary scheduling among same-priority tasks
sched_yield()
● Buggy code!

again:
	pthread_mutex_lock(&mutex_A);
	B = A->B;
	if (pthread_mutex_trylock(&B->mutex_B)) {
		pthread_mutex_unlock(&mutex_A);
		sched_yield();
		goto again;
	}
sched_yield()
● What you want for SCHED_DEADLINE!
● Tells the kernel the task is done with its current period
● Used to relinquish the rest of the runtime budget
Constant Bandwidth Server
When a task wakes up, if

\frac{\text{remaining runtime}}{\text{scheduling deadline} - \text{current time}} > \frac{\text{runtime}}{\text{period}}

then the scheduling deadline and remaining runtime are replenished:

scheduling deadline = current time + deadline
remaining runtime = runtime
Self sleeping tasks
Courtesy of Daniel Bristot de Oliveira
[Timeline diagrams, 0–18: a task with runtime 3 out of a period of 9 (U = 3/9) runs, sleeps, then wakes up late in its period]
● On wakeup, remainder = 2/3 > 3/9, so the CBS check fires
● Deadline = current time + new deadline
● Remaining runtime = runtime (3 units)
● What about another deadline task?
● The first task only ran 2 units within its original 9-unit deadline
Donut Hole Puncher!
Deadline vs Period
● Can't have offset holes in our donuts
● Have a specific deadline to make within a period:
runtime <= deadline <= period
● But is this too constrained?

U = \sum_{i=1}^{n} \frac{C_i}{D_i} \le 1
Self sleeping constrained tasks
Courtesy of Daniel Bristot de Oliveira
[Timeline diagrams, 0–18: a constrained task with runtime 2, deadline 4, period 10 (2/4/10) sleeps and wakes mid-period]
● On wakeup: 1/1 > 2/4
● Move the deadline from 4 to 7 (and the period from 10 to 13)
● The task runs for 1 unit and sleeps again
● It wakes up again with 1 unit to go (the deadline moves to 10, the period to 16)
● 4 out of 10! Instead of 2 out of 4 in 10
Multi processors!
It's all fun and games until someone throws another processor into your eye
● M CPUs, M + 1 tasks
● One task with a runtime of 999ms out of every 1000ms
● M tasks with a runtime of 10ms out of every 999ms
● All start at the same time
● The M tasks have a shorter deadline
● All M tasks run on all CPUs for 10ms
● That one task now has only 990ms left in which to run 999ms
Multi processors! (Dhall's Effect)

\frac{999}{1000} + M\left(\frac{10}{999}\right) = 0.999 + 0.01001\,M < M

M = 2: \quad \frac{999}{1000} + 2\left(\frac{10}{999}\right) = 0.999 + 2 \times 0.01001 = 1.01902 < 2
2 tasks: 2/9; 1 task: 9/10; U = 2/9 + 2/9 + 9/10 = 1.34444 < 2
[Timeline diagrams, seconds 0–18: the two 2/9 tasks have the earliest deadlines, so they occupy both CPUs first; the 9/10 task then cannot fit 9 units of runtime before its deadline]
Multi processors!
● EDF can not give you better than U = 1 guarantees
– No matter how many processors you have
– Full utilization would be U = N CPUs
● Two methods:
– Partitioned (bind each task to a CPU)
– Global (let all tasks migrate wherever)
– Neither gives better than U = 1 guarantees
Multi processors!
● Partitioned EDF can not always be used:
– U_t1 = 0.6, U_t2 = 0.6, U_t3 = 0.5
– On two CPUs, no pair of these tasks fits together on one CPU (every pair sums to more than 1)
– The above would need special scheduling to work anyway
● Figuring out the best partitioning is the bin packing problem
– Sorry folks, it's NP-complete
– Don't even bother trying
Multi processors!
● Global Earliest Deadline First (gEDF)
● Can not guarantee deadlines for U > 1 in all cases
● But special cases with U > 1 can be satisfied, with D_i = P_i and U_max = max{C_i / P_i}:

U = \sum_{i=1}^{n} \frac{C_i}{P_i} \le M - (M - 1)\,U_{max}
Multi processors!
● M = 8
● U_max = 0.5

U = \sum_{i=1}^{n} \frac{C_i}{P_i} \le M - (M - 1)\,U_{max} = 8 - 7 \times 0.5 = 4.5
Multi processors!
● M = 2
● U_max = 999/1000

U = \sum_{i=1}^{n} \frac{C_i}{P_i} \le M - (M - 1)\,U_{max} = 2 - 1 \times 0.999 = 1.001
The limits of SCHED_DEADLINE
● Runs on all CPUs (well, sorta)
– No limited sched affinity allowed
– Global EDF is the default
– Must account for sched migration overheads
● Can not have children (no forking)
– Your SCHED_DEADLINE tasks have been fixed
● Calculating Worst Case Execution Time (WCET)
– If you get it wrong, SCHED_DEADLINE may throttle your task before it finishes
Giving SCHED_DEADLINE Affinity
● Setting task affinity on SCHED_DEADLINE is not allowed
● But you can limit tasks by creating new sched domains
– CPU sets
– Implementing partitioned EDF
Giving SCHED_DEADLINE Affinity
cd /sys/fs/cgroup/cpuset
mkdir my_set
mkdir other_set
echo 0-2 > other_set/cpuset.cpus
echo 0 > other_set/cpuset.mems
echo 1 > other_set/cpuset.sched_load_balance
echo 1 > other_set/cpuset.cpu_exclusive
echo 3 > my_set/cpuset.cpus
echo 0 > my_set/cpuset.mems
echo 1 > my_set/cpuset.sched_load_balance
echo 1 > my_set/cpuset.cpu_exclusive
echo 0 > cpuset.sched_load_balance
That’s a lot!
Giving SCHED_DEADLINE Affinity
cat tasks | while read task; do
	echo $task > other_set/tasks
done
echo $sched_deadline_task > my_set/tasks
Calculating WCET
● Today's hardware is extremely unpredictable
● The Worst Case Execution Time is impossible to know exactly
● Allocate too much bandwidth instead
● Need something between RMS and CBS
GRUB (not the boot loader)
● Greedy Reclaim of Unused Bandwidth
● Allows SCHED_DEADLINE tasks to use up the unused utilization of the CPU (or part of it)
● Allows tasks to survive a WCET a bit larger than calculated
● Just went into mainline (v4.13)
New Work
● Revisiting affinity
● Semi-partitioning (dynamic deadlines to boot)

2 tasks: 2/9; 1 task: 9/10; U = 2/9 + 2/9 + 9/10 = 1.34444 < 2
[Timeline diagrams, seconds 0–18: the task set that gEDF could not schedule]
[Timeline diagrams, seconds 0–18: with semi-partitioning, the 9/10 task is split into a 2/9/10 piece and a 7/8/10 piece across the two CPUs]
Links
Documentation/scheduler/sched_deadline.txt
http://disi.unitn.it/~abeni/reclaiming/rtlws14-grub.pdf
http://www.evidence.eu.com/sched_deadline.html
http://www.atc.uniovi.es/rsa/starts/documents/Lopez_2004_rts.pdf
https://cs.unc.edu/~anderson/papers/rtj06a.pdf
Thank You
Steven Rostedt
Data Cloud, More than a CDP by Matt RobisonAnna Loughnan Colquhoun
 
The Role of Taxonomy and Ontology in Semantic Layers - Heather Hedden.pdf
The Role of Taxonomy and Ontology in Semantic Layers - Heather Hedden.pdfThe Role of Taxonomy and Ontology in Semantic Layers - Heather Hedden.pdf
The Role of Taxonomy and Ontology in Semantic Layers - Heather Hedden.pdfEnterprise Knowledge
 
Finology Group – Insurtech Innovation Award 2024
Finology Group – Insurtech Innovation Award 2024Finology Group – Insurtech Innovation Award 2024
Finology Group – Insurtech Innovation Award 2024The Digital Insurer
 
How to Troubleshoot Apps for the Modern Connected Worker
How to Troubleshoot Apps for the Modern Connected WorkerHow to Troubleshoot Apps for the Modern Connected Worker
How to Troubleshoot Apps for the Modern Connected WorkerThousandEyes
 
08448380779 Call Girls In Greater Kailash - I Women Seeking Men
08448380779 Call Girls In Greater Kailash - I Women Seeking Men08448380779 Call Girls In Greater Kailash - I Women Seeking Men
08448380779 Call Girls In Greater Kailash - I Women Seeking MenDelhi Call girls
 
TrustArc Webinar - Stay Ahead of US State Data Privacy Law Developments
TrustArc Webinar - Stay Ahead of US State Data Privacy Law DevelopmentsTrustArc Webinar - Stay Ahead of US State Data Privacy Law Developments
TrustArc Webinar - Stay Ahead of US State Data Privacy Law DevelopmentsTrustArc
 

Recently uploaded (20)

Axa Assurance Maroc - Insurer Innovation Award 2024
Axa Assurance Maroc - Insurer Innovation Award 2024Axa Assurance Maroc - Insurer Innovation Award 2024
Axa Assurance Maroc - Insurer Innovation Award 2024
 
Workshop - Best of Both Worlds_ Combine KG and Vector search for enhanced R...
Workshop - Best of Both Worlds_ Combine  KG and Vector search for  enhanced R...Workshop - Best of Both Worlds_ Combine  KG and Vector search for  enhanced R...
Workshop - Best of Both Worlds_ Combine KG and Vector search for enhanced R...
 
Handwritten Text Recognition for manuscripts and early printed texts
Handwritten Text Recognition for manuscripts and early printed textsHandwritten Text Recognition for manuscripts and early printed texts
Handwritten Text Recognition for manuscripts and early printed texts
 
Scaling API-first – The story of a global engineering organization
Scaling API-first – The story of a global engineering organizationScaling API-first – The story of a global engineering organization
Scaling API-first – The story of a global engineering organization
 
EIS-Webinar-Prompt-Knowledge-Eng-2024-04-08.pptx
EIS-Webinar-Prompt-Knowledge-Eng-2024-04-08.pptxEIS-Webinar-Prompt-Knowledge-Eng-2024-04-08.pptx
EIS-Webinar-Prompt-Knowledge-Eng-2024-04-08.pptx
 
Raspberry Pi 5: Challenges and Solutions in Bringing up an OpenGL/Vulkan Driv...
Raspberry Pi 5: Challenges and Solutions in Bringing up an OpenGL/Vulkan Driv...Raspberry Pi 5: Challenges and Solutions in Bringing up an OpenGL/Vulkan Driv...
Raspberry Pi 5: Challenges and Solutions in Bringing up an OpenGL/Vulkan Driv...
 
Tata AIG General Insurance Company - Insurer Innovation Award 2024
Tata AIG General Insurance Company - Insurer Innovation Award 2024Tata AIG General Insurance Company - Insurer Innovation Award 2024
Tata AIG General Insurance Company - Insurer Innovation Award 2024
 
What Are The Drone Anti-jamming Systems Technology?
What Are The Drone Anti-jamming Systems Technology?What Are The Drone Anti-jamming Systems Technology?
What Are The Drone Anti-jamming Systems Technology?
 
Slack Application Development 101 Slides
Slack Application Development 101 SlidesSlack Application Development 101 Slides
Slack Application Development 101 Slides
 
A Year of the Servo Reboot: Where Are We Now?
A Year of the Servo Reboot: Where Are We Now?A Year of the Servo Reboot: Where Are We Now?
A Year of the Servo Reboot: Where Are We Now?
 
Boost PC performance: How more available memory can improve productivity
Boost PC performance: How more available memory can improve productivityBoost PC performance: How more available memory can improve productivity
Boost PC performance: How more available memory can improve productivity
 
Exploring the Future Potential of AI-Enabled Smartphone Processors
Exploring the Future Potential of AI-Enabled Smartphone ProcessorsExploring the Future Potential of AI-Enabled Smartphone Processors
Exploring the Future Potential of AI-Enabled Smartphone Processors
 
08448380779 Call Girls In Civil Lines Women Seeking Men
08448380779 Call Girls In Civil Lines Women Seeking Men08448380779 Call Girls In Civil Lines Women Seeking Men
08448380779 Call Girls In Civil Lines Women Seeking Men
 
Presentation on how to chat with PDF using ChatGPT code interpreter
Presentation on how to chat with PDF using ChatGPT code interpreterPresentation on how to chat with PDF using ChatGPT code interpreter
Presentation on how to chat with PDF using ChatGPT code interpreter
 
Data Cloud, More than a CDP by Matt Robison
Data Cloud, More than a CDP by Matt RobisonData Cloud, More than a CDP by Matt Robison
Data Cloud, More than a CDP by Matt Robison
 
The Role of Taxonomy and Ontology in Semantic Layers - Heather Hedden.pdf
The Role of Taxonomy and Ontology in Semantic Layers - Heather Hedden.pdfThe Role of Taxonomy and Ontology in Semantic Layers - Heather Hedden.pdf
The Role of Taxonomy and Ontology in Semantic Layers - Heather Hedden.pdf
 
Finology Group – Insurtech Innovation Award 2024
Finology Group – Insurtech Innovation Award 2024Finology Group – Insurtech Innovation Award 2024
Finology Group – Insurtech Innovation Award 2024
 
How to Troubleshoot Apps for the Modern Connected Worker
How to Troubleshoot Apps for the Modern Connected WorkerHow to Troubleshoot Apps for the Modern Connected Worker
How to Troubleshoot Apps for the Modern Connected Worker
 
08448380779 Call Girls In Greater Kailash - I Women Seeking Men
08448380779 Call Girls In Greater Kailash - I Women Seeking Men08448380779 Call Girls In Greater Kailash - I Women Seeking Men
08448380779 Call Girls In Greater Kailash - I Women Seeking Men
 
TrustArc Webinar - Stay Ahead of US State Data Privacy Law Developments
TrustArc Webinar - Stay Ahead of US State Data Privacy Law DevelopmentsTrustArc Webinar - Stay Ahead of US State Data Privacy Law Developments
TrustArc Webinar - Stay Ahead of US State Data Privacy Law Developments
 

Embedded Recipes 2017 - Understanding SCHED_DEADLINE - Steven Rostedt

  • 1. © 2017 VMware Inc. All rights reserved. Understanding SCHED_DEADLINE Controlling CPU Bandwidth Steven Rostedt 26/9/2017
  • 2. What is SCHED_DEADLINE? ● A new scheduling class (Well since v3.14) – others are: SCHED_OTHER/NORMAL, SCHED_FIFO, SCHED_RR ● SCHED_IDLE, SCHED_BATCH (out of scope for today) ● Constant Bandwidth Scheduler ● Earliest Deadline First
  • 3. Other Schedulers ● SCHED_OTHER / SCHED_NORMAL – Completely Fair Scheduler (CFS) – Uses “nice” priority – Each task gets a fair share of the CPU bandwidth ● SCHED_FIFO – First in, first out – Each task runs till it gives up the CPU or a higher priority task preempts it ● SCHED_RR – Like SCHED_FIFO but same prio tasks get slices of CPU
  • 4. Priorities ● You have two programs running on the same CPU – One runs a nuclear power plant ● Requires 1/2 second out of every second of the CPU (50% of the CPU) – The other runs a washing machine ● Requires 50 millisecond out of every 200 milliseconds (25% of the CPU) – Which one gets the higher priority?
  • 6. Priorities: Nuke > Washing Machine [timing diagram: each task's ½ sec within each 1 sec]
  • 7. Priorities: Nuke < Washing Machine [timing diagram: each task's ½ sec within each 1 sec]
  • 8. Rate Monotonic Scheduling (RMS) ● Computational time vs Period ● Can be implemented by SCHED_FIFO ● Smallest period gets highest priority ● Compute computation time (C) ● Compute period time (T) ● U = Σ_{i=1..n} C_i / T_i
  • 9. Rate Monotonic Scheduling (RMS) ● Add a Dishwasher to the mix... ● Nuclear Power Plant: C = 500ms, T = 1000ms (50% of the CPU) ● Dishwasher: C = 300ms, T = 900ms (33.3333% of the CPU) ● Washing Machine: C = 100ms, T = 800ms (12.5% of the CPU) ● U = 500/1000 + 300/900 + 100/800 = 0.958333
  • 10–16. Rate Monotonic Scheduling (RMS) [timeline diagrams, ticks 0–13, stepping the three tasks through an RMS schedule; on the final slide the lowest-priority task misses its deadline — FAILED!]
  • 17. Rate Monotonic Scheduling (RMS) ● Computational time vs Period ● Can be implemented by SCHED_FIFO ● Smallest period gets highest priority ● Compute computation time (C) ● Compute period time (T) ● U = Σ_{i=1..n} C_i / T_i
  • 18. Rate Monotonic Scheduling (RMS) ● Computational time vs Period ● Can be implemented by SCHED_FIFO ● Smallest period gets highest priority ● Compute computation time (C) ● Compute period time (T) ● U = Σ_{i=1..n} C_i / T_i ≤ n(2^{1/n} − 1)
  • 19. Rate Monotonic Scheduling (RMS) ● Add a Dishwasher to the mix... ● Nuclear Power Plant: C = 500ms, T = 1000ms (50% of the CPU) ● Dishwasher: C = 300ms, T = 900ms (33.3333% of the CPU) ● Washing Machine: C = 100ms, T = 800ms (12.5% of the CPU) ● U = 500/1000 + 300/900 + 100/800 = 0.958333
  • 20. Rate Monotonic Scheduling (RMS) ● Same task set: U = 0.958333, but the bound is U ≤ n(2^{1/n} − 1) = 3(2^{1/3} − 1) = 0.77976 — so RMS cannot guarantee this set, even though U < 1
  • 21. Rate Monotonic Scheduling (RMS) ● U = Σ_{i=1..n} C_i / T_i ≤ n(2^{1/n} − 1) ● lim_{n→∞} n(2^{1/n} − 1) = ln 2 ≈ 0.693147
  • 22. SCHED_DEADLINE ● Utilizes Earliest Deadline First (EDF) ● Dynamic priority ● The task with the next deadline has the highest priority ● Schedulable up to full utilization: U = Σ_{i=1..n} C_i / T_i = 1
  • 23–28. Earliest Deadline First (EDF) [timeline diagrams, ticks 0–13: the same task set scheduled by EDF; every deadline is met — :) HAPPY :)]
  • 29. Setting a RT priority
sched_setscheduler(pid_t pid, int policy, const struct sched_param *param);

struct sched_param {
    u32 sched_priority;
};
  • 30. Implementing SCHED_DEADLINE in Linux ● Two new syscalls: sched_getattr(pid_t pid, struct sched_attr *attr, unsigned int size, unsigned int flags) (similar to sched_getparam(pid_t pid, struct sched_param *param)) and sched_setattr(pid_t pid, struct sched_attr *attr, unsigned int flags) (similar to sched_setparam(pid_t pid, struct sched_param *param))
  • 31. Implementing SCHED_DEADLINE
struct sched_attr {
    u32 size;            /* Size of this structure */
    u32 sched_policy;    /* Policy (SCHED_*) */
    u64 sched_flags;     /* Flags */
    s32 sched_nice;      /* Nice value (SCHED_OTHER, SCHED_BATCH) */
    u32 sched_priority;  /* Static priority (SCHED_FIFO, SCHED_RR) */
    /* Remaining fields are for SCHED_DEADLINE */
    u64 sched_runtime;
    u64 sched_deadline;
    u64 sched_period;
};
  • 32. Implementing SCHED_DEADLINE
struct sched_attr attr;

ret = sched_getattr(0, &attr, sizeof(attr), 0);
if (ret < 0)
    error();

attr.sched_policy = SCHED_DEADLINE;
attr.sched_runtime = runtime_ns;
attr.sched_deadline = deadline_ns;

ret = sched_setattr(0, &attr, 0);
if (ret < 0)
    error();
  • 33. sched_yield() ● Most use cases are buggy – Most tasks will not give up the CPU ● SCHED_OTHER – Gives up current CPU time slice ● SCHED_FIFO / SCHED_RR – Gives up the CPU to a task of the SAME PRIORITY – Voluntary scheduling among same priority tasks
  • 34. sched_yield() ● Buggy code!
again:
    pthread_mutex_lock(&mutex_A);
    B = A->B;
    if (pthread_mutex_trylock(&B->mutex_B)) {
        pthread_mutex_unlock(&mutex_A);
        sched_yield();
        goto again;
    }
  • 35. sched_yield() ● What you want for SCHED_DEADLINE! ● Tells the kernel the task is done with current period ● Used to relinquish the rest of the runtime budget
  • 36. Constant Bandwidth Server ● On wakeup, test whether using the old deadline would exceed the reserved bandwidth:
    remaining runtime / (scheduling deadline − current time) > runtime / period
● If so, grant a new reservation:
    scheduling deadline = current time + deadline
    remaining runtime = runtime
  • 37–43. Self sleeping tasks (courtesy of Daniel Bristot de Oliveira) [timeline diagrams, ticks 0–18, task with U = 3/9]: the task sleeps mid-period and wakes with 2 units of runtime left and 3 units to its deadline ● Remainder 2/3 > 3/9, so CBS grants a new deadline (current time + deadline) and a full runtime replenishment (3 units) ● Another deadline task? If one runs first, the task ends up having run only 2 units within its original deadline
  • 45. Deadline vs Period ● Can't have offset holes in our donuts ● Have a specific deadline to make within a period: runtime <= deadline <= period ● But is this too constrained? ● U = Σ_{i=1..n} C_i / D_i = 1
  • 46–51. Self sleeping constrained tasks (courtesy of Daniel Bristot de Oliveira) [timeline diagrams, ticks 0–18, runtime/deadline/period = 2/4/10]: the task wakes with 1 unit of runtime left and 1 unit to its deadline ● 1/1 > 2/4, so the deadline moves from 4 to 7 (period from 10 to 13) ● It runs for 1 and sleeps again ● It wakes up again with 1 to go (deadline moves to 10, period to 16) ● Result: 4 out of 10 — instead of 2 out of 4 in 10
  • 52. Multi processors! It's all fun and games until someone throws another processor into your eye
  • 53. Multi processors! (Dhall's Effect) ● M CPUs ● M+1 tasks ● One task with runtime 999ms out of 1000ms ● M tasks with runtime 10ms out of 999ms ● All start at the same time ● The M tasks have a shorter deadline ● All M tasks run on all CPUs for 10ms ● That one task now has only 990ms left to run its 999ms ● 999/1000 + M(10/999) = 0.999 + 0.01001·M < M ● For M = 2: 999/1000 + 2(10/999) = 0.999 + 2 × 0.01001 = 1.01902 < 2
  • 54–55. 2 tasks: 2/9; 1 task: 9/10; U = 2/9 + 2/9 + 9/10 = 1.34444 < 2 [timeline diagrams, ticks 0–18: the two 2/9 tasks take both CPUs first, and the 9/10 task can no longer make its deadline]
  • 56. Multi processors! ● EDF can not give you better than U = 1 – No matter how many processors you have – Full utilization should be U = N CPUs ● Two methods – Partitioning (Bind each task to a CPU) – Global (let all tasks migrate wherever) – Neither give better than U = 1 guarantees
  • 57. Multi processors! ● EDF partitioned cannot always be used: U_t1 = 0.6, U_t2 = 0.6, U_t3 = 0.5 — the above would need special scheduling to work anyway ● Figuring out the best utilization is the bin packing problem ● Sorry folks, it's NP-complete ● Don't even bother trying
  • 58. Multi processors! ● Global Earliest Deadline First (gEDF) ● Can not guarantee deadlines of U > 1 for all cases ● But special cases can be satisfied for U > 1 ● With D_i = P_i and U_max = max{C_i/P_i}: U = Σ_{i=1..n} C_i / P_i ≤ M − (M − 1) · U_max
  • 59. Multi processors! ● M = 8, U_max = 0.5: U = Σ_{i=1..n} C_i / P_i ≤ M − (M − 1) · U_max = 8 − 7 × 0.5 = 4.5
  • 60. Multi processors! ● M = 2, U_max = 999/1000: U = Σ_{i=1..n} C_i / P_i ≤ M − (M − 1) · U_max = 2 − 1 × 0.999 = 1.001
  • 61. The limits of SCHED_DEADLINE ● Runs on all CPUs (well, sorta) — no limited sched affinity allowed; Global EDF is the default; must account for sched migration overheads ● Can not have children (no forking) — your SCHED_DEADLINE tasks have been fixed ● Calculating Worst Case Execution Time (WCET) — if you get it wrong, SCHED_DEADLINE may throttle your task before it finishes
  • 62. Giving SCHED_DEADLINE Affinity ● Setting task affinity on SCHED_DEADLINE is not allowed ● But you can limit them by creating new sched domains with cpusets ● Implementing Partitioned EDF
  • 63–75. Giving SCHED_DEADLINE Affinity (the full sequence, built up one line per slide):
cd /sys/fs/cgroup/cpuset
mkdir my_set
mkdir other_set
echo 0-2 > other_set/cpuset.cpus
echo 0 > other_set/cpuset.mems
echo 1 > other_set/cpuset.sched_load_balance
echo 1 > other_set/cpuset.cpu_exclusive
echo 3 > my_set/cpuset.cpus
echo 0 > my_set/cpuset.mems
echo 1 > my_set/cpuset.sched_load_balance
echo 1 > my_set/cpuset.cpu_exclusive
echo 0 > cpuset.sched_load_balance
That's a lot!
  • 76. Giving SCHED_DEADLINE Affinity
cat tasks | while read task; do
    echo $task > other_set/tasks
done
echo $sched_deadline_task > my_set/tasks
  • 77. Calculating WCET ● Today's hardware is extremely unpredictable ● Worst Case Execution Time is impossible to know ● Allocate too much bandwidth instead ● Need something between RMS and CBS
  • 78. GRUB (not the boot loader) ● Greedy Reclaim of Unused Bandwidth ● Allows for SCHED_DEADLINE tasks to use up the unused utilization of the CPU (or part of it) ● Allows for tasks to handle WCET of a bit more than calculated. ● Just went into mainline (v4.13)
  • 80–81. 2 tasks: 2/9; 1 task: 9/10; U = 2/9 + 2/9 + 9/10 = 1.34444 < 2 [timeline diagrams, ticks 0–18; the second diagram relabels the tasks with runtime/deadline/period triples 2/9/10 and 7/8/10]