PhD Candidate,
Department of Computer Science
Mälardalen University
Time: Tuesday, Dec. 30, 2014, 11:30 a.m.
Location: Computer Engineering Department, Urmia University
Abstract:
The processor is the brain of a computer system. Usually, one or more programs run on a processor, where each program is typically responsible for performing a particular task or function of the system. Performing all the tasks together results in the system functionality. In many computer systems, it is not enough that all tasks deliver correct output; it is also crucial that these outputs are delivered at the proper time. Systems with such timing requirements are known as real-time systems. A scheduler is responsible for scheduling all tasks on the processor, i.e., it dictates which task to run and when to run it, to ensure that all tasks are carried out on time. Typically, such tasks/programs need to use the computer system's hardware and software resources to perform their computations. Examples of resources that are shared among programs are I/O devices, buffers and memories. The technology used for managing shared resources is known as a resource sharing synchronization protocol.
In recent years, a shift from single-processor platforms to multiprocessor platforms has become inevitable due to the availability of multi-core chips and requirements for increased performance. Scheduling and resource sharing protocols have been well studied for uniprocessor systems. In the context of multiprocessors, however, such techniques are not yet fully mature. The shift towards multi-core technology has revealed the demand for real-time scheduling algorithms, along with synchronization protocols, to support real-time applications on multiprocessors, both with and without dependencies.
In this talk, we first give an introduction to real-time embedded systems. Next, we look at scheduling and resource sharing policies on uniprocessor platforms. Further, we discuss the extension of scheduling and resource sharing policies to multiprocessor platforms and present the recent challenges that have arisen in this context.
Biography:
Sara Afshar is a PhD student at Mälardalen University. She received her B.Sc. degree in Electrical Engineering from Tabriz University, Iran, in 2002. She worked at different engineering companies until 2009. In 2010 she started her M.Sc. in Embedded Systems at Mälardalen University. She obtained her Master's degree in 2012 and in the same year started her PhD studies at Mälardalen University. She is currently working on the topic of resource sharing in multiprocessor systems, and is part of the Complex Real-Time Embedded Systems group at Mälardalen University.
Sara Afshar: Scheduling and Resource Sharing in Multiprocessor Real-Time Systems
1. Resource Sharing in Real-Time
Uni/Multiprocessor Embedded Systems
Sara Afshar
sara.afshar@mdh.se
2. [Map of Sweden marking Stockholm, Umeå, Göteborg, Malmö, Västerås and Eskilstuna]
3. About Me!
• B.Sc. in Electrical Engineering from Tabriz University
• Master in Intelligent Embedded Systems
• Mälardalen University, Västerås, Sweden
• Currently, PhD candidate (3rd year)
• Research topic:
Resource sharing in real-time multiprocessors
4. Outline
• Embedded systems
• Real-time systems
• Scheduling and timing analysis
• Resource sharing in uniprocessors
• Multiprocessors
• Open problems
5. Daily Computers
• Many everyday devices contain a special-purpose computer
• These embedded computers perform the devices' functionality
7. Real-Time Systems
Embedded systems:
– Specialized
– Efficient
But in many cases this is not enough: the system
has to react to the environment at the right time
instant.
Timing Requirements
9. Example
Airbag example (a very classical example!)
[Timeline: relative to the collision, the airbag must fire neither too early nor too late]
Real-Time ≠ Fast
Real-Time = Predictable
10. Hard vs. Soft Real-Time
Each program has a deadline which should not
be missed.
Hard Real-Time
– Missing the deadline causes a catastrophe
– E.g., automotive, airplane, industrial control
Soft Real-Time
– Can miss some deadlines
– E.g., TV, video streaming
11. Real-Time Tasks
• Program is written by means of different tasks
• On a single-core processor, two tasks cannot execute
in parallel
– Some tasks are preempted so that all tasks execute on time
– The scheduler is responsible for scheduling tasks on a processor
[Timeline figure: repeated jobs of a sensor task ("sens") and Task A interleaved on the processor]
12. Real-Time Tasks
Periodic Tasks: repeat in periodic intervals
e.g., control loops, sensor reading, etc.
[Timeline: the task alternates between Execute and Sleeping phases, with one activation every period T_i]
13. Real-Time Tasks
Aperiodic Task: tasks may arrive at any point in
time
e.g., alarm tasks, emergency button, etc.
[Timeline: the task executes once when an interrupt event occurs, e.g., a button is pushed; it may be triggered again at some later point in time]
14. Real-Time Tasks
Sporadic Task: Similar to aperiodic tasks, however,
minimum time for task’s next activation is known
e.g., task handling keyboard input- minimum time
between pressing two keys known
[Timeline: two activations separated by at least the known minimum inter-arrival time T_i]
15. Scheduling
The process of deciding the execution order of real-time tasks, depending on the priority of each task. There are different mechanisms to do that.

[Figure: two schedules of the same task set; in one, Task A misses its deadline, while in the other both deadlines are met]
19. Response Time Analysis
[Timeline figure: jobs of higher-priority tasks (H) preempt job A between its ready time and its finishing time; this interval is the response time of task A]

Worst-case response time:

WR_A = C_A + Σ_{∀j∈hp(A)} ⌈WR_A / T_j⌉ × C_j

where the summation term is the interference from higher-priority tasks.
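As an illustration (not from the slides), the recurrence above is solved by fixed-point iteration starting from C_A; the task parameters below are made-up values:

```python
import math

# Fixed-point iteration for the worst-case response time of task A:
#   WR_A = C_A + sum over j in hp(A) of ceil(WR_A / T_j) * C_j
# C = worst-case execution time, T = period. Illustrative sketch only;
# assumes the higher-priority tasks leave spare utilization, otherwise
# the iteration is cut off by the deadline check.

def response_time(C_A, hp, deadline=None):
    """hp: list of (C_j, T_j) pairs for the higher-priority tasks."""
    wr = C_A                      # start from the task's own execution time
    while True:
        interference = sum(math.ceil(wr / T) * C for C, T in hp)
        nxt = C_A + interference
        if nxt == wr:             # fixed point reached
            return wr
        if deadline is not None and nxt > deadline:
            return None           # response time already exceeds the deadline
        wr = nxt

# Task A: C_A = 2; higher-priority tasks: (C=1, T=4) and (C=2, T=6)
print(response_time(2, [(1, 4), (2, 6)]))
```

The iteration converges when two successive values agree; passing a `deadline` turns it into a schedulability test for that task.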
20. Resource Sharing
• Some tasks are independent
• Some tasks are aware of each other
– E.g., using a shared memory
– E.g., two tasks writing in a same buffer
21. Resource Sharing
• Tasks may use hardware/software components
such as a database, hard-drive, sensor etc.
PROBLEM
critical section = the part of a task's execution that accesses a resource
22. Resource Sharing
• In real-time systems, semaphore-based locking
synchronization techniques handle mutually
exclusive access to resources among tasks
– Every task that wants to use a resource first has to lock the resource,
use it, and then unlock (release) it
critical section
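The lock/use/unlock pattern above maps directly onto, e.g., Python's `threading.Lock` (an illustrative sketch; the worker tasks and shared buffer are made-up names):

```python
import threading

# Two worker tasks appending to a shared buffer. The lock makes the
# critical section (the buffer access) mutually exclusive, mirroring
# the lock -> use -> unlock pattern described on the slide.

buffer = []
buffer_lock = threading.Lock()

def worker(task_id, n):
    for i in range(n):
        with buffer_lock:                # lock the resource
            buffer.append((task_id, i))  # critical section: use the resource
        # lock is released automatically on leaving the `with` block

threads = [threading.Thread(target=worker, args=(t, 100)) for t in range(2)]
for th in threads:
    th.start()
for th in threads:
    th.join()
print(len(buffer))
```

With both workers making 100 appends each, all 200 entries end up in the buffer; without mutual exclusion, a read-modify-write critical section could interleave and corrupt shared state.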
23. Resource Sharing
• Tasks may experience delay due to resource
sharing
– E.g., a task that needs a resource held by another task has to wait for the
resource to be released

[Figure: a task is blocked while another task's critical section holds the resource]
24. Resource Sharing
• Blocking can endanger system correctness
– Priority inversion: a high priority task (in this example task 1) is
forced to wait for a lower priority task (in this example task 2) for an
unbounded amount of time
http://www.idt.mdh.se/kurser/CDT315/index.php?choice=contents
[Timeline figure (Task 1: priority = Low, Task 2: priority = Middle, Task 3: priority = High):
1. The low-priority task enters its critical section.
2. The high-priority task arrives and preempts the low-priority task.
3. The high-priority task requests the same resource, which is not available, and is blocked; the low-priority task continues.
4. The middle-priority task arrives and preempts the low-priority task; this extra delay, due to the normal execution of the middle-priority task, can be considerably long compared to critical sections.
5. The middle-priority task finishes; the low-priority task continues and then releases the resource.
6. By then the high-priority task has already missed its deadline due to the extra waiting for the middle-priority task.]
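The inversion scenario can be reproduced with a small discrete-time scheduling simulation (an illustrative sketch, not from the slides; the `simulate` helper and all task parameters are made up). Without priority inheritance, the middle-priority task M occupies the processor while the high-priority task H is blocked on the lock held by the low-priority task L:

```python
# Minimal discrete-time simulation of priority inversion on one processor.
# Each task is a list of segments: ('exec', n) is normal execution and
# ('cs', n) is a critical section guarded by one shared lock. There is no
# priority inheritance: the scheduler always runs the highest-priority
# ready, unblocked task, so M can delay L while L holds the lock H needs.

def simulate(tasks, horizon=20):
    lock_holder = None
    trace = []
    state = {name: {"rel": rel, "prio": prio, "segs": [list(s) for s in segs]}
             for name, rel, prio, segs in tasks}
    for t in range(horizon):
        ready = []
        for name, st in state.items():
            if t < st["rel"] or not st["segs"]:
                continue
            kind, _ = st["segs"][0]
            blocked = (kind == "cs" and lock_holder not in (None, name))
            if not blocked:
                ready.append((st["prio"], name))
        if not ready:
            trace.append("-")
            continue
        _, name = max(ready)               # highest priority wins
        st = state[name]
        kind, _ = st["segs"][0]
        if kind == "cs":
            lock_holder = name             # (re)acquire the lock
        st["segs"][0][1] -= 1
        if st["segs"][0][1] == 0:
            if kind == "cs":
                lock_holder = None         # release on leaving the CS
            st["segs"].pop(0)
        trace.append(name)
    return trace

# (name, release time, priority, segments); higher number = higher priority
tasks = [
    ("L", 0, 1, [("exec", 1), ("cs", 2), ("exec", 1)]),  # low, locks resource
    ("H", 2, 3, [("exec", 1), ("cs", 1)]),               # high, same resource
    ("M", 3, 2, [("exec", 5)]),                          # middle, no resource
]
print("".join(simulate(tasks)))
```

In the printed trace, M runs from time 3 to 7 while H stays blocked behind L's critical section: exactly the unbounded-delay pattern that priority inheritance is designed to prevent.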
25. Resource Sharing
• Blocking can endanger system correctness
– Priority inversion: high priority task is forced to wait for a lower
priority task for an unbounded amount of time
• Mars Pathfinder
– Landing on July 4, 1997
– Pathfinder experienced repeated resets after it started gathering
meteorological data.
– The resets were caused by timing overruns when using a shared communication
bus, a classical case of the priority inversion problem.
26. PIP
• Priority Inheritance Protocol (PIP)
– The high-priority task cannot be delayed by the middle-priority task
http://www.idt.mdh.se/kurser/CDT315/index.php?choice=contents
critical section
Task 1: priority = Low
Task 2: priority = Middle
Task 3: priority = High
[Timeline figure:
1. While holding the resource, the low-priority task inherits the priority of the blocked high-priority task (prio. = Low → High).
2. The middle-priority task arrives but cannot preempt it.
3. After releasing the resource, the low-priority task gets back its own priority (prio. = High → Low).
4. The high-priority task meets its deadline.]
27. Synchronization Protocols
• PIP (Priority Inheritance Protocol): bounds priority inversion
• PCP (Priority Ceiling Protocol): also prevents deadlock and chain blocking
• IPIP, IPCP, SRP (immediate variants, Stack Resource Policy): blocking occurs only at the beginning of execution
28. Response Time Analysis
• By enabling resource sharing, a blocking term is added to the worst-case
response time of a task

[Timeline figure: in addition to preemptions by higher-priority jobs (H), job A is blocked by a lower-priority job (L) between its ready time and its finishing time]

WR_A = C_A + B_A + Σ_{∀j∈hp(A)} ⌈WR_A / T_j⌉ × C_j

where B_A is the blocking incurred to task A.
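The same fixed-point iteration used for the blocking-free case extends directly: the constant blocking term B_A is simply added in every step (an illustrative sketch with made-up task parameters):

```python
import math

# Worst-case response time with blocking:
#   WR_A = C_A + B_A + sum over j in hp(A) of ceil(WR_A / T_j) * C_j
# B_A is the blocking term added by resource sharing; the rest of the
# iteration is unchanged.

def response_time_with_blocking(C_A, B_A, hp):
    """hp: list of (C_j, T_j) pairs for the higher-priority tasks."""
    wr = C_A + B_A
    while True:
        nxt = C_A + B_A + sum(math.ceil(wr / T) * C for C, T in hp)
        if nxt == wr:     # fixed point reached
            return wr
        wr = nxt

# Same task set as before (C_A = 2, hp = {(1,4), (2,6)}), now with B_A = 1
print(response_time_with_blocking(2, 1, [(1, 4), (2, 6)]))
```

Note that one unit of blocking can inflate the response time by more than one unit, because the longer window admits extra higher-priority preemptions.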
30. Multiprocessors
• Integration of multiple processors on a chip
• Multiprocessor platforms have become
popular in the industry
– Power consumption
– Performance
31. Multiprocessors
• Integration of multiple processors on a chip
• Multiprocessor platforms have become
popular in the industry
– Power consumption
– Performance
• Migrating to multiprocessor technology
– Immature scheduling and synchronization techniques
– Oversimplification
32. Multiprocessor Scheduling
• Partitioned scheduling:
– Tasks are statically assigned to processors at design time
– Each processor has its own scheduler and ready queue
– Task migration among processors is not allowed at run-time
[Figure: per-processor schedulers, each with a local ready queue, feeding the processors]
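Design-time partitioning is typically done by bin-packing task utilizations onto processors. A minimal first-fit-decreasing sketch (illustrative only; the task set and the utilization bound of 1.0 per processor are assumptions, and real tests use scheduler-specific bounds):

```python
# First-fit-decreasing partitioning of task utilizations onto processors.
# Returns the per-processor bins, or None if some task fits nowhere, in
# which case a pure partitioned approach fails for this task set.

def first_fit_partition(utils, num_procs, capacity=1.0):
    """utils: task utilizations; capacity: utilization bound per processor."""
    bins = [[] for _ in range(num_procs)]
    loads = [0.0] * num_procs
    for u in sorted(utils, reverse=True):      # largest tasks first
        for k in range(num_procs):
            if loads[k] + u <= capacity + 1e-9:  # small float tolerance
                bins[k].append(u)
                loads[k] += u
                break
        else:
            return None                        # task does not fit anywhere
    return bins

# Five tasks on two processors
print(first_fit_partition([0.6, 0.5, 0.4, 0.3, 0.2], 2))
```

Bin packing is NP-hard in general; first-fit decreasing is a standard heuristic, and the case where it returns None is exactly what motivates the semi-partitioned approach later in the talk.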
33. Multiprocessor Scheduling
• Global scheduling
– Only one scheduler and one ready queue
– The scheduler assigns tasks to processors at run-time
– Task migration among processors is allowed
[Figure: one global scheduler with a global ready queue feeding all processors]
34. Multiprocessor Scheduling
• Hybrid scheduling, semi-partitioned scheduling
– Combination of partitioned and global scheduling
• Most tasks are statically assigned to processors at design time
• A few tasks can migrate among processors at run-time
– Benefits from advantages of both approaches
[Figure: task partitions with local ready queues and per-processor schedulers; most tasks are partitioned, while a migrating task can move between processors]
41. Semi-Partitioned Scheduling
• Partitioned tasks: tasks that are statically assigned to
processors and execute only on those processors, i.e.,
they fit (utilization-wise) on a processor during
partitioning (𝜏1, …, 𝜏8)
• Migrating tasks: task(s) that do not fit on any single
processor (𝜏9)
[Figure: tasks 𝜏1–𝜏8 partitioned onto processors P1–P3; 𝜏9 does not fit on any single processor]
42. Semi-Partitioned Scheduling
• A migrating task is split among processors that have
capacity remaining from partitioning (𝜏9 is split
among processors 1 to 3)
[Figure: the pieces of 𝜏9 are placed in the leftover capacity of P1, P2 and P3]
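The splitting step can be sketched as filling each processor's leftover utilization in turn (an illustrative sketch only; actual semi-partitioned protocols must also decide how to split the task's budget and deadlines, which this ignores):

```python
# Split a migrating task's utilization across the spare capacity that
# partitioning left on each processor. Returns the share placed on each
# processor, or None if the total spare capacity is insufficient.

def split_task(leftover, u_migrating):
    """leftover: spare utilization per processor; u_migrating: task to split."""
    shares = [0.0] * len(leftover)
    remaining = u_migrating
    for k, spare in enumerate(leftover):
        take = min(spare, remaining)   # fill this processor's leftover first
        shares[k] = take
        remaining -= take
        if remaining <= 1e-9:          # fully placed (float tolerance)
            return shares
    return None                        # not enough spare capacity in total

# Spare capacity left on P1..P3 after partitioning; tau_9 has utilization 0.7
print(split_task([0.3, 0.25, 0.3], 0.7))
```

In the example, the task is carved into three pieces, one per processor, matching the 𝜏9-over-P1..P3 split on the slide.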
43. Resource Sharing
• Variation in the execution time of tasks may cause
critical sections to occur at any point during task
execution
• Under semi-partitioned scheduling, critical sections may
therefore fall in any part of a split task, and hence on any
processor that the task is split over
[Figure: three cases of where the critical sections cs1 and cs2 of a split task can fall among processors P1–P3 (Case 1, Case 2, Case 3)]
44. Resource Sharing
• Therefore, in semi-partitioned scheduling,
existing synchronization protocols cannot be
used directly
[Same critical-section placement figure as slide 43]
45. Centralized Solution
• Critical sections migrate to a dedicated (marked) processor
• Advantages:
– Centralized resource access
– Remote blocking is confined to the marked processor
• Disadvantages:
– Extra migration overhead
[Figure: the subtasks of the split task on P1 and P2 access resource Rs; their critical sections migrate to the marked processor, with arrows indicating the migration overhead. Legend: non-split tasks, split task]
46. Decentralized Solution
• Critical sections served where they occur
• Advantages:
– Decreased migration overhead
• Disadvantages:
– Blocking introduced to local tasks
– Increased remote blocking

[Figure: the subtasks of the split task access resource Rs locally on P1 and P2, where their critical sections occur. Legend: non-split tasks, split task]
47. Analysis
• Local blocking due to local resources
B_{i,1} = Σ_{∀ρ_j<ρ_i, τ_i,τ_j∈P_k} min{ n_i^G + 1, (⌈T_i/T_j⌉ + 1) n_j^L(τ_i) } × max_{R_l∈R^L_{P_k}, ρ_i≤ceil(R_l)} { Cs_{j,l} }

where ceil(R_l) = max{ ρ_j | τ_j ∈ τ_{l,1} }
48. Analysis
• Local blocking due to global resources
B_{i,2} = Σ_{∀ρ_j<ρ_i, τ_i,τ_j∈P_k} min{ n_i^G + 1, (⌈T_i/T_j⌉ + 1) n_j^G } × max_{R_q∈R^G_{P_k}} { Cs_{j,q} }
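As an illustration, the local blocking bound due to global resources can be computed directly from its terms: for each lower-priority task j on the same processor, multiply min(n_i^G + 1, (⌈T_i/T_j⌉ + 1)·n_j^G) by task j's longest global critical section. The sketch below uses made-up task records, not values from the paper:

```python
import math

# Sketch of the local blocking bound due to global resources, B_{i,2}.
# Each task record holds its period T, its number of global-resource
# requests n_G, and (for lower-priority tasks) cs: a dict mapping each
# global resource to the task's longest critical-section length on it.

def local_blocking_global(task_i, lower_prio_same_proc):
    """Sum the blocking contribution of each lower-priority local task."""
    total = 0.0
    for task_j in lower_prio_same_proc:
        if not task_j["cs"]:
            continue  # task j uses no global resources
        # number of times task j can block task i
        times = min(task_i["n_G"] + 1,
                    (math.ceil(task_i["T"] / task_j["T"]) + 1) * task_j["n_G"])
        # each blocking lasts at most task j's longest global critical section
        total += times * max(task_j["cs"].values())
    return total

tau_i = {"T": 20, "n_G": 2}
tau_j = {"T": 50, "n_G": 1, "cs": {"R1": 3}}  # one request, longest CS = 3
print(local_blocking_global(tau_i, [tau_j]))
```

Here the min picks the tighter of the two counts (requests by task i plus one, versus possible requests by task j within task i's window), which is what keeps the bound from growing with whichever quantity is larger.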
52. Details in Paper
“Resource Sharing under Multiprocessor Semi-Partitioned
Scheduling.”
Sara Afshar, Farhang Nemati, Thomas Nolte. In Proceedings of the
18th IEEE International Conference on Embedded and Real-Time
Computing Systems and Applications (RTCSA), August 2012.
53. Open Problems
• Improving synchronization techniques
– Improved analysis
– Improved protocols
• Blocking aware partitioning
• Compositional scheduling