UNIT – II
Chapter 4: Real-Time Operating Systems
Chapter 5: Tasks
Chapter 6: Semaphore
Chapter 7: Message Queues
B Prasad, Assoc. Prof., Dept. of CSE, MLRITM
Chapter 4
Real-Time Operating
Systems
Chapter 4: Real-Time
Operating Systems
4.1 Introduction
4.2 A Brief History of Operating Systems
4.3 Defining an RTOS
4.4 The Scheduler
4.4.1 Schedulable Entities
4.4.2 Multitasking
4.4.3 The Context Switch
4.4.4 The Dispatcher
4.4.5 Scheduling Algorithms: Preemptive Priority-Based Scheduling, Round-Robin Scheduling
4.5 Objects
4.6 Services
4.7 Key Characteristics of an RTOS
4.7.1 Reliability
4.7.2 Predictability
4.7.3 Performance
4.7.4 Compactness
4.7.5 Scalability
Chapter 5
Tasks
Chapter 5: Tasks
5.1 Introduction
5.2 Defining a Task
5.3 Task States and Scheduling
5.3.1 Ready State
5.3.2 Running State
5.3.3 Blocked State
5.4 Typical Task Operations
5.4.1 Task Creation and Deletion
5.4.2 Task Scheduling
5.4.3 Obtaining Task Information
5.5 Typical Task Structure
5.5.1 Run-to-Completion Tasks
5.5.2 Endless-Loop Tasks
5.6 Synchronization, Communication, and Concurrency
5.1 Introduction
• Simple software applications are typically
designed to run sequentially.
• However, this scheme is inappropriate for
real-time embedded applications.
• These applications generally handle
multiple inputs and outputs within tight
time constraints.
– They must be designed for concurrency.
5.2 Defining a Task
• A task is an independent thread of
execution that can compete with other
concurrent tasks for processor execution
time.
• A task is schedulable.
A task and its associated data
structures
System tasks
• Initialization or startup task
• Idle task
• Logging task
• Exception-handling task
• Debug agent task
5.3 Task States and Scheduling
5.3.1 Ready state
• In this state, the task actively competes
with all other ready tasks for the
processor’s execution time.
• The kernel’s scheduler uses the priority of
each task to determine which task to move
to the running state.
An example priority scheme (255 = lowest priority, 0 = highest)
5.3.2 Running state
• On a single-processor system, only one task
can run at a time.
• When a task is preempted by a higher
priority task, it moves to the ready state.
• It also can move to the blocked state.
– Making a call that requests an unavailable
resource
– Making a call that requests to wait for an event
to occur
– Making a call to delay the task for some
duration
5.3.3 Blocked state
• CPU starvation occurs when higher priority
tasks use all of the CPU execution time and
lower priority tasks do not get to run.
• A blocked task moves back to the ready state when its blocking condition is met, for example:
– A semaphore token for which a task is waiting
is released
– A message, on which the task is waiting, arrives
in a message queue
– A time delay imposed on the task expires
5.4 Typical task operations
• Creating and deleting tasks
• Controlling task scheduling
• Obtaining task information
5.4.1 Task creation and deletion
• Two common approaches
– Two system calls: one to create the task and
another to move it to the ready state
– One system call: creation and readying are
done in a single call
• User-configurable hooks
– A mechanism to execute programmer-supplied
functions at the time of specific kernel events
• Premature deletion
– May result in memory or resource leaks
5.4.2 Task scheduling
• The scheduling can be handled
automatically.
• Many kernels also provide a set of API
calls that allows developers to control the
state changes.
– Manual scheduling
Task Scheduling
• Scheduling:
– Select the most deserving process to run based
on system-specific policy
• Also called CPU scheduling
– Since tasks compete for the CPU execution time
• Most RTOSes support the preemptive
priority-based scheduling algorithm
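The preemptive priority-based policy can be sketched in a few lines. The Python below is an illustrative model only (the task names and the pick_next helper are invented for this sketch, not an RTOS API), using the convention from the earlier slide that lower numbers mean higher priority:

```python
# Minimal model of preemptive priority-based scheduling.
# Convention (as in the slides): 0 = highest priority, 255 = lowest.

def pick_next(ready_tasks):
    """Return the ready task with the highest priority (lowest number)."""
    return min(ready_tasks, key=lambda t: t["priority"])

ready = [
    {"name": "tLogger",  "priority": 200},
    {"name": "tSensor",  "priority": 10},
    {"name": "tDisplay", "priority": 50},
]

running = pick_next(ready)                       # tSensor is dispatched first
# A higher-priority task becoming ready preempts the running one:
ready.append({"name": "tAlarm", "priority": 1})
preempting = pick_next(ready)                    # tAlarm now wins the CPU
print(running["name"], preempting["name"])
```

A real kernel keeps the ready list sorted (or bucketed per priority) so this selection is O(1), but the selection rule is exactly the one shown.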
Policy Considerations
• Policy can control/influence
– CPU utilization
– Average time a process waits for service
– Average amount of time to complete a job
• Could strive for any of the following
– Equitability
– Favor very short or long jobs
– Meet priority requirements
– Meet deadlines
When Is the Scheduler Invoked?
• The scheduler is run from several points within
the kernel
– Run after putting the current process onto a wait queue
(when wait for some events)
• The current process becomes blocked
– Run at the end of a system call
• Just before a process is returned to user mode from
system mode
– When the system timer has just set the current
process's counter to zero
• The current process has run out of its time quota
– Process exits
• The current process exits
– Run when a higher-priority process becomes ready
• The current process is preempted
Well-known Task Scheduling
Algorithms in RTOSes
• Priority-based scheduling algorithm
• Priority-based round-robin scheduling
algorithm
• EDF (Earliest-Deadline-First) scheduling
algorithm
• RM (Rate-Monotonic) scheduling algorithm
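The analytical difference between RM and EDF can be illustrated with their classic utilization-based schedulability tests (the Liu–Layland bound for RM; U ≤ 1 for EDF). This is a sketch with an invented task set; the RM bound is a sufficient, not necessary, condition:

```python
# Utilization-based schedulability tests (sketch).
# Each task is a (execution_time C, period T) pair.

def utilization(tasks):
    return sum(c / t for c, t in tasks)

def rm_schedulable(tasks):
    """Sufficient Liu-Layland test for Rate-Monotonic: U <= n(2^(1/n) - 1)."""
    n = len(tasks)
    return utilization(tasks) <= n * (2 ** (1 / n) - 1)

def edf_schedulable(tasks):
    """Exact test for EDF with periodic tasks on one processor: U <= 1."""
    return utilization(tasks) <= 1.0

tasks = [(1, 4), (2, 6)]     # U = 0.25 + 0.333... ~ 0.583
print(rm_schedulable(tasks), edf_schedulable(tasks))
```

For two tasks the RM bound is about 0.828, so this set passes both tests; a set with U between the RM bound and 1 would still be accepted by EDF but not guaranteed by this RM test.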
Context Switch (1/2)
• Context switch
– Also called task switch or process switch
– Occurs when the scheduler switches from one
task to another
• Although each process can have its own
address space, all processes have to share
the CPU registers
– The kernel ensures that each such register is loaded
with the value it had when the process was
suspended
Context Switch (2/2)
• Thus, each task has its own context
– The state of the CPU registers required for the task to run
– While a task is running, its context is highly dynamic
– The context of a task is stored in its process descriptor
• Operations
– Save the context of the current process
– Load the context of the new process
– If page tables are used, update the page table entries
– Flush the TLB entries that belonged to the old process
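The save/load steps above can be modeled with a toy register set. This is purely conceptual (the register names, task names, and contexts dictionary are invented), not a real kernel's switch path:

```python
# Toy model of a context switch: each task's context is a snapshot of the
# CPU registers; the kernel saves one snapshot and loads the other.

cpu = {"pc": 0, "sp": 0, "r0": 0}               # the single shared register set
contexts = {"taskA": {"pc": 100, "sp": 0x1000, "r0": 7},
            "taskB": {"pc": 200, "sp": 0x2000, "r0": 9}}

def context_switch(old, new):
    contexts[old] = dict(cpu)    # save the outgoing task's registers
    cpu.update(contexts[new])    # load the incoming task's registers
    # (a real kernel would also update page tables and flush stale TLB entries)

cpu.update(contexts["taskA"])    # taskA starts running
cpu["r0"] = 42                   # taskA computes something
context_switch("taskA", "taskB")
print(cpu["pc"])                 # the CPU now resumes taskB at its saved pc
```

Note how taskA's in-flight value (r0 = 42) survives in its saved context, which is exactly why the registers must be saved before the new context is loaded.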
Operations for task scheduling
Task scheduling operations
• Suspend and Resume
– For debugging purposes
– Suspend a high-priority task so that a lower-priority
task can execute
• Delay a task
– Allow manual scheduling
– Wait for an external condition that does not
have an associated interrupt
• Polling: wake up after a set time to check whether a specified
condition or event has occurred
Task scheduling operations
• Restart
– Begin the task as if it had not been previously executing
– Useful during debugging or when initializing a task after
a catastrophic error
• Get and Set Priority
– Control task scheduling manually
– Helpful during a priority inversion
• Preemption lock
– A pair of calls used to disable and enable preemption in
applications
– Useful if a task is in a critical section of code
5.4.3 Obtaining Task Information
• Useful for debugging and monitoring
• Get ID: obtain a task’s ID
• Get TCB: obtain a task's TCB by its ID
– Only a snapshot of the task context
5.5 Typical task structure
• Two kinds of tasks
– Run to completion
– Endless loop
5.6 Synchronization, Communication,
and Concurrency
• Tasks synchronize and communicate
amongst themselves by using intertask
primitives, which are kernel objects that
facilitate synchronization and
communication between two or more
threads of execution.
• Examples of such objects include
semaphores, message queues, signals, and
pipes, as well as other types of objects.
Synchronization, Communication, and
Concurrency
• Task object is the fundamental construct of most
kernels.
• Tasks, along with task-management services, allow
developers to design applications for concurrency to
meet multiple time constraints and to address various
design problems inherent to real-time embedded
applications.
Embedded RTOS
Inter-Process Communication
Introduction
• Inter-Process Communication (IPC)
Classification
– Synchronization / communication
– Communication with/without data
– Uni-directional/bi-directional transfer
– Structured/un-structured data
– Destructive/non-destructive read
IPC Classification (1/2)
• Mutual exclusion & synchronization
– Semaphore
• Binary semaphore
• Counting semaphore
• Mutex semaphore
IPC Classification (2/2)
• Communication with data
– Structured data
• Destructive read
» Message queue
– Unstructured data
• Uni-directional, destructive read
» Named pipe (FIFO)
» Unnamed pipe
• Bi-directional, non-destructive read
» Shared memory
• Communication without data
– Event register
– Signal
– Condition variable
Chapter 6
Semaphore
6.1 Introduction
6.2 Defining Semaphores
6.2.1 Binary Semaphores
6.2.2 Counting Semaphores
6.2.3 Mutual Exclusion (Mutex)
6.3 Typical Semaphore Operations
6.3.1 Creating and Deleting Semaphores
6.3.2 Acquiring and Releasing Semaphores
6.3.3 Clearing Semaphore Task-Waiting Lists
6.3.4 Getting Semaphore Information
6.4 Typical Semaphore Use
6.4.1 Wait-and-Signal Synchronization
6.4.2 Multiple-Task Wait-and-Signal Synchronization
6.4.3 Credit-Tracking Synchronization
6.4.4 Single Shared-Resource-Access Synchronization
6.4.5 Recursive Shared-Resource-Access Synchronization
6.4.6 Multiple Shared-Resource-Access Synchronization
Chapter 6: Semaphore
6.1 Semaphore Introduction
• Multiple concurrent threads of execution
within an application must be able to
– Synchronize their execution
– Coordinate mutually exclusive access to shared
resources
• RTOS provides a semaphore object and
associated semaphore management
services
Semaphores
• A semaphore S is an integer variable
• Can only be accessed through two
indivisible (atomic) operations
– wait
– signal
wait(S) {
    while (S <= 0)
        ;   // no-op (busy-wait)
    S--;
}

signal(S) {
    S++;
}
Implementation of Blocking
Semaphores
• Semaphore operations now defined as:
void wait(semaphore S) {
    S.value--;
    if (S.value < 0) {
        add this process to S.L;
        block();
    }
}

void signal(semaphore S) {
    S.value++;
    if (S.value <= 0) {
        remove a process P from S.L;
        wakeup(P);
    }
}
Usage of Semaphores -
mutual exclusion
• Example: critical-section for n-processes
• Shared data:
semaphore mutex; // initially mutex = 1
Process Pi:
do {
    wait(mutex);
    critical section
    signal(mutex);
    remainder section
} while (1);
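The effect of wait/signal around the critical section can be checked with two Python threads incrementing a shared counter. This is a sketch: threading.Semaphore(1) plays the role of mutex, and the worker function is invented for the demonstration:

```python
import threading

mutex = threading.Semaphore(1)   # initially 1: the critical section is free
counter = 0

def worker():
    global counter
    for _ in range(10_000):
        mutex.acquire()          # wait(mutex)
        counter += 1             # critical section
        mutex.release()          # signal(mutex)

threads = [threading.Thread(target=worker) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)                   # always 20000 with the semaphore in place
```

Without the acquire/release pair the two read-modify-write sequences could interleave and lose updates; with it, every increment is mutually exclusive.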
Usage of Semaphores -
synchronization
• Semaphore as general synchronization tool
to synchronize processes
– Use semaphore (with value initialized to 0)
• Example: if we want to execute S1 and
then S2
• Shared data:
semaphore synch; // initially synch = 0
Process P1:          Process P2:
S1;                  wait(synch);
signal(synch);       S2;
Deadlocks and Starvation
• Two or more processes are waiting indefinitely for
an event that can be caused by only one of the
waiting processes.
• Let S and Q be two semaphores, initialized to 1.
P0:              P1:
wait(S);         wait(Q);
wait(Q);         wait(S);
  ...              ...
signal(S);       signal(Q);
signal(Q);       signal(S);

A possible interleaving:
P0: wait(S) … pass
P1: wait(Q) … pass
P0: wait(Q) … block
P1: wait(S) … block
Deadlock occurs!
Incorrect Use of Semaphores
• Correct usage, for reference:
wait(mutex);
...
critical section
...
signal(mutex);
• Incorrect use of semaphores may cause timing errors
– signal(mutex) before the critical section, wait(mutex) after it:
signal(mutex);
...
critical section
...
wait(mutex);
→ will violate mutual exclusion
– wait(mutex) both before and after the critical section:
wait(mutex);
...
critical section
...
wait(mutex);
→ will cause deadlock
– Omitting wait(mutex) or signal(mutex) (or both)
→ will violate mutual exclusion or cause deadlock
6.2 Defining Semaphores
• A semaphore or semaphore token
– A kernel object that one or more threads of
execution can acquire or release for the
purpose of synchronization or mutual exclusion
• When a semaphore is first created, the kernel
assigns
– A semaphore control block (SCB)
– A unique ID
– A value (binary or a count)
– A task-waiting list
The Associated Parameters, and
Supporting Data Structures
Semaphore (1/2)
• A key that allows a task to carry out some
operation or to access a resource
– A single semaphore can be acquired a finite
number of times
– Acquiring a semaphore is like acquiring a
duplicate of a key
– If the token counter reaches 0
• The semaphore has no token left
• A requesting task cannot acquire the semaphore and
may block if it chooses to wait
Semaphore (2/2)
• Task-waiting list tracks all tasks blocked while
waiting on an unavailable semaphore
– FIFO: first-in-first-out order
– Highest-priority first order
• When an unavailable semaphore becomes available
– Kernel allows the first task in task-waiting list to acquire
it
• Different types of semaphores
– Binary semaphores
– Counting semaphores
– Mutual-exclusion (mutex) semaphores
6.2.1 Binary Semaphores
• Have a value of either 0 or 1
– 0: the semaphore is considered unavailable
– 1: the semaphore is considered available
• When a binary semaphore is first created
– It can be initialized to either available or
unavailable (1 or 0, respectively)
• When created as global resources
– Shared among all tasks that need them
– Any task could release a binary semaphore even
if the task did not initially acquire it
The State Diagram of a
Binary Semaphore
6.2.2 Counting Semaphores (1/2)
• Use a counter to allow it to be acquired or
released multiple times
• The semaphore count assigned when it was
first created denotes the number of
semaphore tokens it has initially
• Global resources that can be shared by all
tasks that need them
The State Diagram of a
Counting Semaphore
Counting Semaphores (2/2)
• Bounded count
– The initial count set for the counting semaphore
acts as the maximum count for the semaphore
– Determined when the semaphore was first
created
• Unbounded count
– Allow the counting semaphore to count beyond
the initial count to the maximum value that can
be held by the counter’s data type
• An unsigned integer or an unsigned long value
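Python's threading module offers both flavors, which makes the bounded/unbounded distinction easy to demonstrate. This maps the slide's terms onto Semaphore and BoundedSemaphore as an analogy, not an RTOS API:

```python
import threading

# Unbounded count: releases beyond the initial count are allowed.
unbounded = threading.Semaphore(2)
unbounded.release()              # count grows to 3 - no complaint

# Bounded count: the initial count is also the maximum count.
bounded = threading.BoundedSemaphore(2)
try:
    bounded.release()            # would exceed the initial count of 2
    exceeded = False
except ValueError:
    exceeded = True
print(exceeded)                  # True: the bound was enforced
```

A bounded count catches the common bug of releasing a semaphore more times than it was acquired, which with an unbounded count silently inflates the number of available tokens.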
6.2.3 Mutual Exclusion
(Mutex) Semaphores
• A special binary semaphore that supports
– Ownership
– Recursive access
– Task deletion safety
– One or more protocols for avoiding problems
inherent to mutual exclusion
• The states of a mutex
– Locked (1) and unlocked (0)
• A mutex is initially created in the unlocked
state
The State Diagram of a Mutual
Exclusion (Mutex) Semaphore
Mutex Ownership
• Ownership of a mutex is gained when a task
first locks the mutex by acquiring it.
• When a task owns the mutex
– No other task can lock or unlock that mutex
– Contrast to the binary semaphore that can be
released by any task
Recursive Locking (1/3)
• Allow a task that owns the mutex to
acquire it multiple times in the locked state
• The mutex with recursive locking is called a
recursive mutex
– Useful when a task requiring exclusive access
to a shared resource calls one or more routines
that also require access to the same resource
– Allows nested attempts to lock the mutex to
succeed, rather than cause deadlock
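Python's threading.RLock behaves like the recursive mutex described here, while a plain Lock would deadlock on the nested acquire. A sketch, with routine names following the slides' tAccessTask/Routine A/Routine B example:

```python
import threading

rmutex = threading.RLock()       # recursive mutex: the owner may re-lock it

def routine_b():
    with rmutex:                 # third, innermost lock - still succeeds
        return "accessed resource in B"

def routine_a():
    with rmutex:                 # second (nested) lock by the same task
        return routine_b()

def t_access_task():
    with rmutex:                 # first lock: this thread becomes the owner
        return routine_a()

result = t_access_task()
print(result)

# With a non-recursive lock the nested acquire would block forever:
plain = threading.Lock()
plain.acquire()
nested_ok = plain.acquire(blocking=False)   # False: would deadlock if blocking
plain.release()
```

The RLock's internal count is incremented on each nested acquire and the lock is only truly released when the count returns to zero, matching the lock-count implementation described on the next slide.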
Recursive Locking (2/3)
• Implementation
– A lock count tracks the two states of a mutex and
the number of times it has been recursively
locked
– A mutex might maintain two counts
• A binary value to track its state
• A separate lock count to track the number of times it
has been acquired in the locked state by the task that
owns it
Recursive Locking (3/3)
• Comparison
– The count used for the mutex
• Track the number of times that the task owning the
mutex has locked or unlocked the mutex
• Always unbounded, which allows multiple recursive
accesses
– The count used for the counting semaphore
• Track the number of tokens that have been acquired
or released by any task
Task Deletion Safety
• Premature task deletion is avoided
• While a task owns the mutex with built-in
task deletion safety capability, the task
cannot be deleted
Priority Inversion Avoidance
• Occurs when a higher priority task is blocked
waiting for a resource being used by a
lower priority task, which has itself been
preempted by an unrelated medium-priority
task
• Priority inversion avoidance protocols
– Priority inheritance protocol
– Ceiling priority protocol
Priority Inversion Example
• Priority inversion
– a situation in which a low-priority task executes
while a higher priority task waits on it due to
resource contentions
Unbounded Priority
Inversion Example
Priority Inheritance Protocol
Example
Ceiling Priority Protocol Example
• When acquiring the mutex, the task's priority is
automatically set to the highest priority of all
possible tasks that might request it, until the mutex
is released
6.3 Typical Semaphore Operations
• Creating and deleting semaphores
• Acquiring and releasing semaphores
• Clearing a semaphore’s task-waiting list
• Getting semaphore information
6.3.1 Creating and Deleting
Semaphores (1/2)
• Different calls might be used for creating
binary, counting, and mutex semaphores
– Binary: specify the initial semaphore state and
the task-waiting order
– Counting: specify the initial semaphore count
and the task-waiting order
– Mutex: specify the task-waiting order and
enable task deletion safety, recursion, and
priority-inversion avoidance protocols
Creating and Deleting
Semaphores (2/2)
• When a semaphore is deleted
– Blocked tasks in its task-waiting list are
unblocked and moved to the ready or running
states
• Do not delete a semaphore while it is in use
6.3.2 Acquiring and Releasing
Semaphores
• A task acquiring a semaphore can
– Wait forever
– Wait with a timeout
– Not wait at all
• Any task can release a binary or counting
semaphore
– However, a mutex can only be released (unlocked) by the
task that first acquired (locked) it
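The three acquire policies map directly onto the arguments of Python's Semaphore.acquire. Again, this is an analogy for the RTOS calls, not their real names:

```python
import threading

sem = threading.Semaphore(1)

got = sem.acquire()                    # "wait forever" (succeeds at once here)
assert got

# The semaphore is now unavailable (count 0):
no_wait = sem.acquire(blocking=False)  # "do not wait": fails immediately
timed = sem.acquire(timeout=0.05)      # "wait with a timeout": fails after 50 ms

sem.release()                          # any task may release a binary/counting
                                       # semaphore (unlike a mutex)
print(no_wait, timed)
```

The non-blocking form is what an ISR would have to use, since an ISR can never be allowed to block.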
6.3.3 Clearing Semaphore Task-Waiting Lists
• Clear all tasks waiting on a semaphore's
task-waiting list
• Useful for thread rendezvous
– When multiple tasks’ executions need to meet
at some point in time to synchronize execution
control
6.3.4 Getting Semaphore Information
• Useful for performing monitoring or
debugging
• Should be used judiciously
– The semaphore information might be dynamic at
the time it is requested
6.4 Typical Semaphore Use
• Semaphores are useful to
– Synchronize execution of multiple tasks
– Coordinate access to a shared resource
• Examples
– Wait-and-signal synchronization
– Multiple-task wait-and-signal synchronization
– Credit-tracking synchronization
– Single shared-resource-access synchronization
– Recursive shared-resource-access synchronization
– Multiple shared-resource-access synchronization
6.4.1 Wait-and-Signal
Synchronization (1/4)
• Two tasks can communicate for the
purpose of synchronization without
exchanging data.
• Example: a binary semaphore can be used
between two tasks to coordinate the
transfer of execution control
– Binary semaphore is initially unavailable
– tWaitTask has higher priority and runs first
• Attempts to acquire the semaphore but blocks
– tSignalTask then has a chance to run
• Releases the semaphore and thus unblocks tWaitTask
Wait-and-Signal Synchronization
Between Two Tasks (2/4)
Wait-and-Signal
Synchronization (3/4)
• To coordinate the synchronization of more
than two tasks
– Use the flush operation on the task-waiting list
of a binary semaphore
• Example
– Binary semaphore is initially unavailable
– The three tWaitTasks have higher priority and run first
• Each attempts to acquire the semaphore but blocks
– tSignalTask then has a chance to run
• Invokes a flush operation and thus unblocks all three
tWaitTasks
Wait-and-Signal
Synchronization (4/4)
tWaitTask ()
{
:
Do some processing specific to task
Acquire binary semaphore token
:
}
tSignalTask ()
{
:
Do some processing
Flush binary semaphore's task-waiting list
:
}
6.4.3 Credit-Tracking
Synchronization (1/4)
• The rate at which the signaling task
executes is higher than that of the
signaled task
– Need to count each signaling occurrence
– By means of the counting semaphore
Credit-Tracking Synchronization
Between Two Tasks (2/4)
Credit-Tracking
Synchronization (3/4)
• Counting semaphore’s count is initially 0
• tSignalTask has higher priority than tWaitTask
– tSignalTask continues to run until it relinquishes the CPU
by making a blocking system call or delaying itself
– Thus, tSignalTask might increment the counting
semaphore multiple times before tWaitTask runs
• The counting semaphore allows a credit buildup of
the number of times that the tWaitTask can
execute before the semaphore becomes available
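The credit build-up is easy to see with a counting semaphore started at 0: each release by the signaling side banks one "credit" that the waiting side can later consume. A sketch using Python's Semaphore in place of the kernel object:

```python
import threading

credits = threading.Semaphore(0)     # counting semaphore, initial count 0

# tSignalTask runs in a burst and signals three times before tWaitTask runs:
for _ in range(3):
    credits.release()                # each release banks one credit

# tWaitTask now catches up: it can acquire exactly as many times as
# credits were banked, without blocking.
consumed = 0
while credits.acquire(blocking=False):
    consumed += 1
print(consumed)                      # 3 credits were banked
```

The fourth acquire attempt fails immediately, which is the point: the count records exactly how many signaling occurrences have not yet been serviced.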
Credit-Tracking
Synchronization (4/4)
• Credit-tracking mechanism is useful if
tSignalTask releases semaphore in bursts
– Giving tWaitTask the chance to catch up every
once in a while
• Useful in ISR handling to signal task
waiting on a semaphore
6.4.4 Single Shared-Resource-Access Synchronization (1/3)
• Provide for mutually exclusive access to a
shared resource
• A shared resource may be a
– Memory location
– A data structure
– An I/O device
• Example
– Use a binary semaphore to protect the shared
resource
Single Shared-Resource-Access Synchronization (2/3)
Single Shared-Resource-Access Synchronization (3/3)
• Dangers
– Any task can accidentally release the binary
semaphore, even one that never acquired the
semaphore first
• Both tasks can access the shared resource at the
same time
• Solutions
– Use a mutex semaphore instead
• Support the concept of ownership
6.4.5 Recursive Shared-Resource-Access Synchronization (1/2)
• In some cases, a task must be able to
access a shared resource recursively
• Example
– tAccessTask calls Routine A that calls Routine
B
– All three need access to the same shared
resource
– If semaphore is used
• tAccessTask would end up blocking and causing a
deadlock
– Solution
• Use a recursive mutex
Recursive Shared-Resource-Access Synchronization (2/2)
Pseudo Code for Recursively
Accessing a Shared Resource
tAccessTask ()
{
:
Acquire mutex
Access shared resource
Call Routine A
Release mutex
:
}
Routine A ()
{
:
Acquire mutex
Access shared resource
Call Routine B
Release mutex
:
}
Routine B ()
{
:
Acquire mutex
Access shared resource
Release mutex
:
}
6.4.6 Multiple Shared-Resource-Access Synchronization (1/2)
• If multiple equivalent shared resources are used
– May use a counting semaphore that is initially set to the
number of equivalent shared resources
• However, as with the binary semaphore
– May cause problem if a task releases a semaphore that it
did not originally acquire
• Solution
– A separate mutex can be assigned for each shared
resource
– Acquire the first mutex in a non-blocking way
– If unsuccessful, acquire the second mutex in a
blocking way
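The strategy on this slide — a counting semaphore metering how many resources are free, plus one mutex per resource — can be sketched as follows. The buffer pool and helper names are invented, and this sketch simply tries each per-resource mutex non-blocking in turn (a slight simplification of the try-first-then-block variant described above):

```python
import threading

# Two equivalent shared resources (say, two DMA buffers), each with its own lock.
buffers = [{"lock": threading.Lock(), "name": "buf0"},
           {"lock": threading.Lock(), "name": "buf1"}]
available = threading.Semaphore(len(buffers))   # counting semaphore = free count

def acquire_buffer():
    available.acquire()              # wait until *some* buffer is free
    for buf in buffers:
        if buf["lock"].acquire(blocking=False):  # try each mutex non-blocking
            return buf
    raise RuntimeError("semaphore promised a free buffer")  # unreachable here

def release_buffer(buf):
    buf["lock"].release()            # mutex: released by the owning path only
    available.release()              # one more buffer is free again

a = acquire_buffer()
b = acquire_buffer()                 # the two calls get two distinct buffers
names = {a["name"], b["name"]}
release_buffer(a)
release_buffer(b)
print(names)
```

The counting semaphore guarantees that a task only starts probing the mutexes when at least one resource is free, so the non-blocking scan always finds one; the per-resource mutexes add the ownership protection a bare counting semaphore lacks.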
Multiple Shared-Resource-Access Synchronization (2/2)
Pseudo Code: Use the Counting
Semaphore
Pseudo Code: Use the Mutex
Chapter 7
Message Queues
Chapter 7: Message Queues
7.1 Introduction
7.2 Defining Message Queues
7.3 Message Queue States
7.4 Message Queue Content
7.5 Message Queue Storage
7.5.1 System Pools
7.5.2 Private Buffers
7.6 Typical Message Queue Operations
7.6.1 Creating and Deleting Message Queues
7.6.2 Sending and Receiving Messages
7.6.3 Obtaining Message Queue Information
7.7 Typical Message Queue Use
7.7.1 Non-Interlocked, One-Way Data Communication
7.7.2 Interlocked, One-Way Data Communication
7.7.3 Interlocked, Two-Way Data Communication
7.7.4 Broadcast Communication
7.1 Message Queues Introduction
• To facilitate inter-task data communication,
kernels provide
– a message queue object and message queue
management services
• A message queue
– a buffer-like object through which tasks and
ISRs send and receive messages to
communicate and synchronize with data
• The message queue itself consists of a
number of elements, each of which can hold
a single message.
7.2 Defining Message Queues
• When a message queue is first created, it is
assigned
– a queue control block (QCB)
– a message queue name
– a unique ID
– memory buffers
– a queue length
– a maximum message length
– task-waiting lists
• Kernel takes developer-supplied parameters to
determine how much memory is required for the
message queue:
– queue length and maximum message length
The associated parameters, and
supporting data structures
7.3 Message Queue States (1/2)
• The state diagram for a message queue:
Message Queue States (2/2)
• When a task attempts to send a message
to a full message queue, kernels implement
one of two behaviors:
– the sending function returns an error code to
that task
– the sending task is blocked and moved into the
sending task-waiting list
7.4 Message Queue Content (1/3)
• Message queues can be used to send and
receive a variety of data. Some examples:
– a temperature value from a sensor
– a bitmap to draw on a display
– a text message to print to an LCD
– a keyboard event
– a data packet to send over the network
Message Queue Content (2/3)
• When a task sends a message to another
task, the message normally is copied twice
– from sender’s memory area to the message
queue’s memory area
– from the message queue’s memory area to
receiver’s memory area
• Copying data can be expensive in terms of
performance and memory requirements
Message copying and memory use
for sending and receiving messages
Message Queue Content (3/3)
• Keep copying to a minimum in a real-time
embedded system:
– by keeping messages small
– by using a pointer instead
• Send a pointer to the data, rather than the
data itself
– overcome the limit on message length
– improve both performance and memory
utilization
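Passing a reference instead of the payload can be demonstrated with Python's queue.Queue: enqueueing an object sends only a reference, so no copy of the data is made. This is an analogy for sending a pointer through an RTOS message queue, not an RTOS API:

```python
import queue

mq = queue.Queue(maxsize=8)          # stands in for the RTOS message queue

payload = bytearray(1_000_000)       # a large buffer we'd rather not copy
mq.put(payload)                      # enqueues a reference, not the bytes

received = mq.get()
same_object = received is payload    # True: sender and receiver share the buffer
print(same_object)

# Caveat: with a shared buffer, sender and receiver must coordinate access
# (e.g. with a mutex), since both now reference the same memory.
```

The identity check is the whole point: the million-byte buffer crossed the queue without either of the two copies described above.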
7.5 Message Queue Storage (1/2)
• Message queues may be stored in a system
pool or private buffers
• System Pools
– the messages of all queues are stored in one
large shared area of memory
– Advantage: save on memory use
– Downside: a message queue with large messages
can easily use most of the pooled memory
Message Queue Storage (2/2)
• Private Buffers
– separate memory areas for each message queue
– Downside: uses up more memory
• requires enough reserved memory area for the full
capacity of every message queue that will be created
– Advantage: better reliability
• ensures that messages do not get overwritten and
that room is available for all messages
7.6 Typical Message Queue
Operations
• creating and deleting message queues
• sending and receiving messages
• obtaining message queue information
7.6.1 Creating and Deleting
Message Queues
• When created, message queues are treated as
global objects and are not owned by any particular
task.
• When creating a message queue, a developer needs
to decide
– the message queue length
– the maximum message size
– the waiting order for blocked tasks
Operation Description
Create Creates a message queue
Delete Deletes a message queue
7.6.2 Sending and Receiving
Messages
Operation Description
Send Sends a message to a message queue
Receive Receives a message from a message queue
Broadcast Broadcasts messages
Sending messages in FIFO
or LIFO order
Sending Messages
• Tasks can send messages with different
blocking policies:
– not block (ISRs and tasks)
• If the message queue is already full, the send call
returns with an error; the sender does not block
– block with a timeout (tasks only)
– block forever (tasks only)
• The blocked task is placed in the message
queue’s task-waiting list
– FIFO or priority-based order
FIFO and priority-based task-waiting lists
Receiving Messages (1/2)
• Tasks can receive messages with different
blocking policies:
– not blocking
– blocking with a timeout
– blocking forever
• If the message queue is empty, the
blocked task is placed in the message
queue's task-waiting list
– FIFO or priority-based order
FIFO and priority-based task-waiting lists
Receiving Messages (2/2)
• Messages can be read from the head of a
message queue in two different ways:
– destructive read
• removes the message from the message queue's
storage buffer after it is successfully read
– non-destructive read
• without removing the message
7.6.3 Obtaining Message
Queue Information
• Obtain information about a message queue:
– message queue ID, task-waiting list queuing
order (FIFO or priority-based), and the
number of messages queued.
Operation Description
Show queue info Gets information on a message queue
Show queue's task-waiting list Gets a list of tasks in the queue's task-waiting list
7.7 Typical Message Queue Use
• Typical ways to use message queues within
an application:
– non-interlocked, one-way data communication
– interlocked, one-way data communication
– interlocked, two-way data communication
– broadcast communication
7.7.1 Non-Interlocked, One-Way Data Communication (1/3)
• non-interlocked (or loosely coupled), one-way
data communication:
– The activities of tSourceTask and tSinkTask
are not synchronized.
• tSourceTask simply sends a message and does not
require acknowledgement from tSinkTask.
Non-Interlocked, One-Way Data Communication (2/3)
tSourceTask ()
{
:
Send message to message queue
:
}
tSinkTask ()
{
:
Receive message from message queue
:
}
Non-Interlocked, One-Way Data Communication (3/3)
• ISRs typically use non-interlocked, one-way
communication.
– A task such as tSinkTask runs and waits on the
message queue.
– When the hardware triggers an ISR to run, the
ISR puts one or more messages into the
message queue for tSinkTask.
• ISRs send messages to the message queue
in a non-blocking way.
– If the message queue becomes full, any
additional messages that the ISR sends to the
message queue are lost.
7.7.2 Interlocked, One-Way Data Communication (1/2)
• interlocked communication
– the sending task sends a message and waits to
see if the message is received
– useful for reliable communications or task
synchronization
• Example
– a binary semaphore initially set to 0 and a
message queue with a length of 1 (also called a
mailbox)
– Sender tSourceTask and receiver tSinkTask
operate in lockstep with each other
Interlocked, One-Way Data Communication (2/2)
tSourceTask ()
{
:
Send message to message queue
Acquire binary semaphore
:
}
tSinkTask ()
{
:
Receive message from message queue
Give binary semaphore
:
}
7.7.3 Interlocked, Two-Way Data Communication (1/2)
• interlocked, two-way data communication
(also called full-duplex or tightly coupled
communication)
– data flows bidirectionally between tasks
– useful when designing a client/server-based
system
– two separate message queues are required
• If multiple clients need to be set up
– all clients can use the client message queue to
post requests
– tServerTask uses a separate message queue to
fulfill the different clients’ requests
Interlocked, Two-Way Data Communication (2/2)
tClientTask ()
{
   :
   Send a message to the requests queue
   Wait for message from the server queue
   :
}

tServerTask ()
{
   :
   Receive a message from the requests queue
   Send a message to the client queue
   :
}
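The client/server exchange above can be sketched with two queues, one per direction (for simplicity a single client is shown, so one shared reply queue suffices; with multiple clients each would own its reply queue):

```python
import queue
import threading

requests_q = queue.Queue()   # clients post requests here
replies_q = queue.Queue()    # server posts replies here (single client shown)

def t_server_task():
    # Serve two requests, replying to each on the client's queue
    for _ in range(2):
        req = requests_q.get()
        replies_q.put(req * 2)   # illustrative "service": double the value

def t_client_task(out):
    for i in (3, 4):
        requests_q.put(i)            # send a message to the requests queue
        out.append(replies_q.get())  # wait for the reply from the server queue

out = []
server = threading.Thread(target=t_server_task)
client = threading.Thread(target=t_client_task, args=(out,))
server.start()
client.start()
client.join()
server.join()
print(out)  # [6, 8]
```

Because the client blocks on the reply queue after every request, the request/reply pairs are interlocked exactly as in the pseudocode.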
7.7.4 Broadcast Communication (1/3)
• Allow developers to broadcast a copy of
the same message to multiple tasks
• Message broadcasting is a one-to-many-
task relationship.
– tBroadcastTask sends the message on which
multiple tSignalTasks are waiting.
Broadcast Communication (2/3)
Broadcast Communication (3/3)
tBroadcastTask ()
{
   :
   Send broadcast message to queue
   :
}

tSignalTask ()
{
   :
   Receive message on queue
   :
}

Note: similar code for tSignalTasks 1, 2, and 3.
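Python's queue module has no broadcast primitive, so the one-to-many relationship can be emulated by giving each receiving task its own queue and having the broadcaster put a copy of the message on every one (a sketch of the idea, not how an RTOS implements broadcast internally):

```python
import queue

# One queue per receiving task; broadcasting puts a copy on each queue
sink_queues = [queue.Queue() for _ in range(3)]

def broadcast(msg):
    # One-to-many: every waiting task's queue receives the same message
    for q in sink_queues:
        q.put(msg)

broadcast("shutdown")
received = [q.get() for q in sink_queues]
print(received)  # ['shutdown', 'shutdown', 'shutdown']
```

Each of the three receivers gets its own copy, matching the one-to-many relationship the slide describes.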

  • 1. UNIT – II Chapter 4: Real-Time Operating Systems Chapter 5: Tasks Chapter 6: Semaphore Chapter – 7 : Message Queues B Prasad, Assoc. Prof., Dept. of CSE, MLRITM 1 Dept. of CSE MLRITM
  • 3. Chapter 4: Real-Time Operating Systems 4.1 Introduction 4.2 A Brief History of Operating Systems 4.3 Defining an RTOS 4.4 The Scheduler 4.4.1 Schedulable Entities 4.4.2 Multitasking 4.4.3 The Context Switch 4.4.4 The Dispatcher 4.4.5 Scheduling Algorithms Preemptive Priority-Based Scheduling Round- Robin Scheduling 4.5 Objects 4.6 Services 4.7 Key Characteristics of an RTOS 4.7.1 Reliability 4.7.2 Predictability 4.7.3 Performance 4.7.4 Compactness 4.7.5 Scalability
  • 5. Chapter 5: Tasks 5.1 Introduction 5.2 Defining a Task 5.3 Task States and Scheduling 5.3.1 Ready State 5.3.2 Running State 5.3.3 Blocked State 5.4 Typical Task Operations 5.4.1 Task Creation and Deletion 5.4.2 Task Scheduling 5.4.3 Obtaining Task Information 5.5 Typical Task Structure 5.5.1 Run-to-Completion Tasks 5.5.2 Endless-Loop Tasks 5.6 Synchronization, Communication, and Concurrency
  • 6. 5.1 Introduction • Simple software applications are typically designed to run sequently. • However, this scheme is inappropriate for real-time embedded applications. • These applications generally handle multiple inputs and outputs within tight time constraints. – They must be designed for concurrency.
  • 7. 5.1 Definition of Tasks • A task is an independent thread of execution that can compete with other concurrent tasks for processor execution time. • A task is schedulable.
  • 8. A task and its associated data structures
  • 9. System tasks • Initialization or startup task • Idle task • Logging task • Exception-handling task • Debug agent task
  • 10. 5.3 Task states and Scheduling
  • 11. 5.3.1 Ready state • In this state, the task actively competes with all other ready tasks for the processor’s execution time. • The kernel’s scheduler uses the priority of each task to determine which task to move to the running state.
  • 13. 5.3.2 Running state • On a single-processor system, only one task can run at a time. • When a task is preempted by a higher priority task, it moves to the ready state. • It also can move to the blocked state. – Making a call that requests an unavailable resource – Making a call that requests to wait for an event to occur – Making a call to delay the task for some duration
  • 14. 5.3.3 Blocked state • CPU starvation occurs when higher priority tasks use all of the CPU execution time and lower priority tasks do not get to run. • The cases when blocking conditions are met – A semaphore token for which a task is waiting is released – A message, on which the task is waiting, arrives in a message queue – A time delay imposed on the task expires
  • 15. 5.4 Typical task operations • Creating and deleting tasks • Controlling task scheduling • Obtaining task information
  • 16. 5.4.1 Task creation and deletion • Two common approaches – Two system calls: first to create and then to move to the ready state – One systems calls: All done in a system call • User-configurable hooks – A mechanism to execute programmer-supplied functions at the time of specific kernel events • Premature deletion – May get memory or resource leaks
  • 17. 5.4.2 Task scheduling • The scheduling can be handled automatically. • Many kernels also provide a set of API calls that allows developers to control the state changes. – Manual scheduling
  • 18. Task Scheduling • Scheduling: – Select the most deserving process to run based on system-specific policy • Also called CPU scheduling – Since tasks complete for the CPU execution • All most RTOSes support the preemptive priority-based scheduling algorithm
  • 19. Policy Considerations • Policy can control/influence – CPU utilization – Average time a process waits for service – Average amount of time to complete a job • Could strive for any of the following – Equitability – Favor very short or long jobs – Meet priority requirements – Meet deadlines
  • 20. When the Scheduler is Invoked? • The scheduler is run from several points within the kernel – Run after putting the current process onto a wait queue (when wait for some events) • The current process becomes blocked – Run at the end of a system call • Just before a process is returned to user mode from system mode – When the system timer has just set the current processes counter to zero. • The current process run out of its time quota – Process exits • The current process exits – Run when a more higher priority becomes ready • The current process is preempted
  • 21. Well-known Task Scheduling Algorithms in RTOSes • Priority-based scheduling algorithm • Priority-based round-robin scheduling algorithm • EDF (Earliest-Deadline-First) scheduling algorithm • RM (Rate-Monotonic) scheduling algorithm
  • 22. Context Switch (1/2) • Context switch – Also called task switch or process switch – Occurred when a scheduler switches from one task to another • Although each process can have its own address space, all processes have to share the CPU registers – Kernel ensure that each such register is loaded with the value it had when the process was suspended
  • 23. Context Switch (2/2) • Thus, each task has its own context – The state of the CPU registers required for tasks’ running – When a task running, its context is highly dynamic – The context of a task is stored in its process descriptor • Operations – Save the context of the current process – Load the context of new process – If have page table, update the page table entries – Flush those TLB entries that belonged to the old process
  • 24. Operations for task scheduling
  • 25. Task scheduling operations • Suspend and Resume – For debugging purposes – Suspend a high-priority task so that lower priority task can execute • Delay a task – Allow manual scheduling – Wait for an external condition that does not have an associated interrupt • Polling: wait up after a set time to check a specified condition or event had occurred
  • 26. Task scheduling operations • Restart – Begin the task as if it had not been previously executing – Useful during debugging or when initializing a task after a catastrophic error • Get and Set Priority – Control task scheduling manually – Helpful during a priority inversion • Preemption lock – A pairs of calls used to disable and enable preemption in applications – Useful if a task is in a critical section of code
  • 27. 5.4.3 Obtaining Task Information • Useful for debugging and monitoring • Get ID: obtain a task’s ID • Get TCB: obtain a task’s TCB by itsID – Only a snapshot of the task context
  • 28. 5.5 Typical task structure • Two kinds of tasks – Run to completion – Endless loop
  • 29. 5.6 Synchronization, Communication, and Concurrency • Tasks synchronize and communicate amongst themselves by using intertask primitives, which are kernel objects that facilitate synchronization and communication between two or more threads of execution. • Examples of such objects include semaphores, message queues, signals, and pipes, as well as other types of objects.
  • 30. Synchronization, Communication, and Concurrency • Task object is the fundamental construct of most kernels. • Tasks, along with task-management services, allow developers to design applications for concurrency to meet multiple time constraints and to address various design problems inherent to real-time embedded applications.
  • 32. Introduction B Prasad, Assoc. Prof., 3 • Inter-Process Communication (IPC) Classification – Synchronization / communication – Communication with/without data – Uni-directional/bi-directional transfer – Structured/un-structured data – Destructive/non-destructive read 32
  • 33. IPC Classification (1/2) B Prasad, Assoc. Prof., 3 • Mutual exclusion & synchronization – Semaphore • Binary semaphore • Counting semaphore • Mutex semaphore 33
  • 34. IPC Classification (2/2) B Prasad, Assoc. Prof., 3 • Communication with data – Structured data • destructive read – Message queue – Unstructured data • Uni-direction – destructive read » Named pipe (FIFO) » Unnamed pipe • Bi-direction – non-destructive read » Shared memory • Communication without data – Event register – Signal – Condition variable 34
  • 35. Chapter 6 Semaphore B Prasad, Assoc. Prof., 35
  • 36. 6.1 Introduction 6.2 Defining Semaphores 6.2.1 Binary Semaphores 6.2.2 Counting Semaphores 6.2.3 Mutual Exclusion (Mutex) 6.3 Typical Semaphore Operations 6.3.1 Creating and Deleting Semaphores 6.3.2 Acquiring and Releasing Semaphores 6.3.3 Clearing Semaphore Task-Waiting Lists 6.3.4 Getting Semaphore Information 6.4 Typical Semaphore Use 6.4.1 Wait-and-Signal Synchronization 6.4.2 Multiple-Task Wait-and-Signal Synchronization 6.4.3 Credit-Tracking Synchronization 6.4.4 Single Shared-Resource-Access Synchronization 6.4.5 Recursive Shared-Resource-Access Synchronization 6.4.6 Multiple Shared-Resource-Access Synchronization Chapter 6: Semaphore B Prasad, Assoc. Prof., 36
  • 37. 6.1 Semaphore Introduction B Prasad, Assoc. Prof., Dept. of CSE, MLRITM 37 • Multiple concurrent threads of execution within an application must be able to – Synchronize their execution – Coordinate mutually exclusive access to shared resources • RTOS provides a semaphore object and associated semaphore management services
  • 38. Semaphores B Prasad, Assoc. Prof., 38 • A semaphore S is an integer variable • Can only be accessed through two indivisible (atomic) operations – wait – signal wait(S) { while (S  0) ; // no-op S--; } signal(S) { S++; }
  • 39. Implementation of Blocking Semaphores B Prasad, Assoc. Prof., 39 • Semaphore operations now defined as: void wait(semaphore S) { S.value --; if (S.value < 0) then { add this process to S.L; block(); } } wait(S) void signal(semaphore S) { S.value ++; if (S.value <= 0) { remove a process P from S.L; wakeup(P); } } signal(S)
  • 40. Usage of Semaphores - mutual exclusion • Example: critical-section for n-processes • Shared data: semaphore mutex; // initially mutex = 1 B Prasad, Assoc. Prof., 40 wait (mutex); signal (mutex); do { critical section remainder section } while (1); Process Pi
  • 41. Usage of Semaphores - synchronization B Prasad, Assoc. Prof., 41 • Semaphore as general synchronization tool to synchronize processes – Use semaphore (with value initialized to 0) • Example: if we want to execute S1 and then S2 • Shared data: semaphore synch; // initially synch = 0 Process P1 S1; signal(synch); wait(synch); S2; Process P2
  • 42. Deadlocks and Starvation • Two or more processes are waiting indefinitely for an event that can be caused by only one of the waiting processes. • Let S and Q be two semaphores, initialized to 1. P1 P0 wait(S); wait(Q); . signal(S); signal(Q); wait(Q); wait(S); . . signal(Q); signal(S); P0: wait(S) … pass P1: wait(Q) … pass P0: wait(Q) … block P1: wait(S) … block Deadlock occurs ! B Prasad, Assoc. Prof., 42
  • 43. Incorrect Use of Semaphores wait(mutex); ... critical section ... signal(mutex); • Incorrect use of semaphores may cause timing errors wait(mutex); ... critical section ... wait(mutex); signal(mutex); ... critical section ... wait(mutex); ... critical section .. signal(mutex); wait(mutex); ... critical section ... will violate mutual exclusion B Prasad, Assoc. Prof., 43 will cause deadlock will violate mutual exclusion or cause deadlock
  • 44. 6.2 Defining Semaphores B Prasad, Assoc. Prof., 44 • A semaphore or semaphore token – A kernel object that one or more threads of execution can acquire or release for the purpose of synchronization or mutual exclusion • When a semaphore is first created, kernel assigns – An semaphore control block (SCB) – A unique ID – A value (binary or a count) – A task-waiting list
  • 45. The Associated Parameters, and Supporting Data Structures B Prasad, Assoc. Prof., 45
  • 46. Semaphore (1/2) B Prasad, Assoc. Prof., 46 • A key that allows a task to carry out some operation or to access a resource – A single semaphore can be acquired a finite number of times – Acquire a semaphore is like acquiring the duplicate of a key – If the token counter reaches 0 • The semaphore has no token left • A requesting task cannot acquire the semaphore and may block if it chooses to wait
  • 47. Semaphore (2/2) B Prasad, Assoc. Prof., 47 • Task-waiting list tracks all tasks blocked while waiting on an unavailable semaphore – FIFO: first-in-first-out order – Highest-priority first order • When an unavailable semaphore becomes available – Kernel allows the first task in task-waiting list to acquire it • Different types of semaphores – Binary semaphores – Counting semaphores – Mutual-exclusion (mutex) semaphores
  • 48. 6.2.1 Binary Semaphores B Prasad, Assoc. Prof., 48 • Have a value of either 0 or 1 – 0: the semaphore is considered unavailable – 1: the semaphore is considered available • When a binary semaphore is first created – It can be initialized to either available or unavailable (1 or 0, respectively) • When created as global resources – Shared among all tasks that need them – Any task could release a binary semaphore even if the task did not initially acquire it
  • 49. The State Diagram of a Binary Semaphore B Prasad, Assoc. Prof., 49
  • 50. 6.2.2 Counting Semaphores (1/2) B Prasad, Assoc. Prof., 50 • Use a counter to allow it to be acquired or released multiple times • The semaphore count assigned when it was first created denotes the number of semaphore tokens it has initially • Global resources that can be shared by all tasks that need them
  • 51. The State Diagram of a Counting Semaphore B Prasad, Assoc. Prof., 51
  • 52. Counting Semaphores (2/2) B Prasad, Assoc. Prof., 52 • Bounded count – A count in which the initial count set for the counting semaphore – Act as the maximum count for the semaphore – Determined when the semaphore was first created • Unbounded count – Allow the counting semaphore to count beyond the initial count to the maximum value that can be held by the counter’s data type • An unsigned integer or an unsigned long value
  • 53. 6.2.3 Mutual Exclusion (Mutex) Semaphores B Prasad, Assoc. Prof., 53 • A special binary semaphore that supports – Ownership – Recursive access – Task deletion safety – One or more protocols avoiding problems inherent to mutual exclusions • The states of a mutex – Locked (1) and unlocked (0) • A mutex is initially created in the unlocked state
  • 54. The State Diagram of a Mutual Exclusion (Mutex) Semaphore B Prasad, Assoc. Prof., 54
  • 55. Mutex Ownership B Prasad, Assoc. Prof., 55 • Ownership of a mutex is gained when a task first locks the mutex by acquiring it. • When a task owns the mutex – No other task can lock or unlock that mutex – Contrast to the binary semaphore that can be released by any task
  • 56. Recursive Locking (1/3) B Prasad, Assoc. Prof., 56 • Allow a task that owns the mutex to acquire it multiple times in the locked state • The mutex with recursive locking is called a recursive mutex – Useful when a task requiring exclusive access to a shared resource calls one or more routines that also require access to the same resource – Allows nested attempts to lock the mutex to succeed, rather than cause deadlock
  • 57. Recursive Locking (2/3) B Prasad, Assoc. Prof., 57 • Implementation – A lock count track two states of a mutex and the number of times it has been recursively locked – A mutex might maintain two counts • A binary value to track its state • A separate lock count to track the number of times it has been acquired in the lock state by the task owns it
  • 58. Recursive Locking (3/3) B Prasad, Assoc. Prof., 58 • Comparison – The count used for the mutex • Track the number of times that the task owning the mutex has locked or unlocked the mutex • Always unbounded, which allows multiple recursive accesses – The count used for the counting semaphore • Track the number of tokens that have been acquired or released by any task
  • 59. Task Deletion Safety B Prasad, Assoc. Prof., 59 • Premature task deletion is avoided • While a task owns the mutex with built-in task deletion safety capability, the task cannot be deleted
  • 60. Priority Inversion Avoidance B Prasad, Assoc. Prof., 60 • When a higher priority task is blocked and is waiting for a resource being used by a lower priority task, which has itself been preempted by an unrelated medium-priority task • Priority inversion avoidance protocols – Priority inheritance protocol – Ceiling priority protocol
  • 61. Priority Inversion Example B Prasad, Assoc. Prof., 61 • Priority inversion – a situation in which a low-priority task executes while a higher priority task waits on it due to resource contentions
  • 62. Unbounded Priority Inversion Example B Prasad, Assoc. Prof., 62
  • 63. Priority Inheritance Protocol Example B Prasad, Assoc. Prof., 63
  • 64. Ceiling Priority Protocol Example • When acquiring the mutex, the task’ priority is automatically set to the highest priority of all possible tasks that might request it until it is released B Prasad, Assoc. Prof., 64
  • 65. 6.3 Typical Semaphore Operations B Prasad, Assoc. Prof., 65 • Creating and deleting semaphores • Acquiring and releasing semaphores • Clearing a semaphore’s task-waiting list • Getting semaphore information
  • 66. 6.3.1 Creating and Deleting Semaphores (1/2) B Prasad, Assoc. Prof., 66 • Different calls might be used for creating binary, counting, and mutex semaphores – Binary: specify the initial semaphore state and the task-waiting order – Counting: specify the initial semaphore count and the task-waiting order – Mutex: specify the task-waiting order and enable task deletion safety, recursion, and priority-inversion avoidance protocols
  • 67. Creating and Deleting Semaphores (2/2) • When a semaphore is deleted – Blocked tasks in its task-waiting list are unblocked and moved to the ready or running states • Do not delete a semaphore while it is in use B Prasad, Assoc. Prof., 67
  • 68. 6.3.2 Acquiring and Releasing Semaphores • Tasks acquires a semaphore can – Wait forever – Wait with a timeout – Do not wait • Any task can release a binary or counting semaphore – However, a mutex can only be released (unlocked) by the task that first acquired (locked) B Prasad, Assoc. Prof., 68
  • 69. 6.3.3 Clearing Semaphore Task- Waiting Lists • Clear all tasks waiting on a semaphore task- waiting list • Useful for thread rendezvous – When multiple tasks’ executions need to meet at some point in time to synchronize execution control B Prasad, Assoc. Prof., 69
  • 70. 6.3.4 Getting Semaphore Information • Useful for performing monitoring or debugging • Should be used judiciously – The semaphore information might be dynamic at the time it is requested B Prasad, Assoc. Prof., 70
  • 71. 6.4 Typical Semaphore Use B Prasad, Assoc. Prof., 71 • Semaphore useful for – Synchronize execution of multiple tasks – Coordinate access to a shared resource • Examples – Wait-and-signal synchronization – Multiple-task wait-and-signal synchronization – Credit-tracking synchronization – Single shared-resource-access synchronization – Recursive shared-resource-access synchronization – Multiple shared-resource-access synchronization
  • 72. 6.4.1 Wait-and-Signal Synchronization (1/4) B Prasad, Assoc. Prof., 72 • Two tasks can communicate for the purpose of synchronization without exchanging data. • Example: a binary semaphore can be used between two tasks to coordinate the transfer of execution control – Binary semaphore is initially unavailable – tWaitTask has higher priority and runs first • Acquire the semaphore but blocked – tSignTask has a chance to run • Release semaphore and thus unlock tWaitTask
  • 73. Wait-and-Signal Synchronization Between Two Tasks (2/4) B Prasad, Assoc. Prof., 73
  • 74. Wait-and-Signal Synchronization (3/4) B Prasad, Assoc. Prof., 74 • To coordinate the synchronization of more than two tasks – Use the flush operation on the task-waiting list of a binary semaphore • Example – Binary semaphore is initially unavailable – tWaitTask has higher priority and runs first • Acquire the semaphore but blocked – tSignTask has a chance to run • Invoke a flush operation and thus unlock the three tWaitTask
  • 75. Wait-and-Signal Synchronization (4/4) B Prasad, Assoc. Prof., 75 tWaitTask () { : Do some processing specific to task Acquire binary semaphore token : } tSignalTask () { : Do some processing Flush binary semaphore's task-waiting list : }
  • 76. 6.4.3 Credit-Tracking Synchronization (1/4) B Prasad, Assoc. Prof., 76 • The rate at which the signaling task executes is higher than that of the signaled task – Need to count each signaling occurrence – By means of the counting semaphore
  • 77. Credit-Tracking Synchronization Between Two Tasks (2/4) B Prasad, Assoc. Prof., 77
  • 78. Credit-Tracking Synchronization (3/4) B Prasad, Assoc. Prof., 78 • Counting semaphore’s count is initially 0 • tSignalTask has higher priority than tWaitTask – tSignalTask continues to run until it relinquishes the CPU by making a blocking system call or delaying itself – Thus, tSignalTask might increment the counting semaphore multiple times before tWaitTask task running • The counting semaphore allows a credit buildup of the number of times that the tWaitTask can execute before the semaphore becomes available
  • 79. Credit-Tracking Synchronization (4/4) B Prasad, Assoc. Prof., 79 • Credit-tracking mechanism is useful if tSignalTask releases semaphore in bursts – Giving tWaitTask the chance to catch up every once in a while • Useful in ISR handling to signal task waiting on a semaphore
  • 80. 6.4.4 Single Shared-Resource- Access Synchronization (1/3) B Prasad, Assoc. Prof., 80 • Provide for mutually exclusive access to a shared resource • A shared resource may be a – Memory location – A data structure – An I/O devices • Example – Use a binary semaphore to protect the shared resource
  • 82. Single Shared-Resource-Access Synchronization (3/3) B Prasad, Assoc. Prof., 82 • Dangers – Any task can accidentally release the binary semaphore, even never acquired the semaphore first • Both tasks can access the shared resource at the same time • Solutions – Use a mutex semaphore instead • Support the concept of ownership
  • 83. 6.4.5 Recursive Shared-Resource- Access Synchronization (1/2) B Prasad, Assoc. Prof., 83 • In some cases, a task must be able to access a shared resource recursively • Example – tAccessTask calls Routine A that calls Routine B – All three need access to the same shared resource – If semaphore is used • tAccessTask would end up blocking and causing a deadlock – Solution • Use a recursive mutex
  • 84. Recursive Shared-Resource- Access Synchronization (2/2) B Prasad, Assoc. Prof., 84
  • 85. Pseudo Code for Recursively Accessing a Shared Resource B Prasad, Assoc. Prof., 85 tAccessTask () { : Acquire mutex Access shared resource Call Routine A Release mutex : } Routine A () { : Acquire mutex Access shared resource Call Routine B Release mutex : } Routine B () { : Acquire mutex Access shared resource Release mutex : }
  • 86. 6.4.6 Multiple Shared-Resource- Access Synchronization (1/2) B Prasad, Assoc. Prof., 86 • If multiple equivalent shared resources are used – May use a counting semaphore that is initially set to the number of equivalent shared resource • However, as with the binary semaphore – May cause problem if a task releases a semaphore that it did not originally acquire • Solution – A separate mutex can be assigned for each shared resource – Acquire the first mutex in a non-blocking way – If unsuccessful, acquire the second mutex in a blocking- way
• 87. Multiple Shared-Resource-Access Synchronization (2/2)
• 88. Pseudo Code: Use the Counting Semaphore
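The slide's pseudocode is not reproduced in this dump, so here is a minimal sketch of the counting-semaphore approach, with Python's `threading.Semaphore` and a queue of resource IDs standing in for the kernel objects (all names are illustrative):

```python
import threading
import queue

# Two equivalent shared resources, tracked by ID.
resources = queue.Queue()
for rid in ("resA", "resB"):
    resources.put(rid)

# Counting semaphore initially set to the number of equivalent resources.
counting_sem = threading.Semaphore(2)

def use_resource(log):
    counting_sem.acquire()   # wait until some resource is free
    rid = resources.get()    # take whichever one is available
    log.append(rid)          # ... use the resource ...
    resources.put(rid)       # return it
    counting_sem.release()   # signal that a resource is free again

log = []
tasks = [threading.Thread(target=use_resource, args=(log,)) for _ in range(4)]
for t in tasks:
    t.start()
for t in tasks:
    t.join()
print(len(log))  # -> 4
```

The semaphore count guarantees that at most two tasks hold resources at once, but nothing stops a buggy task from calling `release()` without a matching `acquire()`, which is the danger the slides point out.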
• 89. Pseudo Code: Use the Mutex
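This slide's code is likewise missing from the dump; the per-resource-mutex pattern described on slide 86 can be sketched as follows, with Python's `threading.Lock` standing in for a mutex (Python's `Lock` does not enforce ownership the way a true mutex does; the point here is the try-first, then-block acquisition order):

```python
import threading

# One mutex per equivalent shared resource.
mutex_a = threading.Lock()
mutex_b = threading.Lock()

def acquire_either():
    # Try the first resource's mutex without blocking...
    if mutex_a.acquire(blocking=False):
        return "A", mutex_a
    # ...otherwise block until the second resource is free.
    mutex_b.acquire()
    return "B", mutex_b

name1, m1 = acquire_either()   # first resource is free
name2, m2 = acquire_either()   # first is busy, so this takes the second
print(name1, name2)  # -> A B
m1.release()
m2.release()
```

Because each mutex protects exactly one resource, a task can only release a mutex it explicitly holds, avoiding the counting semaphore's stray-release problem.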
• 90. Chapter – 7 Message Queues
• 91. Chapter – 7 Message Queues
7.1 Introduction
7.2 Defining Message Queues
7.3 Message Queue States
7.4 Message Queue Content
7.5 Message Queue Storage
7.5.1 System Pools
7.5.2 Private Buffers
7.6 Typical Message Queue Operations
7.6.1 Creating and Deleting Message Queues
7.6.2 Sending and Receiving Messages
7.6.3 Obtaining Message Queue Information
7.7 Typical Message Queue Use
7.7.1 Non-Interlocked, One-Way Data Communication
7.7.2 Interlocked, One-Way Data Communication
7.7.3 Interlocked, Two-Way Data Communication
7.7.4 Broadcast Communication
• 92. 7.1 Message Queues Introduction
• To facilitate inter-task data communication, kernels provide
– a message queue object and message queue management services
• A message queue is
– a buffer-like object through which tasks and ISRs send and receive messages to communicate and synchronize on data
• The message queue itself consists of a number of elements, each of which can hold a single message.
• 93. 7.2 Defining Message Queues
• When a message queue is first created, it is assigned
– a queue control block (QCB)
– a message queue name
– a unique ID
– memory buffers
– a queue length
– a maximum message length
– task-waiting lists
• The kernel uses two developer-supplied parameters to determine how much memory the message queue requires:
– the queue length and the maximum message length
• 94. A message queue, its associated parameters, and supporting data structures
• 95. 7.3 Message Queue States (1/2)
• The state diagram for a message queue:
• 96. Message Queue States (2/2)
• When a task attempts to send a message to a full message queue, a kernel may implement the send in one of two ways:
– the sending function returns an error code to that task
– the sending task is blocked and moved into the sending task-waiting list
• 97. 7.4 Message Queue Content (1/3)
• Message queues can be used to send and receive a variety of data. Some examples:
– a temperature value from a sensor
– a bitmap to draw on a display
– a text message to print to an LCD
– a keyboard event
– a data packet to send over the network
• 98. Message Queue Content (2/3)
• When a task sends a message to another task, the message is normally copied twice:
– from the sender’s memory area to the message queue’s memory area
– from the message queue’s memory area to the receiver’s memory area
• Copying data can be expensive in terms of performance and memory requirements
• 99. Message copying and memory use for sending and receiving messages
• 100. Message Queue Content (3/3)
• Keep copying to a minimum in a real-time embedded system:
– by keeping messages small
– by sending a pointer instead
• Sending a pointer to the data, rather than the data itself
– overcomes the limit on message length
– improves both performance and memory utilization
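The pointer technique can be illustrated with Python's `queue.Queue`, which always enqueues object references rather than copies, mirroring what an RTOS task achieves by sending a `void *` to a buffer (the names here are illustrative):

```python
import queue

# Send a "pointer" (an object reference) instead of copying the payload.
big_buffer = bytearray(1_000_000)   # large data we do not want to copy
mq = queue.Queue()

mq.put(big_buffer)                  # only the reference is enqueued
received = mq.get()

# The receiver sees the very same buffer: no copy was made.
print(received is big_buffer)  # -> True
```

In C the equivalent is enqueuing a pointer to a shared buffer, which also means sender and receiver must agree on who owns and frees that buffer.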
• 101. 7.5 Message Queue Storage (1/2)
• Message queues may be stored in a system pool or in private buffers
• System pool
– the messages of all queues are stored in one large shared area of memory
– Advantage: saves on memory use
– Downside: a message queue with large messages can easily use most of the pooled memory
• 102. Message Queue Storage (2/2)
• Private buffers
– separate memory areas for each message queue
– Downside: uses more memory
• requires enough reserved memory for the full capacity of every message queue that will be created
– Advantage: better reliability
• ensures that messages do not get overwritten and that room is available for all messages
• 103. 7.6 Typical Message Queue Operations
• creating and deleting message queues
• sending and receiving messages
• obtaining message queue information
• 104. 7.6.1 Creating and Deleting Message Queues
• When created, message queues are treated as global objects and are not owned by any particular task.
• When creating a message queue, a developer needs to decide
– the message queue length
– the maximum message size
– the order in which blocked tasks wait

Operation – Description
Create – Creates a message queue
Delete – Deletes a message queue
• 105. 7.6.2 Sending and Receiving Messages

Operation – Description
Send – Sends a message to a message queue
Receive – Receives a message from a message queue
Broadcast – Broadcasts messages
• 106. Sending messages in FIFO or LIFO order
• 107. Sending Messages
• Tasks can send messages with different blocking policies:
– do not block (ISRs and tasks)
• If the message queue is already full, the send call returns with an error; the sender does not block
– block with a timeout (tasks only)
– block forever (tasks only)
• A blocked task is placed in the message queue’s task-waiting list
– in FIFO or priority-based order
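The three sending policies map cleanly onto the `block` and `timeout` parameters of Python's `queue.Queue.put` (a plain `put(msg)` with no arguments is the block-forever case; the variable names are illustrative):

```python
import queue

mq = queue.Queue(maxsize=1)   # a full queue forces a policy decision
mq.put("first")               # the queue is now full

# Policy 1: do not block (what an ISR must use) -> error return.
try:
    mq.put("second", block=False)
    outcome = "sent"
except queue.Full:
    outcome = "error returned"

# Policy 2: block with a timeout (tasks only); nothing drains the
# queue here, so the send times out.
try:
    mq.put("third", timeout=0.05)
    outcome_timeout = "sent"
except queue.Full:
    outcome_timeout = "timed out"

print(outcome, outcome_timeout)  # -> error returned timed out
```

Policy 3 would be `mq.put("fourth")`, which suspends the caller indefinitely until a consumer frees a slot, which is why it is never an option for an ISR.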
• 108. FIFO and priority-based task-waiting lists
• 109. Receiving Messages (1/2)
• Tasks can receive messages with different blocking policies:
– not blocking
– blocking with a timeout
– blocking forever
• When the message queue is empty, a blocked task is placed in the message queue’s task-waiting list
– in FIFO or priority-based order
• 110. FIFO and priority-based task-waiting lists
• 111. Receiving Messages (2/2)
• Messages can be read from the head of a message queue in two different ways:
– destructive read
• removes the message from the message queue’s storage buffer after it is successfully read
– non-destructive read
• reads the message without removing it
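The two read styles can be sketched with a `collections.deque` as the queue's storage buffer: indexing the head is a non-destructive read ("peek"), while `popleft()` is a destructive read (names are illustrative):

```python
from collections import deque

mq = deque(["msg1", "msg2"])   # storage buffer with two queued messages

# Non-destructive read: inspect the head without removing it.
peeked = mq[0]

# Destructive read: remove the head message after reading it.
popped = mq.popleft()

print(peeked, popped, len(mq))  # -> msg1 msg1 1
```

After the destructive read, only `"msg2"` remains queued; the earlier non-destructive read left the queue untouched.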
• 112. 7.6.3 Obtaining Message Queue Information
• Obtain information about a message queue:
– the message queue ID, the task-waiting list queuing order (FIFO or priority-based), and the number of messages queued

Operation – Description
Show queue info – Gets information on a message queue
Show queue’s task-waiting list – Gets a list of tasks in the queue’s task-waiting list
• 113. 7.7 Typical Message Queue Use
• Typical ways to use message queues within an application:
– non-interlocked, one-way data communication
– interlocked, one-way data communication
– interlocked, two-way data communication
– broadcast communication
• 114. 7.7.1 Non-Interlocked, One-Way Data Communication (1/3)
• Non-interlocked (or loosely coupled), one-way data communication:
– The activities of tSourceTask and tSinkTask are not synchronized.
• tSourceTask simply sends a message and does not require acknowledgement from tSinkTask.
• 115. Non-Interlocked, One-Way Data Communication (2/3)

tSourceTask ()
{
   :
   Send message to message queue
   :
}

tSinkTask ()
{
   :
   Receive message from message queue
   :
}
• 116. Non-Interlocked, One-Way Data Communication (3/3)
• ISRs typically use non-interlocked, one-way communication.
– A task such as tSinkTask runs and waits on the message queue.
– When the hardware triggers an ISR to run, the ISR puts one or more messages into the message queue for tSinkTask.
• ISRs send messages to the message queue in a non-blocking way.
– If the message queue becomes full, any additional messages that the ISR sends to the message queue are lost.
• 117. 7.7.2 Interlocked, One-Way Data Communication (1/2)
• Interlocked communication
– the sending task sends a message and waits to see that the message is received
– useful for reliable communication or task synchronization
• Example
– a binary semaphore initially set to 0 and a message queue with a length of 1 (also called a mailbox)
– the sender tSourceTask and the receiver tSinkTask operate in lockstep with each other
• 118. Interlocked, One-Way Data Communication (2/2)

tSourceTask ()
{
   :
   Send message to message queue
   Acquire binary semaphore
   :
}

tSinkTask ()
{
   :
   Receive message from message queue
   Give binary semaphore
   :
}
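The lockstep pattern above can be run with Python threads: a length-1 `Queue` as the mailbox and a `Semaphore(0)` as the binary semaphore (thread and variable names mirror the slide but are otherwise illustrative):

```python
import threading
import queue

mailbox = queue.Queue(maxsize=1)   # message queue with a length of 1
ack = threading.Semaphore(0)       # binary semaphore, initially 0
trace = []

def t_sink_task():
    msg = mailbox.get()            # receive message from message queue
    trace.append(("sink", msg))
    ack.release()                  # give binary semaphore

def t_source_task():
    mailbox.put("hello")           # send message to message queue
    ack.acquire()                  # block until the sink has consumed it
    trace.append(("source", "acknowledged"))

sink = threading.Thread(target=t_sink_task)
sink.start()
t_source_task()
sink.join()
print(trace)  # -> [('sink', 'hello'), ('source', 'acknowledged')]
```

Because the source cannot pass `ack.acquire()` until the sink has received the message and released the semaphore, the two tasks proceed strictly in lockstep.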
• 119. 7.7.3 Interlocked, Two-Way Data Communication (1/2)
• Interlocked, two-way data communication (also called full-duplex or tightly coupled communication)
– data flows bidirectionally between tasks
– useful when designing a client/server-based system
– two separate message queues are required
• If multiple clients need to be set up
– all clients can use the same client message queue to post requests
– tServerTask uses a separate message queue to fulfill the different clients’ requests
• 120. Interlocked, Two-Way Data Communication (2/2)

tClientTask ()
{
   :
   Send a message to the requests queue
   Wait for message from the server queue
   :
}

tServerTask ()
{
   :
   Receive a message from the requests queue
   Send a message to the client queue
   :
}
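A minimal runnable version of this client/server exchange uses two `queue.Queue` objects, one per direction (the names `requests`, `replies`, and the task functions are illustrative):

```python
import threading
import queue

requests = queue.Queue()   # client -> server
replies = queue.Queue()    # server -> client

def t_server_task():
    req = requests.get()                # receive a message from the requests queue
    replies.put(f"reply to {req}")      # send a message to the client queue

server = threading.Thread(target=t_server_task)
server.start()

requests.put("request 1")               # client posts its request...
answer = replies.get()                  # ...and blocks until the reply arrives
server.join()
print(answer)  # -> reply to request 1
```

Two queues are needed because a single queue cannot distinguish a request from a reply; with several clients, each client would typically block on its own reply queue.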
• 121. 7.7.4 Broadcast Communication (1/3)
• Allows developers to broadcast a copy of the same message to multiple tasks
• Message broadcasting is a one-to-many-task relationship.
– tBroadcastTask sends the message on which multiple tSinkTask instances are waiting.
• 122. Broadcast Communication (2/3)
• 123. Broadcast Communication (3/3)

tBroadcastTask ()
{
   :
   Send broadcast message to queue
   :
}

tSignalTask ()
{
   :
   Receive message on queue
   :
}

Note: similar code for tSignalTasks 1, 2, and 3.
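Python's `queue.Queue` delivers each message to only one consumer, so a broadcast can be sketched by giving each waiting task its own queue and having the broadcaster put a copy of the message on every one (this emulation, and all names in it, are illustrative rather than the kernel's broadcast service):

```python
import threading
import queue

# One queue per receiving task; broadcasting = one send per queue.
sink_queues = [queue.Queue() for _ in range(3)]
received = []
lock = threading.Lock()

def t_signal_task(q):
    msg = q.get()              # each sink waits on its own queue
    with lock:
        received.append(msg)

def t_broadcast_task(msg):
    for q in sink_queues:      # copy of the same message to every task
        q.put(msg)

sinks = [threading.Thread(target=t_signal_task, args=(q,))
         for q in sink_queues]
for t in sinks:
    t.start()
t_broadcast_task("go")
for t in sinks:
    t.join()
print(received)  # -> ['go', 'go', 'go']
```

A kernel with a true broadcast call does this in one operation, waking every task blocked on the single queue with its own copy of the message.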