RTS IMPORTANT POINTS
Real-time systems include digital control, command and control, signal processing, and telecommunication
systems.
Unlike PCs and workstations that run nonreal-time applications such as text editors and web browsers, the
computers and networks that run real-time applications are often hidden from our view.
Real-time applications: digital control, optimal control, command and control, signal processing, tracking,
real-time databases, and multimedia.
Many real-time systems are embedded in sensors and actuators and function as digital controllers. Figure 1–
1 shows such a system.
The state of the plant is monitored by sensors and can be changed by actuators. The real-time (computing)
system estimates from the sensor readings the current state of the plant and computes a control output
based on the difference between the current state and the desired state (called reference input in the
figure). We call this computation the control-law computation of the controller. The output thus generated
activates the actuators, which bring the plant closer to the desired state.
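The loop described above can be sketched in a few lines. This is a minimal illustration only: the first-order plant model, the proportional control law, and the gain value are assumptions made for the example, not taken from the text.

```python
# Minimal sketch of a digital controller's sampling loop. The plant model
# and the proportional control law are illustrative assumptions.
def control_loop(reference, y0=0.0, gain=0.5, steps=20):
    y = y0                     # current plant state, as estimated from sensors
    for _ in range(steps):     # one iteration per sampling period
        error = reference - y  # difference between desired and current state
        u = gain * error       # control-law computation
        y = y + u              # actuator output moves the plant toward the reference
    return y
```

With each sampling period, the state moves a fraction of the remaining error toward the reference input, so the plant converges to the desired state.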
One consideration in choosing the sampling period is the perceived responsiveness of the overall system (i.e., the plant and the controller).
The high sampling rate makes it possible to keep the control input small and the control-law computation
and digital-to-analog conversion of the controller simple.
A plant typically has more than one degree of freedom. Its state is defined by multiple state variables (e.g.,
the rotation speed, temperature, etc. of an engine or the tension and position of a video tape). Therefore, it
is monitored by multiple sensors and controlled by multiple actuators. We can think of a multivariate (i.e.,
multi-input/multi-output) controller for such a plant as a system of single-output controllers.
For example, a multirate flight controller controls only flight dynamics.
The control laws of each multirate controller may have harmonic periods.
Each control-law computation can begin shortly after the beginning of each sampling period when the most
recent sensor data become available.
The controller at the highest level of a control hierarchy is a command and control system.
Examples are digital filtering, video and voice compressing/decompression, and radar signal processing.
The number of outputs may also be of order n, and the complexity of the computation is O(n²).
A real-time database contains data objects, called image objects, that represent real-world objects.
A set of data objects is said to be absolutely (temporally) consistent if the maximum age of the objects in the
set is no greater than a certain threshold.
A fighter jet and the targets it tracks move at supersonic speeds. Hence the information on where they are
must be less than 50 milliseconds old. On the other hand, an air traffic control system monitors commercial
aircraft at subsonic speeds; this is why the absolute temporal consistency threshold for air traffic control is
much larger.
A set of data objects is said to be relatively consistent if the maximum difference in ages of the objects in the
set is no greater than the relative consistency threshold used by the application.
An example is a planning system that correlates traffic densities along a highway with the flow rates of
vehicles entering and exiting the highway. The system does not require the most up-to-date flow rates at all
interchanges and hence can tolerate a relatively large age (e.g., two minutes). However, if the difference in
the ages of flow rates is large (e.g., one minute), the flow rates no longer give a valid snapshot of the traffic
scenario and can lead the system to wrong conclusions.
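Both consistency conditions can be stated as one-line checks. The sketch below assumes ages and thresholds are given in the same time unit; the function names are my own.

```python
def absolutely_consistent(ages, threshold):
    # Absolute temporal consistency: the maximum age of the objects
    # in the set is no greater than the threshold.
    return max(ages) <= threshold

def relatively_consistent(ages, rel_threshold):
    # Relative consistency: the maximum difference in ages is no
    # greater than the relative consistency threshold.
    return max(ages) - min(ages) <= rel_threshold
```

In the highway example, flow-rate ages of two minutes may pass the absolute check while a one-minute spread between them fails the relative check.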
Purely cyclic: Every task in a purely cyclic application executes periodically. Even I/O operations are polled.
Moreover, its demands on (computing, communication, and storage) resources do not vary significantly from
period to period.
Mostly cyclic: Most tasks in a mostly cyclic system execute periodically. The system must also respond to
some external events (fault recovery and external commands) asynchronously. Examples are modern
avionics.
Asynchronous and somewhat predictable: In applications such as multimedia communication, radar signal
processing, and tracking, most tasks are not periodic. The duration between consecutive executions of a task
may vary considerably.
Asynchronous and unpredictable: Applications that react to asynchronous events and have tasks with high
run-time complexity belong to this type. An example is intelligent real-time control systems.
The release time of a job is the instant of time at which the job becomes available for execution.
The deadline of a job is the instant of time by which its execution is required to be completed.
Response time, that is, the length of time from the release time of the job to the instant when it completes.
We call the maximum allowable response time of a job its relative deadline.
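These definitions translate directly into arithmetic on time instants; a small sketch (function names are my own):

```python
def response_time(release, completion):
    # Response time: length of time from the release of the job
    # to the instant when it completes.
    return completion - release

def meets_deadline(release, completion, relative_deadline):
    # The absolute deadline is the release time plus the relative deadline;
    # the job meets it if it completes no later than that instant.
    return completion <= release + relative_deadline
```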
A constraint imposed on the timing behavior of a job is called a timing constraint.
A timing constraint of a job can be specified in terms of its release time and relative or absolute deadlines.
A timing constraint or deadline is hard if the failure to meet it is considered to be a fatal fault. A hard
deadline is imposed on a job because a late result produced by the job after the deadline may have
disastrous consequences. (As examples, a late command to stop a train may cause a collision, and a bomb
dropped too late may hit a civilian population instead of the intended military target.)
The late completion of a job that has a soft deadline is undesirable. However, a few misses of soft deadlines
do no serious harm; only the system’s overall performance becomes poorer and poorer when more and
more jobs with soft deadlines complete late.
Similarly, each control-law computation job of a flight controller must be completed in time so that its
command can be issued in time. Otherwise, the plane controlled by it may become oscillatory (and the ride
bumpy) or even unstable and uncontrollable. For this reason, we want the timely completion of all control-
law computations guaranteed. Jobs in some nonembedded systems may also have hard deadlines. An
example is a critical information system that must be highly available: The system must never be down for
more than a minute. Because of this requirement, reconfiguration and recovery of database servers and
network connections in the system must complete within a few seconds or tens of seconds, and this relative
deadline is hard.
A system in which jobs have soft deadlines is a soft real-time system.
Examples of such systems include on-line transaction systems and telephone switches, as well as electronic
games.
A system may have critical timing requirements but nevertheless be considered a soft real-time system. An
example is a stock price quotation system. It should update the price of each stock each time the price changes.
The release times of sporadic and aperiodic jobs are random variables.
The execution time of a task is the maximum execution time of all the jobs in it.
A sporadic task is one whose interrelease times can be arbitrarily small.
The accuracy of the periodic task model decreases with increasing jitter in release times.
The total utilization U of all the tasks in the system is the sum of the utilizations of the individual tasks in it.
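Since the utilization of a periodic task is its execution time divided by its period, the total utilization is a one-line sum (the function name is my own):

```python
def total_utilization(tasks):
    # tasks: iterable of (execution_time, period) pairs.
    # The utilization of one task is e/p; U is the sum over all tasks.
    return sum(e / p for e, p in tasks)
```

For instance, two tasks with execution times 1 and periods 4 and 2 give U = 1/4 + 1/2 = 0.75.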
We say that a task is aperiodic if the jobs in it have either soft deadlines or no deadlines.
Tasks containing jobs that are released at random time instants and have hard deadlines are sporadic tasks.
The type of an edge connecting two vertices and the other parameters of the edge are interconnection
parameters of the jobs represented by the vertices.
Jobs with temporal distance constraints may or may not have deadlines.
An outgoing edge from every vertex expresses an AND constraint.
This system can easily be modeled by a task graph that has edges expressing OR constraints.
The subgraph that begins from a vertex representing a branch job and ends at the vertex representing the
associated join job is called a conditional block.
This interruption of job execution is called pre-emption.
In nonreal-time systems, such jobs are typically scheduled in a round-robin manner.
A job is nonpreemptable if it must be executed from start to completion without interruption.
jobs that model the transmissions of data frames in a token ring (or bus). If transmission of a frame is
interrupted before it completes, the partially transmitted frame is discarded by the receiver.
An interrupt handling job usually begins by saving the state of the processor (i.e., storing the processor
status register, the stack pointer, the program counter, and so on).
The amount of time required to accomplish a context switch is called a context-switch time.
A resource is nonpreemptable if each unit of the resource is constrained to be used serially.
In other words, once a unit of a nonpreemptable resource is allocated to a job, other jobs needing the unit
must wait until the job completes its use. Otherwise, if jobs can use every unit of a resource in an interleaved
fashion, the resource is preemptable.
While edges in task graphs represent different types of dependencies among jobs, edges in a resource graph
represent the relationships among resources.
The resource graph in a system containing two computers consists of two trees. The root of each tree
represents a computer.
Each of these subcomponents is represented by a vertex, and there is an edge from the computer vertex to
each subcomponent vertex.
Some edges in resource graphs represent connectivity between components. These edges are called
accessibility edges.
A schedule of the jobs is computed off-line and is stored for use at run time. The scheduler schedules the
jobs according to this schedule at each scheduling decision time. In this way, scheduling overhead during
run-time can be minimized.
The round-robin approach is commonly used for scheduling time-shared applications. When jobs are
scheduled on a round-robin basis, every job joins a First-in-first-out (FIFO) queue when it becomes ready for
execution.
A time slice is the basic granule of time that is allocated to jobs. In a timeshared environment, a time slice is
typically in the order of tens of milliseconds. If the job does not complete by the end of the time slice, it is
preempted and placed at the end of the queue to wait for its next turn. When there are n ready jobs in the
queue, each job gets one time slice every n time slices, that is, every round. Because the length of the time
slice is relatively short, the execution of every job begins almost immediately after it becomes ready.
The round-robin algorithm is also called the processor-sharing algorithm.
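The round-robin discipline described above can be sketched as a small simulation. The assumptions here are mine: all jobs are ready at time 0 and the time slice is one unit.

```python
from collections import deque

def round_robin(jobs, slice_len=1):
    # jobs: ordered mapping {name: execution_time}; all jobs ready at time 0.
    queue = deque(jobs)            # FIFO queue of ready jobs
    remaining = dict(jobs)
    t, completion = 0, {}
    while queue:
        job = queue.popleft()
        run = min(slice_len, remaining[job])
        t += run                   # job executes for one time slice (or less)
        remaining[job] -= run
        if remaining[job] == 0:
            completion[job] = t    # job completes and leaves the queue
        else:
            queue.append(job)      # preempted; placed at the end of the queue
    return completion
```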
The weighted round-robin algorithm has been used for scheduling real-time traffic in high-speed switched
networks.
Different jobs may be given different weights. Here, the weight of a job refers to the fraction of processor
time allocated to the job.
The length of a round is equal to the sum of the weights of all the ready jobs. By adjusting the weights of
jobs, we can speed up or retard the progress of each job toward its completion. By giving each job a fraction
of the processor, a round-robin scheduler delays the completion of every job. If it is used to schedule
precedence constrained jobs, the response time of a chain of jobs can be unduly large.
The weighted round-robin approach does not require a sorted priority queue.
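The weighted variant differs only in how many slices a job gets per round. A sketch, under the same assumptions as before (all jobs ready at time 0, unit time slices, and my own function name):

```python
from collections import deque

def weighted_round_robin(jobs):
    # jobs: {name: (execution_time, weight)}; a job's weight is the
    # number of time slices it receives per round.
    remaining = {n: e for n, (e, w) in jobs.items()}
    weight = {n: w for n, (e, w) in jobs.items()}
    queue = deque(jobs)
    t, completion = 0, {}
    while queue:
        job = queue.popleft()
        run = min(weight[job], remaining[job])  # up to `weight` slices this round
        t += run
        remaining[job] -= run
        if remaining[job] == 0:
            completion[job] = t
        else:
            queue.append(job)      # waits for its turn in the next round
    return completion
```

Raising a job's weight speeds its progress; lowering it retards the job, exactly as the text describes.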
The term priority-driven algorithms refers to a large class of scheduling algorithms that never leave any
resource idle.
Scheduling decisions are made when events such as releases and completions of jobs occur. Hence, priority-
driven algorithms are event-driven.
Other commonly used names for this approach are greedy scheduling, list scheduling and work-conserving
scheduling. A priority-driven algorithm is greedy because it tries to make locally optimal decisions.
When a processor or resource is available and some job can use it to make progress, such an algorithm never
makes the job wait.
A priority-driven algorithm can be implemented by assigning priorities to jobs. Jobs ready for execution are
placed in one or more queues ordered by the priorities of the jobs. At any scheduling decision time, the jobs
with the highest priorities are scheduled and executed on the available processors.
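A minimal single-processor sketch of this behavior follows. The encoding is my own: smaller priority numbers mean higher priority, and time advances in unit steps.

```python
def priority_driven(jobs):
    # jobs: list of (name, release_time, execution_time, priority);
    # a smaller priority number means a higher priority.
    remaining = {name: e for name, r, e, p in jobs}
    completion, t = {}, 0
    while remaining:
        ready = [(p, name) for name, r, e, p in jobs
                 if name in remaining and r <= t]
        if not ready:
            t += 1                 # no ready job: the processor idles
            continue
        _, job = min(ready)        # highest-priority ready job executes
        remaining[job] -= 1        # run for one time unit (preemptive)
        t += 1
        if remaining[job] == 0:
            completion[job] = t
            del remaining[job]
    return completion
```

Note the work-conserving property: the processor idles only when no job is ready.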
Most scheduling algorithms used in nonreal-time systems are priority-driven. Examples include the FIFO
(First-In-First-Out) and LIFO (Last-In-First-Out) algorithms.
Another example is SETF (Shortest-Execution-Time-First), which assigns priorities on the basis of job
execution times. Even round-robin scheduling can be viewed as priority-driven: the priority of the executing
job is lowered to the minimum among all jobs waiting for execution after the job has executed for a time slice.
When jobs have the same release time, preemptive scheduling is better than nonpreemptive scheduling.
When a processor is available, the job at the head of the queue executes on the processor. We will refer to
such a multiprocessor system as a dynamic system, because jobs are dynamically dispatched to processors.
In the example in Figure 4–2, we allowed each preempted job to resume on any processor and hence, jobs
are migratable. We say that a job migrates if it starts execution on a processor, is preempted, and later
resumes on a different processor.
An alternative approach in multiprocessor and distributed systems is to partition the jobs in the system into subsystems and assign
and bind the subsystems statically to the processors. Jobs are moved among processors only when the
system must be reconfigured, that is, when the operation mode of the system changes or some processor
fails. Such a system is called a static system, because the system is statically configured. If jobs on different
processors are dependent, the schedulers on the processors must synchronize.
The given release times and deadlines of jobs are sometimes inconsistent with the precedence constraints of
the jobs.
The release time of a job may be later than that of its successors, and its deadline may be earlier than that of its
predecessors.
The effective release time of a job without predecessors is equal to its given release time; the effective
release time of a job with predecessors is equal to the maximum of its given release time and the effective
release times of its predecessors.
The effective deadline of a job without successors is equal to its given deadline. The effective deadline of a
job with successors is equal to the minimum of its given deadline and the effective deadlines of its successors.
The effective release times of all the jobs can be computed in one pass through the precedence graph in
O(n²) time, where n is the number of jobs.
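The two recurrences can be sketched with memoized traversals of the precedence graph; the representation (dicts of given times and predecessor lists) is my own choice, and the graph is assumed acyclic.

```python
def effective_times(jobs, preds):
    # jobs:  {name: (given_release, given_deadline)}
    # preds: {name: list of immediate predecessors}  (acyclic graph)
    succs = {j: [] for j in jobs}
    for j, ps in preds.items():
        for p in ps:
            succs[p].append(j)

    eff_release, eff_deadline = {}, {}

    def rel(j):   # max of the job's own release time and its predecessors'
        if j not in eff_release:
            eff_release[j] = max([jobs[j][0]] +
                                 [rel(p) for p in preds.get(j, [])])
        return eff_release[j]

    def dl(j):    # min of the job's own deadline and its successors'
        if j not in eff_deadline:
            eff_deadline[j] = min([jobs[j][1]] +
                                  [dl(s) for s in succs[j]])
        return eff_deadline[j]

    for j in jobs:
        rel(j)
        dl(j)
    return eff_release, eff_deadline
```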
Serially reusable resources are typically granted (i.e., allocated) to jobs on a nonpreemptive basis and used in
a mutually exclusive manner. In other words, when a unit of a resource Ri is granted to a job, this unit is no
longer available to other jobs until the job frees the unit.
A lock-based concurrency control mechanism is used to enforce mutually exclusive accesses of jobs to
resources.
When a job wants to use ηi units of resource Ri , it executes a lock to request them. We denote this lock
request by L(Ri, ηi). The job continues its execution when it is granted the requested resource. When the job
no longer needs the resource, it releases the resource by executing an unlock, denoted by U(Ri, ηi). When a
resource Ri has only 1 unit, we use the simpler notations L(Ri) and U(Ri) for lock and unlock, respectively.
We call a segment of a job that begins at a lock and ends at a matching unlock a critical section.
As an example, Figure 8–1 shows three jobs, J1, J2, and J3, and the time instants when locks and unlocks are
executed if each job executes alone starting from time 0. Resources R1, R2, and R3 have only 1 unit each,
while resources R4 and R5 have many units. Job J3 has three overlapping critical sections that are properly
nested. A critical section that is not included in other critical sections is an outermost critical section; the
critical section delimited by L(R1) and U(R1) in J3 is an example. Other examples are the critical sections
delimited by L(R2) and U(R2) in J2, the second pair of L(R3) and U(R3) in J2 and L(R3) and U(R3) in J1.
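In a thread-based implementation, the L(R)/U(R) pair maps naturally onto acquiring and releasing a mutex. The sketch below is illustrative (the jobs are plain threads incrementing a shared counter), not taken from the text.

```python
import threading

counter = 0
R = threading.Lock()            # resource R with a single unit

def job(increments):
    global counter
    for _ in range(increments):
        with R:                 # L(R): lock request; blocks while R is held
            counter += 1        # critical section guarded by R
        # leaving the with-block is U(R): the unlock

threads = [threading.Thread(target=job, args=(1000,)) for _ in range(3)]
for th in threads:
    th.start()
for th in threads:
    th.join()
```

Because each increment is inside the critical section, the three jobs access the shared object in a mutually exclusive manner.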
The jobs contend for a resource when one job requests a resource that the other job already has.
The scheduler always denies a request if there are not enough free units of the resource to satisfy the
request. Sometimes, as we will see later, a scheduler may deny a request even when the requested resource
units are free in order to prevent some undesirable execution behavior.
When the scheduler does not grant ηi units of resource Ri to the job requesting them, the lock request L(Ri,
ηi) of the job fails (or is denied). When its lock request fails, the job is blocked and loses the processor. A
blocked job is removed from the ready job queue. It stays blocked until the scheduler grants it ηi units of Ri
for which the job is waiting. At that time, the job becomes unblocked, is moved back to the ready job
queue, and executes when it is scheduled.
The example in Figure 8–2 illustrates the effect of resource contentions. In this example, there are three
jobs, J1, J2, and J3, whose feasible intervals are (6, 14], (2, 17] and (0, 18], respectively.
J1 has the highest priority and J3 the lowest. All three jobs require the resource R, which has only 1 unit. In
particular, the critical sections in these jobs are [R; 2], [R; 4], and [R; 4], respectively.
At time 0, only J3 is ready. It executes.
At time 1, J3 is granted the resource R when it executes L(R).
J2 is released at time 2, preempts J3, and begins to execute.
At time 4, J2 tries to lock R. Because R is in use by J3, this lock request fails. J2 becomes blocked, and J3
regains the processor and begins to execute.
At time 6, J1 becomes ready, preempts J3 and begins to execute.
J1 executes until time 8 when it executes a L(R) to request R. J3 still has the resource. Consequently, J1
becomes blocked. Only J3 is ready for execution, and it again has the processor and executes.
The critical section of J3 completes at time 9. The resource R becomes free when J3 executes U(R). Both J1
and J2 are waiting for it. The priority of the former is higher. Therefore, the resource and the processor are
allocated to J1, allowing it to resume execution.
J1 releases the resource R at time 11. J2 is unblocked. Since J1 has the highest priority, it continues to
execute.
J1 completes at time 12. Since J2 is no longer blocked and has a higher priority than J3, it has the processor,
holds the resource, and begins to execute. When it completes at time 17, J3 resumes and executes until
completion at 18.
Priority inversion can occur when the execution of some jobs or portions of jobs is nonpreemptable.
When resources are allocated to jobs on a nonpreemptive basis, a higher-priority job can be blocked by a lower-
priority job if the jobs conflict, even when the execution of both jobs is preemptable. In the example in
Figure 8–2, the lowest priority job J3 first blocks J2 and then blocks J1 while it holds the resource R. As a
result, priority inversion occurs in intervals (4, 6] and (8, 9].
Suppose that the execution time of the critical section in J3 is shortened by 1.5.
The example in Figure 8–4 illustrates that a priority inversion can last arbitrarily long. Here, jobs J1 and J3 have the highest priority and lowest
priority, respectively. At time 0, J3 becomes ready and executes. It acquires the resource R shortly
afterwards and continues to execute. After R is allocated to J3, J1 becomes ready. It preempts J3 and
executes until it requests resource R at time 3. Because the resource is in use, J1 becomes blocked, and a
priority inversion begins. While J3 is holding the resource and executes, a job J2 with a priority higher than J3
but lower than J1 is released. Moreover, J2 does not require the resource R. This job preempts J3 and
executes to completion. Thus, J2 lengthens the duration of this priority inversion. In this situation, the
priority inversion is said to be uncontrolled.
According to this protocol, when a job requests a resource, it is always allocated the resource, and while a
job holds any resource, it executes at a priority higher than the priorities of all jobs. This protocol is called
the Nonpreemptive Critical Section (NPCS) protocol.
Consider the schedule of these jobs when their critical sections are scheduled nonpreemptively on the processor.
According to this schedule, J1 is forced to wait for J3 when J3 holds the resource. However, as soon as J3
releases the resource, J1 becomes unblocked and executes to completion. Because J1 is not delayed by J2, it
completes at time 10.
Uncontrolled priority inversion illustrated by Figure 8–4 can never occur. The reason is that a job Jh can be
blocked only if it is released when some lower-priority job is in a critical section. If it is blocked, once the
blocking critical section completes, all resources are free. No lower-priority job can get the processor and
acquire any resource until Jh completes. Hence, Jh can be blocked only once, and its blocking time due to
resource conflict is at most equal to the maximum execution time of the critical sections of all lower-priority
jobs.
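This blocking bound is simple to compute. A sketch (my own encoding: smaller priority numbers mean higher priority):

```python
def npcs_blocking_time(job_priority, critical_sections):
    # critical_sections: (priority, duration) pairs, one per critical
    # section; a smaller priority number means a higher priority.
    lower = [d for p, d in critical_sections if p > job_priority]
    return max(lower, default=0)   # blocked at most once, for at most this long
```

Under NPCS a job is blocked at most once, so this maximum is the worst-case blocking time due to resource conflict.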
The most important advantage of the NPCS protocol is its simplicity, especially when the numbers of
resource units are arbitrary. The protocol does not need any prior knowledge about resource requirements
of jobs. It is simple to implement and can be used in both fixed-priority and dynamic-priority systems.
When the resource requirements of all jobs are known, an improvement is to let a job holding any resource
execute at the highest priority of all jobs requiring the resource. This is indeed how the ceiling-priority
protocol supported by some real-time operating systems works.
It works with any preemptive, priority-driven scheduling algorithm. Like the NPCS protocol, it does not
require prior knowledge on resource requirements of jobs. The priority-inheritance protocol does not
prevent deadlock. When there is no deadlock (i.e., when some other method is used to prevent deadlock),
the protocol ensures that no job is ever blocked for an indefinitely long time because uncontrolled priority
inversion cannot occur.
In particular, the current priority πl(t) of a job Jl may be raised to the higher priority πh (t) of another job Jh .
When this happens, we say that the lower-priority job Jl inherits the priority of the higher priority job Jh and
that Jl executes at its inherited priority πh (t).
The priority-inheritance protocol is defined by the following rules.
Scheduling Rule: Ready jobs are scheduled on the processor preemptively in a priority-driven manner
according to their current priorities. At its release time t, the current priority π(t) of every job J is equal to its
assigned priority. The job remains at this priority except under the condition stated in rule 3.
Allocation Rule: When a job J requests a resource R at time t, (a) if R is free, R is allocated to J until J releases
the resource, and (b) if R is not free, the request is denied and J is blocked.
Priority-Inheritance Rule: When the requesting job J becomes blocked, the job Jl which blocks J inherits the
current priority π(t) of J. The job Jl executes at its inherited priority π(t) until it releases R; at that time, the
priority of Jl returns to the priority πl(t′) that it had at the time t′ when it acquired the resource R.
Figure 8–7(b) illustrates how priority inheritance affects the way jobs are scheduled and executed. The three
jobs in this figure are the same as the ones in Figure 8–7(a). When J1 requests resource R and becomes
blocked by J3 at time 3, job J3 inherits the priority π1 of J1. When job J2 becomes ready at 5, it cannot
preempt J3 because its priority π2 is lower than the inherited priority π1 of J3. As a consequence, J3
completes its critical section as soon as possible.
The priority-inheritance protocol does not prevent deadlock.
In addition, the priority-inheritance protocol does not reduce the blocking times suffered by jobs as small as
possible.
In the worst case, a job that requires v resources and conflicts with k lower-priority jobs can be blocked for
min(v, k) times.
The priority-ceiling protocol extends the priority-inheritance protocol to prevent deadlocks and to further
reduce the blocking time. This protocol makes two key assumptions: 1. The assigned priorities of all jobs are
fixed. 2. The resources required by all jobs are known a priori before the execution of any job begins.
The priority ceiling of any resource Ri is the highest priority of all the jobs that require Ri and is denoted by
Π(Ri).
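Since the required resources are known a priori, the ceilings can be computed once before execution begins. A sketch (my own encoding: smaller priority numbers mean higher priority, so the ceiling is the minimum number):

```python
def priority_ceilings(requirements):
    # requirements: {job_priority: set of resources the job requires};
    # a smaller number means a higher priority, so the ceiling of a
    # resource is the minimum priority number among jobs requiring it.
    ceilings = {}
    for prio, resources in requirements.items():
        for r in resources:
            ceilings[r] = min(ceilings.get(r, prio), prio)
    return ceilings
```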
A fundamental difference between the priority-inheritance and priority-ceiling protocols is that the former is
greedy while the latter is not. The priority-inheritance protocol lets the requesting job have a resource
whenever the resource is free. In contrast, according to the allocation rule of the priority-ceiling protocol, a
job may be denied its requested resource even when the resource is free at the time.
The priority-inheritance rules of these two protocols are essentially the same.
It is possible for J to be blocked by a lower-priority job which does not hold the requested resource
according to the priority-ceiling protocol, while this is impossible according to the priority-inheritance
protocol.
Priority-ceiling blocking is sometimes referred to as avoidance blocking. The reason for this term is that the
blocking caused by the priority-ceiling rule is the cost for avoidance of deadlocks among jobs.
In systems where the number of jobs is large, it may be necessary for the jobs to share a common run-time
stack, in order to reduce overall memory demand.
When a job J executes, its stack space is on the top of the stack. The space is freed when the job completes.
When J is preempted, the preempting job has the stack space above J ’s. J can resume execution only after
all the jobs holding stack space above its space complete, free their stack spaces, and leave J ’s stack space
on the top of the stack again.
To ensure deadlock-free sharing of the run-time stack among jobs, we must ensure that no job is ever
blocked because it is denied some resource once its execution begins. This observation leads to the following
modified version of the priority-ceiling protocol, called the stack-based, priority-ceiling protocol.
The schedule in Figure 8–17 shows how the system of jobs in Figure 8–10 would be scheduled if the stack-
based, priority-ceiling protocol were used instead of the basic priority-ceiling protocol. To better illustrate the
stack-based protocol, we let J2 be released at 4.8 and the execution time of the critical section of J2 be 1.2.
At time 2 when J4 is released, it is blocked from starting because its priority is not higher than the ceiling of
the system, which is equal to 2 at the time. This allows J5 to continue execution. For the same reason, J3
does not start execution when it is released. When J2 is released at time 4.8, it cannot start execution
because the ceiling of the system is 2. At time 5, the resource held by J5 becomes free and the ceiling of the
system drops below the priority of J2. Consequently, J2 starts to execute since it has the highest priority among all the jobs ready at
the time. As expected, when it requests the resource Black at time 6, the resource is free. It acquires the
resource and continues to execute. At time 7 when J1 is released, its priority is higher than the ceiling of the
system, which is 2 at the time. (Again, this fact indicates that the resource Shaded, which it will require later,
is free.) J1, therefore, preempts J2 and holds the space on the top of the stack until it completes at time 10.
J2 then resumes and completes at 11. Afterwards, J3, J4, and J5 complete in the order of their priorities.
Rules Defining the Ceiling-Priority Protocol
1. Scheduling Rule: (a) Every job executes at its assigned priority when it does not hold any resource. Jobs of
the same priority are scheduled on the FIFO basis. (b) The priority of each job holding any resource is equal
to the highest of the priority ceilings of all resources held by the job.
2. Allocation Rule: Whenever a job requests a resource, it is allocated the resource.
The longest blocking time suffered by every job is the same for the stack-based and basic priority-ceiling
protocols.
Data objects are a special type of shared resources. When jobs are scheduled preemptively, their accesses to
(i.e., reads and writes) data objects may be interleaved. To ensure data integrity, it is common to require
that the reads and writes be serializable.
A well-known way to ensure serializability is Two-Phase Locking (2PL). According to the 2PL protocol, a job
never requests any lock once it releases some lock.
We can easily get concurrency-control protocols that not only ensure serializability but also prevent
deadlock and transitive blocking by augmenting the protocols described in earlier sections with the two-
phase locking rule. As a result, we have the NPCS-2PL and the PCP-2PL protocols. The augmented protocols
have an obvious shortcoming: prolonged blocking.
The convex-(priority-)ceiling protocol is another extension of the priority-ceiling protocol. It is
an improvement over the PCP-2PL protocol because it reduces the duration of blocking.
A thread implements a computation job and is the basic unit of work handled by the scheduler.
The kernel maintains a Thread Control Block (TCB) for each thread and uses the structure to keep all the
information it will need to manage and schedule the thread.
Information kept in the TCB of a thread includes the ID of the thread and the starting address of the thread's
code. The context of a thread refers to the values of registers (e.g., program counter and status register).
When we say that the operating system inserts a thread in a queue (e.g., the ready or suspend queue), we mean that it inserts the TCB of the thread into the corresponding list.
A periodic (computation) task is a thread that executes periodically. It is clearly inefficient if the thread is
created and destroyed repeatedly every period.
The kernel reinitializes such a thread and puts it to sleep when the thread completes. The kernel keeps track
of the passage of time and releases the thread again at the beginning of the next period.
The parameters of a periodic thread include its phase (i.e., the interval between its creation and its first
release time), period, relative deadline, and the number of instances.
Most commercial operating systems do not support periodic threads.
The events that cause the releases of these threads occur sporadically and may be triggered by external
interrupts.
Such a queue is simply a list of pointers which give the starting addresses of functions to be executed by the
server thread.
Such a thread is put in the sleeping state immediately after it is created and initialized.
Time Services and Scheduling. The scheduler is a central part of the kernel. In most operating systems, the
scheduler executes periodically, as well as whenever the state of any thread changes. To trigger the
scheduler into action, the system clock device raises interrupts periodically. A clock interrupt refers to an
interrupt from the device.
At each clock interrupt, the kernel does the following chores.
Processes timer events: A clock device has a timer queue. The pending expiration times of all the timers that
are bound to the clock are stored in time order in this queue.
Updates execution budget: Most real-time operating systems (including all Real-Time POSIX-compliant
systems) provide the user with the choice of round-robin or FIFO scheduling of threads of equal priority.
(These policies are called SCHED RR and SCHED FIFO, respectively.) To schedule equal-priority threads in a
round-robin manner, the scheduler gives such a thread a time slice when it schedules the thread for
execution.
Threads may become ready (e.g., released upon timer expiration) and the thread that was executing at the
time of the clock interrupt may need to be preempted. The scheduler updates the ready queue accordingly
and then gives control to the thread.
To provide these services and support its own activities, every system has at least one clock device. We have
been calling it the system clock.
Again, a clock device contains a counter, a timer queue, and an interrupt handler. The counter is triggered to
increment monotonically by a precise periodic sequence of pulses. At any time, the content of the counter
gives a representation of the current time.
The timer queue contains the pending expiration times of timers bound to the clock.
A system may have more than one clock device and use them for different purposes. For example, a real-time monitor system may use a clock device to trigger the sampling and digitization of sensor readings and initiate the input of sensor data. A digital control system may use one or more clock devices to time the control-law computations.
Resolution. The resolution of a clock is the granularity of time provided by the clock.
The kernel maintains a software clock for each clock device it supports. It programs the clock device to raise
interrupts periodically. Whenever it gets an interrupt from the clock device, it updates the software clock,
and hence the current time according to the clock. Therefore, the resolution of the software clock is equal to
the period of these interrupts.
The typical period of clock interrupts is 10 milliseconds. If an operating system updates the system clock only at these interrupts, its clock resolution is 10 milliseconds.
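The resolution a kernel actually provides can be read with the POSIX clock_getres service, exposed in Python as time.clock_getres. A quick check (the values printed are system-dependent):

```python
import time

# Ask the kernel for the granularity of two standard software clocks.
res_monotonic = time.clock_getres(time.CLOCK_MONOTONIC)
res_realtime = time.clock_getres(time.CLOCK_REALTIME)
print(f"CLOCK_MONOTONIC resolution: {res_monotonic:.9f} s")
print(f"CLOCK_REALTIME  resolution: {res_realtime:.9f} s")

# A kernel that updated its software clock only at 10 ms ticks would
# report 0.01 here; most modern kernels report a much finer value.
```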
High Resolution. A way to provide a finer resolution than hundreds of microseconds and more accurate time
to an application is to map a hardware clock into the address space of the application. Then, the application
can read the clock directly.
Most operating systems (including all Real-Time POSIX compliant systems, Windows NT, and Solaris) allow a thread or process to have its own timers. Specifically, by calling the create-timer function, a thread (or process) can create a per-thread (or per-process) timer and, in a system containing multiple clocks, bind the timer to a specified clock.
Various forms of set-timer functions allow a thread to specify an action to take upon timer expiration. The
types of action include calling a specified function, waking up a thread, and sending a message to a message
queue.
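As an illustration of the "call a specified function" action, here is a sketch using Python's threading.Timer as a stand-in for a per-process one-shot timer (the names on_expire and expired are invented for this example):

```python
import threading

expired = threading.Event()

def on_expire(msg):
    # The action taken upon timer expiration: here, a function call.
    print("timer expired:", msg)
    expired.set()

# Arm a one-shot timer that fires 50 ms from now (relative time).
t = threading.Timer(0.05, on_expire, args=("take action",))
t.start()

expired.wait(timeout=1.0)  # block until the expiration action has run
```

Waking up a thread or posting to a message queue on expiration follows the same pattern: the expiration callback performs the notification.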
Asynchronous Timer Functions. As an example, we look at the set-watchdog-timer function wdStart() provided by VxWorks [Wind]. A watchdog (or alarm) timer is a one-shot timer that supports relative time.
A thread can cancel a timer before its expiration by calling wdCancel().
An implementation of a timing monitor. The timing monitor logs the deadlines missed by input tasks. Each of
these tasks processes sporadic sensor readings collected by an input device. Whenever the input device
presents a sensor reading by raising an interrupt, its input task must process the reading within the relative
deadline of the task.
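A minimal sketch of such a timing monitor in Python; the task name, the deadline values, and the handle_reading helper are invented for illustration:

```python
import time

missed_log = []  # deadline misses recorded by the timing monitor

def handle_reading(task_name, relative_deadline, process):
    """Process one sensor reading; log a miss if the response time
    exceeds the input task's relative deadline (in seconds)."""
    release = time.monotonic()       # reading presented: the job is released
    result = process()               # process the sensor reading
    response_time = time.monotonic() - release
    if response_time > relative_deadline:
        missed_log.append((task_name, response_time))
    return result

# A fast job easily meets a generous 100 ms deadline; nothing is logged.
handle_reading("input_task_1", 0.1, lambda: sum(range(1000)))
print("misses:", missed_log)
```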
A thread can set multiple timers to alarm at different rates or times.
Periodic Timer Interrupts. Now let us suppose that the nominal timer resolution is 10 microseconds and the period of time-service interrupts is 5 milliseconds. A thread sets a timer to expire twice, 10 microseconds apart, and reads the current time at the occurrence of each timer event.
One-Shot Timer Interrupts. Alternatively, some operating systems (e.g., QNX) program the clock device to
raise an interrupt at each timer expiration time.
The actual first release time can be later than this time because (1) the thread may be preempted and not
scheduled until a later time and (2) it takes some time to create a timer (and to get the current time) when
the thread starts to execute.
Such kernel support also simplifies the implementation of complex algorithms for scheduling aperiodic tasks at the user level.
Modern operating systems support fixed-priority scheduling.
To support fixed-priority scheduling, the kernel maintains a ready queue for each priority level. Whenever a
thread is ready to execute, the kernel places it in the ready queue of the thread’s current priority. For this
reason, the current priority of a thread is often called its dispatch priority.
The theoretical worst-case time complexity of this operation is O(Ω), where Ω is the number of priority levels supported by the operating system.
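A common way to make dispatching fast despite many priority levels is to pair the per-priority FIFO ready queues with a bitmap of nonempty levels, so the highest-priority ready thread is found with a bit scan rather than a linear search. A sketch (ReadyQueues is an invented name; priority 0 is taken as highest):

```python
from collections import deque

class ReadyQueues:
    """One FIFO ready queue per priority level, plus a bitmap so the
    dispatcher can locate the highest nonempty priority quickly."""
    def __init__(self, omega):
        self.queues = [deque() for _ in range(omega)]
        self.bitmap = 0  # bit i set <=> queue i nonempty

    def enqueue(self, thread, priority):
        self.queues[priority].append(thread)
        self.bitmap |= 1 << priority

    def dispatch(self):
        if self.bitmap == 0:
            return None  # no thread is ready
        # Lowest set bit = highest (numerically smallest) ready priority.
        prio = (self.bitmap & -self.bitmap).bit_length() - 1
        thread = self.queues[prio].popleft()
        if not self.queues[prio]:
            self.bitmap &= ~(1 << prio)  # queue emptied: clear its bit
        return thread

rq = ReadyQueues(omega=8)
rq.enqueue("low", 5)
rq.enqueue("high", 1)
print(rq.dispatch())  # "high": priority 1 beats priority 5
```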
Most existing operating systems support dynamic priority only in a rudimentary sense: a thread can set and change its own priority or the priority of some other thread. This mechanism is adequate for mode-change and reconfiguration purposes but is too expensive to support dynamic scheduling policies such as the EDF algorithm.
Rather than maintaining a single EDF queue, the scheduler keeps a FIFO queue for threads of each relative
deadline. The scheduler places each newly released thread at the end of the queue for threads which have
the same relative deadline as the new thread, as if the threads were scheduled on the deadline-monotonic
basis.
Therefore, to find the thread with the earliest absolute deadline, the scheduler only needs to search among the threads at the heads of all the FIFO queues. To minimize the time for dequeueing the highest-priority thread, the scheduler can keep the threads at the heads of the FIFO queues in a separate queue ordered by absolute deadline.
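The idea can be sketched as follows: threads released into the same relative-deadline FIFO have nondecreasing absolute deadlines (release time plus relative deadline), so comparing only the queue heads suffices. EDFQueues and its method names are invented for this illustration; a real scheduler would keep the heads in an ordered structure rather than re-scanning them:

```python
from collections import deque

class EDFQueues:
    """EDF scheduling with one FIFO queue per distinct relative deadline.
    Within each queue, absolute deadlines are nondecreasing, so only
    the queue heads need comparing to find the earliest deadline."""
    def __init__(self):
        self.fifos = {}  # relative deadline -> deque of (abs_deadline, thread)

    def release(self, thread, release_time, rel_deadline):
        q = self.fifos.setdefault(rel_deadline, deque())
        q.append((release_time + rel_deadline, thread))  # append at tail

    def pick_earliest(self):
        # Scan only the heads of the per-relative-deadline FIFO queues.
        heads = [(q[0][0], d) for d, q in self.fifos.items() if q]
        if not heads:
            return None
        _, d = min(heads)
        return self.fifos[d].popleft()[1]

edf = EDFQueues()
edf.release("t1", release_time=0, rel_deadline=10)
edf.release("t2", release_time=2, rel_deadline=5)
print(edf.pick_earliest())  # "t2": absolute deadline 7 beats 10
```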
As they execute, threads communicate (i.e., they exchange control information and data). They synchronize
in order to ensure that their exchanges occur at the right times and under the right conditions and that they
do not get into each other’s way. Shared memory, message queues, synchronization primitives (e.g., mutexes, condition variables, and semaphores), and events and signals are commonly used mechanisms for these purposes.
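For instance, a condition variable paired with a mutex lets one thread wait until another signals that a shared condition now holds. A minimal Python sketch:

```python
import threading

lock = threading.Lock()
ready = threading.Condition(lock)  # condition variable bound to the mutex
shared = []

def producer():
    with ready:                 # acquire the mutex
        shared.append("data")   # change the shared state
        ready.notify()          # wake one thread waiting on the condition

def consumer():
    with ready:
        while not shared:       # re-check the condition (spurious wakeups)
            ready.wait()        # release mutex and sleep until notified
        return shared.pop()

threading.Thread(target=producer).start()
print(consumer())
```

The while loop around wait() is the standard idiom: the condition is re-tested after every wakeup, so the exchange happens only under the right condition.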
Shared memory provides a low-level, high-bandwidth and low-latency means of interprocess
communication.
Message Queues. As its name tells us, a message queue provides a place where one or more threads can
pass messages to some other thread or threads.
We consider a system service provider. Message queues provide a natural way of communication between it
and its clients. The service provider creates a message queue, gives the message queue a name, and makes
this name known to its clients. To request service, a client thread opens the message queue and places its
Request-For-Service (RFS) message in the queue. The service provider may also use message queues as the
means for returning the results it produces back to the clients. A client gets the result by opening the result
queue and receiving the message in it.
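This request/reply pattern can be sketched with in-process queues; here the RFS message carries the client's private reply queue so the provider knows where to return the result (service_provider, client, and the uppercase "service" are invented for illustration):

```python
import queue
import threading

# The service provider owns a request queue known to its clients.
rfs_queue = queue.Queue()

def service_provider():
    while True:
        request, reply_queue = rfs_queue.get()  # wait for an RFS message
        if request is None:                     # shutdown sentinel
            break
        reply_queue.put(request.upper())        # return the result

def client(text):
    reply_queue = queue.Queue()                 # where the result will arrive
    rfs_queue.put((text, reply_queue))          # place the RFS message
    return reply_queue.get()                    # receive the result

server = threading.Thread(target=service_provider, daemon=True)
server.start()
print(client("hello"))  # -> "HELLO"
rfs_queue.put((None, None))  # stop the provider
```

With POSIX message queues the structure is the same, using mq_open/mq_send/mq_receive on named queues instead of in-process Queue objects.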

RTS Short Topics

  • 1. RTS IMPORTANT POINTS Real-time systems include digital control, command and control, signal processing, and telecommunication systems. Unlike PCs and workstations that run nonreal-time applications such as our editor and network browser, the computers and networks that run real-time applications are often hidden from our view. Real-time applications: digital control, optimal control, command and control, signal processing, tracking, real-time databases, and multimedia. Many real-time systems are embedded in sensors and actuators and function as digital controllers. Figure 1– 1 shows such a system. The state of the plant is monitored by sensors and can be changed by actuators. The real-time (computing) system estimates from the sensor readings the current state of the plant and computes a control output based on the difference between the current state and the desired state (called reference input in the figure). We call this computation the control-law computation of the controller. The output thus generated activates the actuators, which bring the plant closer to the desired state. The first is the perceived responsiveness of the overall system (i.e., the plant and the controller). The high sampling rate makes it possible to keep the control input small and the control-law computation and digital-to-analog conversion of the controller simple. A plant typically has more than one degree of freedom. Its state is defined by multiple state variables (e.g., the rotation speed, temperature, etc. of an engine or the tension and position of a video tape). Therefore, it is monitored by multiple sensors and controlled by multiple actuators. We can think of a multivariate (i.e., multi-input/multi-output) controller for such a plant as a system of single-output controllers. This multirate controller controls only flight dynamics. The control laws of each multirate controller may have harmonic periods. 
Each control-law computation can begin shortly after the beginning of each sampling period when the most recent sensor data become available. The controller at the highest level of a control hierarchy is a command and control system. Examples are digital filtering, video and voice compressing/decompression, and radar signal processing. The number of outputs may also be of order n, and the complexity of the computation is O(n2 ). A real-time database contains data objects, called image objects. A set of data objects is said to be absolutely (temporally) consistent if the maximum age of the objects in the set is no greater than a certain threshold. A fighter jet and the targets it tracks move at supersonic speeds. Hence the information on where they are must be less than 50 milliseconds old. On the other hand, an air traffic control system monitors commercial aircraft at subsonic speeds; this is why the absolute temporal consistency threshold for air traffic control is much larger. A set of data objects is said to be relatively consistent if the maximum difference in ages of the objects in the set is no greater than the relative consistency threshold used by the application. An example is a planning system that correlates traffic densities along a highway with the flow rates of vehicles entering and exiting the highway. The system does not require the most up-to-date flow rates at all interchanges and hence can tolerate a relatively large age (e.g., two minutes). However, if the difference in the ages of flow rates is large (e.g., one minute), the flow rates no longer give a valid snapshot of the traffic scenario and can lead the system to wrong conclusions. Purely cyclic: Every task in a purely cyclic application executes periodically. Even I/O operations are polled. Moreover, its demands in (computing, communication, and storage) resources do not vary significantly from period to period. Mostly cyclic: Most tasks in a mostly cyclic system execute periodically. 
The system must also respond to some external events (fault recovery and external commands) asynchronously. Examples are modern avionics.
  • 2. Asynchronous and somewhat predictable: In applications such as multimedia communication, radar signal processing, and tracking, most tasks are not periodic. The duration between consecutive executions of a task may vary considerably. Asynchronous and unpredictable: Applications that react to asynchronous events and have tasks with high run-time complexity belong to this type. An example is intelligent real-time control systems. The release time of a job is the instant of time at which the job becomes available for execution. The deadline of a job is the instant of time by which its execution is required to be completed. Response time, that is, the length of time from the release time of the job to the instant when it completes. We call the maximum allowable response time of a job its relative deadline. A constraint imposed on the timing behavior of a job a timing constraint. A timing constraint of a job can be specified in terms of its release time and relative or absolute deadlines, A timing constraint or deadline is hard if the failure to meet it is considered to be a fatal fault. A hard deadline is imposed on a job because a late result produced by the job after the deadline may have disastrous consequences. (As examples, a late command to stop a train may cause a collision, and a bomb dropped too late may hit a civilian population instead of the intended military target. The late completion of a job that has a soft deadline is undesirable. However, a few misses of soft deadlines do no serious harm; only the system’s overall performance becomes poorer and poorer when more and more jobs with soft deadlines complete late. Similarly, each control-law computation job of a flight controller must be completed in time so that its command can be issued in time. Otherwise, the plane controlled by it may become oscillatory (and the ride bumpy) or even unstable and uncontrollable. 
For this reason, we want the timely completion of all control-law computations guaranteed. Jobs in some nonembedded systems may also have hard deadlines. An example is a critical information system that must be highly available: The system must never be down for more than a minute. Because of this requirement, reconfiguration and recovery of database servers and network connections in the system must complete within a few seconds or tens of seconds, and this relative deadline is hard. A system in which jobs have soft deadlines is a soft real-time system. Examples of such systems include on-line transaction systems and telephone switches, as well as electronic games. A system may have critical timing requirements but nevertheless be considered a soft real-time system. An example is a stock price quotation system. It should update the price of each stock each time the price changes. The release times of sporadic and aperiodic jobs are random variables. The execution time of a task is the maximum execution time of all the jobs in it. A sporadic task is one whose interrelease times can be arbitrarily small. The accuracy of the periodic task model decreases with increasing jitter in release times. The total utilization U of all the tasks in the system is the sum of the utilizations of the individual tasks in it. We say that a task is aperiodic if the jobs in it have either soft deadlines or no deadlines. Tasks containing jobs that are released at random time instants and have hard deadlines are sporadic tasks. The types of an edge connecting two vertices and other parameters of the edge are interconnection parameters of the jobs represented by the vertices. Jobs with temporal distance constraints may or may not have deadlines. An outgoing edge from every vertex expresses an AND constraint. Such a system can easily be modeled by a task graph that has edges expressing OR constraints. 
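The definitions of task utilization and total utilization U above can be sketched in a few lines of Python; the task parameters here are made-up values, not taken from any example in the text:

```python
def utilization(execution_time, period):
    """Utilization u_i = e_i / p_i of one periodic task."""
    return execution_time / period

def total_utilization(tasks):
    """Total utilization U: the sum of the utilizations u_i of all
    tasks, each given as an (execution time, period) pair."""
    return sum(utilization(e, p) for e, p in tasks)

# Hypothetical task set: (execution time, period) pairs.
tasks = [(1, 4), (2, 5), (1, 10)]
U = total_utilization(tasks)   # 0.25 + 0.4 + 0.1 = 0.75
```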
The subgraph that begins from a vertex representing a branch job and ends at the vertex representing the associated join job is called a conditional block. The interruption of a job's execution so that the processor can be given to another job is called preemption. In nonreal-time systems, such jobs are typically scheduled in a round-robin manner. A job is nonpreemptable if it must be executed from start to completion without interruption. An example is a job that models the transmission of a data frame in a token ring (or bus): if transmission of a frame is interrupted before it completes, the partially transmitted frame is discarded by the receiver.
  • 3. An interrupt handling job usually begins by saving the state of the processor (i.e., storing the processor status register, the stack pointer, the program counter, and so on). The amount of time required to accomplish a context switch is called a context-switch time. A resource is nonpreemptable if each unit of the resource is constrained to be used serially. In other words, once a unit of a nonpreemptable resource is allocated to a job, other jobs needing the unit must wait until the job completes its use. Otherwise, if jobs can use every unit of a resource in an interleaved fashion, the resource is preemptable. Whereas edges in task graphs represent different types of dependencies among jobs, edges in a resource graph represent the relationship among resources. The resource graph in a system containing two computers consists of two trees. The root of each tree represents a computer. Each of these subcomponents is represented by a vertex, and there is an edge from the computer vertex to each subcomponent vertex. Some edges in resource graphs represent connectivity between components. These edges are called accessibility edges. In the clock-driven approach, a schedule of the jobs is computed off-line and is stored for use at run time. The scheduler schedules the jobs according to this schedule at each scheduling decision time. In this way, scheduling overhead during run time can be minimized. The round-robin approach is commonly used for scheduling time-shared applications. When jobs are scheduled on a round-robin basis, every job joins a first-in-first-out (FIFO) queue when it becomes ready for execution. A time slice is the basic granule of time that is allocated to jobs. In a time-shared environment, a time slice is typically on the order of tens of milliseconds. If the job does not complete by the end of the time slice, it is preempted and placed at the end of the queue to wait for its next turn. 
When there are n ready jobs in the queue, each job gets one time slice every n time slices, that is, every round. Because the length of the time slice is relatively short, the execution of every job begins almost immediately after it becomes ready. The round-robin algorithm is also called the processor-sharing algorithm. The weighted round-robin algorithm has been used for scheduling real-time traffic in high-speed switched networks. Different jobs may be given different weights. Here, the weight of a job refers to the fraction of processor time allocated to the job. The length of a round is equal to the sum of the weights of all the ready jobs. By adjusting the weights of jobs, we can speed up or retard the progress of each job toward its completion. By giving each job a fraction of the processor, a round-robin scheduler delays the completion of every job. If it is used to schedule precedence-constrained jobs, the response time of a chain of jobs can be unduly large. The weighted round-robin approach does not require a sorted priority queue. The term priority-driven algorithms refers to a large class of scheduling algorithms that never leave any resource idle. Scheduling decisions are made when events such as releases and completions of jobs occur. Hence, priority-driven algorithms are event-driven. Other commonly used names for this approach are greedy scheduling, list scheduling, and work-conserving scheduling. A priority-driven algorithm is greedy because it tries to make locally optimal decisions. When a processor or resource is available and some job can use it to make progress, such an algorithm never makes the job wait. A priority-driven algorithm can be implemented by assigning priorities to jobs. Jobs ready for execution are placed in one or more queues ordered by the priorities of the jobs. At any scheduling decision time, the jobs with the highest priorities are scheduled and executed on the available processors. 
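The weighted round-robin idea can be sketched in Python as follows. The job names and weights are hypothetical; each ready job J_i with weight w_i receives w_i time slices per round, so the round length equals the sum of the weights:

```python
def weighted_round(jobs):
    """Build one round of a weighted round-robin schedule.

    jobs: list of (name, weight) pairs for the ready jobs.
    Returns the sequence of time slices in one round; job i appears
    weight_i times, and the round length is the sum of the weights.
    """
    schedule = []
    for name, weight in jobs:
        schedule.extend([name] * weight)
    return schedule

# Hypothetical ready jobs: J1 gets 2/6 of the processor, J3 gets 3/6.
jobs = [("J1", 2), ("J2", 1), ("J3", 3)]
round_schedule = weighted_round(jobs)   # round length = 2 + 1 + 3 = 6
```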
Most scheduling algorithms used in nonreal-time systems are priority-driven. Examples include the FIFO (First-In-First-Out) and LIFO (Last-In-First-Out) algorithms, which assign priorities on the basis of job release times, and SETF (Shortest-Execution-Time-First), which assigns priorities on the basis of job execution times.
  • 4. The priority of the executing job is lowered to the minimum among all jobs waiting for execution after the job has executed. When jobs have the same release time, preemptive scheduling is better. When a processor is available, the job at the head of the queue executes on the processor. We will refer to such a multiprocessor system as a dynamic system, because jobs are dynamically dispatched to processors. In the example in Figure 4–2, we allowed each preempted job to resume on any processor; hence, jobs are migratable. We say that a job migrates if it starts execution on a processor, is preempted, and later resumes on a different processor. Another approach in multiprocessor and distributed systems is to partition the jobs in the system into subsystems and assign and bind the subsystems statically to the processors. Jobs are moved among processors only when the system must be reconfigured, that is, when the operation mode of the system changes or some processor fails. Such a system is called a static system, because the system is statically configured. If jobs on different processors are dependent, the schedulers on the processors must synchronize. The given release times and deadlines of jobs are sometimes inconsistent with the precedence constraints of the jobs: the release time of a job may be later than that of its successors, and its deadline may be earlier than that of its predecessors. The effective release time of a job without predecessors is equal to its given release time; for a job with predecessors, it is the maximum of its given release time and the effective release times of its predecessors. The effective deadline of a job without successors is equal to its given deadline. The effective deadline of a job with successors is equal to the minimum among its given deadline and the effective deadlines of its successors. The effective release times (and deadlines) of all the jobs can be computed in one pass through the precedence graph in O(n²) time. Serially reusable resources are typically granted (i.e., allocated) to jobs on a nonpreemptive basis and used in a mutually exclusive manner. 
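The effective release times and deadlines described above can be computed with one forward pass and one backward pass over the precedence graph in topological order. The following is a minimal Python sketch; the job names and precedence edges are hypothetical:

```python
def effective_times(release, deadline, succ):
    """Compute effective release times and deadlines.

    release, deadline: dicts mapping job name -> given time.
    succ: dict mapping job name -> list of successor job names.
    Effective release of a job = max(given release, effective releases
    of predecessors); effective deadline = min(given deadline,
    effective deadlines of successors).
    """
    jobs = list(release)
    pred = {j: [] for j in jobs}
    for j, ss in succ.items():
        for s in ss:
            pred[s].append(j)
    # Topological order via Kahn's algorithm.
    indeg = {j: len(pred[j]) for j in jobs}
    order, frontier = [], [j for j in jobs if indeg[j] == 0]
    while frontier:
        j = frontier.pop()
        order.append(j)
        for s in succ.get(j, []):
            indeg[s] -= 1
            if indeg[s] == 0:
                frontier.append(s)
    eff_r = dict(release)
    for j in order:                  # forward pass: release times
        for s in succ.get(j, []):
            eff_r[s] = max(eff_r[s], eff_r[j])
    eff_d = dict(deadline)
    for j in reversed(order):        # backward pass: deadlines
        for p in pred[j]:
            eff_d[p] = min(eff_d[p], eff_d[j])
    return eff_r, eff_d
```

For example, if J1 precedes J2, J1 is released at 2 with deadline 10, and J2 is released at 0 with deadline 8, then J2's effective release time becomes 2 and J1's effective deadline becomes 8.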
In other words, when a unit of a resource Ri is granted to a job, this unit is no longer available to other jobs until the job frees the unit. A lock-based concurrency control mechanism is used to enforce mutually exclusive accesses of jobs to resources. When a job wants to use ηi units of resource Ri , it executes a lock to request them. We denote this lock request by L(Ri, ηi). The job continues its execution when it is granted the requested resource. When the job no longer needs the resource, it releases the resource by executing an unlock, denoted by U(Ri, ηi). When a resource Ri has only 1 unit, we use the simpler notations L(Ri) and U(Ri) for lock and unlock, respectively. We call a segment of a job that begins at a lock and ends at a matching unlock a critical section. As an example, Figure 8–1 shows three jobs, J1, J2, and J3, and the time instants when locks and unlocks are executed if each job executes alone starting from time 0. Resources R1, R2, and R3 have only 1 unit each, while resources R4 and R5 have many units. Job J3 has three overlapping critical sections that are properly nested. A critical section that is not included in other critical sections is an outermost critical section; the critical section delimited by L(R1) and U(R1) in J3 is an example. Other examples are the critical sections delimited by L(R2) and U(R2) in J2, the second pair of L(R3) and U(R3) in J2 and L(R3) and U(R3) in J1. The jobs contend for a resource when one job requests a resource that the other job already has. The scheduler always denies a request if there are not enough free units of the resource to satisfy the request. Sometimes, as we will see later, a scheduler may deny a request even when the requested resource units are free in order to prevent some undesirable execution behavior. When the scheduler does not grant ηi units of resource Ri to the job requesting them, the lock request L(Ri, ηi) of the job fails (or is denied). 
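The L(Ri, ηi)/U(Ri, ηi) semantics for a multi-unit, serially reusable resource can be illustrated with a small Python class; this is a sketch of the bookkeeping only, not any particular kernel's API:

```python
class Resource:
    """A serially reusable resource with a fixed number of units."""

    def __init__(self, units):
        self.free = units   # units currently available
        self.held = {}      # job name -> units held

    def lock(self, job, n=1):
        """L(R, n): grant n units if free, else deny (job would block)."""
        if n <= self.free:
            self.free -= n
            self.held[job] = self.held.get(job, 0) + n
            return True
        return False        # lock request fails; requester is blocked

    def unlock(self, job, n=1):
        """U(R, n): release n units back to the free pool."""
        self.held[job] -= n
        self.free += n
```

With 3 units, a lock request L(R, 2) by J1 succeeds, a subsequent L(R, 2) by J2 is denied until J1 executes U(R, 2).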
When its lock request fails, the job is blocked and loses the processor. A blocked job is removed from the ready job queue. It stays blocked until the scheduler grants it ηi units of Ri for which the job is waiting. At that time, the job becomes unblocked, is moved back to the ready job queue, and executes when it is scheduled. The example in Figure 8–2 illustrates the effect of resource contentions. In this example, there are three jobs, J1, J2, and J3, whose feasible intervals are (6, 14], (2, 17] and (0, 18], respectively. J1 has the highest priority and J3 the lowest. All three jobs require the resource R, which has only 1 unit. In particular, the critical sections in these jobs are [R; 2], [R; 4], and [R; 4], respectively.
  • 5. At time 0, only J3 is ready. It executes. At time 1, J3 is granted the resource R when it executes L(R). J2 is released at time 2, preempts J3, and begins to execute. At time 4, J2 tries to lock R. Because R is in use by J3, this lock request fails. J2 becomes blocked, and J3 regains the processor and begins to execute. At time 6, J1 becomes ready, preempts J3, and begins to execute. J1 executes until time 8, when it executes L(R) to request R. J3 still has the resource. Consequently, J1 becomes blocked. Only J3 is ready for execution, and it again has the processor and executes. The critical section of J3 completes at time 9. The resource R becomes free when J3 executes U(R). Both J1 and J2 are waiting for it. The priority of the former is higher. Therefore, the resource and the processor are allocated to J1, allowing it to resume execution. J1 releases the resource R at time 11. J2 is unblocked. Since J1 has the highest priority, it continues to execute. J1 completes at time 12. Since J2 is no longer blocked and has a higher priority than J3, it has the processor, holds the resource, and begins to execute. When it completes at time 17, J3 resumes and executes until completion at 18. Priority inversion can occur when the execution of some jobs or portions of jobs is nonpreemptable. When resources are allocated to jobs on a nonpreemptive basis, a higher-priority job can be blocked by a lower-priority job if the jobs conflict, even when the execution of both jobs is preemptable. In the example in Figure 8–2, the lowest-priority job J3 first blocks J2 and then blocks J1 while it holds the resource R. As a result, priority inversion occurs in intervals (4, 6] and (8, 9]. Shortening the critical section in J3 (say, by 1.5 units of time) shortens these priority inversions accordingly; without such control, however, the duration of a priority inversion can become arbitrarily long, and the example in Figure 8–4 illustrates this fact. Here, jobs J1 and J3 have the highest priority and lowest priority, respectively. 
At time 0, J3 becomes ready and executes. It acquires the resource R shortly afterwards and continues to execute. After R is allocated to J3, J1 becomes ready. It preempts J3 and executes until it requests resource R at time 3. Because the resource is in use, J1 becomes blocked, and a priority inversion begins. While J3 is holding the resource and executing, a job J2 with a priority higher than J3 but lower than J1 is released. Moreover, J2 does not require the resource R. This job preempts J3 and executes to completion. Thus, J2 lengthens the duration of this priority inversion. In this situation, the priority inversion is said to be uncontrolled. A simple way to control priority inversion is the following: when a job requests a resource, it is always allocated the resource, and while a job holds any resource, it executes at a priority higher than the priorities of all jobs. This protocol is called the Nonpreemptive Critical Section (NPCS) protocol. Consider the schedule of these jobs when their critical sections are scheduled nonpreemptively on the processor. According to this schedule, J1 is forced to wait for J3 while J3 holds the resource. However, as soon as J3 releases the resource, J1 becomes unblocked and executes to completion. Because J1 is not delayed by J2, it completes at time 10. The uncontrolled priority inversion illustrated by Figure 8–4 can never occur. The reason is that a job Jh can be blocked only if it is released when some lower-priority job is in a critical section. If it is blocked, once the blocking critical section completes, all resources are free. No lower-priority job can get the processor and acquire any resource until Jh completes. Hence, Jh can be blocked only once, and its blocking time due to resource conflict is at most equal to the maximum execution time of the critical sections of all lower-priority jobs. The most important advantage of the NPCS protocol is its simplicity, especially when the numbers of resource units are arbitrary. 
The protocol does not need any prior knowledge about the resource requirements of jobs. It is simple to implement and can be used in both fixed-priority and dynamic-priority systems. When the resource requirements of all jobs are known, an improvement is to let a job holding any resource execute at the highest priority of all jobs requiring the resource. This is indeed the idea behind the ceiling-priority protocol, whose rules are given later.
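The NPCS blocking bound stated above (a job is blocked at most once, for at most the longest critical section of any lower-priority job) can be expressed directly. In this sketch, smaller numbers mean higher priorities, and the (priority, duration) pairs are illustrative values:

```python
def npcs_blocking_time(job_priority, critical_sections):
    """Worst-case blocking time of a job under the NPCS protocol.

    job_priority: priority of the job (smaller number = higher priority).
    critical_sections: list of (priority, duration) pairs, one per
    critical section in the system.
    Returns the maximum critical-section duration among all
    lower-priority jobs, or 0 if there is none.
    """
    durations = [d for p, d in critical_sections if p > job_priority]
    return max(durations, default=0)

# A priority-1 job can be blocked for at most 4 time units here.
bound = npcs_blocking_time(1, [(2, 4), (3, 2)])
```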
  • 6. It works with any preemptive, priority-driven scheduling algorithm. Like the NPCS protocol, it does not require prior knowledge of the resource requirements of jobs. The priority-inheritance protocol does not prevent deadlock. When there is no deadlock (i.e., when some other method is used to prevent deadlock), the protocol ensures that no job is ever blocked for an indefinitely long time, because uncontrolled priority inversion cannot occur. In particular, the current priority πl(t) of a job Jl may be raised to the higher priority πh(t) of another job Jh. When this happens, we say that the lower-priority job Jl inherits the priority of the higher-priority job Jh and that Jl executes at its inherited priority πh(t). The priority-inheritance protocol is defined by the following rules. 1. Scheduling Rule: Ready jobs are scheduled on the processor preemptively in a priority-driven manner according to their current priorities. At its release time t, the current priority π(t) of every job J is equal to its assigned priority. The job remains at this priority except under the condition stated in rule 3. 2. Allocation Rule: When a job J requests a resource R at time t, (a) if R is free, R is allocated to J until J releases the resource, and (b) if R is not free, the request is denied and J is blocked. 3. Priority-Inheritance Rule: When the requesting job J becomes blocked, the job Jl which blocks J inherits the current priority π(t) of J. The job Jl executes at its inherited priority π(t) until it releases R; at that time, the priority of Jl returns to its priority πl(t') at the time t' when it acquired the resource R. Figure 8–7(b) illustrates how priority inheritance affects the way jobs are scheduled and executed. The three jobs in this figure are the same as the ones in Figure 8–7(a). When J1 requests resource R and becomes blocked by J3 at time 3, job J3 inherits the priority π1 of J1. 
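A minimal sketch of the allocation and priority-inheritance rules for a single-unit resource might look as follows. Smaller numbers mean higher priorities; the class and job names are hypothetical, and nested critical sections are not modeled (so on release, the holder simply returns to its assigned priority):

```python
class PILock:
    """Single-unit resource with the priority-inheritance rule."""

    def __init__(self):
        self.holder = None

    def request(self, job, prio):
        """Allocation rule: grant the resource if free; otherwise the
        requester blocks and the holder inherits its priority."""
        if self.holder is None:
            self.holder = job
            return True
        # Priority-inheritance rule: raise the holder's current
        # priority to the blocked requester's priority if higher.
        prio[self.holder] = min(prio[self.holder], prio[job])
        return False

    def release(self, job, prio, assigned):
        """On release, the holder's priority returns to what it was
        when it acquired the resource (here, its assigned priority,
        since nesting is not modeled)."""
        self.holder = None
        prio[job] = assigned[job]
```

For instance, if J3 (assigned priority 3) holds the lock and J1 (priority 1) requests it, J1 blocks and J3's current priority is raised to 1 until it releases the lock.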
When job J2 becomes ready at 5, it cannot preempt J3 because its priority π2 is lower than the inherited priority π1 of J3. As a consequence, J3 completes its critical section as soon as possible. The priority-inheritance protocol does not prevent deadlock. In addition, the priority-inheritance protocol does not keep the blocking times suffered by jobs as small as possible. In the worst case, a job that requires v resources and conflicts with k lower-priority jobs can be blocked for min(v, k) times. The priority-ceiling protocol extends the priority-inheritance protocol to prevent deadlocks and to further reduce the blocking time. This protocol makes two key assumptions: 1. The assigned priorities of all jobs are fixed. 2. The resources required by all jobs are known a priori, before the execution of any job begins. The priority ceiling of any resource Ri is the highest priority of all the jobs that require Ri and is denoted by Π(Ri). A fundamental difference between the priority-inheritance and priority-ceiling protocols is that the former is greedy while the latter is not. The priority-inheritance protocol lets the requesting job have a resource whenever the resource is free. In contrast, according to the allocation rule of the priority-ceiling protocol, a job may be denied its requested resource even when the resource is free at the time. The priority-inheritance rules of these two protocols are essentially the same. It is possible for J to be blocked by a lower-priority job which does not hold the requested resource according to the priority-ceiling protocol, while this is impossible according to the priority-inheritance protocol. Priority-ceiling blocking is sometimes referred to as avoidance blocking. 
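Given assumption 2 (resource requirements known a priori), the priority ceiling Π(Ri) of each resource can be precomputed before any job executes. A small sketch, with smaller numbers meaning higher priorities and hypothetical job/resource names:

```python
def priority_ceilings(requirements):
    """Compute the priority ceiling of every resource.

    requirements: dict mapping job name -> (assigned priority,
    set of resources the job requires). The ceiling of a resource is
    the highest priority (smallest number) among the jobs requiring it.
    """
    ceiling = {}
    for job, (prio, resources) in requirements.items():
        for r in resources:
            ceiling[r] = min(ceiling.get(r, prio), prio)
    return ceiling

# J1 and J2 require R1, so Π(R1) = 1; J2 and J3 require R2, so Π(R2) = 2.
ceilings = priority_ceilings({
    "J1": (1, {"R1"}),
    "J2": (2, {"R1", "R2"}),
    "J3": (3, {"R2"}),
})
```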
The reason for this term is that the blocking caused by the priority-ceiling rule is the cost of avoidance of deadlocks among jobs. In systems where the number of jobs is large, it may be necessary for the jobs to share a common run-time stack in order to reduce the overall memory demand. When a job J executes, its stack space is on the top of the stack. The space is freed when the job completes. When J is preempted, the preempting job has the stack space above J's. J can resume execution only after all the jobs holding stack space above its space complete, free their stack spaces, and leave J's stack space on the top of the stack again.
  • 7. To ensure deadlock-free sharing of the run-time stack among jobs, we must ensure that no job is ever blocked because it is denied some resource once its execution begins. This observation leads to the following modified version of the priority-ceiling protocol, called the stack-based, priority-ceiling protocol. The schedule in Figure 8–17 shows how the system of jobs in Figure 8–10 would be scheduled if the stack-based, priority-ceiling protocol were used instead of the basic priority-ceiling protocol. To better illustrate the stack-based protocol, we let J2 be released at 4.8 and the execution time of the critical section of J2 be 1.2. At time 2, when J4 is released, it is blocked from starting because its priority is not higher than the ceiling of the system, which is equal to 2 at the time. This allows J5 to continue execution. For the same reason, J3 does not start execution when it is released. When J2 is released at time 4.8, it cannot start execution because the ceiling of the system is 2. At time 5, the resource held by J5 becomes free and the ceiling of the system drops to its lowest value. Consequently, J2 starts to execute, since it has the highest priority among all the jobs ready at the time. As expected, when it requests the resource Black at time 6, the resource is free. It acquires the resource and continues to execute. At time 7, when J1 is released, its priority is higher than the ceiling of the system, which is 2 at the time. (Again, this fact indicates that the resource Shaded, which it will require later, is free.) J1 therefore preempts J2 and holds the space on the top of the stack until it completes at time 10. J2 then resumes and completes at 11. Afterwards, J3, J4, and J5 complete in the order of their priorities. Rules Defining the Ceiling-Priority Protocol: 1. Scheduling Rule: (a) Every job executes at its assigned priority when it does not hold any resource. Jobs of the same priority are scheduled on a FIFO basis. 
(b) The priority of each job holding any resource is equal to the highest of the priority ceilings of all resources held by the job. 2. Allocation Rule: Whenever a job requests a resource, it is allocated the resource. The longest blocking time suffered by every job is the same for the stack-based and basic priority-ceiling protocols. Data objects are a special type of shared resource. When jobs are scheduled preemptively, their accesses to (i.e., reads and writes of) data objects may be interleaved. To ensure data integrity, it is common to require that the reads and writes be serializable. A well-known way to ensure serializability is Two-Phase Locking (2PL). According to the 2PL protocol, a job never requests any lock once it releases some lock. We can easily get concurrency-control protocols that not only ensure serializability but also prevent deadlock and transitive blocking by augmenting the protocols described in earlier sections with the two-phase locking rule. As a result, we have the NPCS-2PL and the PCP-2PL protocols. The augmented protocols have an obvious shortcoming: prolonged blocking. The convex-(priority-)ceiling protocol is another extension of the priority-ceiling protocol. It is an improvement over the PCP-2PL protocol because it reduces the duration of blocking. A thread implements a computation job and is the basic unit of work handled by the scheduler. The kernel maintains for each thread a Thread Control Block (TCB) and uses this structure to keep all the information it will need to manage and schedule the thread. Information kept in the TCB of a thread includes the ID of the thread and the starting address of the thread's code. The context of a thread refers to the values of registers (e.g., program counter and status register). When we say that the operating system inserts a thread in a queue (e.g., the ready or suspend queue), we mean that the kernel inserts the TCB of the thread into the queue. A periodic (computation) task is a thread that executes periodically. 
It is clearly inefficient if the thread is created and destroyed repeatedly every period. Instead, the kernel reinitializes such a thread and puts it to sleep when the thread completes. The kernel keeps track of the passage of time and releases (i.e., moves to the ready queue) the thread again at the beginning of the next period. The parameters of a periodic thread include its phase (i.e., the interval between its creation and its first release time), period, relative deadline, and the number of instances. Most commercial operating systems do not support periodic threads. The events that cause the releases of aperiodic and sporadic threads occur sporadically and may be triggered by external interrupts. A server queue is simply a list of pointers which give the starting addresses of functions to be executed by the server thread. The server thread is put in the sleeping state immediately after it is created and initialized.
  • 8. Time Services and Scheduling. The scheduler is a central part of the kernel. In most operating systems, the scheduler executes periodically, as well as whenever the state of any thread changes. To trigger the scheduler into action, the system clock device raises interrupts periodically. A clock interrupt refers to an interrupt from the device. At each clock interrupt, the kernel does the following chores. Processes timer events: A clock device has a timer queue. The pending expiration times of all the timers that are bound to the clock are stored in time order in this queue. Updates execution budget: Most real-time operating systems (including all Real-Time POSIX-compliant systems) provide the user with the choice of round-robin or FIFO scheduling of threads of equal priority. (These policies are called SCHED_RR and SCHED_FIFO, respectively.) To schedule equal-priority threads in a round-robin manner, the scheduler gives such a thread a time slice when it schedules the thread for execution. Updates the ready queue and returns control: Threads may become ready (e.g., released upon timer expiration), and the thread that was executing at the time of the clock interrupt may need to be preempted. The scheduler updates the ready queue accordingly and then gives control to the thread at the head of the highest-priority queue. To provide these services and support its own activities, every system has at least one clock device; we have been calling it the system clock. Again, a clock device contains a counter, a timer queue, and an interrupt handler. The counter is triggered to increment monotonically by a precise periodic sequence of pulses. At any time, the content of the counter gives a representation of the current time. The timer queue contains the pending expiration times of timers bound to the clock. A system may have more than one clock device and may use them for different purposes. For example, a real-time monitor system may use a clock device to trigger the sampling and digitization of sensor readings and initiate the input of sensor data. 
A digital control system may use one or more clock devices to time the control-law computations. Resolution: The resolution of a clock is the granularity of time provided by the clock. The kernel maintains a software clock for each clock device it supports. It programs the clock device to raise interrupts periodically. Whenever it gets an interrupt from the clock device, it updates the software clock, and hence the current time according to the clock. Therefore, the resolution of the software clock is equal to the period of these interrupts. The typical period of clock interrupts is 10 milliseconds. If an operating system updates the system clock only at these interrupts, its clock resolution is 10 milliseconds. High Resolution: A way to provide a finer resolution than hundreds of microseconds, and more accurate time, to an application is to map a hardware clock into the address space of the application. Then the application can read the clock directly. Most operating systems (including all Real-Time POSIX-compliant systems, Windows NT, and Solaris) allow a thread or process to have its own timers. Specifically, by calling the create-timer function, a thread (or process) can create a per-thread (or per-process) timer and, in a system containing multiple clocks, bind the timer to a specified clock. Various forms of set-timer functions allow a thread to specify an action to take upon timer expiration. The types of action include calling a specified function, waking up a thread, and sending a message to a message queue. Asynchronous Timer Functions: As an example, we look at the set-watchdog-timer function wdStart( ) provided by VxWorks [Wind]. A watchdog (or alarm) timer is a one-shot timer that supports relative time. A thread can cancel a timer before its expiration by calling wdCancel( ). As an example of the use of watchdog timers, consider an implementation of a timing monitor that logs the deadlines missed by input tasks. 
Each of these tasks processes sporadic sensor readings collected by an input device. Whenever the input device presents a sensor reading by raising an interrupt, its input task must process the reading within the relative deadline of the task. A thread can set multiple timers to alarm at different rates or times.
  • 9. Periodic Timer Interrupts: Now let us suppose that the nominal timer resolution is 10 microseconds and the period of time-service interrupts is 5 milliseconds. A thread sets a timer to expire twice, 10 microseconds apart, and reads the current time at the occurrence of each timer event. One-Shot Timer Interrupts: Alternatively, some operating systems (e.g., QNX) program the clock device to raise an interrupt at each timer expiration time. The actual first release time of a periodic thread can be later than the intended time because (1) the thread may be preempted and not scheduled until a later time and (2) it takes some time to create a timer (and to get the current time) when the thread starts to execute. Such kernel support can also simplify the implementation of complex algorithms for scheduling aperiodic tasks at the user level. Modern operating systems support fixed-priority scheduling. To support fixed-priority scheduling, the kernel maintains a ready queue for each priority level. Whenever a thread is ready to execute, the kernel places it in the ready queue of the thread's current priority. For this reason, the current priority of a thread is often called its dispatch priority. The theoretical worst-case time complexity of finding the highest-priority ready thread is O(Ω), where Ω is the number of priority levels supported by the operating system. Most existing operating systems support dynamic priorities only in the sense that a thread can set and change its own priority or the priority of some other thread. This mechanism is adequate for mode-change and reconfiguration purposes but is too expensive to support dynamic scheduling policies such as the EDF algorithm. Rather than maintaining a single EDF queue, the scheduler keeps a FIFO queue for threads of each relative deadline. The scheduler places each newly released thread at the end of the queue for threads which have the same relative deadline as the new thread, as if the threads were scheduled on the deadline-monotonic basis. 
Therefore, to find the thread with the earliest absolute deadline, the scheduler only needs to search among the threads at the heads of all the FIFO queues. To minimize the time for dequeueing the highest-priority thread, the scheduler can keep the threads at the heads of the FIFO queues in a priority queue ordered by absolute deadline. As they execute, threads communicate (i.e., they exchange control information and data). They synchronize in order to ensure that their exchanges occur at the right times and under the right conditions and that they do not get into each other's way. Shared memory, message queues, synchronization primitives (e.g., mutexes, condition variables, and semaphores), and events and signals are commonly used mechanisms for these purposes. Shared memory provides a low-level, high-bandwidth, and low-latency means of interprocess communication. Message Queues: As its name tells us, a message queue provides a place where one or more threads can pass messages to some other thread or threads. Consider a system service provider. Message queues provide a natural way of communication between it and its clients. The service provider creates a message queue, gives the message queue a name, and makes this name known to its clients. To request service, a client thread opens the message queue and places its Request-For-Service (RFS) message in the queue. The service provider may also use message queues as the means for returning the results it produces back to the clients. A client gets the result by opening the result queue and receiving the message in it.
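The per-relative-deadline FIFO-queue structure described above can be sketched as follows. Within each FIFO queue the absolute deadlines (release time + relative deadline) are nondecreasing, assuming threads are released in time order, so the earliest-deadline thread is always at the head of one of the queues; the class and thread names are hypothetical:

```python
from collections import deque

class EDFQueues:
    """EDF scheduling via one FIFO queue per relative deadline."""

    def __init__(self):
        # relative deadline -> FIFO of (absolute deadline, thread name)
        self.queues = {}

    def release(self, name, release_time, rel_deadline):
        """Append a newly released thread to the FIFO queue for its
        relative deadline (as in deadline-monotonic queueing)."""
        q = self.queues.setdefault(rel_deadline, deque())
        q.append((release_time + rel_deadline, name))

    def dispatch(self):
        """Remove and return the thread with the earliest absolute
        deadline, searching only the heads of the FIFO queues."""
        head = min((q for q in self.queues.values() if q),
                   key=lambda q: q[0][0], default=None)
        return head.popleft()[1] if head else None

edf = EDFQueues()
edf.release("A", 0, 10)   # absolute deadline 10
edf.release("B", 2, 5)    # absolute deadline 7
edf.release("C", 1, 10)   # absolute deadline 11
```

Here dispatch() returns "B" first (deadline 7), then "A" (10), then "C" (11), examining only the queue heads rather than every ready thread.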