EC6703
EMBEDDED AND REAL
TIME SYSTEMS
 Introduction
 Multiple tasks and multiple processes
 Multi-rate systems
 Pre-emptive real-time operating systems
 Priority based scheduling
 Inter-process communication mechanisms
 Evaluating operating system performance
 Power optimization strategies for processes
 Example real-time operating systems – POSIX, Windows CE.
UNIT III PROCESSES AND OPERATING SYSTEMS
Processes and operating systems
• Applications can be programmed on a
microprocessor by writing a single piece of code,
• But many applications are sophisticated enough
that writing one large program does not suffice.
• When multiple operations must be performed at
widely varying times, a single program can
easily become too complex and unwieldy.
Introduction
In this unit, we will study the two fundamental abstractions that
allow us to build complex applications on microprocessors:
the process and the operating system (OS), and in particular the real-time operating system (RTOS).
RTOS
• Real-time operating systems (RTOSs) are
OSs that provide facilities for satisfying real-time
requirements.
• A RTOS allocates resources using algorithms that
take real time into account.
• General-purpose OSs, in contrast, generally
allocate resources using other criteria, such as
fairness.
• Trying to allocate the CPU equally to all processes
without regard to time can easily cause processes
to miss their deadlines.
• Most embedded systems require functionality
and timing that is too complex to embody in a
single program.
• So we break the system into multiple tasks in
order to manage when things happen.
• In this section we will develop the basic
abstractions that will be manipulated by the
RTOS to build multirate systems.
Multiple tasks and multiple processes.
Tasks and Processes
• A task is a functional description of a
connected set of operations.
• Many (if not most) embedded computing
systems do more than one thing—that is, the
environment can cause mode changes that in
turn cause the embedded system to behave
quite differently.
• Ex: Telephone answering machine
• A process is a single execution of a program.
• If we run the same program two different times, we
have created two different processes. Each process has
its own state that includes not only its registers but all
of its memory.
• In some OSs, the memory management unit is used to
keep each process in a separate address space.
• In others, particularly lightweight RTOSs, the processes
run in the same address space.
• “Processes that share the same address space are
often called threads.”
Tasks and Processes
(Task can also mean a collection of processes.)
• A process is a unique execution of a program.
– Several copies of a program may run
simultaneously or at different times.
• A process has its own state:
– registers;
– memory.
• The operating system manages processes.
Tasks and Processes
EX:-An on-the-fly compression box.
This device is connected to serial ports on both ends. The input to the box is an uncompressed stream of bytes. The box emits a compressed string of bits on the output serial line, based on a predefined compression table.
The program needs to receive and send data at different rates: for example, the program may emit 2 bits for the first byte and then 7 bits for the second byte.
Why multiple processes?
• Multiple tasks means multiple processes.
• Processes help with timing complexity:
– multiple rates
• multimedia
• automotive
– asynchronous input
• user interfaces
• communication systems
Reactive systems
• Respond to external events.
– Engine controller.
– Seat belt monitor.
• Requires real-time response.
– System architecture.
– Program implementation.
• May require a chain reaction among multiple
processors.
Multi-rate systems
• Tasks may be synchronous or asynchronous.
• Synchronous tasks may recur at different rates.
• Processes run at different rates based on
computational needs of the tasks.
• Multirate embedded computing systems are
very common, including automobile engines,
printers, and cell phones.
• In all these systems, certain operations must be
executed periodically, and each operation is
executed at its own rate
EX:
Example: Automotive engine control
• Tasks:
– spark control
– crankshaft sensing
– fuel/air mixture
– oxygen sensor
– Kalman filter
The simplest automotive engine controllers,
such as the ignition controller for a basic
motorcycle engine, perform only one task—
timing the firing of the spark plug, which takes
the place of a mechanical distributor.
The spark plug must be fired at a certain point
in the combustion cycle, but to obtain better
performance, the phase relationship between
the piston’s movement and the spark should
change as a function of engine speed.
Using a microcontroller that senses the engine
crankshaft position allows the spark timing to
vary with engine speed.
Automobile engine controllers use additional sensors, including the gas pedal
position and an oxygen sensor used to control emissions. They also use a
multimode control scheme.
For example, one mode may be used for engine warm-up, another for cruise, and
yet another for climbing steep hills, and so forth.
The throttle setting must be sampled and acted upon regularly, although not as
frequently as the crankshaft setting and the spark plugs. The oxygen sensor
responds much more slowly than the throttle, so adjustments to the fuel/air
mixture suggested by the oxygen sensor can be computed at a much lower rate.
The engine controller takes a variety of inputs that determine the state of the
engine.
It then controls two basic engine parameters:
1. the spark plug firings and
2. the fuel/air mixture.
Example: Automotive engine control
Typical rates in engine controllers
Variable | Full range time (ms) | Update period (ms)
Engine spark timing | 300 | 2
Throttle | 40 | 2
Air flow | 30 | 4
Battery voltage | 80 | 4
Fuel flow | 250 | 10
Recycled exhaust gas | 500 | 25
Status switches | 100 | 20
Air temperature | Seconds | 400
Barometric pressure | Seconds | 1000
Spark (dwell) | 10 | 1
Fuel adjustment | 80 | 8
Carburetor | 500 | 25
Mode actuators | 100 | 100
Real-time systems
• Perform a computation to conform to external timing
constraints.
• Deadline frequency:
– Periodic.
– Aperiodic.
• Deadline type:
– Hard: failure to meet deadline causes system failure.
– Soft: failure to meet deadline causes degraded response.
– Firm: late response is useless but some late responses can
be tolerated.
Timing requirement (specifications) on
processes
• Two important requirements on processes:-
• Release time: time at which process becomes ready to
execute.
– (this is not necessarily the time at which it actually
takes control of the CPU and starts to run.)
• Deadline: time at which process must finish.
• The deadline for an aperiodic process is generally
measured from the release time, since that is the only
reasonable time reference.
Processes can have several different types of
timing requirements imposed on them by the
application.
Release times and deadlines
[Figure: an aperiodic process P1 is released by an initiating event and must finish by its deadline; a periodic process may be initiated at the start of each period, or initiated by an event, with its deadline at the end of the period.]
Example definitions of release times
and deadlines
Rate requirements on processes
• Period: interval
between process
activations.
• Rate: reciprocal of
period.
A rate requirement specifies how quickly processes
must be initiated.
Figure illustrates process execution in a system with four CPUs. The various
execution instances of program P1 have been subscripted to distinguish their
initiation times. In this case, the initiation interval is equal to one fourth of the
period.
It is possible for a process to have an initiation interval that is
shorter than its period, even in single-CPU systems.
Timing violations
• What happens if a process doesn’t finish by its
deadline?
• The results can be catastrophic in an automotive
control system,
• whereas a missed deadline in a multimedia
system may cause an audio or video glitch.
– Hard deadline: system fails if missed.
– Soft deadline: user may notice, but system doesn’t
necessarily fail.
EX:
Example: Space Shuttle software error
• Space Shuttle’s first launch was delayed by a software
timing error:
– Primary Avionics Software System (PASS) and Backup Flight
Control System (BFS).
– PASS used an asynchronous, priority-driven software
architecture. ( 4 computers)
– The BFS, in contrast, used a time-slot system that allocated a
fixed amount of time to each process
– BFS failed to synchronize with PASS.
– Change to one routine added delay that threw off start time
calculation.
– 1 in 67 chance of timing problem.
The system incorrectly calculated its start time.
Task graphs
• Tasks may have data
dependencies---must execute in
certain order.
• Task graph shows data/control
dependencies between processes.
• Task: connected set of processes.
• Task set: One or more tasks.
[Figure: a task set made of two tasks. Task 1 is a task graph of processes P1–P4 with data dependencies among them; task 2 contains P5 and P6. Together the two tasks form the task set.]
The order of execution of processes may be constrained when the
processes pass data between each other. Figure shows a set of
processes with data dependencies among them.
“A set of processes with data dependencies is known as a task graph”
Communication between tasks
• The task graph assumes that all
processes in each task run
at the same rate and that tasks
do not communicate.
• In reality, some amount of
inter-task communication is
necessary.
– It’s hard to require immediate
response for multi-rate
communication.
Communication among processes that run at different rates cannot be
represented by data dependencies because there is no one-to-one relationship
between data coming out of the source process and going into the destination
process.
Communication among processes
at different rates.
Process execution characteristics
• Process execution time Ti.
– Execution time in absence of preemption.
– Possible time units: seconds, clock cycles.
– Worst-case, best-case execution time may be useful in some
cases.
• Sources of variation:
– Data dependencies.
– Memory system.
– CPU pipeline.
CPU Metrics
• The initiation time is the time at which a process
actually starts executing on the CPU.
• The completion time is the time at which the
process finishes its work.
• The most basic measure of work is the amount of
CPU time expended by a process.
• The CPU time of process ‘i’ is called Ci .
– Note that the CPU time is not equal to the completion
time minus initiation time; several other processes
may interrupt execution.
The total CPU time consumed by a set of processes is

T = C1 + C2 + … + Cn = Σi Ci

We need a basic measure of the efficiency with which we use
the CPU. The simplest and most direct measure is utilization. If we measure
the total execution time of all processes over an interval of time t,
then the CPU utilization is

U = T / t

The utilization is often expressed as a percentage; the ratio ranges
between 0 and 1.
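As a quick sketch of these two formulas (the function names are ours, not from the text):

```python
def total_cpu_time(cpu_times):
    """Total CPU time consumed by a set of processes: T = sum of the Ci."""
    return sum(cpu_times)

def cpu_utilization(cpu_times, interval):
    """Utilization U = T / t over a measurement interval t; 0 <= U <= 1."""
    return total_cpu_time(cpu_times) / interval
```

For example, processes consuming 1, 2, and 3 ms of CPU time over a 10 ms interval give a utilization of 0.6, i.e., 60%.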
Process State and Scheduling
• The first job of the OS is to determine which
process runs next.
• The work of choosing the order of running
processes is known as scheduling.
• The OS considers a process to be in one of
three basic scheduling states:
• waiting,
• ready,
• executing.
• A process can be in one
of three states:
– executing on the CPU;
– ready to run;
– waiting for data.
The scheduling problem
• Can we meet all deadlines?
– Must be able to meet deadlines in all cases.
• How much CPU horsepower do we need to
meet our deadlines?
Scheduling feasibility
• Resource constraints
make schedulability
analysis NP-hard.
– Must show that the
deadlines are met for
all timings of resource
requests.
[Figure: processes P1 and P2 both request the same I/O device.]
Simple processor feasibility
• Assume:
– No resource conflicts.
– Constant process
execution times.
• Require:
– T ≥ Σi Ti
– Can’t use more than
100% of the CPU.
[Figure: execution times T1, T2, T3 packed into one period T.]
Hyperperiod
• Hyperperiod: least common multiple (LCM) of
the task periods.
• Must look at the hyperperiod schedule to find
all task interactions.
• Hyperperiod can be very long if task periods
are not chosen carefully.
Hyperperiod example
• Long hyperperiod:
– P1 7 ms.
– P2 11 ms.
– P3 15 ms.
– LCM = 1155 ms.
• Shorter hyperperiod:
– P1 8 ms.
– P2 12 ms.
– P3 16 ms.
– LCM = 48 ms.
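A quick check of both hyperperiods with the least common multiple (`math.lcm` needs Python 3.9+):

```python
import math

def hyperperiod(periods):
    """Hyperperiod: least common multiple (LCM) of the task periods."""
    return math.lcm(*periods)

long_h = hyperperiod([7, 11, 15])    # nearly coprime periods
short_h = hyperperiod([8, 12, 16])   # harmonically related periods
```

Choosing 7/11/15 ms periods gives a 1155 ms hyperperiod, while 8/12/16 ms gives only 48 ms (2⁴·3), which is why task periods should be chosen carefully.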
Simple processor feasibility example
• P1 period 1 ms, CPU
time 0.1 ms.
• P2 period 1 ms, CPU
time 0.2 ms.
• P3 period 5 ms, CPU
time 0.3 ms.
LCM (hyperperiod) = 5 ms

Process | Period | CPU time | CPU time/LCM
P1 | 1 ms | 0.1 ms | 0.5 ms
P2 | 1 ms | 0.2 ms | 1.0 ms
P3 | 5 ms | 0.3 ms | 0.3 ms

total CPU time/LCM = 1.8 ms
utilization = 1.8/5 = 0.36 (36%)
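The utilization in this example can be reproduced by summing Ci/Pi over the processes (values as above, in seconds; the function name is ours):

```python
def utilization(tasks):
    """Total utilization of a periodic task set: U = sum(Ci / Pi)."""
    return sum(c / p for p, c in tasks)

# (period, CPU time) pairs in seconds, matching the example
tasks = [(1e-3, 0.1e-3), (1e-3, 0.2e-3), (5e-3, 0.3e-3)]
u = utilization(tasks)   # 0.1 + 0.2 + 0.06 = 0.36
```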
scheduling policy
1. Cyclostatic/TDMA
• Schedule in time
slots.
– Same process
activation
irrespective of
workload.
• Time slots may be
equal size or
unequal.
[Figure: a cyclostatic schedule; the same slots T1, T2, T3 repeat identically in every period P.]
One very simple scheduling policy is known as cyclostatic scheduling or sometimes as Time Division Multiple Access (TDMA) scheduling. As illustrated in the figure, a cyclostatic schedule is divided into equal-sized time slots over an interval equal to the length of the hyperperiod H. Processes always run in the same time slot.
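A minimal sketch of this policy (names are illustrative): every hyperperiod replays exactly the same slot assignment, whether or not a process has useful work.

```python
def cyclostatic_schedule(slot_assignment, n_hyperperiods):
    """Cyclostatic/TDMA: the fixed slot assignment simply repeats every
    hyperperiod, independent of the workload."""
    schedule = []
    for _ in range(n_hyperperiods):
        schedule.extend(slot_assignment)   # same slots, same order, every time
    return schedule
```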
TDMA assumptions
• Schedule based on
least common
multiple (LCM) of
the process periods.
• Trivial scheduler ->
very small
scheduling
overhead.
[Figure: TDMA slots for P1 and P2 repeating over the LCM period P_LCM.]
TDMA schedulability
• Always same CPU utilization (assuming
constant process execution times).
• Can’t handle unexpected loads.
– Must schedule a time slot for aperiodic events.
TDMA schedulability example
• TDMA period = 10 ms.
• P1 CPU time 1 ms.
• P2 CPU time 3 ms.
• P3 CPU time 2 ms.
• P4 CPU time 2 ms.
TDMA period = 10 ms

Process | CPU time
P1 | 1 ms
P2 | 3 ms
P3 | 2 ms
P4 | 2 ms

total = 8 ms
utilization = 8/10 = 0.8 (80%)
scheduling policy
2. Round-robin
• Schedule process only
if ready.
– Always test processes
in the same order.
• Variations:
– Constant system
period.
– Start round-robin again
after finishing a round.
[Figure: a round-robin schedule; in the first period T1, T2, and T3 all run, but in the second period T1 has no work, so only T2 and T3 run.]
Another scheduling policy that is slightly more sophisticated is round robin.
As illustrated in the figure, round robin uses the same hyperperiod as does cyclostatic. It also evaluates the processes in order. But unlike cyclostatic scheduling, if a process does not have any useful work to do, the round-robin scheduler moves on to the next process in order to fill the time slot with useful work. In this example, all three processes execute during the first hyperperiod, but during the second one, P1 has no useful work and is skipped.
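The behavior just described can be sketched in a simplified model in which each process either has work in a given hyperperiod or not (function and variable names are ours):

```python
def round_robin_schedule(order, has_work):
    """Round robin: walk the processes in a fixed order each hyperperiod,
    but skip any process with no useful work that round.
    order: list of process names; has_work: {name: [bool per hyperperiod]}."""
    schedule = []
    n_rounds = len(next(iter(has_work.values())))
    for rnd in range(n_rounds):
        for p in order:
            if has_work[p][rnd]:
                schedule.append(p)
    return schedule

# As in the example: all three run in the first hyperperiod; P1 idles in the second.
sched = round_robin_schedule(["P1", "P2", "P3"],
                             {"P1": [True, False],
                              "P2": [True, True],
                              "P3": [True, True]})
```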
Round-robin assumptions
• Schedule based on least common multiple
(LCM) of the process periods.
• Best done with equal time slots for processes.
• Simple scheduler -> low scheduling overhead.
– Can be implemented in hardware.
Round-robin schedulability
• Can bound maximum CPU load.
– May leave unused CPU cycles.
• Can be adapted to handle unexpected load.
– Use time slots at end of period.
Schedulability and overhead
• The scheduling process consumes CPU time.
– Not all CPU time is available for processes.
• Scheduling overhead must be taken into
account for exact schedule.
– May be ignored if it is a small fraction of total
execution time.
PREEMPTIVE REAL-TIME OPERATING SYSTEMS
A RTOS executes processes based upon timing constraints provided by the system
designer.
The most reliable way to meet timing constraints accurately is to build a
preemptive OS and to use priorities to control what process runs at any given time.
Preemption
Preemption is an alternative to the C function call as a way to control execution.
To be able to take full advantage of the timer, we must change our notion of a process
as something more than a function call. We must, in fact, break the assumptions of
our high-level programming language. We will create new routines that allow us to
jump from one subroutine to another at any point in the program. That, together
with the timer, will allow us to move between functions whenever necessary based
upon the system’s timing constraints.
We want to share the CPU across two processes. The kernel is the part of
the OS that determines what process is running. The kernel is activated periodically
by the timer. The length of the timer period is known as the time quantum
because it is the smallest increment in which we can control CPU activity. The
kernel determines what process will run next and causes that process to run. On
the next timer interrupt, the kernel may pick the same process or another process
to run.
We can similarly use assembly language to restore registers, not
from the process that was interrupted by the timer, but
from any process we want.
The set of registers that define a process are known as its context
and switching from one process’s register set to another is known
as context switching.
The data structure that holds the state of the process is known as
the process control block.
context switching
We want a mechanism that executes quickly so that we don’t spend all
our time in the kernel and starve out the processes that do the useful
work.
If we assign each task a numerical priority, then the kernel can simply
look at the processes and their priorities, see which ones actually
want to execute (some may be waiting for data or for some event), and
select the highest-priority process that is ready to run. This mechanism
is both flexible and fast.
The priority is a non-negative integer value.
The exact value of the priority is not as important as the relative
priority of different processes.
We will generally use priority 1 as the highest priority.
Priorities
How does the kernel determine which process will run next?
Priority-driven scheduling EX
• Each process has a priority.
• CPU goes to highest-priority process that is
ready.
• Priorities determine scheduling policy:
– fixed priority;
– time-varying priorities.
Simple Rules
Priority-driven scheduling example
• Rules:
– each process has a fixed priority (1 highest);
– highest-priority ready process gets CPU;
– process continues until done (unless preempted by a higher-priority process).
• Processes
– P1: priority 1, execution time 10
– P2: priority 2, execution time 30
– P3: priority 3, execution time 20
Priority-driven scheduling example
P2 becomes ready at t = 0, P1 at t = 15, and P3 at t = 18.
Resulting schedule: P2 runs from t = 0 to 15; P1 preempts P2 and runs from t = 15 to 25; P2 resumes from t = 25 to 40; P3 runs from t = 40 to 60.
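A unit-time simulation of these rules (our own sketch, not code from the text) reproduces the schedule: P2 runs 0–15, P1 preempts and runs 15–25, P2 resumes 25–40, and P3 runs 40–60.

```python
def fixed_priority_sim(procs, horizon):
    """Preemptive fixed-priority scheduling, one time unit per step.
    procs: {name: (priority, release_time, exec_time)}; priority 1 is highest.
    Returns {name: completion_time}."""
    remaining = {n: e for n, (_, _, e) in procs.items()}
    finish = {}
    for t in range(horizon):
        ready = [n for n, (_, rel, _) in procs.items()
                 if rel <= t and remaining[n] > 0]
        if not ready:
            continue
        run = min(ready, key=lambda n: procs[n][0])  # lowest number = highest priority
        remaining[run] -= 1
        if remaining[run] == 0:
            finish[run] = t + 1
    return finish

finish = fixed_priority_sim(
    {"P1": (1, 15, 10), "P2": (2, 0, 30), "P3": (3, 18, 20)}, 60)
```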
PRIORITY-BASED SCHEDULING
• In a priority-based context-switching mechanism,
we have to determine an algorithm by which to
assign priorities to processes.
• After assigning priorities, the OS takes care of the
rest by choosing the highest-priority ready
process.
• There are two major ways to assign priorities:
– static priorities that do not change during execution
and
– dynamic priorities that do change.
Metrics
• How do we evaluate a scheduling policy?:
– Ability to satisfy all deadlines.
– CPU utilization---percentage of time devoted to
useful work.
– Scheduling overhead---time required to make
scheduling decision.
• 1. Rate-Monotonic Scheduling
• Rate-monotonic scheduling (RMS), introduced
by Liu and Layland [Liu73],was one of the first
scheduling policies developed for real-time
systems and is still very widely used.
• RMS is a static scheduling policy. It turns out
that these fixed priorities are sufficient to
efficiently schedule the processes in many
situations.
RMS assumes a simple model of the system,
which should conform to:
• All processes run on a single CPU.
• Zero context switch time.
• No data dependencies between processes.
• Process execution time is constant.
• Deadline is at end of period.
• Highest-priority ready process runs.
RMS priorities
• Optimal (fixed) priority assignment:
– shortest-period process gets highest priority;
– priority inversely proportional to period;
• No fixed-priority scheme does better.
This fixed-priority scheduling policy is the optimum
assignment of static priorities to processes, in that it
provides the highest CPU utilization while ensuring that
all processes meet their deadlines.
RMS example
Consider P1 with execution time 2 and period 4, P2 with execution time 3 and period 6, and P3 with execution time 3 and period 12.
During one 12-time-unit interval,
we must execute P1 three times, requiring 6 units of CPU time;
P2 twice, costing 6 units of CPU time;
and P3 one time, requiring 3 units of CPU time.
The total of 6 + 6 + 3 = 15 units of CPU time is more than the 12
time units available, clearly exceeding the available CPU capacity.
Rate-monotonic analysis
• Response time: time required to finish
process.
• Critical instant: scheduling state that gives
worst response time.
• Critical instant occurs when all higher-priority
processes are ready to execute.
Critical instant
[Figure: critical instant for P4. The higher-priority interfering processes P1, P2, and P3 all become ready at the same instant; P1 runs four times, P2 twice, and P3 once before P4 gets the CPU.]
The critical instant for a process is defined as the instant during execution at which the task has the largest response time.
RMS CPU utilization.
• RMS cannot use 100% of CPU, even with zero
context switch overhead.
• Must keep idle cycles available to handle
worst-case scenario.
• However, RMS guarantees all processes will
always meet their deadlines.
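The classic Liu and Layland result (cited earlier as [Liu73]) makes this precise: for n tasks, RMS guarantees all deadlines whenever total utilization stays at or below n(2^(1/n) − 1), which falls toward ln 2 ≈ 69.3% as n grows. A quick computation:

```python
def rms_bound(n):
    """Liu-Layland least upper bound on schedulable utilization
    for n tasks under RMS: U(n) = n * (2**(1/n) - 1)."""
    return n * (2 ** (1 / n) - 1)
```

For one task the bound is 100%, for two tasks about 82.8%, and for large task sets it approaches 69.3%; a task set below the bound for its n is guaranteed schedulable by RMS.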
2.Earliest-deadline-first scheduling
• EDF: dynamic priority scheduling scheme—it
changes process priorities during execution
based on initiation times.
• Process closest to its deadline has highest
priority.
• Requires recalculating process priorities at every
timer interrupt.
EDF analysis
• EDF can use 100% of CPU.
• But EDF may fail to meet a deadline (for example, under transient overload).
EDF example
Liu and Layland showed that EDF can achieve 100% utilization
EDF implementation
• On each timer interrupt:
– compute time to deadline;
– choose process closest to deadline.
• Generally considered too expensive to use in
practice.
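Despite the cost, the per-interrupt recalculation is easy to state. A minimal unit-time EDF simulator (our own sketch; ties broken by name, and the horizon should be a multiple of the hyperperiod):

```python
def edf_meets_deadlines(tasks, horizon):
    """Unit-time EDF simulation for periodic tasks.
    tasks: {name: (period, exec_time)}; each job's deadline is the end of
    its period.  Returns True iff no job misses a deadline within horizon."""
    jobs = []  # each job: [absolute_deadline, name, remaining_time]
    for t in range(horizon):
        for name, (period, exec_time) in tasks.items():
            if t % period == 0:
                jobs.append([t + period, name, exec_time])
        if any(dl <= t and rem > 0 for dl, _, rem in jobs):
            return False               # a job's deadline has already passed
        if jobs:
            jobs.sort()                # earliest deadline gets the CPU
            jobs[0][2] -= 1
            if jobs[0][2] == 0:
                jobs.pop(0)
    return all(rem == 0 for _, _, rem in jobs)
```

At 100% utilization ({"P1": (2, 1), "P2": (4, 2)}) EDF still meets every deadline, while an overloaded set ({"P1": (2, 1), "P2": (4, 3)}) fails.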
RMS vs EDF
RMS | EDF
Achieves lower CPU utilization | Achieves higher CPU utilization
Easier to ensure that all deadlines will be satisfied | Harder to ensure deadlines
Static priority scheduling | Dynamic priority scheduling
Inexpensive to use in practice | Expensive to use in practice
Shortest-period process gets highest priority | Process closest to its deadline gets highest priority
Fixing scheduling problems
• What if your set of processes is
unschedulable?
– Change deadlines in requirements.
– Reduce execution times of processes.
– Get a faster CPU.
Priority inversion
• Priority inversion: low-priority process keeps high-priority
process from running.
• Improper use of system resources can cause scheduling
problems:
– Low-priority process grabs I/O device.
– High-priority process needs I/O device, but can't get it until
low-priority process is done.
• Can cause deadlock.
So far, we have assumed that each process is totally self contained.
However, that is not always the case—for instance, a process may need a
system resource, such as an I/O device or the bus, to complete its work.
Scheduling the processes without considering the resources those
processes require can cause priority inversion, in which a low-priority
process blocks execution of a higher priority process by keeping hold of
its resource.
Solving priority inversion
• Give priorities to system resources.
• Have process inherit the priority of a resource
that it requests.
– Low-priority process inherits priority of device if
higher.
The most common method for dealing with priority
inversion is to promote the priority of any process
when it requests a resource from the OS.
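A toy sketch of that promotion rule (class and function names are ours, not an RTOS API):

```python
class Process:
    def __init__(self, name, priority):           # 1 = highest priority
        self.name = name
        self.base_priority = priority
        self.priority = priority

class Resource:
    def __init__(self):
        self.holder = None

def request(resource, proc):
    """Try to grab the resource; on contention, the holder inherits the
    requester's priority if it is higher (priority inheritance)."""
    if resource.holder is None:
        resource.holder = proc
        return True
    if proc.priority < resource.holder.priority:
        resource.holder.priority = proc.priority  # promote the holder
    return False

def release(resource):
    """Drop the resource and restore the holder's base priority."""
    resource.holder.priority = resource.holder.base_priority
    resource.holder = None

low, high = Process("low", 10), Process("high", 1)
dev = Resource()
request(dev, low)            # low-priority process grabs the device first
request(dev, high)           # high-priority process blocks; low is promoted
promoted = low.priority      # low now runs at priority 1
release(dev)
restored = low.priority      # back to its base priority, 10
```

The promotion lets the low-priority holder finish quickly and free the device, instead of being starved by medium-priority processes while the high-priority process waits.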
Interprocess communication
mechanisms
Interprocess communication
Processes often need to communicate with each other.
Interprocess communication mechanisms are provided by the
operating system as part of the process abstraction.
In general, a process can send a communication in one of
two ways:
Blocking
Non blocking.
After sending a blocking communication, the process goes
into the waiting state until it receives a response.
Non blocking communication allows the process to
continue execution after sending the communication.
There are three major styles of interprocess communication:
1. shared memory
2. message passing
3. signals
Shared memory and message passing are logically equivalent—given one, you can build an
interface that implements the other.
Shared Memory Communication
Two components, such as a CPU and an I/O device, communicate through a shared memory location. The software on the CPU has been designed to know the address of the shared location; the shared location has also been loaded into the proper register of the I/O device. If, as in the figure, the CPU wants to send data to the device, it writes to the shared location. The I/O device then reads the data from that location. The read and write operations are standard and can be encapsulated in a procedural interface.
EX: Shared Memory Communication
(Elastic buffers as shared memory)
The text compressor uses the CPU to compress incoming text, which is then sent on a serial line by a UART.
The input data arrive at a constant rate and are easy to manage. But because the output data are consumed at a variable rate, these data require an elastic buffer. The CPU and output UART share a memory area—the CPU writes compressed characters into the buffer and the UART removes them as necessary to fill the serial line. Because the number of bits in the buffer changes constantly, the compression and transmission processes need additional size information. In this case, coordination is simple—the CPU writes at one end of the buffer and the UART reads at the other end. The only challenge is to make sure that the UART does not overrun the buffer.
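A minimal circular-buffer sketch of that coordination (our own illustration; a real implementation would move bits on real hardware, not Python list entries):

```python
class ElasticBuffer:
    """CPU writes at one end, UART reads at the other; the count is the
    extra size information both sides need to avoid overrunning the buffer."""
    def __init__(self, size):
        self.data = [0] * size
        self.size = size
        self.head = 0            # next position the UART reads
        self.count = 0           # items currently buffered

    def write(self, bit):        # CPU side: append a compressed bit
        if self.count == self.size:
            return False         # full: writing now would overrun
        self.data[(self.head + self.count) % self.size] = bit
        self.count += 1
        return True

    def read(self):              # UART side: drain one bit onto the line
        if self.count == 0:
            return None          # nothing to send
        bit = self.data[self.head]
        self.head = (self.head + 1) % self.size
        self.count -= 1
        return bit
```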
Race condition in shared memory
• Problem when two CPUs try to write the same
location:
– CPU 1 reads flag and sees 0.
– CPU 2 reads flag and sees 0.
– CPU 1 sets flag to one and writes location.
– CPU 2 sets flag to one and overwrites location.
Atomic test-and-set
• Problem can be solved with an atomic test-
and-set:
– single bus operation reads memory location, tests
it, writes it.
• On ARM, test-and-set is provided by the SWP (swap) instruction, which atomically exchanges a register with a memory location.
A test-and-set can be used to implement a semaphore (used to
guard access to a block of protected memory), which is a
language-level synchronization construct.
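To illustrate the idea only (plain Python cannot make the read-modify-write atomic the way SWP does in hardware, so treat this as a single-threaded sketch of the logic):

```python
class SpinLock:
    """Lock built on a test-and-set primitive.  On real hardware the body of
    test_and_set is a single atomic bus operation; here it is merely sequential."""
    def __init__(self):
        self.flag = 0

    def test_and_set(self):
        old = self.flag      # read the memory location
        self.flag = 1        # write 1 back
        return old           # old value tells us whether the lock was held

    def acquire(self):
        while self.test_and_set() == 1:
            pass             # spin: someone else held the lock

    def release(self):
        self.flag = 0

lock = SpinLock()
lock.acquire()                   # succeeds immediately: flag was 0
was_held = lock.test_and_set()   # now returns 1: lock already taken
lock.release()
```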
Critical regions
• Critical region: section of code that cannot be
interrupted by another process.
• Examples:
– writing shared memory;
– accessing I/O device.
Semaphores
• Semaphore: OS primitive for controlling access
to critical regions.
• Protocol:
– Get access to semaphore with P().
– Perform critical region operations.
– Release semaphore with V().
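The P()/V() protocol maps directly onto acquire() and release() of a counting semaphore; here `threading.Semaphore` stands in for the OS primitive as a sketch:

```python
import threading

sem = threading.Semaphore(1)   # one process at a time in the critical region
shared = []                    # shared memory guarded by the semaphore

def writer(value):
    sem.acquire()              # P(): get access to the semaphore
    shared.append(value)       # critical region: operate on shared memory
    sem.release()              # V(): release the semaphore

threads = [threading.Thread(target=writer, args=(i,)) for i in range(5)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

Every writer enters the critical region alone, so all five values land in the shared buffer without interfering with each other.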
2. Message passing
[Figure: message passing between CPU 1 and CPU 2; each CPU has its own message send/receive unit, and messages travel over the link between them.]
Message passing communication complements the shared memory model. Each communicating entity has its own message send/receive unit. The message is not stored on the communications link, but rather at the senders/receivers at the end points.
In contrast, shared memory communication can be seen as a memory block used as a communication device, in which all the data are stored in the communication link/memory.
• Applications in which units operate relatively autonomously are
natural candidates for message passing communication.
• For example, a home control system has one microcontroller per
household device—lamp, thermostat, faucet, appliance, and so on.
• The devices must communicate relatively infrequently;
furthermore, their physical separation is large enough that we
would not naturally think of them as sharing a central pool of
memory. Passing communication packets among the devices is a
natural way to describe coordination between these devices.
• Message passing is the natural implementation of communication
in many 8-bit microcontrollers that do not normally operate with
external memory.
2. Message passing Example
3.Signals
• More general than Unix signal---may carry
arbitrary data:
[Figure: UML notation for a signal. An object of someClass uses <<send>> to transmit the signal aSig, which carries a parameter p : integer; the receiving object handles it in sigbehavior().]
Another form of interprocess communication commonly used in Unix is
the signal.
A signal is simple because it does not pass data beyond the existence of
the signal itself.
A signal is analogous to an interrupt, but it is entirely a software
creation.
A signal is generated by a process and transmitted to another process by
the operating system.
EVALUATING OPERATING SYSTEM
PERFORMANCE
• Simplifying assumptions from scheduling policy:
– Context switching costs no CPU time.
– We know the exact execution time of processes.
– WCET (worst-case execution time)/BCET (best-case
execution time) don't depend on context switches.
– We probably determined worst-case or best-case
times for the processes in isolation. But, in fact, they
interact with each other in the cache.
Cache conflicts among processes can drastically degrade
process execution time.
1. The zero-time context switch assumption used in the analysis of
RMS is not correct.
The effects of nonzero context switching time must be carefully
analyzed in the context of a particular implementation to be sure
that the predictions of an ideal scheduling policy are sufficiently
accurate.
Example below shows that context switching can, in fact, cause a
system to miss a deadline.
Scheduling and context switch
overhead
Process | Execution time | Deadline
P1 | 3 | 5
P2 | 3 | 10

With a context switch overhead of 1, there is no feasible schedule:
over the 10-unit hyperperiod, 2(1 + 3) + (1 + 3) = 12 > 10.
In this example, overhead was a large
fraction of the process execution time and
of the periods.
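The arithmetic generalizes: charge one context-switch cost per process activation over the hyperperiod and compare the total demand against the available time. A sketch (the function name is ours):

```python
def demand_over_hyperperiod(tasks, hyperperiod, switch_cost):
    """CPU demand over one hyperperiod when every process activation
    first pays one context-switch cost.
    tasks: {name: (period, exec_time)}."""
    return sum((hyperperiod // period) * (switch_cost + exec_time)
               for period, exec_time in tasks.values())

tasks = {"P1": (5, 3), "P2": (10, 3)}
ideal = demand_over_hyperperiod(tasks, 10, 0)   # 9 <= 10: feasible
real = demand_over_hyperperiod(tasks, 10, 1)    # 12 > 10: no feasible schedule
```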
2. Another important assumption we have made thus far is that
process execution time is constant.
As seen already, this is definitely not the case—both data-dependent behavior and caching effects can cause large variations in run times.
If we can determine worst-case execution time, then shorter run
times for a process simply mean unused CPU time. If we cannot
accurately bound WCET, then we will be left with a very
conservative estimate of execution time that will leave even more
CPU time unused.
Process execution time
• Process execution time is not constant.
• Extra CPU time can be good.
• Extra CPU time can also be bad:
– Next process runs earlier, causing new
preemption.
Processes and caches
• Processes can cause additional caching
problems.
– Even if individual processes are well-behaved,
processes may interfere with each other.
• Worst-case execution time with bad behavior
is usually much worse than execution time
with good cache behavior.
Effects of scheduling on the cache
Process | WCET | Avg. CPU time
P1 | 8 | 6
P2 | 4 | 3
P3 | 4 | 3

Schedule 1 (LRU cache):
Schedule 2 (half of cache reserved for P1):
Power optimization
• Power management: determining how system
resources are scheduled/used to control
power consumption.
• OS can manage for power just as it manages
for time.
• OS reduces power by shutting down units.
– May have partial shutdown modes.
A power management policy in general examines the state of
the system to determine when to take actions.
Power management and performance
• Power management and performance are
often at odds.
• Entering power-down mode consumes
– energy,
– time.
• Leaving power-down mode consumes
– energy,
– time.
Simple power management policies
• Request-driven: power up once request is
received. Adds delay to response.
• Predictive shutdown: try to predict how long
you have before next request.
– May start up in advance of request in anticipation
of a new request.
– If you predict wrong, you will incur additional
delay while starting up.
Probabilistic shutdown
• Assume service requests are probabilistic.
• Optimize expected values:
– power consumption;
– response time.
• Simple probabilistic: shut down after time Ton,
turn back on after waiting for Toff.
Advanced Configuration and Power
Interface
• ACPI: open standard for power management
services.
[Figure: the ACPI software stack. Applications run on the OS kernel, which contains the power management module; the kernel uses device drivers and the ACPI BIOS to control the underlying hardware platform.]
The Advanced Configuration and Power Interface (ACPI) is an open industry
standard for power management services. It is designed to be compatible with
a wide variety of OSs.
• ACPI provides some basic power management
facilities and abstracts the hardware layer. The
OS has its own power management module
that determines the policy, and the OS then
uses ACPI to send the required controls to the
hardware and to observe the hardware's state
as input to the power manager.
ACPI supports the following five basic
global power states:
• G3, the mechanical off state, in which the system consumes no power.
• G2, the soft off state, which requires a full OS reboot to restore the machine
to working condition.
• G1, the sleeping state, in which the system appears to be off and the time
required to return to working condition is inversely proportional to power
consumption.
– This state has four substates:
• S1, a low wake-up latency state with no loss of system context;
• S2, a low wake-up latency state with a loss of CPU and system cache state;
• S3, a low wake-up latency state in which all system state except for main
memory is lost; and
• S4, the lowest-power sleeping state, in which all devices are turned off.
• G0, the working state, in which the system is fully usable.
• The legacy state, in which the system does not comply with ACPI.
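The global states above can be modeled as a simple enumeration; a sketch (the enum names and the helper function are illustrative, not part of the ACPI interface itself):

```c
/* ACPI global power states, ordered roughly from fully working
 * to mechanically off.  LEGACY marks a non-ACPI-compliant system. */
enum acpi_global_state {
    ACPI_G0_WORKING,    /* system fully usable */
    ACPI_G1_SLEEPING,   /* substates S1..S4 */
    ACPI_G2_SOFT_OFF,   /* full OS reboot needed to resume */
    ACPI_G3_MECH_OFF,   /* no power consumed */
    ACPI_LEGACY
};

/* Illustrative helper: can the system resume work from this state
 * without a full OS reboot? */
static int resumes_without_reboot(enum acpi_global_state s)
{
    return s == ACPI_G0_WORKING || s == ACPI_G1_SLEEPING;
}
```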
13009690.ppt
13009690.ppt
13009690.ppt
13009690.ppt
13009690.ppt
13009690.ppt
13009690.ppt

More Related Content

Similar to 13009690.ppt

Real time os(suga)
Real time os(suga) Real time os(suga)
Real time os(suga)
Nagarajan
 

Similar to 13009690.ppt (20)

Os notes 1_5
Os notes 1_5Os notes 1_5
Os notes 1_5
 
Unit 2_OS process management
Unit 2_OS process management Unit 2_OS process management
Unit 2_OS process management
 
EMBEDDED OS
EMBEDDED OSEMBEDDED OS
EMBEDDED OS
 
Real time operating system
Real time operating systemReal time operating system
Real time operating system
 
cs1311lecture25wdl.ppt
cs1311lecture25wdl.pptcs1311lecture25wdl.ppt
cs1311lecture25wdl.ppt
 
Autosar Basics hand book_v1
Autosar Basics  hand book_v1Autosar Basics  hand book_v1
Autosar Basics hand book_v1
 
Real Time Kernels and Operating Systems.pptx
Real Time Kernels and Operating Systems.pptxReal Time Kernels and Operating Systems.pptx
Real Time Kernels and Operating Systems.pptx
 
Types of operating system.................
Types of operating system.................Types of operating system.................
Types of operating system.................
 
Real time os(suga)
Real time os(suga) Real time os(suga)
Real time os(suga)
 
Insider operating system
Insider   operating systemInsider   operating system
Insider operating system
 
May14ProcessScheduling.ppt
May14ProcessScheduling.pptMay14ProcessScheduling.ppt
May14ProcessScheduling.ppt
 
pscheduling.ppt
pscheduling.pptpscheduling.ppt
pscheduling.ppt
 
Operating System.pptx
Operating System.pptxOperating System.pptx
Operating System.pptx
 
Operating System
Operating SystemOperating System
Operating System
 
Computer system architecture
Computer system architectureComputer system architecture
Computer system architecture
 
BITS 1213 - OPERATING SYSTEM (PROCESS,THREAD,SYMMETRIC MULTIPROCESSOR,MICROKE...
BITS 1213 - OPERATING SYSTEM (PROCESS,THREAD,SYMMETRIC MULTIPROCESSOR,MICROKE...BITS 1213 - OPERATING SYSTEM (PROCESS,THREAD,SYMMETRIC MULTIPROCESSOR,MICROKE...
BITS 1213 - OPERATING SYSTEM (PROCESS,THREAD,SYMMETRIC MULTIPROCESSOR,MICROKE...
 
Engg-0505-IT-Operating-Systems-2nd-year.pdf
Engg-0505-IT-Operating-Systems-2nd-year.pdfEngg-0505-IT-Operating-Systems-2nd-year.pdf
Engg-0505-IT-Operating-Systems-2nd-year.pdf
 
Round Robin Algorithm.pptx
Round Robin Algorithm.pptxRound Robin Algorithm.pptx
Round Robin Algorithm.pptx
 
CPU Scheduling
CPU SchedulingCPU Scheduling
CPU Scheduling
 
CPU scheduling in Operating System Explanation
CPU scheduling in Operating System ExplanationCPU scheduling in Operating System Explanation
CPU scheduling in Operating System Explanation
 

More from ssuser4ca1eb (8)

lightning_go.pptx
lightning_go.pptxlightning_go.pptx
lightning_go.pptx
 
go.ppt
go.pptgo.ppt
go.ppt
 
Von Neuman vs Harward.docx
Von Neuman vs Harward.docxVon Neuman vs Harward.docx
Von Neuman vs Harward.docx
 
UNIT I_Introduction.pptx
UNIT I_Introduction.pptxUNIT I_Introduction.pptx
UNIT I_Introduction.pptx
 
12429908.ppt
12429908.ppt12429908.ppt
12429908.ppt
 
UNIT 3.pptx
UNIT 3.pptxUNIT 3.pptx
UNIT 3.pptx
 
Ch01_Intro_to_Embedded_Systems.pptx
Ch01_Intro_to_Embedded_Systems.pptxCh01_Intro_to_Embedded_Systems.pptx
Ch01_Intro_to_Embedded_Systems.pptx
 
Lec01_Course_Overview.ppt
Lec01_Course_Overview.pptLec01_Course_Overview.ppt
Lec01_Course_Overview.ppt
 

Recently uploaded

Structural Analysis and Design of Foundations: A Comprehensive Handbook for S...
Structural Analysis and Design of Foundations: A Comprehensive Handbook for S...Structural Analysis and Design of Foundations: A Comprehensive Handbook for S...
Structural Analysis and Design of Foundations: A Comprehensive Handbook for S...
Dr.Costas Sachpazis
 
UNIT-V FMM.HYDRAULIC TURBINE - Construction and working
UNIT-V FMM.HYDRAULIC TURBINE - Construction and workingUNIT-V FMM.HYDRAULIC TURBINE - Construction and working
UNIT-V FMM.HYDRAULIC TURBINE - Construction and working
rknatarajan
 
VIP Call Girls Ankleshwar 7001035870 Whatsapp Number, 24/07 Booking
VIP Call Girls Ankleshwar 7001035870 Whatsapp Number, 24/07 BookingVIP Call Girls Ankleshwar 7001035870 Whatsapp Number, 24/07 Booking
VIP Call Girls Ankleshwar 7001035870 Whatsapp Number, 24/07 Booking
dharasingh5698
 
AKTU Computer Networks notes --- Unit 3.pdf
AKTU Computer Networks notes ---  Unit 3.pdfAKTU Computer Networks notes ---  Unit 3.pdf
AKTU Computer Networks notes --- Unit 3.pdf
ankushspencer015
 

Recently uploaded (20)

Booking open Available Pune Call Girls Koregaon Park 6297143586 Call Hot Ind...
Booking open Available Pune Call Girls Koregaon Park  6297143586 Call Hot Ind...Booking open Available Pune Call Girls Koregaon Park  6297143586 Call Hot Ind...
Booking open Available Pune Call Girls Koregaon Park 6297143586 Call Hot Ind...
 
(INDIRA) Call Girl Meerut Call Now 8617697112 Meerut Escorts 24x7
(INDIRA) Call Girl Meerut Call Now 8617697112 Meerut Escorts 24x7(INDIRA) Call Girl Meerut Call Now 8617697112 Meerut Escorts 24x7
(INDIRA) Call Girl Meerut Call Now 8617697112 Meerut Escorts 24x7
 
Structural Analysis and Design of Foundations: A Comprehensive Handbook for S...
Structural Analysis and Design of Foundations: A Comprehensive Handbook for S...Structural Analysis and Design of Foundations: A Comprehensive Handbook for S...
Structural Analysis and Design of Foundations: A Comprehensive Handbook for S...
 
UNIT-II FMM-Flow Through Circular Conduits
UNIT-II FMM-Flow Through Circular ConduitsUNIT-II FMM-Flow Through Circular Conduits
UNIT-II FMM-Flow Through Circular Conduits
 
Unit 1 - Soil Classification and Compaction.pdf
Unit 1 - Soil Classification and Compaction.pdfUnit 1 - Soil Classification and Compaction.pdf
Unit 1 - Soil Classification and Compaction.pdf
 
UNIT - IV - Air Compressors and its Performance
UNIT - IV - Air Compressors and its PerformanceUNIT - IV - Air Compressors and its Performance
UNIT - IV - Air Compressors and its Performance
 
(INDIRA) Call Girl Aurangabad Call Now 8617697112 Aurangabad Escorts 24x7
(INDIRA) Call Girl Aurangabad Call Now 8617697112 Aurangabad Escorts 24x7(INDIRA) Call Girl Aurangabad Call Now 8617697112 Aurangabad Escorts 24x7
(INDIRA) Call Girl Aurangabad Call Now 8617697112 Aurangabad Escorts 24x7
 
UNIT-V FMM.HYDRAULIC TURBINE - Construction and working
UNIT-V FMM.HYDRAULIC TURBINE - Construction and workingUNIT-V FMM.HYDRAULIC TURBINE - Construction and working
UNIT-V FMM.HYDRAULIC TURBINE - Construction and working
 
Double rodded leveling 1 pdf activity 01
Double rodded leveling 1 pdf activity 01Double rodded leveling 1 pdf activity 01
Double rodded leveling 1 pdf activity 01
 
VIP Call Girls Ankleshwar 7001035870 Whatsapp Number, 24/07 Booking
VIP Call Girls Ankleshwar 7001035870 Whatsapp Number, 24/07 BookingVIP Call Girls Ankleshwar 7001035870 Whatsapp Number, 24/07 Booking
VIP Call Girls Ankleshwar 7001035870 Whatsapp Number, 24/07 Booking
 
data_management_and _data_science_cheat_sheet.pdf
data_management_and _data_science_cheat_sheet.pdfdata_management_and _data_science_cheat_sheet.pdf
data_management_and _data_science_cheat_sheet.pdf
 
Call Girls Pimpri Chinchwad Call Me 7737669865 Budget Friendly No Advance Boo...
Call Girls Pimpri Chinchwad Call Me 7737669865 Budget Friendly No Advance Boo...Call Girls Pimpri Chinchwad Call Me 7737669865 Budget Friendly No Advance Boo...
Call Girls Pimpri Chinchwad Call Me 7737669865 Budget Friendly No Advance Boo...
 
UNIT-III FMM. DIMENSIONAL ANALYSIS
UNIT-III FMM.        DIMENSIONAL ANALYSISUNIT-III FMM.        DIMENSIONAL ANALYSIS
UNIT-III FMM. DIMENSIONAL ANALYSIS
 
chapter 5.pptx: drainage and irrigation engineering
chapter 5.pptx: drainage and irrigation engineeringchapter 5.pptx: drainage and irrigation engineering
chapter 5.pptx: drainage and irrigation engineering
 
University management System project report..pdf
University management System project report..pdfUniversity management System project report..pdf
University management System project report..pdf
 
AKTU Computer Networks notes --- Unit 3.pdf
AKTU Computer Networks notes ---  Unit 3.pdfAKTU Computer Networks notes ---  Unit 3.pdf
AKTU Computer Networks notes --- Unit 3.pdf
 
Intze Overhead Water Tank Design by Working Stress - IS Method.pdf
Intze Overhead Water Tank  Design by Working Stress - IS Method.pdfIntze Overhead Water Tank  Design by Working Stress - IS Method.pdf
Intze Overhead Water Tank Design by Working Stress - IS Method.pdf
 
UNIT-IFLUID PROPERTIES & FLOW CHARACTERISTICS
UNIT-IFLUID PROPERTIES & FLOW CHARACTERISTICSUNIT-IFLUID PROPERTIES & FLOW CHARACTERISTICS
UNIT-IFLUID PROPERTIES & FLOW CHARACTERISTICS
 
Call for Papers - International Journal of Intelligent Systems and Applicatio...
Call for Papers - International Journal of Intelligent Systems and Applicatio...Call for Papers - International Journal of Intelligent Systems and Applicatio...
Call for Papers - International Journal of Intelligent Systems and Applicatio...
 
Thermal Engineering-R & A / C - unit - V
Thermal Engineering-R & A / C - unit - VThermal Engineering-R & A / C - unit - V
Thermal Engineering-R & A / C - unit - V
 

13009690.ppt

  • 1.
  • 3.  Introduction  Multiple tasks and multiple processes  Multi-rate systems  Pre-emptive real-time operating systems  Priority based scheduling  Inter-process communication mechanisms  Evaluating operating system performance  Power optimization strategies for processes  Example Real time operating systems -POSIX-Windows CE. UNIT III PROCESSES AND OPERATING SYSTEMS
  • 4. Processes and operating systems • Applications can be programmed on a microprocessor by writing a single piece of code, • But many applications are sophisticated enough that writing one large program does not suffice. • When multiple operations must be performed at widely varying times , a single program can easily become too complex and unwieldy. Introduction In this unit, we will study the two fundamental abstractions that allow us to build complex applications on microprocessors: the Process and the Operating System (OS) in turn RTOS
  • 5. RTOS • Real-time operating systems (RTOSs),which are OSs that provide facilities for satisfying real-time requirements . • A RTOS allocates resources using algorithms that take real time into account. • General-purpose Oss, in contrast, generally allocate resources using other criteria like fairness. • Trying to allocate the CPU equally to all processes without regard to time can easily cause processes to miss their deadlines.
  • 6. • Most embedded systems require functionality and timing that is too complex to embody in a single program. • So We break the system into multiple tasks in order to manage when things happen. • In this section we will develop the basic abstractions that will be manipulated by the RTOS to build multirate systems. Multiple tasks and multiple processes.
  • 7. Tasks and Processes • A task is a functional description of a connected set of operations. • Many (if not most) embedded computing systems do more than one thing—that is, the environment can cause mode changes that in turn cause the embedded system to behave quite differently. • Ex: Telephone answering machine
  • 8. • A process is a single execution of a program. • If we run the same program two different times, we have created two different processes. Each process has its own state that includes not only its registers but all of its memory. • In some OSs, the memory management unit is used to keep each process in a separate address space. • In others, particularly lightweight RTOSs, the processes run in the same address space. • “Processes that share the same address space are often called threads.” Tasks and Processes (Task can also mean a collection of processes.)
  • 9. • A process is a unique execution of a program. – Several copies of a program may run simultaneously or at different times. • A process has its own state: – registers; – memory. • The operating system manages processes. Tasks and Processes
  • 10. EX:-An on-the-fly compression box. This device is connected to serial ports on both ends. The input to the box is an uncompressed stream of bytes. The box emits a compressed string of bits on the output serial line, based on a redefined compression table. The program’s need to receive and send data at different rates— for example, the program may emit 2 bits for the first byte and then 7 bits for the second byte
  • 11. Why multiple processes? • Multiple tasks means multiple processes. • Processes help with timing complexity: – multiple rates • multimedia • automotive – asynchronous input • user interfaces • communication systems
  • 12. Reactive systems • Respond to external events. – Engine controller. – Seat belt monitor. • Requires real-time response. – System architecture. – Program implementation. • May require a chain reaction among multiple processors.
  • 13. Multi-rate systems • Tasks may be synchronous or asynchronous. • Synchronous tasks may recur at different rates. • Processes run at different rates based on computational needs of the tasks. • Multirate embedded computing systems are very common, including automobile engines, printers,and cell phones. • In all these systems, certain operations must be executed periodically, and each operation is executed at its own rate EX:
  • 14. Example: Automotive engine control • Tasks: – spark control – crankshaft sensing – fuel/air mixture – oxygen sensor – Kalman filter The simplest automotive engine controllers, such as the ignition controller for a basic motorcycle engine, perform only one task— timing the firing of the spark plug, which takes the place of a mechanical distributor. The spark plug must be fired at a certain point in the combustion cycle, but to obtain better performance, the phase relationship between the piston’s movement and the spark should change as a function of engine speed. Using a microcontroller that senses the engine crankshaft position allows the spark timing to vary with engine speed.
  • 15. Automobile engine controllers use additional sensors, including the gas pedal position and an oxygen sensor used to control emissions. They also use a multimode control scheme. For example, one mode may be used for engine warm-up, another for cruise, and yet another for climbing steep hills, and so forth. The throttle setting must be sampled and acted upon regularly, although not as frequently as the crankshaft setting and the spark plugs. The oxygen sensor responds much more slowly than the throttle, so adjustments to the fuel/air mixture suggested by the oxygen sensor can be computed at a much lower rate. The engine controller takes a variety of inputs that determine the state of the engine. It then controls two basic engine parameters: 1. the spark plug firings and 2. the fuel/air mixture. Example: Automotive engine control
  • 16. Typical rates in engine controllers Variable Full range time (ms) Update period (ms) Engine spark timing 300 2 Throttle 40 2 Air flow 30 4 Battery voltage 80 4 Fuel flow 250 10 Recycled exhaust gas 500 25 Status switches 100 20 Air temperature Seconds 400 Barometric pressure Seconds 1000 Spark (dwell) 10 1 Fuel adjustment 80 8 Carburetor 500 25 Mode actuators 100 100
  • 17.
  • 18. Real-time systems • Perform a computation to conform to external timing constraints. • Deadline frequency: – Periodic. – Aperiodic. • Deadline type: – Hard: failure to meet deadline causes system failure. – Soft: failure to meet deadline causes degraded response. – Firm: late response is useless but some late responses can be tolerated.
  • 19. Timing requirement (specifications) on processes • Two important requirements on processes:- • Release time: time at which process becomes ready to execute. – (this is not necessarily the time at which it actually takes control of the CPU and starts to run.) • Deadline: time at which process must finish. • The deadline for an aperiodic process is generally measured from the release time, since that is the only reasonable time reference. Processes can have several different types of timing requirements imposed on them by the application.
  • 20. Release times and deadlines time P1 initiating event deadline aperiodic process periodic process initiated at start of period period P1 P1 deadline period periodic process initiated by event
  • 21. Example definitions of release times and deadlines
  • 22. Rate requirements on processes • Period: interval between process activations. • Rate: reciprocal of period. A rate requirement specifies how quickly processes must be initiated. Figure illustrates process execution in a system with four CPUs. The various execution instances of program P1 have been subscripted to distinguish their initiation times. In this case, the initiation interval is equal to one fourth of the period. It is possible for a process to have an initiation rate less than the period even in single-CPU systems.
  • 23. Timing violations • What happens if a process doesn’t finish by its deadline? • The results can be catastrophic in an automotive control system, • whereas a missed deadline in a multimedia system may cause an audio or video glitch. – Hard deadline: system fails if missed. – Soft deadline: user may notice, but system doesn’t necessarily fail. EX:
  • 24. Example: Space Shuttle software error • Space Shuttle’s first launch was delayed by a software timing error: – Primary Avionics Software System(PASS) and Backup Flight Control System (BFS.) – PASS used an asynchronous, priority-driven software architecture. ( 4 computers) – The BFS, in contrast, used a time-slot system that allocated a fixed amount of time to each process – BFS failed to synchronize with PASS. – Change to one routine added delay that threw off start time calculation. – 1 in 67 chance of timing problem. The system incorrectly calculated its start time.
  • 25. Task graphs • Tasks may have data dependencies---must execute in certain order. • Task graph shows data/control dependencies between processes. • Task: connected set of processes. • Task set: One or more tasks. P3 P1 P2 P4 P5 P6 task 1 task 2 task set The order of execution of processes may be constrained when the processes pass data between each other. Figure shows a set of processes with data dependencies among them. “A set of processes with data dependencies is known as a task graph”
  • 26. Communication between tasks • Task graph assumes that all processes in each task run at the same rate, tasks do not communicate. • In reality, some amount of inter-task communication is necessary. – It’s hard to require immediate response for multi-rate communication. Communication among processes that run at different rates cannot be represented by data dependencies because there is no one-to-one relationship between data coming out of the source process and going into the destination process. Communication among processes at different rates.
  • 27. Process execution characteristics • Process execution time Ti. – Execution time in absence of preemption. – Possible time units: seconds, clock cycles. – Worst-case, best-case execution time may be useful in some cases. • Sources of variation: – Data dependencies. – Memory system. – CPU pipeline.
  • 28. CPU Metrics • The initiation time is the time at which a process actually starts executing on the CPU. • The completion time is the time at which the process finishes its work. • The most basic measure of work is the amount of CPU time expended by a process. • The CPU time of process ‘i’ is called Ci . – Note that the CPU time is not equal to the completion time minus initiation time; several other processes may interrupt execution.
  • 29. The total CPU time consumed by a set of processes is We need a basic measure of the efficiency with which we use the CPU. The simplest and most direct measure is utilization: The utilization is often expressed as a percentage. If we measure the total execution time of all processes over an interval of time t, then the CPU utilization is This ratio ranges between 0 and 1 CPU Metrics contd….
  • 30. Process State and Scheduling • The first job of the OS is to determine that process runs next. • The work of choosing the order of running processes is known as scheduling. • The OS considers a process to be in one of three basic scheduling states: • waiting, • ready, • executing.
  • 31. • A process can be in one of three states: – executing on the CPU; – ready to run; – waiting for data. Process State and Scheduling
  • 32.
  • 33. The scheduling problem • Can we meet all deadlines? – Must be able to meet deadlines in all cases. • How much CPU horsepower do we need to meet our deadlines?
  • 34. Scheduling feasibility • Resource constraints make schedulability analysis NP-hard. – Must show that the deadlines are met for all timings of resource requests. P1 P2 I/O device
  • 35. Simple processor feasibility • Assume: – No resource conflicts. – Constant process execution times. • Require: – T ≥ Si Ti – Can’t use more than 100% of the CPU. T1 T2 T3 T
  • 36. Hyperperiod • Hyperperiod: least common multiple (LCM) of the task periods. • Must look at the hyperperiod schedule to find all task interactions. • Hyperperiod can be very long if task periods are not chosen carefully.
  • 37. Hyperperiod example • Long hyperperiod: – P1 7 ms. – P2 11 ms. – P3 15 ms. – LCM = 1155 ms. • Shorter hyperperiod: – P1 8 ms. – P2 12 ms. – P3 16 ms. – LCM = 96 ms.
  • 38. Simple processor feasibility example • P1 period 1 ms, CPU time 0.1 ms. • P2 period 1 ms, CPU time 0.2 ms. • P3 period 5 ms, CPU time 0.3 ms. LCM 5.00E-03 peirod CPU time CPU time/LCM P1 1.00E-03 1.00E-04 5.00E-04 P2 1.00E-03 2.00E-04 1.00E-03 P3 5.00E-03 3.00E-04 3.00E-04 total CPU/LCM 1.80E-03 utilization 3.60E-01
  • 39. scheduling policy 1. Cyclostatic/TDMA • Schedule in time slots. – Same process activation irrespective of workload. • Time slots may be equal size or unequal. T1 T2 T3 P T1 T2 T3 P One very simple scheduling policy is known as cyclostatic scheduling or sometimes as Time Division Multiple Access scheduling. As illustrated in Figure a cyclostatic schedule is divided into equal-sized time slots over an interval equal to the length of the hyperperiod H. Processes always run in the same time slot.
  • 40. TDMA assumptions • Schedule based on least common multiple (LCM) of the process periods. • Trivial scheduler -> very small scheduling overhead. P1 P1 P1 P2 P2 PLCM
  • 41. TDMA schedulability • Always same CPU utilization (assuming constant process execution times). • Can’t handle unexpected loads. – Must schedule a time slot for aperiodic events.
  • 42. TDMA schedulability example • TDMA period = 10 ms. • P1 CPU time 1 ms. • P2 CPU time 3 ms. • P3 CPU time 2 ms. • P4 CPU time 2 ms. TDMA period 1.00E-02 CPU time P1 1.00E-03 P2 3.00E-03 P3 2.00E-03 P4 2.00E-03 total 8.00E-03 utilization 8.00E-01
  • 43. scheduling policy 2. Round-robin • Schedule process only if ready. – Always test processes in the same order. • Variations: – Constant system period. – Start round-robin again after finishing a round. T1 T2 T3 P T2 T3 P Another scheduling policy that is slightly more sophisticated is round robin. As illustrated in Fig, round robin uses the same hyperperiod as does cyclostatic. It also evaluates the processes in order. But unlike cyclostatic scheduling,if a process does not have any useful work to do, the round-robin scheduler moves on to the next process in order to fill the time slot with useful work. In this example, all three processes execute during the first hyperperiod, but during the second one, P1 has no useful work and is skipped.
  • 44. Round-robin assumptions • Schedule based on least common multiple (LCM) of the process periods. • Best done with equal time slots for processes. • Simple scheduler -> low scheduling overhead. – Can be implemented in hardware.
  • 45. Round-robin schedulability • Can bound maximum CPU load. – May leave unused CPU cycles. • Can be adapted to handle unexpected load. – Use time slots at end of period.
  • 46. Schedulability and overhead • The scheduling process consumes CPU time. – Not all CPU time is available for processes. • Scheduling overhead must be taken into account for exact schedule. – May be ignored if it is a small fraction of total execution time.
  • 47.
  • 48. PREEMPTIVE REAL-TIME OPERATING SYSTEMS A RTOS executes processes based upon timing constraints provided by the system designer. The most reliable way to meet timing constraints accurately is to build a preemptive OS and to use priorities to control what process runs at any given time. Preemption Preemption is an alternative to the C function call as a way to control execution. To be able to take full advantage of the timer, we must change our notion of a process as something more than a function call. We must, in fact, break the assumptions of our high-level programming language. We will create new routines that allow us to jump from one subroutine to another at any point in the program. That, together with the timer, will allow us to move between functions whenever necessary based upon the system’s timing constraints. We want to share the CPU across two processes. The kernel is the part of the OS that determines what process is running. The kernel is activated periodically by the timer. The length of the timer period is known as the time quantum because it is the smallest increment in which we can control CPU activity. The kernel determines what process will run next and causes that process to run. On the next timer interrupt, the kernel may pick the same process or another process to run.
  • 49. We can similarly use assembly language to restore registers not from the process that was interrupted by the timer but to use registers from any process we want. The set of registers that define a process are known as its context and switching from one process’s register set to another is known as context switching. The data structure that holds the state of the process is known as the process control block. context switching
  • 50. We want a mechanism that executes quickly so that we don’t spend all our time in the kernel and starve out the processes that do the useful work. If we assign each task a numerical priority, then the kernel can simply look at the processes and their priorities, see which ones actually want to execute (some may be waiting for data or for some event),and select the highest priority process that is ready to run. This mechanism is both flexible and fast. The priority is a non-negative integer value. The exact value of the priority is not as important as the relative priority of different processes. We will generally use priority 1 as the highest priority, Priorities How does the kernel determine what process will run next?
  • 51. Priority-driven scheduling EX • Each process has a priority. • CPU goes to highest-priority process that is ready. • Priorities determine scheduling policy: – fixed priority; – time-varying priorities. Simple Rules
  • 52. Priority-driven scheduling example • Rules: – each process has a fixed priority (1 highest); – highest-priority ready process gets CPU; – process continues until done. • Processes – P1: priority 1, execution time 10 – P2: priority 2, execution time 30 – P3: priority 3, execution time 20
  • 53. Priority-driven scheduling example time P2 ready t=0 P1 ready t=15 P3 ready t=18 0 30 10 20 60 40 50 P2 P2 P1 P3
  • 54.
  • 55. PRIORITY-BASED SCHEDULING • In priority-based context switching mechanism, we have to determine an algorithm by which to assign priorities to processes. • After assigning priorities, the OS takes care of the rest by choosing the highest-priority ready process. • There are two major ways to assign priorities: – static priorities that do not change during execution and – dynamic priorities that do change.
  • 56. Metrics • How do we evaluate a scheduling policy?: – Ability to satisfy all deadlines. – CPU utilization---percentage of time devoted to useful work. – Scheduling overhead---time required to make scheduling decision.
  • 57. • 1. Rate-Monotonic Scheduling • Rate-monotonic scheduling (RMS), introduced by Liu and Layland [Liu73],was one of the first scheduling policies developed for real-time systems and is still very widely used. • RMS is a static scheduling policy. It turns out that these fixed priorities are sufficient to efficiently schedule the processes in many situations.
  • 58. RMA simple model of the system should conforms to • All process run on single CPU. • Zero context switch time. • No data dependencies between processes. • Process execution time is constant. • Deadline is at end of period. • Highest-priority ready process runs.
  • 59. RMS priorities • Optimal (fixed) priority assignment: – shortest-period process gets highest priority; – priority inversely proportional to period; • No fixed-priority scheme does better. This fixed-priority scheduling policy is the optimum assignment of static priorities to processes, in that it provides the highest CPU utilization while ensuring that all processes meet their deadlines.
  • 61. During one 12 time-unit interval, we must execute P1 three times, requiring 6 units of CPU time; P2 twice, costing 6 units of CPU time; and P3 one time, requiring 3 units of CPU time. The total of 6 + 6 + 3 = 15 units of CPU time is more than the 12 time units available, clearly exceeding the available CPU capacity. Case 2
  • 62. Rate-monotonic analysis • Response time: time required to finish process. • Critical instant: scheduling state that gives worst response time. • Critical instant occurs when all higher-priority processes are ready to execute.
  • 63. Critical instant P4 P3 P2 P1 critical instant P1 P1 P1 P1 P2 P2 P3 interfering processes The critical instant for a process is defined as the instant during execution at which the task has the largest response time.
  • 64. RMS CPU utilization. • RMS cannot use 100% of CPU, even with zero context switch overhead. • Must keep idle cycles available to handle worst-case scenario. • However, RMS guarantees all processes will always meet their deadlines.
  • 65. 2.Earliest-deadline-first scheduling • EDF: dynamic priority scheduling scheme—it changes process priorities during execution based on initiation times. • Process closest to its deadline has highest priority. • Requires recalculating processes at every timer interrupt.
  • 66. EDF analysis • EDF can use 100% of CPU. • But EDF may fail to miss a deadline.
  • 67. EDF example Liu and Layland showed that EDF can achieve 100% utilization
  • 68. EDF implementation • On each timer interrupt: – compute time to deadline; – choose process closest to deadline. • Generally considered too expensive to use in practice.
  • 69. RMS vs EDF RMS EDF Achieves lower CPU utilization Higher CPU utilization easier to ensure that all deadlines will be satisfied Hard to ensure deadlines Static priority scheduling dynamic priority scheduling Not expensive to use in practice Expensive to use in practice shortest-period process gets highest priority Process closest to its deadline has highest priority.
  • 70. Fixing scheduling problems • What if your set of processes is unschedulable? – Change deadlines in requirements. – Reduce execution times of processes. – Get a faster CPU.
  • 71. Priority inversion • Priority inversion: low-priority process keeps high-priority process from running. • Improper use of system resources can cause scheduling problems: – Low-priority process grabs I/O device. – High-priority device needs I/O device, but can’t get it until low-priority process is done. • Can cause deadlock. So far, we have assumed that each process is totally self contained. However, that is not always the case—for instance, a process may need a system resource, such as an I/O device or the bus, to complete its work. Scheduling the processes without considering the resources those processes require can cause priority inversion, in which a low-priority process blocks execution of a higher priority process by keeping hold of its resource.
  • 72. Solving priority inversion • Give priorities to system resources. • Have process inherit the priority of a resource that it requests. – Low-priority process inherits priority of device if higher. The most common method for dealing with priority inversion is to promote the priority of any process when it requests a resource from the OS.
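The inheritance rule above can be sketched in a few lines (the `Task` class and function names are my own, purely illustrative):

```python
# Minimal sketch of priority inheritance: when a task requests a
# resource held by a lower-priority task, the holder temporarily
# inherits the requester's higher priority, then reverts on release.

class Task:
    def __init__(self, name, priority):
        self.name = name
        self.base = priority       # permanent (base) priority
        self.priority = priority   # current effective priority

def request(holder, requester):
    """Requester blocks on a resource; holder inherits if lower."""
    if holder.priority < requester.priority:
        holder.priority = requester.priority   # priority inheritance

def release(holder):
    holder.priority = holder.base  # revert to base priority

low, high = Task("low", 1), Task("high", 10)
request(low, high)
print(low.priority)   # 10: low now runs at high's priority
release(low)
print(low.priority)   # 1: back to its base priority
```

Because the holder now runs at the blocked task's priority, no medium-priority task can preempt it and prolong the inversion.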
  • 75. Interprocess communication Processes often need to communicate with each other. Interprocess communication mechanisms are provided by the operating system as part of the process abstraction. In general, a process can send a communication in one of two ways: blocking or non-blocking. After sending a blocking communication, the process goes into the waiting state until it receives a response. Non-blocking communication allows the process to continue execution after sending the communication.
  • 76. There are three major styles of interprocess communication: 1. shared memory 2. message passing 3. signals. Shared memory and message passing are logically equivalent—given one, you can build an interface that implements the other. Shared Memory Communication: Two components, such as a CPU and an I/O device, communicate through a shared memory location. The software on the CPU has been designed to know the address of the shared location; the shared location has also been loaded into the proper register of the I/O device. If, as in the figure, the CPU wants to send data to the device, it writes to the shared location. The I/O device then reads the data from that location. The read and write operations are standard and can be encapsulated in a procedural interface.
  • 77. EX: Shared Memory Communication (Elastic buffers as shared memory) the text compressor uses the CPU to compress incoming text, which is then sent on a serial line by a UART. The input data arrive at a constant rate and are easy to manage. But because the output data are consumed at a variable rate, these data require an elastic buffer. The CPU and output UART share a memory area—the CPU writes compressed characters into the buffer and the UART removes them as necessary to fill the serial line. Because the number of bits in the buffer changes constantly, the compression and transmission processes need additional size information. In this case, coordination is simple— the CPU writes at one end of the buffer and the UART reads at the other end. The only challenge is to make sure that the UART does not overrun the buffer.
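The elastic buffer described above can be sketched as a fixed-size ring buffer; the writer checks for overrun before inserting. Class and method names are illustrative (in the real system this would be a shared memory region, not a Python object):

```python
# Sketch of the elastic buffer shared by the compressor (writer)
# and the UART (reader): the CPU writes at one end, the UART reads
# at the other, and the writer must not overrun unread data.

class RingBuffer:
    def __init__(self, size):
        self.buf = [None] * size
        self.head = self.tail = self.count = 0

    def put(self, ch):
        """CPU side: write one compressed character."""
        if self.count == len(self.buf):
            return False                     # full: would overrun
        self.buf[self.head] = ch
        self.head = (self.head + 1) % len(self.buf)
        self.count += 1
        return True

    def get(self):
        """UART side: remove one character to fill the serial line."""
        if self.count == 0:
            return None                      # nothing to transmit
        ch = self.buf[self.tail]
        self.tail = (self.tail + 1) % len(self.buf)
        self.count -= 1
        return ch

rb = RingBuffer(4)
for c in "abcd":
    rb.put(c)
print(rb.put("e"))  # False: buffer full, overrun prevented
print(rb.get())     # 'a': UART drains in FIFO order
```

The `count` field is the "additional size information" the slide mentions; because both sides update it, access to it must itself be protected, which motivates the race-condition discussion that follows.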
  • 78. Race condition in shared memory • Problem when two CPUs try to write the same location: – CPU 1 reads flag and sees 0. – CPU 2 reads flag and sees 0. – CPU 1 sets flag to one and writes location. – CPU 2 sets flag to one and overwrites location.
  • 79. Atomic test-and-set • Problem can be solved with an atomic test-and-set: – a single bus operation reads the memory location, tests it, and writes it. • ARM provides test-and-set via the SWP (swap) instruction. A test-and-set can be used to implement a semaphore (used to guard access to a block of protected memory), which is a language-level synchronization construct.
  • 80. Critical regions • Critical region: section of code that cannot be interrupted by another process. • Examples: – writing shared memory; – accessing I/O device.
  • 81. Semaphores • Semaphore: OS primitive for controlling access to critical regions. • Protocol: – Get access to semaphore with P(). – Perform critical region operations. – Release semaphore with V().
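The P()/V() protocol above can be illustrated with Python's `threading.Semaphore` standing in for the OS primitive (in an RTOS these would be system calls; the shared counter is a made-up critical region):

```python
# Illustrative P()/V() protocol: a binary semaphore guards a
# critical region (updates to a shared counter) so that concurrent
# read-modify-write sequences cannot interleave and lose updates.

import threading

sem = threading.Semaphore(1)   # binary semaphore guarding the region
shared = 0

def worker():
    global shared
    for _ in range(10000):
        sem.acquire()          # P(): get access to the semaphore
        shared += 1            # critical region operation
        sem.release()          # V(): release the semaphore

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(shared)  # 40000: no updates lost inside the critical region
```

Without the semaphore, two workers could both read the same value of `shared` and one increment would be overwritten, exactly the race shown on the earlier slide.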
  • 82. 2. Message passing Message-passing communication complements the shared memory model. Each communicating entity has its own message send/receive unit. The message is not stored on the communications link, but rather at the sender/receiver end points. In contrast, shared memory communication can be seen as a memory block used as a communication device, in which all the data are stored in the communication link/memory.
  • 83. • Applications in which units operate relatively autonomously are natural candidates for message passing communication. • For example, a home control system has one microcontroller per household device—lamp, thermostat, faucet, appliance, and so on. • The devices must communicate relatively infrequently; furthermore, their physical separation is large enough that we would not naturally think of them as sharing a central pool of memory. Passing communication packets among the devices is a natural way to describe coordination between these devices. • Message passing is the natural implementation of communication in many 8-bit microcontrollers that do not normally operate with external memory. 2. Message passing Example
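The home-control coordination above can be sketched with each controller owning its own receive queue, so nothing is shared (class and message names are my own, mirroring the example):

```python
# Sketch of message passing between autonomous controllers: each
# unit owns its message send/receive unit (here, a Queue); there is
# no central pool of shared memory.

from queue import Queue

class Controller:
    def __init__(self, name):
        self.name = name
        self.inbox = Queue()          # this unit's receive buffer

    def send(self, dest, msg):
        """Deliver a (sender, payload) packet to dest's inbox."""
        dest.inbox.put((self.name, msg))

    def receive(self):
        return self.inbox.get()

thermostat = Controller("thermostat")
furnace = Controller("furnace")
thermostat.send(furnace, "set_temp=20")
print(furnace.receive())  # ('thermostat', 'set_temp=20')
```

Because all state lives at the end points, this maps naturally onto physically separated 8-bit microcontrollers linked only by a serial network.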
  • 84. 3. Signals • More general than a Unix signal—a UML signal may carry arbitrary data. [UML diagram: someClass, via its sigbehavior(), <<send>>s the <<signal>> aSig carrying attribute p : integer.] Another form of interprocess communication commonly used in Unix is the signal. A signal is simple because it does not pass data beyond the existence of the signal itself. A signal is analogous to an interrupt, but it is entirely a software creation. A signal is generated by a process and transmitted to another process by the operating system.
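The Unix-style signal described above can be demonstrated with Python's `signal` module on a POSIX system (the handler and list are illustrative; note that only the signal's occurrence is delivered, no data):

```python
# Illustrative Unix signal: register a handler, have the OS deliver
# the signal to the process. The signal carries no payload beyond
# its own arrival -- the handler only learns the signal number.

import os
import signal

delivered = []

def handler(signum, frame):
    delivered.append(signum)           # record that the signal arrived

signal.signal(signal.SIGUSR1, handler)  # install handler
os.kill(os.getpid(), signal.SIGUSR1)    # process signals itself
print(delivered == [signal.SIGUSR1])    # True
```

The handler runs asynchronously with respect to the main flow of the program, which is why the slide calls a signal a software analogue of an interrupt.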
  • 86. EVALUATING OPERATING SYSTEM PERFORMANCE • Simplifying assumptions from scheduling policy: – Context switch costs no CPU time. – We know the exact execution time of processes. – WCET (worst-case execution time)/BCET (best-case execution time) don’t depend on context switches. – We probably determined worst-case or best-case times for the processes in isolation. But, in fact, they interact with each other in the cache. Cache conflicts among processes can drastically degrade process execution time.
  • 87. 1. The zero-time context switch assumption used in the analysis of RMS is not correct. The effects of nonzero context switching time must be carefully analyzed in the context of a particular implementation to be sure that the predictions of an ideal scheduling policy are sufficiently accurate. The example below shows that context switching can, in fact, cause a system to miss a deadline.
  • 88. Scheduling and context switch overhead
  P1: execution time 3, deadline 5. P2: execution time 3, deadline 10.
  With a context switch overhead of 1, there is no feasible schedule: within P2’s deadline of 10, P1 must run twice and P2 once, each run preceded by a context switch, so the total demand is 2TP1 + TP2 = 2×(1+3) + (1+3) = 12 > 10. In this example, the overhead was a large fraction of the process execution times and of the periods.
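The demand calculation in the example can be sketched generically (function and parameter names are my own): each run of a process is charged one context switch of cost h on top of its execution time.

```python
# Reworking the slide's arithmetic: with context-switch overhead h,
# each run of a process costs h + its execution time. Summing over
# all runs that must fit before the deadline gives the CPU demand.

def demand(runs_and_times, h):
    """Total CPU time needed: runs_and_times is a list of
    (number_of_runs, execution_time) pairs; h is switch overhead."""
    return sum(runs * (h + t) for runs, t in runs_and_times)

# P1 runs twice (exec 3), P2 once (exec 3), within P2's deadline 10.
no_overhead = demand([(2, 3), (1, 3)], 0)   # 9 <= 10: feasible
overhead = demand([(2, 3), (1, 3)], 1)      # 12 > 10: infeasible
print(no_overhead, overhead)
```

With zero overhead the task set fits comfortably; adding a single time unit per switch pushes the demand past the deadline, which is the slide's point.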
  • 89. 2. Another important assumption we have made thus far is that process execution time is constant. As seen already, this is definitely not the case—both data-dependent behavior and caching effects can cause large variations in run times. If we can determine worst-case execution time, then shorter run times for a process simply mean unused CPU time. If we cannot accurately bound WCET, then we will be left with a very conservative estimate of execution time that will leave even more CPU time unused.
  • 90. Process execution time • Process execution time is not constant. • Extra CPU time can be good. • Extra CPU time can also be bad: – Next process runs earlier, causing new preemption.
  • 91. Processes and caches • Processes can cause additional caching problems. – Even if individual processes are well-behaved, processes may interfere with each other. • Worst-case execution time with bad behavior is usually much worse than execution time with good cache behavior.
  • 92. Effects of scheduling on the cache
  P1: WCET 8, avg. CPU time 6. P2: WCET 4, avg. CPU time 3. P3: WCET 4, avg. CPU time 3.
  [Timing diagrams compare Schedule 1 (shared LRU cache) with Schedule 2 (half of the cache reserved for P1).]
  • 93. Power optimization • Power management: determining how system resources are scheduled/used to control power consumption. • OS can manage for power just as it manages for time. • OS reduces power by shutting down units. – May have partial shutdown modes. A power management policy in general examines the state of the system to determine when to take actions.
  • 94. Power management and performance • Power management and performance are often at odds. • Entering power-down mode consumes – energy, – time. • Leaving power-down mode consumes – energy, – time.
  • 95. Simple power management policies • Request-driven: power up once request is received. Adds delay to response. • Predictive shutdown: try to predict how long you have before next request. – May start up in advance of request in anticipation of a new request. – If you predict wrong, you will incur additional delay while starting up.
  • 96. Probabilistic shutdown • Assume service requests are probabilistic. • Optimize expected values: – power consumption; – response time. • Simple probabilistic: shut down after time Ton, turn back on after waiting for Toff.
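The timeout policy above can be simulated in a few lines (request times, the stay-on window, and the wake-up delay are all made-up numbers; the model ignores service time for simplicity):

```python
# Sketch of a simple timeout shutdown policy: stay powered for t_on
# after servicing the last request, then shut down; a request that
# arrives while powered off pays a wake-up penalty.

def simulate(request_times, t_on, wake_delay):
    """Return the number of wake-ups the policy incurs."""
    wakeups, off_at = 0, 0.0          # device starts powered off
    for t in sorted(request_times):
        if t >= off_at:               # device had already shut down
            wakeups += 1              # pay the wake-up penalty
        # After serving, stay on for t_on before shutting down again.
        off_at = t + wake_delay + t_on
    return wakeups

# A burst at t = 0, 1, 2 shares one wake-up; the isolated request
# at t = 50 pays its own.
print(simulate([0, 1, 2, 50], t_on=5, wake_delay=1))  # 2
```

Tuning `t_on` is the policy's power/latency trade-off: a long window wastes idle power but absorbs bursts; a short window saves power but makes more requests pay the wake-up delay.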
  • 97. Advanced Configuration and Power Interface • ACPI: open standard for power management services. [Layer diagram: applications and the power manager sit above the OS kernel, which uses device drivers and the ACPI BIOS to control the hardware platform.] The Advanced Configuration and Power Interface (ACPI) is an open industry standard for power management services. It is designed to be compatible with a wide variety of OSs.
  • 98. • ACPI provides some basic power management facilities and abstracts the hardware layer. The OS has its own power management module that determines the policy; the OS then uses ACPI to send the required controls to the hardware and to observe the hardware’s state as input to the power manager.
  • 99. ACPI supports the following five basic global power states: • ■ G3, the mechanical off state, in which the system consumes no power. • ■ G2, the soft off state, which requires a full OS reboot to restore the machine to working condition. • ■ G1, the sleeping state, in which the system appears to be off and the time required to return to working condition is inversely proportional to power consumption. – This state has four substates: • —S1, a low wake-up latency state with no loss of system context; • —S2, a low wake-up latency state with a loss of CPU and system cache state; • —S3, a low wake-up latency state in which all system state except for main memory is lost; and • —S4, the lowest-power sleeping state, in which all devices are turned off. • ■ G0, the working state, in which the system is fully usable. • ■ The legacy state, in which the system does not comply with ACPI.