2. Introduction
Embedded System
• Combination of hardware and software designed for a
specific application
• Microprocessor/microcontroller-based computer hardware
system with software that is designed to perform a
dedicated function, either as an independent system or as
part of a larger system
Microcontroller
• Any electrical appliance that stores, measures, or displays
information, or performs calculations, requires a µC
• Embedded systems are managed by microcontrollers
• Integrated with components dedicated to handling electric
and/or mechanical interfacing
3. Why Embedded System?
• Using computers to control systems is gaining popularity over
traditional mechanical, electromechanical, etc. techniques
• Eg: Airplanes, car brakes, respirators, energy meters, etc.
Cost Reduction
• Mechanical components are heavier and more costly; if they are
replaced with a computer-based system, a distributed system can be
implemented with several nodes connected by communication networks
Increased Functionality
• Electric power steering – steering assist requires no mechanical linkage
• Implementation of self-driving cars becomes easier
5. Requirement of Embedded System
• Requirement: Embedded systems must be at least as safe as the
systems they replace
• Physical property that can be measured from software: Time
• The system operates correctly not only in the functional domain, but
also in the temporal domain
• The system must deliver correct value at the correct point in time
7. Real Time System
• A system that reacts to events in the outside world
• Performs a function based on observed events
• Gives a response within a certain time
• Correctness of the function does not only depend on correctness of the result, but also on
the time when these are produced
• Real time system does not increase the execution speed of the program but provides more
precise and predictable timing characteristics
• Fast computing: reduces the average response time of the entire system
• Real-time computing: the individual timing constraints of each separate activity are fulfilled
• In some cases, the system must wait until the correct moment to respond, as in the airbag example
8. Real Time System Functions
• Observe the process, i.e., read the sensor
values (sampling)
• Make a decision about what to do
(execution of control algorithm)
• Affect the process with an output signal to
an actuator (actuation)
• Timing requirements apply to all three steps
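The observe-decide-actuate cycle above can be sketched as a fixed-period loop. This is a minimal illustration, not RTOS code; `read_sensor`, `control`, and `actuate` are hypothetical stand-ins supplied by the application:

```python
import time

def control_loop(read_sensor, control, actuate, period_s, n_iterations):
    """Fixed-period sample -> compute -> actuate loop (sketch)."""
    next_release = time.monotonic()
    for _ in range(n_iterations):
        sample = read_sensor()      # observe the process (sampling)
        output = control(sample)    # execute the control algorithm
        actuate(output)             # affect the process via an actuator
        next_release += period_s    # absolute release times avoid drift
        time.sleep(max(0.0, next_release - time.monotonic()))

# Example with stand-in functions: a proportional controller toward setpoint 10
outputs = []
control_loop(read_sensor=lambda: 8.0,
             control=lambda x: 0.5 * (10.0 - x),
             actuate=outputs.append,
             period_s=0.001, n_iterations=3)
```

Scheduling releases from an absolute time base (rather than sleeping a fixed amount after each iteration) keeps the sampling period from drifting as computation time varies.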
9. Characteristics of Real Time System
• Timeliness
• Close coupling to process I/O
• Predictably fast handling of events
• Handling of several tasks at the same time
• Possibility to prioritize among tasks
• Configuration of tasks as event-triggered or time-triggered
• Possibility to internally hold a view of the process being controlled
• Design for peak load and fault tolerance
10. Classification of Real Time Systems
Resources
• Systems with enough resources
• System with limited resources
Activation
• Event Triggered (ET) systems
• Time Triggered (TT) systems
Service level
• Soft real-time systems
• Hard real-time systems
11. Classification of Real Time Systems
Applications
• Embedded real-time systems
• Not embedded real-time systems
Fault tolerance
• Fail-safe systems
• Fail-operational systems
12. Real Time Operating Systems
• An operating system that guarantees a certain capability within a
specified time constraint
• Eg: an OS might be designed to ensure that a certain object was
available for a robot on an assembly line
• Some real-time operating systems are created for a special
application and others are more general purpose
• RTOS is a platform, or a toolbox, for development of real time
applications
13. Terminologies used in RTOS
Task
• A sequential program (separate “chunk” of software) that performs a specific activity and that possibly
communicates with other tasks in the system
• A task often has a priority relative to other tasks in the system
• Sometimes terms thread and job are used instead of task
Process
• A virtual processor that can handle several tasks with a common memory space, i.e., several tasks (threads) can
execute within a process, sharing the same memory
Scheduling
• Assignment of tasks to the processor, so that each task is executed until completion
• Scheduling is an activity of the RTOS kernel that determines the order in which concurrent tasks are executed
on a processor
Preemption
• An operation of the kernel that interrupts the currently executing task and assigns the processor to a more
urgent task ready to execute
14. RTOS in its environment
Hardware
• Physical hardware: the CPU itself with belonging I/O devices, registers, memory etc
Hardware adaptation layer
• Consists of HW-dependent code for communication with the underlying hardware, i.e.,
device drivers, register-handling code, interrupt-handling code, etc.
RTOS
• The RTOS itself uses the functionality provided by the HW adaptation layer for, e.g., scheduling,
communication, and synchronization
• The RTOS can also communicate directly with the hardware, but that is not advisable, since the
RTOS then becomes platform-dependent
15. RTOS in its environment
RTOS
• It is better to have all communication through the adaptation layer since it will make it
easier to port the RTOS to different platforms
• Only the HW adaptation layer needs to change when moving the application to a different
platform
Application
• Real-time application uses the services provided by the RTOS
• Those services are usually called system calls
16. Advantages of RTOS
Simplifies application development
• For an application that can do several different things, it is hard to develop one program performing several
independent activities (tasks)
• Better solution is to have several independent programs (tasks) each having its own activity, and let the RTOS
take care of CPU sharing between tasks (according to some scheduling policy)
Task management
• Services in this category include the ability to launch tasks and assign priorities to them
• The main RTOS service in this category is the scheduling of tasks as the embedded system is in operation
• Scheduler controls the execution of application software tasks, and can make them run in a very timely and
responsive fashion
Resource management
• Management of I/O devices, memory, disk, etc
• Provides a uniform framework for organizing and accessing the many hardware device drivers that are typical
of an embedded real-time system
17. Advantages of RTOS
Homogeneous programming model
• RTOS provides a number of well defined system calls, which makes it easier to understand and maintain the
application code
Reduce development effort and risk
• Provides a streamlined set of tools and methods to get a quality product into production as quickly as
possible
• Using an RTOS, a programmer can reuse existing, reliable, proven building blocks as much as possible, which
makes it possible to demonstrate or prove the quality of the product before it goes into production
Portability
• RTOSs can be adapted by the manufacturer to support different platforms, which makes it much simpler for
the customer to change the HW platform
• Besides, standards cover all the possible interactions and interchanges between subsystems,
components, and building blocks
18. Advantages of RTOS
Communication and synchronization
• These services make it possible for tasks to pass information from one to another, without danger of that
information ever being damaged
• They also make it possible for tasks to coordinate, so that they can productively cooperate with one another
• Without the help of these RTOS services, tasks might well communicate corrupted information or otherwise
interfere with each other
Time services
• Good time services are essential to real-time applications
• Since many embedded systems have stringent timing requirements, most RTOS kernels also provide some
basic timer services, such as task delays and time-outs
19. RTOS vs GPOS
• Many non-real-time operating systems also provide similar kernel services
• The difference between general-purpose operating systems and real-time operating systems is the need for
“deterministic” timing behavior in the real-time operating systems
• “Deterministic” timing - operating system services consume only known and expected amounts of time
• General-computing/purpose non-real-time operating systems are often quite non-deterministic
• Their services can inject random delays into application software and thus cause slow responsiveness of an
application at unexpected times
Temporal Requirements
• Service calls in an RTOS must be predictable, with a known upper bound on execution time
• Context switching must be done by an algorithm whose timing can be analysed
• It must be possible to determine the delay spent waiting for shared resources
• The maximum time that the RTOS spends in interrupt-disabled mode must be known
20. Windows for Real Time applications
• Possible with sufficiently fine timer and clock resolutions for the most time-stringent
applications
Drawbacks
• Large memory footprint
• Most of the real time systems are embedded into different products which have very limited resources
• Weak support for real-time scheduling and resource-access control
• The Windows API first organizes processes by the priority class to which they are assigned at creation
(Real-time, High, Above Normal, Normal, Below Normal, and Idle) and then by the relative priority of
the individual threads within those processes (Time-critical, Highest, Above-normal, Normal, Below-
normal, Lowest, and Idle)
• The problem is that even threads that belong to REALTIME-class processes can be blocked by lower-priority
threads for unpredictably long times
21. RTOS Types
Event-triggered (ET) RTOS
• Priority based pre-emptive scheduling
• Determine whether the currently running task should continue to run. If not …
• Determine which task should run next
• Save the environment of the task that was stopped (so it can continue later)
• Set up the running environment of the task that will run next
• Allow this task to run
Time-triggered (TT) RTOS
• Tasks are executed according to a schedule determined before the execution
• Time acts as a means of synchronization
There are also RTOSs that support both event-triggered and time-triggered scheduling
Most commercial real-time systems are priority-driven
22. Event-triggered systems - Functionality
• Definition and activation of tasks
• Handling of time for tasks
• Delay of execution for a task (DELAY)
• Periodic execution of a task
• “Timeout” handling on shared resources
• Communication and synchronisation
• Error handling
• I/O – device drivers
• Memory administration
• Scheduling
23. Task structure
• A task implements a computation job and is the basic unit of work handled by the
scheduler in a RTOS
• When a kernel creates a task, it allocates memory space for the code to be executed by
the task and instantiates a data structure called the Task Control Block (TCB)
• TCB is used for keeping all the information needed to manage and schedule the task
TCB contains:
• Task ID
• Task state (e.g. running, waiting, blocked..)
• Start address of the task code
• Registers (program counter and status register)
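A TCB can be sketched as a plain record holding the fields listed above. This is an illustrative model only; the field names are assumptions, and real kernels add further fields (priority, stack pointer, timing data) beyond those shown on the slide:

```python
from dataclasses import dataclass
from enum import Enum, auto

class TaskState(Enum):
    DORMANT = auto()
    READY = auto()
    EXECUTING = auto()
    WAITING = auto()
    BLOCKED = auto()

@dataclass
class TCB:
    """Task Control Block: all information needed to manage and schedule a task."""
    task_id: int
    state: TaskState = TaskState.DORMANT
    entry_point: int = 0       # start address of the task code
    program_counter: int = 0   # saved on preemption, restored on dispatch
    status_register: int = 0
    priority: int = 0          # not on the slide's list, but typical (assumption)

# The kernel creates a task: allocate its TCB, then mark it ready
tcb = TCB(task_id=1, entry_point=0x1000, priority=2)
tcb.state = TaskState.READY
```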
24. Code Reusability
• Consider a system that requires 2 PID
controllers
• Condition: Same code for both controllers
• This requires creation of 2 tasks
• Difference between tasks: input parameters
• When the task code is executed by the RTOS
for the first time, the input parameters are
passed either by a system call, or the RTOS
sends the input data to the start function of
the task (as function input parameters)
• The parameters are then stored in the local
data area for the task
• To be able to reuse the same code in
different tasks, the code must be
preemptable in the middle of its execution
without any side effects, i.e., the code must
be reentrant
• A function is reentrant if it can be
simultaneously executed by two or more
tasks
Reentrant code cannot use global variables without
protection; otherwise different threads may update the
same memory location in a non-deterministic order
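The difference can be illustrated with two toy accumulator functions. This is a sketch of the principle only; the function names are invented, and Python threads merely stand in for RTOS tasks:

```python
import threading

_total = 0  # shared global: makes the first version non-reentrant

def accumulate_nonreentrant(values):
    """Non-reentrant: a second task entering this code mid-run would
    reset and corrupt the shared global _total."""
    global _total
    _total = 0
    for v in values:
        _total += v
    return _total

def accumulate_reentrant(values):
    """Reentrant: all state lives on the caller's stack, so the same
    code can safely be executed by several tasks at once."""
    total = 0  # local variable, private to each invocation
    for v in values:
        total += v
    return total

# Two task instances (like the two PID controllers) safely sharing one body
# of reentrant code, each with its own input parameters
results = {}
t1 = threading.Thread(target=lambda: results.update(a=accumulate_reentrant([1, 2, 3])))
t2 = threading.Thread(target=lambda: results.update(b=accumulate_reentrant([10, 20])))
t1.start(); t2.start(); t1.join(); t2.join()
```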
25. Task States
DORMANT - In this state, a task is not yet consuming any
resources in the system. The task is registered in the
system but is either not activated yet or has
terminated.
EXECUTING - A task enters this state as it starts executing
on the processor. Only one task is in state executing at a
time
READY - A ready task is one that is ready for execution. A
ready task cannot gain control of the CPU until all higher
priority tasks in the ready or executing state either
complete or become dormant.
WAITING - A task enters this state when it waits for an
event, e.g., a timeout expiration, or a synchronization
signal from another task.
BLOCKED - A task is blocked when it has been released but cannot
proceed with its execution for some reason, e.g., it may be
blocked waiting for a shared resource to become free.
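The state descriptions above imply a small state machine. The transition table below is an inference from those descriptions, not a specification; real kernels differ in detail:

```python
from enum import Enum

class State(Enum):
    DORMANT = 1
    READY = 2
    EXECUTING = 3
    WAITING = 4
    BLOCKED = 5

# Allowed transitions inferred from the state descriptions (assumption)
ALLOWED = {
    State.DORMANT:   {State.READY},        # task is activated
    State.READY:     {State.EXECUTING},    # dispatched by the scheduler
    State.EXECUTING: {State.READY,         # preempted by a higher-priority task
                      State.WAITING,       # waits for an event or timeout
                      State.BLOCKED,       # shared resource unavailable
                      State.DORMANT},      # task terminates
    State.WAITING:   {State.READY},        # awaited event occurred
    State.BLOCKED:   {State.READY},        # shared resource became free
}

def transition(current, target):
    """Move a task to a new state, rejecting illegal transitions."""
    if target not in ALLOWED[current]:
        raise ValueError(f"illegal transition: {current} -> {target}")
    return target

s = transition(State.DORMANT, State.READY)
s = transition(s, State.EXECUTING)
s = transition(s, State.WAITING)
```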
26. Tasks and Processes
• Most embedded systems require functionality and timing that is too complex to embody in a
single program
• So, the system is broken into multiple tasks in order to manage when things happen
• These different tasks are part of the system’s functionality, but that application-level organization
of functionality is often reflected in the structure of the program as well
• A process is a single execution of a program
• If the same program is executed two different times, two different processes are created
• Each process has its own state that includes not only its registers but all of its memory
• In some OS, the memory management unit is used to keep each process in a separate address
space
• In others, particularly lightweight RTOSs, the processes run in the same address space
Note: the terms process and thread are often used interchangeably
27. Multirate Systems
• Implementing code that satisfies timing requirements is even more complex when multiple rates
of computation must be handled
• Multirate embedded computing systems are very common, including automobile engines,
printers and cell phones
• In all these systems, certain operations must be executed periodically and each operation is
executed at its own rate
Engine Control System
• The spark plug must be fired at a certain point in the combustion cycle
• To obtain better performance, the phase relationship between the piston’s movement and
the spark should change as a function of engine speed
• Using a microcontroller that senses the engine crankshaft position allows the spark timing
to vary with engine speed
• Automobile engines must meet strict requirements (mandated by law in the United States)
on both emissions and fuel economy
• The engines must still satisfy customers not only in terms of performance but also in terms
of ease of starting in extreme cold and heat, low maintenance, and so on
28. Multirate Systems
Engine Control System
• Automobile engine controllers use additional sensors, including the gas pedal position and
an oxygen sensor used to control emissions
• They also use a multimode control scheme: one mode may be used for engine warm-up,
another for cruise, and yet another for climbing steep hills, and so forth
29. Timing Requirements on Processes
• Processes can have several different types of timing requirements imposed on them by the
application
• The timing requirements on a set of processes strongly influence the type of scheduling that is
appropriate
• A scheduling policy must define the timing requirements that it uses to determine whether a
schedule is valid
The release time is the time at which the process becomes ready to
execute; this is not necessarily the time at which it actually takes
control of the CPU and starts to run
An aperiodic process is initiated by an event whose timing interval of
occurrence is not constant, such as external data arriving or data
computed by another process
A deadline specifies when a computation must be finished. The deadline for an aperiodic process is generally
measured from the release time, since that is the only reasonable time reference.
30. Timing Requirements on Processes
For a periodically executed process, there are two common
possibilities.
• In simpler systems, the process may become ready at the beginning
of the period.
• More sophisticated systems, such as those with data dependencies
between processes, may set the release time at the arrival time of
certain data, at a time after the start of the period.
The deadline for a periodic process may in general occur at some time
other than the end of the period
The period of a process is the time between successive executions
The process rate is the inverse of its period
In a multirate system, each process executes at its own distinct rate
The most common case for periodic processes is for the initiation interval to be equal to the period.
31. Timing Requirements on Processes
What happens when a process misses a deadline?
The practical effects of a timing violation depend on the application
• The results can be catastrophic in an automotive control system
• Whereas a missed deadline in a multimedia system may cause an audio or video
glitch
The system can be designed to take a variety of actions when a deadline is missed
• Safety-critical systems may try to take compensatory measures such as
approximating data or switching into a special safety mode
• Systems for which safety is not as important may take simple measures to avoid
propagating bad data, such as inserting silence in a phone line, or may completely
ignore the failure
32. CPU Metrics
Initiation time - time at which a process actually starts executing on the CPU
Completion time - time at which the process finishes its work
CPU time - amount of time expended by a process
CPU time of process i is called Ci
CPU time is not equal to the completion time minus the initiation time; several other processes may interrupt
execution
The total CPU time consumed by a set of n processes is T = C1 + C2 + … + Cn
Utilization - basic measure of the efficiency with which we use the CPU
If the total execution time T of all the processes is measured over an interval of time t, then U = T / t
U ranges between 0 and 1, with 1 meaning that all of the available CPU time is being used for system
purposes - often expressed as a percentage
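The two formulas above translate directly into code. A minimal sketch with made-up numbers:

```python
def total_cpu_time(C):
    """Total CPU time of a set of processes: T = C1 + C2 + ... + Cn."""
    return sum(C)

def utilization(C, t):
    """U = T / t, for CPU times C measured over an interval of length t."""
    return total_cpu_time(C) / t

# Three processes consuming 10, 20 and 30 ms of CPU over a 100 ms interval
C = [10, 20, 30]
T = total_cpu_time(C)       # 60 ms
U = utilization(C, t=100)   # 0.6, i.e. 60% of the CPU is used
```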
33. Scheduling Policies
• Scheduling policy defines how processes are selected for promotion from the ready state to the executing
state
• Every multitasking OS implements some type of scheduling policy
• Choosing the right scheduling policy not only ensures that the system will meet all its timing requirements,
but it also has a profound influence on the CPU horsepower required to implement the system’s
functionality
• Schedulability means whether there exists a schedule of execution for the processes in a system that satisfies
all their timing requirements
• Utilization is one of the key metrics in evaluating a scheduling policy
• Basic requirement: CPU utilization be no more than 100% since the CPU can’t be used more than 100% of
the time
• Utilization of the CPU is calculated over a finite period that covers all possible combinations of process
executions
34. Scheduling Policies
• For periodic processes, the length of time that must be considered is the hyperperiod, which is the least-
common multiple of the periods of all the processes
• Consider the following set of periodic tasks
• Some types of timing requirements for a set of processes imply that 100% of the CPU’s execution time
cannot be utilized for useful work, even if context-switching overhead is ignored
• Some scheduling policies can deliver higher CPU utilizations than others, even for the same timing
requirements
• The best policy depends on the required timing characteristics of the processes being scheduled
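The hyperperiod defined above is just the least common multiple of the periods, which can be computed as follows (the example periods are illustrative):

```python
from math import gcd
from functools import reduce

def hyperperiod(periods):
    """Hyperperiod H: the least common multiple of all process periods."""
    return reduce(lambda a, b: a * b // gcd(a, b), periods)

# e.g., processes with periods of 4, 6 and 12 ms repeat together every 12 ms
H = hyperperiod([4, 6, 12])
```

Any schedulability argument made over one hyperperiod then holds for all time, since the pattern of releases repeats with period H.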
35. Scheduling Policies
• One very simple scheduling policy is known as cyclostatic scheduling or sometimes as Time Division Multiple
Access scheduling
• A cyclostatic schedule is divided into equal-sized time slots over an interval equal to the length of the
hyperperiod H
• Processes always run in the same time slot
• Two factors affect utilization:
• The number of time slots used
• The fraction of each time slot that is used for useful work
• Depending on the deadlines for some of the processes, some time slots are left empty
• Since the time slots are of equal size, some short processes may have time left over in their time slot
• Utilization can be used as a schedulability measure: the total CPU time of all the processes must be less than
the hyperperiod
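A cyclostatic timetable can be sketched as below. This is a simplified model under the assumption that each process fits in a single fixed slot; the function and data names are invented for illustration:

```python
def cyclostatic_schedule(processes, n_slots, slot_length):
    """TDMA sketch: each process always runs in the same equal-sized slot.
    processes maps name -> CPU time needed per hyperperiod (assumed to fit
    in one slot); slots without a process are left empty (idle)."""
    timeline = []
    names = list(processes)
    for slot in range(n_slots):
        if slot < len(names):
            name = names[slot]
            used = processes[name]
            # record (process, useful work, time left over in the slot)
            timeline.append((name, used, slot_length - used))
        else:
            timeline.append(("idle", 0, slot_length))
    return timeline

# Hyperperiod of 12 ms split into 4 slots of 3 ms each; the fourth slot is empty
sched = cyclostatic_schedule({"P1": 2, "P2": 3, "P3": 1}, n_slots=4, slot_length=3)
```

The timeline makes both utilization losses visible: the wholly idle fourth slot, and the leftover time inside the P1 and P3 slots.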
36. Scheduling Policies
• Another scheduling policy that is slightly more sophisticated is round robin
• Round robin uses the same hyperperiod as does cyclostatic
• It also evaluates the processes in order
• But unlike cyclostatic scheduling, if a process does not have any useful work to do, the round-robin scheduler
moves on to the next process in order to fill the time slot with useful work
• Consider a case where all three processes execute during the first hyperperiod, but during the second one, P1
has no useful work and is skipped
• The last time slot in the hyperperiod is left empty; if there are any occasional non-periodic tasks without
deadlines, they can be executed in these empty time slots
• Round-robin scheduling is often used in hardware such as buses because it is very simple to implement but it
provides some amount of flexibility
• In addition to utilization, scheduling overhead must be considered: the execution time required to choose
the next process to execute, which is incurred in addition to any context-switching overhead
• In general, the more sophisticated the scheduling policy, the more CPU time it takes during system operation
to implement it
• The final decision on a scheduling policy must take into account both theoretical utilization and practical
scheduling overhead
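The skip-when-idle behavior described above can be sketched as follows. This is a toy model of one hyperperiod (names and slot granularity are assumptions, not a real scheduler):

```python
def round_robin(work, order):
    """One hyperperiod of round-robin: for each slot, visit the processes
    in order and give the slot to the first one with useful work.
    work maps name -> remaining work units; each slot holds one unit."""
    timeline = []
    for _ in range(len(order)):               # one slot per process per hyperperiod
        for name in order:
            if work.get(name, 0) > 0:
                work[name] -= 1
                timeline.append(name)
                break
        else:
            timeline.append("idle")           # no process had useful work
    return timeline

# Second hyperperiod of the slide's example: P1 has no work, so it is skipped,
# and the last slot of the hyperperiod is left empty
timeline = round_robin({"P1": 0, "P2": 1, "P3": 1}, order=["P1", "P2", "P3"])
```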
37. Priority Driven Scheduling
Rules:
• Each process has a fixed priority that does not vary during the course of execution
• The ready process with the highest priority (with 1 as the highest priority of all) is selected for execution
• A process continues execution until it completes or it is preempted by a higher-priority process
Process | Priority | Release Time (ms) | Execution Time (ms)
P1      | 1        | 15                | 10
P2      | 2        | 0                 | 30
P3      | 3        | 18                | 20
Resulting schedule: P2 runs 0-15 ms, P1 preempts and runs 15-25 ms, P2 resumes 25-40 ms, P3 runs 40-60 ms
Priorities assigned to a process can be
Static – Priority does not change during runtime
Eg: Rate monotonic, Deadline monotonic
Dynamic – Priority changes during runtime
Eg: Earliest Deadline First
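The three rules above can be simulated step by step. A minimal sketch, simulating in 1 ms steps with the example task set (the function name and data layout are invented for illustration):

```python
def priority_schedule(tasks, horizon):
    """Fixed-priority preemptive scheduling, simulated in 1 ms steps.
    tasks: name -> (priority, release_time, execution_time); 1 = highest."""
    remaining = {name: e for name, (_, _, e) in tasks.items()}
    trace = []
    for t in range(horizon):
        # Rule: the ready process with the highest priority runs ...
        ready = [name for name, (p, r, _) in tasks.items()
                 if r <= t and remaining[name] > 0]
        if ready:
            running = min(ready, key=lambda name: tasks[name][0])
            remaining[running] -= 1   # ... until it completes or is preempted
            trace.append(running)
        else:
            trace.append(None)        # CPU idle
    return trace

trace = priority_schedule(
    {"P1": (1, 15, 10), "P2": (2, 0, 30), "P3": (3, 18, 20)}, horizon=60)
# Reproduces the slide's timeline: P2, then P1 preempts, P2 resumes, then P3
```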
38. Rate Monotonic Scheduling
Rate-monotonic analysis (RMA) uses a relatively simple model of the system
• All processes run periodically on a single CPU
• Context switching time is ignored
• There are no data dependencies between processes
• The execution time for a process is constant
• All deadlines are at the ends of their periods
• The highest-priority ready process is always selected for execution
Process | Execution Time (ms) | Period (ms)
P1      | 1                   | 4
P2      | 2                   | 6
P3      | 3                   | 12
U = Σ (Execution time / Period) = 1/4 + 2/6 + 3/12 = 0.83 < 1.0
(Gantt chart of the resulting schedule for P1, P2, and P3 over the 12 ms hyperperiod.)
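The utilization sum above is easy to compute; for context, it can also be compared against the classical Liu and Layland bound n(2^(1/n) − 1), a sufficient (not necessary) rate-monotonic schedulability test that is not shown on the slide:

```python
def utilization(tasks):
    """U = sum(Ci / Ti) for tasks given as (execution_time, period) pairs."""
    return sum(c / t for c, t in tasks)

def rm_bound(n):
    """Liu & Layland bound: n tasks are RM-schedulable if U <= n*(2**(1/n)-1).
    The test is sufficient only; exceeding the bound proves nothing."""
    return n * (2 ** (1 / n) - 1)

tasks = [(1, 4), (2, 6), (3, 12)]
U = utilization(tasks)        # 1/4 + 2/6 + 3/12 = 0.833...
bound = rm_bound(len(tasks))  # about 0.780 for n = 3
```

Here U exceeds the bound, so this quick test is inconclusive; the set is nonetheless schedulable, as the hyperperiod timeline on the slide demonstrates.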
39. Inter Process Communication
• Processes often need to communicate with each other
• Interprocess communication mechanisms are provided by the operating system as part of
the process abstraction
• In general, a process can send a communication in one of two ways: blocking or
nonblocking
• Blocking communication - After sending a blocking communication, the process goes into
the waiting state until it receives a response
• Non-blocking communication – Allows the process to continue execution after sending
the communication
• There are two major styles of interprocess communication: shared memory and message
passing
40. IPC – Shared Memory
• Two components, such as a CPU and an I/O device,
communicate through a shared memory location
• Shared memory communication works in a bus-based system
• The software on the CPU has been designed to know the
address of the shared location; the shared location has also
been loaded into the proper register of the I/O device
• If the CPU wants to send data to the device, it writes to the shared location
• The I/O device then reads the data from that location
• The read and write operations are standard and can be encapsulated in a procedural interface
• There must be a flag that tells the CPU when the data from the I/O device is ready
• The flag, an additional shared data location, has a value of 0 when the data are not ready and 1 when the
data are ready
• The CPU, for example, would write the data and then set the flag location to 1
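The data-plus-flag protocol above can be sketched with a dictionary standing in for the two shared memory locations. This is an illustration of the idea only; names are invented, and the atomicity problems discussed next are not yet addressed:

```python
# Two shared locations: the data word and the ready flag
shared = {"data": 0, "flag": 0}   # flag: 0 = data not ready, 1 = data ready

def cpu_send(value):
    shared["data"] = value   # write the data to the shared location ...
    shared["flag"] = 1       # ... then set the flag to signal "data ready"

def device_receive():
    if shared["flag"] == 1:  # poll the flag
        value = shared["data"]
        shared["flag"] = 0   # consume: mark the location empty again
        return value
    return None              # nothing ready yet

cpu_send(42)
received = device_receive()
```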
41. IPC – Shared Memory
• If the flag is used only by the CPU, then the flag can be
implemented using a standard memory write operation
• If the same flag is used for bidirectional signaling between the
CPU and the I/O device, care must be taken
Consider the following scenario:
1. CPU reads the flag location and sees that it is 0
2. I/O device reads the flag location and sees that it is 0
3. CPU sets the flag location to 1 and writes data to the shared location
4. I/O device erroneously sets the flag to 1 and overwrites the data left by the CPU
• The above scenario is caused by a critical timing race between the two programs
• To avoid such problems, the microprocessor bus must support an atomic test-and-set operation, which is
available on a number of microprocessors
42. IPC – Shared Memory
• The test-and-set operation first reads a location and then sets it to a specified value
• It returns the result of the test
• If the location was already set, then the additional set has no effect but the test-and-set
instruction returns a false result
• If the location was not set, the instruction returns true and the location is in fact set
• The bus supports this as an atomic operation that cannot be interrupted
• A test-and-set can be used to implement a semaphore, which is a language-level
synchronization construct
• Assume that the system provides one semaphore that is used to guard access to a block
of protected memory
• Any process that wants to access the memory must use the semaphore to ensure that no
other process is actively using it
43. IPC – Shared Memory
• By tradition, the semaphore operations are named P(), to gain access to the
protected memory, and V(), to release it
• The P() operation uses a test-and-set to repeatedly test a location that holds a lock on
the memory block
• The P() operation does not exit until the lock is available; once it is available, the
test-and-set automatically sets the lock
• Once past the P() operation, the process can work on the protected memory block
• The V() operation resets the lock, allowing other processes access to the region by using
the P() function
44. IPC – Message Passing
• Message passing communication complements the shared
memory model
• Each communicating entity has its own message
send/receive unit
• The message is not stored on the communications link, but
rather at the senders/receivers at the end points
• In contrast, shared memory communication can be seen as
a memory block used as a communication device, in which
all the data are stored in the communication link/memory
• Applications in which units operate relatively autonomously are natural candidates for message passing
communication
• For example, a home control system has one microcontroller per household device—lamp, fan, and so on
• The devices must communicate relatively infrequently; furthermore, their physical separation is large enough
that sharing a central pool of memory is not possible
• Passing communication packets among the devices is a natural way to describe coordination between these
devices
• Message passing is the natural implementation of communication in many 8-bit microcontrollers that do not
normally operate with external memory
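The endpoint-owned mailboxes described above can be sketched with per-device queues. The home-control names are taken from the example; everything else (function names, message format) is invented for illustration:

```python
import queue
import threading

# Each device node owns its receive queue: messages live at the endpoints,
# not in a shared memory block
mailboxes = {"lamp": queue.Queue(), "fan": queue.Queue()}

def send(dest, message):
    """Deliver a message into the destination node's own receive unit."""
    mailboxes[dest].put(message)

def receive(name):
    """Block until a message arrives at this node's receive unit."""
    return mailboxes[name].get()

received = []
fan_node = threading.Thread(target=lambda: received.append(receive("fan")))
fan_node.start()
send("fan", ("set_speed", 2))   # the controller tells the fan node what to do
fan_node.join()
```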
45. IPC – Signals
• Another form of interprocess communication commonly used in Unix is the signal
• A signal is simple because it does not pass data beyond the existence of the signal itself
• A signal is analogous to an interrupt, but it is entirely a software creation
• A signal is generated by a process and transmitted to another process by the operating
system
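Python's standard `signal` module can illustrate the idea: the handler learns only which signal arrived, nothing more. Note that `SIGUSR1` is POSIX-only, so this sketch assumes a Unix-like system:

```python
import signal

events = []

def handler(signum, frame):
    # A signal carries no data beyond its own occurrence:
    # the handler only learns which signal was delivered
    events.append(signum)

signal.signal(signal.SIGUSR1, handler)   # register the handler (POSIX-only)
signal.raise_signal(signal.SIGUSR1)      # deliver the signal to this process
```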
46. Evaluating Operating System Performance
• The scheduling policy does not tell all about the performance of a real system running
processes
• Analysis of scheduling policies makes some simplifying assumptions
1. Context switches require zero time
• Although it is often reasonable to neglect context switch time when it is much smaller than the
process execution time, context switching can add significant delay in some cases
2. Execution time of the processes is known
• Program time is not a single number, but can be bounded by worst-case and best-case execution times
3. Worst-case or best-case times for the processes in isolation have already been
determined
• Processes interact with each other in the cache. Cache conflicts among processes can drastically
degrade process execution time.
47. Power Management and Optimization for Processes
• The RTOS and system architecture can use static and dynamic power management
mechanisms to help manage the system’s power consumption
• A power management policy is a strategy for determining when to perform certain power
management operations
• A power management policy in general examines the state of the system to determine
when to take actions
• However, the overall strategy embodied in the policy should be designed based on the
characteristics of the static and dynamic power management mechanisms
• Going into a low-power mode takes time; generally, the more that is shut off, the longer
the delay incurred during restart
• Because power-down and power-up are not free, modes should be changed carefully
• Determining when to switch into and out of a power-up mode requires an analysis of the
overall system activity
48. Power Management and Optimization for Processes
• Avoiding a power-down mode can cost unnecessary power
• Powering down too soon can cause severe performance penalties
• Re-entering run mode typically costs a considerable amount of time
• A straightforward method is to power up the system when a request is received
• This works as long as the delay in handling the request is acceptable
A more sophisticated technique is predictive shutdown
• The goal is to predict when the next request will be made and to start the system just
before that time, saving the requestor the start-up time
• In general, predictive shutdown techniques are probabilistic—they make guesses about
activity patterns based on a probabilistic model of expected behavior
• Because they rely on statistics, they may not always correctly guess the time of the next
activity
49. Power Management and Optimization for Processes
This can cause two types of problems
1. The requestor may have to wait for an activity period
• In the worst case, the requestor may not meet a deadline due to the delay incurred by system start-up
2. The system may restart itself when no activity is imminent
• As a result, the system will waste power
• The choice of a good probabilistic model of service requests is important
• The policy mechanism should also not be too complex, since the power it consumes to
make decisions is part of the total system power budget
A very simple technique is to use fixed times
• For instance, if the system does not receive inputs during an interval of length Ton, it
shuts down; a powered-down system waits for a period Toff before returning to the
power-on mode
• The choice of Toff and Ton must be determined by experimentation
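The fixed-time policy above can be sketched as a small simulation over discrete time steps. The function name and the per-step granularity are assumptions made for illustration; the Ton/Toff semantics follow the slide:

```python
def fixed_timeout_policy(requests, T_on, T_off, horizon):
    """Fixed-time policy sketch: shut down after T_on idle steps;
    once off, wait T_off steps before returning to power-on mode.
    requests: set of time steps at which inputs arrive."""
    state, idle, off_time = "on", 0, 0
    trace = []
    for t in range(horizon):
        if state == "on":
            if t in requests:
                idle = 0                    # an input resets the idle timer
            else:
                idle += 1
                if idle >= T_on:
                    state, off_time = "off", 0   # no inputs for T_on: shut down
        else:
            off_time += 1
            if off_time >= T_off:
                state, idle = "on", 0       # waited T_off: power back on
        trace.append(state)
    return trace

# One request at t=0; the system then oscillates between on and off
trace = fixed_timeout_policy(requests={0}, T_on=2, T_off=3, horizon=8)
```

The trace makes the policy's weakness visible: without requests it cycles on and off, paying the restart cost each time, which is why Ton and Toff must be tuned by experimentation.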
50. Power Management and Optimization for Processes
The Advanced Configuration and Power Interface (ACPI) is an open
industry standard for power management services
• Designed to be compatible with a wide variety of OSs
• It was targeted initially to PCs
• ACPI provides some basic power management facilities and abstracts
the hardware layer; the OS has its own power management module that
determines the policy
• OS then uses ACPI to send the required controls to the hardware and to observe the hardware’s state as
input to the power manager
ACPI supports the following five basic global power states:
• G3, the mechanical off state, in which the system consumes no power
• G2, the soft off state, which requires a full OS reboot to restore the machine to working condition
51. Power Management and Optimization for Processes
• G1, the sleeping state, in which the system appears to be off and the time required to return to
working condition is inversely proportional to power consumption. This state has four substates:
• S1, a low wake-up latency state with no loss of system context
• S2, a low wake-up latency state with a loss of CPU and system cache state
• S3, a low wake-up latency state in which all system state except for main memory is lost
• S4, the lowest-power sleeping state, in which all devices are turned off
• G0, the working state, in which the system is fully usable
• The legacy state, in which the system does not comply with ACPI
The power manager typically includes an observer, which receives messages through the ACPI
interface that describe the system behavior
It also includes a decision module that determines power management actions based on those
observations
52. Portable Operating System Interface (POSIX)
• A family of standards specified by the IEEE Computer Society for maintaining
compatibility between operating systems
• POSIX defines both the system- and user-level application programming
interfaces (API), along with command line shells and utility interfaces, for
software compatibility (portability) with variants of Unix and other operating
systems
• The need for standardization arose because enterprises using computers
wanted to be able to develop programs that could be moved among
different manufacturers' computer systems without having to be recoded
• Unix was selected as the basis for a standard system interface partly because
it was "manufacturer-neutral"
53. Portable Operating System Interface (POSIX)
• POSIX was created to make application portability easier
• So it’s not for UNIX systems only. Non-UNIX systems can be POSIX-compliant too.
• The standard doesn’t dictate the development of the application or the operating system
• It only defines the contract between them
• POSIX-compliant application source code should be able to run across many systems
because the standard is defined at the source code level
• However, the standard doesn’t guarantee any object or binary code level portability
• POSIX is written in terms of Standard C
• But developers can implement it in any language they like
• The standard only deals with aspects of the operating system that interact with
applications
54. Windows CE
• Windows Consumer Electronics/ Windows Embedded Compact - Microsoft's version of
Windows for handheld devices and embedded systems that use x86, ARM, MIPS and
SuperH CPUs
• Windows CE uses the same Win32 programming interface (API) as regular Windows, but
can run in less than 1MB of memory
• Windows CE conforms to the definition of a real-time operating system, with a
deterministic interrupt latency
• From Version 3 and onward, the system supports 256 priority levels and uses priority
inheritance for dealing with priority inversion. Current version: 8 (2013)
Features of Windows CE version 8:
• Windows 8 can run on ARM processors provided by TI, NVIDIA and Qualcomm
• ARM has become more powerful, and no x86 processors available today are good enough
for tablets
55. Windows CE
There are 3 major factors to consider when deciding which embedded OS to use.
1. Hardware
2. Performance
3. Cost and Support
Hardware
• The OS should support a variety of processors, CPU architectures, embedded peripherals
• OS should operate with minimum hardware requirements
• A bare-minimum Windows 8 today can run in 281 megabytes of system memory
• Consider the system properties of a development platform (Alioth, a PXA300-based dev board): it has 256 MB of
RAM (being a development platform), but the minimal memory usage of a full-blown GUI running on the
LCD with touch is far less than 32 MB
• In reality, many embedded devices are designed with 4 MB of flash and 16 MB of RAM and run
Windows CE 6.0
• Windows 8 is restricted to a particular list of high-end ARM processors.
56. Windows CE
Performance
• Each embedded device is different and unique in its performance requirements
• Some of the performance requirements
• Boot up Performance
• Power Management
• Real Time characteristics
• Security
• Instant power-up has been a requirement in almost all embedded products
• Boot time in Windows 8 has been reduced by using the hibernate option
• Windows provides an organized power management architecture whose power manager can be rewritten
to suit product needs
• With deterministic interrupt latency and multiple priority levels, Windows 8 can handle several real-time
tasks in the system
• Devices running the same Windows CE may look completely different, mainly because of the various
platforms it supports and its highly modular approach, which helps the device developer remove all
unwanted and vulnerable components from the build