OVERVIEW OF RTOS
1. ROBOT OPERATING SYSTEM
Department of Robotics & Automation
JSS Academy of Technical Education, Bangalore-560060
(Course Code: 21RA54)
2. Books
Text Books:
1. Silberschatz, Galvin, Gagne, Operating System concepts 6th edition, John Wiley, 2003.
2. Raj Kamal, Embedded Systems: Architecture, Programming and Design, Tata McGraw Hill, 2006.
3. P. Raghavan, Amol Lad, Sriram Neelakandan, Embedded Linux System Design and
Development, Auerbach Publications 2005.
4. Jonathan Corbet, Alessandro Rubini & Greg Kroah-Hartman, Linux Device Drivers, 3rd edition, O'Reilly, 2005.
Reference Videos:
1. https://www.youtube.com/watch?v=PEzpOembKNc
2. https://www.youtube.com/watch?v=mCs21yByQqk
3. https://www.youtube.com/watch?v=hDn4hM148V8
3. Course Learning Objectives
• Understand the fundamental concepts of Operating Systems.
• Explain the mechanisms of Operating Systems to handle processes, threads
and their communication.
• Analyze the file structure and the protection and security mechanism.
• Explain the Memory management technique to improve the CPU utilisation and
its response speed.
4. Course outcomes (Course Skill Set)
At the end of the course, students will be able to:
CO2: Explain RTOS task scheduling, task synchronisation and task communication
mechanisms
5. Continuous Internal Evaluation (CIE)
• Three IA Tests, each of 20 Marks
• Two assignments each of 20 Marks
• The sum of three tests, two assignments, will be out of 100 marks and will be scaled
down to 50 marks
The minimum passing mark for the CIE is 40% of the maximum marks (20 marks out of 50)
6. Semester End Examination(SEE)
• The question paper shall be set for 100 marks.
• The duration of SEE is 03 hours.
• The question paper will have 10 questions.
• 2 questions per module. Each question is set for 20 marks.
• The students have to answer 5 full questions, selecting one full question from each module.
• The student has to answer for 100 marks and marks scored out of 100 shall be
proportionally reduced to 50 marks.
• SEE minimum passing mark is 35% of the maximum marks (18 out of 50 marks).
• Students should secure a minimum of 40% (40 marks out of 100) in the sum total of the CIE and SEE
taken together.
8. Content
RTOS Task and Task State, Preemptive Scheduler, Process
Synchronization, Message Queues, Mailboxes, Pipes, Critical Section,
Semaphores, Classical Synchronization Problem & Deadlocks.
9. Introduction to RTOS
• A real-time operating system (RTOS) is a special-purpose operating system used in computers with strict
time constraints for performing any job.
• An RTOS often runs in an embedded system: a combination of hardware and software designed for a specific
function, which may also operate within a larger system.
• Real-time operating systems have functions similar to general-purpose OS, like Linux, Windows, or Mac,
but are designed so that a scheduler in the OS can meet specific deadlines for different tasks.
10. Introduction to RTOS
Examples: (Non-Critical Systems) Embedded system for controlling a home dishwasher.
• The embedded system may allow various options for scheduling the operation of the dishwasher
• Water temperature, type of cleaning (light or heavy), and a timer indicating when the dishwasher will start.
• The majority of embedded systems, including FAX machines, microwave ovens, wristwatches, and
networking devices such as switches and routers, do not qualify as safety-critical.
(Critical Systems): Safety-critical systems.
• In a safety-critical system, incorrect operation, usually due to a missed deadline, results in some
"catastrophe."
• Examples of safety-critical systems include weapons, antilock brake, flight management, and health-related
embedded systems, such as pacemakers.
• In these scenarios, the real-time system must respond to events by the specified deadlines; otherwise,
serious injury or worse occurs.
11. Classification of RTOS
1. A hard real-time system has the most stringent requirements, guaranteeing that
critical real-time tasks are completed within their deadlines. Safety-critical
systems are typically hard real-time systems.
2. A soft real-time system is less restrictive, simply providing that a critical real-
time task will receive priority over other tasks and will retain that priority until it is
completed.
• Many commercial operating systems, as well as Linux, provide soft real-time
support.
12. Introduction to RTOS
• Examples of RTOS: Airline traffic control systems, Command Control Systems, airline reservation
systems, pacemakers, Network Multimedia Systems, robots, etc.
13. System Characteristics
1. Single purpose
2. Small size
3. Inexpensively mass-produced
4. Specific timing requirements
1. The real-time system serves only one purpose, such as controlling antilock brakes or delivering music on an
MP3 player.
2. Many real-time systems exist in environments where physical space is constrained (e.g., the space in a
wristwatch or a microwave oven is considerably less than what is available in a desktop computer).
3. As a result of space constraints, most real-time systems lack both the CPU processing power and the
amount of memory available in standard desktop PCs; many run on 8- or 16-bit processors.
4. Real-time systems might have less physical memory (RAM) than general-purpose systems.
14. Task Scheduling in a RTOS
Task
• A task is like a process or thread in an OS.
• An application program can be viewed as a program consisting of tasks.
• A task has various states that the OS controls.
• Task is the term used for a process in an RTOS for embedded systems.
• A task runs when it is scheduled to run by the OS (kernel).
• A task requests resources through system calls, or receives and sends messages through OS functions.
• A task runs by executing its instructions, and continuous changes of its state take place as the program
counter (PC) changes.
15. Task Scheduling in a RTOS
Task
The task is controlled by
(i) the OS process-scheduling mechanism, which lets it execute on the CPU, and
(ii) the OS resource-management mechanism, which lets it use the system memory and other system
resources such as the network, files, the display or a printer.
16. Task Scheduling in a RTOS
Task State
(i) Idle state [Not attached or not registered]
(ii) Ready state [Attached or registered]
(iii) Running state
(iv) Blocked (waiting) state
(v) Delayed for a preset period
The number of possible states depends on the RTOS.
18. Task Scheduling in a RTOS
Task State
1. Idle (created) state: A task has been created, and memory has been
allotted to its structure; however, it is not ready and is not schedulable by
the kernel.
2. Ready (Active) State: The created task is ready and is schedulable by the
kernel but not running at present, as another higher priority task is
scheduled to run and gets the system resources at this instance.
3. Running state: the task is executing its code and holds the system resources at
this instance.
• It will run until it needs some IPC, waits for an event, or is preempted by
another higher-priority task.
19. Task Scheduling in a RTOS
Task State
4. Blocked (waiting) state: execution of the task's code is suspended after
the needed parameters are saved into its context.
5. Deleted (finished) state: the memory allotted to the created task's
structure is de-allocated, freeing the memory.
20. Preemptive Scheduling
• A preemptive scheduler is a type of CPU scheduler that has the ability to interrupt a currently executing
process and move it back to the ready queue.
• The primary characteristic of preemptive scheduling is that it allows the operating system to forcibly
suspend a running process, preempt its control of the CPU, and give the CPU to another process.
21. Preemptive Scheduling
• Preemptive scheduling is used when a process switches from a running state to a ready state or from a
waiting state to a ready state.
• The CPU and resources are assigned to the process for a particular time and then taken away.
• If the process still has remaining CPU execution time, it returns to the ready queue.
• The process remains in the ready queue until it is given a chance to execute again.
22. Preemptive Scheduling
• When a high-priority process arrives in the ready queue, it doesn't have to wait for the running process to
finish its burst time.
• Instead, the running process is interrupted during its execution and placed in the ready queue until the
high-priority process is done with the resources.
• As a result, each process in the ready queue gets some CPU time.
• Preemption increases the overhead of switching processes between the running and ready states, but it
also increases scheduling flexibility. Preemptive scheduling may or may not include SJF and Priority scheduling.
24. Preemptive Scheduling
1. First, process P2 arrives at time 0, so the CPU is assigned to process P2.
2. While process P2 is running, process P3 arrives at time 1; the remaining time for process P2
(5 ms) is greater than the time needed by process P3 (4 ms), so the processor is assigned to P3.
3. While process P3 is running, process P1 arrives at time 2; the remaining time for process P3
(3 ms) is less than the time needed by processes P1 (4 ms) and P2 (5 ms), so P3
continues execution.
4. While process P3 continues, process P0 arrives at time 3. P3's remaining time (2 ms)
equals P0's required time (2 ms), so process P3 continues execution.
5. When process P3 finishes, the CPU is assigned to P0, which has a shorter burst time than the
other processes.
6. After process P0 completes, the CPU is assigned to process P1 and then to process P2.
25. Process Synchronization
• In a multi-process system, synchronization is necessary to ensure data consistency and integrity, and to
avoid the risk of deadlocks and other synchronization problems.
• Process synchronization plays a crucial role in ensuring the correct and efficient functioning of multi-process
systems.
On the basis of synchronization, processes are categorized into two types:
1. Independent Process: The execution of one process does not affect the execution of other processes.
2. Cooperative Process: A process that can affect or be affected by other processes executing in the system.
26. Process Synchronization
• Process synchronization is helpful when multiple processes are running at the same time and more than
one process has access to the same data or resources at the same time.
• When two or more processes access the same data or resources at the same time, it can cause
data inconsistency.
• To remove this data inconsistency, processes should be synchronized with each other.
27. Process Synchronization
Example
• A bank account has a current balance of 500.
• Two users have access to that account.
• User 1 and User 2 are both trying to access the balance.
• If process 1 is a withdrawal and process 2 is a balance check, and both occur at the same time, then a
user might read the wrong current balance.
• To avoid this kind of data inconsistency, process synchronization in the OS is very helpful.
28. Process Synchronization
How Process Synchronization in OS works
• Process 1 is trying to write the shared data while Process 2 and Process 3 are trying to read the same
data, so there is a high chance that Process 2 and Process 3 get the wrong data.
Different sections of a program
• Entry Section:- used to decide the entry of the process
• Critical Section:- used to make sure that only one process access and modifies the shared data or resources.
• Exit Section:- used to allow a process waiting in the entry section to proceed, and to make sure that
finished processes are removed from the critical section.
• Remainder Section:- The remainder section contains other parts of the code which are not in the Critical or
Exit sections.
29. Process Synchronization
Race Condition (Timing of Events)
• A race condition occurs when more than one process tries to access and modify the same shared data or
resources; because many processes try to modify the data at once, there is a high chance that a process
gets the wrong result or data.
• A situation whose outcome depends on the timing of events is called a race condition.
• The value of the shared data depends on the execution order of the process as many processes try to
modify the data or resources at the same time.
• The race condition is associated with the critical section
30. Process Synchronization
• Process synchronization involves coordinating the execution of multiple processes to ensure they access
shared resources in a controllable and predictable manner
• The main objective of process synchronization is to ensure that multiple processes access shared
resources without interfering with each other and to prevent the possibility of inconsistent data due to
concurrent access.
• To achieve this, various synchronization techniques used are;
1. Semaphores
2. Monitors
3. Critical sections.
31. Process Synchronization
Critical Section Problem
• The critical section makes sure that only one process at a time has access to shared data or resources,
and only that process can modify that data.
• When many processes try to modify the shared data or resources, the critical section allows only a single
process to access and modify them.
• Two functions are very important for the critical section: wait() and signal().
• The wait() function handles the entry of processes into the critical section.
• The signal() function is used to release the critical section when a process finishes.
32. Process Synchronization
Critical Section Problem
• Consider a system consisting of n processes {P0, P1, ..., Pn-1}.
• Each process has a segment of code called a critical section, in which the process may change common
variables, update a table, write a file, and so on.
• The important feature of the system is that when one process executes in its critical section, no other
process is allowed to execute in its critical section. (no two processes are executing in their critical sections
at the same time).
• The critical-section problem is to design a protocol that the processes can use to cooperate.
• Each process must request permission to enter its critical section.
• The section of code implementing this request is the entry section.
• The critical section may be followed by an exit section.
• The remaining code is the remainder section.
• The general structure of a typical process Pi is shown in Figure
• The entry section and exit section are enclosed in boxes to highlight these important segments of code.
34. Process Synchronization
Critical Section Problem
A solution to the critical-section problem must satisfy the following three requirements:
1. Mutual exclusion. If process Pi is executing in its critical section, then no other processes can be
executing in their critical sections.
2. Progress. If no process is executing in its critical section and some processes wish to enter their critical
sections, then only those processes that are not executing in their remainder sections can participate in
the decision on which will enter its critical section next, and this selection cannot be postponed
indefinitely.
3. Bounded waiting. There exists a bound, or limit, on the number of times that other processes are
allowed to enter their critical sections after a process has made a request to enter its critical section
and before that request is granted.
35. Process Synchronization
• Semaphores are integer variables used to solve the critical section problem by means of two atomic
operations, wait and signal, which are used for process synchronization.
• Semaphores are integer variables shared by multiple processes.
• The Semaphore cannot be negative. The least value for a Semaphore is zero (0).
• The Maximum value of a Semaphore can be anything.
• The Semaphores usually have two operations.
The two Semaphore operations are:
• Wait ( )
• Signal ( )
Semaphore
36. Process Synchronization
Semaphore
• The Wait Operation is used for deciding the condition for the process to enter the critical section or wait for
execution of process.
• (Wait operation / Sleep operation / Down operation / Decrease operation / P Function)
• The Wait Operation works on the basis of Semaphore or Mutex Value (Mutual Exclusion).
Basic Algorithm of Wait Operation
P (Semaphore value)
{
If the value of the Semaphore is greater than zero (positive), decrement it and allow the process to enter.
If the value of the Semaphore is zero, make the process wait.
}
Wait
wait(S)
{
while (S <= 0)
; // busy-wait until S becomes positive
S--;
}
• The wait operation decrements the value of its argument S once S is positive.
• While S is zero, the operation waits and the process cannot proceed.
37. Process Synchronization
Semaphore
Signal
• The signal operation increments the value of its argument S, allowing a waiting process to proceed.
signal(S)
{
S++;
}
38. Classic Problems of Synchronization
• Below are some of the classical problems that illustrate process synchronization issues in systems where
cooperating processes are present.
• These problems are used for testing nearly every newly proposed synchronization scheme
1. Bounded Buffer (Producer-Consumer) Problem
2. Dining Philosophers Problem
3. The Readers Writers Problem
39. Classical Problems of Synchronization
Bounded Buffer (Producer-Consumer) Problem
• There is a pool of n buffers. Each buffer is capable of holding one item.
• A producer produces items and places them in the buffers.
• A consumer consumes items from the buffers.
• When all the buffers are full, the producer has no place to place the item, and therefore, the producer waits.
• The consumer waits when the buffers are empty.
• As the buffers are shared between the producer and the consumer, synchronization must be provided between the
producer and the consumer.
• That is, the producer and the consumer should not access the buffers simultaneously.
• Access to the buffers should be done atomically to avoid inconsistencies in the data stored in them.
• The solution to this bounded-buffer problem using semaphores is as follows:
40. Classic Problems of Synchronization
Bounded Buffer (Producer-Consumer) Problem
Write a solution using semaphores that prevents overflow for the producer, prevents underflow for the
consumer, and provides a critical section for the buffer.
• The solution to this problem is to create two counting semaphores, "full" and "empty", to keep track of the
current number of full and empty buffers respectively.
• Producers produce items and consumers consume them, but both use one buffer slot at a time.
• Mutex – a binary semaphore used to acquire and release a lock on the buffer.
• Empty – a counting semaphore whose initial value is the number of slots in the buffer, since initially all slots are empty.
• Full – a counting semaphore whose initial value is 0.
42. Dining Philosophers Problem
The problem (Resource allocation)
• Five philosophers (processes) are sitting around a circular table, and their job is
to think and eat alternately.
• There is a bowl of rice for each philosopher and 5 forks /chopsticks (resources
have to be shared between the processes).
• To eat, a philosopher needs both a right chopstick and a left chopstick.
• A philosopher can only eat if the philosopher's left and right chopsticks are
available.
• In case both the left and right chopsticks of the philosopher are not available,
then the philosopher puts down their (either left or right) chopstick and starts
thinking again.
44. Dining Philosophers Problem
The solution (using semaphores)
• One simple solution is to represent each fork / chopstick with a semaphore.
• A philosopher tries to grab a fork / chopstick by executing a wait () operation on
that semaphore
• He releases his fork/chopstick by executing the signal() operation on the
semaphore
Thus, the shared data are
• Semaphore chopstick[5];
• Where all the elements of chopstick are initialized to 1.
45. Dining Philosophers Problem
The solution (using semaphores)
• This solution guarantees that no two neighbours are eating simultaneously;
however, it could still create a deadlock.
• Suppose all 5 philosophers become hungry simultaneously and each grabs his
left chopstick; all the elements of chopstick will now be equal to 0.
• When each philosopher tries to grab his right chopstick, he will be delayed
forever.
46. Dining Philosophers Problem
The solution (using semaphores)
The solution to avoid Deadlock
• Allow at most 4 philosophers to be sitting simultaneously at the table
• Allow a philosopher to pick up his chopsticks only if both chopsticks are available (To do this he must
pick them up in a critical section)
• Use an asymmetric solution, i.e., an odd philosopher picks up first his left chopstick and then his right
chopstick, whereas an even philosopher picks up first his right chopstick and then his left chopstick.
47. The Readers Writers Problem
• A database is to be shared among several concurrent processes
• Some of these processes may want only to read the database, whereas others may want to update (write)
the database
• We distinguish these two types of processes by referring to the former as Readers and latter as Writers
• If two readers access the shared data simultaneously, no adverse effects will result.
• If a writer and a reader (or another writer) access the database simultaneously, chaos may result.
• To ensure these difficulties do not arise, we require that the writers have exclusive access to the shared
database.
49. The Readers Writers Problem
The solution (using semaphores)
Use of two semaphores and an integer variable
1. mutex, a binary semaphore (initialized to 1) used to ensure mutual exclusion when readcount
is updated, i.e., when any reader enters or exits the critical section
2. wrt, a semaphore (initialized to 1) common to both reader and writer processes
3. readcount: an integer variable (initialized to 0) that keeps track of how many processes are currently
reading the data.
50. Message Queues
• A message queue is an inter-process communication (IPC) mechanism that allows processes to
exchange data in the form of messages.
• It allows processes to communicate asynchronously: messages sent to the queue are stored until they
are processed, and are deleted after being processed.
51. Message Queues
• The message queue is a buffer used in non-shared memory environments, where tasks communicate
by passing messages to each other rather than accessing shared variables.
• Tasks share a common buffer pool.
• The message queue is an unbounded FIFO queue protected from concurrent access by different
threads.
• Many tasks can write messages into the queue, but only one can read messages from the queue at a
time.
• The reader waits on the message queue until there is a message to process.
• Messages can be of any size.
52. Steps to Perform IPC using Message Queues
Message Queues
• A message queue is a linked list of messages stored within the kernel and identified by a message queue
identifier.
• Below are the following steps to perform communication using message queues.
1. A new queue is created, or an existing queue is opened by msgget().
2. New messages are added to the end of a queue by msgsnd().
3. Messages are fetched from a queue by msgrcv().
4. Perform control operations on the message queue msgctl().
53. Steps to Perform IPC using Message Queues
Message Queues
• ftok(): is used to generate a unique key.
• msgget(): either returns the message queue identifier for a newly created
message queue or returns the identifiers for a queue which exists with the
same key value.
• msgsnd(): Data is placed onto a message queue by calling msgsnd().
• msgrcv(): messages are retrieved from a queue.
• msgctl(): It performs various operations on a queue. Generally, it is used to
destroy message queues.
System calls used for message queues
54. Steps to Perform IPC using Message Queues
Message Queues
1. A new queue is created, or an existing queue is opened by msgget().
2. New messages are added to the end of a queue by msgsnd().
[Every message has a positive long integer type field, a non-negative length, and the actual data bytes (corresponding to the
length), all specified to msgsnd() when the message is added to a queue.]
3. Messages are fetched from a queue by msgrcv().
[All processes can exchange information through access to a common system message queue. The sending process places a
message onto a queue that another process can read. Each message is given an identification or type so that processes can
select the appropriate message. The process must share a common key to gain access to the queue in the first place.]
4. Perform control operations on the message queue msgctl().
55. Mailboxes
• Tasks can also communicate by sending messages via mailboxes
• Mutual exclusion of the mailbox is handled by the operating system
• A mailbox is a special memory location that one or more tasks can use to transfer data, or generally for
synchronization
The tasks rely on the kernel to allow them to
• Write to the mailbox via a post operation
• Read from it via a pend operation
• Direct access to any mailbox is not allowed
• A mailbox can only contain one message
56. Mailboxes
• The important difference between the pend operation and simply polling the mailbox location is that the
pending task is suspended while waiting for the data to appear. (no CPU time is wasted for polling the
mailbox)
The mail that is passed via the mailbox can be
• a single piece of data or
• a pointer to a data structure
57. Mailboxes
• Although several tasks can pend on the same mailbox
• Only one task can receive the message
A waiting list is associated with each mailbox
• A task desiring a message from an empty mailbox is suspended
and placed on the waiting list until a message is received.
58. Mailboxes
Generally, three types of operations can be performed on a mailbox
• Initialize (with or without a message)
• Deposit a message (POST)
• Wait for a message (PEND)
59. Mailboxes
• In general, mailboxes are much like queues.
• The RTOS has functions to create, to write and to read from mailboxes, and functions to check whether the mailbox
contains any messages and to destroy the mailbox if it is no longer needed.
The details of mailboxes are different in different RTOSs.
• Some RTOSs allow a certain number of messages in each mailbox, set when you create the mailbox; others allow only one
message in a mailbox at a time.
• Once one message is written to a mailbox under these systems, the mailbox is full; no other message can be written to the
mailbox until the first one is read.
• In some RTOSs, the number of messages in each mailbox is unlimited.
• There is a limit to the total number of messages that can be in all of the mailboxes in the system, but these messages will be
distributed into the individual mailboxes as they are needed.
• In some RTOSs, you can prioritize mailbox messages: higher-priority messages will be read before lower-priority
messages, regardless of the order in which they are written into the mailbox.
60. Pipes
• The pipe() system call in Unix-like operating systems facilitates interprocess communication by creating
a unidirectional communication channel between two processes.
• It allows one process to write data into the pipe, while another process can read from it.
• This mechanism is particularly useful for achieving coordination and data transfer between processes,
such as in pipelines or filters.
• Pipes are a fundamental building block for implementing more complex communication and
synchronization mechanisms in Unix-like operating systems such as Linux and macOS.
• They provide a way for processes to exchange data without the need for shared memory or explicit file
operations, enhancing the modularity and efficiency of process communication in a multitasking
environment.
61. Pipes
• Pipes in OS are a mechanism that allows two or more processes to communicate and share data.
• It enables the flow of data from the output (stdout) of one process directly into the input (stdin) of another
process without the need for intermediate files or temporary storage.
• They are represented by the | symbol in the command line.
62. Pipes
How a pipe in OS works:
• Process A generates some output data and sends it to the standard output (stdout).
• Process B, which is running concurrently or as a separate process, reads from its standard input (stdin).
• By connecting the stdout of Process A to the stdin of Process B using the | symbol in the command line,
the data flows directly from Process A to Process B without being written to a file or stored in memory.