OS-Chapter 3
Cooperating Processes:
Cooperating processes are processes that can affect, or be affected by, other
processes executing in the system. They work together to achieve a common task
in an operating system, interacting with each other by sharing resources such
as CPU, memory, and I/O devices to complete the task.
What is Shared Memory?
The fundamental model of inter-process communication is the shared memory
system. In a shared memory system, the cooperating processes communicate with
each other by establishing a shared memory region in their address space.
If a process wishes to initiate communication and has data to share, it creates
a shared memory region in its address space. If another process then wishes to
communicate and read the shared data, it must attach that shared memory region
to its own address space.
What is Message Passing?
In the message-passing model, processes communicate with each other by
exchanging messages. A communication link between the processes is required for
this purpose, and it must provide at least two operations: send(message) and
receive(message). Message sizes may be fixed or variable.
Comparison between Shared Memory and Message Passing
● Shared memory is mainly used for sharing data between processes, whereas
message passing is mainly used for exchanging messages.
● Shared memory offers maximum speed of computation, because communication
happens through the shared region and system calls are needed only to
establish it. Message passing takes much longer, because every message is
passed through the kernel (system calls).
● With shared memory, the code for reading and writing the shared data must be
written explicitly by the developer. With message passing, no such code is
required: the message-passing facility itself provides communication and
synchronization of the activities of the communicating processes.
● Shared memory is used on single-processor and multiprocessor systems where
the communicating processes run on the same machine and share the same
address space. Message passing is most commonly used in distributed settings,
where the communicating processes are spread over multiple machines linked by
a network.
● Shared memory is the faster communication strategy; message passing is
relatively slower.
● With shared memory, care must be taken that processes do not write to the
same address simultaneously. Message passing is useful for sharing small
quantities of data without such conflicts.
Semaphores:
A semaphore is an abstract data type designed to control access to a shared
resource by multiple threads and to prevent critical section problems in a
concurrent system such as a multitasking operating system. Semaphores are a
kind of synchronization primitive.
Race condition
A race condition in an operating system (OS) happens when multiple threads or
processes access shared resources simultaneously, leading to unpredictable
behavior. This can occur due to poor synchronization, where the order of
execution is nondeterministic. Here's a concise overview:
Definition: A race condition occurs when multiple threads or processes access
shared data simultaneously, causing unexpected results.
In OS: In multithreaded contexts, threads sharing resources can lead to race
conditions, resulting in inappropriate behavior due to lack of synchronization.
Critical Sections: Race conditions often occur inside critical sections, where
multiple threads' execution results in unpredictable outcomes due to
simultaneous access to shared variables.
Vulnerability: Race conditions are also security vulnerabilities, where multiple
threads read and write the same variable, causing data corruption or unexpected
behavior.
What is the Critical Section in OS?
Critical Section refers to the segment of code or the program that tries to access
or modify the value of the variables in a shared resource.
The section above the critical section is called the Entry Section. A process
that wants to enter the critical section must first pass through the entry
section.
The section below the critical section is called the Exit Section.
The section below the exit section is called the Remainder Section; it contains
the remaining code that is left to execute after the critical section.
What is the Critical Section Problem in OS?
When more than one process accesses or modifies a shared resource at the same
time, the final value of that resource is determined by whichever process runs
last. This is called a race condition.
Consider an example of two processes, p1 and p2, and let x = 3 be a variable
present in the shared resource. Suppose the two processes perform the following
actions:
process p1: x = x + 3   // x becomes 6
process p2: x = x - 3   // x becomes 3
After p1 runs, the value of x should be 6, but because process p2 interrupts
and subtracts 3, the value is changed back to 3. This is a synchronization
problem.
The critical section problem is to make sure that only one process should be in a
critical section at a time. When a process is in the critical section, no other
processes are allowed to enter the critical section. This solves the race
condition.
Example of Critical Section Problem
● Let us consider a scenario where money is withdrawn from the bank by
both the cashier (through a cheque) and the ATM at the same time.
● Consider an account having a balance of ₹10,000. Let us consider that
when a cashier withdraws the money, it takes 2 seconds for the balance to
be updated in the account.
● It is possible to withdraw ₹7000 from the cashier and within the balance
update time of 2 seconds, also withdraw an amount of ₹6000 from the
ATM.
● Thus, the total money withdrawn becomes greater than the balance of the
bank account.
This happened because two withdrawals occurred at the same time. If the
balance check and update form a critical section, only one withdrawal is
possible at a time, which solves this problem.
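The bank scenario above can be sketched with two Python threads, where a lock makes the balance check and update one critical section. This sketch is an illustration, not from the original notes; the names `withdraw`, `cashier`, and `atm` are invented:

```python
import threading

balance = 10000                  # opening balance in rupees
lock = threading.Lock()

def withdraw(amount):
    global balance
    with lock:                   # critical section: one withdrawal at a time
        if balance >= amount:    # check and update are now atomic
            balance -= amount

cashier = threading.Thread(target=withdraw, args=(7000,))
atm = threading.Thread(target=withdraw, args=(6000,))
cashier.start(); atm.start()
cashier.join(); atm.join()
print(balance)                   # 3000 or 4000: exactly one withdrawal succeeds
```

Whichever thread wins the lock completes its withdrawal; the other then sees an insufficient balance, so the account can never go negative.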
Critical Section Problem
The use of critical sections in a program can cause a number of issues, including
Deadlock: When two or more threads or processes wait for each other to release
a critical section, it can result in a deadlock situation in which none of the
threads or processes can move. Deadlocks can be difficult to detect and resolve,
and they can have a significant impact on a program’s performance and
reliability.
Starvation: When a thread or process is repeatedly prevented from entering a
critical section, it can result in starvation, in which the thread or process is
unable to progress. This can happen if the critical section is held for an
unusually long period of time, or if a high-priority thread or process is always
given priority when entering the critical section.
Overhead: When using critical sections, threads or processes must acquire and
release locks or semaphores, which can take time and resources. This may
reduce the program’s overall performance.
Solutions to the critical section problem must satisfy the following
requirements
1) Mutual Exclusion: When one process is executing in its critical section,
no other process is allowed to execute in its critical section.
Conditions Required for Mutual Exclusion
Mutual exclusion applies according to the following four criteria:
1. When using shared resources, mutual exclusion must be ensured between
the various processes: no two processes may be inside their critical
sections at the same time.
2. No assumptions may be made about the relative speeds of the processes.
3. No process running outside its critical section may block other
processes.
4. Every process must be able to enter its critical section within a finite
amount of time; processes must never be kept waiting in an infinite loop.
2. Progress: If no process is executing in its CS and there exist some processes
that wish to enter their CS, then the selection of the process that will enter the
CS next cannot be postponed indefinitely.
3. Bounded Waiting: There exists a bound on the number of times that other
processes are allowed to enter their critical sections after a process has made a
request to enter its critical section and before that request is granted.
4. No Assumption of Relative Speeds: The solution to the critical section
problem should not make any assumptions about the relative speeds of the
processes or the number of processors in the system
Critical Section Solution
1) Disabling interrupts
Disabling interrupts is a common approach to implementing mutual exclusion.
To achieve mutual exclusion, a process disables interrupts before entering its
critical section and enables them again after leaving it. With interrupts
disabled, the CPU cannot switch to another process. Only the kernel can enable
and disable interrupts.
This approach is simple and can be implemented with 2 assembler instructions.
However, it can have some disadvantages, including:
● Decreased performance
● Problems with real-time applications
● Increased vulnerability to crashes and data loss
Disabling interrupts is not sufficient to achieve mutual exclusion on a
multiprocessor machine. There also needs to be a way to prevent the other
processors from accessing the resource.
A lock variable
A lock variable is a software mechanism that synchronizes processes. It's a busy
waiting solution that can be used for more than two processes.
The lock variable has two possible values: 1 and 0. If the value of the lock is 1,
the critical section is occupied. If the value of the lock is 0, the critical section is
unoccupied.
A process that wants to get into the critical section first checks the value of
the lock variable. If it is 0, the process sets the lock to 1 and enters the
critical section; otherwise, it waits.
A lock variable is implemented in user mode, which means it doesn't require
support from the operating system. Note, however, that this naive scheme does
not actually guarantee mutual exclusion: between the check and the set, a
context switch may let another process also see the lock as 0, so both
processes can enter the critical section. This is why an atomic instruction
such as TSL (below) is needed.
Initially, the lock value is set to 0.
TSL instructions
The Test and Set Lock (TSL) mechanism is a synchronization technique.
1. Purpose: TSL allows a process to enter the critical section only when it executes the
TSL instruction.
2. Mechanism: TSL employs a "test and set" instruction. It reads a memory location,
stores its value in a register, and sets the memory location to a predetermined value
(usually 1), ensuring synchronization among executing processes.
3. Implementation: Processes use TSL to solve critical section problems, preventing
concurrent access and ensuring data integrity.
TSL is vital for managing shared resources in multitasking environments, enhancing system
stability and reliability.
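A test-and-set spinlock can be sketched in Python. Since Python exposes no TSL instruction, the atomic test-and-set is emulated here with a non-blocking `Lock.acquire()`; the class name `TASLock` and the counter are invented for this illustration:

```python
import threading

class TASLock:
    """Spinlock sketch: the atomic test-and-set is emulated with
    threading.Lock.acquire(blocking=False)."""
    def __init__(self):
        self._flag = threading.Lock()

    def acquire(self):
        # test-and-set: atomically try to set the flag;
        # spin (busy wait) while it was already set
        while not self._flag.acquire(blocking=False):
            pass

    def release(self):
        self._flag.release()

count = 0
lock = TASLock()

def worker():
    global count
    for _ in range(10000):
        lock.acquire()
        count += 1        # critical section
        lock.release()

threads = [threading.Thread(target=worker) for _ in range(2)]
for t in threads: t.start()
for t in threads: t.join()
print(count)   # 20000: no increment is lost
```

Because the test and the set happen as one atomic step, the lock-variable race described above cannot occur: at most one thread ever sees the flag as free.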
Strict alternation
The strict alternation approach in operating systems, also known as the turn variable
approach, is a synchronization mechanism implemented for synchronizing two processes.
Here's how it works:
1. Definition: A strict alternation or turn variable approach provides mutual exclusion
for two processes, ensuring that only one process executes at a time.
2. Functionality: Processes take turns executing, providing synchronization between
them. This approach guarantees mutual exclusion but does not ensure progress and
follows a strict alternation pattern.
3. Implementation: While one process is executing, the other waits, which
ensures bounded waiting. Strict alternation is implemented with a shared
turn variable on which each process busy-waits until it holds the turn.
The turn variable or strict alternation approach is a fundamental concept in process
synchronization, crucial for ensuring orderly execution in operating systems.
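The turn-variable mechanism can be sketched with two Python threads. This is an illustrative sketch; the names `proc`, `turn`, and `log` are invented. Under CPython the busy wait eventually yields the interpreter to the other thread, so the loop terminates:

```python
import threading

turn = 0           # whose turn it is to enter the critical section
log = []           # records the order in which processes enter

def proc(pid, other, n):
    global turn
    for _ in range(n):
        while turn != pid:       # busy wait until it is this process's turn
            pass
        log.append(pid)          # critical section
        turn = other             # hand the turn to the other process

t0 = threading.Thread(target=proc, args=(0, 1, 5))
t1 = threading.Thread(target=proc, args=(1, 0, 5))
t0.start(); t1.start()
t0.join(); t1.join()
print(log)   # [0, 1, 0, 1, ...]: the processes strictly alternate
```

The log shows the weakness as well as the guarantee: even if process 1 is ready and process 0 is busy elsewhere, process 1 cannot enter twice in a row, so progress is not ensured.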
https://www.slideshare.net/DhavalChandarana/unit-3-interprocess-
communication
Peterson’s Solution
Peterson’s Solution is a classical software-based solution to the critical section
problem. In Peterson’s solution, we have two shared variables:
● boolean flag[i]: Initialized to FALSE, initially no one is interested in
entering the critical section
● int turn: The process whose turn is to enter the critical section.
Peterson’s Solution preserves all three conditions:
● Mutual Exclusion is assured as only one process can access the critical
section at any time.
● Progress is also assured, as a process outside the critical section does not
block other processes from entering the critical section.
● Bounded Waiting is preserved as every process gets a fair chance.
Disadvantages of Peterson’s Solution
● It involves busy waiting. (In Peterson’s solution, the code statement-
“while(flag[j] && turn == j);” is responsible for this. Busy waiting is not
favored because it wastes CPU cycles that could be used to perform other
tasks.)
● It is limited to 2 processes.
● Peterson’s solution cannot be relied upon on modern CPU architectures,
because compilers and out-of-order processors may reorder the memory
accesses it depends on unless explicit memory barriers are used.
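Peterson's algorithm for two threads can be sketched in Python. This works under CPython because the GIL makes the interleaved memory accesses appear sequentially consistent; as noted above, the same code is not safe on hardware that reorders memory operations. The names `process`, `flag`, and `turn` follow the description above:

```python
import sys
import threading

sys.setswitchinterval(1e-4)     # switch threads often so busy waits stay short

flag = [False, False]           # flag[i]: process i wants to enter
turn = 0                        # which process yields when both are interested
count = 0

def process(i, n):
    global turn, count
    j = 1 - i
    for _ in range(n):
        flag[i] = True              # entry section: declare interest
        turn = j                    # politely let the other process go first
        while flag[j] and turn == j:
            pass                    # busy wait
        count += 1                  # critical section
        flag[i] = False             # exit section

threads = [threading.Thread(target=process, args=(i, 500)) for i in (0, 1)]
for t in threads: t.start()
for t in threads: t.join()
print(count)                    # 1000: no increment is ever lost
```

The `while flag[j] and turn == j:` line is the busy wait the disadvantages list refers to.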
Semaphore:
A semaphore is a low-level synchronization object: a non-negative integer
variable. The value of the semaphore indicates the number of shared resources
available in the system. Apart from initialization, the value of a semaphore
can be modified only by two functions, the wait() (P) and signal() (V)
operations.
When any process accesses the shared resources, it performs the wait() operation on
the semaphore and when the process releases the shared resources, it performs the
signal() operation on the semaphore. When a process is modifying the value of the
semaphore, no other process can simultaneously modify the value of the semaphore.
The Semaphore is further divided into 2 categories:
Binary semaphore
Binary Semaphores have two operations namely wait(P) and signal(V)
operations. Both operations are atomic. Semaphore(s) can be initialized to zero
or one.
Syntax:
// Wait (P) operation
wait(Semaphore S) {
    while (S <= 0)
        ;        // busy wait
    S--;
}

// Signal (V) operation
signal(Semaphore S) {
    S++;
}
1. A binary semaphore can take only the integer values 0 and 1.
2. A value of 1 means the semaphore is free (a signal/V has been performed),
and a value of 0 means it is held (a wait/P has been performed).
3. The major reason behind introducing binary semaphores is that it allows only
one process to enter into a critical section if they are sharing resources.
4. It cannot ensure bounded waiting because it is only a variable that retains an
integer value. It’s possible that a process will never get a chance to enter the
critical section, causing it to starve.
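A binary semaphore in use can be sketched with Python's `threading.Semaphore` initialized to 1. This is an illustrative sketch; the names `mutex`, `worker`, and `count` are invented:

```python
import threading

mutex = threading.Semaphore(1)   # binary semaphore, initialized to 1 (free)
count = 0

def worker(n):
    global count
    for _ in range(n):
        mutex.acquire()          # wait / P: take the semaphore, block if it is 0
        count += 1               # critical section
        mutex.release()          # signal / V: give it back, waking one waiter

threads = [threading.Thread(target=worker, args=(10000,)) for _ in range(3)]
for t in threads: t.start()
for t in threads: t.join()
print(count)                     # 30000: mutual exclusion preserves every update
```

Unlike the pseudocode above, this semaphore blocks waiters instead of busy waiting, which is how most operating-system semaphores behave.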
2. Counting semaphore
1. A counting semaphore is a structure that allows multiple processes to access
a shared resource simultaneously. It has a variable that can take more than
two values and a list of tasks or entities. The value of the counting
semaphore indicates the maximum number of processes that can enter the
critical section at the same time.
2. A counting semaphore uses a count that helps tasks to be acquired or
released numerous times. The value of the counting semaphore can range
over an unrestricted domain. It can take non-negative integer values.
3. A counting semaphore is initialized with the total number of resources
available. A process that wants to enter the critical section first
decrements the semaphore value by 1 and then checks whether the result has
become negative; if it has, the process must wait until a resource is
released.
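A counting semaphore limiting concurrency can be sketched as follows. This illustration (names `slots`, `active`, `peak` are invented) lets at most 3 of 10 threads use the resource at once and records the peak concurrency observed:

```python
import threading
import time

slots = threading.Semaphore(3)   # at most 3 processes in the critical section
active = 0                       # how many threads are currently inside
peak = 0                         # highest value of `active` seen
guard = threading.Lock()         # protects the bookkeeping counters

def worker():
    global active, peak
    slots.acquire()              # wait / P: take one of the 3 slots
    with guard:
        active += 1
        peak = max(peak, active)
    time.sleep(0.01)             # simulate using the shared resource
    with guard:
        active -= 1
    slots.release()              # signal / V: free the slot

threads = [threading.Thread(target=worker) for _ in range(10)]
for t in threads: t.start()
for t in threads: t.join()
print(peak)                      # never exceeds 3
```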
Counting Semaphore vs. Binary Semaphore
Here are some major differences between counting and binary semaphores:
● A counting semaphore by itself provides no mutual exclusion; a binary
semaphore provides mutual exclusion.
● A counting semaphore can take any non-negative integer value; a binary
semaphore takes only the values 0 and 1.
● A counting semaphore has more than one slot; a binary semaphore has only
one slot.
● A counting semaphore serves a set of processes sharing a resource pool; a
binary semaphore is a mutual-exclusion mechanism.
Advantages of Semaphores:
● Semaphores are machine-independent (because they are implemented in the
kernel services).
● Semaphores permit more than one thread to access the critical section,
unlike monitors.
● In semaphores there is no spinning, hence no waste of resources due to no
busy waiting.
Monitor
A monitor in an operating system is one method for achieving process
synchronization. Programming-language support lets the monitor enforce mutual
exclusion between different activities in a system. The wait() and notify()
constructs are synchronization functions available in the Java programming
language.
Syntax of monitor in OS
Monitor in os has a simple syntax similar to how we define a class, it is as
follows:
Monitor monitorName {
    variables_declaration;
    condition_variables;

    procedure p1 { ... };
    procedure p2 { ... };
    ...
    procedure pn { ... };

    {
        initializing_code;
    }
}
Monitor in an operating system is simply a class containing variable_declarations,
condition_variables, various procedures (functions), and an initializing_code block
that is used for process synchronization.
Characteristics of Monitors in OS
A monitor in os has the following characteristics:
● Only one process can run inside the monitor at a time.
● Monitors in an operating system are defined as a group of methods and fields
that are combined with a special type of package in the os.
● A program cannot access the monitor's internal variable if it is running outside
the monitor. However, a program can call the monitor's functions.
● Monitors were created to make synchronization problems less complicated.
● Monitors provide a high level of synchronization between processes.
Components of Monitor in an Operating System
The monitor is made up of four primary parts:
1. Initialization: The code for initialization is included in the package, and we just
need it once when creating the monitors.
2. Private Data: A feature of the monitor in an operating system is that it
keeps its data private. It holds all of the monitor's private data,
including private functions that may only be used within the monitor. As a
result, private fields and functions are not visible outside of the monitor.
3. Monitor Procedure: Procedures or functions that can be invoked from outside
of the monitor are known as monitor procedures.
4. Monitor Entry Queue: Another important component of the monitor is the
Monitor Entry Queue. It contains all of the threads, which are commonly
referred to as procedures only.
Condition Variables
There are two sorts of operations we can perform on the monitor's condition variables:
1. Wait
2. Signal
Consider a condition variable (y) is declared in the monitor:
y.wait(): The activity/process that applies the wait operation on a condition variable
will be suspended, and the suspended process is located in the condition variable's
block queue.
y.signal(): If an activity/process applies the signal action on the condition variable,
then one of the blocked activity/processes in the monitor is given a chance to execute.
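A monitor with a condition variable can be sketched in Python as a class whose methods all run under one lock, with `threading.Condition` playing the role of the variable y. The class `BoundedBuffer` and the producer/consumer names are invented for this illustration:

```python
import threading

class BoundedBuffer:
    """Monitor-style class: one lock guards every procedure, and a
    condition variable provides wait()/signal()."""
    def __init__(self, size):
        self._items = []
        self._size = size
        self._cond = threading.Condition()   # condition variable + monitor lock

    def put(self, item):
        with self._cond:                     # enter the monitor
            while len(self._items) == self._size:
                self._cond.wait()            # y.wait(): suspend this process
            self._items.append(item)
            self._cond.notify()              # y.signal(): wake one blocked process

    def get(self):
        with self._cond:
            while not self._items:
                self._cond.wait()
            item = self._items.pop(0)
            self._cond.notify()
            return item

buf = BoundedBuffer(2)
results = []

def consumer():
    for _ in range(5):
        results.append(buf.get())

t = threading.Thread(target=consumer)
t.start()
for i in range(5):
    buf.put(i)                               # blocks whenever the buffer is full
t.join()
print(results)   # [0, 1, 2, 3, 4]
```

The `while` loops around `wait()` re-check the condition after waking, which is the standard monitor discipline.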
Classical IPC Problems
The classical IPC (inter-process communication) problems are a set of
well-known process synchronization issues. These problems involve coordinating
multiple processes in a system to ensure proper execution. Here are some key
classical synchronization problems:
1. Bounded-buffer (or Producer-Consumer) Problem.
2. Dining-Philosophers Problem.
3. Readers and Writers Problem.
Each of these problems is summarized below.
● Bounded-buffer (or Producer-Consumer) Problem:
The bounded-buffer problem is also called the producer-consumer
problem. A solution is to create two counting semaphores, "full" and
"empty", to keep track of the current number of full and empty buffer
slots respectively. Producers produce items and consumers consume
them, but both share the same fixed-size buffer, using one slot at a
time.
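The full/empty semaphore solution can be sketched in Python. This is an illustrative sketch; the buffer size `N` and the names `producer`/`consumer` are invented:

```python
import threading
from collections import deque

N = 3
buffer = deque()
empty = threading.Semaphore(N)   # counts empty buffer slots, initially N
full = threading.Semaphore(0)    # counts full buffer slots, initially 0
mutex = threading.Lock()         # guards the buffer itself
consumed = []

def producer():
    for i in range(10):
        empty.acquire()          # wait(empty): block if no slot is free
        with mutex:
            buffer.append(i)
        full.release()           # signal(full): one more item available

def consumer():
    for _ in range(10):
        full.acquire()           # wait(full): block if nothing to consume
        with mutex:
            consumed.append(buffer.popleft())
        empty.release()          # signal(empty): one more slot free

p = threading.Thread(target=producer)
c = threading.Thread(target=consumer)
p.start(); c.start()
p.join(); c.join()
print(consumed)   # [0, 1, ..., 9] in order
```

The two counting semaphores do the blocking, while the mutex only protects the buffer data structure during each insert or remove.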
● Dining-Philosophers Problem:
The dining philosophers problem states that K philosophers are
seated around a circular table with one chopstick between each
pair of philosophers. A philosopher may eat only if he can pick up
the two chopsticks adjacent to him. Each chopstick may be picked
up by either of its adjacent philosophers, but not by both at
once. This problem involves the allocation of limited resources to
a group of processes in a deadlock-free and starvation-free
manner.
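One deadlock-free solution (resource ordering, which is one of several known approaches) can be sketched in Python; the names `philosopher`, `chopsticks`, and `meals` are invented for this illustration:

```python
import threading

K = 5
chopsticks = [threading.Lock() for _ in range(K)]
meals = [0] * K                   # how many times each philosopher has eaten

def philosopher(i, rounds):
    left, right = i, (i + 1) % K
    # Resource-ordering trick: always pick up the lower-numbered chopstick
    # first. This breaks the circular wait, so no deadlock is possible.
    first, second = (left, right) if left < right else (right, left)
    for _ in range(rounds):
        with chopsticks[first]:
            with chopsticks[second]:
                meals[i] += 1     # eat while holding both chopsticks

threads = [threading.Thread(target=philosopher, args=(i, 20)) for i in range(K)]
for t in threads: t.start()
for t in threads: t.join()
print(meals)   # every philosopher eats 20 times; the program never deadlocks
```

If every philosopher instead grabbed the left chopstick first, all five could each hold one chopstick and wait forever for the other, which is exactly the circular wait the ordering prevents.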
● Readers and Writers Problem:
Suppose that a database is to be shared among several
concurrent processes. Some of these processes may want only to
read the database, whereas others may want to update (that is,
to read and write) it. We distinguish between these two types of
processes by referring to the former as readers and to the latter
as writers. In operating systems this situation is called the
readers-writers problem. Problem parameters:
● One set of data is shared among a number of processes.
● Once a writer is ready, it performs its write. Only one
writer may write at a time.
● If a process is writing, no other process may read or write.
● If at least one reader is reading, no other process may
write.
● Readers only read; they never write.
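A classic first-readers-writers sketch in Python: the first reader locks out writers and the last reader lets them back in. The names `reader`, `writer`, `read_count`, and `resource` are invented for this illustration:

```python
import threading

read_count = 0                     # number of readers currently reading
read_count_lock = threading.Lock()
resource = threading.Lock()        # held by a writer, or by the group of readers

data = 0
reads = []

def writer(value):
    global data
    with resource:                 # writers need exclusive access
        data = value

def reader():
    global read_count
    with read_count_lock:
        read_count += 1
        if read_count == 1:
            resource.acquire()     # first reader locks out writers
    reads.append(data)             # many readers may read concurrently
    with read_count_lock:
        read_count -= 1
        if read_count == 0:
            resource.release()     # last reader lets writers back in

w = threading.Thread(target=writer, args=(42,))
w.start(); w.join()
rs = [threading.Thread(target=reader) for _ in range(4)]
for t in rs: t.start()
for t in rs: t.join()
print(reads)   # [42, 42, 42, 42]: all readers see the completed write
```

This variant favors readers: as long as any reader is active, a waiting writer cannot proceed, which is the starvation risk the problem statement warns about.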