INTER PROCESS
COMMUNICATION
PROCESS
TYPES OF PROCESS
A system can have two types of processes -
◦ Independent Process
◦ Cooperating Process
An independent process is not affected by the execution of other processes, while a cooperating
process can be affected by other executing processes.
INTER PROCESS COMMUNICATION
Inter-process communication (IPC) is a mechanism that allows processes to communicate with
each other and synchronize their actions.
Interprocess Communication or IPC provides a mechanism to exchange data and information
across multiple processes and lets different programs run in parallel, share data, and
communicate with each other.
The communication between these processes can be seen as a method of co-operation between
them.
Why Interprocess Communication is Necessary
◦ Computational Speedup
◦ Modularity
◦ Information and data sharing
◦ Privilege separation
◦ Processes can communicate with each other and synchronize their action.
INTER PROCESS COMMUNICATION APPROACHES
Shared Memory
◦ Interprocess communication using shared memory requires communicating processes to establish a
region of shared memory.
◦ Typically, a shared-memory region resides in the address space of the process creating the shared-
memory segment.
◦ Other processes that wish to communicate using this shared-memory segment must attach it to their
address space (see the sketch below).
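The slide describes the mechanism abstractly; here is a minimal sketch, assuming the POSIX shared-memory API (shm_open, ftruncate, mmap) on a UNIX-like system. The segment name /demo_shm is an arbitrary example, and error handling is omitted for brevity.

```c
/* Sketch: a parent creates a POSIX shared-memory segment; a forked
 * child, which inherits the mapping, reads what the parent wrote.
 * Compile on Linux with: gcc shm_demo.c -o shm_demo (-lrt on older glibc) */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <sys/wait.h>
#include <unistd.h>

#define SHM_NAME "/demo_shm"   /* arbitrary example name */
#define SHM_SIZE 4096

int main(void) {
    int fd = shm_open(SHM_NAME, O_CREAT | O_RDWR, 0600); /* create the segment */
    ftruncate(fd, SHM_SIZE);                             /* set its size */
    char *region = mmap(NULL, SHM_SIZE, PROT_READ | PROT_WRITE,
                        MAP_SHARED, fd, 0);              /* attach to our address space */

    strcpy(region, "hello via shared memory");           /* parent writes first */
    if (fork() == 0) {                                   /* child sees the same region */
        printf("child read: %s\n", region);
        return 0;
    }
    wait(NULL);
    shm_unlink(SHM_NAME);                                /* remove the segment */
    return 0;
}
```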
Producer-Consumer problem
◦ There are two processes: Producer and Consumer.
◦ The Producer produces items and the Consumer consumes them.
◦ The two processes share a common space or memory location known as a buffer where the item
produced by the Producer is stored and from which the Consumer consumes the item if needed.
TWO TYPES OF PROBLEMS
◦ Unbounded buffer problem, in which the Producer can keep producing items and there is no limit
on the size of the buffer.
◦ Bounded buffer problem, in which the Producer can produce only up to a certain number of items
before it must wait for the Consumer to consume some (a sketch of this case follows below).
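A sketch of the bounded-buffer case, assuming the classic circular-buffer scheme with in/out indices in which one slot is left empty to distinguish "full" from "empty". C11 atomics make the busy-waiting loops well-defined; a real implementation would block instead of spinning.

```c
/* Sketch: bounded-buffer producer/consumer over a shared circular buffer.
 * in == out means "empty"; (in + 1) % SIZE == out means "full". */
#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

#define SIZE 8
static int buffer[SIZE];
static _Atomic int in = 0, out = 0;   /* next write / next read index */

static void *producer(void *arg) {
    (void)arg;
    for (int item = 1; item <= 20; item++) {
        while ((in + 1) % SIZE == out)
            ;                          /* buffer full: busy-wait */
        buffer[in] = item;
        in = (in + 1) % SIZE;
    }
    return NULL;
}

static void *consumer(void *arg) {
    (void)arg;
    for (int n = 0; n < 20; n++) {
        while (in == out)
            ;                          /* buffer empty: busy-wait */
        printf("consumed %d\n", buffer[out]);
        out = (out + 1) % SIZE;
    }
    return NULL;
}

int main(void) {
    pthread_t p, c;
    pthread_create(&p, NULL, producer, NULL);
    pthread_create(&c, NULL, consumer, NULL);
    pthread_join(p, NULL);
    pthread_join(c, NULL);
    return 0;
}
```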
Message Passing
◦ When two or more processes participate in inter-process communication, each process sends
messages to the others via the kernel.
◦ For example, Process A sends a message such as “M” to the OS kernel; the message is then read by
Process B.
◦ A communication link is required between the two processes for successful message exchange, and
there are several ways to create these links.
Message Passing
A message-passing facility provides at least two operations:
◦ send(message) and receive(message)
• Establish a communication link (if a link already exists, no need to establish it again.)
• Start exchanging messages using basic primitives.
We need at least two primitives (see the sketch below):
– send(message, destination) or send(message)
– receive(message, host) or receive(message)
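One hedged way to realize these primitives on a UNIX-like system is a kernel-managed socket pair: the socket calls send()/recv() stand in for the abstract send(message)/receive(message), with the kernel carrying each message between the two processes. The message text and variable names are illustrative only.

```c
/* Sketch: send(message)/receive(message) via a UNIX-domain datagram
 * socket pair.  The kernel queues each message; the forked child
 * receives what the parent sent. */
#include <stdio.h>
#include <sys/socket.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    int sv[2];
    socketpair(AF_UNIX, SOCK_DGRAM, 0, sv);   /* kernel-managed link */

    if (fork() == 0) {                        /* child = receiver */
        char msg[64];
        ssize_t n = recv(sv[1], msg, sizeof msg - 1, 0);  /* receive(message) */
        msg[n] = '\0';
        printf("received: %s\n", msg);
        return 0;
    }
    send(sv[0], "M", 1, 0);                   /* send(message) */
    wait(NULL);
    return 0;
}
```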
Message Passing
Direct Communication
In direct communication, each process that wants to communicate must explicitly name the
recipient or sender of the communication.
In this scheme, the send() and receive() primitives are defined as:
• send(P, message)—Send a message to process P.
• receive(Q, message)—Receive a message from process Q.
A communication link in this scheme has the following properties:
• A link is established automatically between every pair of processes that want to communicate.
◦ The processes need to know only each other’s identity to communicate.
◦ A link is associated with exactly two processes.
• Between each pair of processes, there exists exactly one link.
Message Passing
Indirect Communication
With indirect communication, the messages are sent to and received from mailboxes, or ports.
A mailbox can be viewed abstractly as an object into which messages can be placed by
processes and from which messages can be removed.
◦ send(A, message)—Send a message to mailbox A.
◦ receive(A, message)—Receive a message from mailbox A.
Steps
• Create a new mailbox.
• Send and receive messages through the mailbox.
• Delete a mailbox. (These steps are sketched below using a POSIX message queue.)
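POSIX message queues behave like named mailboxes, so the three steps map directly onto mq_open, mq_send/mq_receive, and mq_unlink. A minimal sketch follows; the queue name /demo_mq and the attribute values are arbitrary example choices (link with -lrt on Linux).

```c
/* Sketch: a mailbox via a POSIX message queue.
 * mq_open creates the mailbox, mq_send/mq_receive exchange messages,
 * mq_unlink deletes it.  Compile with: gcc mq_demo.c -o mq_demo -lrt */
#include <fcntl.h>
#include <mqueue.h>
#include <stdio.h>
#include <string.h>
#include <sys/wait.h>
#include <unistd.h>

#define MQ_NAME "/demo_mq"    /* arbitrary example name */

int main(void) {
    struct mq_attr attr = { .mq_maxmsg = 10, .mq_msgsize = 64 };
    mqd_t mq = mq_open(MQ_NAME, O_CREAT | O_RDWR, 0600, &attr); /* create mailbox A */

    if (fork() == 0) {                                 /* child: receive(A, message) */
        char msg[64];
        ssize_t n = mq_receive(mq, msg, sizeof msg, NULL);
        printf("from mailbox: %.*s\n", (int)n, msg);
        return 0;
    }
    mq_send(mq, "hello mailbox", strlen("hello mailbox"), 0); /* send(A, message) */
    wait(NULL);
    mq_close(mq);
    mq_unlink(MQ_NAME);                                /* delete the mailbox */
    return 0;
}
```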
Message Passing
Synchronization
• Blocking send: The sending process is blocked until the message is received by the receiving
process or by the mailbox.
• Nonblocking send: The sending process sends the message and resumes operation.
• Blocking receive: The receiver blocks until a message is available.
• Nonblocking receive: The receiver retrieves either a valid message or a null (see the fragment below).
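With POSIX message queues, for instance, the blocking/nonblocking choice is made when the descriptor is opened: a plain mq_receive blocks until a message arrives, while a descriptor opened with O_NONBLOCK fails immediately with EAGAIN on an empty queue, which corresponds to the "null" case above. A fragment, assuming the /demo_mq queue from the earlier sketch:

```c
/* Fragment: blocking vs. nonblocking receive on a POSIX message queue
 * (assumes the /demo_mq queue created in the mailbox sketch above). */
#include <errno.h>
#include <fcntl.h>
#include <mqueue.h>
#include <stdio.h>

void demo(void) {
    char msg[64];

    /* Blocking receive: the call sleeps until a message is available. */
    mqd_t mq_b = mq_open("/demo_mq", O_RDONLY);
    mq_receive(mq_b, msg, sizeof msg, NULL);

    /* Nonblocking receive: returns -1 with errno == EAGAIN if empty. */
    mqd_t mq_nb = mq_open("/demo_mq", O_RDONLY | O_NONBLOCK);
    if (mq_receive(mq_nb, msg, sizeof msg, NULL) == -1 && errno == EAGAIN)
        printf("no message available (the \"null\" case)\n");
}
```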
Buffering
Whether communication is direct or indirect, messages exchanged by communicating processes reside in a
temporary queue. Basically, such queues can be implemented in three ways:
• Zero capacity: The queue has a maximum length of zero; thus, the link cannot have any messages waiting
in it. In this case, the sender must block until the recipient receives the message.
• Bounded capacity: The queue has finite length n; thus, at most n messages can reside in it. If the queue is
not full when a new message is sent, the message is placed in the queue (either the message is copied or a
pointer to the message is kept), and the sender can continue execution without waiting. The link’s capacity is
finite, however. If the link is full, the sender must block until space is available in the queue.
• Unbounded capacity: The queue’s length is potentially infinite; thus, any number of messages can wait in
it. The sender never blocks. (The sketch below makes the bounded case concrete by measuring a pipe’s
capacity.)
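A pipe is a concrete bounded-capacity link: writes succeed while the kernel buffer has room, and a blocking writer sleeps once it is full. The sketch below makes the write end nonblocking and counts how many bytes fit before the kernel refuses; the measured figure is system-dependent (64 KiB is a common Linux default).

```c
/* Sketch: demonstrate bounded capacity with a pipe.  The write end is
 * made nonblocking; writes succeed until the kernel buffer fills, then
 * fail with EAGAIN -- a blocking writer would sleep at that point. */
#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void) {
    int fd[2];
    pipe(fd);
    fcntl(fd[1], F_SETFL, O_NONBLOCK);   /* don't block when full */

    long total = 0;
    char byte = 'x';
    while (write(fd[1], &byte, 1) == 1)
        total++;                         /* count until the buffer is full */
    if (errno == EAGAIN)
        printf("pipe capacity: %ld bytes\n", total);
    return 0;
}
```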
Pipes
◦ Pipes are a type of data channel commonly used for one-way communication between two processes.
◦ A single pipe is half-duplex: data flows only from the writing process to the reading process.
◦ To achieve full duplex, a second pipe is required; two pipes together create a bidirectional data
channel between the two processes, while one pipe creates a unidirectional channel.
◦ Pipes are widely used on UNIX-like operating systems; Windows supports anonymous and named pipes as well.
◦ In the typical arrangement, one process sends a message into the pipe, and the other process
retrieves the message and, for example, writes it to the standard output (see the sketch below).
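A minimal sketch of that flow using the POSIX pipe() call: the parent writes a message into one end, and the forked child reads it from the other end and prints it to standard output.

```c
/* Sketch: unidirectional parent -> child communication through a pipe. */
#include <stdio.h>
#include <string.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    int fd[2];
    pipe(fd);                         /* fd[0] = read end, fd[1] = write end */

    if (fork() == 0) {                /* child: reader */
        close(fd[1]);                 /* close unused write end */
        char buf[64];
        ssize_t n = read(fd[0], buf, sizeof buf - 1);
        buf[n] = '\0';
        printf("%s\n", buf);          /* write the message to standard output */
        return 0;
    }
    close(fd[0]);                     /* parent: writer, close unused read end */
    const char *msg = "hello through the pipe";
    write(fd[1], msg, strlen(msg));
    close(fd[1]);
    wait(NULL);
    return 0;
}
```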
Signal
◦ A signal is a facility that allows processes to communicate: it is a way of telling a process that it
needs to do something.
◦ A process can send a signal to another process, and delivery of a signal interrupts the receiving
process, which then runs a handler (or the default action) for that signal (see the sketch below).
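A minimal sketch on a UNIX-like system: the parent installs a handler for SIGUSR1 (a user-defined signal) and waits; the forked child interrupts it with kill(). SIGUSR1 is blocked around the wait so the notification cannot be lost.

```c
/* Sketch: one process interrupting another with a signal. */
#include <signal.h>
#include <stdio.h>
#include <sys/wait.h>
#include <unistd.h>

static volatile sig_atomic_t got_signal = 0;

static void handler(int signo) {
    (void)signo;
    got_signal = 1;                       /* only async-signal-safe work here */
}

int main(void) {
    struct sigaction sa = {0};
    sa.sa_handler = handler;
    sigemptyset(&sa.sa_mask);
    sigaction(SIGUSR1, &sa, NULL);        /* install the handler */

    sigset_t block, old;
    sigemptyset(&block);
    sigaddset(&block, SIGUSR1);
    sigprocmask(SIG_BLOCK, &block, &old); /* hold SIGUSR1 until we're waiting */

    if (fork() == 0) {
        kill(getppid(), SIGUSR1);         /* child interrupts its parent */
        return 0;
    }
    while (!got_signal)
        sigsuspend(&old);                 /* atomically unblock and sleep */
    printf("parent: interrupted by SIGUSR1\n");
    wait(NULL);
    return 0;
}
```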
Process Synchronization
◦ Process Synchronization is the coordination of execution of multiple processes in a multi-process
system to ensure that they access shared resources in a controlled and predictable manner.
◦ It aims to resolve the problem of race conditions and other synchronization issues in a
concurrent system.
◦ The main objective of process synchronization is to ensure that multiple processes access shared
resources without interfering with each other, and to prevent the possibility of inconsistent data
due to concurrent access.
◦ To achieve this, various synchronization techniques such as semaphores, monitors, and critical
sections are used.
RACE CONDITION
◦ A race condition is a problem that occurs in an operating system (OS) when two or more
processes or threads are executing concurrently. The outcome of their execution depends on the
order in which they are executed.
◦ A race condition can occur when:
◦ Many threads share a resource or execute the same piece of code
◦ Many processes share data with each other and access the data concurrently
◦ If not handled properly, this can lead to an undesirable situation where the output state is
dependent on the order of execution of the threads.
◦ A race condition can lead to unexpected behavior or system crashes. It becomes a bug when
events do not happen in the order the programmer intended (a demonstration follows below).
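A classic demonstration, assuming POSIX threads: two threads increment a shared counter without synchronization. Because counter++ compiles to a load-add-store sequence, increments from the two threads can interleave and be lost, so the printed total is usually well below the expected value.

```c
/* Sketch: a data race.  Two threads each increment a shared counter
 * 1,000,000 times without synchronization; increments can be lost. */
#include <pthread.h>
#include <stdio.h>

#define N 1000000
static long counter = 0;            /* shared, unprotected */

static void *worker(void *arg) {
    (void)arg;
    for (int i = 0; i < N; i++)
        counter++;                  /* read-modify-write: not atomic */
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("expected %d, got %ld\n", 2 * N, counter);
    return 0;
}
```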
Critical Section Problem
◦ A critical section is a code segment that can be accessed by only one process at a time.
◦ The critical section contains shared variables that need to be synchronized to maintain the
consistency of data variables.
◦ So the critical section problem is to design a protocol by which cooperating processes can access
shared resources without creating data inconsistencies.
Critical Section Problem
◦ Mutual Exclusion: If a process is executing in its critical section, then no other process is allowed
to execute in the critical section.
◦ Progress: If no process is executing in the critical section and other processes are waiting outside
the critical section, then only those processes that are not executing in their remainder sections
can participate in deciding which will enter the critical section next, and this selection cannot
be postponed indefinitely.
◦ Bounded Waiting: A bound must exist on the number of times that other processes are allowed
to enter their critical sections after a process has made a request to enter its critical section and
before that request is granted.
Peterson’s Solution
◦ Peterson’s Solution is a classical software-based solution to the critical section problem. In
Peterson’s solution, we have two shared variables:
◦ boolean flag[2]: Initialized to FALSE; flag[i] = TRUE means process i is ready to enter the critical section.
◦ int turn: Indicates whose turn it is to enter the critical section (see the sketch below).
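A two-thread rendering of the algorithm. The textbook version assumes loads and stores are never reordered, which plain variables on modern hardware do not guarantee, so this sketch declares flag and turn as C11 sequentially consistent atomics; thread indices 0 and 1 play the roles of the two processes.

```c
/* Sketch: Peterson's solution for two threads (ids 0 and 1).
 * flag[i] = "thread i wants in"; turn = "it's the other's turn". */
#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

static _Atomic int flag[2];        /* initialized to 0 (FALSE) */
static _Atomic int turn;
static long counter = 0;           /* shared data protected by the lock */

static void *worker(void *arg) {
    int i = (int)(long)arg, j = 1 - i;
    for (int k = 0; k < 100000; k++) {
        flag[i] = 1;               /* I want to enter */
        turn = j;                  /* but you go first if you also want in */
        while (flag[j] && turn == j)
            ;                      /* busy-wait */
        counter++;                 /* ---- critical section ---- */
        flag[i] = 0;               /* exit section */
    }
    return NULL;
}

int main(void) {
    pthread_t t0, t1;
    pthread_create(&t0, NULL, worker, (void *)0L);
    pthread_create(&t1, NULL, worker, (void *)1L);
    pthread_join(t0, NULL);
    pthread_join(t1, NULL);
    printf("counter = %ld (expected 200000)\n", counter);
    return 0;
}
```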
Semaphores
◦ A semaphore is a signaling mechanism, and a thread that is waiting on a semaphore can be
signaled by another thread. This differs from a mutex, which can be released only by the thread
that acquired it by calling the wait function.
◦ A semaphore uses two atomic operations, wait and signal for process synchronization.
◦ A Semaphore is an integer variable, which can be accessed only through two operations wait()
and signal().
◦ There are two types of semaphores: Binary Semaphores and Counting Semaphores.
Semaphores
◦ Binary Semaphores
◦ They can only be either 0 or 1.
◦ They are also known as mutex locks, as the locks can provide mutual exclusion.
◦ All the processes can share the same mutex semaphore, which is initialized to 1.
◦ To enter its critical section, a process calls wait(), which decrements the semaphore from 1 to 0; if
the value is already 0, the process must wait until it becomes 1 again.
◦ When it completes its critical section, it calls signal() to set the semaphore back to 1, so that some
other process can enter its critical section (see the sketch below).
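The same pattern with a POSIX unnamed semaphore initialized to 1 (sem_init/sem_wait/sem_post, as available on Linux); it makes the earlier racy counter example correct.

```c
/* Sketch: a binary semaphore (initial value 1) guarding a critical
 * section among threads -- the racy counter example, now correct. */
#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

#define N 1000000
static sem_t mutex;                /* binary semaphore */
static long counter = 0;

static void *worker(void *arg) {
    (void)arg;
    for (int i = 0; i < N; i++) {
        sem_wait(&mutex);          /* wait(): 1 -> 0, or block */
        counter++;                 /* critical section */
        sem_post(&mutex);          /* signal(): 0 -> 1 */
    }
    return NULL;
}

int main(void) {
    sem_init(&mutex, 0, 1);        /* shared between threads, value 1 */
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("counter = %ld (expected %d)\n", counter, 2 * N);
    sem_destroy(&mutex);
    return 0;
}
```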
Semaphores
◦ Counting Semaphores:
◦ They can take any non-negative integer value and are not restricted to 0 and 1.
◦ They can be used to control access to a resource that has a limitation on the number of
simultaneous accesses.
◦ The semaphore can be initialized to the number of instances of the resource.
◦ Whenever a process wants to use the resource, it calls wait(): if the number of remaining instances
is greater than zero, the process enters its critical section, decreasing the value of the counting
semaphore by 1; otherwise it blocks until an instance is released.
◦ After the process is done using its instance of the resource, it leaves the critical section by calling
signal(), adding 1 to the number of available instances (see the sketch below).
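A sketch with a counting semaphore initialized to the number of instances: five threads share a hypothetical pool of three resource instances, so at most three of them are ever inside the "using the resource" region at once.

```c
/* Sketch: a counting semaphore limiting concurrent access to a
 * resource with 3 instances among 5 threads. */
#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>
#include <unistd.h>

#define INSTANCES 3
static sem_t resources;            /* counting semaphore */

static void *worker(void *arg) {
    long id = (long)arg;
    sem_wait(&resources);          /* wait(): take an instance or block */
    printf("thread %ld: using an instance\n", id);
    sleep(1);                      /* pretend to work with the resource */
    printf("thread %ld: releasing\n", id);
    sem_post(&resources);          /* signal(): return the instance */
    return NULL;
}

int main(void) {
    sem_init(&resources, 0, INSTANCES);  /* count = number of instances */
    pthread_t t[5];
    for (long i = 0; i < 5; i++)
        pthread_create(&t[i], NULL, worker, (void *)i);
    for (int i = 0; i < 5; i++)
        pthread_join(t[i], NULL);
    sem_destroy(&resources);
    return 0;
}
```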
