Interprocess communication (IPC) in an operating system refers to the mechanisms and techniques that processes use to communicate and share data with each other. Processes are independent execution units within an operating system, and IPC is essential for processes to cooperate, exchange information, and synchronize their activities. Here are some common methods of IPC in operating systems:
Message Passing: In message passing, processes send and receive messages to communicate. This can be implemented using various methods:
Sockets: Processes can communicate over a network or locally using sockets, which provide a means to send and receive data streams.
Pipes: A pipe is a unidirectional communication channel between two processes. One process writes to the pipe, and the other reads from it.
Message Queues: Message queues allow processes to send and receive messages in a more structured manner. Messages are often stored in a queue, and processes can read from and write to the queue.
Shared Memory: Shared memory is a method where multiple processes can access the same region of memory. This allows them to share data more efficiently. However, it requires synchronization mechanisms to ensure that processes do not interfere with each other.
Semaphores: Semaphores are synchronization primitives used to control access to shared resources. They are often used in combination with shared memory to prevent race conditions and ensure orderly access to data.
Mutexes and Locks: Mutexes (short for mutual exclusion) and locks are used to protect critical sections of code. Only one process or thread can hold a mutex at a time, ensuring that only one entity accesses a particular resource at a given moment.
Signals: Signals are a form of asynchronous communication. One process can send a signal to another process to notify it of an event, such as a specific condition or an interrupt. The receiving process can define signal handlers to respond to these signals.
Remote Procedure Calls (RPC): RPC allows a process to execute procedures or functions on a remote process, as if they were local. This is often used in distributed systems and client-server architectures.
Named Pipes (FIFOs): Named pipes, or FIFOs (first in, first out), are similar to regular pipes but have a named file associated with them. Multiple processes can read from and write to the same named pipe, making them useful for communication between unrelated processes.
The choice of IPC mechanism depends on the specific requirements of the processes and the operating system. Different IPC methods are suitable for different scenarios. For example, message passing is useful for structured communication, shared memory is efficient for large data sharing, and semaphores help with synchronization.
Inter-Process Communication in Operating System
1. Inter-process Communication in Operating System
1. Inter-process Communication
2. Communication Model
3. Communication Process
4. Shared Memory
5. Producer-Consumer Problem
6. Direct Communication
7. Indirect Communication
Ms. C. Nithiya
Computer Science and Engineering
KIT – Kalaignarkarunanidhi Institute of Technology
2. Processes within a system may be independent or cooperating
A cooperating process can affect or be affected by other processes, including by sharing data
Reasons for cooperating processes:
Information sharing
Computation speedup
Modularity
Convenience
Cooperating processes need interprocess communication (IPC)
Two models of IPC:
Shared memory
Message passing
Interprocess Communication
4. Cooperating Processes
An independent process cannot affect or be affected by the execution of another process
A cooperating process can affect or be affected by the execution of another process
Advantages of process cooperation:
Information sharing
Computation speed-up
Modularity
Convenience
5. Interprocess Communication – Shared Memory
An area of memory shared among the processes that wish to communicate
The communication is under the control of the user processes, not the operating system
The major issue is to provide a mechanism that allows the user processes to synchronize their actions when they access shared memory
6. Producer-Consumer Problem
A common paradigm for cooperating processes: a producer process produces information that is consumed by a consumer process
unbounded-buffer: places no practical limit on the size of the buffer
bounded-buffer: assumes that there is a fixed buffer size
7. Examples of IPC Systems - POSIX
POSIX IPC includes both shared memory and message passing
A process first creates a shared memory segment:
shm_fd = shm_open(name, O_CREAT | O_RDWR, 0666);
The same call also opens an existing segment for sharing; it returns an integer file descriptor
Set the size of the object:
ftruncate(shm_fd, 4096);
The mmap() memory-mapping function then creates a pointer to the shared memory object
Now the process can write to the shared memory:
sprintf(shared_memory, "Writing to shared memory");
10. Bounded-Buffer – Shared-Memory Solution
Shared data:
#define BUFFER_SIZE 10
typedef struct {
. . .
} item;
item buffer[BUFFER_SIZE];
int in = 0;   /* next free slot: the producer writes here */
int out = 0;  /* first full slot: the consumer reads here */
The solution is correct, but can only use BUFFER_SIZE-1 elements
11. Bounded-Buffer – Producer
item next_produced;
while (true) {
/* produce an item in next_produced */
while (((in + 1) % BUFFER_SIZE) == out)
; /* do nothing: buffer is full */
buffer[in] = next_produced; /* store the new item */
in = (in + 1) % BUFFER_SIZE;
}
12. Bounded Buffer – Consumer
item next_consumed;
while (true) {
while (in == out)
; /* do nothing: buffer is empty */
next_consumed = buffer[out];
out = (out + 1) % BUFFER_SIZE;
/* consume the item in next_consumed */
}
13. Interprocess Communication – Message Passing
A mechanism for processes to communicate and to synchronize their actions without sharing an address space
Message passing also allows communication between different computer systems over a network
The IPC facility provides two operations:
send(message)
receive(message)
The message size is either fixed or variable
14. Message Passing (Cont.)
If processes P and Q wish to communicate, they need to:
Establish a communication link between them
Exchange messages via send/receive
Implementation issues:
How are links established?
Can a link be associated with more than two processes?
How many links can there be between every pair of communicating processes?
What is the capacity of a link?
Is the size of a message that the link can accommodate fixed or variable?
Is a link unidirectional or bi-directional?
15. Message Passing (Cont.)
Implementation of the communication link
Issues faced:
Naming
Synchronization
Buffering
Logical choices:
Direct or indirect
Synchronous or asynchronous
Automatic or explicit buffering
16. Direct Communication
Processes must name each other explicitly:
send(P, message) – send a message to process P
receive(Q, message) – receive a message from process Q
Properties of the communication link:
Links are established automatically
A link is associated with exactly one pair of communicating processes
Between each pair there exists exactly one link
The link may be unidirectional, but is usually bi-directional
17. Indirect Communication
Messages are directed to and received from mailboxes (also referred to as ports)
Each mailbox has a unique id
Processes can communicate only if they share a mailbox
Properties of the communication link:
A link is established only if processes share a common mailbox
A link may be associated with many processes
Each pair of processes may share several communication links
A link may be unidirectional or bi-directional
18. Indirect Communication
Operations:
create a new mailbox (port)
send and receive messages through the mailbox
destroy a mailbox
Primitives are defined as:
send(A, message) – send a message to mailbox A
receive(A, message) – receive a message from mailbox A
19. Indirect Communication
Mailbox sharing:
P1, P2, and P3 share mailbox A
P1 sends; P2 and P3 receive
Who gets the message?
Solutions:
Allow a link to be associated with at most two processes
Allow only one process at a time to execute a receive operation
Allow the system to select the receiver arbitrarily; the sender is notified who the receiver was
20. Synchronization
Message passing may be either blocking or non-blocking
Blocking is considered synchronous
Blocking send – the sender is blocked until the message is received
Blocking receive – the receiver is blocked until a message is available
Non-blocking is considered asynchronous
Non-blocking send – the sender sends the message and continues
Non-blocking receive – the receiver receives either:
a valid message, or
a null message
Different combinations are possible
21. Synchronization (Cont.)
With blocking send and blocking receive, the producer-consumer problem becomes trivial:
message next_produced;
while (true) {
/* produce an item in next_produced */
send(next_produced);
}
message next_consumed;
while (true) {
receive(next_consumed);
/* consume the item in next_consumed */
}
22. Buffering
Messages exchanged by communicating processes reside in a temporary queue,
implemented in one of three ways:
1. Zero capacity – no messages are queued on the link; the sender must block until the message is received, since nothing can be stored
2. Bounded capacity – finite length of n messages; the sender must wait if the link is full
3. Unbounded capacity – infinite length; the sender never waits
23. Thread
A thread is a flow of execution through the process's code, with its own program counter that keeps track of which instruction to execute next, its own registers which hold its current working variables, and its own stack which contains the execution history. A thread is also called a lightweight process.
25. Benefits of Multithreading
Responsiveness – may allow continued execution if part of the process is blocked; especially important for user interfaces
Resource Sharing – threads share the resources of their process, including code and data, which is easier than shared memory or message passing
Economy – creating a thread is cheaper than creating a process, and switching between threads has lower overhead than a process context switch
Scalability – threads can run in parallel on multiple processors, so the process completes sooner
26. Multithreading Models
User threads – supported above the kernel and managed without kernel support
Kernel threads – supported and managed directly by the OS
A relationship must exist between user threads and kernel threads
There are 3 common ways of establishing this relationship:
Many-to-One
One-to-One
Many-to-Many
27. Many-to-One
Many user-level threads are mapped to a single kernel thread
One thread blocking causes all to block
Thread management is done at user level
Multiple threads may not run in parallel on a multiprocessor because only one may be in the kernel at a time
Few systems currently use this model
28. One-to-One
Each user-level thread maps to a kernel thread
Creating a user-level thread creates a kernel thread
When one thread makes a blocking call, the other threads can still run, which increases concurrency
Because every user thread needs its own kernel thread, the creation overhead can restrict the number of threads
With, say, 4 processors and 5 threads that need to run at the same time, not all of them can actually run in parallel
29. Many-to-Many Model
Many user-level threads are multiplexed onto the same or a smaller number of kernel threads
The number of kernel threads may be specific to a particular application or machine
Developers can create as many user threads as needed, and the corresponding kernel threads can run in parallel
When a thread performs a blocking call, the kernel can schedule another thread for execution