This presentation covers the following topics:
Process Concept
Process Scheduling
Operations on Processes
Interprocess Communication
Examples of IPC Systems
Communication in Client-Server Systems
The document discusses processes and interprocess communication. It defines a process as a program in execution that forms the basis of computation. Processes have states like running, waiting, ready and terminated. A process is represented in memory and by a process control block. Processes communicate through shared memory or message passing. Examples of interprocess communication systems include POSIX shared memory, Mach message passing and Windows local procedure calls. Communication in client-server systems uses sockets, remote procedure calls and pipes.
This document discusses processes and interprocess communication. It begins by defining a process as a program in execution, then describes process states, scheduling, and context switching. Process memory layout and the process control block are explained. Methods of process creation, termination, and communication like shared memory and message passing are covered. Producer-consumer problems demonstrate interprocess communication using shared memory or message passing.
The document discusses processes, process states, and process scheduling. It defines a process as a program in execution that contains the program counter, CPU registers, and other information. Processes go through various states like new, running, waiting, ready, and terminated. The OS tracks information about each process using a process control block (PCB). Process scheduling involves long-term, medium-term, and short-term scheduling to manage processes in memory and allocate CPU time. Context switching refers to saving and restoring a process's state when switching between processes. Inter-process communication allows processes to share resources and data. Threads are lightweight processes that can be used for parallelism and responsiveness.
A process is the basic unit of execution in an operating system. It consists of a program in execution along with additional system resources and state. Key aspects of a process include its process control block (PCB) which stores process state and scheduling information, and the different states a process can be in such as running, ready, waiting, etc. Processes communicate and synchronize through interprocess communication which allows sharing data and coordinating work. The operating system performs process scheduling to allocate the CPU to processes and enable multitasking.
This document discusses processes and process management. It covers key concepts like process states, process scheduling, and inter-process communication. The main points covered are:
- A process is a program in execution that needs resources like CPU time, memory, and I/O devices. The operating system is responsible for process management tasks like creation, scheduling, and synchronization.
- Processes go through various states like new, ready, running, waiting, and terminated. Each process is represented by a process control block containing its state and resource allocation information.
- The CPU scheduler selects processes from ready queues to load into memory and execute. Scheduling algorithms aim to maximize CPU usage and provide fair access to processes.
The objectives of these slides are:
- To introduce the notion of a process - a program in execution, which forms the basis of all computation
- To describe the various features of processes, including scheduling, creation and termination, and communication
- To explore inter-process communication using shared memory and message passing
Processes and Processor Management discusses processes, scheduling, and interprocess communication.
A process is a program in execution that progresses sequentially through code, data, and stack sections. Processes change state between new, running, waiting, ready, and terminated. Each process has a process control block storing its state and scheduling information. The CPU scheduler selects processes from ready queues for execution on CPUs to maximize usage. Processes communicate through shared memory and message passing.
A system consists of processes that execute system code or user code. An operating system makes a computer more productive by switching the CPU between processes. A process is a program in execution that has a program counter, stack, data section, and state. The operating system manages processes using process control blocks and by moving processes between scheduling queues. Context switching allows the CPU to save and load process states when switching between processes.
This document provides an overview of processes and threads management in an operating system. It discusses key concepts such as process states (ready, running, blocked), context switching between processes, and the process control block (PCB) used to store process information. It also compares processes and threads, noting that threads are lightweight processes created within a process that share the same memory space. Threads provide benefits like concurrency and efficient communication compared to separate processes. The document outlines thread types, multi-threading models, common pthread functions, and system calls used for process and thread management.
UNIT II PROCESS MANAGEMENT
Processes - Process Concept, Process Scheduling, Operations on Processes, Interprocess Communication; Threads - Overview, Multicore Programming, Multithreading Models; Windows 7 - Thread and SMP Management. Process Synchronization - Critical Section Problem, Mutex Locks, Semaphores, Monitors; CPU Scheduling and Deadlocks.
The document discusses process management in operating systems. It defines a process as a program in execution that must progress sequentially. A process has multiple parts including code, activity, stack, and data sections. It describes the process concept, states such as running and waiting, and the process control block used to store process information. The document also discusses threads as a way to allow multiple execution locations within a process and covers process scheduling, ready and wait queues, and context switching between processes.
This document discusses process management in operating systems. It covers topics like CPU scheduling, scheduling concepts and algorithms, process states, context switching, and process scheduling queues. Process management techniques include process scheduling using long term, short term, and medium term schedulers. Key concepts covered are the process control block (PCB), which stores process information, and context switching, which saves and restores a process's context when switching between processes.
This document contains information about processes in an operating system. It discusses what a process is, the different states a process can be in such as new, ready, running, waiting, and terminated. It describes the process control block that contains information about each process. It also covers topics like process creation, where a parent process can create child processes, and process termination, where a process can exit or be terminated by its parent. Interprocess communication allows processes to communicate and synchronize actions.
This document discusses process management concepts including processes, threads, process scheduling, and inter-process communication. A process is defined as the fundamental unit of work in a system and requires resources like CPU time and memory. Key process concepts covered include process states, process layout in memory, and the process control block. Threads allow a process to execute multiple tasks simultaneously. Process scheduling and context switching are also summarized. Methods of inter-process communication like shared memory and message passing are described along with examples of client-server communication using sockets, remote procedure calls, and remote method invocation.
This document discusses processes and threads in operating systems. It defines a process as a program under execution with its own virtual CPU and state. Processes are created through system initialization, forking, or by user request. Processes transition between running, ready, blocked, and terminated states. A process control block stores process information. Context switching involves saving one process's state and restoring another's. Threads are lightweight processes within a process that share the process's resources. Threads provide concurrency and efficient communication compared to processes.
This document discusses process management in operating systems. It covers key topics like process control blocks, scheduling queues, types of schedulers (long-term, short-term, medium-term), context switching, multithreading models (many-to-one, one-to-one, many-to-many), and scheduling algorithms. The document provides details on how operating systems manage processes and computer resources to ensure efficient execution of programs.
This document contains information about processes in an operating system. It discusses what a process is, the different states a process can be in (new, ready, running, waiting, terminated), and how processes are created and terminated. It also describes process scheduling queues, interprocess communication, and advantages of using processes like information sharing, faster computation through parallel processing, modularity, and convenience.
The document discusses process scheduling in operating systems. It defines process scheduling as the activity of selecting which process runs on the CPU. It describes the different queues operating systems use to manage processes, including ready, job, and device queues. It also discusses long-term, short-term, and medium-term schedulers and their roles in managing processes over different timescales. Context switching and cooperating processes are also summarized.
This document discusses process management in operating systems. It defines processes and describes the key components of a process, including program code, data, stack, and process state. Each process is represented by a process control block (PCB) that contains the process state, CPU registers, scheduling information, memory management information, and I/O status. Processes can be created, terminated, and communicate through interprocess communication (IPC). Context switching allows the CPU to rapidly switch between running processes.
3. Process Concept
● Process – a program in execution
● Multiple parts:
● The program code, also called the text section
● Current activity, including the program counter and processor registers
● Stack containing temporary data
  ● Function parameters, return addresses, local variables
● Data section containing global variables
● Heap containing memory dynamically allocated during run time
4. Process Concept (Cont.)
● A program is a passive entity stored on disk (an executable file); a process is active
● A program becomes a process when its executable file is loaded into memory
● Execution of a program is started via GUI mouse clicks, command-line entry of its name, etc.
● One program can be several processes
  ● Consider multiple users executing the same program
5. Basic Elements of a Process
• Processor
• Main memory
• I/O modules
• System bus
8. Process State
● As a process executes, it changes state
● new: The process is being created
● running: Instructions are being executed
● waiting: The process is waiting for some event to occur
● ready: The process is waiting to be assigned to a processor
● terminated: The process has finished execution
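The five states above form a small transition diagram (new → ready → running → waiting/terminated, etc.). A minimal sketch of the legal transitions, with names of my own choosing, can encode that diagram directly:

```c
#include <stdbool.h>

typedef enum { NEW, READY, RUNNING, WAITING, TERMINATED } proc_state;

/* Legal transitions in the five-state model described on this slide. */
bool can_transition(proc_state from, proc_state to) {
    switch (from) {
    case NEW:     return to == READY;            /* admitted by the OS      */
    case READY:   return to == RUNNING;          /* scheduler dispatch      */
    case RUNNING: return to == READY             /* interrupt / preemption  */
                      || to == WAITING           /* I/O or event wait       */
                      || to == TERMINATED;       /* exit                    */
    case WAITING: return to == READY;            /* awaited event occurred  */
    default:      return false;                  /* TERMINATED: no exits    */
    }
}
```

Note that a waiting process can never be dispatched directly: it must first return to the ready queue.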
10. Process Control Block (PCB)
Information associated with each process (also called task control block):
● Process state – running, waiting, etc.
● Program counter – location of the next instruction to execute
● CPU registers – contents of all process-centric registers
● CPU scheduling information – priorities, scheduling queue pointers
● Memory-management information – memory allocated to the process
● Accounting information – CPU time used, clock time elapsed since start, time limits
● I/O status information – I/O devices allocated to the process, list of open files
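The fields above map naturally onto a struct. The layout below is a hypothetical teaching sketch that mirrors the slide's list, not the PCB of any real kernel (real ones, such as Linux's task_struct, are far larger):

```c
#include <stdint.h>
#include <string.h>

#define MAX_OPEN_FILES 16
#define NREGS 16

typedef enum { NEW, READY, RUNNING, WAITING, TERMINATED } proc_state;

/* Hypothetical PCB: one field per item on this slide. */
struct pcb {
    int         pid;
    proc_state  state;                       /* process state                 */
    uintptr_t   program_counter;             /* next instruction to execute   */
    uintptr_t   registers[NREGS];            /* saved CPU registers           */
    int         priority;                    /* CPU-scheduling information    */
    struct pcb *next;                        /* scheduling-queue pointer      */
    uintptr_t   mem_base, mem_limit;         /* memory-management information */
    uint64_t    cpu_time_used;               /* accounting information        */
    int         open_files[MAX_OPEN_FILES];  /* I/O status information        */
};

/* A newly created process starts in the NEW state with an empty context. */
void pcb_init(struct pcb *p, int pid) {
    memset(p, 0, sizeof *p);
    p->pid = pid;
    p->state = NEW;
}
```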
13. Threads
● So far, a process has had a single thread of execution
● Consider having multiple program counters per process
● Multiple locations can execute at once
  ● Multiple threads of control -> threads
● Must then have storage for thread details, including multiple program counters, in the PCB
14. Process Scheduling
● Maximize CPU use; quickly switch processes onto the CPU for time sharing
● Process scheduler selects among available processes for next execution on the CPU
● Maintains scheduling queues of processes:
● Job queue – set of all processes in the system
● Ready queue – set of all processes residing in main memory, ready and waiting to execute
● Device queues – set of processes waiting for an I/O device
● Processes migrate among the various queues
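The queues above are typically linked lists of PCBs. A minimal sketch of a FIFO ready queue (illustrative names; the PCB here is stripped down to a pid and a queue pointer):

```c
#include <stddef.h>

/* Stripped-down PCB: just an id and the scheduling-queue link. */
struct pcb { int pid; struct pcb *next; };

/* FIFO queue of PCBs, as used for the ready and device queues. */
struct queue { struct pcb *head, *tail; };

/* A process entering the queue is appended at the tail. */
void enqueue(struct queue *q, struct pcb *p) {
    p->next = NULL;
    if (q->tail) q->tail->next = p;
    else         q->head = p;
    q->tail = p;
}

/* The scheduler takes the process at the head; NULL if the queue is empty. */
struct pcb *dequeue(struct queue *q) {
    struct pcb *p = q->head;
    if (p) {
        q->head = p->next;
        if (!q->head) q->tail = NULL;
    }
    return p;
}
```

A process "migrating among the various queues" is then just its PCB being dequeued from one list and enqueued on another.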
19. Scheduler
s
3.19
● Short-term scheduler (or CPU scheduler) – selects which process should
be executed next and allocates CPU
● Sometimes the only scheduler in a system
● Short-term scheduler is invoked frequently (milliseconds) ⇒ (must be
fast)
● Long-term scheduler (or job scheduler) – selects which processes should
be brought into the ready queue
● Long-term scheduler is invoked infrequently (seconds, minutes) ⇒
(may be slow)
● The long-term scheduler controls the degree of multiprogramming
● Processes can be described as either:
● I/O-bound process – spends more time doing I/O than
computations, many short CPU bursts
● CPU-bound process – spends more time doing computations; few
very long CPU bursts
● Long-term scheduler strives for good process mix
21. Addition of Medium Term
Scheduling
● Medium-term scheduler can be added if degree of multiple
programming needs to decrease
● Remove process from memory, store on disk, bring back
in from disk to continue execution: swapping
22. Context Switch
● When CPU switches to another process, the system must save
the state of the old process and load the saved state for the
new process via a context switch
● Context of a process represented in the PCB
● Context-switch time is overhead; the system does no useful
work while switching
● The more complex the OS and the PCB the longer
the context switch
● Time dependent on hardware support
● Some hardware provides multiple sets of registers per CPU
⇒ multiple contexts loaded at once
24. Process Creation
● Parent process creates child processes, which, in turn,
create other processes, forming a tree of processes
● Generally, process identified and managed via a process
identifier (pid)
● Resource sharing options
● Parent and children share all resources
● Children share subset of parent’s resources
● Parent and child share no resources
● Execution options
● Parent and children execute concurrently
● Parent waits until children terminate
25. Process Creation (Cont.)
● Address space
● Child duplicate of parent
● Child has a program loaded into it
● UNIX examples
● fork() system call creates new process
● exec() system call used after a fork() to replace the
process’ memory space with a new program
26. Process Termination
● Process executes last statement and then asks the operating
system to delete it using the exit() system call.
● Returns status data from child to parent (via wait())
● Process’ resources are deallocated by operating system
● Parent may terminate the execution of children processes using
the abort() system call. Some reasons for doing so:
● Child has exceeded allocated resources
● Task assigned to child is no longer required
● The parent is exiting and the operating system does not
allow a child to continue if its parent terminates
27. Process Termination (Cont.)
● Some operating systems do not allow a child to exist if its parent
has terminated. If such a process terminates, then all its children
must also be terminated.
● cascading termination. All children, grandchildren, etc. are
terminated.
● The termination is initiated by the operating system.
● The parent process may wait for termination of a child process by
using the wait()system call. The call returns status information
and the pid of the terminated process
pid = wait(&status);
● If no parent is waiting (has not yet invoked wait()), the process is a zombie
● If the parent terminated without invoking wait(), the process is an orphan
28. Interprocess Communication
● Processes within a system may be independent or cooperating
● Cooperating process can affect or be affected by other processes,
including sharing data
● Reasons for cooperating processes:
● Information sharing
● Computation speedup
● Modularity
● Convenience
● Cooperating processes need interprocess communication (IPC)
● Two models of IPC
● Shared memory
● Message passing
30. Cooperating Processes
● Independent process cannot affect or be affected by the execution
of another process
● Cooperating process can affect or be affected by the execution of
another process
● Advantages of process cooperation
● Information sharing
● Computation speed-up
● Modularity
● Convenience
31. Producer-Consumer Problem
● Paradigm for cooperating processes, producer process
produces information that is consumed by a consumer
process
● unbounded-buffer places no practical limit on the size
of the buffer
● bounded-buffer assumes that there is a fixed buffer
size
32. Interprocess Communication – Shared Memory
● An area of memory shared among the processes that wish
to communicate
● The communication is under the control of the user
processes, not the operating system
● A major issue is to provide a mechanism that will allow the
user processes to synchronize their actions when they
access shared memory
33. Interprocess Communication – Message Passing
● Mechanism for processes to communicate and to synchronize
their actions
● Message system – processes communicate with each other
without resorting to shared variables
● IPC facility provides two operations:
● send(message)
● receive(message)
● The message size is either fixed or variable
34. Message Passing (Cont.)
● If processes P and Q wish to communicate, they need to:
● Establish a communication link between them
● Exchange messages via send/receive
● Implementation issues:
● How are links established?
● Can a link be associated with more than two processes?
● How many links can there be between every pair of
communicating processes?
● What is the capacity of a link?
● Is the size of a message that the link can accommodate fixed or
variable?
● Is a link unidirectional or bi-directional?
35. Message Passing (Cont.)
● Implementation of communication link
● Physical:
– Shared memory
– Hardware bus
– Network
● Logical:
– Direct or indirect
– Synchronous or asynchronous
– Automatic or explicit buffering
36. Direct Communication
● Processes must name each other explicitly:
● send (P, message)– send a message to process P
● receive(Q, message) – receive a message from process Q
● Properties of communication link
● Links are established automatically
● A link is associated with exactly one pair of communicating
processes
● Between each pair there exists exactly one link
● The link may be unidirectional, but is usually bi-directional
37. Indirect Communication
● Messages are directed and received from mailboxes (also referred
to as ports)
● Each mailbox has a unique id
● Processes can communicate only if they share a mailbox
● Properties of communication link
● Link established only if processes share a common mailbox
● A link may be associated with many processes
● Each pair of processes may share several communication links
● Link may be unidirectional or bi-directional
38. Indirect Communication (Cont.)
● Operations
● create a new mailbox (port)
● send and receive messages through mailbox
● destroy a mailbox
● Primitives are defined as:
send(A, message)– send a message to mailbox A
receive(A, message)– receive a message from mailbox A
39. Indirect Communication (Cont.)
● Mailbox sharing
● P1, P2, and P3 share mailbox A
● P1, sends; P2 and P3 receive
● Who gets the message?
● Solutions
● Allow a link to be associated with at most two
processes
● Allow only one process at a time to execute a receive
operation
● Allow the system to select arbitrarily the receiver.
Sender is notified who the receiver was.
40. Synchronization
● Message passing may be either blocking or non-blocking
● Blocking is considered synchronous
● Blocking send -- the sender is blocked until the message is
received
● Blocking receive -- the receiver is blocked until a message
is available
● Non-blocking is considered asynchronous
● Non-blocking send -- the sender sends the message and
continues
● Non-blocking receive -- the receiver receives:
● A valid message, or
● Null message
41. Buffering
● A queue of messages is attached to the link;
it can be implemented in one of three ways
1. Zero capacity – no messages are queued on a link.
Sender must wait for receiver (rendezvous)
2. Bounded capacity – finite length of n messages
Sender must wait if link full
3. Unbounded capacity – infinite length
Sender never waits