The document discusses interprocess communication (IPC) and threads. It covers different IPC mechanisms like signals, message passing using pipes, and shared memory. It also covers different threading models including user-level threads managed by libraries, kernel-level threads with a one-to-one mapping to OS threads, and hybrid models. The document outlines the life cycle of a thread and considerations for thread implementation.
Interprocess communication (IPC) is a set of programming interfaces that allow a programmer to coordinate activities among different program processes that can run concurrently in an operating system. This allows a program to handle many user requests at the same time. Since even a single user request may result in multiple processes running in the operating system on the user's behalf, the processes need to communicate with each other. The IPC interfaces make this possible. Each IPC method has its own advantages and limitations, so it is not unusual for a single program to use all of the IPC methods.
IPC methods include pipes and named pipes, message queueing, semaphores, shared memory, and sockets.
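As a minimal sketch of one of these methods, the snippet below (not from the source material) shares an anonymous memory mapping between a parent and a forked child process; it assumes a POSIX system:

```python
import mmap
import os

def demo():
    buf = mmap.mmap(-1, 16)     # anonymous mapping, shared across fork
    pid = os.fork()             # child inherits the same mapping
    if pid == 0:
        buf[0:5] = b"hello"     # write in the child...
        os._exit(0)
    os.waitpid(pid, 0)
    data = bytes(buf[0:5])      # ...and read it back in the parent
    buf.close()
    return data
```

The child's write is visible to the parent because the mapping is shared rather than copied, which is exactly what distinguishes shared memory from the other IPC methods listed above.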
Parallel computing is a type of computation in which many calculations or the execution of processes are carried out simultaneously. Large problems can often be divided into smaller ones, which can then be solved at the same time. There are several different forms of parallel computing: bit-level, instruction-level, data, and task parallelism. Parallelism has been employed for many years, mainly in high-performance computing, but interest in it has grown lately due to the physical constraints preventing frequency scaling. As power consumption (and consequently heat generation) by computers has become a concern in recent years, parallel computing has become the dominant paradigm in computer architecture, mainly in the form of multi-core processors.
Program Partitioning and Scheduling in Advanced Computer Architecture, by Pankaj Kumar Jain
Topics: Advanced Computer Architecture; Program Partitioning and Scheduling; Latency; Levels of Parallelism (loop-level, subprogram-level, job- or program-level); Communication Latency; Grain Packing and Scheduling; Program Graphs and Packing.
Operating System
Topic: Memory Management
for Btech/Bsc (C.S)/BCA...
Memory management is the functionality of an operating system that handles or manages primary memory. It keeps track of each memory location, whether it is allocated to some process or free; it determines how much memory is to be allocated to each process and decides which process gets memory at what time; and it tracks whenever memory is freed or unallocated, updating the status accordingly.
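The bookkeeping described above can be illustrated with a toy frame table; the FrameTable class below is a hypothetical sketch for teaching purposes, not an actual OS data structure:

```python
class FrameTable:
    """Toy sketch: track which physical frames are free or owned by a process."""

    def __init__(self, nframes):
        self.owner = [None] * nframes   # None means the frame is free

    def allocate(self, pid, n):
        """First-fit: give pid the first n free frames, or None if memory is short."""
        free = [i for i, o in enumerate(self.owner) if o is None]
        if len(free) < n:
            return None                 # not enough free frames
        for i in free[:n]:
            self.owner[i] = pid
        return free[:n]

    def release(self, pid):
        """Free every frame owned by pid; report how many were freed."""
        freed = 0
        for i, o in enumerate(self.owner):
            if o == pid:
                self.owner[i] = None
                freed += 1
        return freed
```

A real memory manager adds protection, paging, and swapping on top of this kind of allocation record, but the free/allocated tracking it performs is the same idea.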
A multiprocessor system is an interconnection of two or more CPUs with memory and input-output equipment.
The components that form a multiprocessor are CPUs, IOPs connected to input-output devices, and a memory unit that may be partitioned into a number of separate modules.
Multiprocessors are classified as multiple instruction stream, multiple data stream (MIMD) systems.
INTRODUCTION TO OPERATING SYSTEM
What is an Operating System?
Mainframe Systems
Desktop Systems
Multiprocessor Systems
Distributed Systems
Clustered System
Real-Time Systems
Handheld Systems
Computing Environments
Parallel computing is a computer architecture paradigm in which the processing required to solve a problem is carried out on more than one processor in parallel.
In this presentation, you will learn the fundamentals of Multi Processors and Multi Computers in only a few minutes.
Meanings, features, attributes, applications, and examples of multiprocessors and multi computers.
Virtual Memory
• Copy-on-Write
• Page Replacement
• Allocation of Frames
• Thrashing
• Operating-System Examples
Background
Page Table When Some Pages Are Not in Main Memory
Steps in Handling a Page Fault
Inter-Process-Communication (IPC) mechanisms (PDF by aesalem06)
Inter-Process-Communication (or IPC for short) are mechanisms provided by the kernel to allow
processes to communicate with each other.
The types of interprocess communication on Linux include pipes and FIFOs (named pipes), System V and POSIX message queues, semaphores, shared memory, signals, and Unix domain sockets.
The following IPC mechanisms are supported by Windows:
1. Clipboard - The clipboard acts as a central depository for data sharing among applications.
When a user performs a cut or copy operation in an application, the application puts the selected
data on the clipboard in one or more standard or application-defined formats. Any other
application can then retrieve the data from the clipboard, choosing from the available formats
that it understands.
2. File Mapping - File mapping enables a process to treat the contents of a file as if they were a block of memory in the process's address space. The process can use simple pointer operations to examine and modify the contents of the file. When two or more processes access the same file mapping, each process receives a pointer to memory in its own address space that it can use to read or modify the contents of the file.
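A POSIX analogue of this Windows mechanism can be sketched with Python's mmap module: the file's bytes are edited through the mapping rather than with read/write calls. The demo() helper here is illustrative only:

```python
import mmap
import os
import tempfile

def demo():
    fd, path = tempfile.mkstemp()           # read-write temporary file
    try:
        os.write(fd, b"hello world")
        with mmap.mmap(fd, 0) as m:         # map the entire file
            m[0:5] = b"HELLO"               # pointer-style in-place edit
        with open(path, "rb") as f:         # the change landed in the file
            return f.read()
    finally:
        os.close(fd)
        os.unlink(path)
```

The Windows CreateFileMapping/MapViewOfFile pair plays the same role that mmap plays here: once mapped, the file is just memory.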
3. Mailslot - Mailslots provide one-way communication. Any process that creates a mailslot is a
mailslot server. Other processes, called mailslot clients, send messages to the mailslot server by
writing a message to its mailslot.
4. RPC - RPC enables applications to call functions remotely. Therefore, RPC makes IPC as easy
as calling a function. RPC operates between processes on a single computer or on different
computers on a network.
5. Windows Socket - Windows Sockets is a protocol-independent interface capable of supporting
current and emerging networking capabilities.
The following IPC mechanisms are supported by Mac OS:
1. Mach Ports: Mach 3.0 is capable of running as a stand-alone kernel, with other traditional OS services like IO, file systems and the networking stack running in user mode. It is much faster to make a direct call between linked components than it is to send messages or do RPC between separate tasks.
2. Apple Events: universally supported by GUI applications on Mac OS for remote control. Operations like telling an application to open a file or to quit can be done using these.
3. Pasteboard - copy-paste and drag-and-drop between applications are performed using this technique.
4. Distributed Objects - the remote messaging feature of Cocoa, used to call an object in a different Cocoa application.
Windows server uses the best technique to manage IPC because
a) It provides an efficient way for two or more processes on the same computer to share data.
b) It is capable of supporting current and emerging networking capabilities, such as quality of
service monitoring, robust asynchronous communication, I/O completion ports for superior
performance, and protocol-specific network
features.
=> Multiprocessing: refers to the use of two or more central processing units (CPUs) within a single computer system. All the operating systems provide support for multiprocessing. Windows manages.
Chorus - Distributed Operating System [case study], by Akhil Nadh PC
ChorusOS is a microkernel real-time operating system designed as a message-based computational model. ChorusOS started as the Chorus distributed real-time operating system research project at Institut National de Recherche en Informatique et Automatique (INRIA) in France in 1979. During the 1980s, Chorus was one of two earliest microkernels (the other being Mach) and was developed commercially by Chorus Systèmes. Over time, development effort shifted away from distribution aspects to real-time for embedded systems.
Linux Memory Management
1. Memory structure of Linux OS.
2. How a program is loaded into memory.
3. Address translation.
4. Features for multithreading and multiprocessing.
A system is said to be Real Time if it is required to complete its work and deliver its services on time.
Example – Flight Control System: all tasks in that system must execute on time.
Non-example – PC system.
1. IPC: Interprocess Communication
By Prof. Ruchi Sharma
02/16/13 1
2. Many operating systems provide mechanisms for interprocess communication (IPC). Processes must communicate with one another in multiprogrammed and networked environments; for example, a Web browser retrieving data from a distant server. IPC is essential for processes that must coordinate activities to achieve a common goal.
3. Signals are software interrupts that notify a process that an event has occurred. They do not allow processes to specify data to exchange with other processes. Processes may catch, ignore or mask a signal:
Catching a signal involves specifying a routine that the OS calls when it delivers the signal.
Ignoring a signal relies on the operating system's default action to handle the signal.
Masking a signal instructs the OS not to deliver signals of that type until the process clears the signal mask.
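Catching a signal can be sketched in Python, whose signal module wraps POSIX signals; the handler and demo() function are illustrative, and a POSIX system is assumed:

```python
import os
import signal

caught = []                     # record of delivered signals

def handler(signum, frame):
    caught.append(signum)       # the routine the OS calls on delivery

def demo():
    old = signal.signal(signal.SIGUSR1, handler)   # "catch" SIGUSR1
    os.kill(os.getpid(), signal.SIGUSR1)           # deliver it to ourselves
    signal.signal(signal.SIGUSR1, old)             # restore the old action
    return caught
```

Note that only the fact of delivery reaches the handler; as the slide says, no data payload is exchanged, which is why signals are a notification mechanism rather than a data-transfer one.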
4. Message-based interprocess communication: messages can be passed in one direction at a time, with one process the sender and the other the receiver; message passing can also be bidirectional, with each process acting as either a sender or a receiver. Messages can be blocking or nonblocking: blocking requires the receiver to notify the sender when the message is received, while nonblocking enables the sender to continue with other processing. A popular implementation is the pipe: a region of memory protected by the OS that serves as a buffer, allowing two or more processes to exchange data.
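The pipe described above can be sketched with os.pipe and fork on a POSIX system; demo() is an illustrative helper, not part of the source:

```python
import os

def demo():
    r, w = os.pipe()            # kernel-managed buffer: read end, write end
    pid = os.fork()
    if pid == 0:                # child: the sender
        os.close(r)
        os.write(w, b"ping")
        os.close(w)
        os._exit(0)
    os.close(w)                 # parent: the receiver
    msg = os.read(r, 4)
    os.close(r)
    os.waitpid(pid, 0)
    return msg
```

Each process closes the pipe end it does not use, so the kernel can report end-of-file correctly when the sender finishes; data flows one way, matching the unidirectional case in the slide.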
5. UNIX processes: all processes are provided with a set of memory addresses, called a virtual address space. A process's PCB is maintained by the kernel in a protected region of memory that user processes cannot access. A UNIX PCB stores the contents of the processor registers, the PID, the program counter, and the system stack. All processes are listed in the process table.
6. UNIX processes (continued): all processes interact with the OS via system calls. A process can spawn a child process by using the fork system call, which creates a copy of the parent process; the child receives a copy of the parent's resources as well. Process priorities are integers between -20 and 19 (inclusive); a lower numerical priority value indicates a higher scheduling priority. UNIX provides IPC mechanisms, such as pipes, to allow unrelated processes to transfer data.
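Fork's copy semantics and exit-status collection can be sketched as follows (POSIX only; demo() is a hypothetical helper):

```python
import os

def demo():
    value = [1]
    pid = os.fork()
    if pid == 0:                # child runs in its own copy of the address space
        value[0] = 99           # modifies only the child's copy
        os._exit(7)             # exit status collected by the parent
    _, status = os.waitpid(pid, 0)
    return value[0], os.WEXITSTATUS(status)
```

The parent still sees value[0] == 1 because fork duplicated the address space; the two processes can only exchange data afterwards through an IPC mechanism such as a pipe.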
8. Outline
4.1 Introduction
4.2 Definition of Thread
4.3 Motivation for Threads
4.4 Thread States: Life Cycle of a Thread
4.5 Thread Operations
4.6 Threading Models
4.6.1 User-Level Threads
4.6.2 Kernel-Level Threads
4.6.3 Combining User- and Kernel-Level Threads
4.7 Thread Implementation Considerations
4.7.1 Thread Signal Delivery
4.7.2 Thread Termination
4.8 POSIX and Pthreads
4.9 Linux Threads
4.10 Windows XP Threads
4.11 Java Multithreading Case Study, Part 1: Introduction to Java Threads
9. After reading this chapter, you should understand:
the motivation for creating threads.
the similarities and differences between processes and threads.
the various levels of support for threads.
the life cycle of a thread.
thread signaling and cancellation.
the basics of POSIX, Linux, Windows XP and Java threads.
10. General-purpose languages such as Java, C#, Visual C++ .NET, Visual Basic .NET and Python have made concurrency primitives available to application programmers.
Multithreading: the programmer specifies that applications contain threads of execution, and each thread designates a portion of a program that may execute concurrently with other threads.
11. Thread: also called a lightweight process (LWP); a thread of instructions or thread of control.
A thread shares its address space and other global information with its process, while registers, stack, signal masks and other thread-specific data are local to each thread.
Threads may be managed by the operating system or by a user application. Examples: Win32 threads, C-threads, Pthreads.
13. Threads have become prominent due to trends in:
Software design: threads more naturally express inherently parallel tasks.
Performance: threads scale better to multiprocessor systems.
Cooperation: a shared address space incurs less overhead than IPC.
14. Each thread transitions among a series of discrete thread states. Threads and processes have many operations in common (e.g. create, exit, resume, and suspend). Thread creation does not require the operating system to initialize resources that are shared between the parent process and its threads, which reduces the overhead of thread creation and termination compared to process creation and termination.
16. User-level threads perform threading operations in user space: threads are created by runtime libraries that cannot execute privileged instructions or access kernel primitives directly. User-level implementations use a many-to-one thread mapping, in which the operating system maps all threads in a multithreaded process to a single execution context.
Advantages: the user-level library can schedule its threads to optimize performance; synchronization is performed outside the kernel, avoiding context switches; greater portability.
Disadvantage: the kernel views a multithreaded process as a single thread of control. This can lead to suboptimal performance if a thread issues I/O, and the process cannot be scheduled on multiple processors at once.
18. Kernel-level threads attempt to address the limitations of user-level threads by mapping each thread to its own execution context, providing a one-to-one thread mapping.
Advantages: increased scalability, interactivity, and throughput.
Disadvantages: overhead due to context switching and reduced portability due to OS-specific APIs.
Kernel-level threads are not always the optimal solution for multithreaded applications.
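As an illustration of one-to-one mapping, CPython's threading.Thread objects are each backed by a kernel thread (a pthread on Linux) sharing the process's address space; the sketch below assumes that platform behavior:

```python
import threading

def demo():
    counter = {"n": 0}          # shared state: one address space for all threads
    lock = threading.Lock()

    def work():
        for _ in range(1000):
            with lock:          # synchronize access to the shared counter
                counter["n"] += 1

    threads = [threading.Thread(target=work) for _ in range(4)]
    for t in threads:
        t.start()               # each start() creates one kernel thread
    for t in threads:
        t.join()                # wait for all of them to finish
    return counter["n"]
```

The lock is what makes the shared-address-space convenience safe; without it, concurrent increments could be lost.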
20. Combining user- and kernel-level thread implementations yields a many-to-many (m-to-n) thread mapping, in which the number of user and kernel threads need not be equal. This can reduce overhead compared to one-to-one thread mappings by implementing thread pooling.
Worker threads are persistent kernel threads that occupy the thread pool; each new thread is executed by a worker thread. This improves performance in environments where threads are frequently created and destroyed.
Scheduler activation is a technique that enables the user-level library to schedule its threads: the operating system calls into the user-level threading library, which determines whether any of its threads need rescheduling.
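The thread-pooling idea can be sketched with a queue feeding a few persistent worker threads; run_pool() below is a simplified illustration, not a production pool:

```python
import queue
import threading

def run_pool(tasks, workers=2):
    q = queue.Queue()           # tasks wait here for a free worker
    results = []
    lock = threading.Lock()

    def worker():
        while True:
            fn = q.get()
            if fn is None:      # sentinel: shut this worker down
                return
            r = fn()            # execute the task on a persistent thread
            with lock:
                results.append(r)

    ts = [threading.Thread(target=worker) for _ in range(workers)]
    for t in ts:
        t.start()
    for fn in tasks:
        q.put(fn)
    for _ in ts:
        q.put(None)             # one sentinel per worker
    for t in ts:
        t.join()
    return sorted(results)
```

Five tasks here are served by only two threads, so no task pays thread-creation cost; that is the performance win the slide attributes to worker threads.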