1. Pune Vidyarthi Griha's
COLLEGE OF ENGINEERING, NASHIK
"OPERATING SYSTEM"
By
Prof. Anand N. Gharu
(Assistant Professor)
PVGCOE Computer Dept.
05 FEB 2018
2. CONTENTS :-
1. Introduction to different types of OS
2. Real-time OS, system components
3. OS services, system structure: layered approach
4. Process management: process concept and states
5. PCB, threads
6. Process scheduling: preemptive & non-preemptive
algorithms: FCFS, SJF, RR, Priority
7. Deadlock: methods of handling deadlock, deadlock
prevention, avoidance, detection & recovery
8. Case study: process management in multicore OS
Prof. Gharu Anand N. 2
3. What is an operating system?
• An Operating System (OS) is an interface between a computer user and
computer hardware. It is software that performs all the basic tasks, such as
file management, memory management, process management, handling
input and output, and controlling peripheral devices such as disk drives
and printers.
5. Services of an Operating System
• Program execution
• I/O operations
• File system manipulation
• Communication
• Error detection
• Resource allocation
• Protection
6. • Program execution
• Operating systems handle many kinds of activities, from user programs to
system programs like the printer spooler, name servers, file servers, etc. Each
of these activities is encapsulated as a process.
• A process includes the complete execution context (code to execute, data
to manipulate, registers, OS resources in use). Following are the major
activities of an operating system with respect to program management −
• Loads a program into memory.
• Executes the program.
• Handles the program's execution.
• Provides a mechanism for process synchronization.
• Provides a mechanism for process communication.
• Provides a mechanism for deadlock handling.
7. • I/O Operation
• An I/O subsystem comprises I/O devices and their
corresponding driver software. Drivers hide the peculiarities
of specific hardware devices from the users.
• An operating system manages the communication between
users and device drivers.
• An I/O operation means a read or write operation on a file or
a specific I/O device.
• The operating system provides access to the required I/O
device when required.
8. • File system manipulation
• A file represents a collection of related information. Computers can
store files on disk (secondary storage) for long-term storage purposes.
Examples of storage media include magnetic tape, magnetic disk and
optical disks like CD and DVD. Each of these media has its own
properties, such as speed, capacity, data transfer rate and data access
method.
• Following are the major activities of an operating system with respect
to file management −
• A program needs to read a file or write a file.
• The operating system grants the program permission to operate
on the file.
• Permissions vary: read-only, read-write, denied, and so on.
• The operating system provides an interface to the user to create/delete
files.
• The operating system provides an interface to the user to create/delete
directories.
• The operating system provides an interface to create a backup of the file system.
9. • Communication
• In the case of distributed systems, which are collections of processors that
do not share memory, peripheral devices, or a clock, the operating
system manages communication between all the processes. Multiple
processes communicate with one another through communication lines
in the network.
• Some activities are :
• Two processes often require data to be transferred between them.
• The two processes can be on one computer or on different computers,
but are connected through a computer network.
• Communication may be implemented by two methods: either by
shared memory or by message passing.
10. • Error handling
• Errors can occur anytime and anywhere. An error may occur in the CPU,
in I/O devices or in the memory hardware. Following are the major
activities of an operating system with respect to error handling −
• The OS constantly checks for possible errors.
• The OS takes appropriate action to ensure correct and consistent
computing.
• Resource Management
• In a multi-user or multitasking environment, resources such as
main memory, CPU cycles and file storage must be allocated to each
user or job. Following are the major activities of an operating system
with respect to resource management −
• The OS manages all kinds of resources using schedulers.
• CPU scheduling algorithms are used for better utilization of the CPU.
11. • Protection
• Considering a computer system having multiple users and concurrent
execution of multiple processes, the various processes must be
protected from each other's activities.
• Protection refers to a mechanism or a way to control the access of
programs, processes, or users to the resources defined by a computer
system.
• The OS ensures that all access to system resources is controlled.
• The OS ensures that external I/O devices are protected from invalid
access attempts.
• The OS provides authentication features for each user by means of
passwords.
12. Functions of an Operating System
• Memory management
• Processor management
• Device management
• File management
• Security
• Resource sharing & protection
• Job accounting
• Error detection
• Coordination between other software and users
13. Memory management
Memory management refers to the management of primary memory or main memory.
Main memory is a large array of words or bytes where each word or byte has its own
address.
- Main memory provides fast storage that can be accessed directly by the CPU. For
a program to be executed, it must be in main memory. An operating system does
the following activities for memory management −
- Keeps track of primary memory, i.e., which parts are in use by whom and which
parts are not in use.
- In multiprogramming, the OS decides which process will get memory, when, and
how much.
- Allocates memory when a process requests it.
- De-allocates memory when a process no longer needs it or has terminated.
14. Process management
In a multiprogramming environment, the OS decides which process
gets the processor, when, and for how much time. This function is
called process scheduling.
An operating system does the following activities for
processor management −
- Keeps track of the processor and the status of processes. The program
responsible for this task is known as the traffic controller.
- Allocates the processor (CPU) to a process.
- De-allocates the processor when a process no longer requires it.
15. Device management
An operating system manages device communication via the devices'
respective drivers.
It does the following activities for device management −
- Keeps track of all devices. The program responsible for this task is
known as the I/O controller.
- Decides which process gets the device, when, and for how much
time.
- Allocates devices in an efficient way.
- De-allocates devices.
16. File management
A file system is normally organized into directories for easy
navigation and usage. These directories may contain files and other
directories.
An operating system does the following activities for file
management −
- Keeps track of information, location, uses, status, etc. These
collective facilities are often known as the file system.
- Decides who gets the resources.
- Allocates the resources.
- De-allocates the resources.
17. Other activities are :
Security − By means of passwords and similar techniques, the OS
prevents unauthorized access to programs and data.
Control over system performance − Recording delays between a
request for a service and the response from the system.
Job accounting − Keeping track of time and resources used by
various jobs and users.
Error-detecting aids − Production of dumps, traces, error
messages, and other debugging and error-detecting aids.
Coordination between other software and users − Coordination
and assignment of compilers, interpreters, assemblers and other
software to the various users of the computer system.
20. Batch Operating System
The users of a batch operating system do not interact with the
computer directly. Each user prepares his job on an off-line device
like punch cards and submits it to the computer operator. To speed
up processing, jobs with similar needs are batched together and run
as a group. The programmers leave their programs with the
operator and the operator then sorts the programs with similar
requirements into batches.
The problems with batch systems are as follows −
Lack of interaction between the user and the job.
The CPU is often idle, because the speed of the mechanical I/O devices
is slower than that of the CPU.
Difficult to provide the desired priority.
22. Multitasking
Multitasking is when multiple jobs are executed by the CPU
simultaneously by switching between them. Switches occur so frequently
that the users may interact with each program while it is running. An OS
does the following activities related to multitasking −
The user gives instructions to the operating system or to a program
directly, and receives an immediate response.
The OS handles multitasking in the way that it can handle multiple
operations/executes multiple programs at a time.
Multitasking Operating Systems are also known as Time-sharing systems.
These Operating Systems were developed to provide interactive use of a
computer system at a reasonable cost.
A time-shared operating system uses the concepts of CPU scheduling and
multiprogramming to provide each user with a small portion of a time-
shared CPU.
Each user has at least one separate program in memory.
24. Multiprogramming
When two or more programs reside in memory at the same time, this is
referred to as multiprogramming. Multiprogramming assumes a single shared
processor. Multiprogramming increases CPU utilization by organizing jobs
so that the CPU always has one to execute.
An OS does the following activities related to multiprogramming.
The operating system keeps several jobs in memory at a time.
This set of jobs is a subset of the jobs kept in the job pool.
The operating system picks and begins to execute one of the jobs in
memory.
Multiprogramming operating systems monitor the state of all active
programs and system resources using memory management programs to
ensure that the CPU is never idle, unless there are no jobs to process.
26. Spooling
Spooling is an acronym for simultaneous peripheral operations on-line.
Spooling refers to putting the data of various I/O jobs in a buffer. This buffer is
a special area in memory or on hard disk which is accessible to I/O devices.
An operating system does the following activities related to spooling −
Handles I/O device data spooling, as devices have different data access rates.
Maintains the spooling buffer, which provides a waiting station where data
can rest while the slower device catches up.
Maintains parallel computation because of the spooling process, as a computer
can perform I/O in a parallel fashion. It becomes possible to have the
computer read data from a tape, write data to disk and write out to a tape
printer while it is doing its computing task.
27. Time sharing Operating System
Time-sharing is a technique which enables many people, located at
various terminals, to use a particular computer system at the same
time. Time-sharing or multitasking is a logical extension of
multiprogramming. Processor's time which is shared among
multiple users simultaneously is termed as time-sharing.
Advantages of Timesharing operating systems are as follows −
Provides the advantage of quick response.
Avoids duplication of software.
Reduces CPU idle time.
Disadvantages of time-sharing operating systems are as follows −
Problem of reliability.
Question of security and integrity of user programs and data.
Problem of data communication.
28. Distributed Operating System
Distributed systems use multiple central processors to serve multiple real-
time applications and multiple users. Data processing jobs are distributed
among the processors accordingly.
The processors communicate with one another through various
communication lines (such as high-speed buses or telephone lines). These
are referred to as loosely coupled systems or distributed systems. Processors
in a distributed system may vary in size and function, and are
referred to as sites, nodes, computers, and so on.
The advantages of distributed systems are as follows −
- Speedup the exchange of data with one another via electronic mail.
- If one site fails in a distributed system, the remaining sites can potentially
continue operating.
- Better service to the customers.
- Reduction of the load on the host computer.
- Reduction of delays in data processing.
29. Network Operating System
A Network Operating System runs on a server and provides the
server the capability to manage data, users, groups, security,
applications, and other networking functions. The primary purpose
of the network operating system is to allow shared file and printer
access among multiple computers in a network, typically a local
area network (LAN), a private network or to other networks.
Examples of network operating systems include Microsoft
Windows Server 2003, Microsoft Windows Server 2008, UNIX,
Linux, Mac OS X, Novell NetWare, and BSD.
30. Real time Operating System
A real-time system is defined as a data processing system in which
the time interval required to process and respond to inputs is so
small that it controls the environment. The time taken by the
system to respond to an input and display the required updated
information is termed the response time. In this method, the
response time is very short compared to online processing.
Examples: scientific experiments, medical imaging systems,
industrial control systems, weapon systems, robots, air traffic
control systems, etc.
31. Components of an Operating System
• Process management
• I/O management
• Main memory management
• File & storage management
• Protection
• Networking
• Command interpreter
32. Component of OS
• Interpreter (shell) :
A command interpreter is the part of a computer
operating system that understands and executes
commands that are entered interactively by a human
being or from a program. In some operating systems,
the command interpreter is called the shell.
33. Component of OS
• System call :
In computing, a system call is the programmatic way in which a computer
program requests a service from the kernel of the operating system it is
executed on. A system call is a way for programs to interact with the operating
system. A computer program makes a system call when it makes a request to
the operating system’s kernel. System call provides the services of the
operating system to the user programs via Application Program Interface(API). It
provides an interface between a process and the operating system to allow user-
level processes to request services of the operating system. System calls are
the only entry points into the kernel. All programs needing resources
must use system calls.
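As an illustration (not part of the original slides), Python's os module exposes thin wrappers over POSIX-style system calls; each call below crosses the user/kernel boundary. The use of a temporary file is purely a demonstration choice:

```python
import os
import tempfile

# Each os.* call below is a thin wrapper over a kernel system call.
pid = os.getpid()                             # process control: getpid()

fd, path = tempfile.mkstemp()                 # file management: open()/creat()
os.write(fd, b"written via a system call\n")  # file I/O: write()
os.close(fd)                                  # close()

with open(path, "rb") as f:
    data = f.read()                           # read()
os.remove(path)                               # file management: unlink()

print(pid > 0, data)
```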
34. System call
Services Provided by System Calls :
• Process creation and management
• Main memory management
• File Access, Directory and File system management
• Device handling(I/O)
• Protection
• Networking, etc.
Types of System Calls : There are 5 different categories of system calls –
Process control: end, abort, create, terminate, allocate and free memory.
File management: create, open, close, delete, read file, etc.
Device management
Information maintenance
Communication
38. Operating system structure
When DOS was originally written its developers had no idea how
big and important it would eventually become. It was written by a
few programmers in a relatively short amount of time, without the
benefit of modern software engineering techniques, and then
gradually grew over time to exceed its original expectations. It does
not break the system into subsystems, and has no distinction
between user and kernel modes, allowing all programs direct access
to the underlying hardware. (Note that user versus kernel mode
was not supported by the 8088 chipset anyway, so that really
wasn't an option back then.)
40. Monolithic structure
It is the oldest architecture used for developing operating systems. The
entire operating system runs in the kernel. A system call is
involved, i.e., switching from user mode to kernel mode and transferring
control to the operating system, shown as event 1. Many CPUs have two modes:
kernel mode, for the operating system, in which all instructions are allowed,
and user mode, for user programs, in which I/O and certain other
instructions are not allowed. The operating system then examines the
parameters of the call to determine which system call is to be carried out,
shown as event 2. Next, the operating system indexes into a table that
contains the procedures that carry out the system calls, shown as
event 3. Finally, when the work has been completed and the
system call is finished, control is given back to user mode, as
shown as event 4.
42. Layered system structure
Layer – 0 deals with hardware
Layer – 1 deals with allocation of the CPU to processes
Layer – 2 implements memory management, i.e. paging,
segmentation, etc.
Layer – 3 deals with device drivers. Device drivers provide
device-dependent interaction with devices
Layer – 4 deals with input/output buffering
Layer – 5 deals with the user interface
44. Virtual Machine structure
The system originally called CP/CMS, later renamed VM/370, was based
on an astute observation: a time-sharing system provides both
multiprogramming and an extended machine with a more convenient
interface than the bare hardware.
The heart of the system, known as the virtual machine monitor, runs on
the bare hardware and does the multiprogramming, providing several
virtual machines to the next layer up, as shown in the given figure.
These virtual machines aren't extended machines with files and other
nice features. They are exact copies of the bare hardware, including
kernel/user mode, input/output, interrupts, and everything else the
real machine has.
46. Microkernel structure
This structures the operating system by removing all nonessential portions
of the kernel and implementing them as system and user level programs.
•Generally they provide minimal process and memory management, and a
communications facility.
•Communication between components of the OS is provided by message
passing.
The benefits of the microkernel are as follows:
•Extending the operating system becomes much easier.
•Any changes to the kernel tend to be fewer, since the kernel is smaller.
•The microkernel also provides more security and reliability.
The main disadvantage is poor performance, due to increased system
overhead from message passing.
48. Client server structure
In the client-server model, as shown in the figure given below, all the
kernel does is handle the communication between the clients and the
servers.
By splitting the operating system (OS) up into parts, each of which only
handles one facet of the system, such as file service, terminal service,
process service, or memory service, each part becomes small and
manageable.
The adaptability of the client-server model to use in distributed systems is
the advantage of this model.
51. Process
• A process is a program in execution. A process is not the same as the
program code, but a lot more than it. A process is an 'active' entity, as
opposed to a program, which is considered a 'passive' entity.
Attributes held by a process include hardware state, memory, CPU, etc.
• Process memory is divided into four sections for efficient working :
• The text section is made up of the compiled program code, read in
from non-volatile storage when the program is launched.
• The data section is made up of the global and static variables, allocated
and initialized prior to executing main.
• The heap is used for the dynamic memory allocation, and is
managed via calls to new, delete, malloc, free, etc.
• The stack is used for local variables. Space on the stack is reserved
for local variables when they are declared.
53. Process state
New - The process is being created.
Ready - The process is waiting to be assigned to a processor.
Running - Instructions are being executed.
Waiting - The process is waiting for some event to occur(such
as an I/O completion or reception of a signal).
Terminated - The process has finished execution.
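The five states above can be sketched as a transition table. This is an illustrative Python sketch, not how any real kernel stores states:

```python
# Five-state process model as a table: state -> set of legal next states.
TRANSITIONS = {
    "new":        {"ready"},                           # admitted
    "ready":      {"running"},                         # dispatched
    "running":    {"ready", "waiting", "terminated"},  # preempt, block, exit
    "waiting":    {"ready"},                           # I/O completes / signal received
    "terminated": set(),
}

def step(state, target):
    """Move a process to `target` only if the transition is legal."""
    if target not in TRANSITIONS[state]:
        raise ValueError(f"illegal transition {state} -> {target}")
    return target

# A process is created, runs, blocks on I/O, then runs again and exits.
s = "new"
for nxt in ["ready", "running", "waiting", "ready", "running", "terminated"]:
    s = step(s, nxt)
print(s)  # terminated
```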
55. Process Control Block
Process State - Running, waiting, etc., as discussed above.
Process ID, and parent process ID.
CPU registers and Program Counter - These need to be saved and
restored when swapping processes in and out of the CPU.
CPU-Scheduling information - Such as priority information and
pointers to scheduling queues.
Memory-Management information - E.g. page tables or segment
tables.
Accounting information - user and kernel CPU time consumed,
account numbers, limits, etc.
I/O Status information - Devices allocated, open file tables, etc.
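The PCB fields listed above can be sketched as a record type. This is a hypothetical Python sketch for illustration; real kernels keep the PCB as a C structure (for example, Linux's task_struct):

```python
from dataclasses import dataclass, field

@dataclass
class PCB:
    pid: int                                        # process ID
    parent_pid: int                                 # parent process ID
    state: str = "new"                              # process state
    program_counter: int = 0                        # saved PC
    registers: dict = field(default_factory=dict)   # saved CPU registers
    priority: int = 0                               # CPU-scheduling information
    page_table: dict = field(default_factory=dict)  # memory-management information
    cpu_time_used: float = 0.0                      # accounting information
    open_files: list = field(default_factory=list)  # I/O status information

# The values here are invented for the example.
pcb = PCB(pid=42, parent_pid=1, priority=5)
pcb.state = "ready"
print(pcb.pid, pcb.state)  # 42 ready
```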
58. Thread concepts
A thread is an execution unit which consists of its own program counter, a stack, and a
set of registers. Threads are also known as lightweight processes. Threads are a
popular way to improve application performance through parallelism. The CPU switches
rapidly back and forth among the threads, giving the illusion that the threads are
running in parallel.
As each thread has its own independent resources for execution, multiple
threads can be executed in parallel by increasing the number of threads.
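A small illustrative sketch using Python's threading module (the data and names are invented): each thread runs on its own stack, but both share the process's memory, here the results dictionary:

```python
import threading

data = list(range(100))
results = {}  # shared process memory, written to by both threads

def partial_sum(name, chunk):
    results[name] = sum(chunk)  # each thread computes on its own stack

t1 = threading.Thread(target=partial_sum, args=("low", data[:50]))
t2 = threading.Thread(target=partial_sum, args=("high", data[50:]))
t1.start(); t2.start()
t1.join(); t2.join()

total = results["low"] + results["high"]
print(total)  # 4950
```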
59. Types of Thread
There are two types of threads :
User Threads
Kernel Threads
User threads are implemented above the kernel, without kernel support. These are
the threads that application programmers use in their programs.
Kernel threads are supported within the kernel of the OS itself. All modern
OSs support kernel level threads, allowing the kernel to perform multiple
simultaneous tasks and/or to service multiple kernel system calls
simultaneously.
60. Advantages of Thread
Responsiveness
Resource sharing, hence allowing better utilization of resources.
Economy. Creating and managing threads becomes easier.
Scalability. One thread runs on one CPU. In Multithreaded
processes, threads can be distributed over a series of processors to
scale.
Context switching is smooth. Context switching refers to the
procedure followed by the CPU to change from one task to another.
62. Process Scheduling
Process scheduling is the activity of the process manager that
handles the removal of the running process from the CPU and the
selection of another process on the basis of a particular strategy. Process
scheduling is an essential part of multiprogramming operating systems.
63. Process Scheduling
An operating system uses two types of scheduling for process
execution: preemptive and non-preemptive.
1. Preemptive scheduling:
Under a preemptive scheduling policy, a low-priority process has to suspend
its execution if a high-priority process is waiting in the same queue for
execution.
2. Non-preemptive scheduling:
Under a non-preemptive scheduling policy, processes are executed on a first
come, first served basis, which means the next process is executed only
when the currently running process finishes its execution.
64. Types of scheduler
1) Long-Term Scheduler
It selects the processes that are to be placed in the ready queue. The
long-term scheduler basically decides the priority in which
processes must be placed in main memory. The processes it admits are
placed in the ready state, where each process waits for the CPU to
pick it up for execution; because admitting jobs happens on a long
time scale, this is known as the long-term scheduler.
65. Types of scheduler
2) Mid-Term Scheduler
It places the blocked and suspended processes in the secondary memory of
a computer system. The task of moving a process from main memory to
secondary memory is called swapping out. The task of moving a swapped-out
process back from secondary memory to main memory is known as swapping
in. The swapping of processes is performed to ensure the best utilization
of main memory.
3) Short-Term Scheduler
It decides the priority in which processes in the ready queue are
allocated central processing unit (CPU) time for their execution. The
short-term scheduler is also referred to as the central processing unit (CPU)
scheduler.
66. Compare types of scheduler

S.N.  Long-Term Scheduler            Short-Term Scheduler           Medium-Term Scheduler
1     It is a job scheduler.         It is a CPU scheduler.         It is a process-swapping
                                                                    scheduler.
2     Speed is lesser than the       Speed is the fastest among     Speed is in between both the
      short-term scheduler.          the other two.                 short- and long-term schedulers.
3     It controls the degree of      It provides lesser control     It reduces the degree of
      multiprogramming.              over the degree of             multiprogramming.
                                     multiprogramming.
4     It is almost absent or         It is also minimal in          It is a part of time-sharing
      minimal in time-sharing        time-sharing systems.          systems.
      systems.
5     It selects processes from      It selects those processes     It can re-introduce a process
      the pool and loads them        which are ready to execute.    into memory, and execution can
      into memory for execution.                                    be continued.
67. Scheduling criteria
CPU utilization : To make the best use of the CPU and not waste any
CPU cycle, the CPU would be working most of the time (ideally 100% of the
time). Considering a real system, CPU usage should range from 40%
(lightly loaded) to 90% (heavily loaded).
Throughput : The total number of processes completed per unit time,
or, rather, the total amount of work done in a unit of time. This may range
from 10/second to 1/hour depending on the specific processes.
Turnaround time : The amount of time taken to execute a particular
process, i.e. the interval from the time of submission of the process to the
time of completion of the process (wall-clock time).
68. Scheduling criteria
Waiting time : The sum of the periods spent waiting in the ready queue;
the amount of time a process has been waiting in the ready queue to
get control of the CPU.
Load average : The average number of processes residing in the ready
queue waiting for their turn to get the CPU.
Response time : The amount of time from when a request was
submitted until the first response is produced. Remember, it is the time until
the first response, not the completion of process execution (final
response).
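The turnaround-time and waiting-time definitions above can be checked with a tiny worked example (the numbers are invented for illustration):

```python
# turnaround time = completion time - arrival time
# waiting time    = turnaround time - CPU burst time
def criteria(arrival, burst, completion):
    turnaround = completion - arrival
    waiting = turnaround - burst
    return turnaround, waiting

# A process arrives at t=0 with a 3-unit burst and finishes at t=10:
# it spent 10 units in the system, 7 of them waiting in the ready queue.
tat, wt = criteria(arrival=0, burst=3, completion=10)
print(tat, wt)  # 10 7
```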
71. FCFS Algorithms
Jobs are executed on a first come, first served basis.
It is a non-preemptive scheduling algorithm.
Easy to understand and implement.
Its implementation is based on a FIFO queue.
Poor in performance, as the average wait time is high.
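A minimal FCFS sketch (the process names and times are invented): each job's waiting time is simply how long it sits in the FIFO queue before its turn:

```python
def fcfs(processes):
    """processes: list of (name, arrival, burst), sorted by arrival time."""
    t, waits = 0, {}
    for name, arrival, burst in processes:
        t = max(t, arrival)        # CPU may sit idle until the job arrives
        waits[name] = t - arrival  # time spent in the ready queue
        t += burst                 # run to completion (non-preemptive)
    return waits

waits = fcfs([("P1", 0, 5), ("P2", 1, 3), ("P3", 2, 1)])
print(waits)  # {'P1': 0, 'P2': 4, 'P3': 6}
```

Note how the short job P3 waits 6 units behind the long job P1; this is the high average waiting time the slide mentions.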
77. SJF (SJN) Algorithms
This is also known as shortest job first, or SJF.
In its basic form it is non-preemptive; the preemptive variant is
known as shortest remaining time first (SRTF).
Best approach to minimize waiting time.
Easy to implement in batch systems where the required CPU time
is known in advance.
Impossible to implement in interactive systems where the required
CPU time is not known.
The processor should know in advance how much time the process
will take.
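A sketch of non-preemptive SJF (example values invented): among the processes that have already arrived, always run the one with the shortest burst:

```python
def sjf(processes):
    """processes: list of (name, arrival, burst). Returns completion order."""
    remaining = sorted(processes, key=lambda p: p[1])  # by arrival time
    t, order = 0, []
    while remaining:
        ready = [p for p in remaining if p[1] <= t]
        if not ready:                            # CPU idle until next arrival
            t = min(p[1] for p in remaining)
            continue
        job = min(ready, key=lambda p: p[2])     # shortest burst wins
        remaining.remove(job)
        t += job[2]                              # run it to completion
        order.append(job[0])
    return order

order = sjf([("P1", 0, 7), ("P2", 1, 4), ("P3", 2, 1)])
print(order)  # ['P1', 'P3', 'P2']
```

P1 runs first only because it is alone at t=0; once it finishes, the shorter P3 jumps ahead of P2.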
82. RR Algorithms
Round Robin is a preemptive process scheduling algorithm.
Each process is given a fixed time to execute, called
a quantum.
Once a process has executed for the given time period, it is preempted
and another process executes for its time period.
Context switching is used to save the states of preempted processes.
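A sketch of Round Robin with an invented workload: each process runs for at most one quantum, then goes to the back of the queue if it still has work left:

```python
from collections import deque

def round_robin(bursts, quantum):
    """bursts: dict name -> burst time (all arrive at t=0). Returns finish times."""
    queue = deque(bursts.items())
    t, finish = 0, {}
    while queue:
        name, remaining = queue.popleft()
        run = min(quantum, remaining)
        t += run                      # the process uses the CPU for one slice
        remaining -= run
        if remaining:
            queue.append((name, remaining))  # preempted: back of the queue
        else:
            finish[name] = t
    return finish

finish = round_robin({"P1": 5, "P2": 3, "P3": 1}, quantum=2)
print(finish)  # {'P3': 5, 'P2': 8, 'P1': 9}
```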
91. Priority Algorithms
Priority scheduling is a non-preemptive algorithm and one of the most
common scheduling algorithms in batch systems.
Each process is assigned a priority. The process with the highest priority
is executed first, and so on.
Processes with the same priority are executed on a first come, first
served basis.
Priority can be decided based on memory requirements, time
requirements or any other resource requirement.
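A sketch of non-preemptive priority scheduling (invented values; the convention that a lower number means a higher priority is an assumption of this sketch):

```python
def priority_schedule(processes):
    """processes: list of (name, priority) in arrival order. Returns run order."""
    # sorted() is stable, so equal priorities keep their arrival (FCFS) order
    return [name for name, _ in sorted(processes, key=lambda p: p[1])]

order = priority_schedule([("P1", 3), ("P2", 1), ("P3", 3), ("P4", 2)])
print(order)  # ['P2', 'P4', 'P1', 'P3']
```

P1 and P3 share priority 3, so they run in arrival order, as the slide requires.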
92. Interprocess communication
In computer science, inter-process communication or interprocess
communication (IPC) allows communicating processes to exchange
data and information.
There are two methods of IPC :
1. Shared memory
2. Message passing
94. Interprocess communication
Shared memory :
In this method, processes interact with each other through shared variables;
processes exchange information by reading & writing data using a
shared variable.
Message passing :
In this method, instead of reading or writing shared data, processes send
and receive messages.
The send and receive functions are implemented in the OS:
SEND (B, message)
RECEIVE (A, memory address)
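The SEND/RECEIVE pair above can be mimicked in user space. In this sketch a thread-safe queue stands in for the kernel's message buffer, which is an assumption of the example, not how a real kernel implements IPC:

```python
import queue
import threading

mailbox = queue.Queue()  # plays the role of the kernel's message buffer

def sender():
    mailbox.put("hello from A")      # SEND(B, message)

def receiver(out):
    out.append(mailbox.get())        # RECEIVE(A, ...): blocks until a message arrives

received = []
b = threading.Thread(target=receiver, args=(received,))
a = threading.Thread(target=sender)
b.start(); a.start()                 # receiver may start first and simply block
a.join(); b.join()
print(received)  # ['hello from A']
```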
95. Critical section
A critical section is a piece of code that accesses a shared resource (a data
structure or device) that must not be concurrently accessed by more than
one thread of execution.
96. Mutual Exclusion
Mutual exclusion (mutex) is the requirement that a shared resource not
be accessed by more than one process at the same time.
97. Semaphore in IPC
In computer science, a semaphore is a variable or abstract data
type used to control access to a common resource by multiple processes in
a concurrent system, such as a multiprogramming operating system.
A semaphore is simply a variable. This variable is used to solve the critical
section problem and to achieve process synchronization in a multi-
processing environment.
The two most common kinds of semaphores are counting semaphores and
binary semaphores. A counting semaphore can take non-negative integer
values; a binary semaphore can take only the values 0 and 1.
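A counting-semaphore sketch using Python's threading.Semaphore (the bookkeeping lists exist only so the example can observe the bound; they are not part of the technique):

```python
import threading

sem = threading.Semaphore(2)  # counting semaphore: at most 2 holders at once
inside = []                   # names currently holding the resource
max_inside = [0]              # high-water mark, to check the bound
lock = threading.Lock()       # protects the bookkeeping lists themselves

def worker(name):
    sem.acquire()                    # wait(S): blocks if 2 threads are already in
    with lock:
        inside.append(name)
        max_inside[0] = max(max_inside[0], len(inside))
    with lock:
        inside.remove(name)
    sem.release()                    # signal(S)

threads = [threading.Thread(target=worker, args=(i,)) for i in range(5)]
for t in threads: t.start()
for t in threads: t.join()
print(max_inside[0] <= 2)  # True: never more than 2 inside at once
```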
98. Types of Semaphore
Semaphores are a useful tool in the prevention of race conditions; however,
their use is by no means a guarantee that a program is free from these
problems. Semaphores which allow an arbitrary resource count are
called counting semaphores, while semaphores which are restricted to the
values 0 and 1 (or locked/unlocked, unavailable/available) are
called binary semaphores and are used to implement locks.
100. Monitor
A monitor is a synchronization construct that allows threads to have both mutual
exclusion and the ability to wait (block) for a certain condition to become true.
Monitors also have a mechanism for signaling other threads that their condition has
been met.
A monitor consists of a mutex (lock) object and condition variables. A condition
variable is basically a container of threads that are waiting for a certain condition.
Monitors provide a mechanism for threads to temporarily give up exclusive access
in order to wait for some condition to be met, before regaining exclusive access and
resuming their task.
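A monitor sketch built from Python's Lock and Condition (the class name CounterMonitor is invented for illustration): the lock gives mutual exclusion, and the condition variable lets a thread wait, releasing the lock, until another thread signals that the condition holds:

```python
import threading

class CounterMonitor:
    def __init__(self):
        self._lock = threading.Lock()
        self._nonzero = threading.Condition(self._lock)  # condition variable
        self.value = 0

    def increment(self):
        with self._lock:                 # enter the monitor (mutual exclusion)
            self.value += 1
            self._nonzero.notify()       # signal: the condition may now hold

    def decrement(self):
        with self._lock:                 # enter the monitor
            while self.value == 0:       # wait for the condition
                self._nonzero.wait()     # gives up the lock while blocked
            self.value -= 1

c = CounterMonitor()
t = threading.Thread(target=c.decrement)   # blocks until an increment arrives
t.start()
c.increment()                              # wakes the waiting thread
t.join()
print(c.value)  # 0
```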
101. Monitor

ASPECT      SEMAPHORE                           MONITOR
Basic       A semaphore is an integer           A monitor is an abstract data
            variable S.                         type.
Action      The value of semaphore S            The monitor type contains
            indicates the number of shared      shared variables and the set
            resources available in the          of procedures that operate on
            system.                             the shared variables.
Access      When a process accesses the         When a process wants to
            shared resources it performs a      access the shared variables in
            wait() operation on S, and when     the monitor, it must access
            it releases the shared resources    them through the procedures.
            it performs a signal() operation
            on S.
Condition   A semaphore does not have           A monitor has condition
variable    condition variables.                variables.
103. IPC Problem
1. Producer Consumer Problem
2. Reader Writer Problem
3. Dining Philosopher Problem
4. Sleeping Barber Problem
104. Producer-consumer problem
In computing, the producer–consumer problem (also known as
the bounded-buffer problem) is a classic example of a multi-
process synchronization problem. The problem describes two
processes, the producer and the consumer, who share a common,
fixed-size buffer used as a queue. The producer's job is to generate
data, put it into the buffer, and start again. At the same time, the
consumer is consuming the data (i.e., removing it from the buffer),
one piece at a time. The problem is to make sure that the producer
won't try to add data into the buffer if it's full and that the consumer
won't try to remove data from an empty buffer.
105. Producer-consumer problem
The solution for the producer is to either go to sleep or discard data
if the buffer is full. The next time the consumer removes an item
from the buffer, it notifies the producer, who starts to fill the buffer
again. In the same way, the consumer can go to sleep if it finds the
buffer to be empty. The next time the producer puts data into the
buffer, it wakes up the sleeping consumer. The solution can be
reached by means of inter-process communication, typically
using semaphores. An inadequate solution could result in
a deadlock where both processes are waiting to be awakened. The
problem can also be generalized to have multiple producers and consumers.
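The semaphore solution sketched above can be written as follows; the buffer size and item count are arbitrary example values:

```python
import threading
import collections

BUF_SIZE = 4
buffer = collections.deque()

empty = threading.Semaphore(BUF_SIZE)  # counts free slots
full = threading.Semaphore(0)          # counts filled slots
mutex = threading.Lock()               # protects the buffer itself

consumed = []

def producer(n):
    for i in range(n):
        empty.acquire()        # sleep if the buffer is full
        with mutex:
            buffer.append(i)
        full.release()         # wake a sleeping consumer

def consumer(n):
    for _ in range(n):
        full.acquire()         # sleep if the buffer is empty
        with mutex:
            item = buffer.popleft()
        empty.release()        # wake a sleeping producer
        consumed.append(item)  # safe without the mutex: single consumer

p = threading.Thread(target=producer, args=(20,))
c = threading.Thread(target=consumer, args=(20,))
p.start(); c.start()
p.join(); c.join()
print(consumed == list(range(20)))  # True: all items arrive, in order
```

Because the producer sleeps on `empty` and the consumer sleeps on `full`, neither can busy-wait or touch the buffer in the wrong state, and the wake-ups cannot be lost, which avoids the deadlock mentioned above.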
106. Reader - writer problem
The R-W problem is another classic problem against which designs of synchronization and concurrency mechanisms can be tested; the producer/consumer and dining philosophers problems are two others.
Definition
There is a data area that is shared among a number of processes.
Any number of readers may simultaneously read the data area.
Only one writer at a time may write to the data area.
If a writer is writing to the data area, no reader may read it.
If there is at least one reader reading the data area, no writer may write to
it.
Readers only read and writers only write
A process that reads and writes to a data area must be considered a writer
(consider producer or consumer)
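A minimal lock enforcing these rules can be sketched as below. This is the first-readers-writers variant (readers share access, writers are exclusive, and writers may starve); the class and method names are invented for illustration:

```python
import threading

class ReadersWriterLock:
    """First-readers-writers solution: a reader count guarded by a
    mutex, plus a write lock taken by the first reader in and
    released by the last reader out."""
    def __init__(self):
        self.readers = 0
        self.mutex = threading.Lock()  # protects self.readers
        self.wrt = threading.Lock()    # held during any write

    def acquire_read(self):
        with self.mutex:
            self.readers += 1
            if self.readers == 1:      # first reader locks out writers
                self.wrt.acquire()

    def release_read(self):
        with self.mutex:
            self.readers -= 1
            if self.readers == 0:      # last reader lets writers in
                self.wrt.release()

    def acquire_write(self):
        self.wrt.acquire()             # excludes readers and writers

    def release_write(self):
        self.wrt.release()

rw = ReadersWriterLock()
rw.acquire_read(); rw.acquire_read()   # two readers share the data area
rw.release_read(); rw.release_read()
rw.acquire_write()                     # exactly one writer at a time
rw.release_write()
print(rw.readers)  # 0
```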
107. Dining philosopher problem
The Dining Philosophers Problem states that K philosophers are seated around a circular table with one chopstick between each pair of philosophers. A philosopher may eat only if he can pick up the two chopsticks adjacent to him. Each chopstick may be picked up by either of its adjacent philosophers, but not by both at once.
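One standard deadlock-free solution is sketched below: number the chopsticks and have every philosopher pick up the lower-numbered one first, which breaks the circular wait. The round count is an arbitrary example value:

```python
import threading

K = 5
chopsticks = [threading.Lock() for _ in range(K)]
meals = [0] * K

def philosopher(i, rounds):
    left, right = i, (i + 1) % K
    # Always pick up the lower-numbered chopstick first: this
    # resource ordering makes a circular wait impossible.
    first, second = min(left, right), max(left, right)
    for _ in range(rounds):
        with chopsticks[first]:
            with chopsticks[second]:
                meals[i] += 1  # eat

threads = [threading.Thread(target=philosopher, args=(i, 10))
           for i in range(K)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(meals)  # [10, 10, 10, 10, 10]
```

If every philosopher instead grabbed the left chopstick first, all K could hold one chopstick and wait forever for the other: the classic deadlock.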
109. DEADLOCK
• A deadlock is a situation in which two computer programs sharing the
same resource are effectively preventing each other from accessing the
resource, resulting in both programs ceasing to function. The earliest
computer operating systems ran only one program at a time.
110. DEADLOCK condition
Mutual Exclusion: At least one resource is non-sharable (only one process can use it at a time).
Hold and Wait: A process holds at least one resource while waiting to acquire additional resources held by other processes.
111. DEADLOCK condition
No Preemption: A resource cannot be taken from a process unless the process releases it voluntarily.
Circular Wait: A set of processes wait for each other in a circular chain, each holding a resource that the next one needs.
112. Methods for handling deadlock
• There are three ways to handle deadlock:
1) Deadlock prevention or avoidance: the idea is to never let the system enter a deadlock state.
2) Deadlock detection and recovery: let deadlock occur, then detect it and recover, for example by preemption.
3) Ignore the problem altogether: if deadlock is very rare, let it happen and reboot the system. This is the approach that both Windows and UNIX take.
113. Deadlock recovery
• Preemption: We can take a resource from one process and give it to another. This resolves the deadlock, but it can cause problems for the preempted process.
• Rollback: In situations where deadlock is a real possibility, the system can periodically record the state of each process (a checkpoint). When deadlock occurs, everything is rolled back to the last checkpoint and restarted, allocating resources differently so that the deadlock does not recur.
• Kill one or more processes: This is the simplest way, but it works.
114. Deadlock prevention
We can prevent deadlock by eliminating any of the four conditions above.
Eliminate Mutual Exclusion
It is not possible to violate mutual exclusion for some resources, such as a tape drive or printer, because they are inherently non-shareable.
Eliminate Hold and Wait
1. Allocate all required resources to a process before it starts executing. This eliminates hold-and-wait but leads to low device utilization: for example, if a process needs a printer only late in its run, the printer stays blocked from the start of execution until the process completes.
2. Require a process to release its current set of resources before requesting a new set. This solution may lead to starvation.
Eliminate No Preemption
Preempt resources from a process when they are required by a higher-priority process.
Eliminate Circular Wait
Assign a number to each resource, and allow a process to request resources only in increasing order of numbering.
For example, if process P1 has been allocated R5 and later asks for R4 or R3 (numbered lower than R5), the request will not be granted; only requests for resources numbered higher than R5 will be granted.
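The increasing-order rule can be enforced in code. In this sketch the `RankedLock` class is invented for illustration: each lock carries a number, and a request is rejected unless its number is higher than every lock the thread already holds:

```python
import threading

class RankedLock:
    """A lock with a numeric rank. A thread may only acquire locks
    in strictly increasing rank order, so a circular wait cannot form."""
    _held = threading.local()  # per-thread list of held ranks

    def __init__(self, rank):
        self.rank = rank
        self._lock = threading.Lock()

    def acquire(self):
        held = getattr(RankedLock._held, "ranks", [])
        if held and self.rank <= max(held):
            raise RuntimeError(
                f"lock order violation: holding {max(held)}, "
                f"requested {self.rank}")
        self._lock.acquire()
        RankedLock._held.ranks = held + [self.rank]

    def release(self):
        self._lock.release()
        RankedLock._held.ranks.remove(self.rank)

r3, r5 = RankedLock(3), RankedLock(5)
r3.acquire(); r5.acquire()   # increasing order: allowed
r5.release(); r3.release()

r5.acquire()
try:
    r3.acquire()             # decreasing order: rejected
except RuntimeError as e:
    print("blocked:", e)
r5.release()
```

Turning an ordering violation into an immediate error makes potential circular waits show up deterministically in testing instead of as rare hangs in production.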
115. Deadlock Avoidance
Banker’s Algorithm
The banker's algorithm is a deadlock avoidance algorithm. It is named so because it models the check a bank performs to decide whether a loan can be granted.
Consider n account holders in a bank, where the sum of the money in all of their accounts is S. Every time the bank grants a loan, it subtracts the loan amount from the cash it holds and checks that the remainder is still enough to cover all n account holders withdrawing all their money at once; only then is the loan granted.
The banker's algorithm works in a similar way in a computer. Whenever a new process is created, it must specify in advance the maximum number of instances of each resource type that it may ever request.
116. Deadlock Avoidance
Let us assume that there are n processes and m resource types. The following data structures are used to implement the banker's algorithm:
Available: It is an array of length m. It represents the number of available resources
of each type. If Available[j] = k, then there are k instances available, of resource
type Rj.
Max: It is an n x m matrix which represents the maximum number of instances of each resource that a process can request. If Max[i][j] = k, then process Pi can request at most k instances of resource type Rj.
Allocation: It is an n x m matrix which represents the number of resources of each
type currently allocated to each process. If Allocation[i][j] = k, then process Pi is
currently allocated k instances of resource type Rj.
Need: It is an n x m matrix which indicates the remaining resource needs of each
process. If Need[i][j] = k, then process Pi may need k more instances of resource
type Rj to complete its task.
Need[i][j] = Max[i][j] - Allocation[i][j]
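The safety check built on these data structures can be sketched as follows; the 5-process, 3-resource instance at the bottom is a standard textbook example, used here only as test data:

```python
def is_safe(available, max_, allocation):
    """Banker's safety check: return (True, safe_order) if every
    process can finish in some order, else (False, [])."""
    n, m = len(max_), len(available)
    # Need[i][j] = Max[i][j] - Allocation[i][j]
    need = [[max_[i][j] - allocation[i][j] for j in range(m)]
            for i in range(n)]
    work = list(available)       # resources currently free
    finished = [False] * n
    order = []
    while len(order) < n:
        for i in range(n):
            if not finished[i] and all(need[i][j] <= work[j]
                                       for j in range(m)):
                # Pi can run to completion; reclaim its allocation.
                for j in range(m):
                    work[j] += allocation[i][j]
                finished[i] = True
                order.append(i)
                break
        else:
            return False, []     # no process can proceed: unsafe state
    return True, order

available = [3, 3, 2]
max_ = [[7, 5, 3], [3, 2, 2], [9, 0, 2], [2, 2, 2], [4, 3, 3]]
allocation = [[0, 1, 0], [2, 0, 0], [3, 0, 2], [2, 1, 1], [0, 0, 2]]
safe, order = is_safe(available, max_, allocation)
print(safe, order)  # True [1, 3, 0, 2, 4]
```

A resource request is granted only if the state that would result still passes this check; otherwise the requesting process waits.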
121. pipe
• A pipe is a connection between two processes, such that the standard output of one process becomes the standard input of the other. In the UNIX operating system, pipes are useful for communication between related processes (inter-process communication).
• A pipe is one-way communication only, i.e., one process writes to the pipe and the other process reads from it. Opening a pipe creates an area of main memory that is treated as a “virtual file”.
• The pipe can be used by the creating process, as well as all its child
processes, for reading and writing. One process can write to this “virtual
file” or pipe and another related process can read from it.
• If a process tries to read before something is written to the pipe, the
process is suspended until something is written.
• The pipe system call finds the first two available positions in the
process’s open file table and allocates them for the read and write ends
of the pipe.
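A minimal sketch of this on a POSIX system, assuming `os.fork` is available: `pipe()` returns a read descriptor and a write descriptor, the child writes, and the parent reads.

```python
import os

r, w = os.pipe()         # allocates two entries in the open-file table
pid = os.fork()
if pid == 0:                     # child process: the writer
    os.close(r)                  # close the unused read end
    os.write(w, b"hello from child")
    os.close(w)
    os._exit(0)
else:                            # parent process: the reader
    os.close(w)                  # close the unused write end
    msg = os.read(r, 1024)       # blocks until the child writes
    os.close(r)
    os.waitpid(pid, 0)           # reap the child
    print(msg)                   # b'hello from child'
```

Closing the unused ends matters: the reader sees end-of-file only after every write descriptor for the pipe has been closed.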