Unit2 – Part 2
Process
Reasons for needing cooperating
processes
Information Sharing:
Multiple processes can share information by cooperating. This may include
access to the same files, so a mechanism is required that lets the
processes access the files in parallel.
Modularity:
Modularity involves dividing complicated tasks into smaller subtasks. These
subtasks can be completed by different cooperating processes, leading to
faster and more efficient completion of the required tasks.
Computation speed up:
Subtasks of a single task can be performed in parallel using cooperating
processes. This increases the computation speed, as the task can be
executed faster.
Convenience:
There are many tasks that a user needs to do such as compiling, printing,
editing etc. It is convenient if these tasks can be managed by cooperating
processes.
Problems of Cooperative system
Possible to have deadlock
– Each process waiting for a message from the other process.
Possible to have starvation
– Two processes sending a message to each other while another process waits for
a message.
Possible to Damage the Data
– A cooperating process may corrupt shared data; for example, subtasks
created through modularity may write to the same data without coordination.
Information sharing
– In a cooperative system, information may be shared without the user's
knowledge.
– This may include personal data or sensitive information that the user
does not want to share with others.
Data may be Hacked
– Shared data, e.g. a bank's client information, becomes visible to other
systems through a cooperative system and can be stolen or misused.
– For example, money could be transferred from one account to another.
Process Queues and
Scheduling
Process Queues
Ready queue
▪ one of the many queues that a process may be added to
▪ CPU scheduling selects from the ready queue.
Job queue
▪ set of all processes started in the system waiting for memory
Device queues
▪ set of processes waiting for an I/O device
▪ A process will wait in such a queue until I/O is finished or until the waited event happens
▪ Processes migrate among the various queues
Two-State Process Model
The two-state process model refers to the running and not-running states,
which are described below.
Running
When a new process is created, it enters the system in the running state.
Not Running
Processes that are not running are kept in a queue, waiting for their turn
to execute. Each entry in the queue is a pointer to a particular process;
the queue is implemented using a linked list. The dispatcher works as
follows: when a process is interrupted, it is transferred to the waiting
queue; if the process has completed or aborted, it is discarded. In either
case, the dispatcher then selects a process from the queue to execute.
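The queue-and-dispatcher behaviour described above can be sketched in Python (a minimal illustrative sketch; the Process class and the admit/dispatch/interrupt names are ours, not a real OS API):

```python
from collections import deque

class Process:
    """Illustrative process record: a pid plus a two-state field."""
    def __init__(self, pid):
        self.pid = pid
        self.state = "not running"

ready = deque()                      # the not-running queue (FIFO linked list)

def admit(p):
    p.state = "not running"
    ready.append(p)                  # each entry points to a process

def dispatch():
    """Dispatcher: select the next process from the queue and run it."""
    p = ready.popleft()
    p.state = "running"
    return p

def interrupt(p):
    """An interrupted process goes back to the not-running queue."""
    admit(p)

a, b = Process(1), Process(2)
admit(a)
admit(b)
cur = dispatch()                     # process 1 is selected first
interrupt(cur)                       # it is interrupted: back to the queue
nxt = dispatch()                     # dispatcher now selects process 2
```

A completed or aborted process would simply not be re-admitted, matching the "discarded" case above.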
Process Scheduling
• In a multiprogramming or time-sharing system, there
may be multiple processes ready to execute.
• We need to select one of them and give the CPU to it.
• This selection is the scheduling decision.
• There are various criteria that can be used in the scheduling
decision.
• The scheduling mechanism (dispatcher) then assigns
the selected process to the CPU and starts executing it.
Select
(scheduling algorithm: the policy that chooses a process)
Dispatch
(mechanism: assigns the CPU to the chosen process)
Schedulers
Schedulers are special system software that handle process scheduling in
various ways. Their main task is to select the jobs to be submitted into
the system and to decide which process to run. Schedulers are of three
types:
• Long-Term Scheduler
• Short-Term Scheduler
• Medium-Term Scheduler
Long-term scheduler (or job scheduler) – selects which processes should be brought
into the ready queue
Short-term scheduler (or CPU scheduler) – selects which process should be executed
next and allocates CPU
(Diagram: jobs wait in the job queue; the long-term scheduler admits them
into the ready queue in main memory; the short-term scheduler dispatches
processes from the ready queue to the CPU.)
Schedulers
• Short-term scheduler is invoked very frequently
(milliseconds), so it must be fast
• Long-term scheduler is invoked very infrequently
(seconds, minutes), so it may be slow
Addition of Medium Term
Scheduling
(Diagram: the medium-term scheduler swaps partially executed processes out
of main memory and later swaps them back into the ready queue, working
alongside the short-term (CPU) scheduler.)
Representation of Process
Scheduling
(Diagram: processes move from the ready queue to the CPU; on an I/O
request they wait in an I/O queue until the I/O completes, then rejoin the
ready queue.)
Process Behavior
• Processes can be described as either:
• I/O-bound process – spends more time doing I/O than
computations, many short CPU bursts
• CPU-bound process – spends more time doing
computations; few very long CPU bursts
• CPU burst: the execution of the program in CPU between two
I/O requests. We may have a short or long CPU burst.
Process Creation and
Termination
Process Creation
A parent process creates child processes, which, in turn,
create other processes, forming a tree of processes.
Generally, a process is identified and managed via a process
identifier (pid).
• Resource sharing alternatives:
• Parent and children share all resources
• Children share subset of parent’s resources
• Parent and child share no resources
• Execution alternatives:
• Parent and children execute concurrently
• Parent waits until children terminate
(Diagram: a tree of processes, with a parent process creating child
processes that in turn create their own children.)
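On a POSIX system, the create-then-wait pattern above can be sketched with Python's os.fork (an illustrative, Unix-only sketch of the "parent waits until children terminate" alternative):

```python
import os

pid = os.fork()                 # POSIX-only: create a child process
if pid == 0:
    # child: runs concurrently with the parent from this point on;
    # here it immediately asks the OS to delete it
    os._exit(0)
else:
    # parent: fork returned the child's pid; wait until the child terminates
    _, status = os.waitpid(pid, 0)
```

The "execute concurrently" alternative is simply the parent continuing its own work instead of calling waitpid right away.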
Process Termination
Process executes last statement and asks the operating
system to delete it (can use exit system call)
Process resources are deallocated by operating system
Parent may terminate execution of child processes (abort) when:
• The child has exceeded its allocated resources
• The task assigned to the child is no longer required
• The parent itself is exiting
• Some operating systems do not allow a child to continue if its parent
terminates; all children are then terminated as well (cascading
termination)
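Parent-initiated termination (abort) can be sketched the same way: the parent kills the child and then reaps it so its resources are deallocated (illustrative, Unix-only sketch):

```python
import os
import signal
import time

pid = os.fork()                       # POSIX-only: create a child
if pid == 0:
    while True:                       # child never finishes on its own
        time.sleep(0.1)
else:
    os.kill(pid, signal.SIGKILL)      # parent aborts the child
    _, status = os.waitpid(pid, 0)    # reap it: resources are deallocated
```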
What is Thread
A thread is a path of execution within a process. A process can
contain multiple threads.
Basic unit of CPU utilization
Thread has its own
• Thread ID
• Program counter
• Registers
• stack
A thread is a flow of execution through the process code, with its
own program counter that keeps track of which instruction to
execute next, system registers which hold its current working
variables, and a stack which contains the execution history.
A thread shares information such as the code segment, the data segment and
open files with its peer threads. When one thread alters a shared memory
item, all other threads see the change.
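The sharing described above can be demonstrated in Python: several threads update one shared object, and every thread's changes are visible to the others (a minimal sketch; a lock is used so the increments do not interleave and get lost):

```python
import threading

counter = {"value": 0}               # data shared by all threads in the process
lock = threading.Lock()

def worker():
    # each thread has its own stack, registers and program counter,
    # but all of them see the same shared data segment
    for _ in range(1000):
        with lock:                   # avoid lost updates on the increment
            counter["value"] += 1

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```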
What is Thread
A thread is also called a lightweight process. Threads provide
a way to improve application performance through
parallelism. Threads represent a software approach to
improving operating-system performance by reducing the
overhead; in the simplest case, a process with a single thread
is equivalent to a classical process.
Each thread belongs to exactly one process and no thread can
exist outside a process. Each thread represents a separate
flow of control. Threads have been successfully used in
implementing network servers and web servers. They also
provide a suitable foundation for parallel execution of
applications on shared memory multiprocessors. The
following figure shows the working of a single-threaded and a
multithreaded process.
Difference between Process and Thread
S.N. | Process | Thread
1 | Process is heavyweight and resource intensive. | Thread is lightweight, taking fewer resources than a process.
2 | Process switching needs interaction with the operating system. | Thread switching does not need to interact with the operating system.
3 | In multiple-processing environments, each process executes the same code but has its own memory and file resources. | All threads can share the same set of open files and child processes.
4 | If one process is blocked, then no other process can execute until the first process is unblocked. | While one thread is blocked and waiting, a second thread in the same task can run.
5 | Multiple processes without using threads use more resources. | Multiple threaded processes use fewer resources.
6 | In multiple processes, each process operates independently of the others. | One thread can read, write or change another thread's data.
Advantages of Thread
• Threads minimize the context switching time.
• Use of threads provides concurrency within a
process.
• Efficient communication.
• It is more economical to create and context switch
threads.
• Threads allow utilization of multiprocessor
architectures to a greater scale and efficiency.
Types of Thread
Threads are implemented in the following two ways:
• User Level Threads − threads managed in user space by the application.
• Kernel Level Threads − threads managed by the operating system kernel,
the core of the operating system.
User Level Threads
In this case, the kernel is not aware of the existence of
threads; thread management is done in user space by a thread
library. The thread library contains code for creating and
destroying threads, for passing messages and data between
threads, for scheduling thread execution, and for saving and
restoring thread contexts. The application starts with
a single thread.
User Level Threads
Advantages
• Thread switching does not require Kernel mode
privileges.
• User level thread can run on any operating system.
• Scheduling can be application specific in the user level
thread.
• User level threads are fast to create and manage.
Disadvantages
• There is a lack of coordination between threads and the
operating system kernel.
• A multithreaded application cannot take advantage of
multiprocessing, since the kernel schedules the process as a
single unit.
Kernel Level Threads
In this case, thread management is done by the Kernel.
There is no thread management code in the application
area. Kernel threads are supported directly by the
operating system. Any application can be programmed to
be multithreaded. All of the threads within an application
are supported within a single process.
The kernel maintains context information for the process
as a whole and for individual threads within the process.
Scheduling by the kernel is done on a thread basis. The
kernel performs thread creation, scheduling and
management in kernel space. Kernel threads are generally
slower to create and manage than user threads.
Kernel Level Threads
Advantages
• Kernel can simultaneously schedule multiple threads
from the same process on multiple processors.
• If one thread in a process is blocked, the Kernel can
schedule another thread of the same process.
• Kernel routines themselves can be multithreaded.
Disadvantages
• Kernel threads are generally slower to create and
manage than the user threads.
• Transfer of control from one thread to another within
the same process requires a mode switch to the Kernel.
Multithreading
Multithreading is the ability of an operating system process to manage its
use by more than one user at a time and to even manage multiple requests
by the same user without having to have multiple copies of the
programming running in the computer. Each user request for a program or
system service is kept track of as a thread with a separate identity. As
programs work on behalf of the initial request for that thread and are
interrupted by other requests, the status of work on behalf of that thread is
kept track of until the work is completed.
Multithreading is a technique that allows a program or a process to execute
many tasks concurrently, so that they appear to run at the same time even
on a single-processor system.
Multithreading is a specialized form of multitasking. Multitasking with
threads requires less overhead than multitasking with processes.
A process consists of the memory space allocated by the operating system
that can contain one or more threads. A thread cannot exist on its own; it
must be a part of a process. A process remains running until all of the
threads are done executing.
Multithreading enables you to write very efficient programs that make
maximum use of the CPU, because idle time can be kept to a minimum.
Process Synchronization
Process Synchronization means sharing system
resources among processes in such a way that
concurrent access to shared data is handled, thereby
minimizing the chance of inconsistent data.
Maintaining data consistency demands mechanisms
to ensure synchronized execution of cooperating
processes.
Process Synchronization was introduced to handle
problems that arise when multiple processes execute
concurrently.
Synchronization
• How does the sender/receiver behave if it cannot
send/receive the message immediately?
• It depends on whether blocking or non-blocking communication is used
• Blocking is considered synchronous
• Sender blocks until the receiver or kernel receives the message
• Receiver blocks until a message is available
• Non-blocking is considered asynchronous
• Sender sends the message if possible (or retries later), but always
returns immediately
• Receiver receives a valid message or null, but always returns
immediately
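The blocking vs non-blocking receive behaviour above can be illustrated with Python's queue module (a sketch of the message-passing semantics, not a real OS IPC mechanism; the mailbox name is ours):

```python
import queue

mailbox = queue.Queue()              # unbounded message queue (the "link")

# non-blocking receive: returns immediately; Empty plays the role of "null"
try:
    msg = mailbox.get_nowait()
except queue.Empty:
    msg = None

mailbox.put("hello")                 # send (succeeds at once: queue not full)
msg2 = mailbox.get(timeout=1)        # blocking receive: waits for a message
```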
Buffering
Operating systems need buffers, temporary memory locations, to decouple
communicating processes. For example, imagine two different processes: it
can be tricky to transfer data between them because the processes may be
in different states at a given time.
Say process A is sending a bitmap to the printer driver so that it can be
sent to the printer. Unfortunately, the driver is busy printing another
page at that time, so until the driver is ready the OS stores the data in
a buffer.
Exact behavior also depends on the available buffer capacity. The queue of
messages attached to the link can be implemented in one of three ways:
1. Zero capacity – 0 messages
Sender must wait for receiver
2. Bounded capacity – finite length of n messages
Sender must wait if link full
3. Unbounded capacity – infinite length
Sender never waits
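Bounded capacity can be illustrated the same way: once the link is full, a non-blocking sender fails immediately where a blocking sender would wait (illustrative sketch; the link name is ours):

```python
import queue

link = queue.Queue(maxsize=2)        # bounded capacity: at most 2 messages
link.put("m1")
link.put("m2")
try:
    link.put_nowait("m3")            # link is full; a blocking sender would wait here
    overflow = False
except queue.Full:
    overflow = True                  # non-blocking sender gives up immediately
```

With no maxsize the queue is effectively unbounded and the sender never waits; zero capacity would require a rendezvous, which queue.Queue does not model directly.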
Classical IPC/Synchronization
problems
• Producer Consumer (bounded buffer)
• Dining philosophers
• Readers and writers
• Sleeping barber
Bounded Buffer Problem
This problem is generalized in terms of the Producer
Consumer problem, where a finite buffer pool is
used to exchange messages between producer and
consumer processes.
• Because the buffer pool has a maximum size, this
problem is often called the Bounded buffer
problem.
• The solution to this problem is to create two counting
semaphores, "full" and "empty", to keep track of the
current number of full and empty buffers
respectively.
What is the Problem Statement?
There is a buffer of n slots and each slot is capable of
storing one unit of data. There are two processes
running, namely, producer and consumer, which are
operating on the buffer.
A producer tries to insert data into an empty slot of the
buffer. A consumer tries to remove data from a filled slot
in the buffer. As you might have guessed by now, those
two processes won't produce the expected output if they
are being executed concurrently.
There needs to be a way to make the producer and
consumer work in an independent manner.
Here's a Solution
One solution of this problem is to use semaphores. The
semaphores which will be used here are:
m, a binary semaphore which is used to acquire and
release the lock.
empty, a counting semaphore whose initial value is the
number of slots in the buffer, since, initially all slots are
empty.
full, a counting semaphore whose initial value is 0.
At any instant, the current value of empty represents the
number of empty slots in the buffer and full represents
the number of occupied slots in the buffer.
Here's a Solution
• The producer first waits until there is at least one empty slot.
• Then it decrements the empty semaphore because, there will now be one less
empty slot, since the producer is going to insert data in one of those slots.
• Then, it acquires lock on the buffer, so that the consumer cannot access the
buffer until producer completes its operation.
• After performing the insert operation, the lock is released and the value of full is
incremented because the producer has just filled a slot in the buffer.
• The consumer waits until there is at least one full slot in the buffer.
• Then it decrements the full semaphore because the number of occupied slots
will be decreased by one, after the consumer completes its operation.
• After that, the consumer acquires lock on the buffer.
• Following that, the consumer completes the removal operation so that the data
from one of the full slots is removed.
• Then, the consumer releases the lock.
• Finally, the empty semaphore is incremented by 1, because the consumer has
just removed data from an occupied slot, thus making it empty.
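The steps above can be sketched with Python semaphores, using the m, empty and full semaphores named earlier (a minimal sketch; the buffer size and item count are arbitrary):

```python
import threading

N = 5                                  # bounded buffer with N slots
buf = [None] * N
in_i = out_i = 0

m = threading.Semaphore(1)             # binary semaphore: the buffer lock
empty = threading.Semaphore(N)         # counts empty slots, initially N
full = threading.Semaphore(0)          # counts full slots, initially 0

items = list(range(20))
consumed = []

def producer():
    global in_i
    for item in items:
        empty.acquire()                # wait until at least one empty slot
        m.acquire()                    # lock the buffer
        buf[in_i] = item
        in_i = (in_i + 1) % N
        m.release()                    # release the lock
        full.release()                 # one more full slot

def consumer():
    global out_i
    for _ in items:
        full.acquire()                 # wait until at least one full slot
        m.acquire()
        consumed.append(buf[out_i])
        out_i = (out_i + 1) % N
        m.release()
        empty.release()                # one more empty slot

p = threading.Thread(target=producer)
c = threading.Thread(target=consumer)
p.start(); c.start()
p.join(); c.join()
```

With one producer and one consumer on a circular buffer, items come out in the order they went in.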
Dining Philosophers Problem
The dining philosophers problem involves the
allocation of limited resources to a group of
processes in a deadlock-free and starvation-free
manner.
There are five philosophers sitting around a table,
with five chopsticks/forks kept beside them and a
bowl of rice in the centre. When a philosopher wants
to eat, he uses two chopsticks, one from his left and
one from his right. When a philosopher wants to
think, he puts both chopsticks back in their original
place.
What is the Problem Statement?
Consider there are five philosophers sitting around a
circular dining table. The dining table has five
chopsticks and a bowl of rice in the middle.
At any instant, a philosopher is either eating or
thinking. When a philosopher wants to eat, he uses
two chopsticks - one from their left and one from
their right. When a philosopher wants to think, he
keeps down both chopsticks at their original place.
Here's the Solution
From the problem statement, it is clear that a
philosopher can think for an indefinite amount of
time. But when a philosopher starts eating, he has to
stop at some point of time. The philosopher is in an
endless cycle of thinking and eating.
The solution uses an array of five semaphores, stick[5],
one for each of the five chopsticks.
Here's the Solution
When a philosopher wants to eat the rice, he will wait for the
chopstick at his left and picks up that chopstick. Then he waits
for the right chopstick to be available, and then picks it too.
After eating, he puts both the chopsticks down.
But if all five philosophers are hungry simultaneously, and
each of them picks up one chopstick, a deadlock occurs
because each will wait forever for the other chopstick.
The possible solutions for this are:
A philosopher must be allowed to pick up the chopsticks only
if both the left and right chopsticks are available.
Allow only four philosophers to sit at the table. That way, if all
the four philosophers pick up four chopsticks, there will be
one chopstick left on the table. So, one philosopher can start
eating and eventually, two chopsticks will be available. In this
way, deadlocks can be avoided.
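A sketch using the stick[5] array of semaphores. Note that it avoids deadlock with a third standard technique, different from the two listed above: resource ordering, where every philosopher picks up the lower-numbered chopstick first, so a circular wait can never form (the meals counter is ours, for illustration):

```python
import threading

N = 5
stick = [threading.Semaphore(1) for _ in range(N)]  # one semaphore per chopstick
meals = [0] * N

def philosopher(i, rounds=10):
    left, right = i, (i + 1) % N
    # resource ordering: always acquire the lower-numbered chopstick first,
    # so a circular wait (and hence deadlock) cannot form
    first, second = min(left, right), max(left, right)
    for _ in range(rounds):
        stick[first].acquire()
        stick[second].acquire()
        meals[i] += 1                               # eat
        stick[second].release()
        stick[first].release()

threads = [threading.Thread(target=philosopher, args=(i,)) for i in range(N)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```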
The Readers Writers Problem
In this problem there are some
processes(called readers) that only read the shared
data, and never change it, and there are other
processes(called writers) who may change the data
in addition to reading, or instead of reading it.
There are various types of readers-writers problems,
most centered on the relative priorities of readers and
writers.
The Problem Statement
There is a shared resource which should be accessed
by multiple processes. There are two types of
processes in this context. They
are reader and writer. Any number of readers can
read from the shared resource simultaneously, but
only one writer can write to the shared resource.
When a writer is writing data to the resource, no
other process can access the resource.
A writer cannot write to the resource while a nonzero
number of readers is accessing the resource.
The Solution
From the above problem statement, it is evident that
readers have higher priority than writers: if a writer wants
to write to the resource, it must wait until there are no
readers currently accessing that resource.
Here, we use one mutex m and a semaphore w. An
integer variable read_count is used to maintain the
number of readers currently accessing the resource;
read_count is initialized to 0. Both m and w are initially
given the value 1.
Instead of having each reader acquire a lock on the
shared resource itself, we use the mutex m to make a
process acquire and release a lock whenever it updates
the read_count variable.
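The scheme above, with mutex m, semaphore w and read_count, can be sketched in Python (the shared resource is a simple counter and the reader/writer counts are arbitrary illustrations):

```python
import threading

m = threading.Semaphore(1)       # mutex guarding read_count
w = threading.Semaphore(1)       # held by writers, and by the group of readers
read_count = 0
shared = {"value": 0}            # the shared resource
seen = []

def reader():
    global read_count
    m.acquire()
    read_count += 1
    if read_count == 1:
        w.acquire()              # first reader locks writers out
    m.release()
    seen.append(shared["value"]) # read the shared resource
    m.acquire()
    read_count -= 1
    if read_count == 0:
        w.release()              # last reader lets writers back in
    m.release()

def writer():
    w.acquire()                  # exclusive access to the resource
    shared["value"] += 1
    w.release()

threads = [threading.Thread(target=writer) for _ in range(3)]
threads += [threading.Thread(target=reader) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

Any number of readers may hold the resource together (only the first acquires w), while each writer needs w exclusively.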
Sleeping Barber problem
Problem: The analogy is based upon a hypothetical barber
shop with one barber. The shop has one barber, one barber
chair, and n chairs where waiting customers, if any, can sit.
• If there is no customer, then the barber sleeps in his
own chair.
• When a customer arrives, he must wake up the barber.
• If there are many customers and the barber is cutting
a customer’s hair, then the remaining customers either
wait if there are empty chairs in the waiting room or they
leave if no chairs are empty.
Solution:
The solution to this problem uses three semaphores. The first, customers,
counts the number of customers present in the waiting room (the customer in
the barber chair is not included because he is not waiting). The second,
barber (0 or 1), tells whether the barber is idle or working. The third, a
mutex, provides the mutual exclusion required for the processes to execute
safely.
The solution also keeps a count of the customers waiting in the waiting
room; if that number equals the number of chairs, the arriving customer
leaves the barbershop.
When the barber shows up in the morning, he executes the barber procedure,
blocking on the semaphore customers because it is initially 0. The barber
then goes to sleep until the first customer arrives.
When a customer arrives, he executes the customer procedure: the customer
acquires the mutex to enter the critical region, so if another customer
enters right after, the second one cannot do anything until the first has
released the mutex. The customer then checks the chairs in the waiting
room: if the waiting customers are fewer than the number of chairs he sits
down, otherwise he leaves and releases the mutex.
If a chair is available, the customer sits in the waiting room, increments
the waiting count and signals the customers semaphore, which wakes up the
barber if he is sleeping.
At this point, customer and barber are both awake and the barber is ready
to give that person a haircut. When the haircut is over, the customer exits
the procedure, and if there are no customers in the waiting room the barber
sleeps.
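The whole solution can be sketched with the three semaphores named above (customers, barber, and a mutex guarding the waiting count). To keep the sketch deterministic it uses as many chairs as customers, so no one is turned away; the haircuts counter is ours, for illustration:

```python
import threading

N_CUSTOMERS = 5
CHAIRS = 5                            # enough chairs so nobody is turned away
customers = threading.Semaphore(0)    # counts customers waiting
barber = threading.Semaphore(0)       # 0 or 1: the barber is ready to cut
mutex = threading.Semaphore(1)        # guards the `waiting` counter
waiting = 0
haircuts = 0

def barber_proc():
    global waiting, haircuts
    for _ in range(N_CUSTOMERS):
        customers.acquire()           # sleep until a customer arrives
        mutex.acquire()
        waiting -= 1                  # take one customer out of the room
        mutex.release()
        barber.release()              # signal: ready to give a haircut
        haircuts += 1                 # cut hair (only this thread writes it)

def customer_proc():
    global waiting
    mutex.acquire()
    if waiting < CHAIRS:
        waiting += 1                  # take a chair in the waiting room
        mutex.release()
        customers.release()           # wake the barber if he is sleeping
        barber.acquire()              # wait until the barber is ready
    else:
        mutex.release()               # no free chair: leave the shop

threads = [threading.Thread(target=barber_proc)]
threads += [threading.Thread(target=customer_proc) for _ in range(N_CUSTOMERS)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

With fewer chairs than simultaneous arrivals, some customer threads would take the else branch and leave, exactly as the text describes.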
CCS355 Neural Networks & Deep Learning Unit 1 PDF notes with Question bank .pdfAsst.prof M.Gokilavani
 
Piping Basic stress analysis by engineering
Piping Basic stress analysis by engineeringPiping Basic stress analysis by engineering
Piping Basic stress analysis by engineeringJuanCarlosMorales19600
 
Architect Hassan Khalil Portfolio for 2024
Architect Hassan Khalil Portfolio for 2024Architect Hassan Khalil Portfolio for 2024
Architect Hassan Khalil Portfolio for 2024hassan khalil
 
Concrete Mix Design - IS 10262-2019 - .pptx
Concrete Mix Design - IS 10262-2019 - .pptxConcrete Mix Design - IS 10262-2019 - .pptx
Concrete Mix Design - IS 10262-2019 - .pptxKartikeyaDwivedi3
 
Solving The Right Triangles PowerPoint 2.ppt
Solving The Right Triangles PowerPoint 2.pptSolving The Right Triangles PowerPoint 2.ppt
Solving The Right Triangles PowerPoint 2.pptJasonTagapanGulla
 
Correctly Loading Incremental Data at Scale
Correctly Loading Incremental Data at ScaleCorrectly Loading Incremental Data at Scale
Correctly Loading Incremental Data at ScaleAlluxio, Inc.
 

Recently uploaded (20)

complete construction, environmental and economics information of biomass com...
complete construction, environmental and economics information of biomass com...complete construction, environmental and economics information of biomass com...
complete construction, environmental and economics information of biomass com...
 
welding defects observed during the welding
welding defects observed during the weldingwelding defects observed during the welding
welding defects observed during the welding
 
Call Us ≽ 8377877756 ≼ Call Girls In Shastri Nagar (Delhi)
Call Us ≽ 8377877756 ≼ Call Girls In Shastri Nagar (Delhi)Call Us ≽ 8377877756 ≼ Call Girls In Shastri Nagar (Delhi)
Call Us ≽ 8377877756 ≼ Call Girls In Shastri Nagar (Delhi)
 
Class 1 | NFPA 72 | Overview Fire Alarm System
Class 1 | NFPA 72 | Overview Fire Alarm SystemClass 1 | NFPA 72 | Overview Fire Alarm System
Class 1 | NFPA 72 | Overview Fire Alarm System
 
Why does (not) Kafka need fsync: Eliminating tail latency spikes caused by fsync
Why does (not) Kafka need fsync: Eliminating tail latency spikes caused by fsyncWhy does (not) Kafka need fsync: Eliminating tail latency spikes caused by fsync
Why does (not) Kafka need fsync: Eliminating tail latency spikes caused by fsync
 
Indian Dairy Industry Present Status and.ppt
Indian Dairy Industry Present Status and.pptIndian Dairy Industry Present Status and.ppt
Indian Dairy Industry Present Status and.ppt
 
Risk Assessment For Installation of Drainage Pipes.pdf
Risk Assessment For Installation of Drainage Pipes.pdfRisk Assessment For Installation of Drainage Pipes.pdf
Risk Assessment For Installation of Drainage Pipes.pdf
 
Design and analysis of solar grass cutter.pdf
Design and analysis of solar grass cutter.pdfDesign and analysis of solar grass cutter.pdf
Design and analysis of solar grass cutter.pdf
 
lifi-technology with integration of IOT.pptx
lifi-technology with integration of IOT.pptxlifi-technology with integration of IOT.pptx
lifi-technology with integration of IOT.pptx
 
Gurgaon ✡️9711147426✨Call In girls Gurgaon Sector 51 escort service
Gurgaon ✡️9711147426✨Call In girls Gurgaon Sector 51 escort serviceGurgaon ✡️9711147426✨Call In girls Gurgaon Sector 51 escort service
Gurgaon ✡️9711147426✨Call In girls Gurgaon Sector 51 escort service
 
Application of Residue Theorem to evaluate real integrations.pptx
Application of Residue Theorem to evaluate real integrations.pptxApplication of Residue Theorem to evaluate real integrations.pptx
Application of Residue Theorem to evaluate real integrations.pptx
 
Instrumentation, measurement and control of bio process parameters ( Temperat...
Instrumentation, measurement and control of bio process parameters ( Temperat...Instrumentation, measurement and control of bio process parameters ( Temperat...
Instrumentation, measurement and control of bio process parameters ( Temperat...
 
Work Experience-Dalton Park.pptxfvvvvvvv
Work Experience-Dalton Park.pptxfvvvvvvvWork Experience-Dalton Park.pptxfvvvvvvv
Work Experience-Dalton Park.pptxfvvvvvvv
 
Software and Systems Engineering Standards: Verification and Validation of Sy...
Software and Systems Engineering Standards: Verification and Validation of Sy...Software and Systems Engineering Standards: Verification and Validation of Sy...
Software and Systems Engineering Standards: Verification and Validation of Sy...
 
CCS355 Neural Networks & Deep Learning Unit 1 PDF notes with Question bank .pdf
CCS355 Neural Networks & Deep Learning Unit 1 PDF notes with Question bank .pdfCCS355 Neural Networks & Deep Learning Unit 1 PDF notes with Question bank .pdf
CCS355 Neural Networks & Deep Learning Unit 1 PDF notes with Question bank .pdf
 
Piping Basic stress analysis by engineering
Piping Basic stress analysis by engineeringPiping Basic stress analysis by engineering
Piping Basic stress analysis by engineering
 
Architect Hassan Khalil Portfolio for 2024
Architect Hassan Khalil Portfolio for 2024Architect Hassan Khalil Portfolio for 2024
Architect Hassan Khalil Portfolio for 2024
 
Concrete Mix Design - IS 10262-2019 - .pptx
Concrete Mix Design - IS 10262-2019 - .pptxConcrete Mix Design - IS 10262-2019 - .pptx
Concrete Mix Design - IS 10262-2019 - .pptx
 
Solving The Right Triangles PowerPoint 2.ppt
Solving The Right Triangles PowerPoint 2.pptSolving The Right Triangles PowerPoint 2.ppt
Solving The Right Triangles PowerPoint 2.ppt
 
Correctly Loading Incremental Data at Scale
Correctly Loading Incremental Data at ScaleCorrectly Loading Incremental Data at Scale
Correctly Loading Incremental Data at Scale
 

Unit 2 part 2(Process)

  • 1. Unit 2 – Part 2: Process
  • 2. Reasons for needing cooperating processes Information Sharing: Sharing of information between multiple processes can be accomplished using cooperating processes. This may include access to the same files. A mechanism is required so that the processes can access the files in parallel. Modularity: Modularity involves dividing complicated tasks into smaller subtasks. These subtasks can be completed by different cooperating processes, leading to faster and more efficient completion of the required tasks. Computation speed-up: Subtasks of a single task can be performed in parallel using cooperating processes. This increases the computation speed-up, as the task can be executed faster. Convenience: There are many tasks that a user needs to do, such as compiling, printing, and editing. It is convenient if these tasks can be managed by cooperating processes.
  • 3. Problems of a Cooperative System Possible to have deadlock – Each process waits for a message from the other process. Possible to have starvation – Two processes keep sending messages to each other while a third process waits indefinitely for a message. Possible to damage data – A cooperative system may damage data, for example through errors introduced by modularity. Information sharing – In a cooperative system, information may be shared without the user's knowledge. – It may also expose personal or sensitive data that the user does not want to share with others. Data may be hacked – Office data, e.g. a bank's client information, can be hacked through a cooperative system in which information is exposed to other systems. – Unauthorized money transfers from one account to another may also become possible.
  • 5. Process Queues Ready queue ▪ one of the many queues that a process may be added to ▪ CPU scheduling schedules from the ready queue. Job queue ▪ set of all processes started in the system waiting for memory Device queues ▪ set of processes waiting for an I/O device ▪ A process waits in such a queue until its I/O finishes or the awaited event occurs ▪ Processes migrate among the various queues
  • 6. Two-State Process Model The two-state process model refers to the running and not-running states, described below. Running: When a new process is created, it enters the system in the running state. Not Running: Processes that are not running are kept in a queue, waiting for their turn to execute. Each entry in the queue is a pointer to a particular process, and the queue is implemented using a linked list. The dispatcher works as follows: when a process is interrupted, it is transferred to the waiting queue; if the process has completed or aborted, it is discarded. In either case, the dispatcher then selects a process from the queue to execute.
  • 7. Process Scheduling • In a multiprogramming or time-sharing system, there may be multiple processes ready to execute. • We need to select one of them and give the CPU to it. • This is the scheduling decision. • There are various criteria that can be used in the scheduling decision. • The scheduling mechanism (dispatcher) then assigns the selected process to the CPU and starts its execution. Select (Scheduling Algorithm) Dispatch (mechanism)
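The select/dispatch split above can be sketched as a toy in Python. This is illustrative only; the process names and the FCFS policy are assumptions, not something the slides specify:

```python
from collections import deque

# Toy model: the scheduling decision (which process next?) versus the
# dispatch mechanism (hand the CPU to it). A "process" is just a name.
ready_queue = deque(["P1", "P2", "P3"])
executed = []

def schedule():
    """Scheduling algorithm: FCFS -- take the head of the ready queue."""
    return ready_queue.popleft() if ready_queue else None

def dispatch():
    """Dispatcher: assign the selected process to the CPU."""
    proc = schedule()
    if proc is not None:
        executed.append(proc)  # stand-in for actually running the process
    return proc

while dispatch():              # keep dispatching until the queue is empty
    pass
```

Replacing `schedule()` with a different policy (e.g. shortest-job-first) changes the decision without touching the dispatch mechanism, which is exactly the separation the slide describes.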
  • 8. Schedulers Schedulers are special system software which handle process scheduling in various ways. Their main task is to select the jobs to be submitted into the system and to decide which process to run. Schedulers are of three types: • Long-Term Scheduler • Short-Term Scheduler • Medium-Term Scheduler Long-term scheduler (or job scheduler) – selects which processes should be brought into the ready queue Short-term scheduler (or CPU scheduler) – selects which process should be executed next and allocates the CPU (Diagram: job queue → long-term scheduler → ready queue in main memory → short-term scheduler → CPU)
  • 9. Schedulers • Short-term scheduler is invoked very frequently (milliseconds) ⇒ must be fast • Long-term scheduler is invoked very infrequently (seconds, minutes) ⇒ may be slow
  • 10. Addition of Medium-Term Scheduling (Diagram: the medium-term scheduler swapping processes in and out around the short-term (CPU) scheduler)
  • 11. Representation of Process Scheduling (Diagram: processes move between the ready queue, the CPU scheduler, and the I/O queue)
  • 12. Process Behavior • Processes can be described as either: • I/O-bound process – spends more time doing I/O than computations, many short CPU bursts • CPU-bound process – spends more time doing computations; few very long CPU bursts • CPU burst: the execution of the program in CPU between two I/O requests. We may have a short or long CPU burst.
  • 14. Process Creation A parent process creates child processes, which in turn create other processes, forming a tree of processes. Generally, a process is identified and managed via a process identifier (pid). • Resource sharing alternatives: • Parent and children share all resources • Children share a subset of the parent's resources • Parent and child share no resources • Execution alternatives: • Parent and children execute concurrently • Parent waits until children terminate (Diagram: a tree of processes)
  • 15. Process Termination Process executes last statement and asks the operating system to delete it (can use exit system call) Process resources are deallocated by operating system Parent may terminate execution of children processes (abort) • Child has exceeded allocated resources • Task assigned to child is no longer required • If parent is exiting • Some operating systems do not allow child to continue if its parent terminates • All children terminated - cascading termination
  • 16. What is Thread A thread is a path of execution within a process. A process can contain multiple threads. A thread is the basic unit of CPU utilization. A thread has its own • Thread ID • Program counter • Registers • Stack A thread is a flow of execution through the process code, with its own program counter that keeps track of which instruction to execute next, system registers which hold its current working variables, and a stack which contains its execution history. A thread shares with its peer threads information such as the code segment, data segment, and open files. When one thread alters a code-segment memory item, all other threads see the change.
  • 17. What is Thread A thread is also called a lightweight process. Threads provide a way to improve application performance through parallelism, and represent a software approach to improving operating system performance by reducing overhead. A process with a single thread is equivalent to a classical process. Each thread belongs to exactly one process, and no thread can exist outside a process. Each thread represents a separate flow of control. Threads have been used successfully in implementing network servers and web servers. They also provide a suitable foundation for the parallel execution of applications on shared-memory multiprocessors. The following figure shows the working of a single-threaded and a multithreaded process.
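The claim that peer threads share the process's data while keeping their own stack and program counter can be demonstrated with a small sketch (variable names are illustrative):

```python
import threading

# All threads in a process share the same data segment: four threads
# update one shared counter, each running on its own stack, with a lock
# serializing the updates.
counter = 0
lock = threading.Lock()

def worker(increments):
    global counter
    for _ in range(increments):
        with lock:           # mutual exclusion on the shared variable
            counter += 1

threads = [threading.Thread(target=worker, args=(1000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
# counter == 4000: every thread saw and modified the same memory
```

Without the lock, the four threads could interleave their read-modify-write steps and lose updates, which is the data-consistency problem the later synchronization slides address.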
  • 18. (Figure: a single-threaded process vs. a multithreaded process)
  • 19. Difference between Process and Thread 1. A process is heavyweight and resource intensive; a thread is lightweight, taking fewer resources than a process. 2. Process switching needs interaction with the operating system; thread switching does not. 3. In multiple processing environments, each process executes the same code but has its own memory and file resources; all threads can share the same set of open files and child processes. 4. If one process is blocked, no other process can execute until the first is unblocked; while one thread is blocked and waiting, a second thread in the same task can run. 5. Multiple processes without threads use more resources; multithreaded processes use fewer. 6. In multiple processes, each process operates independently of the others; one thread can read, write, or change another thread's data.
  • 20. Advantages of Thread • Threads minimize the context switching time. • Use of threads provides concurrency within a process. • Efficient communication. • It is more economical to create and context switch threads. • Threads allow utilization of multiprocessor architectures to a greater scale and efficiency.
  • 21. Types of Thread Threads are implemented in following two ways − • User Level Threads − User managed threads. • Kernel Level Threads − Operating System managed threads acting on kernel, an operating system core.
  • 22. User Level Threads In this case, the thread management kernel is not aware of the existence of threads. The thread library contains code for creating and destroying threads, for passing message and data between threads, for scheduling thread execution and for saving and restoring thread contexts. The application starts with a single thread.
  • 23. User Level Threads Advantages • Thread switching does not require Kernel mode privileges. • User level thread can run on any operating system. • Scheduling can be application specific in the user level thread. • User level threads are fast to create and manage. Disadvantages • There is a lack of coordination between threads and operating system kernel. • Multithreaded application cannot take advantage of multiprocessing.
  • 24. Kernel Level Threads In this case, thread management is done by the kernel. There is no thread management code in the application area. Kernel threads are supported directly by the operating system. Any application can be programmed to be multithreaded. All of the threads within an application are supported within a single process. The kernel maintains context information for the process as a whole and for individual threads within the process. Scheduling by the kernel is done on a thread basis. The kernel performs thread creation, scheduling, and management in kernel space. Kernel threads are generally slower to create and manage than user threads.
  • 25. Kernel Level Threads Advantages • The kernel can simultaneously schedule multiple threads from the same process on multiple processors. • If one thread in a process is blocked, the kernel can schedule another thread of the same process. • Kernel routines themselves can be multithreaded. Disadvantages • Kernel threads are generally slower to create and manage than user threads. • Transfer of control from one thread to another within the same process requires a mode switch to the kernel.
  • 26. Multithreading Multithreading is the ability of an operating system process to manage its use by more than one user at a time, and even to manage multiple requests by the same user, without having to run multiple copies of the program on the computer. Each user request for a program or system service is kept track of as a thread with a separate identity. As programs work on behalf of the initial request for that thread and are interrupted by other requests, the status of work on behalf of that thread is tracked until the work is completed. Multithreading is a technique that allows a program or a process to execute many tasks concurrently; it allows a process to run its tasks in parallel, even on a single-processor system. Multithreading is a specialized form of multitasking, and multitasking threads require less overhead than multitasking processes. A process consists of the memory space allocated by the operating system and can contain one or more threads. A thread cannot exist on its own; it must be part of a process. A process remains running until all of its threads are done executing. Multithreading enables you to write very efficient programs that make maximum use of the CPU, because idle time can be kept to a minimum.
  • 27. Process Synchronization Process synchronization means sharing system resources between processes in such a way that concurrent access to shared data is handled, thereby minimizing the chance of inconsistent data. Maintaining data consistency demands mechanisms to ensure the synchronized execution of cooperating processes. Process synchronization was introduced to handle problems that arose while multiple processes were executing.
  • 28. Synchronization • How does the sender/receiver behave if it cannot send/receive the message immediately? • It depends on whether blocking or non-blocking communication is used • Blocking is considered synchronous • Sender blocks until the receiver or kernel receives the message • Receiver blocks until a message is available • Non-blocking is considered asynchronous • Sender sends the message if it can, or tries again later, but always returns immediately • Receiver receives a valid message or null, but always returns immediately
  • 29. Buffering For these and many other reasons, operating systems need buffers – temporary memory locations they can use. For example, imagine two different processes: it can be tricky to transfer data between them because they may be in different states at a given time. Say process A is sending a bitmap to the printer driver so that it can send it to the printer, but the driver is busy printing another page at that moment. Until the driver is ready, the OS stores the data in a buffer. Exact behavior also depends on the available buffer: the queue of messages attached to the link is implemented in one of three ways. 1. Zero capacity – 0 messages; the sender must wait for the receiver 2. Bounded capacity – finite length of n messages; the sender must wait if the link is full 3. Unbounded capacity – infinite length; the sender never waits
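The bounded-capacity case, plus the blocking/non-blocking distinction from the previous slide, can be approximated with Python's `queue.Queue` (an unbounded link is simply `queue.Queue()` with no `maxsize`; the message strings are illustrative):

```python
import queue

# Bounded capacity: a link that holds at most 2 messages. A non-blocking
# send on a full link fails immediately; a blocking send would wait.
link = queue.Queue(maxsize=2)
link.put("msg1")
link.put("msg2")

try:
    link.put_nowait("msg3")      # non-blocking send while the link is full
    overflowed = False
except queue.Full:
    overflowed = True            # a blocking sender would wait here instead

first = link.get_nowait()        # non-blocking receive: message or error
```

Zero capacity has no direct `queue` equivalent; it corresponds to a rendezvous where `put` cannot complete until a receiver is already waiting.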
  • 30. Classical IPC/Synchronization problems • Producer Consumer (bounded buffer) • Dining philosophers • Readers and writers • Sleeping barber
  • 31. Bounded Buffer Problem This problem is generalized as the Producer-Consumer problem, where a finite buffer pool is used to exchange messages between producer and consumer processes. • Because the buffer pool has a maximum size, this problem is often called the bounded-buffer problem. • The solution is to create two counting semaphores, "full" and "empty", to keep track of the current number of full and empty buffers respectively.
  • 32. What is the Problem Statement? There is a buffer of n slots and each slot is capable of storing one unit of data. There are two processes running, namely, producer and consumer, which are operating on the buffer. A producer tries to insert data into an empty slot of the buffer. A consumer tries to remove data from a filled slot in the buffer. As you might have guessed by now, those two processes won't produce the expected output if they are being executed concurrently. There needs to be a way to make the producer and consumer work in an independent manner.
  • 33. Here's a Solution One solution of this problem is to use semaphores. The semaphores which will be used here are: m, a binary semaphore which is used to acquire and release the lock. empty, a counting semaphore whose initial value is the number of slots in the buffer, since, initially all slots are empty. full, a counting semaphore whose initial value is 0. At any instant, the current value of empty represents the number of empty slots in the buffer and full represents the number of occupied slots in the buffer.
  • 34. Here's a Solution • The producer first waits until there is at least one empty slot. • It decrements the empty semaphore because there will now be one less empty slot, since the producer is going to insert data into one of those slots. • Then it acquires the lock on the buffer, so that the consumer cannot access the buffer until the producer completes its operation. • After performing the insert operation, the lock is released and the value of full is incremented, because the producer has just filled a slot in the buffer. • The consumer waits until there is at least one full slot in the buffer. • It decrements the full semaphore because the number of occupied slots will decrease by one after the consumer completes its operation. • After that, the consumer acquires the lock on the buffer. • Following that, the consumer completes the removal operation, so the data from one of the full slots is removed. • Then the consumer releases the lock. • Finally, the empty semaphore is incremented by 1, because the consumer has just removed data from an occupied slot, making it empty.
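The steps above, with the semaphores m, empty, and full, can be sketched as follows. This is an illustrative version, assuming a Python list as the buffer and one producer/consumer pair:

```python
import threading

N = 5                                  # number of buffer slots
buffer = []
m = threading.Lock()                   # binary semaphore guarding the buffer
empty = threading.Semaphore(N)         # counts empty slots, initially N
full = threading.Semaphore(0)          # counts occupied slots, initially 0
consumed = []

def producer(items):
    for item in items:
        empty.acquire()                # wait for at least one empty slot
        with m:                        # lock the buffer against the consumer
            buffer.append(item)
        full.release()                 # one more occupied slot

def consumer(count):
    for _ in range(count):
        full.acquire()                 # wait for at least one full slot
        with m:                        # lock the buffer against the producer
            consumed.append(buffer.pop(0))
        empty.release()                # one more empty slot

p = threading.Thread(target=producer, args=(list(range(10)),))
c = threading.Thread(target=consumer, args=(10,))
p.start(); c.start()
p.join(); c.join()
```

Note the ordering: each side waits on its counting semaphore *before* taking the mutex; acquiring them in the opposite order can deadlock (the producer holding m while blocked on empty would stop the consumer from ever releasing a slot).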
  • 35. Dining Philosophers Problem The dining philosophers problem involves the allocation of limited resources to a group of processes in a deadlock-free and starvation-free manner. Five philosophers sit around a table with five chopsticks/forks placed beside them and a bowl of rice in the centre. When a philosopher wants to eat, he uses two chopsticks – one from his left and one from his right. When a philosopher wants to think, he puts both chopsticks down at their original place.
  • 36. What is the Problem Statement? Consider there are five philosophers sitting around a circular dining table. The dining table has five chopsticks and a bowl of rice in the middle. At any instant, a philosopher is either eating or thinking. When a philosopher wants to eat, he uses two chopsticks - one from their left and one from their right. When a philosopher wants to think, he keeps down both chopsticks at their original place.
  • 37. Here's the Solution From the problem statement, it is clear that a philosopher can think for an indefinite amount of time, but when a philosopher starts eating, he has to stop at some point. The philosopher is in an endless cycle of thinking and eating. An array of five semaphores, stick[5], is used – one for each of the five chopsticks.
  • 38. Here's the Solution When a philosopher wants to eat the rice, he waits for the chopstick on his left and picks it up. Then he waits for the right chopstick to be available and picks it up too. After eating, he puts both chopsticks down. But if all five philosophers are hungry simultaneously and each of them picks up one chopstick, a deadlock occurs, because each will wait forever for the other chopstick. The possible solutions are: A philosopher may pick up the chopsticks only if both the left and the right chopstick are available. Allow only four philosophers to sit at the table. That way, if all four philosophers pick up four chopsticks, one chopstick remains on the table, so one philosopher can start eating and eventually two chopsticks become available. In this way, deadlock is avoided.
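The "allow only four philosophers" solution can be sketched with a room semaphore initialized to four alongside the stick[5] array. Illustrative only; the round count is an assumption:

```python
import threading

N = 5
stick = [threading.Lock() for _ in range(N)]   # the stick[5] array
room = threading.Semaphore(N - 1)              # at most four philosophers seated
meals = [0] * N

def philosopher(i, rounds):
    for _ in range(rounds):
        room.acquire()                   # sit down only if a seat is free
        stick[i].acquire()               # pick up the left chopstick
        stick[(i + 1) % N].acquire()     # pick up the right chopstick
        meals[i] += 1                    # eat
        stick[(i + 1) % N].release()     # put both chopsticks down
        stick[i].release()
        room.release()                   # leave the table and think

threads = [threading.Thread(target=philosopher, args=(i, 20)) for i in range(N)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

With at most four philosophers contending for five chopsticks, at least one can always acquire both of his chopsticks, so the circular wait that causes the deadlock cannot form and every thread finishes its rounds.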
  • 39. The Readers Writers Problem In this problem there are some processes (called readers) that only read the shared data and never change it, and other processes (called writers) that may change the data in addition to, or instead of, reading it. There are various types of readers-writers problems, most centered on the relative priorities of readers and writers.
  • 40. The Problem Statement There is a shared resource which should be accessed by multiple processes. There are two types of processes in this context: readers and writers. Any number of readers can read from the shared resource simultaneously, but only one writer can write to it at a time. When a writer is writing data to the resource, no other process can access the resource. A writer cannot write to the resource if a nonzero number of readers is accessing it at that time.
  • 41. The Solution From the above problem statement, it is evident that readers have higher priority than writers. If a writer wants to write to the resource, it must wait until there are no readers currently accessing it. Here, we use one mutex m and a semaphore w. An integer variable read_count maintains the number of readers currently accessing the resource; it is initialized to 0. Both m and w are initially given the value 1. Instead of having each reader acquire a lock on the shared resource itself, we use the mutex m to acquire and release a lock whenever the read_count variable is updated.
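The m/w/read_count scheme above can be sketched as follows; the shared dict and the demo values are assumptions made for illustration:

```python
import threading

m = threading.Lock()             # protects read_count
w = threading.Semaphore(1)       # held by the writer, or by the first reader
read_count = 0                   # number of readers currently reading
shared = {"value": 0}            # stand-in for the shared resource
seen = []

def reader():
    global read_count
    with m:
        read_count += 1
        if read_count == 1:      # first reader locks writers out
            w.acquire()
    seen.append(shared["value"]) # any number of readers may read here
    with m:
        read_count -= 1
        if read_count == 0:      # last reader lets writers back in
            w.release()

def writer(value):
    w.acquire()                  # exclusive: no readers, no other writers
    shared["value"] = value
    w.release()

writer(42)
t = threading.Thread(target=reader)
t.start()
t.join()
# seen == [42]: the reader observed the writer's update
```

Because only the first and last readers touch w, concurrent readers never block each other; the price, as the slide notes, is that a steady stream of readers can starve a waiting writer.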
  • 42. Sleeping Barber Problem Problem: The analogy is based upon a hypothetical barber shop with one barber. The shop has one barber, one barber chair, and n chairs in the waiting room for customers to sit on. • If there is no customer, the barber sleeps in his own chair. • When a customer arrives, he must wake the barber up. • If there are many customers and the barber is cutting a customer's hair, the remaining customers either wait, if there are empty chairs in the waiting room, or leave if no chairs are empty.
  • 43. Solution: The solution uses three semaphores. The first, customers, counts the number of customers in the waiting room (the customer in the barber chair is not included, because he is not waiting). The second, barber (0 or 1), tells whether the barber is idle or working. The third, a mutex, provides the mutual exclusion required around the shared state. The solution also keeps a count of the customers waiting in the waiting room; if the number of waiting customers equals the number of chairs, an arriving customer leaves the barbershop. When the barber shows up in the morning, he executes the barber procedure, blocking on the semaphore customers because it is initially 0; the barber then sleeps until the first customer arrives. When a customer arrives, he executes the customer procedure and acquires the mutex to enter the critical region; if another customer enters just afterwards, the second one cannot do anything until the first has released the mutex. The customer then checks the chairs in the waiting room: if the number of waiting customers is less than the number of chairs, he sits down; otherwise he leaves and releases the mutex. If a chair is available, the customer sits in the waiting room, increments the waiting count, and signals the customers semaphore, which wakes the barber if he is sleeping. At this point both the customer and the barber are awake, and the barber gives that person a haircut. When the haircut is over, the customer exits the procedure, and if there are no customers in the waiting room the barber sleeps.
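The three-semaphore solution can be sketched as below. To keep the run deterministic, the demo has customers arrive one at a time and the barber serve a fixed total; those are assumptions of the sketch, not part of the classic problem:

```python
import threading

CHAIRS = 3
customers = threading.Semaphore(0)   # waiting customers; the barber sleeps on it
barber = threading.Semaphore(0)      # the barber signals he is ready
mutex = threading.Lock()             # protects the `waiting` count
waiting = 0
haircuts = 0
turned_away = 0

def customer():
    global waiting, turned_away
    with mutex:
        if waiting == CHAIRS:        # no free chair: leave the shop
            turned_away += 1
            return
        waiting += 1                 # sit down in the waiting room
    customers.release()              # wake the barber if he is asleep
    barber.acquire()                 # wait until the barber is ready

def barber_loop(total):
    global waiting, haircuts
    for _ in range(total):
        customers.acquire()          # sleep until a customer arrives
        with mutex:
            waiting -= 1
        barber.release()             # invite the next customer to the chair
        haircuts += 1                # cut hair

b = threading.Thread(target=barber_loop, args=(4,))
b.start()
for _ in range(4):                   # customers arrive one at a time
    c = threading.Thread(target=customer)
    c.start()
    c.join()
b.join()
```

With concurrent arrivals, the mutex-guarded check of `waiting` against `CHAIRS` is what makes an excess customer leave rather than block, matching the prose above.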