Process Control Block (PCB)
To implement the process model, the operating
system maintains a table (an array of structures)
called the process table, with one entry per
process. These entries are known as Process
Control Blocks.
The process table contains the information that the
operating system must know to manage and
control process switching, including the process
location and process attributes.
Process Control Block (PCB)
Various fields and information stored in PCB are
given as below
Process Id: Each process is given an Id number at
the time of creation.
Process state: The state may be ready,
running, or blocked.
Program counter: The counter indicates the
address of the next instruction to be executed
for this process.
Process Control Block (PCB)
CPU registers: Along with the program counter, this
state information must be saved when an interrupt
occurs, to allow the process to be continued correctly
afterward
CPU-scheduling information: This information
includes a process priority, pointers to scheduling
queues, and any other scheduling parameters.
Accounting information: This information includes
the amount of CPU and real time used, time limits,
account numbers, job or process numbers, and so on.
Status information: The information includes the list
of I/O devices allocated to this process, a list of open
files, and so on.
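The PCB fields listed above can be sketched as a data structure. This is an illustrative model only, assuming Python-style names; a real kernel stores a PCB as a C struct with many more fields.

```python
from dataclasses import dataclass, field

# Illustrative sketch of a PCB; the field names here are assumptions,
# chosen to mirror the fields listed on the slides.
@dataclass
class PCB:
    pid: int                       # process Id, assigned at creation
    state: str = "ready"           # ready / running / blocked
    program_counter: int = 0       # address of the next instruction
    registers: dict = field(default_factory=dict)    # saved CPU registers
    priority: int = 0              # CPU-scheduling information
    open_files: list = field(default_factory=list)   # status information
    cpu_time_used: float = 0.0     # accounting information

pcb = PCB(pid=42)                  # created in the "ready" state
```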
Thread
• A program has one or more loci of execution.
Each locus is called a thread of execution.
• In traditional operating systems, each process
has an address space and a single thread of
execution.
• It is the smallest unit of processing that can be
scheduled by an operating system.
• A thread is a single sequential stream of
execution within a process. Because threads have
some of the properties of processes, they are
sometimes called lightweight processes.
Thread Structure
• The thread has a program counter that keeps track
of which instruction to execute next.
• It has registers, which hold its current working
variables.
• It has a stack, which contains the execution
history, with one frame for each procedure called
but not yet returned from.
• What threads add to the process model is the
ability for multiple executions to take place in the
same process environment, to a large degree
independent of one another.
Thread Structure
• Having multiple threads running in parallel in one
process is similar to having multiple processes
running in parallel in one computer.
(a) Three processes each with one thread. (b) One process with three threads.
Thread Structure
In the former case (a), the processes share
physical memory, disks, printers, and other resources.
In the latter case (b), the threads share an address
space, open files, and other resources.
 In Fig. (a) we see three traditional processes.
Each process has its own address space and a
single thread of control.
 In contrast, in Fig. (b) we see a single process with
three threads of control
Thread Structure
• Although in both cases we have three threads, in
Fig. (a) each of them operates in a different
address space, whereas in Fig. (b) all three of
them share the same address space.
Multithreading and Multitasking
Multithreading
The ability of an operating system to execute
different parts of a program, called threads,
simultaneously is called multithreading.
The programmer must carefully design the
program in such a way that all the threads can
run at the same time without interfering with
each other.
Multithreading
On a single processor, multithreading generally
occurs by time-division multiplexing: the processor
switches between different threads.
This context switching generally happens so
quickly that the user perceives the threads or
tasks as running at the same time.
Multitasking
• The ability to execute more than one task at the
same time is called multitasking.
• In multitasking, only one CPU is involved, but it
switches from one program to another so quickly
that it gives the appearance of executing all of the
programs at the same time. There are two basic
types of multitasking.
Preemptive: In preemptive multitasking, the
operating system assigns CPU time slices to each
program.
Multitasking
Cooperative: In cooperative multitasking, each
program can control the CPU for as long as it needs
it. When a program is not using the CPU, it allows
another program to use it.
Similarities and dissimilarities between
process and thread.
Similarities
Like processes, threads share the CPU, and only one
thread is active (running) at a time.
Like processes, threads within a process execute
sequentially.
Like processes, threads can create children.
Like a traditional process, a thread can be in any one
of several states: running, blocked, ready, or
terminated.
Like processes, threads have a program counter, stack,
registers, and state.
Similarities and dissimilarities between
process and thread.
Dissimilarities
Unlike processes, threads are not independent of one
another: threads within the same process share an
address space.
Unlike processes, all threads can access every address
in the task.
Unlike processes, threads are designed to assist one
another.
Note that processes may or may not assist one
another, because processes may originate from
different users.
Thread Usage-Why do we need threads?
• E.g., a word processor has several parts, with
parts for
– Interacting with the user
– Formatting the page as soon as the changes are
made
– Timed savings (for auto recovery)
– Spelling and grammar checking
Thread Usage-Why do we need
threads?
1. Simplifying the programming model, since many
activities are going on at once.
2. Threads are easier to create and destroy than
processes, since they don't have any resources
attached to them.
3. Performance improves by overlapping activities
when there is a lot of I/O.
Thread Usage-Why do we need
threads?
4. Real parallelism is possible if there are multiple
CPUs
Note: implementation details are beyond the
scope of the course (distributed systems).
Advantages of Thread
Threads minimize the context switching time.
Use of threads provides concurrency within a
process.
Efficient communication.
 It is more economical to create and context
switch threads.
Threads allow utilization of multiprocessor
architectures to a greater scale and
efficiency.
Context Switch
 A context switch is the mechanism to store and
restore the state or context of a CPU in Process
Control block so that a process execution can be
resumed from the same point at a later time.
Using this technique, a context switcher enables
multiple processes to share a single CPU.
Context switching is an essential feature of a
multitasking operating system.
Context Switch
When the scheduler switches the CPU from
executing one process to executing another, the
state of the currently running process is stored
into its process control block.
 After this, the state for the process to run next is
loaded from its own PCB and used to set the PC,
registers, etc.
 At that point, the second process can start
executing
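The save-then-load sequence above can be modeled in a few lines. This is a toy sketch, not a real kernel routine: the "CPU" and the PCBs are plain dictionaries with illustrative field names.

```python
# Toy model of a context switch: save the running process's context
# into its PCB, then load the next process's saved context onto the CPU.
def context_switch(cpu, old_pcb, new_pcb):
    # 1. Save the state of the currently running process into its PCB.
    old_pcb["pc"] = cpu["pc"]
    old_pcb["regs"] = dict(cpu["regs"])
    old_pcb["state"] = "ready"
    # 2. Load the state of the next process from its own PCB.
    cpu["pc"] = new_pcb["pc"]
    cpu["regs"] = dict(new_pcb["regs"])
    new_pcb["state"] = "running"

cpu = {"pc": 100, "regs": {"r0": 7}}
p1 = {"pc": None, "regs": {}, "state": "running"}   # currently on the CPU
p2 = {"pc": 500, "regs": {"r0": 3}, "state": "ready"}
context_switch(cpu, p1, p2)   # p2 resumes exactly where it left off
```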
Context switching
 Switching the CPU to another process requires
saving the environment of the old process and
loading the saved environment of the new
process. This task is called context switching.
Context switch time, also called dispatch
latency, is pure overhead (time wasted in the
transition by the OS), and it depends on the
hardware (1 to 100 ms).
It is sometimes a performance bottleneck.
Interprocess communication
 Processes frequently need to communicate with
other processes.
 Processes may share a memory area or a file for
communication
 There are three issues related to IPC
1. How can one process pass information to another?
2. How can we make sure two or more processes do not
interfere with each other when engaged in critical
activities, e.g., grabbing the last 1 MB of memory?
Interprocess communication
3. How can we ensure proper sequencing of events when
dependencies exist; e.g., one process produces data
and another process consumes it?
These issues also apply to threads; the
first is easy for threads, since they share a
common address space.
Why interprocess communication?
As we said, processes must cooperate to accomplish
a task, and to do so they must communicate with
each other. Why? Because the output of one
process is often the input of another
process.
Why interprocess communication is important?
To learn each other's state and to
transfer data among themselves.
Why interprocess communication
When dependencies exist among processes,
ordering matters. For example, if process A
produces data and process B prints it, B
must wait until A has produced the
data before it starts printing.
Interprocess communication
To accomplish a task, processes must communicate at
certain points.
Example: P1-----------@-----------P2
If processes P1 and P2 both do their work using
shared variables in @, they must use those variables
sequentially: while P1 is using the variables in @,
P2 must wait until P1 finishes with them.
Interprocess communication
Why do P1 and P2 wait for each other?
Because they have a common shared
resource and depend on each other;
when one process is using the resource, the
other process must wait.
P1----------R----------P2
Interprocess communication
• As shown above, when P1 and P2 use shared
memory, files, or other shared resources, they must
take turns; to use such a resource, they must
coordinate with each other.
Interprocess communication
• Example: suppose two processes try to print.
Process P1 reaches the printer first and starts
printing, but process P2 arrives while P1 is still
printing. If the two do not wait for each other,
P2's output merges with P1's and each
overwrites the other's.
Race Conditions
When several processes access and manipulate
the same data concurrently, and the outcome of
the execution depends on the particular order in
which the accesses take place, we have a race
condition.
In the OS, processes that are working together
may share some common resources; for example,
one process may read some data while another needs
to write it.
Race Conditions
Race conditions
– Arises as a result of sharing some resources
– E.g. printer spooler
– When a process wants to print a file, it enters a
file name on a special spooler directory
Race Conditions
Another process, the printer daemon,
periodically checks to see if there are any files
to be printed; if there are, it prints them
and removes them from the directory.
Recall that a daemon is a process running in the
background, started automatically when
the system is booted.
Assume that the spooler directory has a large
number of slots, numbered 0, 1, 2, …, n, each
capable of holding a file name.
Race Conditions
(Figure: spooler directory with the shared variables in and out)
– out points to the next file to be printed
– in points to the next free slot in the directory
Race Conditions
Then the following may happen
– Process A reads in and stores the value 7 in a local
variable.
– A clock interrupt occurs and the CPU is switched to B.
– B also reads in and stores the value 7 in a local
variable.
– It stores the name of its file in slot 7 and updates in to
be 8.
– A runs again; it writes its file name in slot 7,
erasing the file name that B wrote.
– It updates in to be 8.
Race Conditions
The printer daemon will not notice anything
wrong, but B will never receive any output.
Situations like this, where two or more processes
are reading and writing some shared data and
the final result depends on who runs precisely
when, are called race conditions.
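The spooler interleaving described above can be replayed deterministically in code. This is a step-by-step replay (no real threads); the shared variable `in` is spelled `in_ptr` because `in` is a Python keyword.

```python
# Deterministic replay of the spooler race: both A and B read the
# shared variable in (= 7) before either updates it.
spooler = {}               # slot number -> file name
in_ptr = 7                 # next free slot in the spooler directory

a_slot = in_ptr            # A reads in and stores 7 in a local variable
# ... clock interrupt: the CPU is switched to B ...
b_slot = in_ptr            # B also reads 7 into its own local variable
spooler[b_slot] = "B's file"
in_ptr = b_slot + 1        # B updates in to 8
# ... A runs again ...
spooler[a_slot] = "A's file"   # writes slot 7, erasing B's file name
in_ptr = a_slot + 1        # A updates in to 8

# B's file name is gone: the daemon will only ever see A's file.
```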
Critical Regions
A critical region is a part of a program where
shared resources are accessed; it is also called a
critical section.
Race conditions arise in critical sections. How can
we prevent these race conditions?
The main way is to ensure that no more than one
process uses the shared resources, such as shared
memory or shared files, at the same time.
Avoiding Race Conditions
• To avoid race conditions we need Mutual
Exclusion.
• Mutual Exclusion is some way of making sure
that if one process is using a shared variable or
file, the other processes will be excluded from
doing the same thing.
• The difficulty above in the printer spooler occurred
because process B started using one of the shared
variables before process A was finished with it.
Critical Region
What is a critical region?
It is the part of a program where a shared resource
is accessed; it is also called a critical section, and
it is where race conditions occur. How do we
prevent those race conditions?
The main way is to ensure that no more than one
process uses the shared memory or shared files
at the same time: while P1 is using a resource,
P2 must wait, as enforced by the OS design.
Avoiding Race Conditions
 That part of the program where the shared
memory is accessed is called the critical region
or critical section
 If we could arrange matters such that no two
processes were ever in their critical regions at
the same time, we could avoid race conditions.
 Although this requirement avoids race conditions,
this is not sufficient for having parallel processes
cooperate correctly and efficiently using shared
data.
Avoiding Race Conditions
(Rules for avoiding race conditions) A good solution to
the critical-section problem satisfies four conditions:
1. No two processes may be simultaneously inside their
critical regions.
2. No assumptions may be made about speeds or the
number of CPUs.
3. No process running outside its critical region may
block other processes.
4. No process should have to wait forever to enter its
critical region.
Critical Regions
(Figure: mutual exclusion using critical regions, with
processes A and B entering and leaving their critical
regions between times T1 and T4)
Mutual exclusion using critical Regions
As seen in the figure above, process A enters its
critical region at time T1. A little later, at time T2,
process B attempts to enter its critical region;
since we allow only one process at a time, B is
temporarily suspended until time T3, when A
leaves its critical region, allowing B to enter
immediately.
In other words, what we need is mutual exclusion:
some way of making sure that if one process is
using a shared resource, the other processes are
kept out.
Mutual exclusion using critical Regions
Eventually B leaves at T4, and we are back to the
original situation with no process in its critical
region.
Mutual exclusion with busy waiting
What is busy waiting?
It means a process waits, doing nothing else, until
another process leaves the critical region.
There are two ways of achieving mutual exclusion:
1. Mutual exclusion with busy waiting
Disabling Interrupts
On a single-processor system, the simplest
solution is to have each process disable all
interrupts just after entering its critical region
and re-enable them just before leaving it.
Disabling Interrupts
• Example: if P1 disables interrupts and enters the
critical region, the CPU cannot be switched to P2,
so P2 cannot enter the critical region until P1
terminates its use of the critical region and
re-enables interrupts.
• This approach is generally unattractive, because it
is unwise to give user processes the power to turn
interrupts off: if a process forgets to turn them
back on, that could be the end of the
system.
Mutual exclusion with busy waiting
Lock Variable
Consider a software solution: a single shared
lock variable, initially zero (0). When a process
wants to enter its critical region, it first tests the
lock. If the lock is 0, the process sets it to 1 and
enters the critical region; if the lock is already 1,
the process just waits until it becomes 0. Thus a
lock of 0 means no process is in its critical region,
and a lock of 1 means some process is in its
critical region.
Lock Variable
The problem with this algorithm is that it contains
a fatal flaw: one process reads the lock
and sees that it is 0; before it can set the lock
to 1, another process is scheduled, runs, and sets the
lock to 1. Then the two processes are both in their
critical regions at the same time.
Mutual exclusion without busy waiting
Processes contend in critical regions because
they share common resources such as RAM, the
CPU, and files.
Solving this race problem with mutual exclusion
based on busy waiting has a drawback: it keeps the
CPU busy and wastes its time. To solve this
problem, other algorithms were developed.
Mutual exclusion without busy waiting
Sleep and Wakeup
Instead of busy waiting, a process that cannot
enter the critical region sleeps: it blocks, doing
no work, until the critical region is free.
Then, when the process using the resource leaves
the critical region, it wakes the sleeping process
by sending it a wakeup signal, telling it that the
resource is now free.
Sleep and Wakeup
Notice: what happens when the sleeping process does
not hear the wakeup signal?
Producer and consumer problem
When two processes share a common buffer (a
part of memory used to hold information) and the
size of that memory is limited, we get the
producer–consumer problem.
The producer–consumer problem involves two
processes: one process, the producer, puts
information into the buffer, and another process,
the consumer, takes the stored information out of
the buffer, freeing it.
Producer and consumer problem
• The producer is the process that puts data into the
buffer, and the consumer is the process that
frees the buffer by taking data out of it.
• But what happens if the producer fills the buffer
and goes to sleep until the buffer has space, the
consumer then empties the buffer and sends a
wakeup signal, and the producer never hears that
signal? The producer sleeps forever: this is the
lost-wakeup problem.
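A sketch of producer and consumer using Python's threading primitives. Re-checking the buffer state in a loop while holding the lock is the standard way to avoid the lost-wakeup problem described above; the buffer size and item count here are arbitrary choices.

```python
import collections
import threading

# Bounded-buffer producer-consumer with a condition variable: each side
# sleeps (cond.wait) when it cannot proceed and is woken by the other.
BUF_SIZE = 4
buffer = collections.deque()
cond = threading.Condition()
results = []

def producer():
    for item in range(8):
        with cond:
            while len(buffer) == BUF_SIZE:   # buffer full: go to sleep
                cond.wait()
            buffer.append(item)
            cond.notify_all()                # wake a sleeping consumer

def consumer():
    for _ in range(8):
        with cond:
            while not buffer:                # buffer empty: go to sleep
                cond.wait()
            results.append(buffer.popleft())
            cond.notify_all()                # wake a sleeping producer

t1 = threading.Thread(target=producer)
t2 = threading.Thread(target=consumer)
t1.start(); t2.start(); t1.join(); t2.join()
# every item is consumed exactly once, in order
```

Because the wait happens inside the same lock that protects the buffer, a wakeup cannot slip past a process between "check the buffer" and "go to sleep".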
Process Scheduling
What is a scheduler?
Many processes may compete for a single
resource, for example the CPU: there may be
millions of processes but only one CPU. Deciding
which process gets the resource is done by the
operating system.
 The process scheduling is the activity of the
process manager that handles the removal
of the running process from the CPU and the
selection of another process on the basis of a
particular strategy.
Process Scheduling
Process scheduling is an essential part of
multiprogramming operating systems. Such
operating systems allow more than one process to
be loaded into executable memory at a time,
and the loaded processes share the CPU using time
multiplexing.
Process Scheduling Queue
 The OS maintains all PCBs in Process
Scheduling Queues.
The OS maintains a separate queue for each of
the process states, and the PCBs of all processes in
the same execution state are placed in the same
queue.
When the state of a process is changed, its PCB is
unlinked from its current queue and moved to its
new state's queue.
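The unlink-and-relink step above can be modeled directly. This is a toy model with PCBs as plain dicts and only two state queues; real kernels use intrusive linked lists inside the PCB itself.

```python
import collections

# One queue per process state; changing a process's state unlinks its
# PCB from the current queue and links it into the new state's queue.
queues = {"ready": collections.deque(), "blocked": collections.deque()}

def change_state(queues, pcb, new_state):
    queues[pcb["state"]].remove(pcb)   # unlink from the current queue
    pcb["state"] = new_state
    queues[new_state].append(pcb)      # link into the new state's queue

p = {"pid": 1, "state": "ready"}
queues["ready"].append(p)
change_state(queues, p, "blocked")     # e.g. the process issued an I/O request
```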
Process Scheduling Queue
Ready queue - This queue keeps the set of all
processes residing in main memory that are ready
and waiting to execute. A new process is always
put in this queue. (A process is in this queue when
it is ready to move from RAM to the CPU.)
Device queues - The processes that are blocked
due to the unavailability of an I/O device constitute
this queue.
Process Scheduling Queues
The Operating System maintains the following
important process scheduling queues:
Job queue - This queue keeps all the processes in
the system. (A process is in this queue when it wants
to enter the system, moving from hard disk to RAM.)
Process scheduling
When a system is multiprogrammed, it
frequently has multiple processes competing for
the CPU at the same time.
If only one CPU is available, a choice has to be
made of which process to run next.
Multiprogramming - aims to increase
throughput
Time sharing - aims to allow all users to use the
CPU equally
Scheduling Queues
• As a process enters the system, or when a
running process is interrupted, it is put into
the ready queue.
• There are also device queues (waiting
queues); each device has its own
device queue.
• All are generally stored in a queue (linked
list), not necessarily a FIFO queue.
Scheduling levels
• Short-term (CPU scheduler)—selects from the jobs in memory those
jobs that are ready to execute and allocates the CPU to one of them.
It decides which process executes next; having decided, it calls the
dispatcher, which does the remaining work, including the context switch.
• Medium-term—used especially with time-sharing systems as an
intermediate scheduling level.
– A swapping scheme is implemented to remove partially run
programs from memory and reinstate them later to continue
where they left off. When RAM is full, the medium-term scheduler
swaps a process out to the backing store; when space becomes
available, it swaps the process back into main memory.
• Long-term (job scheduler)—determines which jobs are brought
into memory for processing.
Scheduling Algorithms
What are the most common algorithms??
1. FCFS
2. Round Robin
3. Shortest Job First
4. Shortest Remaining Job First
5. Priority Scheduling
Scheduling Algorithms
FCFS (First Come First Serve)
Selection criteria :
The process that requests first is served first. It
means that processes are served in the exact
order of their arrival.
Decision Mode :
 Non preemptive: Once a process is selected, it
runs until it is blocked for an I/O or some event,
or it is terminated.
Scheduling Algorithms
FCFS (First Come First Serve)
Implementation:
• This strategy can be easily implemented by using a
FIFO (First In, First Out) queue. When the CPU
becomes free, the process at the first position in the
queue is selected to run.
Example :
Consider the following set of four processes. Their arrival
time and time required to complete the execution are
given in following table. Consider all time values in
milliseconds.
FCFS (First Come First Serve)
Initially only process P0 is present, and it is allowed to run. But when P0 completes, all other
processes have arrived. So the next process, P1, is selected from the ready queue and allowed to run till
it completes. This procedure is repeated till all processes have completed their execution.
FCFS (First Come First Serve)
FCFS (First Come First Serve)
Advantages:
 Simple, fair, no starvation.
 Easy to understand, easy to implement.
Disadvantages :
 Not efficient. Average waiting time is too high.
 Convoy effect is possible. All small I/O bound processes
wait for one big CPU bound process to acquire CPU.
 CPU utilization may be less efficient especially when a
CPU bound process is running with many I/O bound
processes.
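The FCFS rule can be computed in a few lines. The slide's original table is not reproduced above, so the process set here (pid, arrival time, burst time) is hypothetical.

```python
# FCFS: serve processes strictly in arrival order, non-preemptively.
def fcfs(procs):
    procs = sorted(procs, key=lambda p: p[1])   # serve in arrival order
    clock, waits = 0, {}
    for pid, arrival, burst in procs:
        clock = max(clock, arrival)      # CPU may sit idle until arrival
        waits[pid] = clock - arrival     # time spent waiting in the queue
        clock += burst                   # non-preemptive: run to completion
    return waits

waits = fcfs([("P0", 0, 10), ("P1", 1, 6), ("P2", 3, 2), ("P3", 5, 4)])
# P0 waits 0, P1 waits 9, P2 waits 13, P3 waits 13: average 8.75 ms
```

Note how the short processes P2 and P3 wait behind the long P0: that is the convoy effect mentioned above.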
Scheduling Algorithms
Shortest Job First (SJF):
Selection Criteria :
The process that requires the shortest time to
complete execution is served first.
Decision Mode :
 Non preemptive: Once a process is selected, it
runs until either it is blocked for an I/O or some
event, or it is terminated.
Implementation :
Shortest Job First (SJF):
• This strategy can be implemented by using sorted FIFO
queue.
• All processes in a queue are sorted in ascending order
based on their required CPU bursts. When CPU
becomes free, a process from the first position in a
queue is selected to run.
Example :
• Consider the following set of four processes. Their
arrival time and time required to complete the execution
are given in following table. Consider all time values in
milliseconds.
Shortest Job First (SJF):
Shortest Job First (SJF):
• Initially only process P0 is present and it is
allowed to run. But, when P0 completes, all
other processes are present.
• So, process with shortest CPU burst P2 is selected
and allowed to run till it completes.
• Whenever more than one process is available,
such type of decision is taken.
• This procedure is repeated till all processes
complete their execution.
Shortest Job First (SJF):
Shortest Job First (SJF):
Advantages:
 Less waiting time.
 Good response for short processes.
Disadvantages :
 It is difficult to estimate time required to
complete execution.
 Starvation is possible for long process. Long
process may wait forever.
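Non-preemptive SJF can be sketched over the same hypothetical process set used for FCFS (the slide's table is not reproduced): whenever the CPU becomes free, the arrived process with the shortest burst runs to completion.

```python
# Non-preemptive SJF: pick the shortest arrived job when the CPU frees.
def sjf(procs):
    remaining = list(procs)              # (pid, arrival_time, burst_time)
    clock, waits = 0, {}
    while remaining:
        ready = [p for p in remaining if p[1] <= clock]
        if not ready:                    # CPU idle until the next arrival
            clock = min(p[1] for p in remaining)
            continue
        job = min(ready, key=lambda p: p[2])   # shortest burst first
        pid, arrival, burst = job
        waits[pid] = clock - arrival
        clock += burst                   # runs to completion
        remaining.remove(job)
    return waits

waits = sjf([("P0", 0, 10), ("P1", 1, 6), ("P2", 3, 2), ("P3", 5, 4)])
# P0 waits 0, P2 waits 7, P3 waits 7, P1 waits 15: average 7.25 ms
```

The average wait drops from 8.75 ms under FCFS to 7.25 ms, at the cost of the longest job (here P1) waiting longer.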
Shortest Remaining Time Next (SRTN):
Selection criteria :
• The process, whose remaining run time is shortest, is
served first. This is a preemptive version of SJF
scheduling.
Decision Mode:
• Preemptive: When a new process arrives, its total
time is compared to the current process's remaining
run time.
• If the new job needs less time to finish than the
current process, the current process is suspended
and the new job is started.
Shortest Remaining Time Next (SRTN):
Implementation :
• This strategy can also be implemented by using
sorted FIFO queue. All processes in a queue are
sorted in ascending order on their remaining run
time.
• When CPU becomes free, a process from the first
position in a queue is selected to run.
Shortest Remaining Time Next (SRTN):
Example :
• Consider the following set of four processes.
Their arrival time and time required to complete
the execution are given in following table.
Consider all time values in milliseconds.
Shortest Remaining Time Next (SRTN):
 Initially only process P0 is present and it is allowed to
run. But, when P1 comes, it has shortest remaining run
time. So, P0 is preempted and P1 is allowed to run.
 Whenever new process comes or current process blocks,
such type of decision is taken. This procedure is
repeated till all processes complete their execution.
Shortest Remaining Time Next (SRTN):
Shortest Remaining Time Next (SRTN):
Advantages :
 Less waiting time.
 Quite good response for short processes.
Disadvantages :
 Again it is difficult to estimate remaining time
necessary to complete execution.
 Starvation is possible for long process. Long
process may wait forever.
 Context switch overhead is there.
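SRTN can be simulated one time unit at a time over a hypothetical process set (the slide's table is not reproduced): at each tick, the arrived process with the least remaining time runs, so a new arrival can preempt the running process.

```python
# Preemptive SRTN: each tick, run the arrived job with least remaining time.
def srtn(procs):
    # procs: list of (pid, arrival_time, burst_time)
    remaining = {pid: burst for pid, _, burst in procs}
    arrival = {pid: a for pid, a, _ in procs}
    finish, clock = {}, 0
    while remaining:
        ready = [p for p in remaining if arrival[p] <= clock]
        if not ready:                        # CPU idle until next arrival
            clock += 1
            continue
        pid = min(ready, key=lambda p: remaining[p])
        remaining[pid] -= 1                  # run chosen process one unit
        clock += 1
        if remaining[pid] == 0:
            finish[pid] = clock
            del remaining[pid]
    return {p: finish[p] - arrival[p] for p in finish}   # turnaround times

turnaround = srtn([("P0", 0, 10), ("P1", 1, 6), ("P2", 3, 2), ("P3", 5, 4)])
# the short P2, arriving while P0 runs, preempts it and finishes fast
```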
Round Robin:
Selection Criteria:
 Each selected process is assigned a time interval,
called time quantum or time slice.
 Process is allowed to run only for this time
interval. Here, two things are possible:
 First, Process is either blocked or terminated
before the quantum has elapsed. In this case the
CPU switching is done and another process is
scheduled to run.
Round Robin:
• Second, the process needs a CPU burst longer than
the time quantum. In this case, the process is still
running when the time quantum expires.
• Now, it will be preempted and moved to the end
of the queue. CPU will be allocated to another
process. Here, length of time quantum is critical
to determine.
Round Robin:
Decision Mode:
• Preemptive:
Implementation :
This strategy can be implemented by using a circular
FIFO queue. Whenever a process arrives, releases
the CPU, or is preempted, it is moved to the end of
the queue.
When the CPU becomes free, the process at the first
position in the queue is selected to run.
Round Robin:
Example :
 Consider the following set of four processes. Their
arrival time and time required to complete the
execution are given in the following table.
 All time values are in milliseconds. Consider that
time quantum is of 4 ms, and context switch
overhead is of 1 ms.
Round Robin:
At 4 ms, process P0 completes its time quantum, so it is preempted and another
process, P1, is allowed to run. At 12 ms, process P2 voluntarily releases the CPU, and
another process is selected to run. 1 ms is wasted on each context switch as
overhead. This procedure is repeated till all processes complete their execution.
Round Robin:
Round Robin:
Advantages:
 One of the oldest, simplest, fairest and most
widely used algorithms.
Disadvantages:
 Context switch overhead is there.
 Determination of time quantum is too critical. If it
is too short, it causes frequent context switches
and lowers CPU efficiency. If it is too long, it causes
poor response for short interactive process.
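Round robin can be sketched over the same hypothetical process set, with the slide's 4 ms quantum; for simplicity this sketch ignores the 1 ms context-switch overhead from the slide's example.

```python
import collections

# Round robin: each process runs at most one quantum, then goes to the
# back of the queue; arrivals during a run join the queue first.
def round_robin(procs, quantum=4):
    arrivals = sorted(procs, key=lambda p: p[1])  # (pid, arrival, burst)
    remaining = {pid: burst for pid, _, burst in procs}
    queue, finish, clock = collections.deque(), {}, 0
    while remaining:
        while arrivals and arrivals[0][1] <= clock:   # admit new arrivals
            queue.append(arrivals.pop(0)[0])
        if not queue:                                 # CPU idle
            clock = arrivals[0][1]
            continue
        pid = queue.popleft()
        run = min(quantum, remaining[pid])            # one quantum at most
        clock += run
        remaining[pid] -= run
        while arrivals and arrivals[0][1] <= clock:   # arrivals mid-run
            queue.append(arrivals.pop(0)[0])
        if remaining[pid] == 0:
            finish[pid] = clock                       # process terminated
            del remaining[pid]
        else:
            queue.append(pid)                         # preempted: to the back
    return finish

finish = round_robin([("P0", 0, 10), ("P1", 1, 6), ("P2", 3, 2), ("P3", 5, 4)])
```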
Non Preemptive Priority Scheduling:
Selection criteria :
The process that has the highest priority is served
first.
Decision Mode:
 Non Preemptive: Once a process is selected, it
runs until it blocks for an I/O or some event, or it
terminates
Non Preemptive Priority Scheduling:
Implementation :
 This strategy can be implemented by using sorted
FIFO queue. All processes in a queue are sorted
based on their priority with highest priority
process at front end.
 When CPU becomes free, a process from the first
position in a queue is selected to run.
Non Preemptive Priority Scheduling:
Example :
Consider the following set of four processes.
Their arrival times, the total time required to
complete execution, and priorities are given in the
following table. Consider all time values in
milliseconds; a smaller value for priority means a
higher priority.
Non Preemptive Priority Scheduling:
 Initially only process P0 is present and it is allowed to run. But,
when P0 completes, all other processes are present. So, process
with highest priority P3 is selected and allowed to run till it
completes. This procedure is repeated till all processes complete
their execution.
Non Preemptive Priority Scheduling:
Non Preemptive Priority Scheduling:
Advantages:
 Priority is considered. Critical processes can get
even better response time.
Disadvantages:
 Starvation is possible for low priority processes.
It can be overcome by using technique called
‘Aging’.
 Aging: gradually increases the priority of
processes that wait in the system for a long time.
Preemptive Priority Scheduling:
Selection criteria :
The process that has the highest priority is served
first.
Decision Mode:
 Preemptive: When a new process arrives, its
priority is compared with current process priority.
If the new job has higher priority than the current,
the current process is suspended and new job is
started.
Preemptive Priority Scheduling:
Implementation :
• This strategy can be implemented by using sorted
FIFO queue. All processes in a queue are sorted
based on priority with highest priority process at
front end.
• When CPU becomes free, a process from the first
position in a queue is selected to run.
Preemptive Priority Scheduling:
Example :
 Consider the following set of four processes. Their
arrival time, time required completing the
execution and priorities are given in following
table.
 Consider all time values in milliseconds and small
value of priority means higher priority of the
process.
Preemptive Priority Scheduling:
Preemptive Priority Scheduling:
Preemptive Priority Scheduling:
Advantages:
 Priority is considered. Critical processes can get
even better response time.
Disadvantages:
 Starvation is possible for low priority processes.
It can be overcome by using technique called
‘Aging’.
 Aging: gradually increases the priority of
processes that wait in the system for a long time.
Context switch overhead is there.
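The aging idea mentioned above can be sketched as a selection function. The processes and the aging rate are hypothetical; a smaller number means a higher priority, and a process's effective priority improves the longer it has waited, so low-priority processes cannot starve.

```python
# Priority selection with aging: effective priority =
# base priority - aging_rate * waiting time (smaller = higher priority).
def pick_next(ready, clock, aging_rate=0.1):
    return min(ready,
               key=lambda p: p["prio"] - aging_rate * (clock - p["arrival"]))

ready = [
    {"pid": "P0", "prio": 5, "arrival": 0},    # low priority, waiting long
    {"pid": "P1", "prio": 2, "arrival": 90},   # high priority, just arrived
]
# With aging, the long-waiting low-priority P0 wins:
assert pick_next(ready, clock=100)["pid"] == "P0"
# Without aging, the higher-priority P1 would always win:
assert pick_next(ready, clock=95, aging_rate=0)["pid"] == "P1"
```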
Process Control Block  (PCB) print 4.pdf

  • 5. Thread Structure • The thread has a program counter that keeps track of which instruction to execute next. • It has registers, which hold its current working variables. • It has a stack, which contains the execution history, with one frame for each procedure called but not yet returned from. • What threads add to the process model is allowing multiple executions to take place in the same process environment, largely independent of one another.
  • 6. Thread Structure • Having multiple threads running in parallel in one process is similar to having multiple processes running in parallel in one computer. (a) Three processes, each with one thread. (b) One process with three threads.
  • 7. Thread Structure  In the former case, the threads share an address space, open files, and other resources.  In the latter case, processes share physical memory, disks, printers, and other resources.  In Fig. (a) we see three traditional processes. Each process has its own address space and a single thread of control.  In contrast, in Fig. (b) we see a single process with three threads of control.
  • 8. Thread Structure • Although in both cases we have three threads, in Fig. (a) each of them operates in a different address space, whereas in Fig. (b) all three of them share the same address space.
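The shared address space described above can be demonstrated with a short Python sketch (Python chosen only for brevity): three threads in one process all update the same global variable, matching Fig. (b), whereas three separate processes would each get a private copy of it, as in Fig. (a).

```python
# Three threads of one process share the variable `counter`
# (one address space); a lock guards the shared update.
import threading

counter = 0
lock = threading.Lock()

def worker():
    global counter
    for _ in range(1000):
        with lock:          # threads share `counter`, so guard it
            counter += 1

threads = [threading.Thread(target=worker) for _ in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 3000: all three threads updated the same variable
```

Had the three workers been separate processes, each would have incremented its own private copy of `counter`, and the parent's copy would still be 0.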
  • 9. Multithreading and Multitasking Multithreading  The ability of an operating system to execute different parts of a program, called threads, simultaneously is called multithreading.  The programmer must carefully design the program so that all the threads can run at the same time without interfering with each other.
  • 10. Multithreading  On a single processor, multithreading generally occurs by time-division multiplexing: the processor switches between the different threads.  This context switching generally happens so quickly that the user perceives the threads or tasks as running at the same time.
  • 11. Multitasking • The ability to execute more than one task at the same time is called multitasking. • In multitasking, only one CPU is involved, but it switches from one program to another so quickly that it gives the appearance of executing all of the programs at the same time. There are two basic types of multitasking. Preemptive: In preemptive multitasking, the operating system assigns CPU time slices to each program.
  • 12. Multitasking Cooperative: In cooperative multitasking, each program can control the CPU for as long as it needs it. When a program is not using the CPU, however, it can allow another program to use it.
  • 13. Similarities and dissimilarities between process and thread. Similarities  Like processes, threads share the CPU, and only one thread is active (running) at a time.  Like processes, threads within a process execute sequentially.  Like processes, threads can create children.  Like a traditional process, a thread can be in any one of several states: running, blocked, ready, or terminated.  Like processes, threads have a program counter, stack, registers, and state.
  • 14. Similarities and dissimilarities between process and thread. Dissimilarities  Unlike processes, threads are not independent of one another; threads within the same process share an address space.  Unlike processes, all threads can access every address in the task.  Unlike processes, threads are designed to assist one another. Note that processes might or might not assist one another, because processes may originate from different users.
  • 15. Thread Usage-Why do we need threads? • E.g., a word processor has different parts; parts for – Interacting with the user – Formatting the page as soon as the changes are made – Timed savings (for auto recovery) – Spelling and grammar checking
  • 16. Thread Usage-Why do we need threads? 1. Simplifying the programming model, since many activities are going on at once. 2. They are easier to create and destroy than processes, since they don't have any resources attached to them. 3. Performance improves by overlapping activities when there is a lot of I/O.
  • 17. Thread Usage-Why do we need threads? 4. Real parallelism is possible if there are multiple CPUs Note: implementation details are beyond the scope of the course (distributed systems).
  • 18. Advantages of Thread Threads minimize the context-switching time. Use of threads provides concurrency within a process. Efficient communication.  It is more economical to create and context-switch threads than processes. Threads allow utilization of multiprocessor architectures on a greater scale and with greater efficiency.
  • 19. Context Switch  A context switch is the mechanism to store and restore the state, or context, of a CPU in the process control block so that a process's execution can be resumed from the same point at a later time.  Using this technique, a context switcher enables multiple processes to share a single CPU. Context switching is an essential feature of a multitasking operating system.
  • 20. Context Switch  When the scheduler switches the CPU from one process to another, the state of the currently running process is stored into its process control block.  After this, the state of the process to run next is loaded from its own PCB and used to set the PC, registers, etc.  At that point, the second process can start executing.
  • 22. Context switching  Switching the CPU to another process requires saving the environment of the old process and loading the saved environment of the new process. This task is called context switching.  Context-switch time, also called dispatch latency, is pure overhead (time wasted in the transition by the OS), and it depends on the hardware (1 to 100 ms).  It is sometimes a performance bottleneck.
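The save/restore steps above can be illustrated with a toy simulation. The PCB fields and values here are hypothetical, and a real context switch is performed by the kernel in privileged mode, not in user code; this only mirrors the bookkeeping the slides describe.

```python
# Toy context switch: save the CPU state into the old process's PCB,
# then load the new process's saved state into the CPU.
cpu = {"pc": 104, "regs": [1, 2, 3]}        # process A is running

pcb_a = {"pid": "A", "pc": None, "regs": None}
pcb_b = {"pid": "B", "pc": 200, "regs": [9, 8, 7]}  # B's saved state

def context_switch(cpu, old_pcb, new_pcb):
    # 1. Save the running process's environment into its PCB.
    old_pcb["pc"], old_pcb["regs"] = cpu["pc"], list(cpu["regs"])
    # 2. Load the next process's saved environment into the CPU.
    cpu["pc"], cpu["regs"] = new_pcb["pc"], list(new_pcb["regs"])

context_switch(cpu, pcb_a, pcb_b)
print(cpu["pc"], pcb_a["pc"])   # 200 104: B runs, A's state is kept
```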
  • 24. Interprocess communication  Processes frequently need to communicate with other processes.  Processes may share a memory area or a file for communication.  There are three issues related to IPC: 1. How can one process pass information to another? 2. How can we make sure two or more processes do not interfere with each other when engaged in critical activities, e.g., grabbing the last 1 MB of memory?
  • 29. Interprocess communication 3. Proper sequencing of events when dependencies exist, e.g., one process produces data and another process consumes it.  These issues also apply to threads; the first is easy for threads, since they share a common address space.
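The first issue, passing information from one process to another, can be sketched with a pipe. The example below assumes a Unix-like system where `os.fork` is available; the message text is arbitrary.

```python
# Two processes (parent and child) communicate through a pipe.
# They do not share an address space, so the data is copied
# through the kernel.
import os

r, w = os.pipe()                   # read end, write end
pid = os.fork()
if pid == 0:                       # child: the sending process
    os.close(r)
    os.write(w, b"data from child")
    os.close(w)
    os._exit(0)
else:                              # parent: the receiving process
    os.close(w)
    msg = os.read(r, 1024).decode()
    os.close(r)
    os.waitpid(pid, 0)
print(msg)                         # prints: data from child
```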
  • 30. Why interprocess communication? As we said, processes must cooperate to accomplish a task, and to do so they must communicate with each other. Why? Because the output of one process may be the input of another. Why is interprocess communication important?  To exchange information and transfer data among themselves.
  • 31. Why interprocess communication  Dependencies among processes also force communication: for example, if process A produces data and process B prints it, B must wait until A has produced the data before it starts printing.
  • 32. Interprocess communication  To accomplish a task, processes must communicate at certain points. Example P1-----------@-----------P2  If processes P1 and P2 do their work using the variables in @, they must use @ sequentially: while P1 is using the variables in @, P2 must wait until P1 is finished with them.
  • 33. Interprocess communication  Why do P1 and P2 wait for each other? Because they have a common shared resource and depend on each other; both processes use global as well as local variables. When one process is using the resource, the other process must wait. P1----------R----------P2
  • 34. Interprocess communication • From the above, when P1 and P2 use memory, files, or other shared resources, they must take turns, and to share a resource safely they must communicate with each other about it.
  • 35. Interprocess communication • For example, suppose two processes try to print: process P1 reaches the printer and printing begins, but process P2 arrives while P1 is still printing. If the processes do not wait for each other, P2 starts printing too, their outputs are merged, and each destroys the other's job.
  • 36. Race Conditions  When several processes access and manipulate the same data concurrently, and the outcome of the execution depends on the particular order in which the accesses take place, we have a race condition.  In the OS, processes that work together may share common resources; for example, one may read while the other needs to write.
  • 37. Race Conditions Race conditions – Arise as a result of sharing resources – E.g., the printer spooler – When a process wants to print a file, it enters the file name in a special spooler directory.
  • 38. Race Conditions  Another process, the printer daemon, periodically checks whether there are any files to be printed; if there are, it prints them and removes their names from the directory.  Recall that a daemon is a process running in the background, started automatically when the system is booted.  Assume that the spooler directory has a large number of slots, numbered 0, 1, 2, ..., n, each capable of holding a file name.
  • 39. Race Conditions Two shared variables: – out points to the next file to be printed – in points to the next free slot in the directory
  • 40. Race Conditions Then the following may happen: – Process A reads in and stores the value 7 in a local variable. – A clock interrupt occurs and the CPU is switched to B. – B also reads in and stores the value 7 in a local variable. – It stores the name of its file in slot 7 and updates in to be 8. – A runs again; it writes its file name in slot 7, erasing the file name that B wrote. – It updates in to be 8.
  • 41. Race Conditions  The printer daemon will not notice anything wrong, but B will never receive any output.  Situations like this, where two or more processes are reading and writing some shared data and the final result depends on who runs precisely when, are called race conditions.
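The interleaving above can be replayed deterministically in a few lines; the slot table is illustrative, and the shared variable is spelled `in_` only because `in` is a Python keyword. Both processes read `in` before either writes it back, so B's file name is lost:

```python
# Deterministic replay of the spooler race from the slide above.
slots = [None] * 10
in_ = 7                     # next free slot

# Process A reads the shared variable into a local copy...
next_a = in_
# ...then a "clock interrupt" switches to B, which also reads 7.
next_b = in_
slots[next_b] = "B's file"
in_ = next_b + 1            # B updates in to 8
# A runs again, unaware, and overwrites slot 7.
slots[next_a] = "A's file"
in_ = next_a + 1            # in is 8, but B's file name is gone
print(slots[7], in_)        # A's file 8
```

Slot 7 now holds A's file name only; the daemon sees nothing wrong, and B's job silently vanishes.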
  • 42. Critical Regions  A critical region is the part of a program where shared resources are accessed; it is also called a critical section.  It is in the critical section that race conditions occur. How do we prevent them?  The main way is to ensure that no more than one process uses a shared resource, such as shared memory or shared files, at the same time.
  • 43. Avoiding Race Conditions • To avoid race conditions we need mutual exclusion. • Mutual exclusion is some way of making sure that if one process is using a shared variable or file, the other processes will be excluded from doing the same thing. • The difficulty in the printer spooler above occurs because process B started using one of the shared variables before process A was finished with it.
  • 44. Critical Region What is a critical region?  It is the part of a program where a shared resource is found; it is also called a critical section, and it is where race conditions occur. How do we prevent those race conditions?  The main way is to ensure that no more than one process uses the shared memory or shared files at the same time: when P1 is using a resource, P2 must wait or be prevented from entering by the OS design.
  • 45. Avoiding Race Conditions  The part of the program where the shared memory is accessed is called the critical region or critical section.  If we could arrange matters such that no two processes were ever in their critical regions at the same time, we could avoid race conditions.  Although this requirement avoids race conditions, it is not sufficient for having parallel processes cooperate correctly and efficiently using shared data.
  • 46. Avoiding Race Conditions (Rules for avoiding Race Condition) Solution to the critical-section problem: four conditions must hold. 1. No two processes may be simultaneously inside their critical regions. 2. No assumptions may be made about speeds or the number of CPUs. 3. No process running outside its critical region may block other processes. 4. No process should have to wait forever to enter its critical region.
  • 48. Mutual exclusion using critical Regions  As seen above, process A enters its critical region at time T1. A little later, at time T2, process B attempts to enter its critical region; since we allow only one process at a time, B is temporarily suspended until time T3, when A leaves its critical region, allowing B to enter immediately.  In other words, what we need is mutual exclusion: some way of making sure that if one process is using a resource, the other processes are kept out.
  • 49. Mutual exclusion using critical Regions  Eventually B leaves at T4, and we are back to the original situation with no process in its critical region.
  • 50. Mutual exclusion with busy waiting What is busy waiting?  Waiting for another process to leave the critical region while doing nothing else. There are two ways of achieving mutual exclusion. 1. Mutual exclusion with busy waiting Disabling Interrupts  On a single-processor system, the simplest solution is to have each process disable all interrupts just after entering its critical region and re-enable them just before leaving it.
  • 51.  Disabling Interrupts • For example, if P1 wants to use the critical region, it enters and disables interrupts, so process P2 cannot be scheduled into the critical region; after P1 leaves the critical region, interrupts are enabled again and the CPU can resume scheduling other processes. • This approach is generally unattractive because it is unwise to give user processes the power to turn interrupts off: if a process forgets to turn them on again, that could be the end of the system.
  • 52. Mutual exclusion with busy waiting Lock variable  Consider a software solution with a single, shared lock variable, initially 0. When a process wants to enter its critical region, it tests the lock. If the lock is 0, the process sets it to 1 and enters the critical region; if the lock is already 1, the process just waits until it becomes 0. Thus lock = 0 means no process is in its critical region, and lock = 1 means some process is in its critical region.
  • 53.  Lock variable  The problem with this algorithm is that the test and the set are not atomic: one process may read the lock and see that it is 0, but before it can set the lock to 1, another process is scheduled, runs, and sets the lock to 1. Both processes would then be in their critical regions at the same time.
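The fatal interleaving can be made concrete with a small Python sketch. This is a deterministic simulation of two "processes" (not real threads): the read of the lock and the write to it are separated so that a context switch can be modeled between them.

```python
# Deterministic simulation of the flawed lock-variable protocol.
# The "test" and the "set" are separated so we can model a context
# switch happening between them.
lock = 0
in_critical = []

def try_enter(pid, observed_lock):
    """observed_lock is the value this process read before being switched out."""
    global lock
    if observed_lock == 0:   # the test ...
        lock = 1             # ... and the set are NOT atomic
        in_critical.append(pid)

# Interleaving: A reads 0, then B is scheduled and also reads 0.
a_sees = lock
b_sees = lock
try_enter("A", a_sees)
try_enter("B", b_sees)

print(in_critical)  # ['A', 'B'] -> mutual exclusion violated
```

Both processes observed lock = 0 before either wrote 1, so both entered; a correct lock needs the test and set performed as one atomic step.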
  • 54. Mutual exclusion without busy waiting  Races occur in critical regions because processes share common resources such as RAM, the CPU, and files.  Solving the race problem with busy waiting has a drawback: it keeps the CPU busy to no purpose and wastes cycles. To avoid this, other algorithms were developed.
  • 55. Mutual exclusion without busy waiting Sleep and wakeup  With sleep and wakeup, a process that cannot enter its critical region does not busy-wait: it blocks (sleeps) until the critical region is free, letting the CPU do other work in the meantime.  When the process inside leaves the critical region, it performs a wakeup: the operating system sends a signal to the sleeping process to tell it that the resource is now free.
  • 56.  Sleep and wakeup Notice: what happens when a wakeup signal is sent to a process that is not yet asleep, so the signal is never heard? The producer-consumer problem  Two processes share a common, fixed-size buffer (a part of memory used to hold information).  One process, the producer, puts information into the buffer; another process, the consumer, takes the stored information out, freeing buffer space.
  • 57. The producer-consumer problem • The producer is the process that puts data into the buffer, and the consumer is the process that frees the buffer by taking data out of it. • Trouble arises when a wakeup is lost. For example, the consumer sees the buffer as empty and decides to sleep, but before it actually sleeps the producer inserts an item and sends a wakeup; the signal is not heard and is lost. The consumer then sleeps anyway, the producer eventually fills the buffer and sleeps too, and both sleep forever.
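One standard way to avoid the lost-wakeup problem is to re-check the buffer state while holding a lock before sleeping, which is exactly what a condition variable does. A minimal sketch using Python's threading.Condition (the class and names below are illustrative, not from the slides):

```python
import threading
from collections import deque

class BoundedBuffer:
    """Fixed-size buffer; put/get sleep and wake each other safely."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.items = deque()
        self.cond = threading.Condition()  # lock + wait/notify in one object

    def put(self, item):
        with self.cond:
            while len(self.items) == self.capacity:  # buffer full: producer sleeps
                self.cond.wait()
            self.items.append(item)
            self.cond.notify_all()  # wake any sleeping consumer

    def get(self):
        with self.cond:
            while not self.items:   # buffer empty: consumer sleeps
                self.cond.wait()
            item = self.items.popleft()
            self.cond.notify_all()  # wake any sleeping producer
            return item

buf = BoundedBuffer(2)
results = []
consumer = threading.Thread(target=lambda: results.extend(buf.get() for _ in range(5)))
consumer.start()
for i in range(5):
    buf.put(i)       # blocks whenever the 2-slot buffer is full
consumer.join()
print(results)       # [0, 1, 2, 3, 4]
```

Because the state is re-checked in a `while` loop under the lock, a wakeup can never be lost between the test and the sleep.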
  • 58. Process Scheduling What is a scheduler?  Many processes compete for a single resource such as the CPU, so the operating system must choose which one to run; this choice is the job of the scheduler.  Process scheduling is the activity of the process manager that handles the removal of the running process from the CPU and the selection of another process on the basis of a particular strategy.
  • 59. Process Scheduling  Process scheduling is an essential part of multiprogramming operating systems. Such operating systems allow more than one process to be loaded into executable memory at a time, and the loaded processes share the CPU using time multiplexing.
  • 60. Process Scheduling Queues  The OS maintains all PCBs in process scheduling queues.  The OS maintains a separate queue for each of the process states, and the PCBs of all processes in the same execution state are placed in the same queue.  When the state of a process changes, its PCB is unlinked from its current queue and moved to its new state queue.
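The unlink-and-relink of a PCB between state queues can be sketched in a few lines (the queue names and the integer PCB ids below are illustrative):

```python
from collections import deque

# One queue per process state; PCBs are represented by plain ids here.
queues = {"ready": deque(), "running": deque(), "blocked": deque()}

def admit(pid):
    queues["ready"].append(pid)          # new process enters the ready queue

def change_state(pid, old, new):
    queues[old].remove(pid)              # unlink PCB from its current queue
    queues[new].append(pid)              # link it into the new state's queue

admit(1)
admit(2)
change_state(1, "ready", "running")      # scheduler dispatches process 1
change_state(1, "running", "blocked")    # e.g. process 1 issues an I/O request
print(list(queues["ready"]), list(queues["blocked"]))  # [2] [1]
```

A real kernel links the PCB structures themselves into these lists rather than copying them, which makes the state change a cheap pointer operation.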
  • 61. Process Scheduling Queues Ready queue - This queue keeps the set of all processes residing in main memory, ready and waiting to execute. A new process is always put in this queue (the process is ready to move from RAM to the CPU). Device queues - The processes that are blocked due to the unavailability of an I/O device constitute this queue.
  • 62. Process Scheduling Queues The operating system maintains the following important process scheduling queues: Job queue - This queue keeps all the processes in the system (a process waiting to enter the system moves from the hard disk to RAM).
  • 63. Process scheduling  When a system is multiprogrammed, it frequently has multiple processes competing for the CPU at the same time.  If only one CPU is available, a choice has to be made about which process to run next.  Multiprogramming aims to maximize throughput.  Time sharing aims to allow all users to share the CPU fairly.
  • 64. Scheduling Queues • As a process enters the system, or when a running process is interrupted, it is put into the ready queue. • There are also device queues (waiting queues): each device has its own device queue. • All are generally stored as queues (linked lists), not necessarily FIFO queues.
  • 65. Scheduling levels • Short-term (CPU scheduler) - selects, from the jobs in memory that are ready to execute, the process that runs next and allocates the CPU to it. Once the choice is made, it calls the dispatcher, which does the remaining work, including the context switch. • Medium-term - used especially in time-sharing systems as an intermediate scheduling level. A swapping scheme removes partially run programs from memory and reinstates them later to continue where they left off: when RAM is full, a process is swapped out to backing store, and when space becomes available it is swapped back into main memory. • Long-term (job scheduler) - determines which jobs are brought into memory for processing.
  • 66. Scheduling Algorithms What are the most common algorithms?? 1. FCFS 2. Round Robin 3. Shortest Job First 4. Shortest Remaining Job First 5. Priority Scheduling
  • 67. Scheduling Algorithms FCFS (First Come First Serve) Selection criteria:  The process that requests first is served first; processes are served in the exact order of their arrival. Decision mode:  Non-preemptive: once a process is selected, it runs until it is blocked for I/O or some event, or it is terminated.
  • 68. Scheduling Algorithms FCFS (First Come First Serve) Implementation: • This strategy can be easily implemented by using FIFO queue, FIFO means First In First Out. When CPU becomes free, a process from the first position in a queue is selected to run. Example : Consider the following set of four processes. Their arrival time and time required to complete the execution are given in following table. Consider all time values in milliseconds.
  • 69. FCFS (First Come First Serve)  Initially only process P0 is present, and it is allowed to run. By the time P0 completes, all other processes have arrived. So the next process, P1, is selected from the ready queue and allowed to run until it completes. This procedure is repeated until all processes have completed their execution.
  • 70. FCFS (First Come First Serve)
  • 71. FCFS (First Come First Serve) Advantages:  Simple, fair, no starvation.  Easy to understand, easy to implement. Disadvantages :  Not efficient. Average waiting time is too high.  Convoy effect is possible. All small I/O bound processes wait for one big CPU bound process to acquire CPU.  CPU utilization may be less efficient especially when a CPU bound process is running with many I/O bound processes.
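Since the slide's process table did not survive extraction, the following sketch uses hypothetical (arrival, burst) values to show how FCFS waiting times are computed and why the average tends to be high:

```python
# FCFS: serve processes strictly in arrival order, non-preemptively.
# These (name, arrival, burst) values are hypothetical, not the slide's table.
procs = [("P0", 0, 10), ("P1", 1, 6), ("P2", 3, 2), ("P3", 5, 4)]

def fcfs(procs):
    time, waiting = 0, {}
    for name, arrival, burst in sorted(procs, key=lambda p: p[1]):
        start = max(time, arrival)        # CPU may sit idle until arrival
        waiting[name] = start - arrival   # time spent in the ready queue
        time = start + burst
    return waiting

w = fcfs(procs)
print(w, sum(w.values()) / len(w))  # the short jobs P2/P3 wait behind long P0
```

With these numbers the short processes P2 and P3 wait 13 ms each behind the long P0, illustrating the convoy effect mentioned above.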
  • 72. Scheduling Algorithms Shortest Job First (SJF): Selection criteria:  The process that requires the shortest time to complete execution is served first. Decision mode:  Non-preemptive: once a process is selected, it runs until either it is blocked for I/O or some event, or it is terminated. Implementation:
  • 73. Shortest Job First (SJF): • This strategy can be implemented by using sorted FIFO queue. • All processes in a queue are sorted in ascending order based on their required CPU bursts. When CPU becomes free, a process from the first position in a queue is selected to run. Example : • Consider the following set of four processes. Their arrival time and time required to complete the execution are given in following table. Consider all time values in milliseconds.
  • 75. Shortest Job First (SJF): • Initially only process P0 is present, and it is allowed to run. By the time P0 completes, all other processes have arrived. • So the process with the shortest CPU burst, P2, is selected and allowed to run until it completes. • Whenever more than one process is available, this kind of decision is taken. • This procedure is repeated until all processes complete their execution.
  • 77. Shortest Job First (SJF): Advantages:  Less waiting time.  Good response for short processes. Disadvantages :  It is difficult to estimate time required to complete execution.  Starvation is possible for long process. Long process may wait forever.
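Non-preemptive SJF can be sketched with the same hypothetical workload used for FCFS above (idle time and ties handled naively):

```python
# Non-preemptive SJF: among arrived, unfinished processes,
# always run the one with the shortest CPU burst to completion.
procs = [("P0", 0, 10), ("P1", 1, 6), ("P2", 3, 2), ("P3", 5, 4)]

def sjf(procs):
    time, done, waiting = 0, set(), {}
    while len(done) < len(procs):
        ready = [p for p in procs if p[1] <= time and p[0] not in done]
        if not ready:                       # CPU idle: jump to next arrival
            time = min(p[1] for p in procs if p[0] not in done)
            continue
        name, arrival, burst = min(ready, key=lambda p: p[2])  # shortest burst
        waiting[name] = time - arrival      # time waited in the ready queue
        time += burst
        done.add(name)
    return waiting

print(sjf(procs))  # P1, the longest pending job, waits the most
```

Compared with the FCFS result for the same workload, the short jobs P2 and P3 now wait 7 ms instead of 13 ms, at the cost of the longer P1 waiting 15 ms, which is exactly the starvation risk noted above.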
  • 78. Shortest Remaining Time Next (SRTN): Selection criteria : • The process, whose remaining run time is shortest, is served first. This is a preemptive version of SJF scheduling. Decision Mode: • Preemptive: When a new process arrives, its total time is compared to the current process remaining run time. • If the new job needs less time to finish than the current process, the current process is suspended and the new job is started.
  • 79. Shortest Remaining Time Next (SRTN): Implementation : • This strategy can also be implemented by using sorted FIFO queue. All processes in a queue are sorted in ascending order on their remaining run time. • When CPU becomes free, a process from the first position in a queue is selected to run.
  • 80. Shortest Remaining Time Next (SRTN): Example : • Consider the following set of four processes. Their arrival time and time required to complete the execution are given in following table. Consider all time values in milliseconds.
  • 81. Shortest Remaining Time Next (SRTN):  Initially only process P0 is present, and it is allowed to run. But when P1 arrives, it has the shortest remaining run time, so P0 is preempted and P1 is allowed to run.  Whenever a new process arrives or the current process blocks, this kind of decision is taken. This procedure is repeated until all processes complete their execution.
  • 82. Shortest Remaining Time Next (SRTN):
  • 83. Shortest Remaining Time Next (SRTN): Advantages :  Less waiting time.  Quite good response for short processes. Disadvantages :  Again it is difficult to estimate remaining time necessary to complete execution.  Starvation is possible for long process. Long process may wait forever.  Context switch overhead is there.
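SRTN can be sketched as a tick-by-tick simulation over the same hypothetical workload (1 ms ticks, context-switch overhead ignored):

```python
# SRTN: preemptive SJF, simulated in 1 ms ticks.
# Workload is hypothetical; context-switch overhead is ignored.
procs = [("P0", 0, 10), ("P1", 1, 6), ("P2", 3, 2), ("P3", 5, 4)]

def srtn(procs):
    remaining = {name: burst for name, _, burst in procs}
    arrival = {name: arr for name, arr, _ in procs}
    time, finish = 0, {}
    while remaining:
        ready = [n for n in remaining if arrival[n] <= time]
        if not ready:
            time += 1
            continue
        current = min(ready, key=lambda n: remaining[n])  # may preempt the old pick
        remaining[current] -= 1
        time += 1
        if remaining[current] == 0:
            del remaining[current]
            finish[current] = time
    return {n: finish[n] - arrival[n] for n in finish}    # turnaround times

print(srtn(procs))  # the short P2 finishes almost immediately after arriving
```

Note how P0 is preempted the instant P1 arrives with a shorter remaining time, and ends up with the worst turnaround, again showing the long-process starvation risk.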
  • 84. Round Robin: Selection Criteria:  Each selected process is assigned a time interval, called time quantum or time slice.  Process is allowed to run only for this time interval. Here, two things are possible:  First, Process is either blocked or terminated before the quantum has elapsed. In this case the CPU switching is done and another process is scheduled to run.
  • 85. Round Robin: • Second, Process needs CPU burst longer than time quantum. In this case, process is running at the end of the time quantum. • Now, it will be preempted and moved to the end of the queue. CPU will be allocated to another process. Here, length of time quantum is critical to determine.
  • 86. Round Robin: Decision mode: • Preemptive Implementation:  This strategy can be implemented by using a circular FIFO queue. Whenever a process arrives, releases the CPU, or is preempted, it is moved to the end of the queue.  When the CPU becomes free, the process at the first position in the queue is selected to run.
  • 87. Round Robin: Example :  Consider the following set of four processes. Their arrival time and time required to complete the execution are given in the following table.  All time values are in milliseconds. Consider that time quantum is of 4 ms, and context switch overhead is of 1 ms.
  • 88. Round Robin:  At 4 ms, process P0 completes its time quantum, so it is preempted and another process, P1, is allowed to run. At 12 ms, process P2 voluntarily releases the CPU, and another process is selected to run. 1 ms is wasted on each context switch as overhead. This procedure is repeated until all processes complete their execution.
  • 90. Round Robin: Advantages:  One of the oldest, simplest, fairest and most widely used algorithms. Disadvantages:  Context switch overhead is there.  Determination of time quantum is too critical. If it is too short, it causes frequent context switches and lowers CPU efficiency. If it is too long, it causes poor response for short interactive process.
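A simplified round-robin sketch (all processes assumed to arrive at time 0, and the slide's 1 ms context-switch overhead omitted; burst values are hypothetical):

```python
from collections import deque

def round_robin(procs, quantum):
    """procs: list of (name, burst). Returns completion time of each process."""
    queue = deque(procs)                 # circular FIFO ready queue
    time, completion = 0, {}
    while queue:
        name, remaining = queue.popleft()
        run = min(quantum, remaining)    # run for a full quantum or less
        time += run
        if remaining > run:
            queue.append((name, remaining - run))  # quantum expired: back of queue
        else:
            completion[name] = time      # finished within its quantum
    return completion

print(round_robin([("P0", 10), ("P1", 6), ("P2", 2)], 4))
```

With a quantum of 4 ms, the short P2 finishes after one pass while the longer jobs keep cycling, which is what gives round robin its good interactive response.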
  • 91. Non Preemptive Priority Scheduling: Selection criteria :  The process, that has highest priority, is served first. Decision Mode:  Non Preemptive: Once a process is selected, it runs until it blocks for an I/O or some event, or it terminates
  • 92. Non Preemptive Priority Scheduling: Implementation :  This strategy can be implemented by using sorted FIFO queue. All processes in a queue are sorted based on their priority with highest priority process at front end.  When CPU becomes free, a process from the first position in a queue is selected to run.
  • 93. Non Preemptive Priority Scheduling: Example:  Consider the following set of four processes. Their arrival times, the total time required to complete execution, and their priorities are given in the following table. Consider all time values in milliseconds, and note that a smaller priority value means a higher priority.
  • 94. Non Preemptive Priority Scheduling:  Initially only process P0 is present and it is allowed to run. But, when P0 completes, all other processes are present. So, process with highest priority P3 is selected and allowed to run till it completes. This procedure is repeated till all processes complete their execution.
  • 95. Non Preemptive Priority Scheduling:
  • 96. Non Preemptive Priority Scheduling: Advantages:  Priority is considered. Critical processes can get even better response time. Disadvantages:  Starvation is possible for low priority processes. It can be overcome by using technique called ‘Aging’.  Aging: gradually increases the priority of processes that wait in the system for a long time.
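Non-preemptive priority scheduling can be sketched with a hypothetical workload (smaller number = higher priority, matching the slides' convention; the table values are illustrative):

```python
# Non-preemptive priority: among arrived, unfinished processes,
# run the highest-priority one (smallest number) to completion.
procs = [("P0", 0, 10, 3), ("P1", 1, 6, 2), ("P2", 3, 2, 4), ("P3", 5, 4, 1)]
# fields: (name, arrival, burst, priority)

def priority_np(procs):
    time, done, turnaround = 0, set(), {}
    while len(done) < len(procs):
        ready = [p for p in procs if p[1] <= time and p[0] not in done]
        if not ready:                     # CPU idle: jump to next arrival
            time = min(p[1] for p in procs if p[0] not in done)
            continue
        name, arrival, burst, _prio = min(ready, key=lambda p: p[3])
        time += burst
        done.add(name)
        turnaround[name] = time - arrival
    return turnaround

print(priority_np(procs))  # P3 (priority 1) runs right after P0 completes
```

As in the slide's narrative, P0 runs first because it is alone, and when it finishes the highest-priority waiting process, P3, is chosen next; the lowest-priority P2 is served last.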
  • 97. Preemptive Priority Scheduling: Selection criteria :  The process, that has highest priority, is served first. Decision Mode:  Preemptive: When a new process arrives, its priority is compared with current process priority. If the new job has higher priority than the current, the current process is suspended and new job is started.
  • 98. Preemptive Priority Scheduling: Implementation : • This strategy can be implemented by using sorted FIFO queue. All processes in a queue are sorted based on priority with highest priority process at front end. • When CPU becomes free, a process from the first position in a queue is selected to run.
  • 99. Preemptive Priority Scheduling: Example:  Consider the following set of four processes. Their arrival times, the time required to complete execution, and their priorities are given in the following table.  Consider all time values in milliseconds; a smaller priority value means a higher priority.
  • 102. Preemptive Priority Scheduling: Advantages:  Priority is considered. Critical processes can get even better response time. Disadvantages:  Starvation is possible for low priority processes. It can be overcome by using technique called ‘Aging’.  Aging: gradually increases the priority of processes that wait in the system for a long time. Context switch overhead is there.
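The preemptive variant differs only in that the arrival of a higher-priority process preempts the running one; a tick-based sketch with hypothetical values (smaller number = higher priority, overhead ignored):

```python
# Preemptive priority, simulated in 1 ms ticks.
# fields: (name, arrival, burst, priority); values are hypothetical.
procs = [("P0", 0, 4, 2), ("P1", 1, 3, 1), ("P2", 2, 2, 3)]

def priority_preemptive(procs):
    remaining = {n: b for n, _, b, _ in procs}
    arrival = {n: a for n, a, _, _ in procs}
    prio = {n: p for n, _, _, p in procs}
    time, finish = 0, {}
    while remaining:
        ready = [n for n in remaining if arrival[n] <= time]
        if not ready:
            time += 1
            continue
        current = min(ready, key=lambda n: prio[n])  # re-evaluated every tick
        remaining[current] -= 1
        time += 1
        if remaining[current] == 0:
            del remaining[current]
            finish[current] = time
    return finish                                    # completion times

print(priority_preemptive(procs))
```

Here P0 starts alone, but at 1 ms the higher-priority P1 arrives and preempts it; P0 resumes only after P1 completes, and the low-priority P2 runs last.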