3. Introduction
Prof. K. Adisesha (Ph. D)
Introduction to Process:
A process is basically a program in execution. The execution of a process must
progress in a sequential fashion.
➢ When a program is loaded into the main memory it becomes a process.
➢ A process can be divided into four sections:
❖ Stack: contains temporary data such as function parameters, return addresses and local variables.
❖ Heap: memory that is dynamically allocated to the process during its run time.
❖ Text: includes the current activity, represented by the value of the Program Counter and the contents of the processor's registers.
❖ Data: contains the global and static variables.
4. Introduction
Process Life Cycle:
When a process executes, it passes through different states. These states may differ between
operating systems, and their names are not standardized.
➢ A process can have one of the following five states at a time.
❖ Start
❖ Ready
❖ Running
❖ Waiting
❖ Terminated or Exit
5. Introduction
Process Life Cycle:
➢ Start: This is the initial state when a process is first started/created.
➢ Ready: The process is waiting to be assigned to a processor. Ready processes are
waiting to have the processor allocated to them by the OS so that they can run.
➢ Running: Once the process has been assigned to a processor by the OS scheduler, the
process state is set to running and the processor executes its instructions.
➢ Waiting: Process moves into the waiting state if it needs to wait for a resource, such as
waiting for user input, or waiting for a file to become available.
➢ Terminated or Exit: Once the process finishes its execution, or it is terminated by the
operating system, it is moved to the terminated state where it waits to be removed from
main memory.
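The five states and their legal transitions can be sketched as a small state machine. The following Python model is purely illustrative (the state names come from the slides; the transition table and function are invented for the example):

```python
# Hypothetical model of the five-state process life cycle.
# Maps each state to the states it may legally move to.
ALLOWED = {
    "start": {"ready"},
    "ready": {"running"},
    "running": {"ready", "waiting", "terminated"},
    "waiting": {"ready"},
    "terminated": set(),
}

def transition(state, new_state):
    """Return new_state if the move is legal, else raise ValueError."""
    if new_state not in ALLOWED[state]:
        raise ValueError(f"illegal transition {state} -> {new_state}")
    return new_state

# A typical life: created, scheduled, blocked on I/O, rescheduled, finished.
s = "start"
for nxt in ["ready", "running", "waiting", "ready", "running", "terminated"]:
    s = transition(s, nxt)
print(s)  # terminated
```

Note that there is no direct edge from waiting to running: a process that finishes its I/O must pass through the ready queue again.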
6. Introduction
Process Control Block (PCB):
A Process Control Block is a data structure maintained by the Operating System for
every process.
➢ The PCB is identified by an integer process ID (PID).
➢ A PCB keeps all the information needed to keep track of
a process.
➢ The PCB is maintained for a process throughout its
lifetime, and is deleted once the process terminates.
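A real PCB is a kernel data structure, but its commonly listed fields can be sketched as a simple record. This Python dataclass is a teaching sketch only; the field names follow standard textbook descriptions, not any particular OS:

```python
from dataclasses import dataclass, field

# Illustrative sketch of a Process Control Block.
@dataclass
class PCB:
    pid: int                       # integer process ID
    state: str = "ready"           # current process state
    program_counter: int = 0       # address of the next instruction
    registers: dict = field(default_factory=dict)   # saved CPU registers
    open_files: list = field(default_factory=list)  # I/O status information

# The OS keeps one PCB per process, indexed by PID.
table = {p.pid: p for p in (PCB(1), PCB(2, state="running"))}
print(table[2].state)  # running
```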
7. Threading
Thread:
A thread is also called a lightweight process. A thread is a flow of execution through
the process code.
➢ A thread shares with its peer threads information such as the code segment, data segment and
open files.
➢ Threads provide a way to improve application performance through parallelism.
➢ All threads of a process share the same set of open files and child processes.
➢ Multithreaded processes use fewer resources than equivalent collections of single-threaded processes.
➢ Threads are implemented in the following two ways −
❖ User Level Threads − User managed threads.
❖ Kernel Level Threads − Operating System managed threads acting on kernel, an
operating system core.
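As a concrete sketch, Python's `threading` module creates kernel-level threads (in CPython they are OS threads), and the peers share the process's data, matching the points above. The worker function and values are invented for the example:

```python
import threading

# Peer threads share the process's data segment: all of them
# append into the same "results" list, guarded by a lock.
results = []
lock = threading.Lock()

def worker(n):
    with lock:
        results.append(n * n)

threads = [threading.Thread(target=worker, args=(i,)) for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(sorted(results))  # [0, 1, 4, 9]
```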
8. Threading
Multithreading Models:
Some operating systems provide a combined user-level thread and kernel-level thread
facility.
➢ In a combined system, multiple threads within the same application can run in parallel on
multiple processors and a blocking system call need not block the entire process.
➢ There are three types of multithreading models:
❖ Many to many relationship
❖ Many to one relationship
❖ One to one relationship
9. Threading
Thread:
Difference between User-Level & Kernel-Level Threads:
❖ User-level threads are faster to create and manage; kernel-level threads are slower to create and manage.
❖ User-level threads are implemented by a thread library at the user level; kernel-level threads are created and supported by the operating system itself.
❖ A user-level thread is generic and can run on any operating system; a kernel-level thread is specific to the operating system.
❖ With user-level threads, a multithreaded application cannot take advantage of multiprocessing; kernel routines themselves can be multithreaded.
10. Threading
Advantages of Thread:
The various advantages of using Thread are:
➢ Threads minimize the context switching time.
➢ Use of threads provides concurrency within a process.
➢ Efficient communication.
➢ It is more economical to create and context switch threads.
➢ Threads allow utilization of multiprocessor architectures to a greater scale and
efficiency.
11. Swapping
Swapping:
Swapping is a mechanism in which a process can be swapped temporarily out of main
memory to secondary storage and make that memory available to other processes.
➢ Swapping is also known as a technique for memory compaction.
➢ The total time taken by the swapping process includes the time it takes to move the
entire process to secondary storage and then to copy it back to main memory.
12. Process Scheduling
Process Scheduling:
The process scheduling is the activity of the process manager that handles the removal
of the running process from the CPU on the basis of a particular strategy.
➢ Process scheduling is an essential part of multiprogramming operating systems.
➢ The OS maintains all PCBs in Process Scheduling Queues.
❖ Job queue: This queue keeps all the processes in the
system.
❖ Ready queue: This queue keeps a set of all processes
residing in main memory, ready and waiting to
execute.
❖ Device queues: The processes which are blocked due
to the unavailability of an I/O device constitute this queue.
13. Process Scheduling
Schedulers:
Schedulers are special system software which handle process scheduling in various ways.
➢ The main task is to select the jobs to be submitted into the system and to decide which process to run.
➢ Schedulers are of three types:
❖ Long-Term Scheduler:
▪ It is also called a job scheduler.
▪ A long-term scheduler determines which programs are admitted to the system for processing.
❖ Short-Term Scheduler:
▪ It is also called the CPU scheduler.
▪ Its main objective is to increase system performance in accordance with the chosen set of criteria.
❖ Medium-Term Scheduler:
▪ Medium-term scheduling is a part of swapping.
▪ It removes the processes from the memory. It reduces the degree of multiprogramming.
14. Process Scheduling
Scheduling Criteria:
The scheduler selects the processes in memory that are ready to execute and allocates the
CPU based on certain scheduling criteria.
➢ Scheduling Criteria are based on:
❖ CPU utilization – keep the CPU as busy as possible
❖ Throughput – No. of processes that complete their execution per time unit
❖ Turnaround time – amount of time to execute a particular process
❖ Waiting time – amount of time a process has been waiting in the ready queue
❖ Response time – amount of time it takes from when a request was submitted until the
first response is produced, not output (for time-sharing environment)
15. Process Scheduling
Scheduling algorithms:
A Process Scheduler schedules different processes to be assigned to the CPU based on
particular scheduling algorithms.
➢ These algorithms are either non-preemptive or preemptive.
➢ Popular process scheduling algorithms include:
❖ First-Come, First-Served (FCFS) Scheduling
❖ Shortest-Job-Next (SJN) Scheduling
❖ Priority Scheduling
❖ Round Robin (RR) Scheduling
❖ Multiple-Level Queues Scheduling.
16. Scheduling Algorithms
First Come First Serve (FCFS):
In First Come First Serve (FCFS) scheduling, jobs are executed on a first-come,
first-served basis.
➢ It is a non-preemptive scheduling algorithm.
➢ Easy to understand and implement.
➢ Its implementation is based on FIFO queue.
➢ Poor in performance, as the average wait time is high.
17. Scheduling Algorithms
First Come First Serve (FCFS):
In First Come First Serve (FCFS) scheduling, jobs are executed on a first-come,
first-served basis.
Example: FCFS Scheduling
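As a minimal sketch of how an FCFS example is worked out (the burst times below are invented, with all jobs arriving at time 0), each job waits for the sum of the bursts ahead of it:

```python
# FCFS sketch: jobs run strictly in arrival order.
bursts = [24, 3, 3]          # assumed CPU bursts of P1, P2, P3
waits, clock = [], 0
for b in bursts:
    waits.append(clock)      # this job waited until "clock"
    clock += b               # it now occupies the CPU for b units
print(waits, sum(waits) / len(waits))  # [0, 24, 27] 17.0
```

Note how one long job at the head of the queue (P1 here) inflates everyone's waiting time, which is why FCFS is called poor in performance.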
18. Scheduling Algorithms
Shortest Job Next (SJN):
This is also known as Shortest Job First: associate with each process the length of its
next CPU burst, and use these lengths to schedule the process with the shortest time first.
➢ This is a non-preemptive scheduling algorithm.
➢ Best approach to minimize waiting time.
➢ Easy to implement in Batch systems where required CPU time is known in advance.
➢ Impossible to implement in interactive systems where required CPU time is not
known.
➢ The processor should know in advance how much time the process will take.
19. Scheduling Algorithms
Shortest Job Next (SJN):
This is also known as Shortest Job First: associate with each process the length of its
next CPU burst, and use these lengths to schedule the process with the shortest time first.
➢ Example:
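A minimal SJN sketch (burst times invented, all jobs available at time 0): sorting the jobs by burst length is exactly the scheduling decision, and it minimizes total waiting time:

```python
# SJN/SJF sketch: run the shortest remaining job first.
bursts = {"P1": 6, "P2": 8, "P3": 7, "P4": 3}  # assumed CPU bursts
waits, clock = {}, 0
for name, b in sorted(bursts.items(), key=lambda kv: kv[1]):
    waits[name] = clock      # waited until all shorter jobs finished
    clock += b
print(waits)  # {'P4': 0, 'P1': 3, 'P3': 9, 'P2': 16}
```

The average wait here is 7 time units; running the same jobs in FCFS order (P1..P4) would give an average of 9.25.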
20. Scheduling Algorithms
Priority Scheduling:
Priority scheduling is a priority based algorithm and one of the most common
scheduling algorithms in batch systems.
➢ Each process is assigned a priority. Process with highest priority is to be executed first
and so on.
➢ Processes with same priority are executed on first come first served basis.
➢ Priority can be decided based on memory requirements, time requirements or any other
resource requirement.
21. Scheduling Algorithms
Priority Based Scheduling:
Priority scheduling is a non-preemptive algorithm and one of the most common
scheduling algorithms in batch systems.
➢ Example:
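A non-preemptive priority sketch with invented burst times and priorities (lower number = higher priority here; conventions vary by OS). Python's stable sort gives the first-come-first-served tie-break described above for free:

```python
# Priority scheduling sketch: (name, burst, priority); lower = higher priority.
jobs = [("P1", 10, 3), ("P2", 1, 1), ("P3", 2, 4), ("P4", 1, 4)]
order = sorted(jobs, key=lambda j: j[2])   # stable: equal priorities keep FCFS order
waits, clock = {}, 0
for name, burst, _ in order:
    waits[name] = clock
    clock += burst
print([n for n, _, _ in order])  # ['P2', 'P1', 'P3', 'P4']
```

P3 and P4 share priority 4, so P3 (which arrived first in the job list) runs first.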
22. Scheduling Algorithms
Round Robin Scheduling:
Each process gets a small unit of CPU time (a time quantum); after this time has elapsed,
the process is preempted and added to the end of the ready queue.
➢ Round Robin is the preemptive process scheduling algorithm.
➢ Each process is provided a fixed time to execute, called a quantum.
➢ Once a process has executed for a given time period, it is preempted and another process
executes for its time period.
➢ Context switching is used to save states of preempted processes.
23. Scheduling Algorithms
Round Robin Scheduling:
Each process gets a small unit of CPU time (a time quantum); after this time has elapsed,
the process is preempted and added to the end of the ready queue.
➢ Example:
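A Round Robin sketch (quantum and burst times invented): each process runs for at most one quantum, then rejoins the tail of the ready queue until its burst is used up:

```python
from collections import deque

quantum = 4
ready = deque([("P1", 10), ("P2", 5), ("P3", 2)])  # (name, remaining burst)
timeline = []
while ready:
    name, left = ready.popleft()
    run = min(quantum, left)
    timeline.append((name, run))       # a context switch happens here
    if left - run:                     # unfinished: back of the queue
        ready.append((name, left - run))
print(timeline)
# [('P1', 4), ('P2', 4), ('P3', 2), ('P1', 4), ('P2', 1), ('P1', 2)]
```

Short jobs like P3 finish quickly instead of waiting behind P1's long burst, which is the main advantage of RR over FCFS for interactive systems.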
24. Scheduling Algorithms
Multiple-Level Queues Scheduling:
Multiple-level queues are not an independent scheduling algorithm.
➢ They make use of other existing algorithms to group and schedule jobs with common
characteristics.
❖ Multiple queues are maintained for processes with common characteristics.
❖ Each queue can have its own scheduling algorithms.
❖ Priorities are assigned to each queue.
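As an invented sketch of the idea: a high-priority "system" queue is always served before a "batch" queue. Each queue could run its own algorithm; both are plain FIFO here for brevity:

```python
from collections import deque

# Two queues with a fixed priority between them (names invented).
queues = {"system": deque(["S1", "S2"]), "batch": deque(["B1"])}

def pick_next():
    for level in ("system", "batch"):   # higher-priority queue checked first
        if queues[level]:
            return queues[level].popleft()
    return None                          # nothing left to run

served = [pick_next() for _ in range(3)]
print(served)  # ['S1', 'S2', 'B1']
```

A batch job only runs once the system queue is empty; real systems usually add aging or time-slicing between queues so low-priority jobs do not starve.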
25. Interprocess Communication
Interprocess Communication (IPC):
Interprocess communication is a mechanism that allows processes to communicate
with each other and synchronize their actions.
➢ The communication between these processes can be seen as a method of co-operation
between them.
➢ Some of the methods to provide IPC:
❖ Message Queue.
❖ Shared Memory.
❖ Signal.
❖ Shared Files and Pipe
❖ Socket
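As a minimal sketch of one method from the list, Python's `os.pipe` exposes a kernel pipe. Normally the two ends are split between a parent and child process (e.g. after `fork`); for brevity both ends are used within one process here:

```python
import os

# Pipe sketch: bytes written to the write end come out of the read end.
r, w = os.pipe()
os.write(w, b"hello via pipe")   # producer side
os.close(w)                      # closing signals end-of-data to the reader
msg = os.read(r, 64).decode()    # consumer side
os.close(r)
print(msg)  # hello via pipe
```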
26. Process Synchronization
Process Synchronization:
Process synchronization means sharing system resources between processes in such a way
that concurrent access to shared data is handled, thereby minimizing the chance of
inconsistent data.
➢ Process synchronization ensures proper coordination among the processes.
➢ Maintaining data consistency demands mechanisms to ensure synchronized execution
of cooperating processes.
➢ Process Synchronization can be provided by using several different tools like:
❖ Semaphores
❖ Mutual Exclusion or Mutex
❖ Monitor
27. Process Synchronization
Process Synchronization:
Process synchronization means sharing system resources between processes in such a way
that concurrent access to shared data is handled, thereby minimizing the chance of
inconsistent data.
➢ The synchronization problem arises with cooperative processes because resources are
shared among them.
➢ On the basis of synchronization, processes are categorized as one of the following two types:
❖ Independent Process: Execution of one process does not affect the execution of
other processes.
❖ Cooperative Process: Execution of one process affects the execution of other
processes.
28. Process Synchronization
Process Synchronization:
Race Condition:
➢ When several processes access and manipulate the same data at the same time, they may enter into a race
condition.
➢ A race condition occurs among processes that share common storage for reads and writes.
➢ Race conditions occur due to improper synchronization of shared-memory access.
Critical section problem:
➢ Critical section is a code segment that can be accessed by only one process at a time.
➢ Critical section contains shared variables which need to be synchronized to maintain consistency of data
variables.
➢ Any solution to the critical section problem must satisfy three requirements:
❖ Mutual Exclusion
❖ Progress
❖ Bounded Waiting.
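The critical section idea can be sketched in a few lines. Here `counter += 1` is the critical section: it is really a read-modify-write sequence, and the lock enforces mutual exclusion around it (the counts are invented for the example):

```python
import threading

counter = 0
lock = threading.Lock()

def increment(times):
    global counter
    for _ in range(times):
        with lock:          # only one thread at a time in the critical section
            counter += 1    # read, modify, write: the critical section

threads = [threading.Thread(target=increment, args=(100_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 400000 -- without the lock this can come out lower
```

Without the lock, two threads can read the same old value and both write back `old + 1`, losing an update: exactly the race condition described above.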
29. Process Synchronization
Semaphores:
A semaphore is a signaling mechanism and a thread that is waiting on a semaphore
can be signaled by another thread.
➢ A semaphore uses two atomic operations, wait and signal for process synchronization.
➢ Classical problems of Synchronization with Semaphore Solution:
❖ Bounded-buffer (or Producer-Consumer) Problem
❖ Dining- Philosophers Problem
❖ Readers and Writers Problem
❖ Sleeping Barber Problem
30. Process Synchronization
Bounded-buffer (or Producer-Consumer) Problem:
The bounded-buffer problem is also called the producer-consumer problem; it generalizes
coordination between producers and consumers through a fixed-size buffer.
➢ The solution is to create two counting semaphores, “full” and “empty”, to
keep track of the current number of full and empty buffer slots respectively.
➢ Producers produce items and consumers consume them, but each uses one
buffer slot at a time.
31. Process Synchronization
Dining- Philosophers Problem:
The Dining Philosophers Problem states that K philosophers are seated around a circular
table with one chopstick between each pair of philosophers.
➢ There is one chopstick between each pair of adjacent philosophers.
➢ A philosopher may eat if he can pick up the two chopsticks
adjacent to him.
➢ A chopstick may be picked up by either of its adjacent
philosophers, but not by both at once.
➢ This problem involves the allocation of limited resources to a
group of processes in a deadlock-free and starvation-free
manner.
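One classic deadlock-free strategy (not spelled out on the slide) is resource ordering: every philosopher picks up the lower-numbered chopstick first, so a circular wait can never form. A sketch with invented counts:

```python
import threading

K = 5
chopsticks = [threading.Lock() for _ in range(K)]
meals = [0] * K

def philosopher(i):
    left, right = i, (i + 1) % K
    first, second = sorted((left, right))   # global ordering breaks circular wait
    for _ in range(10):
        with chopsticks[first]:
            with chopsticks[second]:
                meals[i] += 1               # eating with both chopsticks held

threads = [threading.Thread(target=philosopher, args=(i,)) for i in range(K)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(meals)  # [10, 10, 10, 10, 10]
```

If instead every philosopher grabbed the left chopstick first, all K could hold one chopstick and wait forever for the other: the deadlock the ordering prevents.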
32. Process Synchronization
Readers and Writers Problem:
Suppose that a database is to be shared among several concurrent processes. Some of
these processes may want only to read the database (the readers), while others may want
to update it (the writers).
➢ In OS terms, this situation is called the readers-writers problem.
➢ Problem parameters:
❖ One set of data is shared among a number of processes.
❖ Once a writer is ready, it performs its write. Only one writer may write at a time.
❖ If a process is writing, no other process can read it.
❖ If at least one reader is reading, no other process can write.
❖ Readers only read; they may not write.
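The rules above can be sketched with the classic "first readers-writers" solution: a counter of active readers, where the first reader locks writers out and the last reader lets them back in. The thread mix below is invented for the example:

```python
import threading

resource = threading.Lock()     # held by a writer, or by the reader group
rc_lock = threading.Lock()      # protects read_count itself
read_count = 0
shared, log = 0, []

def reader():
    global read_count
    with rc_lock:
        read_count += 1
        if read_count == 1:
            resource.acquire()  # first reader blocks writers
    log.append(shared)          # many readers may read concurrently here
    with rc_lock:
        read_count -= 1
        if read_count == 0:
            resource.release()  # last reader admits writers again

def writer():
    global shared
    with resource:              # only one writer at a time
        shared += 1

threads = [threading.Thread(target=f) for f in (writer, reader, reader, writer)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(shared)  # 2
```

This variant can starve writers if readers keep arriving; writer-preference solutions address that at the cost of extra bookkeeping.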
33. Process Synchronization
Sleeping Barber Problem:
The Sleeping Barber Problem models a barber shop with one barber, one barber chair
and N chairs for waiting customers.
➢ Barber shop with one barber, one barber chair and N
chairs to wait in.
➢ When there are no customers, the barber sleeps in the
barber chair and must be woken when a customer comes in.
➢ While the barber is cutting hair, new customers take the
empty seats to wait, or leave if there is no vacancy.
34. Deadlocks
Deadlock:
Deadlock is a situation where a set of processes are blocked because each process is
holding a resource and waiting for another resource acquired by some other process.
➢ In operating systems, deadlock arises when two or more processes
hold some resources and wait for resources held by the
other(s).
➢ Example:
❖ Process 1 is holding Resource 1 and waiting for resource 2
which is acquired by process 2, and process 2 is waiting
for resource 1.
35. Deadlocks
Deadlock:
Methods for handling deadlock .
➢ There are three ways to handle deadlock:
❖ Deadlock prevention or avoidance: The idea is to not let the system enter a
deadlock state; avoidance requires advance information about the resources a
process will request.
❖ Deadlock detection and recovery: Let deadlock occur, then use detection and
preemption to handle it once it has occurred.
❖ Ignore the problem altogether: If deadlock is very rare, then let it happen and
reboot the system. This is the approach that both Windows and UNIX take.
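Deadlock detection is usually framed as cycle detection on a wait-for graph: an edge A → B means "A waits for a resource held by B", and a cycle means deadlock. A sketch, using the P1/P2 example from the slide above:

```python
# Wait-for graph cycle detection via depth-first search.
def has_cycle(graph):
    visiting, done = set(), set()
    def dfs(node):
        if node in visiting:
            return True                 # back edge found: a cycle exists
        if node in done:
            return False
        visiting.add(node)
        if any(dfs(n) for n in graph.get(node, ())):
            return True
        visiting.discard(node)
        done.add(node)
        return False
    return any(dfs(n) for n in list(graph))

# P1 waits for P2 and P2 waits for P1: deadlocked.
print(has_cycle({"P1": ["P2"], "P2": ["P1"]}))  # True
# P1 waits for P2, but P2 waits for nobody: no deadlock.
print(has_cycle({"P1": ["P2"]}))                # False
```

A detection-and-recovery system would run a check like this periodically and, on finding a cycle, preempt resources from (or terminate) one of the processes in it.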