Chapter 7
Deadlocks
System Model
• A system consists of a finite number of resources to be distributed
among a number of competing processes.
• If a system has two CPUs, then the resource type CPU has two
instances.
• A process must request a resource before using it and must
release the resource after using it.
• Each process utilizes a resource as follows:
– Request: The process requests the resource. If the request cannot be granted immediately, then the requesting process must wait until it can acquire the resource.
– Use: The process can operate on the resource.
– Release: The process releases the resource.
Deadlock Example
• Deadlock: A situation in which a process has acquired some resources and is waiting for additional resources held by another process, which in turn is waiting for resources held by the first. Neither process can proceed, and the operating system cannot resolve the situation on its own.
• Different resource types: P1 holds a DVD drive and P2 holds a printer.
P1 requests the printer and P2 requests the DVD drive.
P1 waits for P2 to release the printer, and P2 waits for P1 to release the DVD drive.
• Same resource type: P1, P2, and P3 each hold one of three CD-RW drives; if each now needs another drive, the three processes are deadlocked.
Deadlock Characterization
• Mutual exclusion: Only one process at a time can use a resource.
If a process requests that resource, the requesting process must
be delayed until the resource has been released.
• Hold and wait: A process holding at least one resource is waiting
to acquire additional resources held by other processes.
• No preemption: A resource can be released only voluntarily by the
process holding it, after that process has completed its task.
• Circular wait: there exists a set {P0, P1, …, Pn} of waiting
processes such that P0 is waiting for a resource that is held by P1,
P1 is waiting for a resource that is held by P2, …, Pn–1 is waiting for
a resource that is held by Pn, and Pn is waiting for a resource that
is held by P0.
Four conditions for a deadlock to occur
Resource-Allocation Graph
• It consists of a set of vertices V and a set of edges E.
• V is partitioned into two types:
– P = {P1, P2, …, Pn}, the set consisting of all the processes in the system.
– R = {R1, R2, …, Rm}, the set consisting of all resource types in the system.
• Request edge – Directed edge Pi → Rj: process Pi has requested an instance of resource type Rj and is currently waiting for that resource.
• Assignment edge – Directed edge Rj → Pi: an instance of resource type Rj has been allocated to process Pi.
Resource-Allocation Graph (Cont.)
• Process Pi
• Resource type Rj with 4 instances
• Pi → Rj: Pi requests an instance of Rj
• Rj → Pi: Pi is holding an instance of Rj
Resource Allocation Graph with a Deadlock
Resource Allocation Graph with no deadlock
Resource-Allocation Graph (Cont.)
• Process states:
• Process P1 is holding an instance of resource type R2 and is waiting
for an instance of resource type R1 .
• Process P2 is holding an instance of R1 and an instance of R2 and is
waiting for an instance of R3.
• Process P3 is holding an instance of R3 .
• Methods for Handling Deadlocks:
• Ensure that the system will never enter a deadlock state:
– Deadlock prevention
– Deadlock avoidance
• Allow the system to enter a deadlock state and then recover.
• Ignore the problem and pretend that deadlocks never occur in the
system;
Deadlock Prevention
• Mutual Exclusion – It must hold for non-sharable resources. We cannot prevent deadlocks by denying the mutual-exclusion condition, because some resources are intrinsically non-sharable.
E.g.: A printer cannot be simultaneously shared by several processes (non-sharable).
• Hold and Wait – To ensure that this condition does not occur in the system, we must guarantee that whenever a process requests a resource, it does not hold any other resources.
– One protocol requires process to request and be allocated all its
resources before it begins execution.
– Another protocol allows process to request resources only when
the process has none allocated to it.
Deadlock Prevention (Cont.)
• No Preemption: To ensure that this condition does not hold, use the following protocol.
– If a process holding some resources requests another resource that cannot be immediately allocated to it, then all resources the process is currently holding are preempted.
– Preempted resources are added to the list of resources for which the process is waiting.
– The process will be restarted only when it can regain its old resources, as well as the new ones that it is requesting.
– This protocol is applied to resources whose state can be easily saved and restored later.
Deadlock Prevention (Cont.)
• Circular Wait
– Impose a total ordering of all resource types, and require that each process requests resources in an increasing order of enumeration.
– Formally, we define a one-to-one function F: R → N, where N is the set of natural numbers and R is the set of resource types (e.g., tape drives, disk drives, and printers).
– Initially, a process may request any number of instances of a resource type Ri. After that, the process can request instances of resource type Rj if and only if F(Rj) > F(Ri).
– Alternatively, we can require that a process requesting an instance of resource type Rj must have released any resources Ri such that F(Ri) >= F(Rj).
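The ordering rule above can be sketched as follows; the resource names and the values of F are illustrative, not taken from the slides:

```python
# Circular-wait prevention via a total ordering of resource types.
# F assigns each resource type a number; a process may request an
# instance of a new resource only if F(new) exceeds F(r) for every
# resource r it already holds.
F = {"tape_drive": 1, "disk_drive": 5, "printer": 12}

def can_request(held, new):
    """Allow the request only if it respects the increasing order."""
    return all(F[new] > F[r] for r in held)

assert can_request([], "tape_drive")               # first request: always allowed
assert can_request(["tape_drive"], "printer")      # 12 > 1: allowed
assert not can_request(["printer"], "disk_drive")  # 5 < 12: must release first
```

Because every process acquires resources in strictly increasing F order, no cycle of waits can form.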
Deadlock Example with Lock Ordering
void transaction(Account from, Account to, double amount)
{
mutex lock1, lock2;
lock1 = get_lock(from);
lock2 = get_lock(to);
acquire(lock1);
acquire(lock2);
withdraw(from, amount);
deposit(to, amount);
release(lock2);
release(lock1);
}
Transactions 1 and 2 execute concurrently: Transaction 1 transfers $25 from account A to account B, and Transaction 2 transfers $50 from account B to account A. Because each acquires the two locks in the opposite order, a deadlock is possible.
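A common remedy, applying the lock-ordering idea from deadlock prevention, is to have both transactions acquire the two account locks in one global order. A minimal Python sketch, with account names and balances assumed for illustration:

```python
import threading

locks = {"A": threading.Lock(), "B": threading.Lock()}

def transaction(from_id, to_id, balances, amount):
    # Acquire locks in a fixed global order (here: sorted by account id),
    # so Transaction 1 (A -> B) and Transaction 2 (B -> A) both lock "A"
    # first, then "B" -- a circular wait is impossible.
    first, second = sorted([from_id, to_id])
    with locks[first], locks[second]:
        balances[from_id] -= amount
        balances[to_id] += amount

balances = {"A": 100, "B": 100}
t1 = threading.Thread(target=transaction, args=("A", "B", balances, 25))
t2 = threading.Thread(target=transaction, args=("B", "A", balances, 50))
t1.start(); t2.start(); t1.join(); t2.join()
assert balances == {"A": 125, "B": 75}
```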
Deadlock Avoidance
• Simplest and most useful model requires that each process
declare the maximum number of resources of each type that it
may need.
• Given this a priori information, it is possible to construct an
algorithm which ensures that the system will never enter a
deadlocked state.
• Resource-allocation state is defined by the number of available
and allocated resources, and the maximum demands of the
processes.
• The deadlock-avoidance algorithm dynamically examines the
resource-allocation state to ensure that there can never be a
circular-wait condition.
Safe State
• A state is in safe if the system can allocate resources to a process in
some order and still avoid a deadlock.
• The system is in a safe state if there exists a sequence <P1, P2, …, Pn> of all the processes such that, if Pi's resource needs are not immediately available, then Pi can wait until all Pj with j < i have finished.
– When Pj is finished, Pi can obtain the needed resources, execute, return its allocated resources, and terminate.
– When Pi terminates, Pi+1 can obtain its needed resources, and so on.
Basic Facts
• If a system is in a safe state ⇒ no deadlocks.
• If a system is in an unsafe state ⇒ possibility of deadlock.
• Avoidance ⇒ ensure that the system will never enter an unsafe state.
Avoidance Algorithms :
• Single instance of a resource type
Use a resource-allocation graph algorithm
• Multiple instances of a resource type
Use the banker’s algorithm
Resource-Allocation Graph Algorithm
• Claim edge Pi → Rj indicates that process Pi may request resource Rj at some time in the future; represented by a dashed line.
• Claim edge converts to request edge when a process requests a
resource.
• Request edge converted to an assignment edge when the resource
is allocated to the process.
• When a resource is released by a process, assignment edge
reconverts to a claim edge.
• Resources must be claimed a priori in the system.
Deadlock Avoidance
Unsafe State In Resource-Allocation Graph
Banker’s Algorithm
• It is applicable to resource allocation system with multiple instances
of each resource type.
• The name was chosen because the algorithm could be used in a
banking system to ensure that the bank never allocated its available
cash in such a way that it could no longer satisfy the needs of all its
customers.
Banker’s Algorithm
• Available: Vector of length m. If available [j] = k, there are k
instances of resource type Rj available
• Max: n x m matrix. If Max [i,j] = k, then process Pi may request at
most k instances of resource type Rj
• Allocation: n x m matrix. If Allocation[i,j] = k then Pi is currently
allocated k instances of Rj
• Need: n x m matrix. If Need[i,j] = k, then Pi may need k more
instances of Rj to complete its task
Need [i,j] = Max[i,j] – Allocation [i,j]
Let n = number of processes, and m = number of resources types.
Safety Algorithm
1. Let Work and Finish be vectors of length m and n, respectively.
Initialize:
Work = Available
Finish[i] = false for i = 0, 1, …, n-1
2. Find an i such that both:
(a) Finish[i] = false
(b) Needi ≤ Work
If no such i exists, go to step 4.
3. Work = Work + Allocationi
Finish[i] = true
Go to step 2.
4. If Finish[i] == true for all i, then the system is in a safe state.
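A minimal sketch of this safety algorithm in Python; the Available, Allocation, and Need data below is an assumed example, not one given on these slides:

```python
def is_safe(available, allocation, need):
    n, m = len(allocation), len(available)
    work, finish = list(available), [False] * n       # step 1
    while True:
        for i in range(n):                            # step 2: Finish[i] false, Need_i <= Work
            if not finish[i] and all(need[i][j] <= work[j] for j in range(m)):
                for j in range(m):                    # step 3: P_i finishes, returns resources
                    work[j] += allocation[i][j]
                finish[i] = True
                break
        else:
            return all(finish)                        # step 4: safe iff all could finish

available  = [3, 3, 2]
allocation = [[0,1,0], [2,0,0], [3,0,2], [2,1,1], [0,0,2]]
need       = [[7,4,3], [1,2,2], [6,0,0], [0,1,1], [4,3,1]]
assert is_safe(available, allocation, need)           # e.g. <P1,P3,P0,P2,P4> is safe
assert not is_safe([0, 0, 0], allocation, need)       # nothing available: unsafe
```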
Resource-Request Algorithm for Process Pi
Requesti = request vector for process Pi. If Requesti [j] = k then
process Pi wants k instances of resource type Rj
1. If Requesti ≤ Needi, go to step 2. Otherwise, raise an error condition, since the process has exceeded its maximum claim.
2. If Requesti ≤ Available, go to step 3. Otherwise Pi must wait, since the resources are not available.
3. Pretend to allocate requested resources to Pi by modifying the
state as follows:
Available = Available – Requesti;
Allocationi = Allocationi + Requesti;
Needi = Needi – Requesti;
• If safe ⇒ the resources are allocated to Pi.
• If unsafe ⇒ Pi must wait, and the old resource-allocation state is restored.
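The steps above can be sketched as follows, reusing the safety algorithm; the data and the request are assumed example values:

```python
def is_safe(available, allocation, need):
    n, m = len(allocation), len(available)
    work, finish = list(available), [False] * n
    while True:
        for i in range(n):
            if not finish[i] and all(need[i][j] <= work[j] for j in range(m)):
                for j in range(m):
                    work[j] += allocation[i][j]
                finish[i] = True
                break
        else:
            return all(finish)

def request_resources(i, request, available, allocation, need):
    m = len(available)
    # Step 1: the request must not exceed P_i's declared maximum claim
    if any(request[j] > need[i][j] for j in range(m)):
        raise ValueError("process exceeded its maximum claim")
    # Step 2: if the resources are not available, P_i must wait
    if any(request[j] > available[j] for j in range(m)):
        return "wait"
    # Step 3: pretend to allocate, then test whether the new state is safe
    for j in range(m):
        available[j] -= request[j]
        allocation[i][j] += request[j]
        need[i][j] -= request[j]
    if is_safe(available, allocation, need):
        return "granted"
    # Unsafe: restore the old resource-allocation state
    for j in range(m):
        available[j] += request[j]
        allocation[i][j] -= request[j]
        need[i][j] += request[j]
    return "wait"

available  = [3, 3, 2]
allocation = [[0,1,0], [2,0,0], [3,0,2], [2,1,1], [0,0,2]]
need       = [[7,4,3], [1,2,2], [6,0,0], [0,1,1], [4,3,1]]
assert request_resources(1, [1, 0, 2], available, allocation, need) == "granted"
```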
Deadlock Detection
• Allow system to enter deadlock state
• Detection algorithm
• Recovery scheme
Single Instance of Each Resource Type
• If all the resources have single instance then we use a variant of
resource allocation graph called wait for graph.
• We obtain this graph by removing the resource nodes and
collapsing the appropriate edges.
• Maintains wait-for graph
– Nodes are processes
– Pi → Pj means Pi is waiting for Pj to release a resource that Pi needs.
• Periodically invoke an algorithm that searches for a cycle in the
graph. If there is a cycle, there exists a deadlock.
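Cycle detection in a wait-for graph can be sketched with a depth-first search; the edges below are illustrative:

```python
def has_cycle(graph):
    """graph: dict mapping each process to the processes it waits for."""
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {p: WHITE for p in graph}

    def dfs(p):
        color[p] = GRAY                   # on the current DFS path
        for q in graph.get(p, []):
            if color[q] == GRAY:          # back edge: a cycle, hence deadlock
                return True
            if color[q] == WHITE and dfs(q):
                return True
        color[p] = BLACK                  # fully explored
        return False

    return any(color[p] == WHITE and dfs(p) for p in graph)

# P1 waits for P2, P2 for P3, P3 for P1: a cycle, so deadlock
assert has_cycle({"P1": ["P2"], "P2": ["P3"], "P3": ["P1"]})
# Same chain without the closing edge: no deadlock
assert not has_cycle({"P1": ["P2"], "P2": ["P3"], "P3": []})
```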
Resource-Allocation Graph and Wait-for Graph
Resource-Allocation Graph Corresponding wait-for graph
Several Instances of a Resource Type: Detection Algorithm
1. Let Work and Finish be vectors of length m and n, respectively.
Initialize:
(a) Work = Available
(b) For i = 1, 2, …, n: if Allocationi ≠ 0, then Finish[i] = false; otherwise, Finish[i] = true
2. Find an index i such that both:
(a) Finish[i] == false
(b) Requesti ≤ Work
If no such i exists, go to step 4.
3. Work = Work + Allocationi
Finish[i] = true
Go to step 2.
4. If Finish[i] == false for some i, 1 ≤ i ≤ n, then the system is in a deadlocked state. Moreover, if Finish[i] == false, then Pi is deadlocked.
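A sketch of this detection algorithm: it mirrors the safety algorithm, but compares each process's current Request vector, rather than its worst-case Need, against Work. The data is an assumed example:

```python
def find_deadlocked(available, allocation, request):
    n, m = len(allocation), len(available)
    work = list(available)
    # Step 1(b): a process holding nothing cannot be part of a deadlock
    finish = [all(a == 0 for a in allocation[i]) for i in range(n)]
    progress = True
    while progress:                                   # steps 2-3
        progress = False
        for i in range(n):
            if not finish[i] and all(request[i][j] <= work[j] for j in range(m)):
                for j in range(m):
                    work[j] += allocation[i][j]       # P_i can finish; reclaim
                finish[i] = True
                progress = True
    return [i for i in range(n) if not finish[i]]     # step 4: deadlocked set

available  = [0, 0, 0]
allocation = [[0,1,0], [2,0,0], [3,0,3], [2,1,1], [0,0,2]]
request    = [[0,0,0], [2,0,2], [0,0,0], [1,0,0], [0,0,2]]
assert find_deadlocked(available, allocation, request) == []   # no deadlock
request[2] = [0, 0, 1]   # P2 now requests one more instance of R2
assert find_deadlocked(available, allocation, request) == [1, 2, 3, 4]
```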
Detection-Algorithm Usage
• When, and how often, to invoke depends on:
– How often a deadlock is likely to occur?
– How many processes will be affected by deadlock when it
happens?
• If detection algorithm is invoked arbitrarily, there may be many
cycles in the resource graph and so we would not be able to tell
which of the many deadlocked processes “caused” the deadlock.
• Recovery from Deadlock :
• Inform the operator that a deadlock has occurred and to let the
operator deal with the deadlock manually.
• Let the system recover from the deadlock automatically.
Recovery from Deadlock: Process Termination
• Abort all deadlocked processes: The deadlocked processes may have computed for a long time, and the results of these partial computations must be discarded and probably will have to be recomputed later.
• Abort one process at a time until the deadlock cycle is eliminated: after each process is aborted, a deadlock-detection algorithm must be invoked to determine whether any processes are still deadlocked.
• In which order should we choose to abort?
1. Priority of the process.
2. How long process has computed, and how much longer to
completion.
3. How many resources that the process has used.
4. How many resources that process needs in order to complete.
5. How many processes will need to be terminated.
Recovery from Deadlock: Resource Preemption
 Resource Preemption : We successively preempt some resources
from processes and give these resources to other processes till
the deadlock cycle is broken.
• Selecting a victim – Which resources and which processes are to
be preempted? As in process termination, we must determine
the order of preemption to minimize cost.
• Rollback – If we preempt a resource from a process, what should be done with that process? Since it is missing some needed resource, we must roll back the process to some safe state and restart it from that state.
• Starvation – How can we guarantee that resources will not
always be preempted from the same process?
Chapter 8
Memory management
Background
• Memory consists of a large array of words, each with its own address.
• Instruction execution cycle : Fetches an instruction from memory.
The instruction is then decoded and may cause operands to be
fetched from memory.
• After the instruction has been executed on the operands, results
may be stored back in to memory.
• Program must be brought (from disk) into memory and placed
within a process for it to be run.
• Main memory and the registers built into the processor itself are the only storage that the CPU can access directly.
• If the data are not in memory they must be moved before CPU
can operate on them.
• Make sure that each process has a separate memory space.
• To do this, we need the ability to determine the range of legal
addresses that the process may access and to ensure that the
process can access only these legal addresses.
• Legal addresses – Protection is provided through two registers: the base and limit registers.
Basic Hardware
• A pair of base and limit registers define the logical address
space.
• Logical address : It is the address generated by the CPU.
• Base register holds the smallest legal physical memory address.
• Limit register specifies the size of the range.
• CPU must check every memory access generated in user mode
to be sure that it is between base and limit for that user.
Hardware Address Protection
Base – smallest legal physical address
Limit – size of the range
E.g.: CPU address 256002
Base = 256000, Limit = 300040 - 256000 = 44040
Base + Limit = 300040, so 256002 is legal because 256000 ≤ 256002 < 300040.
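The check the hardware performs can be sketched directly from these numbers:

```python
def legal(address, base, limit):
    """An address is legal iff base <= address < base + limit."""
    return base <= address < base + limit

base, limit = 256000, 44040
assert legal(256002, base, limit)       # 256000 <= 256002 < 300040
assert not legal(300040, base, limit)   # one past the last legal address
assert not legal(255999, base, limit)   # below the base: trap to the OS
```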
• The base and limit registers can be loaded only by the operating system, which uses a special privileged instruction.
• Privileged instructions can be executed only in kernel mode, and only the operating system executes in kernel mode.
• The OS can change the values of these registers, but user programs cannot change their contents.
Binding of Instructions and Data to Memory
• Binding of instructions and data to memory addresses can
happen at three different stages.
• Compile time: If it is known at compile time where the process will reside in memory, then absolute code can be generated. E.g.: MS-DOS.
• Load time: If it is not known at compile time where the process
will reside in memory, then the compiler must generate
relocatable code.
• Execution time: If the process can be moved during its execution
from one memory segment to another then binding will be
delayed until run time.
Multistep Processing of a User Program
Memory-Management Unit (MMU)
• The run-time mapping from virtual to physical addresses is done by a hardware device called the MMU.
Dynamic relocation using a relocation register
Logical versus Physical address space
• The address generated by CPU is known as logical address(virtual
address).
• The address which is seen by the memory unit is known as
physical address.
• The compile-time and load-time address-binding methods generate identical logical and physical addresses.
• The set of all logical addresses generated by a program is a logical
address space.
• The set of all physical addresses corresponding to these logical
addresses is a physical address space.
Swapping
• A process can be swapped temporarily out of memory to a
backing store, and then brought back into memory for continued
execution.
• Backing store : Fast disk which is large enough to accommodate
copies of all memory images for all users; must provide direct
access to these memory images.
• Round robin scheduling algorithm : When a time quantum
expires, the memory manager will start to swap out the process
that just finished and to swap another process into the memory
space that has been freed.
Schematic View of Swapping
• Roll out, roll in – A variant of swapping used with priority-based scheduling algorithms; a lower-priority process is swapped out so that a higher-priority process can be loaded and executed.
Contiguous Allocation
• The memory is usually divided into two partitions: one for the
resident operating system and one for the user processes.
• Contiguous allocation is one of the early methods. The resident operating system is usually held in low memory; user processes reside in high memory.
• Consider how to allocate available memory to the processes that
are in the input queue waiting to be brought into memory.
• In contiguous memory allocation, each process is contained in a
single contiguous section of memory.
Memory Allocation
• Simplest method for allocating memory is to divide memory into
several fixed-sized partitions.
• Each partition may contain exactly one process . Degree of
multiprogramming is bound by the number of partitions.
• In multiple partition when a partition is free, a process is selected
from the input queue and is loaded into the free partition. When
the process terminates, the partition becomes available for another
process.
• In variable partition scheme, the operating system keeps a table
indicating which parts of memory are available and which are
occupied.
• Initially, all memory is available for user processes and is considered
one large block of available memory called hole.
Dynamic Storage-Allocation Problem
• First-fit: Allocate the first hole that is big enough. Searching can
start either at the beginning of the set of holes or at the location
where the previous first-fit search ended.
• Best-fit: Allocate the smallest hole that is big enough; must
search entire list, unless ordered by size
– Produces the smallest leftover hole.
• Worst-fit: Allocate the largest hole; must also search entire list
– Produces the largest leftover hole.
– First fit and best fit are better than worst fit in terms of speed and storage utilization; first fit is generally faster.
How to satisfy a request of size n from a list of free holes?
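The three strategies can be sketched as follows; the hole sizes and the request are illustrative:

```python
def first_fit(holes, n):
    """Index of the first hole big enough for a request of size n."""
    for i, h in enumerate(holes):
        if h >= n:
            return i
    return None

def best_fit(holes, n):
    """Index of the smallest adequate hole (smallest leftover)."""
    fits = [(h, i) for i, h in enumerate(holes) if h >= n]
    return min(fits)[1] if fits else None

def worst_fit(holes, n):
    """Index of the largest hole (largest leftover)."""
    fits = [(h, i) for i, h in enumerate(holes) if h >= n]
    return max(fits)[1] if fits else None

holes = [100, 500, 200, 300, 600]
assert first_fit(holes, 212) == 1   # 500 is the first hole >= 212
assert best_fit(holes, 212) == 3    # 300 leaves the smallest leftover
assert worst_fit(holes, 212) == 4   # 600 leaves the largest leftover
```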
Fragmentation
• External Fragmentation – It exists when enough total memory space exists to satisfy a request, but the space is not contiguous.
• Internal Fragmentation – allocated memory may be slightly
larger than requested memory; this size difference is the
memory internal to a partition, but not being used.
Eg: Multiple partition scheme with a hole of 18464 bytes. Suppose
process requests 18462 bytes, if we allocate the requested block
we are left with a hole of 2 bytes(unused memory).
• Statistical analysis of first fit reveals that, given N allocated blocks, another 0.5 N blocks are lost to fragmentation.
– That is, one-third of memory may be unusable: the 50-percent rule.
Both first fit and best fit suffer from external fragmentation.
Fragmentation (Cont.)
• Reduce external fragmentation by compaction.
– Shuffle the memory contents to place all free memory together
in one large block.
– Compaction is possible only if relocation is dynamic and is done at execution time.
– Relocation then requires only moving the program and data and changing the base register to reflect the new base address.
– Another solution is to permit the logical address space to be noncontiguous, allowing physical memory to be allocated wherever such memory is available.
Paging
• Paging is a memory-management scheme that permits the physical address space of a process to be noncontiguous.
– Avoids external fragmentation.
– Avoids the problem of varying-sized memory chunks.
• Basic method:
– Divide physical memory into fixed-sized blocks called frames.
– Divide logical memory into blocks of the same size called pages.
– When a process is to be executed, its pages are loaded into available memory frames from their source.
Paging
• Every address generated by the CPU is divided in to two parts :
page number(p) and page offset (d).
• The page number is used as an index to page table. The page
table contains the base address of each page in physical
memory.
• The base address is combined with the page offset to define
the physical address that is sent to memory unit.
• If the size of the logical address space is 2^m and the page size is 2^n, then the high-order m - n bits of a logical address designate the page number, and the n low-order bits designate the page offset:
page number | page offset
p (m - n bits) | d (n bits)
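The split and the page-table lookup can be sketched as follows, assuming n = 10 (1 KB pages) and an illustrative page table:

```python
n = 10  # page size 2^n = 1024 bytes (assumed for illustration)

def translate(logical, page_table):
    p = logical >> n                 # high-order bits: page number
    d = logical & ((1 << n) - 1)     # low-order n bits: page offset
    frame = page_table[p]            # base (frame number) for that page
    return (frame << n) | d          # physical address

page_table = {0: 5, 1: 2, 2: 7}
# logical address 1027 = page 1, offset 3 -> frame 2, offset 3
assert translate(1027, page_table) == (2 << 10) | 3
```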
Paging Hardware
(Figure: the page number p indexes the page table to obtain the base address of the page, which is combined with the page offset d.)
Paging Model of Logical and Physical Memory
Paging Example
With m = 4 and n = 2: a 32-byte memory and 4-byte pages.
Logical address space = 2^m = 2^4 = 16 bytes (addresses 0 to 15).
Page size = 2^n = 2^2 = 4 bytes.
Paging
• When a process arrives in the system to be executed, its size is
expressed in pages. Each page of the process needs one frame.
• The first page of a process is loaded in to one of the physical
frames and frame number is put in to page table .
• Logical addresses are mapped to physical addresses; this mapping is hidden from the user and is controlled by the operating system.
• Since the OS manages physical memory, it must be aware of its allocation details; this information is kept in a data structure called a frame table.
Free Frames
Before allocation After allocation
Implementation of Page Table
• In the simplest case, the page table is implemented as a set of dedicated registers. These registers should be built with very high-speed logic to make the paging-address translation efficient.
• If the page table is large (millions of entries), the use of fast registers is not feasible.
• Page table is kept in main memory.
• Page-table base register (PTBR) points to the page table
• Page-table length register (PTLR) indicates size of the page table.
• Changing page tables requires changing only this one register,
substantially reducing context-switch time.
• The problem with this approach is the time required to access
a user memory location.
• If we want to access location i, we must first index in to the
page table, using the value in the PTBR offset by the page
number for i . This task requires a memory access.
• It provides us with the frame number, which is combined with
the page offset to produce the actual address. We can then
access the desired place in memory.
• The standard solution to this problem is to use a special, small, fast-lookup hardware cache called the translation look-aside buffer (TLB). The TLB is associative, high-speed memory.
Paging Hardware With TLB
Effective Access Time
• Hit ratio (α): Percentage of times that a page number is found in the TLB.
• Consider α = 80%, 20 ns for a TLB search, 100 ns for a memory access.
• Miss ratio (1 - α): Percentage of times that a page number is not found in the TLB.
• Consider a 20% miss ratio: we fail to find the page number in the TLB (20 ns), access memory for the page table (100 ns), then take 100 ns to access the desired byte in memory.
• Effective Access Time (EAT):
EAT = hit ratio × (TLB search + memory access) + miss ratio × (TLB search + page-table access + memory access)
EAT = 0.80 × (20 + 100) + 0.20 × (20 + 100 + 100) = 140 ns
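The same computation as a sketch:

```python
def eat(hit_ratio, tlb_ns, mem_ns):
    hit  = hit_ratio * (tlb_ns + mem_ns)            # TLB hit: one memory access
    miss = (1 - hit_ratio) * (tlb_ns + 2 * mem_ns)  # miss: page table + desired byte
    return hit + miss

assert abs(eat(0.80, 20, 100) - 140) < 1e-9  # matches the slide: 140 ns
```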
Memory Protection
• Memory protection in a paging environment is implemented by protection bits associated with each frame. These bits are kept in the page table.
• Valid-invalid bit attached to each entry in the page table:
– “valid” indicates that the associated page is in the process’
logical address space, and is thus a legal page.
– “invalid” indicates that the page is not in the process’ logical
address space
• The OS sets this bit for each page to allow or disallow access to
the page.
Valid (v) or Invalid (i) Bit in a Page Table
Shared Pages
• Advantage of paging is the possibility of sharing common code.
• Suppose the system supports 40 users, each of whom executes a text editor. If the text editor consists of 150 KB of code and 50 KB of data space, we need 200 KB × 40 = 8000 KB to support the 40 users.
• Make the code reentrant (sharable): reentrant code never changes during its execution.
• To support 40 users, we then need only one copy of the editor (150 KB) plus 40 copies of the 50 KB data space = 2150 KB, a significant saving.
• In the figure, a three-page editor (each page 50 KB in size) is shared among three processes. Each process has its own data page.
Shared Pages Example
Segmentation
• Segmentation is a memory-management scheme that supports
user view of memory .
• A program is a collection of segments.
– A segment is a logical unit such as: main program, procedure, function, method, object, local variables, global variables, common block, stack, symbol table, arrays.
User’s view of a program
Segmentation Architecture
• Logical address consists of a two tuple:
<segment-number, offset>
• Segment table – maps two-dimensional user-defined addresses into one-dimensional physical addresses; each table entry has:
– base – contains the starting physical address where the
segments reside in memory
– limit – specifies the length of the segment
• Segment-table base register (STBR) points to the segment
table’s location in memory
• Segment-table length register (STLR) indicates number of
segments used by a program.
Segmentation Hardware
Structure of the Page Table
• Memory structures for paging can get huge using straightforward methods.
– Consider a 32-bit logical address space, as on modern computers.
– Page size of 4 KB (2^12).
– The page table would have about 1 million entries (2^32 / 2^12 = 2^20).
– If each entry is 4 bytes -> 4 MB of physical memory for the page table alone.
• That amount of memory used to cost a lot.
• We don't want to allocate that contiguously in main memory.
Two-Level Paging Example
• A logical address (on 32-bit machine with 1K page size) is divided
into:
– a page number consisting of 22 bits
– a page offset consisting of 10 bits
• Since the page table is paged, the page number is further divided
into:
– a 12-bit page number
– a 10-bit page offset
• Thus, a logical address is as follows:
| p1 (12 bits) | p2 (10 bits) | d (10 bits) |
where p1 is an index into the outer page table, and p2 is the displacement within the page of the inner page table.
• Known as forward-mapped page table
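The 12/10/10 split can be sketched with bit operations (the field values are assumed for illustration):

```python
def split(addr):
    """Decompose a 32-bit logical address into (p1, p2, d)."""
    d  = addr & 0x3FF           # low 10 bits: page offset
    p2 = (addr >> 10) & 0x3FF   # next 10 bits: inner page-table index
    p1 = addr >> 20             # high 12 bits: outer page-table index
    return p1, p2, d

# Build an address with p1 = 3, p2 = 5, d = 7 and recover the fields
addr = (3 << 20) | (5 << 10) | 7
assert split(addr) == (3, 5, 7)
```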
Two-Level Page-Table Scheme
Address-Translation Scheme
64-bit Logical Address Space
• Even a two-level paging scheme is not sufficient.
• If the page size is 4 KB (2^12):
– Then the page table has 2^52 entries.
– With a two-level scheme, the inner page tables could be 2^10 4-byte entries.
– The outer page table would have 2^42 entries, or 2^44 bytes.
– One solution is to add a 2nd outer page table.
– But in the following example the 2nd outer page table is still 2^34 bytes in size.
• And possibly 4 memory accesses to get to one physical memory location.
Three-level Paging Scheme
Hashed Page Table
• The virtual page number in the virtual address is hashed into
the hash table.
• The virtual page number is compared with field 1 in the first
element in the linked list.
• If there is a match, the corresponding page frame (field 2) is
used to form the desired physical address.
• If there is no match, subsequent entries in the linked list are
searched for a matching virtual page number.
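The lookup can be sketched with chained buckets; the table size and the entries are illustrative:

```python
TABLE_SIZE = 8
table = [[] for _ in range(TABLE_SIZE)]   # each bucket: list of (vpn, frame)

def insert(vpn, frame):
    table[vpn % TABLE_SIZE].append((vpn, frame))

def lookup(vpn):
    for entry_vpn, frame in table[vpn % TABLE_SIZE]:  # walk the chain
        if entry_vpn == vpn:     # match on field 1 (virtual page number)
            return frame         # field 2 forms the physical address
    return None                  # no match anywhere in the chain: fault

insert(42, 7)
insert(42 + TABLE_SIZE, 9)       # hashes to the same bucket: chained entry
assert lookup(42) == 7
assert lookup(42 + TABLE_SIZE) == 9
assert lookup(99) is None
```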
Hashed Page Table
Inverted Page Table
• Rather than each process having a page table and keeping track of all
possible logical pages, track all physical pages.
• One entry for each real page of memory.
• Entry consists of the virtual address of the page stored in that real
memory location, with information about the process that owns the
page.
• Each virtual address in the system consists of a triple:
<process-id, page-number, offset>.
• Each inverted page-table entry is a pair <process-id, page-number>
where the process-id assumes the role of the address-space
identifier.
Inverted Page Table
Science 7 - LAND and SEA BREEZE and its Characteristics
 
Software Engineering Methodologies (overview)
Software Engineering Methodologies (overview)Software Engineering Methodologies (overview)
Software Engineering Methodologies (overview)
 
Paris 2024 Olympic Geographies - an activity
Paris 2024 Olympic Geographies - an activityParis 2024 Olympic Geographies - an activity
Paris 2024 Olympic Geographies - an activity
 
How to Make a Pirate ship Primary Education.pptx
How to Make a Pirate ship Primary Education.pptxHow to Make a Pirate ship Primary Education.pptx
How to Make a Pirate ship Primary Education.pptx
 
ECONOMIC CONTEXT - LONG FORM TV DRAMA - PPT
ECONOMIC CONTEXT - LONG FORM TV DRAMA - PPTECONOMIC CONTEXT - LONG FORM TV DRAMA - PPT
ECONOMIC CONTEXT - LONG FORM TV DRAMA - PPT
 
Mastering the Unannounced Regulatory Inspection
Mastering the Unannounced Regulatory InspectionMastering the Unannounced Regulatory Inspection
Mastering the Unannounced Regulatory Inspection
 
URLs and Routing in the Odoo 17 Website App
URLs and Routing in the Odoo 17 Website AppURLs and Routing in the Odoo 17 Website App
URLs and Routing in the Odoo 17 Website App
 
Incoming and Outgoing Shipments in 1 STEP Using Odoo 17
Incoming and Outgoing Shipments in 1 STEP Using Odoo 17Incoming and Outgoing Shipments in 1 STEP Using Odoo 17
Incoming and Outgoing Shipments in 1 STEP Using Odoo 17
 
Enzyme, Pharmaceutical Aids, Miscellaneous Last Part of Chapter no 5th.pdf
Enzyme, Pharmaceutical Aids, Miscellaneous Last Part of Chapter no 5th.pdfEnzyme, Pharmaceutical Aids, Miscellaneous Last Part of Chapter no 5th.pdf
Enzyme, Pharmaceutical Aids, Miscellaneous Last Part of Chapter no 5th.pdf
 
Call Girls in Dwarka Mor Delhi Contact Us 9654467111
Call Girls in Dwarka Mor Delhi Contact Us 9654467111Call Girls in Dwarka Mor Delhi Contact Us 9654467111
Call Girls in Dwarka Mor Delhi Contact Us 9654467111
 
Organic Name Reactions for the students and aspirants of Chemistry12th.pptx
Organic Name Reactions  for the students and aspirants of Chemistry12th.pptxOrganic Name Reactions  for the students and aspirants of Chemistry12th.pptx
Organic Name Reactions for the students and aspirants of Chemistry12th.pptx
 
microwave assisted reaction. General introduction
microwave assisted reaction. General introductionmicrowave assisted reaction. General introduction
microwave assisted reaction. General introduction
 
Presentation by Andreas Schleicher Tackling the School Absenteeism Crisis 30 ...
Presentation by Andreas Schleicher Tackling the School Absenteeism Crisis 30 ...Presentation by Andreas Schleicher Tackling the School Absenteeism Crisis 30 ...
Presentation by Andreas Schleicher Tackling the School Absenteeism Crisis 30 ...
 
Employee wellbeing at the workplace.pptx
Employee wellbeing at the workplace.pptxEmployee wellbeing at the workplace.pptx
Employee wellbeing at the workplace.pptx
 
KSHARA STURA .pptx---KSHARA KARMA THERAPY (CAUSTIC THERAPY)————IMP.OF KSHARA ...
KSHARA STURA .pptx---KSHARA KARMA THERAPY (CAUSTIC THERAPY)————IMP.OF KSHARA ...KSHARA STURA .pptx---KSHARA KARMA THERAPY (CAUSTIC THERAPY)————IMP.OF KSHARA ...
KSHARA STURA .pptx---KSHARA KARMA THERAPY (CAUSTIC THERAPY)————IMP.OF KSHARA ...
 
BASLIQ CURRENT LOOKBOOK LOOKBOOK(1) (1).pdf
BASLIQ CURRENT LOOKBOOK  LOOKBOOK(1) (1).pdfBASLIQ CURRENT LOOKBOOK  LOOKBOOK(1) (1).pdf
BASLIQ CURRENT LOOKBOOK LOOKBOOK(1) (1).pdf
 
Crayon Activity Handout For the Crayon A
Crayon Activity Handout For the Crayon ACrayon Activity Handout For the Crayon A
Crayon Activity Handout For the Crayon A
 
9953330565 Low Rate Call Girls In Rohini Delhi NCR
9953330565 Low Rate Call Girls In Rohini  Delhi NCR9953330565 Low Rate Call Girls In Rohini  Delhi NCR
9953330565 Low Rate Call Girls In Rohini Delhi NCR
 
TataKelola dan KamSiber Kecerdasan Buatan v022.pdf
TataKelola dan KamSiber Kecerdasan Buatan v022.pdfTataKelola dan KamSiber Kecerdasan Buatan v022.pdf
TataKelola dan KamSiber Kecerdasan Buatan v022.pdf
 

Module 3 Deadlocks.pptx

• It consists of a set of vertices V and a set of edges E.
• V is partitioned into two types:
– P = {P1, P2, …, Pn}, the set consisting of all the processes in the system.
– R = {R1, R2, …, Rm}, the set consisting of all resource types in the system.
• Request edge – a directed edge Pi → Rj : process Pi has requested an instance of resource type Rj and is currently waiting for that resource.
• Assignment edge – a directed edge Rj → Pi : an instance of resource type Rj has been allocated to process Pi.
Resource-Allocation Graph (Cont.)
• Process (Pi)
• Resource type with 4 instances (Rj)
• Pi requests an instance of Rj
• Pi is holding an instance of Rj
Resource-Allocation Graph (Cont.)
• Resource-allocation graph with a deadlock (figure)
• Resource-allocation graph with no deadlock (figure)
Process states:
• Process P1 is holding an instance of resource type R2 and is waiting for an instance of resource type R1.
• Process P2 is holding an instance of R1 and an instance of R2 and is waiting for an instance of R3.
• Process P3 is holding an instance of R3.
Methods for Handling Deadlocks
• Ensure that the system will never enter a deadlock state:
– Deadlock prevention
– Deadlock avoidance
• Allow the system to enter a deadlock state and then recover.
• Ignore the problem and pretend that deadlocks never occur in the system.
Deadlock Prevention
• Mutual exclusion – It must hold for non-sharable resources. We cannot prevent deadlocks by denying the mutual-exclusion condition, because some resources are intrinsically non-sharable. E.g., a printer cannot be simultaneously shared by several processes.
• Hold and wait – To ensure that this condition never occurs in the system, we must guarantee that whenever a process requests a resource, it does not hold any other resources.
– One protocol requires each process to request and be allocated all its resources before it begins execution.
– Another protocol allows a process to request resources only when it has none allocated to it.
Deadlock Prevention (Cont.)
• No preemption – To ensure that this condition does not hold, use the following protocol:
– If a process holding some resources requests another resource that cannot be immediately allocated to it, then all resources the process is currently holding are preempted.
– The preempted resources are added to the list of resources for which the process is waiting.
– The process will be restarted only when it can regain its old resources, as well as the new ones it is requesting.
– This protocol is applied to resources whose state can be easily saved and restored later.
Deadlock Prevention (Cont.)
• Circular wait – Impose a total ordering of all resource types, and require that each process requests resources in an increasing order of enumeration.
– We define a one-to-one function F: R → N, where N is the set of natural numbers and R is the set of resource types (e.g., tape drives, disk drives, and printers).
– Initially, a process may request any number of instances of a resource type Ri. After that, the process can request instances of resource type Rj if and only if F(Rj) > F(Ri).
– Alternatively, we can require that a process requesting an instance of resource type Rj must have released any resources Ri such that F(Ri) ≥ F(Rj).
Deadlock Example with Lock Ordering

    void transaction(Account from, Account to, double amount)
    {
        mutex lock1, lock2;

        lock1 = get_lock(from);
        lock2 = get_lock(to);

        acquire(lock1);
            acquire(lock2);
                withdraw(from, amount);
                deposit(to, amount);
            release(lock2);
        release(lock1);
    }

• Transactions 1 and 2 execute concurrently: Transaction 1 transfers $25 from account A to account B, and Transaction 2 transfers $50 from account B to account A.
• Because each transaction locks its own "from" account first, the two transactions acquire the same pair of locks in opposite orders, so deadlock is possible.
Deadlock Avoidance
• The simplest and most useful model requires that each process declare the maximum number of resources of each type that it may need.
• Given this a priori information, it is possible to construct an algorithm that ensures the system will never enter a deadlocked state.
• The resource-allocation state is defined by the number of available and allocated resources, and the maximum demands of the processes.
• The deadlock-avoidance algorithm dynamically examines the resource-allocation state to ensure that there can never be a circular-wait condition.
Safe State
• A state is safe if the system can allocate resources to each process in some order and still avoid a deadlock.
• The system is in a safe state if there exists a sequence <P1, P2, …, Pn> of all the processes in the system such that, if Pi's resource needs are not immediately available, Pi can wait until all Pj (j < i) have finished.
– When Pj is finished, Pi can obtain the needed resources, execute, return its allocated resources, and terminate.
– When Pi terminates, Pi+1 can obtain its needed resources, and so on.
Basic Facts
• If a system is in a safe state → no deadlocks.
• If a system is in an unsafe state → possibility of deadlock.
• Avoidance → ensure that the system will never enter an unsafe state.
Avoidance Algorithms:
• Single instance of each resource type: use a resource-allocation-graph algorithm.
• Multiple instances of a resource type: use the banker's algorithm.
Resource-Allocation Graph Algorithm
• Claim edge Pi → Rj indicates that process Pi may request resource Rj at some time in the future; it is represented by a dashed line.
• A claim edge converts to a request edge when the process requests the resource.
• A request edge converts to an assignment edge when the resource is allocated to the process.
• When a resource is released by a process, the assignment edge reconverts to a claim edge.
• Resources must be claimed a priori in the system.
Deadlock Avoidance
• Unsafe state in a resource-allocation graph (figure)
Banker’s Algorithm
• It is applicable to resource-allocation systems with multiple instances of each resource type.
• The name was chosen because the algorithm could be used in a banking system to ensure that the bank never allocated its available cash in such a way that it could no longer satisfy the needs of all its customers.
Banker’s Algorithm
Let n = number of processes, and m = number of resource types.
• Available: vector of length m. If Available[j] = k, there are k instances of resource type Rj available.
• Max: n × m matrix. If Max[i,j] = k, then process Pi may request at most k instances of resource type Rj.
• Allocation: n × m matrix. If Allocation[i,j] = k, then Pi is currently allocated k instances of Rj.
• Need: n × m matrix. If Need[i,j] = k, then Pi may need k more instances of Rj to complete its task.
Need[i,j] = Max[i,j] – Allocation[i,j]
Safety Algorithm
1. Let Work and Finish be vectors of length m and n, respectively. Initialize:
Work = Available
Finish[i] = false for i = 0, 1, …, n–1
2. Find an i such that both:
(a) Finish[i] == false
(b) Needi ≤ Work
If no such i exists, go to step 4.
3. Work = Work + Allocationi
Finish[i] = true
Go to step 2.
4. If Finish[i] == true for all i, then the system is in a safe state.
Resource-Request Algorithm for Process Pi
Requesti = request vector for process Pi. If Requesti[j] = k, then process Pi wants k instances of resource type Rj.
1. If Requesti ≤ Needi, go to step 2. Otherwise, raise an error condition, since the process has exceeded its maximum claim.
2. If Requesti ≤ Available, go to step 3. Otherwise, Pi must wait, since the resources are not available.
3. Pretend to allocate the requested resources to Pi by modifying the state as follows:
Available = Available – Requesti
Allocationi = Allocationi + Requesti
Needi = Needi – Requesti
• If safe → the resources are allocated to Pi.
• If unsafe → Pi must wait, and the old resource-allocation state is restored.
Deadlock Detection
• Allow the system to enter a deadlock state
• Detection algorithm
• Recovery scheme
Single Instance of Each Resource Type
• If all resources have a single instance, then we use a variant of the resource-allocation graph called a wait-for graph.
• We obtain this graph by removing the resource nodes and collapsing the appropriate edges.
• The system maintains the wait-for graph:
– Nodes are processes.
– An edge Pi → Pj means Pi is waiting for Pj to release a resource that Pi needs.
• Periodically invoke an algorithm that searches for a cycle in the graph. If there is a cycle, there exists a deadlock.
Resource-Allocation Graph and Wait-for Graph
• Resource-allocation graph and the corresponding wait-for graph (figure)
Several Instances of a Resource Type: Detection Algorithm
1. Let Work and Finish be vectors of length m and n, respectively. Initialize:
(a) Work = Available
(b) For i = 1, 2, …, n: if Allocationi ≠ 0, then Finish[i] = false; otherwise, Finish[i] = true.
2. Find an index i such that both:
(a) Finish[i] == false
(b) Requesti ≤ Work
If no such i exists, go to step 4.
3. Work = Work + Allocationi
Finish[i] = true
Go to step 2.
4. If Finish[i] == false for some i, 1 ≤ i ≤ n, then the system is in a deadlock state. Moreover, if Finish[i] == false, then Pi is deadlocked.
Detection-Algorithm Usage
• When, and how often, to invoke the algorithm depends on:
– How often is a deadlock likely to occur?
– How many processes will be affected by the deadlock when it happens?
• If the detection algorithm is invoked arbitrarily, there may be many cycles in the resource graph, and we would not be able to tell which of the many deadlocked processes “caused” the deadlock.
Recovery from Deadlock:
• Inform the operator that a deadlock has occurred, and let the operator deal with the deadlock manually.
• Let the system recover from the deadlock automatically.
Recovery from Deadlock: Process Termination
• Abort all deadlocked processes: the deadlocked processes may have computed for a long time, and the results of these partial computations must be discarded and probably recomputed later.
• Abort one process at a time until the deadlock cycle is eliminated: after each process is aborted, a deadlock-detection algorithm must be invoked to determine whether any processes are still deadlocked.
• In which order should we choose to abort?
1. Priority of the process.
2. How long the process has computed, and how much longer until completion.
3. How many resources the process has used.
4. How many resources the process needs in order to complete.
5. How many processes will need to be terminated.
Recovery from Deadlock: Resource Preemption
• We successively preempt some resources from processes and give these resources to other processes until the deadlock cycle is broken.
• Selecting a victim – Which resources and which processes are to be preempted? As in process termination, we must determine the order of preemption to minimize cost.
• Rollback – If we preempt a resource from a process, what should be done with that process? It cannot continue normally, since it is missing a needed resource; we must roll the process back to some safe state and restart it from that state.
• Starvation – How can we guarantee that resources will not always be preempted from the same process?
Background
• Memory consists of a large array of words, each with its own address.
• Instruction-execution cycle: fetch an instruction from memory; the instruction is then decoded and may cause operands to be fetched from memory.
• After the instruction has been executed on the operands, results may be stored back into memory.
• A program must be brought (from disk) into memory and placed within a process for it to be run.
Basic Hardware
• Main memory and the registers built into the processor itself are the only storage that the CPU can access directly.
• If data are not in memory, they must be moved there before the CPU can operate on them.
• We must make sure that each process has a separate memory space.
• To do this, we need the ability to determine the range of legal addresses that the process may access, and to ensure that the process can access only these legal addresses.
• Protection of this legal address range is provided through two registers: the base register and the limit register.
Basic Hardware
• A pair of base and limit registers define the logical address space.
• Logical address: the address generated by the CPU.
• The base register holds the smallest legal physical memory address.
• The limit register specifies the size of the range.
• The CPU must check every memory access generated in user mode to be sure that it is between the base and base + limit for that user.
Hardware Address Protection
• Base – smallest legal physical address
• Limit – size of the range
• E.g.:
CPU address: 256002
Base: 256000 (256002 ≥ 256000)
Limit: 300040 – 256000 = 44040
Base + Limit = 300040
• The base and limit registers can be loaded only by the operating system, which uses a special privileged instruction.
• These privileged instructions execute only in kernel mode, and only the operating system runs in kernel mode.
• Hence only the operating system can load the base and limit registers.
• The OS can change the values of the registers, but user programs cannot change the register contents.
Binding of Instructions and Data to Memory
• Binding of instructions and data to memory addresses can happen at three different stages:
• Compile time: If it is known at compile time where the process will reside in memory, then absolute code can be generated. E.g., MS-DOS.
• Load time: If it is not known at compile time where the process will reside in memory, then the compiler must generate relocatable code.
• Execution time: If the process can be moved during its execution from one memory segment to another, then binding must be delayed until run time.
Multistep Processing of a User Program (figure)
Memory-Management Unit (MMU)
• The run-time mapping from virtual to physical addresses is done by a hardware device called the MMU.
• Dynamic relocation using a relocation register (figure)
Logical versus Physical Address Space
• The address generated by the CPU is known as the logical address (virtual address).
• The address seen by the memory unit is known as the physical address.
• The compile-time and load-time address-binding methods generate identical logical and physical addresses.
• The set of all logical addresses generated by a program is a logical address space.
• The set of all physical addresses corresponding to these logical addresses is a physical address space.
Swapping
• A process can be swapped temporarily out of memory to a backing store, and then brought back into memory for continued execution.
• Backing store: a fast disk large enough to accommodate copies of all memory images for all users; it must provide direct access to these memory images.
• Round-robin scheduling: when a time quantum expires, the memory manager starts to swap out the process that just finished and to swap another process into the memory space that has been freed.
Schematic View of Swapping
• Roll out, roll in – a variant of swapping used for priority-based scheduling algorithms; a lower-priority process is swapped out so a higher-priority process can be loaded and executed.
Contiguous Allocation
• Memory is usually divided into two partitions: one for the resident operating system and one for the user processes.
• Contiguous allocation is one of the early methods. The resident operating system is usually held in low memory; user processes reside in high memory.
• We must consider how to allocate available memory to the processes that are in the input queue waiting to be brought into memory.
• In contiguous memory allocation, each process is contained in a single contiguous section of memory.
Memory Allocation
• The simplest method for allocating memory is to divide memory into several fixed-sized partitions.
• Each partition may contain exactly one process, so the degree of multiprogramming is bound by the number of partitions.
• In the multiple-partition method, when a partition is free, a process is selected from the input queue and loaded into the free partition. When the process terminates, the partition becomes available for another process.
• In the variable-partition scheme, the operating system keeps a table indicating which parts of memory are available and which are occupied.
• Initially, all memory is available for user processes and is considered one large block of available memory, called a hole.
Dynamic Storage-Allocation Problem
How to satisfy a request of size n from a list of free holes?
• First fit: Allocate the first hole that is big enough. Searching can start either at the beginning of the set of holes or at the location where the previous first-fit search ended.
• Best fit: Allocate the smallest hole that is big enough; we must search the entire list, unless it is ordered by size. Produces the smallest leftover hole.
• Worst fit: Allocate the largest hole; we must also search the entire list. Produces the largest leftover hole.
• First fit and best fit are better than worst fit in terms of speed and storage utilization; first fit is generally faster.
Fragmentation
• External fragmentation – exists when enough total memory space exists to satisfy a request, but it is not contiguous.
• Internal fragmentation – allocated memory may be slightly larger than the requested memory; this size difference is memory internal to a partition that is not being used. E.g., in a multiple-partition scheme with a hole of 18,464 bytes, suppose a process requests 18,462 bytes; if we allocate the requested block, we are left with a hole of 2 bytes (unused memory).
• Statistical analysis of first fit reveals that, given N allocated blocks, another 0.5N blocks will be lost to fragmentation; that is, one-third of memory may be unusable (the 50-percent rule).
• Both first fit and best fit suffer from external fragmentation.
Fragmentation (Cont.)
• Reduce external fragmentation by compaction:
– Shuffle the memory contents to place all free memory together in one large block.
– Compaction is done at execution time and is possible only if relocation is dynamic.
– Relocation then requires only moving the program and data, and changing the base register to reflect the new base address.
• Another solution is to permit the logical address space to be noncontiguous, thus allowing physical memory to be allocated to a process wherever such memory is available.
Paging
• Paging is a memory-management scheme that permits the physical address space of a process to be noncontiguous.
– Avoids external fragmentation.
– Avoids the problem of varying-sized memory chunks.
• Basic method:
– Divide physical memory into fixed-sized blocks called frames.
– Divide logical memory into blocks of the same size called pages.
– When a process is to be executed, its pages are loaded into available memory frames from their source.
Paging
• Every address generated by the CPU is divided into two parts: a page number (p) and a page offset (d).
• The page number is used as an index into the page table. The page table contains the base address of each page in physical memory.
• This base address is combined with the page offset to define the physical address that is sent to the memory unit.
• If the size of the logical address space is 2^m and the page size is 2^n, then the high-order m–n bits of a logical address designate the page number, and the n low-order bits designate the page offset:

| page number p (m–n bits) | page offset d (n bits) |
Paging Hardware
• (figure: the page number indexes the page table to obtain the base address of the page, which is combined with the page offset)
Paging Model of Logical and Physical Memory (figure)
Paging Example
• 32-byte memory and 4-byte pages: m = 4, n = 2.
• Logical address space: 2^m = 2^4 = 16 addresses (0 to 15).
• Page size: 2^n = 2^2 = 4 bytes.
Paging
• When a process arrives in the system to be executed, its size is expressed in pages; each page of the process needs one frame.
• Each page of the process is loaded into one of the available physical frames, and its frame number is put into the page table.
• Logical addresses are mapped to physical addresses; this mapping is hidden from the user and is controlled by the operating system.
• Since the OS manages physical memory, it must be aware of the allocation details; this information is kept in a data structure called the frame table.
Free Frames
• Before allocation and after allocation (figure)
Implementation of Page Table
• In the simplest case, the page table is implemented as a set of dedicated registers. These registers should be built with very high-speed logic to make the paging-address translation efficient.
• If the page table is large (millions of entries), the use of fast registers is not feasible; the page table is then kept in main memory.
• A page-table base register (PTBR) points to the page table.
• A page-table length register (PTLR) indicates the size of the page table.
• Changing page tables then requires changing only this one register, substantially reducing context-switch time.
• The problem with this approach is the time required to access a user memory location.
• If we want to access location i, we must first index into the page table, using the value in the PTBR offset by the page number for i. This task requires a memory access.
• It provides us with the frame number, which is combined with the page offset to produce the actual address. We can then access the desired place in memory; every data access thus requires two memory accesses.
• The standard solution to this problem is to use a special, small, fast-lookup hardware cache called the translation look-aside buffer (TLB). The TLB is associative, high-speed memory.
Effective Access Time
• Hit ratio (α): percentage of times that a page number is found in the TLB.
• Consider α = 80%, 20 ns for a TLB search, 100 ns for a memory access.
• Miss ratio (1 – α): percentage of times that a page number is not found in the TLB.
• On a miss (20%): we fail to find the page number in the TLB (20 ns), access memory for the page table (100 ns), then take 100 ns to access the desired byte in memory.
• Effective access time (EAT):
EAT = hit ratio × (TLB search + memory access) + miss ratio × (TLB search + page-table access + memory access)
EAT = 0.80 × (20 + 100) + 0.20 × (20 + 100 + 100) = 140 ns
Memory Protection
• Memory protection in a paging environment is implemented by protection bits associated with each frame; these bits are kept in the page table.
• A valid/invalid bit is attached to each entry in the page table:
– “valid” indicates that the associated page is in the process’s logical address space and is thus a legal page.
– “invalid” indicates that the page is not in the process’s logical address space.
• The OS sets this bit for each page to allow or disallow access to the page.
Valid (v) or Invalid (i) Bit in a Page Table (figure)
Shared Pages
• An advantage of paging is the possibility of sharing common code.
• Suppose a system supports 40 users, each of whom executes a text editor. If the text editor consists of 150 KB of code and 50 KB of data space, we would need 200 KB × 40 = 8,000 KB to support the 40 users.
• If the code is reentrant (sharable), meaning it never changes during execution, it can be shared.
• To support the 40 users, we then need only one copy of the editor (150 KB) plus 40 copies of the 50 KB data space: 150 + 40 × 50 = 2,150 KB, a significant savings.
• In the figure, the editor is three pages, each 50 KB in size, shared among three processes; each process has its own data page.
Segmentation
• Segmentation is a memory-management scheme that supports the user view of memory.
• A program is a collection of segments. A segment is a logical unit such as: main program, procedure, function, method, object, local variables, global variables, common block, stack, symbol table, arrays.
• User’s view of a program (figure)
Segmentation Architecture
• A logical address consists of a two-tuple: <segment-number, offset>.
• Segment table – maps two-dimensional user-defined addresses into one-dimensional physical addresses; each table entry has:
– base – contains the starting physical address where the segment resides in memory.
– limit – specifies the length of the segment.
• Segment-table base register (STBR): points to the segment table’s location in memory.
• Segment-table length register (STLR): indicates the number of segments used by a program.
Structure of the Page Table
• Memory structures for paging can get huge using straightforward methods.
– Consider a 32-bit logical address space, as on modern computers.
– Page size of 4 KB (2^12).
– The page table would have 1 million entries (2^32 / 2^12 = 2^20).
– If each entry is 4 bytes, that is 4 MB of physical memory for the page table alone.
• That amount of memory used to cost a lot.
• We don’t want to allocate it contiguously in main memory.
Two-Level Paging Example
• A logical address (on a 32-bit machine with a 1 KB page size) is divided into:
– a page number consisting of 22 bits
– a page offset consisting of 10 bits
• Since the page table is paged, the page number is further divided into:
– a 12-bit page number (p1)
– a 10-bit page offset (p2)
• Thus, p1 is an index into the outer page table, and p2 is the displacement within the page of the inner page table.
• Known as a forward-mapped page table.
64-bit Logical Address Space
• Even the two-level paging scheme is not sufficient.
• If the page size is 4 KB (2^12):
– Then the page table has 2^52 entries.
– With a two-level scheme, the inner page tables could be 2^10 4-byte entries.
– The outer page table would then have 2^42 entries, or 2^44 bytes.
– One solution is to add a second outer page table.
– But the second outer page table is still 2^34 bytes in size.
• And possibly 4 memory accesses are needed to get to one physical memory location.
Hashed Page Table
• The virtual page number in the virtual address is hashed into a hash table.
• The virtual page number is compared with field 1 of the first element in the linked list.
• If there is a match, the corresponding page frame (field 2) is used to form the desired physical address.
• If there is no match, subsequent entries in the linked list are searched for a matching virtual page number.
Inverted Page Table
• Rather than each process having a page table and keeping track of all possible logical pages, track all physical pages.
• There is one entry for each real page of memory.
• Each entry consists of the virtual address of the page stored in that real memory location, with information about the process that owns the page.
• Each virtual address in the system consists of a triple: <process-id, page-number, offset>.
• Each inverted page-table entry is a pair <process-id, page-number>, where the process-id assumes the role of the address-space identifier.