Chapter 9: Virtual Memory
Operating System Concepts – 9th Edition
Silberschatz, Galvin and Gagne ©2013
Objectives
 To describe the benefits of a virtual memory system
 To explain the concepts of demand paging, page-replacement algorithms,
and allocation of page frames
 To discuss the principle of the working-set model
Motivation
 I want to run more than one program at once
 Problem:
 Program A puts its variable X at address 0x1000
 Program B puts its variable Q at address 0x1000
 Conflict!
 Unlikely solution:
 Get all programmers on the planet to use different memory
addresses
 Better solution:
 Allow each running program to have its own “view” of memory (e.g.
“my address 0x1000 is different from your address 0x1000”)
 How? Add a layer of indirection to memory addressing:
 Virtual memory paging
Motivation
 Hey, while you’re messing with memory addressing...
 can we improve efficiency, too?
 Code/data must be in memory to execute
 Most code/data not needed at a given instant
 Wasteful use of DRAM to hold stuff we don’t need
 Solution:
 Don’t bother to load code/data from disk that we don’t need
immediately
 When memory gets tight, shove loaded stuff we don’t need any
more back to disk.
• Virtual memory swapping (an add-on to paging)
Every program has its own page mapping
[Figure: Program A's page table and Program B's page table each map that program's virtual address space onto physical memory. Each virtual map contains text, static, and heap pages, stack pages, and unused regions, including the "hole" that makes segfaults happen on null pointers; the physical memory column interleaves A's pages, B's pages, other processes' pages, and free frames.]
Memory as a finite resource
 Memory is a limited resource, managed by the OS
Virtual Memory That is Larger Than Physical Memory
Virtual Memory
Background (Cont.)
 Virtual memory – separation of user logical memory from physical memory
 Virtual memory is an operating-system feature that enables a computer to run processes larger than the available physical memory.
 A scheme in which secondary storage can be addressed as though it were part of main memory, so the system behaves as if it had a combination of RAM and space on the hard disk.
 When RAM runs low, virtual memory can move data from it to an area of disk called a paging file, freeing up RAM so the computer can complete its current task.
 Only part of the program needs to be in memory for execution.
Background (Cont.)
 Virtual Address Space – logical view of how process is stored in memory
 Usually start at address 0, contiguous addresses until end of space
 Meanwhile, physical memory is organized in page frames
 MMU must map logical to physical
 Virtual memory can be implemented via:
 Demand paging
 Demand segmentation
Shared Library Using Virtual Memory
Demand Paging
 Could bring the entire process into memory
at load time
 Or bring a page into memory only when it is
needed
 Less memory needed
 Faster response
 More users
 Similar to a paging system with swapping
 Page is needed ⇒ reference to it
 invalid reference ⇒ abort
 not-in-memory ⇒ bring to memory
 Lazy swapper – never swaps a page into
memory unless the page will be needed
 Swapper that deals with pages is a
pager
Valid-Invalid Bit
 With each page table entry, a valid–invalid bit is associated
(v ⇒ in-memory – memory resident, i ⇒ not-in-memory)
 Initially valid–invalid bit is set to i on all entries
 Example of a page table snapshot:
 During MMU address translation, if the valid–invalid bit in the page table entry is i ⇒ page fault
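A minimal sketch of this idea, assuming a toy page table held in a Python dict (the page size, mappings, and frame numbers below are invented for illustration): translation succeeds only when the entry's bit is v; an i entry raises a page fault.

```python
# Toy address translation with a valid-invalid bit (illustrative only;
# page size, mappings and frame numbers below are made-up examples).
PAGE_SIZE = 4096

# page number -> (frame number, valid bit); 'v' means memory resident
page_table = {0: (3, 'v'), 1: (None, 'i'), 2: (7, 'v')}

class PageFault(Exception):
    pass

def translate(virtual_addr):
    page, offset = divmod(virtual_addr, PAGE_SIZE)
    frame, valid = page_table.get(page, (None, 'i'))
    if valid == 'i':              # not memory resident -> trap to the OS
        raise PageFault(f"page {page} not in memory")
    return frame * PAGE_SIZE + offset

print(hex(translate(0x0123)))     # page 0 is resident -> physical address
try:
    translate(1 * PAGE_SIZE + 5)  # page 1 is marked 'i' -> page fault
except PageFault as e:
    print("page fault:", e)
```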
Page Table When Some Pages Are Not in Main Memory
Page Fault
 The first reference to a page that is not in memory traps to the operating system: a page fault
1. Operating system looks at another table to decide:
 Invalid reference ⇒ abort
 Just not in memory ⇒ page it in
2. Find a free frame
3. Swap the page into the frame via a scheduled disk operation
4. Reset tables to indicate the page is now in memory; set valid bit = v
5. Restart the instruction that caused the page fault
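The five steps above can be sketched in Python; the helper names (read_page_from_disk, free_frames) and data structures are stand-ins invented here, not a kernel's real interfaces, and the no-free-frame case is deferred to the Basic Page Replacement slide later.

```python
# Sketch of the page-fault handling steps listed above (illustrative names).
import collections

page_table = {}                       # page -> frame, only for resident pages
free_frames = collections.deque([0, 1, 2])

def read_page_from_disk(page, frame):
    print(f"disk read: page {page} -> frame {frame}")   # placeholder I/O

def handle_page_fault(page, valid_pages):
    if page not in valid_pages:       # 1. invalid reference -> abort
        raise MemoryError(f"segmentation fault: page {page}")
    frame = free_frames.popleft()     # 2. find a free frame (no eviction here)
    read_page_from_disk(page, frame)  # 3. schedule the disk read
    page_table[page] = frame          # 4. mark the page resident (bit = v)
    return frame                      # 5. caller restarts the faulting instruction

handle_page_fault(page=5, valid_pages={4, 5, 6})
print(page_table)
```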
Steps in Handling a Page Fault
Performance of Demand Paging
 Demand paging can significantly affect the performance of a computer
system.
 For most computer systems, the memory-access time, denoted ma,
ranges from 10 to 200 nanoseconds.
 As long as we have no page faults, the effective access time is equal to
the memory access time.
 If, however, a page fault occurs, we must first read the relevant page
from disk and then access the desired word.
 Let p be the probability of a page fault (0 ≤ p ≤ 1). We would expect p to
be close to zero—that is, we would expect to have only a few page
faults. The effective access time is then
effective access time = (1 − p) × ma + p × page fault time
 To compute the effective access time, we must know how much time is
needed to service a page fault.
Performance of Demand Paging (Cont.)
 Page Fault Rate: 0 ≤ p ≤ 1
 if p = 0, no page faults
 if p = 1, every reference is a fault
 Effective Access Time (EAT)
EAT = (1 – p) x memory access + p (page fault overhead + swap page out + swap page in )
Demand Paging Example
 Memory access time = 200 nanoseconds
 Average page-fault service time = 8 milliseconds
 EAT = (1 – p) x 200 + p (8 milliseconds)
= ((1 – p) x 200 ) + (p x 8,000,000)
= 200 + p x 7,999,800
 If want performance degradation < 10 percent
 220 > 200 + 7,999,800 x p
20 > 7,999,800 x p
 p < .0000025
 fewer than one page fault in every 400,000 memory accesses
It is important to keep the page-fault rate low in a demand-paging system.
Otherwise, the effective access time increases, slowing process execution
dramatically.
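As a quick cross-check of the arithmetic above, a throwaway Python sketch using the slide's own numbers (ma = 200 ns, page-fault service time = 8 ms):

```python
# Effective access time (EAT) for demand paging, using the example values above.
ma_ns = 200
fault_ns = 8_000_000          # 8 milliseconds in nanoseconds

def eat(p):
    return (1 - p) * ma_ns + p * fault_ns

for p in (0.0, 1e-6, 0.0000025, 0.001):
    print(f"p = {p:>9}: EAT = {eat(p):,.1f} ns")

# Degradation below 10% requires EAT < 220 ns, i.e. p < 20 / 7_999_800.
print("p threshold:", 20 / 7_999_800)   # about 2.5e-6
```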
Basic Page Replacement
1. Find the location of the desired page on the disk
2. Find a free frame:
- If there is a free frame, use it
- If there is no free frame, use a page replacement algorithm to
select a victim frame
- Write victim frame to disk if dirty
3. Bring the desired page into the (newly) free frame; update the page
and frame tables
4. Continue the process by restarting the instruction that caused the trap
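The steps above, sketched as a small Python routine with a pluggable victim-selection policy (the function and variable names are invented for illustration; a real kernel works on frame tables and hardware modify bits, not Python lists):

```python
# Sketch of the basic replacement steps, with a pluggable victim policy;
# disk I/O is simulated with prints and "dirty" stands in for modify bits.
def access(page, frames, max_frames, dirty, choose_victim):
    if page in frames:                     # resident: no fault
        return False
    if len(frames) < max_frames:           # 2a. a free frame exists
        frames.append(page)
    else:                                  # 2b. pick a victim frame
        victim = choose_victim(frames)
        if victim in dirty:                # write back only if modified
            print(f"write victim page {victim} to disk")
            dirty.discard(victim)
        frames.remove(victim)
        frames.append(page)
    print(f"read page {page} from disk")   # 1/3. load the desired page
    return True                            # 4. caller restarts the instruction

frames, dirty = [], set()
fifo = lambda fs: fs[0]                    # oldest resident page as victim
for p in [7, 0, 1, 2, 0, 3]:
    access(p, frames, max_frames=3, dirty=dirty, choose_victim=fifo)
print("resident:", frames)
```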
Page Replacement
Page and Frame Replacement Algorithms
 If we have multiple processes in memory, we must decide how many frames to
allocate to each process; and when page replacement is required, we must
select the frames that are to be replaced.
 Frame-allocation algorithm determines
 How many frames to give each process
 Which frames to replace
 Page-replacement algorithm
 Want the lowest page-fault rate on both first access and re-access
 Evaluate the algorithm by running it on a particular string of memory references
(reference string) and computing the number of page faults on that string
 String is just page numbers, not full addresses
 Repeated access to the same page does not cause a page fault
 Results depend on the number of frames available
 In all our examples, the reference string of referenced page numbers is
7,0,1,2,0,3,0,4,2,3,0,3,0,3,2,1,2,0,1,7,0,1
Graph of Page Faults Versus The Number of Frames
First-In-First-Out (FIFO) Algorithm
 Reference string: 7,0,1,2,0,3,0,4,2,3,0,3,1,2,0
 3 frames (3 pages can be in memory at a time per process)
[In-class exercise: trace the reference string through three frames F1–F3, marking each reference as a page hit or a page fault; Hit Ratio = number of hits / number of references, Fault % = number of faults / number of references.]
First-In-First-Out (FIFO) Algorithm
 Reference string: 7,0,1,2,0,3,0,4,2,3,0,3,0,3,2,1,2,0,1,7,0,1
 3 frames (3 pages can be in memory at a time per process)
 Adding more frames can cause more page faults!
 Belady’s Anomaly: For some page-replacement
algorithms, the page-fault rate may increase as the number
of allocated frames increases.
15 page faults for this reference string with 3 frames (see the FIFO simulation sketch below)
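A small FIFO simulator makes both points concrete: it reproduces the 15 faults for the slide's reference string with 3 frames, and it shows Belady's anomaly on the classic string 1,2,3,4,1,2,5,1,2,3,4,5 (used here because it exhibits the anomaly cleanly; it is not the slide's string).

```python
# FIFO page replacement: count faults and demonstrate Belady's anomaly.
from collections import deque

def fifo_faults(refs, nframes):
    frames, queue, faults = set(), deque(), 0
    for page in refs:
        if page in frames:
            continue                        # hit
        faults += 1
        if len(frames) == nframes:          # evict the oldest resident page
            frames.discard(queue.popleft())
        frames.add(page)
        queue.append(page)
    return faults

slide_refs = [7,0,1,2,0,3,0,4,2,3,0,3,0,3,2,1,2,0,1,7,0,1]
print("slide string, 3 frames:", fifo_faults(slide_refs, 3))   # 15 faults

belady = [1,2,3,4,1,2,5,1,2,3,4,5]
print("3 frames:", fifo_faults(belady, 3))   # 9 faults
print("4 frames:", fifo_faults(belady, 4))   # 10 faults: more frames, more faults
```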
Belady’s Anomaly
Optimal Page Replacement Algorithm
Optimal Page Replacement Algorithm
 Replace the page that will not be used for the longest period of time
 9 page faults is optimal for the example reference string above
 How do you know which page that is?
 Can’t read the future
 Unfortunately, the optimal page-replacement algorithm is difficult to
implement, because it requires future knowledge of the reference string.
As a result, the optimal algorithm is used mainly for comparison studies
 Used for measuring how well your algorithm performs
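A sketch of OPT as an offline simulation (it needs the whole reference string up front, which is exactly why it cannot be implemented online); run on the reference string above with 3 frames it yields the 9 faults quoted:

```python
# Optimal (OPT) replacement: evict the resident page whose next use is
# farthest in the future; usable only when the full string is known.
def opt_faults(refs, nframes):
    frames, faults = set(), 0
    for i, page in enumerate(refs):
        if page in frames:
            continue
        faults += 1
        if len(frames) == nframes:
            future = refs[i + 1:]
            # next-use distance; pages never used again get infinity
            def next_use(p):
                return future.index(p) if p in future else float("inf")
            frames.discard(max(frames, key=next_use))
        frames.add(page)
    return faults

refs = [7,0,1,2,0,3,0,4,2,3,0,3,0,3,2,1,2,0,1,7,0,1]
print(opt_faults(refs, 3))   # 9 faults, the optimum claimed above
```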
Least Recently Used (LRU) Algorithm
Least Recently Used (LRU) Algorithm
 Use past knowledge rather than future
 Replace the page that has not been used for the longest period of time
 Associate time of last use with each page
 12 faults – better than FIFO but worse than OPT
 Generally good algorithm and frequently used
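A compact LRU simulation (an OrderedDict stands in here for the time-of-last-use bookkeeping; real hardware uses counters or a stack of page numbers) reproduces the 12 faults quoted for the reference string above:

```python
# LRU replacement: the least recently used page sits at the front of the
# ordered dict, the most recently used at the back.
from collections import OrderedDict

def lru_faults(refs, nframes):
    frames, faults = OrderedDict(), 0
    for page in refs:
        if page in frames:
            frames.move_to_end(page)       # refresh: now most recently used
            continue
        faults += 1
        if len(frames) == nframes:
            frames.popitem(last=False)     # evict the least recently used
        frames[page] = True
    return faults

refs = [7,0,1,2,0,3,0,4,2,3,0,3,0,3,2,1,2,0,1,7,0,1]
print(lru_faults(refs, 3))   # 12 faults, as quoted above
```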
LRU Approximation Algorithms
 LRU needs special hardware support and is still slow
 Reference bit
 With each page associate a bit, initially = 0
 When the page is referenced, the bit is set to 1
 Replace any page with reference bit = 0 (if one exists)
 We do not know the order of use, however
 Second-chance algorithm
 Generally FIFO, plus hardware-provided reference bit
 Clock replacement
 If page to be replaced has
 Reference bit = 0 -> replace it
 reference bit = 1 then:
– set reference bit 0, leave page in memory
– replace next page, subject to same rules
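A minimal sketch of the second-chance scan (the resident pages and reference bits below are invented example state; a real implementation walks the frame table with a circular pointer):

```python
# Second-chance (clock) victim selection: sweep the frames in clock order;
# a page with reference bit 1 gets the bit cleared and is skipped, and the
# first page found with reference bit 0 is evicted.
frames = [7, 0, 1, 4]          # pages currently resident, in clock order
ref_bit = {7: 1, 0: 0, 1: 1, 4: 1}
hand = 0                       # clock hand position

def choose_victim():
    global hand
    while True:
        page = frames[hand]
        if ref_bit[page] == 0:             # no second chance left: evict
            victim_slot = hand
            hand = (hand + 1) % len(frames)
            return page, victim_slot
        ref_bit[page] = 0                  # give it a second chance
        hand = (hand + 1) % len(frames)

page, slot = choose_victim()
print(f"evict page {page} from frame slot {slot}")   # page 0 in this example
```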
Second-Chance (clock) Page-Replacement Algorithm
Enhanced Second-Chance Algorithm
 Improve algorithm by using reference bit and modify bit (if
available) in concert
 Take ordered pair (reference, modify)
1. (0, 0) neither recently used nor modified – best page to replace
2. (0, 1) not recently used but modified – not quite as good, must
write out before replacement
3. (1, 0) recently used but clean – probably will be used again soon
4. (1, 1) recently used and modified – probably will be used again
soon and need to write out before replacement
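The four classes above can be turned into a simple victim-selection rule (a toy sketch; the page set and bits are invented, and a real implementation scans the clock, possibly several times, rather than sorting):

```python
# Enhanced second-chance: rank resident pages by their (reference, modify)
# class and evict from the lowest nonempty class.
pages = {          # page -> (reference bit, modify bit)
    3: (1, 1),
    8: (0, 1),     # not recently used but dirty: must be written out first
    5: (1, 0),
    2: (0, 0),     # neither used nor modified: the best victim
}

def best_victim(pages):
    # class order: (0,0) < (0,1) < (1,0) < (1,1)
    return min(pages, key=lambda p: (pages[p][0], pages[p][1]))

victim = best_victim(pages)
print("victim:", victim, "needs write-back:", pages[victim][1] == 1)
```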
Counting Algorithms
 Keep a counter of the number of references that have been made
to each page
 Not common
 Least Frequently Used (LFU) Algorithm: replaces page with
smallest count
 Most Frequently Used (MFU) Algorithm: based on the argument
that the page with the smallest count was probably just brought in
and has yet to be used
Page-Buffering Algorithms
 Keep a pool of free frames, always
 Then a frame is available when needed, rather than having to be found at fault time
 Read page into free frame and select victim to evict and add
to free pool
 When convenient, evict victim
 Possibly, keep list of modified pages
 When backing store otherwise idle, write pages there and set
to non-dirty
 Possibly, keep free frame contents intact and note what is in them
 If referenced again before reused, no need to load contents
again from disk
 Generally useful to reduce penalty if wrong victim frame
selected
Applications and Page Replacement
 All of these algorithms have OS guessing about future page
access
 Some applications have better knowledge – e.g., databases
 Memory intensive applications can cause double buffering
 OS keeps copy of page in memory as I/O buffer
 Application keeps page in memory for its own work
 Operating system can give direct access to the disk, getting out
of the way of the applications
 Raw disk mode
 Bypasses buffering, locking, etc
Allocation of Frames
 Each process needs a minimum number of frames for its operations
 Maximum of course is total frames in the system
 Two major allocation schemes
 Fixed allocation
 Priority allocation
 Many variations
Fixed Allocation
 Equal allocation – For example, if there are 100 frames (after
allocating frames for the OS) and 5 processes, give each process
20 frames
 Keep some as free frame buffer pool
 Proportional allocation – Allocate according to the size of process
 Dynamic as degree of multiprogramming, process sizes
change
s_i = size of process p_i
S = Σ s_i
m = total number of frames
a_i = allocation for p_i = (s_i / S) × m
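As a worked example (the process sizes and frame count here are illustrative, not from the slide: two processes of 10 and 127 pages sharing 62 frames), a small helper that floors each share and then hands out any leftover frames one at a time:

```python
# Proportional allocation a_i = (s_i / S) * m, rounded down, with leftover
# frames distributed afterwards; sizes and frame count are example values.
def proportional(sizes, m):
    S = sum(sizes)
    alloc = [s * m // S for s in sizes]      # floor of (s_i / S) * m
    for i in range(m - sum(alloc)):          # distribute remaining frames
        alloc[i % len(alloc)] += 1
    return alloc

sizes = [10, 127]    # s_1, s_2 (pages)
m = 62               # total frames available
print(proportional(sizes, m))   # [5, 57]
```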
Priority Allocation
 Use a proportional allocation scheme using priorities rather
than size
 If process Pi generates a page fault,
 select for replacement one of its own frames, or
 select for replacement a frame from a process with a lower priority number
Global vs. Local Allocation
 Global replacement – process selects a replacement frame
from the set of all frames; one process can take a frame from
another
 But then process execution time can vary greatly
 But greater throughput so more common
 Local replacement – each process selects from only its own
set of allocated frames
 More consistent per-process performance
 But possibly underutilized memory
Non-Uniform Memory Access
 So far all memory accessed equally
 Many systems are NUMA – speed of access to memory varies
 Consider system boards containing CPUs and memory,
interconnected over a system bus
 Optimal performance comes from allocating memory “close to”
the CPU on which the thread is scheduled
 And modifying the scheduler to schedule the thread on the
same system board when possible
 Solaris solves this by creating lgroups
 Structure to track CPU / memory low-latency groups
 Used by the scheduler and pager
 When possible, schedule all threads of a process and allocate all memory for that process within the lgroup
Thrashing
 If a process does not have “enough” pages, the page-fault rate is
very high
 Page fault to get page
 Replace existing frame
 But quickly need replaced frame back
 This leads to:
 Low CPU utilization
 Operating system thinking that it needs to increase the
degree of multiprogramming
 Another process added to the system
 Thrashing ≡ a process is busy swapping pages in and out
Thrashing (Cont.)
End of Chapter 9