Virtual Memory
OPERATING SYSTEMS CONCEPTS
Dr.K.Kalaiselvi
Dept of Computer Science
Kristu Jayanti College
Bangalore
Background
 Virtual memory – separation of user logical memory from
physical memory.
 Only part of the program needs to be in memory for execution
 Logical address space can therefore be much larger than
physical address space
 Allows address spaces to be shared by several processes
 Allows for more efficient process creation
 Virtual memory can be implemented via:
 Demand paging
 Demand segmentation
Virtual Memory That is Larger Than
Physical Memory
Demand Paging
 Bring a page into memory only when it is needed
 Less I/O needed
 Less memory needed
 Faster response
 More users
 Page is needed ⇒ reference to it
 invalid reference ⇒ abort
 not-in-memory ⇒ bring to memory
 Lazy swapper – never swaps a page into memory unless
page will be needed
 Swapper that deals with pages is a pager
Transfer of a Paged Memory to
Contiguous Disk Space
Valid-Invalid Bit
 With each page table entry a valid–invalid bit is associated
(v → in-memory, i → not-in-memory)
 Initially valid–invalid bit is set to i on all entries
 Example of a page table snapshot:
 During address translation, if the valid–invalid bit in the page table entry
is i ⇒ page fault
[Page table snapshot: each entry holds a frame # and a valid–invalid bit; pages that are in memory are marked v, the remaining entries are marked i]
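A minimal sketch of this check in Python (illustrative only; the page table is modeled as a list of (frame #, bit) pairs, and the names PageFault and translate are assumptions, not a real OS interface):

```python
PAGE_SIZE = 4096  # assumed page size for this sketch

class PageFault(Exception):
    """Models the trap taken when the valid-invalid bit is 'i'."""

def translate(page_table, page_number, offset):
    frame, bit = page_table[page_number]   # each entry: (frame #, 'v' or 'i')
    if bit == 'i':                         # not in memory (or invalid) -> page fault
        raise PageFault(page_number)
    return frame * PAGE_SIZE + offset      # physical address
```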
Page Table When Some Pages Are
Not in Main Memory
Page Fault
 The first reference to a page that is not in memory will trap to the operating system:
page fault
1. Operating system looks at another table to decide:
 - Invalid reference ⇒ abort
- Just not in memory
2. Get empty frame
3. Swap page into frame
4. Reset tables
5. Set validation bit = v
6. Restart the instruction that caused the page fault
Steps in Handling a Page Fault
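A simplified sketch of the six steps above (illustrative Python; physical_memory, free_frames, and the backing-store dict are stand-ins for the real OS structures, and a free frame is assumed to be available):

```python
physical_memory = [None] * 8          # frames
free_frames = list(range(8))          # free-frame list

def service_page_fault(page_table, page_number, backing_store):
    if page_number not in backing_store:                 # 1. invalid reference -> abort
        raise MemoryError("invalid reference - abort")
    frame = free_frames.pop()                            # 2. get an empty frame
    physical_memory[frame] = backing_store[page_number]  # 3. swap the page into the frame
    page_table[page_number] = (frame, 'v')               # 4./5. reset table, set bit = v
    return frame                                         # 6. the instruction is then restarted
```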
Performance of Demand Paging
 Page Fault Rate: 0 ≤ p ≤ 1.0
 if p = 0 no page faults
 if p = 1, every reference is a fault
 Effective Access Time (EAT)
EAT = (1 – p) × memory access time
      + p × (page fault overhead
             + swap page out
             + swap page in
             + restart overhead)
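As a worked example with assumed, typical figures (not from the slides): if a memory access takes 200 ns, a complete page-fault service takes 8 ms, and p = 0.001, then EAT = 0.999 × 200 + 0.001 × 8,000,000 ≈ 8,200 ns, i.e. a slowdown of roughly 40× caused by just one fault per thousand references.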
What Happens If There Is No Free Frame?
 Page replacement – find some page in memory that is not
really in use and swap it out
 algorithm
 performance – want an algorithm which will result in
minimum number of page faults
 Same page may be brought into memory several
times
Page Replacement
 Prevent over-allocation of memory by modifying page-fault
service routine to include page replacement
 Use modify (dirty) bit to reduce overhead of page transfers –
only modified pages are written to disk
 Page replacement completes separation between logical
memory and physical memory – large virtual memory can be
provided on a smaller physical memory
Need For Page Replacement
Basic Page Replacement
1. Find the location of the desired page on disk
2. Find a free frame:
- If there is a free frame, use it
- If there is no free frame, use a page replacement
algorithm to select a victim frame
3. Bring the desired page into the (newly) free frame; update
the page and frame tables
4. Restart the process
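A sketch of step 2 combined with the modify (dirty) bit optimisation (illustrative Python; select_victim stands for whichever replacement algorithm is in use, and the other names are assumptions for the example):

```python
def get_frame(free_frames, resident, page_table, dirty, disk, select_victim):
    """Return a frame for the faulting page, evicting a victim if needed."""
    if free_frames:                         # a free frame exists - use it
        return free_frames.pop()
    victim = select_victim(resident)        # replacement algorithm picks a victim page
    frame, _bit = page_table[victim]
    if victim in dirty:                     # modify (dirty) bit set: write back first
        disk[victim] = frame                # stand-in for the actual write to disk
        dirty.discard(victim)
    page_table[victim] = (None, 'i')        # victim is no longer resident
    resident.remove(victim)
    return frame
```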
Page Replacement
Algorithms
 Want lowest page-fault rate
 Evaluate algorithm by running it on a particular string of
memory references (reference string) and computing the
number of page faults on that string
 In all our examples, the reference string is
1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5
First-In-First-Out (FIFO) Algorithm
 Reference string: 1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5
 3 frames (3 pages can be in memory at a time per process)
 4 frames
 Belady’s Anomaly: more frames ⇒ more page faults
[FIFO frame contents for the reference string: with 3 frames – 9 page faults; with 4 frames – 10 page faults]
FIFO Page Replacement
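A small FIFO simulation (illustrative Python, not from the slides) reproduces these counts and shows Belady's anomaly on this reference string:

```python
from collections import deque

def fifo_faults(refs, num_frames):
    """Count page faults under FIFO replacement."""
    queue, resident, faults = deque(), set(), 0
    for page in refs:
        if page in resident:
            continue
        faults += 1
        if len(queue) == num_frames:        # no free frame: evict the oldest page
            resident.remove(queue.popleft())
        queue.append(page)
        resident.add(page)
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(fifo_faults(refs, 3))   # 9 page faults
print(fifo_faults(refs, 4))   # 10 page faults: more frames, more faults
```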
Optimal Algorithm
 Replace page that will not be used for longest period of time
 4 frames example
1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5
 How do you know this? (It requires knowing future references, so it cannot be implemented in practice)
 Used for measuring how well your algorithm performs
[Optimal replacement with 4 frames on the reference string – 6 page faults]
Optimal Page Replacement
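An illustrative simulation of the optimal policy on the same string; the required look-ahead is why OPT serves only as a yardstick:

```python
def opt_faults(refs, num_frames):
    """Replace the page that will not be used for the longest period of time."""
    frames, faults = [], 0
    for i, page in enumerate(refs):
        if page in frames:
            continue
        faults += 1
        if len(frames) < num_frames:
            frames.append(page)
            continue
        # Evict the resident page whose next use is farthest away (or never used again).
        def next_use(p):
            return refs.index(p, i + 1) if p in refs[i + 1:] else float('inf')
        victim = max(frames, key=next_use)
        frames[frames.index(victim)] = page
    return faults

print(opt_faults([1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5], 4))   # 6 page faults
```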
Least Recently Used (LRU) Algorithm
 Reference string: 1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5
 Counter implementation
 Every page entry has a counter; every time page is
referenced through this entry, copy the clock into the counter
 When a page needs to be replaced, look at the counters to
find the page with the smallest (oldest) value and replace it
[LRU frame contents over the reference string with 4 frames]
LRU Page Replacement
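The counter idea can be sketched as follows (illustrative Python, using the position in the reference string as the clock):

```python
def lru_faults(refs, num_frames):
    """Count page faults under LRU, keeping a last-use counter per page."""
    frames, last_used, faults = [], {}, 0
    for clock, page in enumerate(refs):
        if page not in frames:
            faults += 1
            if len(frames) == num_frames:
                # replace the resident page with the smallest (oldest) counter
                victim = min(frames, key=last_used.get)
                frames[frames.index(victim)] = page
            else:
                frames.append(page)
        last_used[page] = clock            # copy the clock into the counter
    return faults

print(lru_faults([1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5], 4))   # 8 page faults
```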
LRU Algorithm (Cont.)
 Stack implementation – keep a stack of page numbers in
doubly linked form:
 Page referenced:
 move it to the top
 requires 6 pointers to be changed
 No search for replacement – the least recently used page is always at the bottom of the stack
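A compact sketch of the stack idea, using Python's OrderedDict as the doubly linked structure (move_to_end plays the role of "move it to the top"; the class name is illustrative):

```python
from collections import OrderedDict

class LRUStack:
    """Most recently used page at the top of the stack, LRU page at the bottom."""
    def __init__(self, num_frames):
        self.num_frames = num_frames
        self.stack = OrderedDict()          # backed by a doubly linked list

    def reference(self, page):
        if page in self.stack:
            self.stack.move_to_end(page)    # move the referenced page to the top
            return False                    # no page fault
        if len(self.stack) == self.num_frames:
            self.stack.popitem(last=False)  # victim is at the bottom: no search needed
        self.stack[page] = True
        return True                         # page fault

lru = LRUStack(4)
print(sum(lru.reference(p) for p in [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]))   # 8
```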
Counting Algorithms
 Keep a counter of the number of references that have been
made to each page
 LFU Algorithm: replaces page with smallest count
 MFU Algorithm: based on the argument that the page with
the smallest count was probably just brought in and has yet
to be used
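A brief illustrative sketch of LFU; MFU is identical except that the victim is the page with the largest count:

```python
from collections import defaultdict

def lfu_faults(refs, num_frames):
    """Replace the resident page with the smallest reference count."""
    frames, counts, faults = [], defaultdict(int), 0
    for page in refs:
        if page not in frames:
            faults += 1
            if len(frames) == num_frames:
                victim = min(frames, key=counts.get)   # fewest references so far
                frames[frames.index(victim)] = page
            else:
                frames.append(page)
        counts[page] += 1                              # one more reference to this page
    return faults
```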
Allocation of frames
 The number of frames allocated must not exceed the total number of frames available.
 At least a minimum number of frames must be allocated to each process.
 As the number of frames allocated to each process decreases, the number of page faults increases.
Frame allocation:
 Equal allocation: divide m frames among n processes; left-over frames can be used as a buffer pool.
 Proportional allocation: total frames are split among the n processes in proportion to their requirements (see the worked example below)
 ai = (si / S) × m, where S = ∑ si,
si is the size of the virtual memory of process pi, m is the total number of frames, and ai is the number of frames allocated to process pi
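As a worked example with assumed sizes: if m = 62 frames and two processes have s1 = 10 and s2 = 127 pages, then S = 137, so proportional allocation gives a1 = (10/137) × 62 ≈ 4 frames and a2 = (127/137) × 62 ≈ 57 frames, with the one left-over frame going to the buffer pool; equal allocation would instead give each process 31 frames.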
Global vs. Local Allocation
 Global replacement – process selects a replacement frame
from the set of all frames; one process can take a frame from
another
 Local replacement – each process selects from only its own
set of allocated frames
Thrashing
 If a process does not have “enough” pages, the page-fault
rate is very high (high paging activity). This leads to:
 low CPU utilization
 operating system thinks that it needs to increase the degree
of multiprogramming
 another process added to the system
 Thrashing ≡ a process is busy swapping pages in and out
 The effect of thrashing can be reduced by a local replacement
algorithm.
Thrashing (Cont.)
 Locality model
 A process migrates from one locality to another; there are two
types of locality:
 Spatial locality : Once memory is referenced, it is highly
possible that nearby locations will be referenced.
 Temporal locality: memory locations referenced recently are
likely to be referenced again.
 Localities may overlap
Working set model
 The set of pages the process is currently using is called its working set.
 If a page is not used for a certain time (∆), it is dropped from the working set.
 The accuracy of the working set depends on the value of ∆:
If ∆ is too small, it does not cover the entire locality.
If ∆ is too large, it could overlap several localities.
 Usage of the working set model (see the sketch below):
• The OS allocates enough frames to cover the working set of each process.
• If extra frames are available, another process can be started.
• If the sum of the working-set sizes exceeds the total number of frames available, the OS selects and suspends a process.
• The suspended process is restarted later.
Advantages:
Prevents thrashing and optimizes CPU utilization.
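A minimal sketch of computing WS(t, ∆) from a reference string (illustrative; a real OS approximates this with reference bits and periodic timer interrupts):

```python
def working_set(refs, t, delta):
    """Pages referenced in the window of the last `delta` references ending at time t."""
    start = max(0, t - delta + 1)
    return set(refs[start:t + 1])

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(working_set(refs, t=6, delta=4))   # {1, 2, 4, 5}: working-set size 4
```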
Page fault frequency (PFF)
 The main cause of thrashing is a high page-fault frequency.
 Upper and lower bounds on the desired page-fault rate must be established.
 If the page-fault rate is above the upper bound, another frame is allocated to the process.
 If the page-fault rate is below the lower bound, a frame is taken away from the process.
 If the page-fault rate increases and there are no free frames, the process must be suspended.
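The PFF policy can be sketched as a simple control rule per process (the thresholds and names below are assumptions for illustration):

```python
def adjust_allocation(fault_rate, allocated, free_frames, lower=0.02, upper=0.10):
    """Return (new allocation, suspend?) for one process from its page-fault rate."""
    if fault_rate > upper:
        if free_frames > 0:
            return allocated + 1, False    # above the upper bound: give another frame
        return allocated, True             # no free frames left: suspend the process
    if fault_rate < lower and allocated > 1:
        return allocated - 1, False        # below the lower bound: take a frame away
    return allocated, False
```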
Demand segmentation
 Memory is allocated in segments instead of pages.
 Each segment has a segment descriptor which keeps track of the segment size.
 The segment descriptor contains a valid bit which indicates whether the segment is in memory.
 If the segment is not in memory, a trap to the OS (a segment fault) occurs, and the OS swaps in the required segment.
 The accessed bit is set when the segment is either read or written.
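A minimal sketch of a segment descriptor with valid and accessed bits, mirroring the description above (names are illustrative):

```python
from dataclasses import dataclass

@dataclass
class SegmentDescriptor:
    base: int = 0           # where the segment sits in memory when resident
    size: int = 0           # segment length, used for limit checks
    valid: bool = False     # is the segment currently in memory?
    accessed: bool = False  # set whenever the segment is read or written

def access_segment(desc, swap_in):
    if not desc.valid:      # segment fault: trap to the OS
        swap_in(desc)       # the OS swaps the required segment in
        desc.valid = True
    desc.accessed = True
```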