10 Virtual Memory


  1. 1. 10 Virtual Memory
  2. 2. Contents <ul><li>Background </li></ul><ul><li>Demand Paging </li></ul><ul><li>Process Creation </li></ul><ul><li>Page Replacement </li></ul><ul><li>Allocation of Frames </li></ul><ul><li>Thrashing </li></ul><ul><li>Operating System Examples </li></ul><ul><li>Other Considerations </li></ul>
  3. 3. 10.1 Background <ul><li>all the memory-management algorithms outlined so far require that each process be kept entirely in physical memory </li></ul><ul><ul><li>contiguous allocation </li></ul></ul><ul><ul><li>paging </li></ul></ul><ul><ul><li>segmentation </li></ul></ul><ul><li>overlays are an exception, but they require special effort from the programmer </li></ul><ul><ul><li>explicit memory allocation and free operations in the application program </li></ul></ul>
  4. 4. Background <ul><li>real programs, in many cases, do not need the entire program to be kept in memory </li></ul><ul><ul><li>code to handle unusual error conditions </li></ul></ul><ul><ul><li>arrays, lists and tables are often sparse </li></ul></ul><ul><ul><li>even when the entire program is needed, it may not all be needed at the same time (such is the case with overlays) </li></ul></ul>
  5. 5. Background <ul><li>conclusion : if we can keep the program only partially in memory </li></ul><ul><ul><li>we can run a program larger than the physical memory </li></ul></ul><ul><ul><li>more programs could be run at the same time </li></ul></ul>
  6. 6. Virtual Memory <ul><li>virtual memory – separation of user logical memory from physical memory, allows an extremely large virtual memory to be provided to programmers when only a smaller physical memory is available </li></ul><ul><ul><li>only part of the program needs to be in memory for execution </li></ul></ul><ul><ul><li>logical address space can therefore be much larger than physical address space </li></ul></ul><ul><ul><li>allows address spaces to be shared by several processes </li></ul></ul><ul><ul><li>allows for more efficient process creation </li></ul></ul>
  7. 7. virtual memory is larger than physical memory
  8. 8. common implementation <ul><li>virtual memory can be implemented via: </li></ul><ul><ul><li>demand paging </li></ul></ul><ul><ul><ul><li>several systems provide a paged segmentation scheme, i.e., the user view is segmentation, but the OS can implement this view with demand paging </li></ul></ul></ul><ul><ul><li>demand segmentation </li></ul></ul>
  9. 9. 10.2 Demand Paging <ul><li>a demand-paging system is similar to a paging system with swapping </li></ul><ul><li>a lazy swapper – a swapper that never swaps a page into memory unless that page will be needed </li></ul><ul><li>compare: </li></ul><ul><ul><li>a swapper manipulates entire processes </li></ul></ul><ul><ul><li>a pager is concerned with the individual pages of a process </li></ul></ul>
  10. 10. 10.2.1 Basic Concepts <ul><li>use disk space to emulate memory </li></ul><ul><ul><li>a special partition/volume, e.g. Unix, Linux </li></ul></ul><ul><ul><li>a special file, e.g. Windows family </li></ul></ul>
  11. 11. Transfer of a Paged Memory to Contiguous Disk Space
  12. 12. valid-invalid bit in page table <ul><li>hardware support to indicate whether a specific page is in memory </li></ul><ul><ul><li>if it is in memory, the valid bit is set; otherwise, the valid bit is cleared (i.e. invalid) </li></ul></ul>
  13. 13. Page Table When Some Pages Are Not in Main Memory <ul><li>6 logical pages with three pages in memory and three not </li></ul>
  14. 14. Page Fault <ul><li>if every reference accesses a page already in memory, the process runs exactly as though we had brought in all pages </li></ul><ul><li>if there is ever a reference to a page not in memory, the first reference will trap to the OS ⇒ page fault </li></ul>
  15. 15. Page Fault Handling <ul><li>OS looks at another table to decide: </li></ul><ul><ul><li>invalid reference ⇒ abort </li></ul></ul><ul><ul><li>just not in memory </li></ul></ul><ul><li>find a free frame </li></ul><ul><li>swap page into frame </li></ul><ul><li>reset tables, validation bit = 1 </li></ul><ul><li>restart instruction </li></ul>
  16. 16. Steps in Handling a Page Fault
  17. 17. two types of demand paging <ul><li>pure demand paging </li></ul><ul><ul><li>never bring a page into memory until it is required </li></ul></ul><ul><li>demand paging with anticipation </li></ul><ul><ul><li>pre-load some pages in anticipation of their use </li></ul></ul>
  18. 18. 10.2.2 Performance of demand paging <ul><li>Page Fault Rate 0 ≤ p ≤ 1.0 </li></ul><ul><ul><li>if p = 0, no page faults </li></ul></ul><ul><ul><li>if p = 1, every reference is a fault </li></ul></ul><ul><li>Effective Access Time (EAT) </li></ul><ul><li>EAT = (1 – p ) * memory access </li></ul><ul><li>+ p * (service the page fault interrupt </li></ul><ul><li>+ [swap a page out] </li></ul><ul><li>+ swap the page in </li></ul><ul><li>+ restart overhead) </li></ul><ul><ul><li>(see p.326 for detail) </li></ul></ul>
  19. 19. Performance Example <ul><li>memory access time = 100 nanoseconds </li></ul><ul><li>swap page time = 25 milliseconds </li></ul><ul><li>effective access time </li></ul><ul><li>EAT = (1 – p )*100 + p *25,000,000 </li></ul><ul><li>= 100 + 24,999,900* p </li></ul><ul><ul><li>performance depends heavily on the page-fault rate (probability) </li></ul></ul><ul><ul><li>e.g. less than 10% performance loss => p < 0.000 000 4 (see p.327 for detail) </li></ul></ul>
  20. 20. 10.3 Process Creation <ul><li>in the extreme case, a process may be started with no pages in memory </li></ul><ul><ul><li>fast start </li></ul></ul><ul><li>virtual memory allows other benefits during process creation </li></ul><ul><ul><li>copy-on-write </li></ul></ul><ul><ul><li>memory-mapped file </li></ul></ul>
  21. 21. 10.3.1 Copy-on-Write <ul><li>Copy-on-Write allows both parent and child processes to initially share the same pages in memory </li></ul><ul><li>if either process modifies a shared page, only then is the page copied </li></ul><ul><ul><li>Unix, Linux: fork ( ) followed by exec ( ) </li></ul></ul><ul><li>copy-on-write allows more efficient process creation as only modified pages are copied </li></ul><ul><li>currently used by Windows 2000, Linux, and Solaris 2 </li></ul>
  22. 22. Copy-on-Write <ul><li>free pages (e.g. for copy-on-write copies, or for stack or heap growth) are allocated from a pool of zeroed-out pages. </li></ul><ul><li>vfork ( ) system call in some versions of UNIX (e.g. Solaris 2) </li></ul><ul><ul><li>study the manual page of vfork ( ) </li></ul></ul>
  23. 23. 10.3.2 Memory-Mapped Files <ul><li>Memory-mapped file I/O allows file I/O to be treated as routine memory access by mapping a disk block to a page in memory. </li></ul><ul><li>A file is initially read using demand paging. A page-sized portion of the file is read from the file system into a physical page. Subsequent reads/writes to/from the file are treated as ordinary memory accesses . </li></ul><ul><li>Simplifies file access by treating file I/O through memory rather than read ( ) write ( ) system calls. </li></ul><ul><li>Also allows several processes to map the same file allowing the pages in memory to be shared. </li></ul>
  24. 24. memory-mapped files
  25. 25. 10.4 Page Replacement <ul><li>one process may be bigger than the physical memory </li></ul><ul><li>total memory of all processes may be bigger than the physical memory </li></ul><ul><ul><li>over-allocating memory </li></ul></ul><ul><li>solution </li></ul><ul><ul><li>swapping </li></ul></ul><ul><ul><li>page replacement : prevent over-allocation of memory by modifying page-fault service routine to include page replacement </li></ul></ul>
  26. 26. Need for page replacement
  27. 27. 10.4.1 Basic Scheme <ul><li>find the location of the desired page on disk </li></ul><ul><li>find a free frame: - if there is a free frame, use it. - if there is no free frame, use a page replacement algorithm to select a victim frame - write the victim page to the disk </li></ul><ul><li>read the desired page into the (newly) free frame; update the page and frame tables </li></ul><ul><li>restart the process </li></ul>
  28. 28. Page Replacement
  29. 29. Page Replacement <ul><li>if no frames are free, two page transfers are required </li></ul><ul><li>use modify ( dirty ) bit to reduce overhead of page transfers – only modified pages are written to disk </li></ul><ul><ul><li>the modify bit for a page is set by the hardware whenever any word or byte in the page is written into </li></ul></ul>
  30. 30. Page Replacement Algorithm <ul><li>two problems to implement demand paging </li></ul><ul><ul><li>frame-allocation algorithm : how many frames to allocate to each process </li></ul></ul><ul><ul><li>page-replacement algorithm : how to select the frames that are to be replaced </li></ul></ul><ul><ul><li>expensive disk I/O makes a good design of these algorithms an important task: we want lowest page-fault rate </li></ul></ul>
  31. 31. Reference String <ul><li>many algorithms, how to evaluate? </li></ul><ul><ul><li>lowest page-fault rate </li></ul></ul><ul><li>we must evaluate an algorithm by running it on a particular string of memory references </li></ul><ul><ul><li>(page) reference string : page numbers only, with adjacent duplications eliminated </li></ul></ul>
  32. 32. reference string example <ul><li>original memory access sequence: 0100, 0432, 0101, 0612, 0102, 0103, 0104, 0101 , 0611, 0102, 0103, 0104, 0101, 0610, 0102, 0103, 0104, 0101 , 0609, 0102, 0105 </li></ul><ul><li>page reference sequence/string: 1, 4, 1, 6, 1, 1, 1, 1 , 6, 1, 1, 1, 1 , 6, 1, 1, 1, 1 , 6, 1, 1 </li></ul><ul><li>reference string : 1, 4, 1, 6, 1, 6, 1, 6, 1, 6, 1 </li></ul>
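The derivation above can be reproduced mechanically. This sketch assumes, as the slide's addresses suggest, a page size of 100 bytes; the variable names are ours.

```python
# Deriving the slide's reference string: divide each address by the page
# size (100 here) to get page numbers, then drop immediately repeated
# page numbers (adjacent duplications).

addresses = [100, 432, 101, 612, 102, 103, 104, 101, 611, 102, 103, 104,
             101, 610, 102, 103, 104, 101, 609, 102, 105]

pages = [a // 100 for a in addresses]
reference_string = [p for i, p in enumerate(pages)
                    if i == 0 or p != pages[i - 1]]

print(reference_string)   # [1, 4, 1, 6, 1, 6, 1, 6, 1, 6, 1]
```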
  33. 33. page-faults versus number of frames <ul><li>for the given reference string : 1, 4, 1, 6, 1, 6, 1, 6, 1, 6, 1 </li></ul><ul><li>if we have three (or more) frames: 3 page faults </li></ul><ul><li>if we have only one frame: 11 page faults </li></ul>
  34. 34. 10.4.2 FIFO Page Replacement <ul><li>First-In-First-Out Algorithm: </li></ul><ul><ul><li>use a FIFO queue to hold all pages in memory </li></ul></ul><ul><ul><li>when a page is brought into memory, insert it at the tail </li></ul></ul><ul><ul><li>when a free frame is needed, we replace the page at the head of the queue </li></ul></ul> (total 15 page faults in the textbook example)
  35. 35. Example 2: Belady’s Anomaly <ul><li>reference string: 1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5 </li></ul><ul><li>3 frames (3 pages can be in memory at a time per process): 9 page faults </li></ul><ul><li>4 frames: 10 page faults </li></ul><ul><li>FIFO Replacement – Belady’s Anomaly </li></ul><ul><ul><li>we expect: more frames ⇒ fewer page faults </li></ul></ul>
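A small FIFO simulator reproduces the anomaly. This is a sketch; the function name is ours, and the 20-reference string at the end is our assumption about which textbook example lies behind the "15 page faults" figure on the previous slide.

```python
from collections import deque

def fifo_faults(refs, nframes):
    """Count page faults under FIFO replacement."""
    frames, queue, faults = set(), deque(), 0
    for page in refs:
        if page not in frames:
            faults += 1
            if len(frames) == nframes:       # no free frame: evict the oldest
                frames.discard(queue.popleft())
            frames.add(page)
            queue.append(page)
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(fifo_faults(refs, 3))   # 9
print(fifo_faults(refs, 4))   # 10 -- more frames, yet MORE faults (the anomaly)

# the textbook's usual 3-frame example (our assumed reference string):
print(fifo_faults([7, 0, 1, 2, 0, 3, 0, 4, 2, 3,
                   0, 3, 2, 1, 2, 0, 1, 7, 0, 1], 3))   # 15
```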
  36. 36. 10.4.3 Optimal Page Replacement <ul><li>replace the page that will not be used for the longest period of time </li></ul><ul><li>difficult to implement, because it requires future knowledge of the reference string </li></ul> (total 9 page faults in the textbook example)
  37. 37. Example 2 <ul><li>4 frames example: reference string 1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5 yields 6 page faults </li></ul><ul><li>the optimal algorithm is used as a yardstick for measuring how well other algorithms perform </li></ul>
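Because it looks forward in the reference string, the optimal policy is easy to simulate offline even though it cannot be implemented online. A sketch (function name ours; the 20-reference string is our assumption about the textbook example behind the "9 page faults" annotation):

```python
def opt_faults(refs, nframes):
    """Count faults under optimal (farthest-future-use) replacement."""
    frames, faults = set(), 0
    for i, page in enumerate(refs):
        if page in frames:
            continue
        faults += 1
        if len(frames) == nframes:
            # evict the resident page whose next use is farthest away;
            # pages never referenced again count as infinitely far
            def next_use(p):
                rest = refs[i + 1:]
                return rest.index(p) if p in rest else len(refs)
            frames.discard(max(frames, key=next_use))
        frames.add(page)
    return faults

print(opt_faults([1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5], 4))   # 6

# our assumed textbook 3-frame example:
print(opt_faults([7, 0, 1, 2, 0, 3, 0, 4, 2, 3,
                  0, 3, 2, 1, 2, 0, 1, 7, 0, 1], 3))          # 9
```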
  38. 38. 10.4.4 Least Recently Used (LRU) <ul><li>LRU replacement associates with each page the time of that page’s last use </li></ul><ul><li>when a page must be replaced, LRU chooses the page that has not been used for the longest period of time </li></ul> (total 12 page faults in the textbook example)
  39. 39. Example 2 <ul><li>reference string: 1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5 </li></ul><ul><li>with 4 frames, LRU incurs 8 page faults </li></ul>
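LRU can be simulated with a recency-ordered list, which plays the role of the stack implementation described on the next slide. A sketch (function name ours; the second string is our assumption about the textbook example behind the "12 page faults" annotation):

```python
def lru_faults(refs, nframes):
    """Count faults under LRU; a list ordered LRU -> MRU stands in for
    the hardware time-of-use tracking."""
    frames, faults = [], 0
    for page in refs:
        if page in frames:
            frames.remove(page)          # refresh: becomes most recently used
        else:
            faults += 1
            if len(frames) == nframes:
                frames.pop(0)            # evict the least recently used page
        frames.append(page)
    return faults

print(lru_faults([1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5], 4))   # 8

# our assumed textbook 3-frame example:
print(lru_faults([7, 0, 1, 2, 0, 3, 0, 4, 2, 3,
                  0, 3, 2, 1, 2, 0, 1, 7, 0, 1], 3))          # 12
```

Note that on the Belady string, LRU with 3 frames gives 10 faults and with 4 frames gives 8: unlike FIFO, adding frames never hurts a stack algorithm like LRU.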
  40. 40. LRU implementation <ul><li>counter implementation </li></ul><ul><ul><li>every page entry has a time-of-use field </li></ul></ul><ul><ul><li>a logical clock, incremented for every memory reference </li></ul></ul><ul><ul><li>every time a page is referenced, copy the clock into the time-of-use field </li></ul></ul><ul><ul><li>when a page needs to be replaced, look for the page with the smallest time-of-use field </li></ul></ul><ul><li>difficulties </li></ul><ul><ul><li>requires a search of the page table </li></ul></ul><ul><ul><li>a write to memory (the time-of-use field in the page table) for each memory access </li></ul></ul><ul><ul><li>page-table maintenance across context switches </li></ul></ul><ul><ul><li>overflow of the clock </li></ul></ul>
  41. 41. LRU implementation <ul><li>Stack implementation </li></ul><ul><ul><li>keep a stack of page numbers in a doubly linked list </li></ul></ul><ul><ul><li>when a page is referenced, move it to the top </li></ul></ul><ul><ul><ul><li>requires 6 pointers to be changed </li></ul></ul></ul><ul><ul><li>always replace the page at the bottom </li></ul></ul><ul><ul><ul><li>no search for replacement </li></ul></ul></ul>
  42. 42. stack implementation example
  43. 43. Hardware support needed! <ul><li>neither optimal replacement nor LRU replacement suffers from Belady’s anomaly </li></ul><ul><li>both implementations of LRU need special hardware support , or performance degrades by a factor of at least ten (e.g. if every update is done in software via an interrupt) </li></ul><ul><ul><li>the updating of the clock fields or stack must be done for every memory reference </li></ul></ul>
  44. 44. 10.4.5 LRU Approximation <ul><li>Reference Bit </li></ul><ul><ul><li>with each page associate a bit, initially = 0 </li></ul></ul><ul><ul><li>when the page is referenced, the bit is set to 1 </li></ul></ul><ul><ul><li>replace a page whose bit is 0 (if one exists) </li></ul></ul><ul><ul><ul><li>we do not know the order, however </li></ul></ul></ul><ul><li>Additional Reference Bits </li></ul><ul><ul><li>record the reference bits at regular intervals, say, every 100ms </li></ul></ul><ul><ul><li>keep a right-shifting history byte for each page; the current reference bit shifts into the left-most bit </li></ul></ul><ul><ul><li>choose the page with the lowest history value </li></ul></ul><ul><ul><ul><li>(see Section 10.4.5.1 on page 341 for detail) </li></ul></ul></ul>
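The additional-reference-bits scheme is just a shift register per page. A minimal sketch (the function name and 8-bit width choice are ours; 8 bits matches the textbook's history byte):

```python
# Each page keeps an 8-bit history. At every sampling interval, the page's
# current reference bit is shifted in from the left (high-order) end, so a
# smaller history value means "used longer ago" -- the replacement victim.

def tick(history, referenced):
    """Shift the reference bit into the high-order bit of an 8-bit history."""
    return (history >> 1) | (0x80 if referenced else 0)

h = 0
h = tick(h, True)    # referenced this interval     -> 0b10000000
h = tick(h, False)   # idle one interval            -> 0b01000000
h = tick(h, True)    # referenced again             -> 0b10100000
print(bin(h))        # 0b10100000
```

A page referenced in every interval converges to 0b11111111; one never referenced decays to 0.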
  45. 45. LRU Approximation <ul><li>Second-Chance Algorithm </li></ul><ul><ul><li>basically a FIFO algorithm + a reference bit </li></ul></ul><ul><ul><li>all frames form a circular queue </li></ul></ul><ul><ul><li>inspect the current frame </li></ul></ul><ul><ul><ul><li>if the reference bit is set, clear it, and skip to the next frame </li></ul></ul></ul><ul><ul><ul><li>otherwise, replace it </li></ul></ul></ul><ul><li>if a frame is referenced frequently enough, it will never be replaced </li></ul>
  46. 46. second-chance algorithm
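The victim-selection step of the second-chance (clock) algorithm can be sketched as follows; the function name and the example frame contents are illustrative, not from the slides.

```python
def second_chance_victim(frames, ref_bits, hand):
    """Pick a victim with the clock (second-chance) algorithm.
    frames: list of resident pages; ref_bits: parallel list of 0/1;
    hand: current clock position. Clears each set bit it passes
    (the "second chance") and returns (victim index, new hand)."""
    while True:
        if ref_bits[hand]:
            ref_bits[hand] = 0                   # give a second chance
            hand = (hand + 1) % len(frames)
        else:
            return hand, (hand + 1) % len(frames)

frames = ['A', 'B', 'C', 'D']
ref_bits = [1, 0, 1, 1]
victim, hand = second_chance_victim(frames, ref_bits, 0)
print(frames[victim])   # B -- first frame reached with reference bit 0
print(ref_bits)         # [0, 0, 1, 1] -- A's bit was cleared in passing
```

If every bit is set, the hand sweeps the whole circle clearing bits and the algorithm degenerates to plain FIFO, as the textbook notes.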
  47. 47. LRU Approximation <ul><li>Enhanced Second-Chance Algorithm </li></ul><ul><ul><li>a FIFO circular queue </li></ul></ul><ul><ul><li>reference bit </li></ul></ul><ul><ul><li>modify bit ( dirty bit ) </li></ul></ul><ul><li>four classes </li></ul><ul><ul><li>(0, 0) neither recently used nor modified </li></ul></ul><ul><ul><li>(0, 1) not recently used, but modified (dirty) </li></ul></ul><ul><ul><li>(1, 0) recently used, but clean </li></ul></ul><ul><ul><li>(1, 1) recently used and modified </li></ul></ul><ul><li>examine the class to which a page belongs </li></ul><ul><li>replace the first page encountered in the lowest nonempty class </li></ul><ul><ul><li>drawback: may have to scan the circular queue several times </li></ul></ul><ul><ul><li>used in the Macintosh </li></ul></ul>
  48. 48. 10.4.6 Counting-based Algorithms <ul><li>keep a counter of the number of references to each page </li></ul><ul><li>least-frequently-used (LFU) algorithm </li></ul><ul><ul><li>the page with the smallest count is replaced </li></ul></ul><ul><ul><li>counters are shifted right at regular intervals, forming an exponentially decaying average </li></ul></ul><ul><li>most-frequently-used (MFU) algorithm </li></ul><ul><ul><li>based on the argument that the page with the smallest count was probably just brought in and has yet to be used </li></ul></ul>
  49. 49. 10.5 Allocation of Frames <ul><li>each process needs a minimum number of frames </li></ul><ul><ul><li>performance: fewer frames => more page faults </li></ul></ul><ul><ul><li>architecture limit: machine instructions’ requirements </li></ul></ul><ul><li>example: the IBM 370 needs at least 6 frames to handle the MVC instruction: </li></ul><ul><ul><li>the instruction is 6 bytes and might span 2 pages </li></ul></ul><ul><ul><li>2 pages to handle from </li></ul></ul><ul><ul><li>2 pages to handle to </li></ul></ul><ul><li>two major allocation schemes </li></ul><ul><ul><li>fixed allocation </li></ul></ul><ul><ul><li>priority allocation </li></ul></ul>
  50. 50. 10.5.2 Allocation Algorithms <ul><li>equal allocation : each process gets an equal share of total frames </li></ul><ul><li>proportional allocation : allocate available memory to each process according to its size </li></ul><ul><ul><li>priority-based allocation </li></ul></ul><ul><li>in all schemes, the allocation to each process may vary according to the multiprogramming level </li></ul>
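Proportional allocation gives process i roughly a_i = s_i / S × m frames, where s_i is its size, S the total size, and m the number of free frames. A sketch in the textbook's style (the two process sizes and the frame count are the standard worked example; the function name is ours):

```python
# Proportional frame allocation: each process gets a share of the m free
# frames proportional to its size. Integer truncation may leave a frame
# or two unassigned, to be handed out by any tie-breaking policy.

def proportional_allocation(sizes, m):
    S = sum(sizes)
    return [s * m // S for s in sizes]

# a 10-page process and a 127-page process sharing 62 free frames
print(proportional_allocation([10, 127], 62))   # [4, 57]
```

Equal allocation would instead give each process 31 frames, wasting most of the small process's share.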
  51. 51. 10.5.3 Global vs. Local Allocation <ul><li>global replacement – process selects a replacement frame from the set of all frames </li></ul><ul><ul><li>one process can take a frame from another </li></ul></ul><ul><ul><ul><li>e.g. a high priority process can take a frame from a lower priority process </li></ul></ul></ul><ul><ul><li>problem: a process cannot control its own page-fault rate </li></ul></ul><ul><ul><li>results in greater system throughput </li></ul></ul><ul><li>local replacement – each process selects from only its own set of allocated frames </li></ul><ul><ul><li>the number of frames allocated to a process does not change </li></ul></ul>
  52. 52. 10.6 Thrashing <ul><li>If a process does not have “enough” pages, the page-fault rate is very high. This leads to: </li></ul><ul><ul><li>low CPU utilization </li></ul></ul><ul><ul><li>the operating system thinks that it needs to increase the degree of multiprogramming </li></ul></ul><ul><ul><li>another process is added to the system, which worsens the condition </li></ul></ul><ul><li>thrashing ≡ a process is busy swapping pages in and out </li></ul>
  53. 53. 10.6.1 Cause of Thrashing <ul><li>Why does paging work? -- Locality model </li></ul><ul><ul><li>Process migrates from one locality to another </li></ul></ul><ul><ul><li>Localities may overlap </li></ul></ul>
  54. 54. Locality in a Memory-Reference Pattern <ul><li>as a process executes, it moves from locality to locality </li></ul><ul><li>a locality is a set of pages that are actively used together </li></ul><ul><li>e.g. a subroutine defines a new locality </li></ul><ul><li>Why does thrashing occur? ⇒ size of locality > total memory size </li></ul>
  55. 55. 10.6.2 Working-Set Model <ul><li>Δ ≡ working-set window: a fixed number of page references. Example: 10,000 instructions (example below: 10 pages) </li></ul><ul><li>Working-Set Size WSS i (working set of Process P i ) = total number of pages referenced in the most recent Δ (varies in time) </li></ul><ul><ul><li>if Δ is too small, it will not encompass the entire locality. </li></ul></ul><ul><ul><li>if Δ is too large, it will encompass several localities. </li></ul></ul><ul><ul><ul><li>if Δ = ∞, it will encompass the entire program. </li></ul></ul></ul>
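Writing Δ for the working-set window, the working set at time t is just the set of distinct pages among the last Δ references. A sketch with a made-up reference string (the function name and the string are illustrative, not the textbook's figure):

```python
def working_set(refs, t, delta):
    """Distinct pages referenced in the most recent `delta` references
    ending at time t (the working-set window)."""
    window = refs[max(0, t - delta + 1): t + 1]
    return set(window)

refs = [1, 2, 5, 6, 2, 1, 2, 3, 7, 6, 7, 7, 1]

# a wide window near a locality transition captures many pages ...
print(sorted(working_set(refs, 9, 10)))   # [1, 2, 3, 5, 6, 7]

# ... while a narrow window deep inside one locality captures few
print(len(working_set(refs, 12, 4)))      # 3
```

This shows concretely why the choice of Δ matters: the same reference string yields very different WSS values as the window grows or shrinks.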
  56. 56. Working-Set Model <ul><li>D = Σ WSS i ≡ total demand for frames </li></ul><ul><li>if D > m ⇒ thrashing </li></ul><ul><li>policy : if D > m , then suspend one of the processes (and swap it out completely) </li></ul><ul><li>the working-set strategy prevents thrashing while keeping the degree of multiprogramming as high as possible </li></ul><ul><ul><li>it optimizes CPU utilization </li></ul></ul>
  57. 57. Keeping Track of the Working Set <ul><li>approximate with an interval timer + a reference bit </li></ul><ul><li>Example: Δ = 10,000 references </li></ul><ul><ul><li>timer interrupts about every 5000 references </li></ul></ul><ul><ul><li>keep in memory 2 bits for each page </li></ul></ul><ul><ul><li>whenever the timer interrupts, copy each page’s reference bit into its in-memory bits, then reset all reference bits to 0 </li></ul></ul><ul><ul><li>if one of the bits in memory = 1 ⇒ page is in the working set </li></ul></ul><ul><li>why is this not completely accurate? </li></ul><ul><li>improvement: 10 reference bits, and interrupt every 1000 time units </li></ul>
  58. 58. 10.6.3 Page-Fault Frequency Scheme <ul><li>establish an “acceptable” page-fault rate </li></ul><ul><ul><li>if the actual rate is too low, the process loses frames </li></ul></ul><ul><ul><li>if the actual rate is too high, the process gains frames </li></ul></ul>
  59. 59. 10.7 Case Study <ul><li>Windows NT </li></ul><ul><li>Solaris 2 </li></ul><ul><li>Linux </li></ul>
  60. 60. 10.7.1 Windows NT <ul><li>Uses demand paging with clustering. Clustering brings in pages surrounding the faulting page. </li></ul><ul><li>Processes are assigned a working set minimum and a working set maximum . </li></ul><ul><ul><li>The working set minimum is the minimum number of pages the process is guaranteed to have in memory. </li></ul></ul><ul><ul><li>A process may be assigned up to as many pages as its working set maximum. </li></ul></ul><ul><li>When the amount of free memory in the system falls below a threshold, automatic working set trimming is performed to restore the amount of free memory. </li></ul><ul><ul><li>Working set trimming removes pages from processes that have pages in excess of their working set minimum. </li></ul></ul>
  61. 61. Windows NT <ul><li>local page-replacement policy </li></ul><ul><ul><li>on single x86 processor – a variation of the clock algorithm </li></ul></ul><ul><ul><li>on multiprocessor and Alpha – a variation of FIFO </li></ul></ul><ul><li>How to determine working-set minimum and maximum? </li></ul><ul><ul><li>a mystery not stated by the textbook </li></ul></ul>
  62. 62. 10.7.2 Solaris 2 <ul><li>maintains a list of free pages to assign to faulting processes </li></ul><ul><li>lotsfree – threshold parameter to begin paging </li></ul><ul><li>Paging is performed by the pageout process. </li></ul><ul><ul><li>pageout scans pages using a modified clock algorithm </li></ul></ul><ul><ul><li>scanrate is the rate at which pages are scanned </li></ul></ul><ul><ul><ul><li>it ranges from slowscan to fastscan </li></ul></ul></ul><ul><ul><li>pageout is called more frequently depending upon the amount of free memory available </li></ul></ul>
  63. 63. Solaris Page Scanner
  64. 64. Supplementary: Linux Memory Management (20.6) <ul><li>Linux’s physical memory-management system deals with allocating and freeing pages, groups of pages, and small blocks of memory. </li></ul><ul><li>It has additional mechanisms for handling virtual memory, memory mapped into the address space of running processes. </li></ul>
  65. 65. Splitting of Memory in a Buddy Heap
  66. 66. Managing Physical Memory <ul><li>The page allocator allocates and frees all physical pages; it can allocate ranges of physically-contiguous pages on request. </li></ul><ul><li>The allocator uses a buddy-heap algorithm to keep track of available physical pages. </li></ul><ul><ul><li>Each allocatable memory region is paired with an adjacent partner. </li></ul></ul><ul><ul><li>Whenever two allocated partner regions are both freed up they are combined to form a larger region. </li></ul></ul><ul><ul><li>If a small memory request cannot be satisfied by allocating an existing small free region, then a larger free region will be subdivided into two partners to satisfy the request. </li></ul></ul><ul><li>Memory allocations in the Linux kernel occur either statically ( drivers reserve a contiguous area of memory during system boot time) or dynamically (via the page allocator ). </li></ul>
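The split-and-coalesce behavior described above can be illustrated with a toy buddy allocator. This is a sketch of the idea only, not Linux's mm code: the class name is ours, offsets stand in for physical addresses, and the arena and block sizes are made-up powers of two.

```python
class BuddyAllocator:
    """Toy buddy-heap allocator over a power-of-two arena."""

    def __init__(self, total, min_block):
        self.total, self.min_block = total, min_block
        self.free = {total: [0]}              # block size -> free offsets

    def alloc(self, size):
        block = self.min_block
        while block < size:                   # round up to a power-of-two block
            block *= 2
        cand = block
        while not self.free.get(cand):        # find a free block big enough
            cand *= 2
            if cand > self.total:
                raise MemoryError("no block large enough")
        off = self.free[cand].pop()
        while cand > block:                   # split, freeing the upper buddy
            cand //= 2
            self.free.setdefault(cand, []).append(off + cand)
        return off

    def free_block(self, off, size):
        while size < self.total:
            buddy = off ^ size                # buddy's address differs in one bit
            peers = self.free.setdefault(size, [])
            if buddy in peers:                # partner also free: coalesce
                peers.remove(buddy)
                off, size = min(off, buddy), size * 2
            else:
                peers.append(off)
                return
        self.free.setdefault(size, []).append(off)

a = BuddyAllocator(total=256, min_block=16)
x = a.alloc(16)        # splits 256 -> 128 -> 64 -> 32 -> 16
print(x)               # 0
a.free_block(x, 16)    # buddies recombine all the way up
print(a.free[256])     # [0] -- fully coalesced again
```

The `off ^ size` trick works because a block of size 2^k is always 2^k-aligned, so its buddy's offset differs in exactly that one bit.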
  67. 67. Virtual Memory <ul><li>The VM system maintains the address space visible to each process: it creates pages of virtual memory on demand, and manages the loading of those pages from disk or their swapping back out to disk as required </li></ul><ul><li>The VM manager maintains two separate views of a process’s address space: </li></ul><ul><ul><li>a logical view describing instructions concerning the layout of the address space. The address space consists of a set of non-overlapping regions , each representing a continuous, page-aligned subset of the address space. </li></ul></ul><ul><ul><li>a physical view of each address space which is stored in the hardware page tables for the process. </li></ul></ul>
  68. 68. Virtual Memory (Cont.) <ul><li>on executing a new program, the process is given a new, completely empty virtual-address space; the program-loading routines populate the address space with virtual-memory regions. </li></ul><ul><li>creating a new process with fork involves creating a complete copy of the existing process’s virtual address space </li></ul><ul><ul><li>The kernel copies the parent process’s VMA descriptors, then creates a new set of page tables for the child. </li></ul></ul><ul><ul><li>The parent’s page tables are copied directly into the child’s, with the reference count of each page covered being incremented. </li></ul></ul><ul><ul><li>After the fork, the parent and child share the same physical pages of memory in their address spaces. </li></ul></ul>
  69. 69. Virtual Memory (Cont.) <ul><li>The VM paging system relocates pages of memory from physical memory out to disk when the memory is needed for something else. </li></ul><ul><li>The VM paging system can be divided into two sections: </li></ul><ul><ul><li>the pageout-policy algorithm decides which pages to write out to disk, and when. </li></ul></ul><ul><ul><li>the paging mechanism actually carries out the transfer, and pages data back into physical memory as needed. </li></ul></ul>
  70. 70. 10.9 Summary <ul><li>virtual memory makes it possible to execute a process whose logical address space is larger than the available physical address space </li></ul><ul><li>virtual memory increases multiprogramming level , thus CPU utilization and throughput </li></ul>
  71. 71. Summary <ul><li>pure demand paging </li></ul><ul><ul><li>backing store </li></ul></ul><ul><ul><li>page fault </li></ul></ul><ul><ul><li>page table </li></ul></ul><ul><ul><li>OS internal frame table </li></ul></ul><ul><li>page fault rate low => performance acceptable </li></ul>
  72. 72. Summary <ul><li>page replacement algorithms </li></ul><ul><ul><li>FIFO </li></ul></ul><ul><ul><ul><li>Belady’s anomaly </li></ul></ul></ul><ul><ul><li>optimal </li></ul></ul><ul><ul><li>Least Recently Used (LRU) </li></ul></ul><ul><ul><ul><li>reference bit </li></ul></ul></ul><ul><ul><ul><li>additional reference bits </li></ul></ul></ul><ul><ul><ul><li>second chance algorithm (clock algorithm) </li></ul></ul></ul><ul><ul><ul><li>enhanced second chance algorithm </li></ul></ul></ul><ul><ul><li>Counting-based </li></ul></ul><ul><ul><ul><li>Least Frequently Used (LFU) </li></ul></ul></ul>
  73. 73. Summary <ul><li>frame allocation policy </li></ul><ul><ul><li>fixed (i.e. equal share) </li></ul></ul><ul><ul><li>proportional (to program size) </li></ul></ul><ul><ul><li>priority-based </li></ul></ul><ul><ul><li>static, or local, page replacement </li></ul></ul><ul><ul><ul><li>supported by working-set model </li></ul></ul></ul><ul><ul><li>dynamic, or global, page replacement </li></ul></ul>
  74. 74. Summary <ul><li>working-set model </li></ul><ul><ul><li>locality </li></ul></ul><ul><ul><li>the WS is the set of pages in the current locality </li></ul></ul><ul><li>thrash </li></ul><ul><ul><li>if a process does not have enough memory for its working set, it will thrash </li></ul></ul>
  75. 75. Homework <ul><li>paper </li></ul><ul><ul><li>2, 4, 5, 8, 11, 17, 20 </li></ul></ul><ul><li>oral </li></ul><ul><ul><li>1, 6, 7, 9, 16, 18 </li></ul></ul><ul><li>lab </li></ul><ul><ul><li>21 </li></ul></ul><ul><li>supplementary materials </li></ul><ul><ul><li>Intel 80x86 protected mode operation </li></ul></ul>
