Linux Memory: Presentation Transcript

  • Principles of Virtual Memory: Virtual Memory, Paging, Segmentation
  • Overview
    • Virtual Memory
    • Paging
    • Virtual Memory and Linux
  • 1. Virtual Memory
    • 1.1 Why Virtual Memory (VM)?
    • 1.2 What is VM ?
    • 1.3 The Mapping Process
    • 1.4 VM: Features
    • 1.5 VM: Advantages
    • 1.6 VM: Disadvantages
    • 1.7 VM: Implementation
  • 1.1 Why Virtual Memory (VM)?
    • Shortage of memory
      • Efficient memory management needed
      • Process may be too big for physical memory
      • More active processes than physical memory can hold
      [Figure: physical memory holding the OS and Processes 1-4]
    • Requirements of multiprogramming
      • Efficient protection scheme
      • Simple way of sharing
  • 1.2 What is VM?
    • Program:
      • ....
      • Mov AX, 0xA0F4
      • ....
    [Figure: a virtual address (0xA0F4) in a "piece" of virtual memory is translated by the mapping unit (MMU), using a per-process table, to a physical address (0xC0F4) in a "piece" of physical memory. Note: it does not matter at which physical address a "piece" of VM is placed, since the corresponding addresses are mapped by the mapping unit.]
  • 1.3 The Mapping Process
    • Usually every process has its own mapping table → its own virtual address space (assumed from now on)
    • Not every "piece" of VM has to be present in PM
      • "Pieces" may be loaded from HDD as they are referenced
      • Rarely used "pieces" may be discarded or written out to disk (→ swapping)
    [Flowchart: the MMU checks the virtual address against the mapping table; if the "piece" is in physical memory, the address is translated; otherwise a memory access fault occurs, the OS brings the "piece" in from HDD and adjusts the mapping table, and the access is retried. A C sketch of this flow follows.]
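    • A minimal C sketch of this translation flow, assuming a hypothetical per-process mapping table; names such as mapping_entry and bring_in_from_disk are illustrative, not real Linux APIs:

      #include <stdbool.h>
      #include <stdint.h>

      #define PIECE_SIZE 4096u                 /* assumed size of one "piece" */

      struct mapping_entry {                   /* one entry per "piece" of VM */
          bool     present;                    /* piece currently in physical memory? */
          uint32_t phys_base;                  /* where the piece starts in PM */
      };

      extern struct mapping_entry mapping_table[];     /* one table per process */
      extern void bring_in_from_disk(uint32_t piece);  /* OS: load piece, set entry */

      uint32_t translate(uint32_t vaddr)
      {
          uint32_t piece  = vaddr / PIECE_SIZE;
          uint32_t offset = vaddr % PIECE_SIZE;

          if (!mapping_table[piece].present)   /* memory access fault */
              bring_in_from_disk(piece);       /* OS brings piece in, adjusts table */
          return mapping_table[piece].phys_base + offset;
      }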
  • 1.4 VM: Features Swapping
    • Danger: Thrashing:
      • A "piece" just swapped out is immediately requested again
      • The system swaps in / out all the time, no real work is done
    • Thus: the "piece" for swap-out has to be chosen carefully
      • Keep track of "piece" usage ("age" of the piece)
      • Hopefully a "piece" used frequently lately will be used again in the near future (principle of locality!)
    [Flowchart: on lack of memory there is no need to swap out a complete process: find a rarely used "piece" and adjust the mapping table; if the "piece" was modified, write it out to disk and save its HDD location, otherwise simply discard it. A sketch of one possible victim-selection policy follows.]
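    • The slides leave the replacement policy open; as one illustration, a C sketch that picks the oldest (least recently used) resident piece as the swap-out victim, using a hypothetical per-piece age counter:

      #include <stdbool.h>
      #include <stdint.h>

      #define NUM_PIECES 1024u

      struct piece_info {
          bool     present;    /* piece currently in physical memory */
          bool     modified;   /* dirty: must be written out before discarding */
          uint32_t age;        /* ticks since last reference (updated by the OS) */
      };

      extern struct piece_info pieces[NUM_PIECES];
      extern void write_to_disk(uint32_t piece);   /* illustrative, not a real API */

      /* Pick the present piece with the highest age (least recently used). */
      uint32_t choose_victim(void)
      {
          uint32_t victim = 0, best_age = 0;
          for (uint32_t i = 0; i < NUM_PIECES; i++) {
              if (pieces[i].present && pieces[i].age >= best_age) {
                  best_age = pieces[i].age;
                  victim   = i;
              }
          }
          if (pieces[victim].modified)     /* only dirty pieces go out to disk */
              write_to_disk(victim);
          pieces[victim].present = false;  /* OS adjusts the mapping table */
          return victim;
      }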
  • 1.4 VM: Features Protection
    • Each process has its own virtual address space
      • Processes invisible to each other
      • A process cannot access another process's memory
    • The MMU checks protection bits on memory access (during address mapping)
      • "Pieces" can be protected from being written to, being executed or even being read
      • The system can distinguish different protection levels (user / kernel mode)
    • Write protection can be used to implement copy on write (→ Sharing); see the mprotect sketch below
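    • These protection bits can be observed from user space through POSIX mprotect(); a small sketch (not from the slides) that write-protects one page and then restores write access:

      #include <stdio.h>
      #include <string.h>
      #include <sys/mman.h>
      #include <unistd.h>

      int main(void)
      {
          long pagesize = sysconf(_SC_PAGESIZE);

          /* One anonymous, readable and writable page. */
          char *p = mmap(NULL, pagesize, PROT_READ | PROT_WRITE,
                         MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
          if (p == MAP_FAILED) { perror("mmap"); return 1; }

          strcpy(p, "hello");                    /* allowed: page is writable */

          /* Clear the write bit: further writes would raise SIGSEGV. */
          if (mprotect(p, pagesize, PROT_READ) != 0) { perror("mprotect"); return 1; }
          printf("read-only now: %s\n", p);      /* reading is still allowed */

          /* Restore write permission and clean up. */
          mprotect(p, pagesize, PROT_READ | PROT_WRITE);
          munmap(p, pagesize);
          return 0;
      }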
  • 1.4 VM: Features Sharing
    • "Pieces" of different processes are mapped to one single "piece" of physical memory
      • Allows sharing of code (saves memory), e.g. libraries
      • Copy on write: a "piece" may be used by several processes until one writes to it (then that process gets its own copy)
      • Simplifies inter-process communication (IPC)
    [Figure: "pieces" in the virtual memories of Process 1 and Process 2 both map to the same "piece" of physical memory (shared memory). A user-space sharing example follows.]
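    • Sharing can be tried out from user space with POSIX shared memory; a minimal sketch (not part of the slides) in which every process that maps the object sees the same page frames; the name "/vm_demo" is arbitrary:

      #include <fcntl.h>
      #include <stdio.h>
      #include <string.h>
      #include <sys/mman.h>
      #include <unistd.h>

      int main(void)                    /* compile: cc demo.c (add -lrt on older glibc) */
      {
          /* Create (or open) a named shared memory object. */
          int fd = shm_open("/vm_demo", O_CREAT | O_RDWR, 0600);
          if (fd < 0) { perror("shm_open"); return 1; }
          ftruncate(fd, 4096);                     /* one page is enough here */

          /* MAP_SHARED: every process mapping "/vm_demo" sees the same frames. */
          char *p = mmap(NULL, 4096, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
          if (p == MAP_FAILED) { perror("mmap"); return 1; }

          strcpy(p, "visible to all processes that map /vm_demo");
          printf("%s\n", p);

          munmap(p, 4096);
          close(fd);
          shm_unlink("/vm_demo");                  /* remove the object again */
          return 0;
      }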
  • 1.5 VM: Advantages (1)
    • VM supports
      • Swapping
        • Rarely used "pieces" can be discarded or swapped out
        • A "piece" can be swapped back in to any free piece of physical memory that is large enough; the mapping unit translates the addresses
      • Protection
      • Sharing
        • Common data or code may be shared to save memory
  • 1.5 VM: Advantages (2)
    • Process need not be in memory as a whole
      • No need for complicated overlay techniques (the OS does the job)
      • Process may even be larger than all of physical memory
      • Data / code can be read from disk as needed
    • Code can be placed anywhere in physical memory without relocation (addresses are mapped!)
    • Increased CPU utilization
      • More processes can be held in memory (in part) → more processes in ready state (consider: 80% HDD I/O wait time is not uncommon)
  • 1.6 VM: Disadvantages
    • Memory requirements (mapping tables)
    • Longer memory access times (mapping table lookup)
      • Can be improved using TLB
  • 1.7 VM: Implementation
    • VM may be implemented using
      • Paging
      • Segmentation (not covered here)
      • Combination of both (not covered here)
  • 2. Paging
    • 2.1 What is Paging?
    • 2.2 Paging: Implementation
    • 2.3 Paging: Features
    • 2.4 Paging: Advantages
    • 2.5 Paging: Disadvantages
    • 2.6 Summary: Conversion of a Virtual Address
  • Valid-Invalid Bit
    • With each page table entry a valid-invalid bit is associated (v → in-memory, i → not-in-memory)
    • Initially valid–invalid bit is set to i on all entries
    • Example of a page table snapshot:
    [Figure: a page table snapshot; each entry holds a frame number and a valid-invalid bit, with the first entries marked v and the remaining entries marked i.]
  • 2.1 What is Paging? [Figure: virtual memory is divided into equal-size pages (Page 0 ... Page 7); a page table (one per process, one entry per page, maintained by the OS) maps the valid (v) pages onto frames (Frame 0 ... Frame 3) of physical memory, which is divided into equal-size page frames.]
  • 2.2 Paging: Implementation, Typical Page Table Entry [Figure: besides the page frame number, an entry holds flag bits: read (r), write (w), execute (x), valid (v), referenced (re), modified (m), shared (s), caching disabled (c), super-page (su), process id (pid), guard data (gd) and other fields.]
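    • A C bit-field sketch of such an entry; the exact layout is architecture specific, so the field order and widths below are purely illustrative:

      #include <stdint.h>

      /* Illustrative page table entry; real x86/Linux layouts differ. */
      struct pte {
          uint32_t frame      : 20;  /* page frame number */
          uint32_t valid      : 1;   /* v : entry maps a resident page */
          uint32_t read       : 1;   /* r */
          uint32_t write      : 1;   /* w */
          uint32_t execute    : 1;   /* x */
          uint32_t referenced : 1;   /* re: set on access */
          uint32_t modified   : 1;   /* m : set on write (dirty) */
          uint32_t shared     : 1;   /* s */
          uint32_t nocache    : 1;   /* c : caching disabled */
          uint32_t superpage  : 1;   /* su: entry covers an oversized page */
          uint32_t other      : 3;   /* guard data, pid tag, etc. (abridged) */
      };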
  • 2.2 Paging: Implementation, Single-Level Page Tables. Problem: page tables can get very large, e.g. a 32-bit address space with 4 KB pages → 2^20 entries per process → 4 MB at 4 B per entry; a full 64-bit address space would need a 16,777,216 GB page table! One entry per page, one table per process. [Figure: the page number part of the virtual address indexes the page table, which is located via the Page Table Base Register (PTBR); the frame number stored there, combined with the offset, gives the physical address. A short C sketch follows.]
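    • The size figures follow directly from the address split; a short C sketch assuming 4 KB pages and a hypothetical ptbr pointer standing in for the Page Table Base Register, with each entry simplified to just a frame number:

      #include <stdint.h>

      #define PAGE_SHIFT 12u                       /* 4 KB pages */
      #define PAGE_MASK  ((1u << PAGE_SHIFT) - 1)  /* low 12 offset bits */

      /* Page Table Base Register: start of this process's flat page table.
         Each 4-byte entry holds a frame number; 2^20 entries = 4 MB total. */
      extern uint32_t *ptbr;

      uint32_t translate_flat(uint32_t vaddr)
      {
          uint32_t page   = vaddr >> PAGE_SHIFT;   /* index into the page table */
          uint32_t offset = vaddr &  PAGE_MASK;
          uint32_t frame  = ptbr[page];            /* one extra memory access   */
          return (frame << PAGE_SHIFT) | offset;
      }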
  • 2.2 Paging: Implementation, Multilevel Page Tables. Not all tables need to be present (saves memory), and each table's size can be restricted to one page. [Figure: the virtual address is split into Page #1, Page #2, Page #3 and Offset; Page #1 indexes the Page Directory, Page #2 the Page Middle Directory, Page #3 the Page Table, which yields the page frame number; an entry may be invalid (v = 0) or describe an oversized super-page.]
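    • A C sketch of the corresponding three-level walk; the field widths, struct entry and the table_at() helper are illustrative assumptions, not the Linux implementation:

      #include <stdint.h>

      /* Illustrative field widths; real systems differ (x86 folds the middle level). */
      #define OFFSET_BITS 12u
      #define PT_BITS      9u    /* Page #3: index into the Page Table            */
      #define PMD_BITS     9u    /* Page #2: index into the Page Middle Directory */
      /* remaining high bits (Page #1) index the Page Directory */

      struct entry { uint64_t frame : 52; uint64_t valid : 1; uint64_t pad : 11; };

      extern struct entry *page_directory;           /* top level, one per process       */
      extern struct entry *table_at(uint64_t frame); /* hypothetical: frame -> table ptr */

      int64_t walk(uint64_t vaddr)
      {
          uint64_t off = vaddr & ((1u << OFFSET_BITS) - 1);
          uint64_t p3  = (vaddr >> OFFSET_BITS) & ((1u << PT_BITS) - 1);
          uint64_t p2  = (vaddr >> (OFFSET_BITS + PT_BITS)) & ((1u << PMD_BITS) - 1);
          uint64_t p1  =  vaddr >> (OFFSET_BITS + PT_BITS + PMD_BITS);

          struct entry pgd = page_directory[p1];       /* Page Directory        */
          if (!pgd.valid) return -1;                   /* whole subtree absent  */
          struct entry pmd = table_at(pgd.frame)[p2];  /* Page Middle Directory */
          if (!pmd.valid) return -1;
          struct entry pte = table_at(pmd.frame)[p3];  /* Page Table            */
          if (!pte.valid) return -1;                   /* v = 0: page fault     */
          return (int64_t)((pte.frame << OFFSET_BITS) | off);
      }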
  • Page Fault
    • On the first reference to a page that is not in memory, the reference traps to the operating system:
    • page fault
    • Operating system looks at another table to decide:
      • Invalid reference → abort
      • Just not in memory
    • Get empty frame
    • Swap page into frame
    • Reset tables
    • Set validation bit = v
    • Restart the instruction that caused the page fault
  • Steps in Handling a Page Fault
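    • A minimal C sketch of the fault-handling steps listed above; the helper names (is_legal_reference, get_free_frame, read_page_from_disk, ...) are hypothetical, not kernel functions:

      #include <stdbool.h>
      #include <stdint.h>

      extern bool     is_legal_reference(uint32_t page);  /* the "other table" on the slide */
      extern uint32_t get_free_frame(void);               /* may first swap a page out      */
      extern void     read_page_from_disk(uint32_t page, uint32_t frame);
      extern uint32_t frame_of[];                         /* simplified page table: frame # */
      extern bool     valid_of[];                         /*                 ...and v bit   */
      extern void     abort_process(void);
      extern void     restart_instruction(void);

      void handle_page_fault(uint32_t faulting_vaddr)
      {
          uint32_t page = faulting_vaddr >> 12;           /* 4 KB pages assumed */

          if (!is_legal_reference(page)) {                /* invalid reference -> abort */
              abort_process();
              return;
          }
          uint32_t frame = get_free_frame();              /* 1. get an empty frame      */
          read_page_from_disk(page, frame);               /* 2. swap the page into it   */
          frame_of[page] = frame;                         /* 3. reset the table         */
          valid_of[page] = true;                          /* 4. set validation bit = v  */
          restart_instruction();                          /* 5. restart the instruction */
      }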
  • 2.3 Paging: Features Prepaging
    • A process requests consecutive pages (or just one) → the OS loads the following pages into memory as well (expecting they will also be needed)
    • Saves time when large contiguous structures are used (e.g. huge arrays)
    • Wastes memory and time in case the pages are not needed
    [Figure: the page referenced by the process plus the following pages prepaged by the OS.]
  • 2.3 Paging: Features Demand Paging
    • On process startup only first page is loaded into physical memory
    • Pages are then loaded as referenced
    • Saves memory
    • But: may cause frequent page faults until process has its working set in physical memory.
    • OS may adjust its policy (demand / prepaging) dependent on
      • Available free physical memory
      • Process types and history
  • 2.3 Paging: Features Simplified Swapping
    • Process requires 3 frames
    • swap out the 3 least-used pages
    [Figure: when a process requires memory (1 "piece" / 3 pages), a paging VM system simply swaps out the 3 least-used pages; in a non-paging VM system, swapping out the 3 least-used "pieces" will not work → the swap algorithm must try to create free pieces that are as big as possible (costly!).]
  • 2.4 Paging: Advantages
    • Allocating memory is easy and cheap
      • Any free page frame is OK; the OS can take the first one from a free list it keeps
    • Eliminates external fragmentation
    • Data (page frames) can be scattered all over PM → pages are mapped appropriately anyway
    • Allows demand paging and prepaging
    • More efficient swapping
      • No need for considerations about fragmentation
      • Just swap out page least likely to be used
  • 2.5 Paging: Disadvantages
    • Longer memory access times (page table lookup)
      • Can be improved using
        • TLB
        • Guarded page tables
        • Inverted page tables
    • Memory requirements (one entry per VM page)
      • Improve using
        • Multilevel page tables and variable page sizes (super-pages)
        • Guarded page tables
        • Page Table Length Register (PTLR) to limit virtual memory size
    • Internal fragmentation
      • Yet this costs only about ½ page on average per contiguous address range
  • Translation Lookaside Buffer
    • Each virtual memory reference can cause two physical memory accesses
      • one to fetch the page table entry
      • one to fetch the data
    • To overcome this problem a high-speed cache is set up for page table entries
      • called the TLB - Translation Lookaside Buffer
  • Translation Lookaside Buffer
    • Contains page table entries that have been most recently used
    • Functions same way as a memory cache
  • Translation Lookaside Buffer
    • Given a virtual address, processor examines the TLB
    • If page table entry is present (a hit), the frame number is retrieved and the real address is formed
    • If page table entry is not found in the TLB (a miss), the page number is used to index the process page table
  • Translation Lookaside Buffer
    • It is first checked whether the page is already in main memory
      • if not, a page fault is issued
    • The TLB is updated to include the new page entry
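    • A C sketch of this lookup order with a tiny, fully associative TLB; the size, the round-robin replacement and the helper names are illustrative assumptions:

      #include <stdbool.h>
      #include <stdint.h>

      #define TLB_SIZE 16u

      struct tlb_entry { bool valid; uint32_t page; uint32_t frame; };
      static struct tlb_entry tlb[TLB_SIZE];
      static uint32_t next_slot;                        /* simple round-robin replacement */

      extern uint32_t frame_of[];                       /* simplified page table          */
      extern bool     valid_of[];
      extern void     handle_page_fault(uint32_t vaddr);

      uint32_t translate_with_tlb(uint32_t vaddr)
      {
          uint32_t page = vaddr >> 12, offset = vaddr & 0xFFFu;

          for (uint32_t i = 0; i < TLB_SIZE; i++)        /* 1. examine the TLB            */
              if (tlb[i].valid && tlb[i].page == page)
                  return (tlb[i].frame << 12) | offset;  /*    hit: form the real address */

          if (!valid_of[page])                           /* 2. miss: index the page table */
              handle_page_fault(vaddr);                  /*    not in memory: page fault  */

          tlb[next_slot] = (struct tlb_entry){ true, page, frame_of[page] };
          next_slot = (next_slot + 1) % TLB_SIZE;        /* 3. update the TLB             */
          return (frame_of[page] << 12) | offset;
      }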
  • 2.6 Summary: Conversion of a Virtual Address [Flowchart, hardware part: the TLB is checked first; on a hit the access rights are checked (a violation raises a protection fault, i.e. an exception to the process) and the physical address is formed; on a miss the page table is consulted; if the page is in memory the TLB is updated and translation proceeds; otherwise a page fault is raised. OS part: check whether the reference is legal (if not, exception to the process); handle copy on write by copying the page; if memory is full, swap out a page; issue an HDD read request and put the process into the blocked state; when the HDD I/O completes, an interrupt arrives, the page table is updated and the process is put back into the ready state.]
  • 5. Virtual Memory and Linux
    • 5.1 Why VM under Linux?
    • 5.2 The Linux VM System
    • 5.3 The Linux Protection Scheme
    • 5.4 The Linux Paging System
  • 5.1 Why VM under Linux?
    • Linux is a multitasking, multiuser OS. It requires:
      • Protection
      • Ability to ensure pseudo-parallel execution (even if the cumulative size of all processes is greater than physical memory)
      • Efficient IPC methods (sharing)
    • Good solution: virtual memory
  • 5.2 The Linux VM System
    • Kernel runs in physical addressing mode, maintains VM system
    • Basically a paging system
    • Some remnants of a combined segmentation/paging (CoSP) scheme are present:
      • Process memory is segmented into kernel/user memory
      • A process in user mode (→ 5.3) may not access kernel memory
      • V2.0 defined separate code and data segments for kernel mode and for user mode
      • V2.2 still defines those segments, but they cover the complete virtual address space
  • 5.3 The Linux Protection Scheme
    • Linux uses two modes: kernel and user mode
      • Makes no use of the elaborate protection scheme x86 processors provide (only uses ring 0 (kernel) and ring 3 (user))
    • Programs are all started in user mode
    • A program that needs to use system resources → must make a system call (via a software interrupt) → kernel code is executed on behalf of the process (see the small example below)
    • Kernel processes permanently run in kernel mode
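    • A tiny user-space example (not from the slides) that makes such a crossing visible: glibc's syscall() wrapper traps into ring 0, the kernel executes getpid() on behalf of the process, and control returns to ring 3:

      #define _GNU_SOURCE
      #include <stdio.h>
      #include <sys/syscall.h>
      #include <unistd.h>

      int main(void)
      {
          /* Both calls end up in kernel mode; the second makes the trap explicit. */
          printf("getpid() via libc wrapper : %d\n", (int)getpid());
          printf("getpid() via raw syscall  : %ld\n", syscall(SYS_getpid));
          return 0;
      }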
  • 5.4 The Linux Paging System
    • Linux employs architecture independent memory management code
    • Linux uses 3-level paging system
      • Intel x86 system: only 2-level paging
        • Entry in page directory is treated as page middle directory with only one entry
        • 4 MB pages are used on some Intel systems (e.g. for graphics memory)
    • Linux uses valid, protection, referenced, modified bits
    • Employs copy on write and demand paging
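    • A small user-space illustration (not from the slides) of demand paging and copy on write: the anonymous mapping gets physical frames only when touched, and after fork() parent and child share those frames until one of them writes:

      #include <stdio.h>
      #include <sys/mman.h>
      #include <sys/wait.h>
      #include <unistd.h>

      int main(void)
      {
          size_t len = 16 * 4096;

          /* Demand paging: no frames are allocated yet, only the mapping is set up. */
          char *buf = mmap(NULL, len, PROT_READ | PROT_WRITE,
                           MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
          if (buf == MAP_FAILED) { perror("mmap"); return 1; }

          buf[0] = 'x';                 /* first touch: page fault, frame allocated */

          if (fork() == 0) {            /* child shares all frames copy-on-write    */
              buf[0] = 'y';             /* write fault: child gets its own copy     */
              _exit(0);
          }
          wait(NULL);
          printf("parent still sees '%c' (child wrote to its private copy)\n", buf[0]);
          munmap(buf, len);
          return 0;
      }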
  • Windows Paging Policy
    • Demand paging without pre-paging
    • Maintain a certain number of free page frames
    • For 32-bit machine, each process has 4 GB of virtual address space
    • Backing store – disk space is not assigned to page until it is paged out
    • Uses working sets (per process)
      • Consists of the pages that are mapped into memory and can be accessed without a page fault
      • Has min/max size range that changes over time
        • If page fault occurs and working set < min, add page
        • If page fault occurs and working set > max, evict page from working set and add new page
        • If too many page faults, then increase size of working set
    • When evicting pages,
      • Evict from large processes that have been idle for a long time before small active processes
      • Consider foreground process last
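    • A hedged C sketch of the per-process working-set rules listed above; ws_min, ws_max, the threshold and the helper names are illustrative, not Windows APIs:

      #include <stddef.h>

      #define FAULT_THRESHOLD 100                /* illustrative "too many faults" limit */

      struct process {
          size_t ws_size;      /* pages currently in the working set */
          size_t ws_min;       /* lower bound of the allowed range   */
          size_t ws_max;       /* upper bound of the allowed range   */
          size_t fault_rate;   /* recent page faults per interval    */
      };

      extern void map_faulting_page(struct process *p);   /* add the page to the set   */
      extern void evict_one_ws_page(struct process *p);   /* remove some page from set */

      void on_page_fault(struct process *p)
      {
          if (p->ws_size < p->ws_min) {
              map_faulting_page(p);            /* below min: simply grow the set       */
              p->ws_size++;
          } else if (p->ws_size >= p->ws_max) {
              evict_one_ws_page(p);            /* at/above max: replace within the set */
              map_faulting_page(p);
          } else {
              map_faulting_page(p);            /* in range: grow toward the max        */
              p->ws_size++;
          }
          if (p->fault_rate > FAULT_THRESHOLD)
              p->ws_max++;                     /* too many faults: enlarge working set */
      }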
  • Virtual-address Space
  • Shared Library Using Virtual Memory
  • Caches
    • If you were to implement a system using the theoretical model above, it would work, but not particularly efficiently. Both operating system and processor designers try hard to extract more performance from the system. Apart from making the processors, memory and so on faster, the best approach is to maintain caches of useful information and data that make some operations faster. Linux uses a number of memory-management-related caches:
    • Buffer Cache
      • The buffer cache contains data buffers that are used by the block device drivers.
    • Page Cache
      • This is used to speed up access to images and data on disk.
    • Swap Cache
      • Only modified (or dirty) pages are saved in the swap file.
    • Hardware Caches
  • Linux Page Tables
    • Linux assumes that there are three levels of page tables