EE311 OS April 6, 2010
  • In "An Empirical Study using NVRAM", the authors evaluated new PC architectures that use SCM. The new architectures replace DRAM or NAND flash with SCM; four architectures were evaluated.
  • They found that SCM is beneficial for I/O-bound jobs, but it can degrade performance for memory-bound jobs.
  • Thanks to SCM's low power consumption, the CPU-plus-SCM-only architecture consumes the least power.
  • To exploit the SCM-only architecture, they proposed OS support that unifies file objects and memory objects. A traditional OS keeps file data in a buffer cache, duplicating the data; in an SCM-only architecture this copy is redundant and should be avoided. Instead, file objects and memory objects are accessed directly in SCM, without a replica in DRAM.
  • Another technique for the SCM-only architecture is the adaptive context switch. Since I/O on SCM is extremely fast, there is no longer a need to context-switch while waiting for I/O completion; on a file I/O access the process simply polls until the I/O completes. They measured the context-switch overhead at about 100 us, which is the same order of magnitude as a single I/O operation on SCM, and achieved 5~10% performance improvements with this adaptive context switch (a sketch of the idea follows below).
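
A minimal user-space sketch of this adaptive policy is given below. Only the roughly 100 us context-switch threshold comes from the study; the simulated device, the function names, and the completion countdown are invented for illustration.

    /* Sketch of an adaptive I/O wait: poll when the device is expected to
     * finish sooner than a context switch would cost, otherwise block.
     * The "device" here is simulated and completes after a few polls. */
    #include <stdbool.h>
    #include <stdio.h>

    #define CONTEXT_SWITCH_COST_US 100   /* overhead measured in the study */

    struct io_request {
        unsigned expected_latency_us;    /* device latency estimate        */
        int polls_remaining;             /* simulated completion countdown */
    };

    static bool io_done(struct io_request *r)
    {
        return --r->polls_remaining <= 0;
    }

    /* Stand-in for "block the process and context-switch" on a slow device. */
    static void block_until_io_done(struct io_request *r)
    {
        while (!io_done(r))
            ;   /* a real kernel would sleep here, not spin */
    }

    static void wait_for_io(struct io_request *r)
    {
        if (r->expected_latency_us < CONTEXT_SWITCH_COST_US) {
            /* SCM-class device: busy-wait and keep the scheduling quantum. */
            while (!io_done(r))
                ;
            puts("fast device: polled to completion, no context switch");
        } else {
            /* Disk-class device: switching away is cheaper than spinning. */
            block_until_io_done(r);
            puts("slow device: blocked (context switch) until completion");
        }
    }

    int main(void)
    {
        struct io_request scm  = { .expected_latency_us = 10,    .polls_remaining = 3 };
        struct io_request disk = { .expected_latency_us = 10000, .polls_remaining = 3 };
        wait_for_io(&scm);
        wait_for_io(&disk);
        return 0;
    }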

EE311 OS April 6, 2010 Presentation Transcript

  • EE311 OS April 6, 2010 Prof. Kyu Ho Park http://core.kaist.ac.kr Lecture 10: File Systems Implementation
  • File System Implementation
    • New Trend- Storage Class Memory(SCM)
    • File-System Structure
    • File-System Implementation
    • Directory Implementation
    • Allocation Methods
    • Free-Space Management
  • New Memory Architecture
    • Storage Class Memory(SCM)
  • System Evolution
    • CPU-RAM-Disk
    • CPU-RAM-SSD-Disk
    • CPU-RAM-SCM-Disk
  • Hierarchy of Latency (Freitas & Wilcke, IBM J. Res. & Dev., 2008)
    • Disk: 10^7-10^8 CPU cycles
    • SCM: 10^3
    • DRAM: 10^2
    • L2, L3 cache: 10-100
    • L1: 1
  • Roadmap for Memory Technology (D. Roberts, T. Kgil, and T. Mudge, EDAA 2009); *SLC = Single-Level Cell, MLC = Multi-Level Cell

                                             2009      2011      2013      2015      2017
    NAND Flash-SLC*                          0.0081    0.0052    0.0031    0.0021    0.0013
    NAND Flash-MLC*                          0.0041    0.0013    0.0008    0.0005    0.0003
    PCRAM(nMOSFET)-SLC*                      0.0254    0.0123    0.0069    0.0036    0.0024
    PCRAM(nMOSFET)-MLC*                      0.0127    0.0061    0.0017    0.0009    0.0006
    DRAM cell density                        0.0153    0.0096    0.0061    0.0038    0.0024
    Flash write/erase cycles                 1E+05     1E+05     1E+05     1E+05     1E+05
    PCRAM write/erase cycles                 1E+10     1E+10     1E+12     1E+15     1E+15
    Flash SLC/MLC data retention (years)     10-20     10-20     10-20     10-20     10-20
    PCRAM SLC/MLC data retention (years)     >10       >10       >10       >10       >10
  • New Memory Architectures NVRAMOS 2009
    • An Empirical Study using NVRAM
      • Performance/Energy tradeoffs on NVRAM
      • Operating System Support for NVRAM
      • Green data center with NVRAM
    [Figure: the four evaluated architectures (RAM-Flash, RAM-SCM, SCM-Flash, SCM-Only), spanning a range from best performance to best power efficiency; each combines a CPU with RAM, SCM, and/or NOR/NAND flash]
  • New Memory Architectures NVRAMOS 2009
    • I/O bound job
      • SCM is best
    • Memory bound job
      • DRAM is best
    • CPU bound job
      • Little impact
  • SCM only System NVRAMOS 2009
    • SCM
      • Great potential to reduce energy consumption
      • Using SCM as main memory can cause performance degradation
    [Figure: SCM-Only architecture (CPU + SCM)]
  • Operating System support for SCM NVRAMOS 2009
    • Operating System support for SCM
      • Unified file object and memory objects
      • Eliminate redundant I/O accesses
      • mmap like operations for all memory/disk data
    [Figure: with a disk, updating data touches it twice (once in the buffer cache, once in the file); with SCM, the file object and the memory object are the same copy]
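
The "mmap-like operations" above can be illustrated with standard POSIX mmap: the file's bytes are read and updated through a pointer, with no explicit read()/write() copy into a separate buffer. This is only a user-space sketch of the idea; the file name data.bin is an example.

    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <sys/stat.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = open("data.bin", O_RDWR);
        if (fd < 0) { perror("open"); return 1; }

        struct stat st;
        if (fstat(fd, &st) < 0) { perror("fstat"); return 1; }

        /* Map the whole file; updates go straight to the file object. */
        char *p = mmap(NULL, st.st_size, PROT_READ | PROT_WRITE,
                       MAP_SHARED, fd, 0);
        if (p == MAP_FAILED) { perror("mmap"); return 1; }

        if (st.st_size > 0)
            p[0] = 'X';          /* modify the file as if it were memory */

        munmap(p, st.st_size);
        close(fd);
        return 0;
    }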
  • Adaptive Context Switch NVRAMOS 2009
    • Context Switch on Block devices
      • Fast block devices with SCM
        • No need to context switch, which takes about 100 us
        • Keep using the whole scheduling quantum
      • 5~10% performance improvements
      • Shared-object accesses from multiple threads can cause performance degradation/malfunctions (Why?)
    [Figure: CPU-RAM-SCM system; I/O access takes less than 1 ms]
  • Objectives
    • To describe the details of implementing local file systems and directory structures
    • Supplementary to Lecture 9.
  • File-System Structure
    • File structure
      • Logical storage unit
      • Collection of related information
    • File system resides on secondary storage (disks)
    • File system organized into layers
    • File control block – storage structure consisting of information about a file
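
As a rough illustration of what a file control block holds, here is a hypothetical C struct; the field names and the twelve direct block pointers are example choices, not any particular on-disk format.

    #include <stdint.h>
    #include <stdio.h>
    #include <time.h>

    #define FCB_NBLOCKS 12                 /* direct block pointers (example) */

    struct file_control_block {
        uint32_t owner_uid;                /* file owner                      */
        uint32_t group_gid;                /* file group                      */
        uint16_t permissions;              /* access-control bits             */
        uint64_t size_bytes;               /* current file size               */
        time_t   created, accessed, modified;   /* timestamps                 */
        uint32_t block[FCB_NBLOCKS];       /* pointers to the data blocks     */
    };

    int main(void)
    {
        printf("FCB size: %zu bytes\n", sizeof(struct file_control_block));
        return 0;
    }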
  • File Systems
    • How the file system appears to the user:
      • Defines a file and its attributes, the operations allowed on a file, and the directory structure for organizing files.
    • Algorithms and data structures to map the logical file system to the physical disk.
  • Layered File System
  • Layered File System
    • I/O control: it consists of device drivers and interrupt handlers that transfer information between main memory and the disk system.
    • Device drivers: their input consists of high-level commands such as “get block 1000”, which they translate into hardware-specific instructions (a sketch of this translation follows below).
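
The translation mentioned above can be sketched as follows; the controller register layout (reg_block, reg_count, CMD_READ) is invented for the example.

    #include <stdint.h>
    #include <stdio.h>

    #define CMD_READ 0x01u

    struct disk_registers {          /* pretend memory-mapped controller */
        uint32_t reg_block;          /* block number to transfer         */
        uint32_t reg_count;          /* number of blocks                 */
        uint32_t reg_command;        /* command code                     */
    };

    static struct disk_registers fake_controller;

    /* "get block 1000" arrives here as (1000, 1). */
    static void driver_read_blocks(uint32_t block, uint32_t count)
    {
        fake_controller.reg_block   = block;    /* program the controller */
        fake_controller.reg_count   = count;
        fake_controller.reg_command = CMD_READ; /* start the transfer     */
        printf("issued READ of %u block(s) starting at block %u\n",
               count, block);
    }

    int main(void)
    {
        driver_read_blocks(1000, 1);   /* the "get block 1000" example */
        return 0;
    }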
  • Basic File System Layer
    • It issues generic commands to the appropriate device drivers to read and write physical blocks on the disk.
    • Each physical block is identified by its numeric disk address (drive 1, cylinder 3, track 2, sector 10)
  • File organization module
    • It knows about files and their logical blocks as well as physical blocks.
    • It can translate logical block addresses to physical block addresses for the basic file system.
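
A minimal sketch of that translation, assuming the simplest possible per-file block map (an array indexed by logical block number); the block numbers are made up.

    #include <stdint.h>
    #include <stdio.h>

    struct file_blocks {
        uint32_t nblocks;
        const uint32_t *physical;   /* physical[i] = disk block of logical block i */
    };

    /* Returns the physical block number, or -1 if out of range. */
    static int64_t logical_to_physical(const struct file_blocks *f,
                                       uint32_t logical_block)
    {
        if (logical_block >= f->nblocks)
            return -1;
        return f->physical[logical_block];
    }

    int main(void)
    {
        const uint32_t map[] = { 217, 618, 339, 12 };   /* example mapping */
        struct file_blocks f = { 4, map };
        printf("logical 2 -> physical %lld\n",
               (long long)logical_to_physical(&f, 2));
        return 0;
    }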
  • Logical File System
    • It manages metadata information.
      • Metadata: it includes all of the file-system structure except the actual data (the contents of the files).
    • It manages the directory structure to provide the file-organization module with the information it needs, given a symbolic file name.
    • How?
  • Why layered structure?
  • File System Overview [On Disk]
    • Boot block: It contains information needed by the system to boot an operating system.
    • Volume control block:
    • It contains volume (or partition) details, such as the number of blocks in the partition, the size of blocks, free-block count and free-block pointers, and free-FCB count and FCB pointers.
    • In UNIX, it is called a superblock.
  • File System Overview
    • A directory structure per file system:
    • In UNIX, it includes file names and associated inode numbers.
    • A per-file FCB:
    • In UNIX, it is called an i-node.
  • In-memory
    • An in-memory mount table:
      • It contains information about each mounted volume.
    • An in-memory directory-structure cache:
      • It holds the directory information of recently accessed directories.
    • System-wide open-file table:
      • It contains a copy of the FCB of each open file.
    • Per-process open-file table:
      • It contains a pointer to the appropriate entry in the system-wide open-file table.
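
A compact sketch of how these two open-file tables might relate in C; the sizes and field names are illustrative and not taken from any real kernel.

    #include <stdint.h>
    #include <stdio.h>

    #define MAX_OPEN_FILES   128   /* system-wide limit (example)  */
    #define MAX_FDS_PER_PROC  32   /* per-process limit (example)  */

    struct fcb { uint64_t size_bytes; uint32_t first_block; };   /* simplified FCB */

    struct system_open_file {
        struct fcb fcb;            /* in-memory copy of the on-disk FCB */
        uint32_t   open_count;     /* how many processes have it open   */
    };

    struct process_open_file {
        struct system_open_file *sys_entry;   /* points into the system table */
        uint64_t offset;                      /* this process's file position */
    };

    static struct system_open_file  system_table[MAX_OPEN_FILES];
    static struct process_open_file process_table[MAX_FDS_PER_PROC];

    int main(void)
    {
        /* "Open" one file: fill a system-wide slot, point a process fd at it. */
        system_table[0].fcb = (struct fcb){ .size_bytes = 4096, .first_block = 17 };
        system_table[0].open_count = 1;
        process_table[3] = (struct process_open_file){ &system_table[0], 0 };
        printf("fd 3 -> file of %llu bytes\n",
               (unsigned long long)process_table[3].sys_entry->fcb.size_bytes);
        return 0;
    }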
  • A Typical File Control Block [ = i-node in Unix]
  • In-Memory File System Structures
  • Virtual File Systems
    • Virtual File Systems (VFS) provide an object-oriented way of implementing file systems.
    • VFS allows the same system call interface (the API) to be used for different types of file systems.
    • The API is to the VFS interface, rather than any specific type of file system.
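
In C, this "object-oriented" interface is typically a table of function pointers that each concrete file system fills in, so the system-call layer never depends on a specific file-system type. The sketch below is illustrative only and does not reproduce the actual Linux VFS structures.

    #include <stddef.h>
    #include <stdio.h>
    #include <sys/types.h>

    struct vfs_file;                        /* opaque per-open-file object */

    struct vfs_file_ops {
        ssize_t (*read)(struct vfs_file *f, void *buf, size_t len);
        ssize_t (*write)(struct vfs_file *f, const void *buf, size_t len);
        int     (*fsync)(struct vfs_file *f);
    };

    struct vfs_file {
        const struct vfs_file_ops *ops;     /* set by the concrete file system */
        void *fs_private;                   /* file-system-specific state      */
    };

    /* The generic layer: the same code path for every file-system type. */
    static ssize_t vfs_read(struct vfs_file *f, void *buf, size_t len)
    {
        return f->ops->read(f, buf, len);
    }

    /* A trivial concrete file system for the demo: reads return zero bytes. */
    static ssize_t demo_read(struct vfs_file *f, void *buf, size_t len)
    {
        (void)f; (void)buf; (void)len;
        return 0;
    }
    static const struct vfs_file_ops demo_ops = { .read = demo_read };

    int main(void)
    {
        struct vfs_file f = { .ops = &demo_ops, .fs_private = NULL };
        printf("vfs_read returned %zd\n", vfs_read(&f, NULL, 0));
        return 0;
    }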
  • Schematic View of Virtual File System
  • Directory Implementation
    • Linear list of file names with pointer to the data blocks.
      • simple to program
      • time-consuming to execute
    • Hash Table – linear list with hash data structure.
      • decreases directory search time
      • collisions – situations where two file names hash to the same location
      • fixed size
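
A minimal sketch of the hash-table approach, using chaining to resolve collisions; the bucket count, entry layout, and hash function are example choices.

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    #define DIR_HASH_BUCKETS 64            /* fixed size, as noted above */

    struct dir_entry {
        char     name[32];
        uint32_t inode;                    /* or a pointer to the data blocks */
        struct dir_entry *next;            /* collision chain                 */
    };

    static struct dir_entry *buckets[DIR_HASH_BUCKETS];

    static unsigned hash_name(const char *name)
    {
        unsigned h = 5381;                 /* djb2-style string hash */
        while (*name)
            h = h * 33 + (unsigned char)*name++;
        return h % DIR_HASH_BUCKETS;
    }

    static struct dir_entry *dir_lookup(const char *name)
    {
        for (struct dir_entry *e = buckets[hash_name(name)]; e; e = e->next)
            if (strcmp(e->name, name) == 0)
                return e;                  /* found without scanning the whole directory */
        return NULL;
    }

    int main(void)
    {
        static struct dir_entry hello = { "hello.txt", 42, NULL };
        unsigned h = hash_name(hello.name);
        hello.next = buckets[h];
        buckets[h] = &hello;               /* insert */
        struct dir_entry *e = dir_lookup("hello.txt");
        printf("hello.txt -> inode %u\n", e ? e->inode : 0);
        return 0;
    }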
  • Allocation Methods
    • An allocation method refers to how disk blocks are allocated for files:
    • Contiguous allocation
    • Linked allocation
    • Indexed allocation
  • Contiguous Allocation
    • Each file occupies a set of contiguous blocks on the disk
    • Simple – only starting location (block #) and length (number of blocks) are required
    • Random access
    • Wasteful of space (dynamic storage-allocation problem)
    • Files cannot grow
  • Contiguous Allocation of Disk Space
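
Under contiguous allocation the address arithmetic is trivial, which is why random access is cheap: logical block b of a file that starts at block start lives at physical block start + b. A small sketch (with made-up block numbers):

    #include <stdint.h>
    #include <stdio.h>

    struct contiguous_file {
        uint32_t start;     /* first disk block of the file */
        uint32_t length;    /* number of blocks             */
    };

    static int64_t block_of(const struct contiguous_file *f, uint32_t logical)
    {
        if (logical >= f->length)
            return -1;                       /* beyond the end of the file */
        return (int64_t)f->start + logical;  /* start + logical block      */
    }

    int main(void)
    {
        struct contiguous_file f = { .start = 14, .length = 3 };
        printf("logical 2 -> physical %lld\n", (long long)block_of(&f, 2));
        return 0;
    }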
  • Linked Allocation
  • File-Allocation Table
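
A file-allocation table can be sketched as an array with one entry per disk block, where each entry holds the number of the file's next block and a sentinel marks end of file; the table contents below are invented for the demo.

    #include <stdint.h>
    #include <stdio.h>

    #define FAT_EOF 0xFFFFFFFFu

    /* fat[b] = the block that follows b in the same file, or FAT_EOF. */
    static uint32_t fat[16] = {
        [5] = 9, [9] = 2, [2] = FAT_EOF,   /* a file occupying blocks 5 -> 9 -> 2 */
    };

    /* Follow the chain to find the physical block of logical block n. */
    static uint32_t fat_block(uint32_t start, uint32_t n)
    {
        uint32_t b = start;
        while (n-- > 0 && b != FAT_EOF)
            b = fat[b];
        return b;
    }

    int main(void)
    {
        printf("logical blocks 0, 1, 2 -> physical %u, %u, %u\n",
               fat_block(5, 0), fat_block(5, 1), fat_block(5, 2));
        return 0;
    }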
  • Example of Indexed Allocation
  • Combined Scheme: UNIX (4K bytes per block)
  • Free-Space Management
    • Bit vector (n blocks), one bit per block:

      bit[i] = 0  =>  block[i] free
      bit[i] = 1  =>  block[i] occupied

      First free block number = (number of bits per word) * (number of all-1-valued words) + offset of first 0 bit
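
A small sketch of this search, assuming 32-bit words and the 0-means-free convention above: skip words whose bits are all 1 (fully occupied), then report the offset of the first 0 bit.

    #include <stdint.h>
    #include <stdio.h>

    #define BITS_PER_WORD 32

    /* Returns the first free block number, or -1 if every block is occupied. */
    static int64_t first_free_block(const uint32_t *bitmap, size_t nwords)
    {
        for (size_t w = 0; w < nwords; w++) {
            if (bitmap[w] == 0xFFFFFFFFu)
                continue;                          /* every block in this word is used */
            int offset = 0;
            while ((bitmap[w] >> offset) & 1u)     /* find the first 0 bit */
                offset++;
            return (int64_t)BITS_PER_WORD * w + offset;
        }
        return -1;
    }

    int main(void)
    {
        /* Words 0 and 1 are fully occupied; in word 2, bit 3 is the first 0. */
        uint32_t bitmap[3] = { 0xFFFFFFFFu, 0xFFFFFFFFu, 0xFFFFFF07u };
        printf("first free block = %lld\n",
               (long long)first_free_block(bitmap, 3));   /* prints 67 */
        return 0;
    }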
  • Free-Space Management (Cont.)
    • Bit map requires extra space
      • Example:
    • block size = 2^12 bytes
    • disk size = 2^30 bytes (1 gigabyte)
    • n = 2^30 / 2^12 = 2^18 bits (or 32K bytes)
    • Easy to get contiguous files
    • Linked list (free list)
      • Cannot get contiguous space easily
      • No waste of space
    • Grouping
    • Counting
  • Free-Space Management (Cont.)
    • Need to protect:
      • Pointer to free list
      • Bit map
        • Must be kept on disk
        • Copy in memory and disk may differ
        • Cannot allow a situation for block[i] where bit[i] = 1 in memory and bit[i] = 0 on disk
      • Solution (a minimal sketch follows this list):
        • Set bit[i] = 1 on disk
        • Allocate block[i]
        • Set bit[i] = 1 in memory
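
A minimal sketch of this ordering, with bit = 1 meaning the block is occupied; write_bitmap_to_disk() is a placeholder for the real disk write, and the sizes are arbitrary.

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    #define NBLOCKS 1024

    static uint8_t bitmap_mem[NBLOCKS / 8];            /* in-memory copy */

    /* Placeholder: in a real system this writes the bitmap block to disk. */
    static void write_bitmap_to_disk(const uint8_t *bitmap)
    {
        (void)bitmap;
        puts("bitmap flushed to disk");
    }

    static void set_occupied(uint8_t *bm, uint32_t i) { bm[i / 8] |= 1u << (i % 8); }

    static void allocate_block(uint32_t i)
    {
        uint8_t on_disk_copy[NBLOCKS / 8];

        /* 1. Set bit[i] = 1 in the on-disk bitmap and flush it. */
        memcpy(on_disk_copy, bitmap_mem, sizeof on_disk_copy);
        set_occupied(on_disk_copy, i);
        write_bitmap_to_disk(on_disk_copy);

        /* 2. Allocate block[i] (hand it to the file being grown). */

        /* 3. Only now set bit[i] = 1 in the in-memory copy. */
        set_occupied(bitmap_mem, i);
    }

    int main(void)
    {
        allocate_block(42);
        return 0;
    }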
  • Linked Free Space List on Disk
  • Efficiency and Performance
    • Efficiency dependent on:
      • disk allocation and directory algorithms
      • types of data kept in file’s directory entry
    • Performance
      • disk cache – separate section of main memory for frequently used blocks
      • free-behind and read-ahead – techniques to optimize sequential access
      • improve PC performance by dedicating section of memory as virtual disk, or RAM disk
  • Page Cache
    • A page cache caches pages rather than disk blocks using virtual memory techniques
    • Memory-mapped I/O uses a page cache
    • Routine I/O through the file system uses the buffer (disk) cache
    • This leads to the following figure
  • I/O Without a Unified Buffer Cache
  • Unified Buffer Cache
    • A unified buffer cache uses the same page cache to cache both memory-mapped pages and ordinary file system I/O
  • I/O Using a Unified Buffer Cache
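
To close the section, here is a small user-space program that exercises both paths discussed above: an ordinary read(), which goes through the buffer (disk) cache, and an mmap() of the same file, which goes through the page cache. Without a unified buffer cache the same data can end up cached twice. The file name data.bin is only an example.

    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <sys/stat.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = open("data.bin", O_RDONLY);
        if (fd < 0) { perror("open"); return 1; }

        /* Path 1: ordinary file-system I/O -> buffer (disk) cache. */
        char buf[512];
        ssize_t n = read(fd, buf, sizeof buf);

        /* Path 2: memory-mapped I/O -> page cache. */
        struct stat st;
        if (fstat(fd, &st) < 0) { perror("fstat"); return 1; }
        char *map = mmap(NULL, st.st_size, PROT_READ, MAP_SHARED, fd, 0);

        if (n > 0 && map != MAP_FAILED)
            printf("first byte via read(): %c, via mmap(): %c\n", buf[0], map[0]);

        if (map != MAP_FAILED)
            munmap(map, st.st_size);
        close(fd);
        return 0;
    }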