Speaker Notes

  • This graph shows hardware evolution over time. The vertical axis is on a log scale. CPU and memory are improving at 50% per year, but disk is improving at only 15% per year, and the gap is widening from 5 orders of magnitude to 6. In human terms, back in 1990, if a CPU access took 1 second, a disk access would take 6 days; by 2000 the ratio is 1 second to 3 months. Let’s think about what that means. It takes 1 second to grab a sheet of paper and write something down. If you asked Santa Claus to physically mail you a piece of paper, it might take 6 days. It takes about a month to make your own paper from papyrus, and most of that time is waiting for the paper to dry. 3 months is a long time!
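    (A quick sanity check on those human-scale numbers: 6 days is about 5 x 10^5 seconds and 3 months is about 8 x 10^6 seconds, which matches the access-time gap growing from roughly 5 to roughly 6 orders of magnitude.)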
  • Now, let us look at the price trend over time. Again, cost is on a log scale. First, consider the cost of paper and film, which is a critical barrier for any storage technology to cross in order to achieve economy of scale. Once a storage technology crosses this barrier, it becomes cheap enough to be a storage alternative. (animate) Now let’s look at the various cost curves. New disk geometries are introduced roughly at the top boundary of the paper-and-film cost barrier. Also, notice the cost curve for persistent RAM: back in 1998, the boom of digital photography changed the slope of the curve. By 2005, we would expect to see 4 to 10 GB of persistent RAM on personal desktops, cheap enough once it crosses the boundary.
  • The idea of Conquest is to design and build a disk/persistent RAM hybrid file system, which delivers all file system services from memory, with the single exception of high-capacity storage. Two major benefits are simplicity and performance.
  • It is well known that small files take up little space but represent most accesses. Large files take up most of the storage capacity, and they are accessed sequentially most of the time. Of course, databases are an exception, and Conquest currently does not handle database workloads.
  • Based on this user behavior pattern, Conquest stores the following files in persistent RAM. Small files benefit the most from being stored in memory, because seek time and rotational delays comprise the bulk of the time spent accessing small objects. Also, we now have fast byte-level accesses as opposed to block-level accesses, and small files are allocated contiguously. Storing metadata in memory avoids the notorious synchronous-update problem, and that deserves some discussion. Basically, if there is no metadata, there is no file system. Therefore, system designers take extra caution when it comes to handling metadata. If you update a directory, for example, most disk-based file systems will propagate the change synchronously to disk, which is a serious performance problem. By storing metadata in memory, we no longer have this problem. Also, we now have a single representation for metadata, as opposed to separate runtime and storage representations. Executables and shared libraries are also stored in core, so we can execute programs in place, which reduces program startup time significantly.
  • Now let’s take a look at the data path for conventional file systems. A storage request has to go through IO buffer management, which handles caching. If the request is not in the cache, it has to go through persistence support, which is responsible for translating between storage and runtime forms of metadata. The request then needs to go through disk management, which handles disk layout, disk arm scheduling, and so on before reaching the disk. For the Conquest memory data path, updates to metadata and data are made in place. There is no IO buffer management and no disk management. Also, for persistence support, we don’t need to translate between runtime and storage states; all we need to manage is metadata allocation, which I will describe a bit later.
  • Since small files and metadata are taken care of, the disk only needs to handle large files. That means we can allocate disk space in big chunks, which translates into lower access and management overhead. Also, without small objects, we don’t need to worry about fragmentation. We don’t need tricks for small files, such as storing data inside the metadata, or elaborate data structures, such as wrapping a balanced tree onto the geometry of the disk cylinders.
  • For large files that are accessed sequentially, the disk can deliver near-raw bandwidth, about 100 MB per second, which is roughly 200 times faster than random disk accesses. Also, large files have well-defined readahead semantics. Since they are mostly read, handling large files involves little synchronization overhead.
  • This shows the disk data path of Conquest. Again, on the left side, we have the data path for conventional file systems. Immediately, you see that the Conquest data path bypasses the mechanisms involved in persistence support. The IO buffer management is greatly simplified because we know the behavior of large-file accesses. Also, the disk management is greatly simplified due to the lack of small files and fragmentation management.
  • You may ask, “what about large files that are randomly accessed?” In the literature, random accesses are commonly defined as nonsequential accesses. However, if you look at, say, a movie file, it typically has about 150 scene changes. There are 150 places you may randomly jump to, and from each one you perform disk accesses sequentially. Similarly, for an MP3 file, the title is stored at the end of the file, so the typical access is to jump to the end of the file and then go back to the beginning to play sequentially. Therefore, what may look like random accesses are really near-sequential accesses. With this knowledge, we can simplify the large-file metadata representation significantly; even dumb data structures are still fast in memory.
  • Now, let’s look at the performance of Conquest. This slide shows the results for the PostMark benchmark, which models an ISP workload. The graph plots the number of files against the transaction rate. Conquest, in dark blue, is compared against ramfs, ext2, reiserfs, and SGI XFS. Ramfs, in light blue, does not provide persistence, but it is a base-case comparison for the quality of the Conquest implementation. Ext2, in green, is the most widely used file system in the UNIX world. Reiserfs, in orange, is a journaling file system optimized for small files. SGI XFS, in red, is also a journaling file system, which is based on the IO-Lite technology. As you can see, Conquest’s performance is comparable to the performance of ramfs. Compared to the other disk-based file systems, Conquest is at least 24% faster. Note that all these file systems are operating within the LRU disk cache; file systems optimized for disk do not take full advantage of memory speed.
  • Now let’s fix the number of files at 10,000 and vary the percentage of large files from 0 to 10 percent. Since the working set is larger than memory, the graph does not include ramfs. As you can see, when both the memory and disk components are exercised, Conquest can still be several times faster than leading disk-based file systems. Here is the boundary of physical RAM. Since we can’t see the right side of the graph too well, let’s zoom into the graph.
  • When the working set is greater than RAM, Conquest still runs 1.4 to 2 times faster than various disk-based file systems. This improvement is very significant.

Transcript

  • 1. File System Extensibility and Non-Disk File Systems Andy Wang COP 5611 Advanced Operating Systems
  • 2. Outline
    • File system extensibility
    • Non-disk file systems
  • 3. File System Extensibility
    • Any existing file system can be improved
    • No file system is perfect for all purposes
    • So the OS should make multiple file systems available
    • And should allow for future improvements to file systems
  • 4. Approaches to File System Extensibility
    • Modify an existing file system
    • Virtual file systems
    • Layered and stackable file system layers
  • 5. Modifying Existing File Systems
    • Make the changes you want to an already operating file system
      • Reuses code
      • But changes everyone’s file system
      • Requires access to source code
      • Hard to distribute
  • 6. Virtual File Systems
    • Permit a single OS installation to run multiple file systems
    • Using the same high-level interface to each
    • OS keeps track of which files are instantiated by which file system
    • Introduced by Sun
  • 7. [diagram: / and directory A in a 4.2 BSD file system]
  • 8. [diagram: / and directory B, with an NFS file system mounted into the 4.2 BSD file system]
  • 9. Goals of Virtual File Systems
    • Split FS implementation-dependent and -independent functionality
    • Support semantics of important existing file systems
    • Usable by both clients and servers of remote file systems
    • Atomicity of operation
    • Good performance, re-entrant, no centralized resources, “OO” approach
  • 10. Basic VFS Architecture
    • Split the existing common Unix file system architecture
      • Normal user file-related system calls above the split
      • File system dependent implementation details below
    • I_nodes fall below
    • open() and read() calls above
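    One way to picture the split is as a per-file-system table of operations that the generic system-call code dispatches through: everything above the table is implementation-independent, everything below it is implementation-dependent. A minimal C sketch (illustrative only; these are not the real SunOS definitions, and names such as vop_read are placeholders):

      #include <sys/types.h>

      /* File-system-dependent side: each file system supplies its own operation vector. */
      struct vnode;
      struct vnodeops {
          int     (*vop_open)(struct vnode *vp, int flags);
          ssize_t (*vop_read)(struct vnode *vp, void *buf, size_t len, off_t off);
          int     (*vop_lookup)(struct vnode *dvp, const char *name, struct vnode **out);
      };

      struct vnode {
          struct vnodeops *v_op;    /* which file system implements this file */
          void            *v_data;  /* private, file-system-dependent data (e.g., an i_node) */
      };

      /* File-system-independent side: the read() path never looks below the vnode interface. */
      ssize_t vn_read(struct vnode *vp, void *buf, size_t len, off_t off)
      {
          return vp->v_op->vop_read(vp, buf, len, off);
      }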
  • 11. VFS Architecture Block Diagram [diagram: system calls → v_node layer → PC file system / 4.2 BSD file system / NFS → floppy disk / hard disk / network]
  • 12. Virtual File Systems
    • Each VFS is linked into an OS-maintained list of VFS’s
      • First in list is the root VFS
    • Each VFS has a pointer to its data
      • Which describes how to find its files
    • Generic operations used to access VFS’s
  • 13. V_nodes
    • The per-file data structure made available to applications
    • Has both public and private data areas
    • Public area is static or maintained only at VFS level
    • No locking done by the v_node layer
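    The diagrams on the following slides are easier to read with the key fields in mind. A rough C sketch of the two structures, and of how a lookup crosses a mount point (illustrative only; the field names come from the diagrams, everything else, including vfs_root, is hypothetical):

      #include <stddef.h>

      struct vnode;

      struct vfs {
          struct vfs   *vfs_next;          /* next VFS in the OS-maintained list (rootvfs is first) */
          struct vnode *vfs_vnodecovered;  /* the vnode this file system is mounted over */
          void         *vfs_data;          /* FS-private data: a mount structure, mntinfo, ... */
      };

      struct vnode {
          struct vfs   *v_vfsp;            /* the VFS this vnode belongs to */
          struct vfs   *v_vfsmountedhere;  /* non-NULL if another file system is mounted on this vnode */
          void         *v_data;            /* private area, e.g. a pointer to the underlying i_node */
      };

      struct vfs *rootvfs;                 /* head of the VFS list */

      struct vnode *vfs_root(struct vfs *vfsp);  /* hypothetical helper: root vnode of a VFS */

      /* During path lookup, a name that resolves to a covered vnode
         continues at the root of the file system mounted there. */
      struct vnode *cross_mount_point(struct vnode *vp)
      {
          while (vp->v_vfsmountedhere != NULL)
              vp = vfs_root(vp->v_vfsmountedhere);
          return vp;
      }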
  • 14. [diagram: rootvfs points to the BSD vfs for the 4.2 BSD file system; the vfs holds vfs_data (a mount structure), vfs_vnodecovered, and vfs_next]
  • 15. [diagram: create root / — a v_node for / (v_data, v_vfsp, v_vfsmountedhere) is created, backed by the i_node for /]
  • 16. [diagram: create dir A — a v_node for A is added, backed by i_node A]
  • 17. [diagram: mount NFS — an NFS vfs is linked onto the vfs list; its vfs_vnodecovered points at v_node A and its vfs_data points at mntinfo]
  • 18. [diagram: create dir B — a v_node for B is created under the NFS vfs]
  • 19. [diagram: read root / — the request resolves through v_node / to i_node / in the 4.2 BSD file system]
  • 20. [diagram: read dir B — the request resolves through the NFS vfs and v_node B]
  • 21. Does the VFS Model Give Sufficient Extensibility?
    • The VFS approach allows us to add new file systems
    • But it isn’t as helpful for improving existing file systems
    • What can be done to add functionality to existing file systems?
  • 22. Layered and Stackable File System Layers
    • Increase functionality of file systems by permitting some form of composition
      • One file system calls another, giving advantages of both
    • Requires strong common interfaces, for full generality
  • 23. Layered File Systems
    • Windows NT provides one example of layered file systems
    • File systems in NT are the same as device drivers
    • Device drivers can call other device drivers
    • Using the same interface
  • 24. Windows NT Layered Drivers Example [diagram: a user-level process calls system services (user mode); in kernel mode the I/O manager passes the request to the file system driver, then the multivolume disk driver, then the disk driver]
  • 25. Another Approach - UCLA Stackable Layers
    • More explicitly built to handle file system extensibility
    • Layered drivers in Windows NT allow extensibility
    • Stackable layers support extensibility
  • 26. Stackable Layers Example [diagram: file system calls enter the VFS layer and reach LFS directly; with stacking, the same calls go through a compression layer before reaching LFS]
  • 27. How Do You Create a Stackable Layer?
    • Write just the code that the new functionality requires
    • Pass all other operations to lower levels (bypass operations)
    • Reconfigure the system so the new layer is on top
  • 28. [diagram: a user file system built by stacking layers — directory layers over a compress layer on a UFS layer and an encrypt layer on an LFS layer]
  • 29. What Changes Does Stackable Layers Require?
    • Changes to v_node interface
      • For full value, must allow expansion to the interface
    • Changes to mount commands
    • Serious attention to performance issues
  • 30. Extending the Interface
    • New file layers provide new functionality
      • Possibly requiring new v_node operations
    • Each layer must be prepared to deal with arbitrary unknown operations
    • Bypass v_node operation
  • 31. Handling a Vnode Operation
    • A layer can do three things with a v_node operation:
      • 1. Do the operation and return
      • 2. Pass it down to the next layer
      • 3. Do some work, then pass it down
    • The same choices are available as the result is returned up the stack
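    A rough C sketch of those three choices (purely illustrative; the real UCLA interface packs arguments into a descriptor so that unknown operations can be bypassed generically, which is only hinted at here):

      #include <stddef.h>

      struct vnode;
      struct vnodeops {
          int (*vop_read)(struct vnode *vp, void *buf, size_t len);
          int (*vop_bypass)(struct vnode *vp, int op, void *args);
      };

      struct vnode {
          struct vnodeops *v_op;
          struct vnode    *v_lower;   /* vnode of the layer stacked below (NULL at the bottom) */
      };

      /* Choice 2: pass the operation down unchanged -- the generic bypass. */
      static int layer_bypass(struct vnode *vp, int op, void *args)
      {
          return vp->v_lower->v_op->vop_bypass(vp->v_lower, op, args);
      }

      /* Choice 3: do some work, call the lower layer, then post-process on the way back up.
         (Choice 1 would handle the operation entirely here and never call below.) */
      static int compress_layer_read(struct vnode *vp, void *buf, size_t len)
      {
          int err = vp->v_lower->v_op->vop_read(vp->v_lower, buf, len);
          /* decompression of buf would happen here on the return path */
          return err;
      }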
  • 32. Mounting Stackable Layers
    • Each layer is mounted with a separate command
      • Essentially pushing new layer on stack
    • Can be performed at any normal mount time
      • Not just on system build or boot
  • 33. What Can You Do With Stackable Layers?
    • Leverage off existing file system technology, adding
      • Compression
      • Encryption
      • Object-oriented operations
      • File replication
    • All without altering any existing code
  • 34. Performance of Stackable Layers
    • To be a reasonable solution, per-layer overhead must be low
    • In UCLA implementation, overhead is ~1-2% per layer
      • In system time, not elapsed time
    • Elapsed time overhead ~0.25% per layer
      • Highly application dependent, of course
  • 35. File Systems Using Other Storage Devices
    • All file systems discussed so far have been disk-based
    • The physics of disks has a strong effect on the design of the file systems
    • Different devices with different properties lead to different file systems
  • 36. Other Types of File Systems
    • RAM-based
    • Disk-RAM-hybrid
    • Flash-memory-based
    • MEMS-based
    • Network/distributed
      • discussion of these deferred
  • 37. Fitting Various File Systems Into the OS
    • Something like VFS is very handy
    • Otherwise, need multiple file access interfaces for different file systems
    • With VFS, interface is the same and storage method is transparent
    • Stackable layers makes it even easier
      • Simply replace the lowest layer
  • 38. In-Core File Systems
    • Store files in main memory, not on disk
      • Fast access and high bandwidth
      • Usually simple to implement
      • Hard to make persistent
      • Often of limited size
      • May compete with other memory needs
  • 39. Where Are In-Core File Systems Useful?
    • When a brain-dead OS can’t use all main memory for other purposes
    • For temporary files
    • For files requiring very high throughput
  • 40. In-Core File System Architectures
    • Dedicated memory architectures
    • Pageable in-core file system architectures
  • 41. Dedicated Memory Architectures
    • Set aside some segment of physical memory to hold the file system
      • Usable only by the file system
    • Either it’s small, or the file system must handle swapping to disk
    • RAM disks are typical examples
  • 42. Pageable Architectures
    • Set aside some segment of virtual memory to hold the file system
      • Share physical memory system
    • Can be much larger and simpler
    • More efficient use of resources
    • UNIX /tmp file systems are typical examples
  • 43. Basic Architecture of Pageable Memory FS
    • Uses VFS interface
    • Inherits most of code from standard disk-based filesystem
      • Including caching code
    • Uses separate process as “wrapper” for virtual memory consumed by FS data
  • 44. How Well Does This Perform?
    • Not as well as you might think
      • Only around 2 times the speed of a disk-based FS
      • Why?
    • Because any access requires two memory copies
      • 1. From FS area to kernel buffer
      • 2. From kernel buffer to user space
    • Fixable if VM can swap buffers around
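    The two copies look roughly like this on the read path (illustrative C; fs_area and the fixed-size kernel buffer are stand-ins, and a real kernel would use its copy-to-user primitive for the second copy):

      #include <stddef.h>
      #include <string.h>

      size_t incore_read(const char *fs_area, size_t off, char *user_buf, size_t len)
      {
          static char kernel_buf[4096];
          size_t n = len < sizeof(kernel_buf) ? len : sizeof(kernel_buf);

          memcpy(kernel_buf, fs_area + off, n);  /* copy 1: FS area -> kernel buffer */
          memcpy(user_buf, kernel_buf, n);       /* copy 2: kernel buffer -> user space */
          return n;                              /* remapping pages instead would eliminate copy 2 */
      }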
  • 45. Other Reasons Performance Isn’t Better
    • Disk file system makes substantial use of caching
    • Which is already just as fast
    • But file creation/deletion sees a bigger speedup
      • On disk, these operations require multiple trips to disk
  • 46. Disk/RAM Hybrid FS
    • Conquest File System
    • http://www.cs.fsu.edu/~awang/conquest
  • 47. Hardware Evolution [chart: accesses per second (log scale), 1990-2000; CPU and memory improve at 50%/yr, disk at 15%/yr; the CPU-disk gap grows from 10^5 (1 sec : 6 days) to 10^6 (1 sec : 3 months)]
  • 48. Price Trend of Persistent RAM [chart: $/MB (log scale) vs. year, 1995-2005; curves for paper/film, 3.5" HDD, 2.5" HDD, 1" HDD, and persistent RAM; the boom of digital photography bends the persistent-RAM curve; 4 to 10 GB of persistent RAM projected by 2005]
  • 49. Conquest
    • Design and build a disk/persistent-RAM hybrid file system
    • Deliver all file system services from memory, with the exception of high-capacity storage
  • 50. User Access Patterns
    • Small files
      • Take little space (10%)
      • Represent most accesses (90%)
    • Large files
      • Take most space
      • Mostly sequential accesses
    • Except database applications
  • 51. Files Stored in Persistent RAM
    • Small files (< 1MB)
      • No seek time or rotational delays
      • Fast byte-level accesses
      • Contiguous allocation
    • Metadata
      • Fast synchronous update
      • No dual representations
    • Executables and shared libraries
      • In-place execution
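    A minimal sketch of the placement decision these bullets imply (illustrative only; the 1 MB threshold is from the slide, while the structure and names are hypothetical, not Conquest’s actual code):

      #include <stdbool.h>
      #include <stddef.h>

      #define LARGE_FILE_THRESHOLD (1u << 20)   /* 1 MB, from the slide */

      enum storage_class { IN_PERSISTENT_RAM, ON_DISK };

      struct file_info {
          size_t size;
          bool   is_metadata;      /* directories, i_node-like structures, ... */
          bool   is_executable;    /* executables and shared libraries */
      };

      /* Small files, metadata, executables, and shared libraries stay in persistent RAM;
         only large-file data goes to disk. */
      enum storage_class place_file(const struct file_info *f)
      {
          if (f->is_metadata || f->is_executable || f->size < LARGE_FILE_THRESHOLD)
              return IN_PERSISTENT_RAM;
          return ON_DISK;
      }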
  • 52. Memory Data Path of Conquest [diagram: in conventional file systems, storage requests pass through IO buffer management (IO buffers), persistence support, and disk management before reaching the disk; in the Conquest memory data path, storage requests go through simplified persistence support straight to battery-backed RAM, which holds small files and metadata]
  • 53. Large-File-Only Disk Storage
    • Allocate in big chunks
      • Lower access overhead
      • Reduced management overhead
    • No fragmentation management
    • No tricks for small files
      • Storing data in metadata
    • No elaborate data structures
      • Wrapping a balanced tree onto disk cylinders
  • 54. Sequential-Access Large Files
    • Sequential disk accesses
      • Near-raw bandwidth
    • Well-defined readahead semantics
    • Read-mostly
      • Little synchronization overhead (between memory and disk)
  • 55. Disk Data Path of Conquest [diagram: conventional file systems route storage requests through IO buffer management, persistence support, and disk management to the disk; the Conquest disk data path keeps simplified IO buffer management and disk management for the large-file-only file system on disk, while small files and metadata stay in battery-backed RAM, with no persistence-support translation]
  • 56. Random-Access Large Files
    • Random access?
      • Common definition: nonsequential access
      • A typical movie has 150 scene changes
      • An MP3 file stores the title at the end of the file
    • Near Sequential access?
      • Simplify large-file metadata representation significantly
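    One way to exploit near-sequential access is to keep large-file metadata as a short in-memory array of large contiguous extents instead of per-block maps; even a linear scan is fast in memory. A hedged sketch (the structure and names are illustrative, not necessarily Conquest’s actual representation):

      #include <stddef.h>
      #include <stdint.h>

      struct extent {                /* one contiguous run of disk blocks */
          uint64_t disk_start;       /* first disk block of the run */
          uint64_t length;           /* number of blocks in the run */
      };

      struct large_file {
          struct extent *extents;    /* small in-memory array: "dumb" but fast */
          size_t         n_extents;
      };

      /* Map a file block number to a disk block with a linear scan. */
      int map_block(const struct large_file *f, uint64_t file_block, uint64_t *disk_block)
      {
          uint64_t base = 0;
          for (size_t i = 0; i < f->n_extents; i++) {
              if (file_block < base + f->extents[i].length) {
                  *disk_block = f->extents[i].disk_start + (file_block - base);
                  return 0;
              }
              base += f->extents[i].length;
          }
          return -1;                 /* past the end of the file */
      }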
  • 57. PostMark Benchmark
    • ISP workload (emails, web-based transactions)
    • 250 MB working set with 2 GB physical RAM
    • Conquest is comparable to ramfs
    • At least 24% faster than the LRU disk cache
  • 58. PostMark Benchmark
    • 10,000 files, 3.5 GB working set with 2 GB physical RAM
    • When both memory and disk components are exercised, Conquest can be several times faster than ext2fs, reiserfs, and SGI XFS
  • 59. PostMark Benchmark
    • 10,000 files, 3.5 GB working set with 2 GB physical RAM
    • When the working set > RAM, Conquest is 1.4 to 2 times faster than ext2fs, reiserfs, and SGI XFS
  • 60. Flash Memory File Systems
    • What is flash memory?
    • Why is it useful for file systems?
    • A sample design of a flash memory file system
  • 61. Flash Memory
    • A form of solid-state memory similar to ROM
      • Holds data without power supply
    • Reads are fast
    • Can be written once, more slowly
    • Can be erased, but very slowly
    • Limited number of erase cycles before degradation
  • 62. Writing In Flash Memory
    • If writing to empty location, just write
    • If writing to previously written location, erase it, then write
    • Typically, flash memories allow erasure only of an entire sector
      • Can read (sometimes write) other sectors during an erase
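    A rough C sketch of that write rule, using a toy in-memory flash array and a hypothetical sector-erase primitive (flash cells erase to all 1s, so an "empty" byte is modeled as 0xFF; assume the array starts erased):

      #include <stddef.h>
      #include <stdint.h>
      #include <string.h>

      #define SECTOR_SIZE 65536u                /* 64 KB erase granularity, as on the next slide */

      static uint8_t flash[8 * SECTOR_SIZE];    /* toy flash array; a real device needs a driver */

      static void erase_sector(size_t sector)   /* very slow on real parts (~500 ms/block) */
      {
          memset(&flash[sector * SECTOR_SIZE], 0xFF, SECTOR_SIZE);
      }

      /* Write one byte: an empty location is written directly; a previously written
         location forces an erase of its whole sector first. (A real file system would
         copy the sector's live data elsewhere before erasing.) */
      void flash_write_byte(size_t addr, uint8_t value)
      {
          if (flash[addr] != 0xFF)
              erase_sector(addr / SECTOR_SIZE);
          flash[addr] = value;
      }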
  • 63. Typical Flash Memory Characteristics
    • Price: ~$300/GByte
    • Power consumption: 15-45 mA active, 5-20 µA standby
    • Erase cycle: 500 ms/block
    • Cycle limit: 100,000 times
    • Sector size: 64 KBytes
    • Write cycle: 10 µs/byte
    • Read cycle: 80-150 ns
  • 64. Pros/Cons of Flash Memory
    • Small, and light
    • Uses less power than disk
    • Read time comparable to DRAM
    • No rotation/seek complexities
    • No moving parts (shock resistant)
    • Expensive (compared to disk)
    • Erase cycle very slow
    • Limited number of erase cycles
  • 65. Flash Memory File System Architectures
    • One basic decision to make
      • Is flash memory disk-like?
      • Or memory-like?
    • Should flash memory be treated as a separate device, or as a special part of addressable memory?
  • 66. Hitachi Flash Memory File System
    • Treats flash memory as device
      • As opposed to directly addressable memory
    • Basic architecture similar to log file system
  • 67. Basic Flash Memory FS Architecture
    • Writes are appended to tail of sequential data structure
    • Translation tables to find blocks later
    • Cleaning process to repair fragmentation
    • This architecture does no wear-leveling
  • 68. Flash Memory Banks and Segments
    • Architecture divides entire flash memory into banks (8, in current implementation)
    • Banks are subdivided into segments
      • 8 segments per bank, currently
    • 256 Kbytes per segment
    • 16 Mbytes total capacity
  • 69. Writing Data in Flash Memory File System
    • One bank is currently active
    • New data is written to block in active bank
    • When this bank is full, move on to bank with most free segments
    • Various data structures maintain illusion of “contiguous” memory
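    A small sketch of the bank-switch rule (illustrative; 8 banks as in the implementation described above, and the bookkeeping fields are hypothetical):

      #define NUM_BANKS 8

      struct bank {
          int free_segments;   /* segments with erased space remaining */
          int cleaning;        /* nonzero while the bank is being cleaned */
      };

      struct bank banks[NUM_BANKS];
      int active_bank;

      /* When the active bank fills up, switch to the bank with the most free segments. */
      void switch_active_bank(void)
      {
          int best = -1;
          for (int i = 0; i < NUM_BANKS; i++) {
              if (banks[i].cleaning)
                  continue;    /* no writes to a bank until its cleaning is done */
              if (best < 0 || banks[i].free_segments > banks[best].free_segments)
                  best = i;
          }
          if (best >= 0)
              active_bank = best;
      }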
  • 70. Cleaning Up Data
    • Cleaning is done on a segment basis
    • When a segment is to be cleaned, its entire bank is put on a cleaning list
    • No more writes to bank till cleaning is done
    • Segments chosen in manner similar to LFS
  • 71. Cleaning a Segment
    • Copy live data to another segment
    • Erase entire segment
      • segment is erasure granularity
    • Return bank to active bank list
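    In outline, cleaning looks like this (illustrative C; copy_live_data_out and erase_segment stand in for the real mechanisms):

      #include <stdbool.h>

      #define SEGMENTS_PER_BANK 8

      struct segment { bool has_live_data; };

      struct bank {
          struct segment segments[SEGMENTS_PER_BANK];
          bool cleaning;       /* bank is off the active list while true */
      };

      /* Hypothetical helpers: relocate live blocks, then bulk-erase the segment. */
      static void copy_live_data_out(struct segment *s) { s->has_live_data = false; }
      static void erase_segment(struct segment *s)      { (void)s; /* slow bulk erase */ }

      void clean_segment(struct bank *b, int seg)
      {
          b->cleaning = true;                      /* no more writes to this bank for now */
          copy_live_data_out(&b->segments[seg]);   /* move live data to another segment */
          erase_segment(&b->segments[seg]);        /* the segment is the erasure granularity */
          b->cleaning = false;                     /* return the bank to the active list */
      }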
  • 72. Performance of the Prototype System
    • No seek time, so sequential/random access should be equally fast
      • Around 650-700 Kbytes per second
    • Read performance goes at this speed
    • Write performance slowed by cleaning
      • How much depends on how full the file system is
      • Also, writing is simply slower in flash
  • 73. More Flash Memory File System Performance Data
    • On Andrew Benchmark, performs comparably to pageable memory FS
      • Even when flash memory nearly full
    • This benchmark does lots of reads, few writes
      • Allowing flash file system to perform lots of cleaning without delaying writes