1. An Efficient Management Scheme for Large-Scale Flash-Memory Storage Systems
   Li-Pin Chang and Tei-Wei Kuo, 2004 ACM Symposium on Applied Computing
   Speaker: Park Jongseon
2. Contents
   - Introduction
   - Fundamentals
   - Schematic of traditional method
   - Space Management
   - Logical-to-Physical Address Translation
   - Performance Evaluation
   - Conclusion
3. Introduction
   - Traditional designs of flash-memory storage systems
     - Usually adopt a static, table-driven scheme
     - With a fixed-size granularity (e.g., 16 KB)
   - Severe challenges from the rapid growth of capacity
     - Performance degradation at system start-up
     - Huge demand for main-memory space for management
   - Enlarging the granularity cannot be the solution
     - A tradeoff exists between memory usage and system performance (a rough sketch of this tradeoff follows below)
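As a back-of-the-envelope illustration of that tradeoff, the sketch below compares the RAM footprint of a static mapping table at page versus block granularity. The 16 GB capacity and the 4-byte entry size are assumptions chosen for illustration, not figures from the paper.

    # Back-of-the-envelope sketch of the granularity tradeoff; capacity,
    # page/block sizes, and the 4-byte entry are illustrative assumptions.
    def table_size_bytes(capacity_bytes, granularity_bytes, entry_bytes=4):
        """RAM footprint of a static, table-driven mapping."""
        return (capacity_bytes // granularity_bytes) * entry_bytes

    CAPACITY = 16 * 2**30           # assume a 16 GB flash
    PAGE, BLOCK = 512, 16 * 2**10   # typical NAND page / block sizes

    print(table_size_bytes(CAPACITY, PAGE) // 2**20, "MB at page granularity")   # 128 MB
    print(table_size_bytes(CAPACITY, BLOCK) // 2**20, "MB at block granularity") # 4 MB
    # Fine granularity keeps small writes cheap but costs ~128 MB of RAM here;
    # coarse granularity needs only ~4 MB but forces a whole 16 KB block to be
    # copied on every small update -- the performance side of the tradeoff.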
4. Introduction (cont')
   - Investigate the behavior of access patterns generated by realistic, typical workloads
   - Adopt variable granularities of management units
     - Issues
       - Address translation
       - Space management
       - Garbage collection
   - Goal: improve resource usage and system performance
5. Fundamentals
   - NAND flash memory
     - Organized in terms of blocks
     - Each block consists of a fixed number of pages
     - Typical sizes: block 16 KB, page 512 B
     - Erase unit (block) > read/write unit (page)
     - A block must be erased before it is written or updated
     - Lifetime limitation: about 1 million erase cycles per block
     - Unbalanced operation speeds: erase > write > read
   - Therefore, flash management usually adopts
     - Out-of-place updating (a toy model follows below)
     - Wear leveling: avoid wearing out specific blocks
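A toy model of out-of-place updating, for illustration only; the class and field names are hypothetical and not the paper's implementation.

    # Toy model of out-of-place updating: a logical page is never overwritten
    # in place; the stale copy is marked dead and the write lands on a fresh
    # physical page. Dead pages wait for garbage collection.
    class Flash:
        def __init__(self, num_pages):
            self.state = ["free"] * num_pages   # "free" | "live" | "dead"
            self.l2p = {}                       # logical page -> physical page

        def write(self, logical, _data):
            old = self.l2p.get(logical)
            if old is not None:
                self.state[old] = "dead"        # invalidate the old copy
            new = self.state.index("free")      # pick any free page (simplified)
            self.state[new] = "live"
            self.l2p[logical] = new

    flash = Flash(num_pages=4)
    flash.write(0, b"v1")
    flash.write(0, b"v2")            # the update goes to a new page
    print(flash.state)               # ['dead', 'live', 'free', 'free']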
6. Schematic of traditional method
7. Contents
   - Introduction
   - Fundamentals
   - Schematic of traditional method
   - Space Management
     - PC-based garbage collection
     - Value-driven heuristic method
     - Allocation strategies
   - Logical-to-Physical Address Translation
   - Performance Evaluation
   - Conclusion
8. Space Management
   - PC (physical cluster): the base management unit
     - Four kinds: LCPC, LDPC, FCPC, FDPC (Live/Free x Clean/Dirty)
     - Dirty means the PC may be involved in garbage collection (a sketch of the bookkeeping follows below)
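A rough sketch of the PC bookkeeping this slide implies; the enum and field names are illustrative assumptions rather than the paper's data structures.

    from dataclasses import dataclass
    from enum import Enum

    class PCState(Enum):
        LCPC = "live-clean"   # valid data, no stale pages
        LDPC = "live-dirty"   # valid data, but eligible for garbage collection
        FCPC = "free-clean"   # erased space, ready to allocate
        FDPC = "free-dirty"   # stale space, must be recycled before reuse

    @dataclass
    class PC:
        start_page: int       # first physical page covered by the cluster
        num_pages: int        # variable granularity: clusters differ in size
        state: PCState

    pcs = [PC(0, 32, PCState.FCPC), PC(32, 96, PCState.LDPC)]
    print([(pc.num_pages, pc.state.value) for pc in pcs])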
9. PC-Based Garbage Collection
   - An FDPC is selected for recycling
   - Find its proper dirty subtree
   - All LDPCs in the dirty subtree must be copied elsewhere
   - Merge the FDPCs into one large FDPC
   - Erase the blocks of that large FDPC and turn it into an FCPC
   (the control flow is sketched below)
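A step-by-step sketch of the recycling flow above. The dirty subtree is reduced to a flat list of clusters, so this shows only the control flow, not the paper's tree bookkeeping.

    def garbage_collect(dirty_subtree, copy_out):
        """Recycle a dirty subtree and return one large free-clean cluster."""
        for pc in dirty_subtree:
            if pc["state"] == "LDPC":
                copy_out(pc)             # live data must be copied elsewhere
                pc["state"] = "FDPC"     # its old space becomes free-dirty
        merged = {"pages": sum(pc["pages"] for pc in dirty_subtree),
                  "state": "FDPC"}       # merge the FDPCs into one large FDPC
        merged["state"] = "FCPC"         # erase its blocks -> free-clean
        return merged

    subtree = [{"pages": 64, "state": "LDPC"}, {"pages": 64, "state": "FDPC"}]
    print(garbage_collect(subtree, copy_out=lambda pc: None))  # 128-page FCPC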
10. Value-Driven Heuristic Method
   - benefit() and cost() of recycling a PC
   - The garbage-collection policy selects the candidate with the largest weight (see the sketch below)
   - Example: the subtree rooted at node B
     - (0 - 2 * 64) + (64 - 0) + (128 - 0) = 64
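The weight below is reverse-engineered from the slide's own example: an FDPC contributes its size as benefit, while each live page in an LDPC costs two page operations (a read plus a write) to copy out. Treat the exact formula as an assumption that is merely consistent with the example.

    def subtree_weight(pcs):
        weight = 0
        for pc in pcs:
            if pc["state"] == "LDPC":
                weight += 0 - 2 * pc["pages"]   # copying cost for live data
            elif pc["state"] == "FDPC":
                weight += pc["pages"] - 0       # benefit: space reclaimed
        return weight

    # The slide's example, the subtree rooted at node B:
    subtree_b = [{"state": "LDPC", "pages": 64},
                 {"state": "FDPC", "pages": 64},
                 {"state": "FDPC", "pages": 128}]
    print(subtree_weight(subtree_b))   # (0 - 2*64) + (64 - 0) + (128 - 0) = 64
    # The collector recycles the candidate subtree with the largest weight.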
11. Allocation Strategies
   - There are three cases for space allocation when a new request arrives (sketched below)
     - Case 1: an FCPC exists that can accommodate the request
       - Chosen by a best-fit algorithm
     - Case 2: an FDPC exists that can accommodate the request
       - Chosen by the weight-function values of the PCs
     - Case 3: no single PC can accommodate the request
       - Merge available PCs repeatedly until a proper FCPC appears
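A control-flow sketch of the three cases; the `recycle` callback and the `weight` field stand in for the garbage collector and the value-driven heuristic sketched above, so this is not the paper's actual allocator.

    def allocate(request_pages, fcpcs, fdpcs, recycle):
        # Case 1: a free-clean cluster is large enough -> best fit
        fits = [pc for pc in fcpcs if pc["pages"] >= request_pages]
        if fits:
            return min(fits, key=lambda pc: pc["pages"])
        # Case 2: a free-dirty cluster is large enough -> pick the one with
        # the largest weight and recycle it before handing it out
        fits = [pc for pc in fdpcs if pc["pages"] >= request_pages]
        if fits:
            return recycle([max(fits, key=lambda pc: pc["weight"])])
        # Case 3: no single PC is large enough -> merge/recycle PCs until a
        # sufficiently large free-clean cluster appears
        return recycle(fcpcs + fdpcs)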
12. Logical-to-Physical Address Translation
   - Dynamic-hashing-based method (a sketch follows below)
     - A main-memory-resident hash table instead of a static array
     - Each hash entry is a chain of tuples for collision resolution
     - Each tuple represents a logical chunk (LC) of pages
     - Each tuple contains the starting logical address, the starting physical address, and the number of pages
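A minimal sketch of the hash table described here: each bucket chains (start_lba, start_pba, count) tuples, one per logical chunk. The bucket count and the way a lookup probes for the chunk covering a page are assumptions made for illustration, not the paper's lookup procedure.

    NUM_BUCKETS = 1024
    MAX_CHUNK = 128                       # assumed upper bound on LC size

    class L2PTable:
        def __init__(self):
            self.buckets = [[] for _ in range(NUM_BUCKETS)]

        def insert(self, start_lba, start_pba, count):
            # one tuple per variable-sized logical chunk
            self.buckets[start_lba % NUM_BUCKETS].append((start_lba, start_pba, count))

        def translate(self, lba):
            # probe the buckets of every plausible chunk start for a tuple
            # that covers this logical page
            for start in range(lba, max(lba - MAX_CHUNK, -1), -1):
                for s, p, n in self.buckets[start % NUM_BUCKETS]:
                    if s == start and s <= lba < s + n:
                        return p + (lba - s)    # offset within the chunk
            return None                          # unmapped

    table = L2PTable()
    table.insert(start_lba=100, start_pba=4096, count=8)
    print(table.translate(103))                  # -> 4099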
13. Contents
   - Introduction
   - Fundamentals
   - Schematic of traditional method
   - Space Management
   - Logical-to-Physical Address Translation
   - Performance Evaluation
     - Memory usage vs. number of pages written
     - System startup time
   - Conclusion
14. Performance Evaluation
   - Workload 1: an ordinary user access pattern (the root disk of a mobile PC)
   - Workload 2: a multimedia data access pattern (the storage system of a multimedia appliance)
   - Performance metrics
     - Memory overhead: the size of the required main-memory footprint (for the proposed scheme, the peak size)
     - The total amount of data written (including the extra overhead of garbage collection)
15. Memory Usage vs. Number of Pages Written
   (figure) Extra page writes are caused by the copying of live data during garbage collection.
16. Memory usage vs. number of pages written (summary)

   Workload 1: an ordinary user access pattern
     Scheme                Footprint Size    Pages Written
     The proposed scheme   22.6 MB (peak)    26.18 GB
     F-scheme (1 block)    10 MB             48.9 GB
     F-scheme (1 page)     321 MB            26.3 GB
     Actually written      -                 25.8 GB

   Workload 2: a multimedia data access pattern
     Scheme                Footprint Size    Pages Written
     The proposed scheme   3.18 MB (peak)    20.0 GB
     F-scheme (1 block)    10 MB             24.8 GB
     F-scheme (1 page)     321 MB            20.0 GB
17. System Startup Time
   - The granularity size was the dominant factor
     - The spare area of the first page of every management unit has to be scanned at startup
     - For the proposed scheme, only the spare area of the first page of every PC needs to be scanned
   (a rough count is sketched below)
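A rough, purely illustrative count of how many spare areas the startup scan touches at different granularities; the flash geometry and the average PC size are assumed numbers, not measurements from the paper.

    CAPACITY = 16 * 2**30
    PAGE, BLOCK = 512, 16 * 2**10
    AVG_PC = 256 * 2**10             # hypothetical average physical-cluster size

    print("page-sized units :", CAPACITY // PAGE)    # ~33.5 million spare areas
    print("block-sized units:", CAPACITY // BLOCK)   # ~1 million spare areas
    print("PC-sized units   :", CAPACITY // AVG_PC)  # ~65 thousand spare areas
    # Fewer, larger management units mean fewer spare-area reads at mount
    # time, which is why coarser, variable-sized granularity shortens startup.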
18. Conclusion
   - This paper proposes a flexible management scheme to efficiently manage high-capacity flash-memory storage systems
   - A tree-based management scheme with variable allocation granularities
     - Garbage collection: a value-driven heuristic
     - A space allocation algorithm
     - An integrated solution: a logical-to-physical address translation method
   - The system startup time, the memory usage, and the on-line access performance are all improved