Accelerating SSD Performance with HLNAND. Roland Schuetz, MOSAID Technologies Inc.; David Won, INDILINX
Presentation Outline
PC System Architecture: Where are the Bottlenecks?
- Migration of system memory is creating a new memory sub-architecture
- The upgraded PCIe link can be utilized for a new storage memory sub-architecture
- Most memory interfaces and system interconnects have undergone dramatic bandwidth upgrades over the years
- Bulk storage access bandwidth lags behind (mechanical latency)
SSD Market Segments: Enterprise, Notebook, Netbook / Ultraportable. Sources: STEC, SanDisk, STEC
Enterprise Flash Storage: an Excellent Market for HLNAND SSDs. Source: Violin 1010 Memory Appliance, Violin Memory, Inc.
Current SSD Offerings (1st Generation)
How are Current SSDs Stacking Up? Source: Engadget
Current SSDs vs. HDDs
HLNAND – New High-Speed NAND Standard
HLNAND – New High-Speed NAND Standard (cont'd)
PC Demands on System Storage
IO Operation Example: Can SSD Deliver "Instant Boot"?
* Mtron SSD 32 GB: Performance with a Catch, Patrick Schmid and Achim Roos, November 21, 2007; SSD: Mtron MSDSATA6025032NA.
10 seconds is a very noticeable time!
IO Operation Example: Can SSD Deliver "Instant Boot"? (cont'd)
HLNAND-based SSDs offer a 260%-450%, and up to 800%, IO rate improvement. 3.8 seconds is not instant, but much closer.
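The boot-time claim above follows from simple scaling: if the IO rate improves by a factor F, an IO-bound boot phase shrinks by roughly 1/F. A minimal sketch of that arithmetic, using the slide's 10-second baseline and improvement factors (variable names are ours):

```python
# Hypothetical boot-time scaling under an IO-bound assumption:
# phase time shrinks by 1/F when IO rate improves by factor F.
baseline_boot_s = 10.0                         # measured Mtron SSD boot time
improvements = {"low": 2.6, "mid": 4.5, "high": 8.0}  # 260%, 450%, 800%

scaled = {name: baseline_boot_s / f for name, f in improvements.items()}
print(round(scaled["low"], 2))   # 3.85 s, matching the slide's "3.8 sec"
print(round(scaled["high"], 2))  # 1.25 s at the 800% upper bound
```

This is a best-case model: phases of boot that are CPU-bound rather than IO-bound would not shrink.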
IOPS are the Holy Grail in Enterprise Applications
* Assuming a design similar to the STEC SSD, multiply by the media speed improvement: 45,000 * (266/40). This does not account for the 4GFC saturation limit.
Image: Holy Grail of the Monastery of Xenophontos, Macedonian Heritage, 2000-2008
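The footnote's back-of-envelope estimate can be spelled out directly (numbers are the slide's; the result is an upper bound that ignores the 4GFC link saturation it mentions):

```python
# Scale STEC's published IOPS by the media-rate ratio from the slide.
stec_iops = 45_000
hlnand_rate_mb_s = 266   # HLNAND ring media rate
nand_rate_mb_s = 40      # conventional NAND media rate

estimated_iops = stec_iops * hlnand_rate_mb_s / nand_rate_mb_s
print(int(estimated_iops))   # 299250 IOPS, before any interface saturation
```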
SSD Performance Comparison: HLNAND-based vs. NAND-based SSD
- SSD controller for HLNAND: HL1 ring host interface; rings at max. 266 MB/s x8b; 32 dies per ring
- SSD controller for NAND: parallel-bus host interface; channels 1-8 at max. 25 MB/s x8b each; 16 dies per channel, no termination
- Both: 128 GB SSD using 128 * 8Gb SLC
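The aggregate media bandwidth implied by the slide's two configurations can be checked with a quick calculation. The 4-ring count is our inference (128 dies at 32 dies per ring); the per-channel and per-ring rates are from the slide:

```python
# Aggregate media bandwidth of the two 128 GB configurations.
total_dies = 128                 # 128 x 8Gb SLC = 128 GB

# NAND SSD: 8 parallel channels at 25 MB/s each
nand_bw = 8 * 25                 # 200 MB/s

# HLNAND SSD: 32 dies per ring -> 4 rings (assumed), 266 MB/s per ring
rings = total_dies // 32
hlnand_bw = rings * 266          # 1064 MB/s

print(nand_bw, hlnand_bw, round(hlnand_bw / nand_bw, 1))  # 200 1064 5.3
```

A roughly 5x media-bandwidth advantage with a comparable die count is the core of the comparison.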
HLNAND SSD vs. NAND SSD: 8-bit HLNAND-based SSD vs. 8-bit NAND-based SSD
HLNAND SSD vs. NAND SSD: Read Throughput, Experimental Results (media rate). Plots from Mobile Embedded Lab, University of Seoul, 2008
HLNAND SSD vs. SSD vs. HDD: Estimated and Measured Read Throughput
HLNAND SSD vs. NAND SSD: Write Throughput, Experimental Results (media rate). Plots from Mobile Embedded Lab, University of Seoul, 2008
HLNAND SSD vs. SSD vs. HDD: Estimated and Measured Write Throughput
SSD Controller Design: Flash Translation Layer (FTL)
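An FTL's core job is remapping logical pages onto flash, since NAND pages cannot be overwritten in place. A minimal page-mapping sketch (illustrative only; class and method names are ours, not from the slides):

```python
# Minimal page-mapping FTL: logical pages map to (block, page) locations.
# Rewrites go to the next free page; the old copy becomes stale garbage
# that a garbage collector must later reclaim via copy + erase.
class PageMapFTL:
    def __init__(self, blocks, pages_per_block):
        self.free = [(b, p) for b in range(blocks)
                            for p in range(pages_per_block)]
        self.map = {}        # logical page number -> (block, page)
        self.stale = set()   # superseded physical pages awaiting GC

    def write(self, lpn):
        if lpn in self.map:
            self.stale.add(self.map[lpn])   # old copy is now garbage
        loc = self.free.pop(0)              # NAND pages written in order
        self.map[lpn] = loc
        return loc

    def read(self, lpn):
        return self.map.get(lpn)

ftl = PageMapFTL(blocks=4, pages_per_block=64)
ftl.write(0)
ftl.write(0)                          # rewrite logical page 0
print(ftl.read(0), len(ftl.stale))    # (0, 1) 1
```

The cost of the copy + erase reclaim path is exactly what the wear-leveling discussion below is concerned with.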
HLNAND Improves FTL Performance
Wear Leveling Enhancement: Copy-After-Page-Pair-Erase
1. Page-pair erase  2. Copy (Copy-after-PPE)
Variables: C_E = erase cost; C_cp = copy cost; M = number of blocks erased concurrently; N_p = number of pages in a block; k = number of page-copies; plus an additional copy-overhead term.
Blk: 0 1 2 3 4; Log: 0 1 2 4 4
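The slide defines the cost variables but the combining formula is not shown, so the following is our assumed reading: the erase cost is amortized over the M blocks erased concurrently, and each of the k live pages relocated afterwards adds one copy cost.

```python
# Assumed cost model for copy-after-page-pair-erase (the exact formula
# is not on the slide; this combination of its variables is our guess).
def reclaim_cost(C_E, C_cp, M, k):
    """Amortized cost of reclaiming one block:
    erase cost shared across M concurrently erased blocks,
    plus k page-copies at C_cp each."""
    return C_E / M + k * C_cp

# Example: erase 4 blocks together, then relocate 5 live pages
print(reclaim_cost(C_E=2000, C_cp=250, M=4, k=5))  # 2000/4 + 5*250 = 1750.0
```

The units here are arbitrary time units; the point is that raising M and lowering k both cut the per-block reclaim cost.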
Synthetic Workload Experiment
Experiment performed by Mobile Embedded Lab, University of Seoul, 2008
Operation mix: Read 617 (60%), Program 263 (25%), Copyback 82 (10%), Erase 38 (5%)
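A synthetic workload with the mix above can be generated by weighted sampling. A small sketch (function name and seed are ours; the percentages are the slide's):

```python
import random

# Operation shares from the slide's workload table.
mix = {"Read": 0.60, "Program": 0.25, "Copyback": 0.10, "Erase": 0.05}

def synth_workload(n, seed=0):
    """Draw n operations with probabilities proportional to the mix."""
    rng = random.Random(seed)           # fixed seed for reproducibility
    ops, weights = list(mix), list(mix.values())
    return [rng.choices(ops, weights)[0] for _ in range(n)]

trace = synth_workload(1000)
print(trace.count("Read") / len(trace))   # close to 0.60
```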
New Hierarchy for New User Experience   Historical Storage Hierarchy New Storage Hierarchy
Conclusions
- High-speed interface
- Low power consumption
- High scalability
- Interface extensibility
- Reduced overall cost with increased performance
- Advanced core features
Editor's Notes

  1. August 2008 Flash Memory Summit, Santa Clara, CA, USA
  2. Loading and saving from and to the HDD/SSD
  3. August 2008 Flash Memory Summit, Santa Clara, CA, USA
  4. How many conventional flash devices per channel does it take to saturate SATA 1 / SATA 2 / SATA 3?
  5. Comparable number of interface blocks in the controller. Higher-speed IO in HLNAND rings allows higher bandwidth; point-to-point connection facilitates higher IO rates per ring (channel). Lower pin count on the HLNAND controller, dependent on the number of rings, not the number of devices.