Cassandra and Solid State Drives

By Rick Branson of Instagram

  1. CASSANDRA & SOLID STATE DRIVES (Rick Branson, DataStax)
  2. FACT: CASSANDRA'S STORAGE ENGINE WAS OPTIMIZED FOR SPINNING DISKS
  3. LSM-TREES
  4. WRITE PATH
  5. [Diagram: write path. A client calls insert({ cf1: { row1: { col3: foo } } }). The mutation is appended to the on-disk commit log (COMMIT), then applied to the in-memory memtable for "cf1", which now holds row1 (col1: [del], col2: "def", col3: "foo") and row2 (col1: "xyz").]
  6. [Diagram: FLUSH. The memtable for "cf1" is written to disk as a new SSTable, alongside existing SSTables 1-4.]
  7. [Diagram: COMPACT. SSTables are merged to maintain read performance.]
  8. [Diagram: the new merged SSTable is streamed to disk and the old SSTables are erased.]
  9. TAKEAWAYS
     • All disk writes are sequential, append-only operations
     • On-disk tables (SSTables) are written in sorted order, so compaction is linear complexity, O(N)
     • SSTables are completely immutable
  10. [Same TAKEAWAYS slide, stamped IMPORTANT]
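The write path on the preceding slides can be sketched in a few lines (an illustrative toy model, not Cassandra's actual implementation; the class and method names here are made up):

```python
class ToyLSMStore:
    """Toy LSM-tree store: commit log -> memtable -> immutable sorted SSTables."""

    def __init__(self):
        self.commit_log = []  # append-only durability log (sequential writes)
        self.memtable = {}    # in-memory buffer: row -> {col: value}
        self.sstables = []    # immutable "on-disk" tables: sorted lists of rows

    def insert(self, row, col, value):
        # 1. Append the mutation to the commit log (a sequential disk write).
        self.commit_log.append((row, col, value))
        # 2. Apply it to the memtable (in memory, no disk seek).
        self.memtable.setdefault(row, {})[col] = value

    def flush(self):
        # The memtable is written out in sorted row order as an immutable SSTable.
        self.sstables.append(sorted(self.memtable.items()))
        self.memtable = {}
        self.commit_log.clear()  # flushed data no longer needs the log

    def compact(self):
        # Merge all SSTables into one; since inputs are sorted, a real merge
        # is linear in the total data size.
        merged = {}
        for sstable in self.sstables:
            for row, cols in sstable:
                merged.setdefault(row, {}).update(cols)
        self.sstables = [sorted(merged.items())]

store = ToyLSMStore()
store.insert("row1", "col3", "foo")
store.insert("row2", "col1", "xyz")
store.flush()
print(store.sstables[0])  # -> [('row1', {'col3': 'foo'}), ('row2', {'col1': 'xyz'})]
```

Note that every "disk" structure here (commit log, SSTable) only ever grows by appending or is written once whole, which is the property the takeaways slide is highlighting.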
  11. COMPARED
      • Most popular data storage engines rewrite modified data in place: MySQL (InnoDB), PostgreSQL, Oracle, MongoDB, Membase, BerkeleyDB, etc.
      • Most perform similar buffering of writes before flushing to disk
      • ... but flushes are RANDOM writes
  12. SPINNING DISKS
      • Dirt cheap: $0.08/GB
      • Seek time limited by the time it takes for the drive to rotate: IOPS = RPM/60
      • 7,200 RPM = ~120 IOPS
      • 15,000 RPM has been the max for decades
      • Sequential operations are best: 125MB/sec for modern SATA drives
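The RPM rule of thumb above is just revolutions per second: in the worst case each random I/O waits for a platter revolution, so the ceiling is roughly one I/O per revolution.

```python
# Random IOPS on a spinning disk are roughly bounded by rotational speed.
for rpm in (7_200, 15_000):
    iops = rpm / 60  # revolutions per minute -> revolutions (I/Os) per second
    print(f"{rpm} RPM -> ~{iops:.0f} IOPS")
# prints:
# 7200 RPM -> ~120 IOPS
# 15000 RPM -> ~250 IOPS
```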
  13. THAT WAS THE WORLD IN WHICH CASSANDRA WAS BUILT
  14. 2012: MLC NAND FLASH*
      • Affordable: ~$1.75/GB street
      • Massive IOPS: 39,500/sec read, 23,000/sec write
      • Latency of less than 100µs
      • Good sequential throughput: 270MB/sec read, 205MB/sec write
      • Way cheaper per IOPS: $0.02 vs $1.25
      * based on specifications provided by Intel for the 300GB Intel 320 drive
  15. WITH RANDOM ACCESS STORAGE, ARE CASSANDRA'S LSM-TREES OBSOLETE?
  16. SOLID STATE HAS SOME MAJOR BUTS...
  17. ... BUT
      • Cannot overwrite directly: must erase first, then write
      • Can write in small increments (4KB), but only erase in ~512KB blocks
      • Latency: a write is ~100µs, an erase is ~2ms
      • Limited durability: ~5,000 erase cycles (MLC) for each erase block
  18. WEAR LEVELING is used to reduce the number of total erase operations
  19-29. [Diagram sequence: wear leveling. An erase block is divided into disk pages; Writes 1-3 each fill a page. To modify data from only Write 2 (remember: the whole block must be erased), the drive marks Write 2's page as garbage and appends the modified data to an empty block.]
  30. Wait... GARBAGE?
  31. THAT MEANS...
  32. ... fragmentation, WHICH MEANS...
  33. Garbage Collection!
  34. GARBAGE COLLECTION
      • Compacts fragmented disk blocks
      • Erase operations drag on performance
      • Modern SSDs do this in the background... as much as possible
      • If no empty blocks are available, GC must be done before ANY writes can complete
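The mark-garbage-and-append behavior from the wear-leveling diagrams, plus the garbage collection described here, can be modeled as a toy flash translation layer (a deliberate simplification invented for this sketch; real FTLs track logical-to-physical mappings, spare area, and per-block wear):

```python
PAGES_PER_BLOCK = 4  # real drives: ~128 4KB pages per ~512KB erase block

class ToyFlash:
    def __init__(self, num_blocks):
        # Each page slot is None (empty), ("live", data), or ("garbage", data).
        self.blocks = [[None] * PAGES_PER_BLOCK for _ in range(num_blocks)]
        self.erase_count = 0

    def _append(self, data):
        # Flash pages can only be written while empty.
        for block in self.blocks:
            for i, page in enumerate(block):
                if page is None:
                    block[i] = ("live", data)
                    return True
        return False  # no empty page anywhere

    def write(self, data, replaces=None):
        # An "overwrite" marks the old copy garbage and appends elsewhere.
        if replaces is not None:
            for block in self.blocks:
                for i, page in enumerate(block):
                    if page == ("live", replaces):
                        block[i] = ("garbage", page[1])
        if not self._append(data):
            # No empty pages left: GC must run before this write can complete.
            self.collect_garbage()
            self._append(data)  # assumes GC freed at least one page

    def collect_garbage(self):
        # Pick the block with the most garbage, copy its live pages out,
        # and erase it. The copy-out is extra flash writing (amplification).
        victim = max(self.blocks,
                     key=lambda b: sum(p is not None and p[0] == "garbage" for p in b))
        survivors = [p[1] for p in victim if p is not None and p[0] == "live"]
        victim[:] = [None] * PAGES_PER_BLOCK
        self.erase_count += 1
        for data in survivors:
            self._append(data)

flash = ToyFlash(num_blocks=2)
for n in range(8):                   # fill every page on the 2-block drive
    flash.write(f"w{n}")
flash.write("w1-v2", replaces="w1")  # overwrite forces GC before it can land
print(flash.erase_count)             # -> 1
```

This reproduces the slide's worst case: with no empty blocks available, the incoming write stalls behind a collection, and the three live pages copied out of the victim block are writes the host never asked for.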
  35. WRITE AMPLIFICATION
      • When only a few kilobytes are written, but fragmentation causes a whole block to be rewritten
      • The smaller & more random the writes, the worse this gets
      • Modern "mark and sweep" GC reduces it, but cannot eliminate it
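The worst case described above is easy to quantify with the page and block sizes from the earlier slide: a single 4KB host write that forces a full ~512KB block rewrite amplifies by 128x.

```python
page_kb = 4           # smallest writable unit
erase_block_kb = 512  # smallest erasable unit
# Write amplification = physical bytes flashed / logical bytes requested.
worst_case_wa = erase_block_kb / page_kb
print(worst_case_wa)  # -> 128.0
```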
  36. Torture test shows massive write performance drop-off for a heavily fragmented drive. Source: http://www.anandtech.com/show/4712/the-crucial-m4-ssd-update-faster-with-fw0009/6
  37. Some poorly designed drives COMPLETELY fall apart. Source: http://www.anandtech.com/show/5272/ocz-octane-128gb-ssd-review/6
  38. Even a well-behaved drive suffers significantly from the torture test. Source: http://www.anandtech.com/show/4244/intel-ssd-320-review/11
  39. Post-torture, all disk blocks were marked empty, and the "fast" comes back... Source: http://www.anandtech.com/show/4244/intel-ssd-320-review/11
  40. "TRIM"
      • Filesystems typically don't immediately erase data when files are deleted; they just mark them as deleted and erase later
      • TRIM allows the OS to actively tell the drive when a region of disk is no longer used
      • If an entire erase block is marked as unused, GC is avoided; otherwise TRIM just hastens the collection process
  41. TRIM only reduces the write amplification effect; it can't eliminate it.
  42. THEN THERE'S LIFETIME...
  43. AnandTech estimates that modern MLC SSDs only last about 1.5 years under heavy MySQL load, which causes around 10x write amplification
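A figure in that ballpark falls out of back-of-the-envelope endurance math (the sustained write rate below is an assumption chosen for illustration, not AnandTech's measured input):

```python
capacity_gb = 300          # Intel 320 drive from the earlier slide
erase_cycles = 5_000       # MLC endurance per erase block
write_amplification = 10   # heavy random-write MySQL workload (slide above)
host_write_mb_per_s = 3.0  # ASSUMED sustained logical write rate

raw_endurance_gb = capacity_gb * erase_cycles  # cells survive ~1.5PB of flashing
logical_endurance_gb = raw_endurance_gb / write_amplification
lifetime_s = logical_endurance_gb * 1024 / host_write_mb_per_s
print(f"~{lifetime_s / (365 * 24 * 3600):.1f} years")  # -> ~1.6 years
```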
  44. REMEMBER THIS?
  45. TAKEAWAYS
      • All disk writes are sequential, append-only operations
      • On-disk tables (SSTables) are written in sorted order, so compaction is linear complexity, O(N)
      • SSTables are completely immutable
  46. CASSANDRA ONLY WRITES SEQUENTIALLY
  47. "For a sequential write workload, write amplification is equal to 1, i.e., there is no write amplification." Source: Hu, X.-Y., and R. Haas, "The Fundamental Limitations of Flash Random Write Performance: Understanding, Analysis, and Performance Modeling"
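The contrast between the two workload shapes can be put side by side in a toy model (simplified on purpose: it assumes each scattered random update forces a full block rewrite, while appends only ever fill fresh pages):

```python
PAGES_PER_BLOCK = 128  # ~512KB erase block / 4KB pages

def write_amplification(pages_written, sequential):
    if sequential:
        # Appends fill each erase block completely before moving on, so
        # blocks become garbage all at once and GC never copies live data.
        physical = pages_written
    else:
        # Worst case: every scattered update triggers a read-modify-write
        # of its whole erase block.
        physical = pages_written * PAGES_PER_BLOCK
    return physical / pages_written

print(write_amplification(10_000, sequential=True))   # -> 1.0
print(write_amplification(10_000, sequential=False))  # -> 128.0
```

This is the deck's closing argument in miniature: Cassandra's append-only SSTable writes land on the left-hand side of that comparison.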
  48. THANK YOU. ~ @rbranson
