BigData as a Platform: Cassandra and Current Trends
BigData as a Platform: Cassandra and Current Trends. Talk from the 8th Advanced Computing Conference in Seoul, September 12th, 2012.


  1. BigData As A Platform: Cassandra and Current Trends. Matthew F. Dennis // @mdennis. 8th Advanced Computing Conference, September 12, 2012, Seoul, South Korea
  2. Why Does BigData Matter To Business?
  3. Effective Use of BigData Leads To Success
  4. And Businesses Have Noticed … (according to job trends)
  5. Requirements Of A BigData Platform: ● Performance ● Scalability ● Availability
  6. Measuring Performance: ● throughput (in ops/sec) ● latency (for a single request)
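The two metrics on this slide can be made concrete with a small harness. This is a minimal sketch, not tooling from the talk; the `request` callable stands in for whatever operation is being benchmarked (here a ~1 ms sleep).

```python
import time

def measure(request, n=200):
    """Issue n requests serially; report throughput and latency percentiles."""
    latencies = []
    start = time.perf_counter()
    for _ in range(n):
        t0 = time.perf_counter()
        request()                                 # the operation under test
        latencies.append(time.perf_counter() - t0)
    elapsed = time.perf_counter() - start
    latencies.sort()
    return {
        "throughput_ops_sec": n / elapsed,        # ops/sec over the whole run
        "p50_latency_s": latencies[n // 2],       # median single-request latency
        "p99_latency_s": latencies[int(n * 0.99)],
    }

# a stand-in request that sleeps ~1 ms
stats = measure(lambda: time.sleep(0.001))
```

Note that the two metrics answer different questions: throughput is a property of the system under sustained load, latency of a single request; the scalability slides later in the deck are phrased precisely in terms of how these two move together.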
  7. Uncommon Cassandra Performance Features: ● No Locking ● Log Structured Storage Engine ● Highly Parallel ● Immutable On Disk Structures ● On Disk Compression ● No “Master” Nodes
  8. Cassandra Throughput
  9. Cassandra Latency (in microseconds)
  10. Cassandra + SSD: But can you get such low latency and high throughput for random reads from disk? With Cassandra + SSDs, YES! (SSD latency is usually only ~100 us)
  11. A Random Note About C* and SSDs: ● Cassandra can use cheap consumer-grade MLC SSDs (~$1.00 USD / GB) ● no in-place updates means far fewer erase cycles on the drive, which means the drive lasts longer ● compared to nearly all other databases, consumer SSDs last ~10x longer under C* ● to put it another way, MLC drives last about as long with C* as enterprise SLC drives last with most other databases
  12. Why Not On Rotational Disks Too? ● rotational disks require ~8ms per seek ● note that this is a HW limitation, an absolute upper limit (for that HW) ● no system can do better than the seek time when randomly retrieving data from disk (and most do far worse)
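The seek-time ceiling on this slide is simple arithmetic: if every random read costs one seek, the seek time alone bounds random reads per second per spindle. A quick back-of-the-envelope version, using the slide's ~8 ms figure and the earlier ~100 us SSD figure:

```python
# Upper bound on random reads per second imposed by access time alone:
# one random read costs at least one seek (HDD) or one access (SSD).
def max_random_reads_per_sec(access_time_s):
    return 1.0 / access_time_s

hdd_ceiling = max_random_reads_per_sec(0.008)    # ~8 ms seek  -> ~125 ops/sec per spindle
ssd_ceiling = max_random_reads_per_sec(0.0001)   # ~100 us     -> ~10,000 ops/sec
```

That roughly 80x gap per device is why the previous slide's answer for low-latency random reads is "Cassandra + SSDs", not more spindles.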
  13. What About Writes/Updates? ● all write I/O in Cassandra is sequential ● no global write lock ● no B-trees ● compare to MySQL, BerkeleyDB, MongoDB, Oracle, et cetera, which either lock (sometimes with a global lock) and/or generate random writes for updates ● locking is not the only way to handle concurrency!
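The "all write I/O is sequential" point comes from log-structured storage: an update is a new append, never an in-place overwrite. The toy store below illustrates only that idea; it is not Cassandra's actual commit-log or SSTable format (a real engine adds memtables, flushes, and compaction rather than scanning the log on reads).

```python
import json
import os
import tempfile

class AppendOnlyLog:
    """Toy log-structured store: every write is a sequential append;
    reads scan the log and keep the last value written for a key."""

    def __init__(self, path):
        self.path = path
        open(path, "a").close()

    def put(self, key, value):
        with open(self.path, "a") as f:           # append = sequential I/O
            f.write(json.dumps({"k": key, "v": value}) + "\n")

    def get(self, key):
        result = None
        with open(self.path) as f:
            for line in f:
                rec = json.loads(line)
                if rec["k"] == key:
                    result = rec["v"]             # last write wins
        return result

path = os.path.join(tempfile.mkdtemp(), "log.jsonl")
log = AppendOnlyLog(path)
log.put("a", 1)
log.put("a", 2)   # the update is a new append, not an in-place overwrite
```

Because no record is ever rewritten in place, no record needs to be locked against concurrent modification, which is the slide's closing point about locking not being the only way to handle concurrency.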
  14. Larger Than Memory Datasets: ● write performance degrades only marginally as the dataset outgrows memory; essentially no change in latency or throughput ● read performance degrades gracefully, tracking the percentage of data that fits in memory
  15. Most Systems Do Not Behave That Way
  16. Performance Compared to HBase: ● 10x better read throughput ● 8x better write throughput ● 8x better read latency ● 10x better write latency (even when HBase was running without durability). Source: Middleware Systems Research Group, University of Toronto, Canada, et al., 38th International Conference on Very Large Data Bases
  17. And We’re Not Even Finished Yet ... ● native CQL transport layer ● unified off-heap row and key caches ● vnodes ● no read-before-write on secondary indexes ● native JBOD support ● straight-up profiler-driven optimizations
  18. Measuring Scalability: If performance is measured in throughput and latency, then scalability is the stability of latency as throughput increases (or the stability of latency and throughput as “load” increases); essentially, scalability is how well a system handles growth
  19. Linear Scalability: If for all values of X, Y, Z, C and N, (latency=X, throughput=Y, “load”=Z, nodes=N) implies (latency=X, throughput=Y, “load”=CZ, nodes=CN), then the system is perfectly linearly scalable with respect to “load”
  20. Linear Scalability: If for all values of X, Y, Z, C and N, (latency=X, throughput=Y, “load”=Z, nodes=N) implies (latency=X, throughput=CY, “load”=Z, nodes=CN), then the system is perfectly linearly scalable with respect to throughput
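The definition on these two slides is directly checkable against benchmark data: with C times the nodes, a linearly scalable system should sustain C times the throughput at unchanged latency. A small sketch of that check; the measurements below are made up for illustration, not figures from the talk.

```python
# Made-up cluster measurements: node count, sustained throughput (ops/sec),
# and median latency (ms) at that throughput.
runs = [
    {"nodes": 1,  "throughput": 10_000,  "latency_ms": 5.0},
    {"nodes": 4,  "throughput": 40_000,  "latency_ms": 5.1},
    {"nodes": 12, "throughput": 120_000, "latency_ms": 5.2},
]

def linearly_scalable(runs, latency_slack=0.10, throughput_slack=0.10):
    """True if every run sustains ~C times the base throughput on C times
    the nodes while latency stays within a small tolerance of the base."""
    base = runs[0]
    for r in runs[1:]:
        c = r["nodes"] / base["nodes"]
        if abs(r["latency_ms"] - base["latency_ms"]) > latency_slack * base["latency_ms"]:
            return False
        if abs(r["throughput"] - c * base["throughput"]) > throughput_slack * c * base["throughput"]:
            return False
    return True
```

The slack parameters are the judgment call: real benchmarks (like the Toronto study quoted next) report "nearly linear" rather than exactly linear scaling.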
  21. Cassandra Scalability: “In terms of scalability, there is a clear winner throughout our experiments. Cassandra achieves the highest throughput for the maximum number of nodes in all experiments with [nearly linear] increasing throughput from 1 to 12 nodes.” Middleware Systems Research Group, University of Toronto, Canada, et al., 38th International Conference on Very Large Data Bases
  22. Cassandra Scalability (workload mix: 25% reads, 25% scans, 50% writes). Middleware Systems Research Group, University of Toronto, Canada, et al., 38th International Conference on Very Large Data Bases
  23. Cassandra Scalability (client operations per second at RF=3)
  24. Requirements Of A BigData Platform: Performance, Scalability, ● Availability: performance and scalability are easy if you ignore availability and/or assume failures never happen
  25. Measuring Availability: availability is measured by the amount of downtime a system has over a given period of time
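That definition is usually expressed as a fraction: uptime over the period, often quoted in "nines". A quick worked version of the arithmetic (the "four nines" figure is a standard example, not a number from the talk):

```python
# Availability as the fraction of a period the system was actually up.
def availability(downtime_seconds, period_seconds):
    return 1.0 - downtime_seconds / period_seconds

year = 365 * 24 * 3600                      # seconds in a (non-leap) year
four_nines_budget = year * (1 - 0.9999)     # downtime allowed at 99.99%
# -> roughly 3,150 seconds, i.e. about 52.6 minutes of downtime per year
```

The point the next slides build toward is that this budget is consumed by *any* downtime, including failover windows, which is why fault tolerance rather than fast recovery is the bar for continuous availability.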
  26. Common Causes of Downtime In Large Scale Systems: ● Component Failure (disk) ● Machine Failure (NIC, CPU, power supply) ● Rack Failure (router, switch, UPS, AC) ● Site Failure (power grid, natural disaster, war, coup)
  27. The Common Theme In The Solutions? Replication
  28. Legacy Replication (diagram): can be HW or SW, replicated or not; this part is not necessarily a SPOF
  29. Legacy Replication (diagram): the master/leader is a SPOF
  30. Thoughts On Availability (and legacy replication): “High availability implies that a single fault will not bring down your system. Not ‘we’ll recover quickly.’” -- Ben Coverston, DataStax. “The biggest problem with failover is that you’re almost never using it until it really hurts. It’s like backups that you never test.” -- Rick Branson, Instagram
  31. Cassandra Replication (diagram spanning cloud provider 0, cloud provider 1, data center 0, and data center 1)
  32. More Thoughts On Availability (and legacy replication)
  33. Continuous Global Availability ... requires more than being able to recover from faults; it requires being able to tolerate the faults without downtime in the first place
  34. If you care about continuous global availability, then you must serve reads and writes from multiple geographical locations; there is no alternative
  35. Cassandra As A BigData Platform: Performance, Scalability, Availability
  36. And now back to our trends ...
  37. Public Cassandra Users, Early 2011
  38. Public Cassandra Users, Mid 2012
  39. DataStax Enterprise
  40. DataStax Enterprise
  41. Q? Matthew F. Dennis // @mdennis
  42. Thank You! Matthew F. Dennis // @mdennis
  43. A Quick Side Note ... Cassandra Replication Follows The Dynamo Model* (Read It!) *Cassandra is not a strict reimplementation of Dynamo
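The Dynamo model the slide points at places replicas with consistent hashing: keys and nodes hash onto a ring, and a key is stored on the next RF distinct nodes walking clockwise from its position. A toy sketch of that placement rule; the node names and RF=3 are illustrative, and, as the slide's footnote warns, real Cassandra is not a strict reimplementation (it adds vnodes and datacenter-aware placement strategies, among other things).

```python
import bisect
import hashlib

def token(value):
    """Hash a key or node name to a position on the ring."""
    return int(hashlib.md5(value.encode()).hexdigest(), 16)

def replicas(key, nodes, rf=3):
    """Return the rf nodes clockwise from the key's ring position."""
    ring = sorted((token(n), n) for n in nodes)
    tokens = [t for t, _ in ring]
    i = bisect.bisect_right(tokens, token(key)) % len(ring)
    return [ring[(i + j) % len(ring)][1] for j in range(rf)]

owners = replicas("user:42", ["node-a", "node-b", "node-c", "node-d"])
```

Because placement is a pure function of the key and the ring, any node can compute where a key lives, which is what lets Cassandra dispense with the master/leader SPOF discussed earlier in the deck.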