C* Summit 2013: How Not to Use Cassandra by Axel Liljencrantz
At Spotify, we see failure as an opportunity to learn. During the two years we've used Cassandra in our production environment, we have learned a lot. This session touches on some of the exciting design anti-patterns, performance killers and other opportunities to lose a finger that are at your disposal with Cassandra.

Presentation Transcript

    • June 19, 2013 #Cassandra13
      Axel Liljencrantz (liljencrantz@spotify.com)
      How not to use Cassandra
    • #Cassandra13 The Spotify backend
    • #Cassandra13 The Spotify backend
      • Around 3000 servers in 3 datacenters
      • Volumes
        o We have ~ 12 soccer fields of music
        o Streaming ~ 4 Wikipedias/second
        o ~ 24 000 000 active users
    • #Cassandra13 The Spotify backend
      • Specialized software powering Spotify
        o ~ 70 services
        o Mostly Python, some Java
        o Small, simple services responsible for a single task
    • #Cassandra13 Storage needs
      • Used to be a pure PostgreSQL shop
      • Postgres is awesome, but...
        o Poor cross-site replication support
        o Write master failure requires manual intervention
        o Sharding throws most relational advantages out the window
    • #Cassandra13 Cassandra @ Spotify
      • We started using Cassandra ~2 years ago
      • About a dozen services use it by now
      • Back then, there was little information about how to design efficient, scalable storage schemas for Cassandra
    • #Cassandra13 Cassandra @ Spotify
      • We started using Cassandra ~2 years ago
      • About a dozen services use it by now
      • Back then, there was little information about how to design efficient, scalable storage schemas for Cassandra
      • So we screwed up
      • A lot
    • #Cassandra13 How to misconfigure Cassandra
    • #Cassandra13 Read repair
      • Repairs data from outages during regular read operations
      • With RR, all reads request hash digests from all nodes
      • The result is still returned as soon as enough nodes have replied
      • If there is a mismatch, perform a repair
    • #Cassandra13 Read repair
      • Useful factoid: read repair is performed across all data centers
      • So in a multi-DC setup, all reads will result in requests being sent to every data center
      • We've made this mistake a bunch of times
      • New in 1.1: dclocal_read_repair
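      A minimal sketch of tuning this per column family in 1.1-era cassandra-cli (the column family name is made up, and the exact attribute names are my assumption, so check them against your version):

          update column family my_cf
              with read_repair_chance = 0.0
              and dclocal_read_repair_chance = 0.1;

      With dclocal_read_repair_chance the digest reads stay inside the coordinator's own data center, which is usually what you want in a multi-DC setup.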
    • #Cassandra13 Row cache
      • Cassandra can be configured to cache entire data rows in RAM
      • Intended as a memcache alternative
      • Let's enable it. What's the worst that could happen, right?
    • #Cassandra13 Row cache
      NO!
      • Only stores full rows
      • All cache misses are silently promoted to full row slices
      • All writes invalidate the entire row
      • Don't use it unless you understand all your use cases
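      If you have turned the row cache on and regret it, it can be switched off per column family; a cassandra-cli sketch (hypothetical column family, and the 'keys_only' value is my assumption about the 1.1-era attribute):

          update column family my_cf with caching = 'keys_only';

      The global knob is row_cache_size_in_mb in cassandra.yaml, which is safest left at 0 unless you understand the access pattern.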
    • #Cassandra13 Compression
      • Cassandra supports transparent compression of all data
      • The compression algorithm (Snappy) is super fast
      • So you can just enable it and everything will be better, right?
    • #Cassandra13 Compression
      • Cassandra supports transparent compression of all data
      • The compression algorithm (Snappy) is super fast
      • So you can just enable it and everything will be better, right?
      • NO!
      • Compression disables a bunch of fast paths, slowing down fast reads
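      Compression is a per-column-family setting, so enable it deliberately where it measurably helps rather than everywhere; a cassandra-cli sketch (hypothetical name, and the option keys are my assumption for the 1.1-era CLI):

          update column family my_cf
              with compression_options = {sstable_compression: SnappyCompressor, chunk_length_kb: 64};

      Benchmark reads before and after; as noted above, the win on disk space can cost you on the read fast path.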
    • #Cassandra13 How to misuse Cassandra
    • #Cassandra13 Performance worse over time
      • A freshly loaded Cassandra cluster is usually snappy
      • But when you keep writing to the same columns over a long time, performance goes down
      • We've seen clusters where reads touch a dozen SSTables on average
      • nodetool cfhistograms is your friend
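      For example, to see the per-read SSTable histogram for one column family (keyspace and column family names are placeholders):

          nodetool -h localhost cfhistograms my_keyspace my_cf

      The SSTables column of the output shows how many SSTables each read touched; if most reads sit above one or two, compaction is not keeping up with your write pattern.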
    • #Cassandra13 Performance worse over time
      • CASSANDRA-5514
      • Every SSTable stores the first/last column of the SSTable
      • Time series-like data is effectively partitioned
    • #Cassandra13 Few cross-continent clusters
      • Few cross-continent Cassandra users
      • We are kind of on our own when it comes to some problems
      • CASSANDRA-5148
      • Disable TCP nodelay
      • Reduced packet count by 20 %
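      The knob that came out of CASSANDRA-5148 is a cassandra.yaml setting for cross-DC traffic; a sketch, assuming the option name as it landed upstream:

          # cassandra.yaml: coalesce small cross-DC messages instead of sending each one immediately
          inter_dc_tcp_nodelay: false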
    • #Cassandra13 How not to upgrade Cassandra
    • #Cassandra13 How not to upgrade Cassandra
      • Very few total cluster outages
        o Clusters have been up and running since the early 0.7 days, been rolling upgraded, expanded, full hardware replacements etc.
      • Never lost any data!
        o No matter how spectacularly Cassandra fails, it has never written bad data
        o Immutable SSTables FTW
    • #Cassandra13 Upgrade from 0.7 to 0.8
      • This was the first big upgrade we did, 0.7.4 ⇾ 0.8.6
      • Everyone claimed rolling upgrade would work
        o It did not
      • One would expect 0.8.6 to have this fixed
      • Patched Cassandra and rolled it a day later
      • Takeaways:
        o ALWAYS try rolling upgrades in a testing environment
        o Don't believe what people on the Internet tell you
    • #Cassandra13 Upgrade 0.8 to 1.0
      • We tried upgrading in the test env, worked fine
      • Worked fine in production...
      • Except the last cluster
      • All data gone
    • #Cassandra13 Upgrade 0.8 to 1.0
      • We tried upgrading in the test env, worked fine
      • Worked fine in production...
      • Except the last cluster
      • All data gone
      • Many keys per SSTable ⇾ corrupt bloom filters
      • Made Cassandra think it didn't have any keys
      • Scrub data ⇾ fixed
      • Takeaway: ALWAYS test upgrades using production data
    • #Cassandra13 Upgrading 1.0 to 1.1
      • After the previous upgrades, we did all the tests with production data and everything worked fine...
      • Until we redid it in production and got reports of missing rows
      • Scrub ⇾ restart made them reappear
      • This was in December; we have not been able to reproduce it
      • PEBKAC?
      • Takeaway: ?
    • #Cassandra13 How not to deal with large clusters
    • #Cassandra13 Coordinator
      • The coordinator performs partitioning and passes the request on to the right nodes
      • Merges all responses
    • #Cassandra13 What happens if one node is slow?
    • #Cassandra13 What happens if one node is slow?
      Many reasons for temporary slowness:
      • Bad RAID battery
      • Sudden bursts of compaction/repair
      • Bursty load
      • Net hiccup
      • Major GC
      • Reality
    • #Cassandra13 What happens if one node is slow?
      • The coordinator has a request queue
      • If a node goes down completely, gossip will notice quickly and drop the node
      • But what happens if a node is just super slow?
    • #Cassandra13 What happens if one node is slow?
      • Gossip doesn't react quickly to slow nodes
      • The request queue for the coordinator on every node in the cluster fills up
      • And the entire cluster stops accepting requests
    • #Cassandra13 What happens if one node is slow?
      • Gossip doesn't react quickly to slow nodes
      • The request queue for the coordinator on every node in the cluster fills up
      • And the entire cluster stops accepting requests
      • No single point of failure?
    • #Cassandra13 What happens if one node is slow?
      • Solution: partitioner awareness in the client
      • Max 3 nodes go down
      • Available in Astyanax
    • #Cassandra13 How not to delete data
    • #Cassandra13 Deleting data
      How is data deleted?
      • SSTables are immutable, so we can't remove the data
      • Cassandra creates tombstones for deleted data
      • Tombstones are versioned the same way as any other write
    • #Cassandra13 How not to delete data
      Do tombstones ever go away?
      • During compactions, tombstones can get merged into SSTables that hold the original data, making the tombstones redundant
      • Once a tombstone is the only value for a specific column, the tombstone can go away
      • Still need grace time to handle node downtime
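      That grace time is the per-column-family gc_grace setting (gc_grace_seconds in CQL), which defaults to ten days; a cassandra-cli sketch with a hypothetical column family, the CLI attribute name being my assumption:

          update column family my_cf with gc_grace = 864000;

      Lowering it lets tombstones be purged sooner, but it must stay longer than the longest node outage you are prepared to repair around, or deletes can be forgotten and data can come back.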
    • #Cassandra13 How not to delete data
      • Tombstones can only be deleted once all non-tombstone values have been deleted
      • If you're using SizeTiered compaction, old rows will rarely get deleted
    • #Cassandra13 How not to delete data
      • Tombstones are a problem even when using levelled compaction
      • In theory, 90 % of all rows should live in a single SSTable
      • In production, we've found that 20-50 % of all reads hit more than one SSTable
      • Frequently updated columns will exist in many levels, causing tombstones to stick around
    • #Cassandra13 How not to delete data
      • Deletions are messy
      • Unless you perform major compactions, tombstones will rarely get deleted from «popular» rows
      • Avoid schemas that delete data!
    • #Cassandra13 TTL:ed data
      • Cassandra supports TTL:ed data
      • Once TTL:ed data expires, it should just be compacted away, right?
      • We know we don't need the data anymore, no need for a tombstone, so it should be fast, right?
    • #Cassandra13 TTL:ed data
      • Cassandra supports TTL:ed data
      • Once TTL:ed data expires, it should just be compacted away, right?
      • We know we don't need the data anymore, no need for a tombstone, so it should be fast, right?
      • Noooooo...
      • (Overwritten data could theoretically bounce back)
    • #Cassandra13 TTL:ed data
      • CASSANDRA-5228
      • Drop entire SSTables when all columns are expired
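      For reference, writing TTL:ed data looks roughly like this in cassandra-cli (column family, key, column and the exact "with ttl" syntax are my assumptions for the 1.1-era CLI):

          set my_cf[utf8('some-key')][utf8('cached-field')] = utf8('value') with ttl = 86400;

      The point of CASSANDRA-5228 is that once every column in an SSTable has expired, the whole file can simply be dropped instead of being churned through compaction like ordinary deletes.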
    • #Cassandra13 The Playlist service
      Our most complex service
      • ~1 billion playlists
      • 40 000 reads per second
      • 22 TB of compressed data
    • #Cassandra13 The Playlist service
      Our old playlist system had many problems:
      • Stored data across hundreds of millions of files, making the backup process really slow
      • Home-brewed replication model that didn't work very well
      • Frequent downtimes, huge scalability problems
    • #Cassandra13 The Playlist service
      Our old playlist system had many problems:
      • Stored data across hundreds of millions of files, making the backup process really slow
      • Home-brewed replication model that didn't work very well
      • Frequent downtimes, huge scalability problems
      • Perfect test case for Cassandra!
    • #Cassandra13 Playlist data model
      • Every playlist is a revisioned object
      • Think of it like a distributed versioning system
      • Allows concurrent modification on multiple offlined clients
      • We even have an automatic merge conflict resolver that works really well!
      • That's actually a really useful feature
    • #Cassandra13 Playlist data model
      • Every playlist is a revisioned object
      • Think of it like a distributed versioning system
      • Allows concurrent modification on multiple offlined clients
      • We even have an automatic merge conflict resolver that works really well!
      • "That's actually a really useful feature", said no one ever
    • #Cassandra13 Playlist data model
      • Sequence of changes
      • The changes are the authoritative data
      • Everything else is optimization
      • Cassandra is pretty neat for storing this kind of stuff
      • Can use consistency level ONE safely
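      A hypothetical sketch of such a change-log schema in cassandra-cli, purely as an illustration and not Spotify's actual schema: one wide row per playlist, column names are revision numbers, column values are serialized changes (the long() literal syntax is my assumption):

          create column family playlist_changes
              with comparator = LongType
              and default_validation_class = BytesType;

          set playlist_changes[utf8('spotify:user:alice:playlist:1')][long(42)] = utf8('{"op": "add", "tracks": ["..."]}');

      Each revision is written exactly once and never modified, which is why reading the change log at consistency level ONE is safe: a read may briefly miss the newest change, but it can never return a half-written one.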
    • #Cassandra13 Tombstone hell
      Noticed that HEAD requests took several seconds for some lists
      Easy to reproduce in cassandra-cli:
      • get playlist_head[utf8(spotify:user...)];
      • 1-15 seconds latency - should be < 0.1 s
      Copied the head SSTables to a development machine for investigation
      The Cassandra tool sstable2json showed that the row contained 600 000 tombstones!
    • #Cassandra13 Tombstone hell
      We expected tombstones would be deleted after 30 days
      • Nope, all tombstones from the last 1.5 years were there
      Revelation: rows existing in 4+ SSTables never have tombstones deleted during minor compactions
      • Frequently updated lists exist in nearly all SSTables
      Solution: major compaction (CF size cut in half)
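      The investigation amounts to roughly the following (file name is hypothetical, and the grep pattern assumes the 1.x sstable2json output where a trailing "d" flag marks a deleted column):

          sstable2json playlist_head-hd-1234-Data.db > dump.json
          grep -o '"d"]' dump.json | wc -l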
    • #Cassandra13 Zombie tombstones
      • Ran major compaction manually on all nodes over a few days
      • All seemed well...
      • But a week later, the same lists took several seconds again‽‽‽
    • #Cassandra13 Repair vs major compactions
      A repair between the major compactions "resurrected" the tombstones :(
      New solution:
      • Repairs during Monday-Friday
      • Major compaction Saturday-Sunday
      A (by now) well-known Cassandra anti-pattern: don't use Cassandra to store queues
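      Operationally such a schedule can be as simple as two cron entries per node, staggered across the ring; a sketch with hypothetical keyspace and column family names (-pr restricts repair to the node's primary range):

          # anti-entropy repair on weekday nights
          0 2 * * 1-5  nodetool repair -pr playlist
          # major compaction on Saturday night, after the week's repairs are done
          0 2 * * 6    nodetool compact playlist playlist_head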
    • #Cassandra13 Cassandra counters
      • There are lots of places in the Spotify UI where we count things
      • # of followers of a playlist
      • # of followers of an artist
      • # of times a song has been played
      • Cassandra has a feature called distributed counters that sounds suitable
      • Is this awesome?
    • #Cassandra13 Cassandra counters
      • They've actually worked reasonably well for us.
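      Counters live in their own column families and are updated with increments instead of read-modify-write; a cassandra-cli sketch with hypothetical names:

          create column family follower_counts with default_validation_class = CounterColumnType;
          incr follower_counts[utf8('spotify:user:alice:playlist:1')][utf8('followers')] by 1;
          get follower_counts[utf8('spotify:user:alice:playlist:1')];

      The usual caveat is that increments are not idempotent, so a timed-out increment cannot be retried without risking double counting.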
    • #Cassandra13 Lessons
      • There are still various esoteric problems with large-scale Cassandra installations
      • Debugging them is interesting
      • If you agree with the above statements, you should totally come work with us
    • #Cassandra13 Lessons
      • Cassandra read performance is heavily dependent on the temporal patterns of your writes
      • Cassandra is initially snappy, but various write patterns make read performance slowly decrease
      • Super hard to perform realistic benchmarks
    • #Cassandra13 Lessons
      • Avoid repeatedly writing data to the same row over very long spans of time
      • If you're working at scale, you'll need to know how Cassandra works under the hood
      • nodetool cfhistograms is your friend
    • June 19, 2013 #Cassandra13 Questions?