CASSANDRA
AT INSTAGRAM
Rick Branson, Infrastructure Engineer
@rbranson
SF Cassandra Meetup
August 29, 2013
Disqus HQ
September 2012
Redis fillin' up.
What sucks?
THE OBVIOUS:
Memory is expensive.
LESS OBVIOUS:
In-memory "degrades" poorly
•Flat namespace. What's in there?
•Heap fragmentation
•Single-threaded
•BGSAVE
•Boils down to centralized logging
•VERY high skew of writes to reads (1,000:1)
•Ever growing data set
•Durability highly valued
•Dumb to store it in RAM, basically...
The Data
• Cassandra 1.1
• 3 EC2 m1.xlarge (2-core, 15GB RAM)
• RAIDed ephemerals (1.6TB of SATA)
• RF=3
• 6GB Heap, 200MB NewSize
• HSHA
The Setup
It worked. Mostly.
The horrible/cool thing about Chef...
commit a1489a34d2aa69316b010146ab5254895f7b9141
Author: Rick Branson
Date: Thu Oct 18 20:05:16 2012 -0700
Follow the rules for Cassandra listen_address so I don't
burn a whole day fixing my retarded mistake
commit 41c96f3243a902dd6af4ea29ef6097351a16494a
Author: Rick Branson
Date: Tue Oct 30 17:12:00 2012 -0700
Use 256k JVM stack size for C* -- fixes a bug that got
integrated with 1.1.6 packaging + Java 1.6.0_u34+
November 2012
Doubled to 6 nodes.
18,000 connections. Spread those more evenly.
commit 3f2e4f2e5da6fe99d7f3fc13c0da09b464b3a9e0
Author: Rick Branson
Date: Wed Nov 21 09:50:21 2012 -0800
Drop key cache size on C*UA cluster: was causing heap
issues, and apparently 1GB is _WAY_ outside of the normal
range of operation for nodes of this size.
commit 5926aa5ce69d48e5f2bb7c0d0e86b411645bc786
Author: Rick Branson
Date: Mon Dec 24 12:41:13 2012 -0800
Lower memtable sizes on C* UA cluster to make more room
for compression metadata / bloom filters on heap
1.2.1.
It went well.
Well... until...
commit 84982635d5c807840d625c22a8bd4407c1879eba
Author: Rick Branson
Date: Thu Jan 31 09:43:56 2013 -0800
Switch Cassandra from tokens to vnodes
commit e990acc5dc69468c8a96a848695fca56e79f8b83
Author: Rick Branson
Date: Sun Feb 10 20:26:32 2013 -0800
We aren't ready for vnodes yet guys
TAKEAWAY
Let stupid/enterprising, experienced operators who
will submit patches take the first few bullets on
brand-new major versions.
commit acb02daea57dca889c2aa45963754a271fa51566
Author: Rick Branson
Date: Sun Feb 10 20:36:34 2013 -0800
Doubled C* cluster
commit cc13a4c15ee0051bb7c4e3b13bd6ae56301ac670
Author: Rick Branson
Date: Thu Mar 14 16:23:18 2013 -0700
Subtract token from C*ua7 to replace the node
pycassa exceptions (last 6 months)
•3.4TB
•vnode migration still pending
TAKEAWAY
Adopt a technology by understanding what it's
best at and letting it do that first, then expand...
•Sharded master/slave Redis
•32x68GB (m2.4xlarge)
•Space (memory) bound
•Resharding sucks
•Failover is manual, wakes us up at night
user_id: [
  activity,
  activity,
  ...
]
Thrift Serialized Activity
Bound the Size
user_id: [
activity1,
activity2,
...
activity100,
activity101,
...
]
LTRIM <user_id> 0 99
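For illustration, a minimal redis-py sketch of this write path (the client setup and helper name are assumptions, not Instagram's actual code):

import redis

r = redis.StrictRedis()

def push_activity(user_id, serialized_activity, cap=100):
    # Prepend the newest activity, then trim the list back to the
    # newest `cap` entries, in one pipelined round trip.
    pipe = r.pipeline()
    pipe.lpush(user_id, serialized_activity)
    pipe.ltrim(user_id, 0, cap - 1)
    pipe.execute()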
Undo
user_id: [
activity1,
activity2,
activity3,
...
]
LREM <user_id> 0 <activity2>
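And the undo path, sketched the same way (reusing the `r` client from the sketch above; count=0 tells LREM to remove every occurrence of the value):

def undo_activity(user_id, serialized_activity):
    # Works only because an activity serializes to identical bytes
    # every time, so LREM can match it by value.
    r.lrem(user_id, 0, serialized_activity)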
C* data model
user_id:
  TimeUUID1    TimeUUID2    ...  TimeUUID101
  <activity>   <activity>   ...  <activity>
Bound the Size
user_id:
  TimeUUID1    TimeUUID2    ...  TimeUUID101
  <activity>   <activity>   ...  <activity>
get(<user_id>)
delete(<user_id>,
columns=[<TimeUUID101>,
<TimeUUID102>,
<TimeUUID103>,
...])
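Roughly, in pycassa terms (keyspace and column family names here are made up; this is the naive read-then-delete trim, whose downside is the point of the next slides):

import pycassa

pool = pycassa.ConnectionPool('feeds', server_list=['127.0.0.1:9160'])
cf = pycassa.ColumnFamily(pool, 'UserActivities')

def trim_naive(user_id, cap=100):
    # Assumes a reversed comparator so the newest columns come first.
    row = cf.get(user_id, column_count=1000)
    excess = list(row.keys())[cap:]
    if excess:
        # One tombstone per deleted column -- this is what bites later.
        cf.remove(user_id, columns=excess)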
The great destroyer of systems shows up.
Tombstones abound.
user_id:
  column:    TimeUUID1    TimeUUID2    ...  TimeUUID2
  value:     <activity>   <activity>   ...  [tombstone]
  timestamp: timestamp1   timestamp2   ...  timestamp2
Cassandra internally stores deletes as
tombstones, which mark data for a given
column as deleted at-or-before a timestamp.
Column Delete
The tombstone timestamp is >= the live
column's timestamp, so the column will be
hidden from queries and compacted away.
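The shadowing rule itself is trivial; a toy restatement in Python (not Cassandra source):

def column_is_visible(column_ts, tombstone_ts):
    # A column survives only if it was written strictly after
    # the tombstone that covers its name.
    return column_ts > tombstone_ts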
user_id:
  column:    TimeUUID1    TimeUUID2    ...  TimeUUID101
  value:     <activity>   <activity>   ...  <activity>
  timestamp: timestamp1   timestamp2   ...  timestamp101
TimeUUID = timestamp
To avoid tombstones, exploit that the
timestamp embedded in our TimeUUID
(ordering) is the same as the column
timestamp.
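A version-1 (Time)UUID encodes 100ns ticks since 1582-10-15, while these column timestamps are microseconds since the Unix epoch, so the conversion is fixed arithmetic (pycassa.util ships a similar helper):

import uuid

GREGORIAN_TO_UNIX = 0x01B21DD213814000  # 100ns ticks from 1582 to 1970

def timeuuid_to_column_ts(u):
    # uuid.UUID.time is 100ns ticks since the Gregorian epoch;
    # divide by 10 for microseconds since the Unix epoch.
    return (u.time - GREGORIAN_TO_UNIX) // 10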
user_id:
  column:    TimeUUID1    TimeUUID2    ...  TimeUUID101
  value:     <activity>   <activity>   ...  <activity>
  timestamp: timestamp1   timestamp2   ...  timestamp101
delete(<user_id>,
timestamp=<timestamp101>)
Row Delete
Cassandra can also store row tombstones,
which delete all data from a row at-or-before
the timestamp provided.
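Putting the two together, a hedged sketch of the tombstone-free trim (reusing cf and timeuuid_to_column_ts from the sketches above, and assuming pycassa's remove() timestamp passthrough):

def trim_with_row_delete(user_id, cap=100):
    # Fetch one column past the cap (reversed comparator: newest first).
    row = cf.get(user_id, column_count=cap + 1)
    if len(row) > cap:
        cutoff = list(row.keys())[cap]  # the 101st column's TimeUUID
        # A single row tombstone deletes everything written at or
        # before this timestamp -- no per-column tombstones.
        cf.remove(user_id, timestamp=timeuuid_to_column_ts(cutoff))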
Optimizes Reads
SSTable     SSTable     SSTable     SSTable
max_ts=100  max_ts=200  max_ts=300  max_ts=400

SSTable     SSTable     SSTable     SSTable
max_ts=500  max_ts=600  max_ts=700  max_ts=800

One SSTable contains a row tombstone with timestamp 350.
SSTables whose max timestamp is at or below 350 are safely
ignored using in-memory metadata.
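As a toy illustration of that check (the real logic lives in Cassandra's read path, not client code):

def sstables_to_read(max_timestamps, row_tombstone_ts=350):
    # Any SSTable whose newest data is at or below the row tombstone
    # holds only shadowed data for this row and is skipped.
    return [ts for ts in max_timestamps if ts > row_tombstone_ts]

print(sstables_to_read([100, 200, 300, 400, 500, 600, 700, 800]))
# -> [400, 500, 600, 700, 800]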
~10% of actions are undos.
Undo Support
user_id:
  TimeUUID1    TimeUUID2    ...  TimeUUID101
  <activity>   <activity>   ...  <activity>
get(<user_id>)
delete(<user_id>,
columns=[<TimeUUID2>])
get(<user_id>)
delete(<user_id>,
columns=[<TimeUUID2>])
Simple Race Condition
The state of the row may have changed
between these two operations.
💩
Like
Diverging Replicas

Writer: insert B
  Replica [A, B]  OK
  Replica [A, B]  OK
  Replica [A]     FAIL
Undo Like
Diverging Replicas

Writer: read -> [A]
  Replica [A, B]
  Replica [A]     <- read served here
  Replica [A, B]

One replica is missing B, so if a read is required to find B
before deleting it, it's going to fail.
SuperColumn = Old/Busted
AntiColumn = New/Hotness
user_id:
  (0, <TimeUUID>)  (1, <TimeUUID>)  (1, <TimeUUID>)
  anti-column      activity         activity
"Anti-Column"
Borrowing from the idea of Cassandra's by-name
tombstones, an anti-column contains an MD5 hash of the
activity "value" it is marking as deleted.
user_id:
  (0, <TimeUUID>)  (1, <TimeUUID>)  (1, <TimeUUID>)
  anti-column      activity         activity
Composite Column
The first component is zero for anti-columns, splitting
the row into two independent lists and ensuring the
anti-columns always appear at the head.
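A sketch of the undo write under this model (assuming a CompositeType(IntegerType, TimeUUIDType) comparator; keyspace and column family names are illustrative):

import hashlib
import uuid
import pycassa

pool = pycassa.ConnectionPool('feeds', server_list=['127.0.0.1:9160'])
cf = pycassa.ColumnFamily(pool, 'InboxActivitiesByUserID')

def undo(user_id, serialized_activity):
    # No read needed: write an anti-column whose value is the MD5
    # of the activity being marked deleted. Component 0 sorts it
    # ahead of all (1, TimeUUID) activity columns.
    digest = hashlib.md5(serialized_activity).digest()
    cf.insert(user_id, {(0, uuid.uuid1()): digest})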
Like
Diverging Replicas: Solved

Writer: insert B
  Replica [A, B]  OK
  Replica [A, B]  OK
  Replica [A]     FAIL
Undo Like
Diverging Replicas: Solved

Writer: insert C (anti-column)
  Replica [A, B, C]  OK
  Replica [A, C]     OK
  Replica [A, B, C]  OK

Instead of read-before-write, an anti-column is inserted
to mark the activity as deleted.
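The read side then merges the two lists client-side; a minimal sketch (reusing cf from the sketch above):

import hashlib

def get_live_activities(user_id, cap=100):
    # Anti-columns sort first, so one slice returns both lists.
    row = cf.get(user_id, column_count=cap * 2)
    dead = {value for (kind, _), value in row.items() if kind == 0}
    return [value for (kind, _), value in row.items()
            if kind == 1 and hashlib.md5(value).digest() not in dead]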
TAKEAWAY
Read-before-write is a smell. Try to model data as
a log of user "intent" rather than manhandling the
data into place.
•Keep 30% "buffer" for trims.
•Undo without read. 👍
•Large lists suck for this. 👎
•CASSANDRA-5527
Built in two days.
Experience paid off.
Reusability is key to rapid rollout.
Great documentation eases concerns.
•C* 1.2.3
•vnodes, LeveledCompactionStrategy
•12 hi1.4xlarge (8-core, 60GB, 2T SSD)
•3 AZs, RF=3, CL W=TWO R=ONE
•8G heap, 800M NewSize
Initial Setup
1. Dial up Double Writes
2. Test with "Shadow" Reads
3. Dial up "Real" Reads
Rollout
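A sketch of that dial pattern (every store and hook here is a hypothetical stand-in for the real Redis/Cassandra paths and metrics plumbing):

import random

DOUBLE_WRITE_PCT = 100  # step 1: dial writes to C* up to 100%
SHADOW_READ_PCT = 5     # step 2: compare a sample of reads
REAL_READ_PCT = 0       # step 3: dial up once shadows look clean

def write_activity(user_id, activity, redis_store, c_store):
    redis_store.write(user_id, activity)
    if random.uniform(0, 100) < DOUBLE_WRITE_PCT:
        c_store.write(user_id, activity)

def read_activities(user_id, redis_store, c_store, log_mismatch):
    if random.uniform(0, 100) < REAL_READ_PCT:
        return c_store.read(user_id)
    result = redis_store.read(user_id)
    if random.uniform(0, 100) < SHADOW_READ_PCT:
        # Shadow read: hit C* too, log any mismatch, serve Redis.
        if c_store.read(user_id) != result:
            log_mismatch(user_id)
    return result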
commit 1c3d99a9e337f9383b093009dba074b8ade20768
Author: Rick Branson
Date: Mon May 6 14:58:54 2013 -0700
Bump C* inbox heap size 8G -> 10G, seeing heap pressure
Bootstrapping sucked because compacting
10,000 SSTables takes forever.
sstable_size_in_mb: 5 => 25
Monitor Consistency
$ nodetool netstats
Mode: NORMAL
Not sending any streams.
Not receiving any streams.
Read Repair Statistics:
Attempted: 3192520
Mismatch (Blocking): 0
Mismatch (Background): 11584
Pool Name   Active  Pending  Completed
Commands    n/a     0        1837765727
Responses   n/a     1        1750784545
UPDATE COLUMN FAMILY
InboxActivitiesByUserID
WITH read_repair_chance = 0.01;
99.63% consistent
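That figure appears to fall out of the netstats counters above: 1 - 11584 background mismatches / 3192520 attempted read repairs ≈ 0.9964, i.e. roughly 99.6% of sampled reads needed no repair.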
SSTable Size (again)
Saw lots of GC pressure related to buffer
garbage. Eventually the Cassandra project landed
on a new default in 1.2.9+ (160MB).
sstable_size_in_mb: 25 => 128
Fetch & Deserialize Time (measured from app)
Mean vs P90 (ms), trough-to-peak
Space used (live): 180114509324
Space used (total): 180444164726
Memtable Columns Count: 2315159
Memtable Data Size: 112197632
Memtable Switch Count: 1312
Read Count: 316192445
Read Latency: 1.982 ms.
Write Count: 1581610760
Write Latency: 0.031 ms.
Pending Tasks: 0
Bloom Filter False Positives: 481617
Bloom Filter False Ratio: 0.08558
Bloom Filter Space Used: 54723960
Compacted row minimum size: 25
Compacted row maximum size: 545791
Compacted row mean size: 3020
20K 200-column slice reads/sec
30K 1-column mutations/sec
30% CPU utilization
48K clients
Peak Stats
Exciting Future Things
•Python Native Protocol Driver
•Read CPU Consumption Work
•Mass CQL Adoption
•Triggers
•CAS (for limited use cases)
Next 6 Months...
•Node repair visibility & monitoring
•Objects & Associations Storage API on
C* + memcache
•Migrate more from Redis
•New major use case
•Cassandra 2.0?
We're hiring!