Cassandra in EC2 at Talkbits

• NetworkTopologyStrategy + EC2MultiRegionSnitch
• 1 DC, 3 racks (availability zones in a single region), N nodes per rack; 3N nodes total.
• Data is stored in 3 local copies, 1 per zone.
• Writes use LOCAL_QUORUM; reads use consistency level 1 or 2.
• m1.large nodes (2 cores, 4 CU, 7.5 GB RAM).
• Transaction log and data files both live on a RAID0-ed ephemeral drive (2 drives in the array). Works for SSDs or EC2 ephemeral disks only!

Other typical setup options for EC2:
• m1.xlarge (16 GB) / m2.4xlarge (64 GB) / hi1.4xlarge (SSD) nodes
• EBS-backed data volumes (not recommended; use for development only).
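A minimal sketch of the keyspace definition this topology implies. The helper name `create_keyspace_cql` and the data-center name `us-east` are illustrative assumptions; with EC2MultiRegionSnitch the DC name is derived from the AWS region, and the replication factor of 3 spreads one replica per availability zone (rack).

```python
def create_keyspace_cql(keyspace, dc_replication):
    # Hypothetical helper: builds a CREATE KEYSPACE statement using
    # NetworkTopologyStrategy with a per-data-center replication factor.
    opts = ", ".join(f"'{dc}': {rf}" for dc, rf in dc_replication.items())
    return (f"CREATE KEYSPACE {keyspace} WITH replication = "
            f"{{'class': 'NetworkTopologyStrategy', {opts}}};")

# One DC (the AWS region), 3 replicas -> one copy per availability zone.
print(create_keyspace_cql("talkbits", {"us-east": 3}))
```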
Cassandra consistency options

Definitions
• N, R, W settings come from Amazon Dynamo.
• N – replication factor. Set per keyspace at keyspace creation.
• Quorum: N / 2 + 1 (rounded down).

R/W consistency options:
• ANY, ONE, TWO, THREE, QUORUM, LOCAL_QUORUM & EACH_QUORUM (multi-DC), ALL.
• Set per query.
Cassandra consistency semantics

W + R > N
• Ensures strong consistency: a read always reflects the most recent write.

R = W = [LOCAL_]QUORUM
• Strong consistency. See the quorum definition and formula above.

W + R <= N
• Eventual consistency.

W = 1
• Good for fire-and-forget writes: logs, traces, metrics, page views, etc.
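A minimal sketch of the W + R > N rule, with an illustrative helper name (`is_strongly_consistent`): when read and write replica counts sum to more than N, the two replica sets must overlap in at least one node, so a read always sees the latest acknowledged write.

```python
def is_strongly_consistent(n, r, w):
    # W + R > N guarantees the read set and write set share a replica.
    return r + w > n

n = 3
q = n // 2 + 1                             # quorum = 2 for RF=3
print(is_strongly_consistent(n, q, q))     # True: QUORUM reads + QUORUM writes
print(is_strongly_consistent(n, 1, 1))     # False: eventual consistency only
```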
Cassandra backups to S3

Full backups
• Periodic snapshots (daily, weekly).
• Remove from local disk after upload to S3 to prevent disk overflow.

Incremental backups
• SSTables are compressed and copied to S3.
• Happens on IN_MOVED_TO and IN_CLOSE_WRITE inotify events.
• Don't turn on with leveled compaction (huge network traffic to S3).

Continuous backups
• Compress and copy the transaction log to S3 at short intervals (for example 5, 30, or 60 minutes).
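A minimal sketch of the incremental-backup trigger, under stated assumptions: the function name `should_upload`, the `.db` suffix check, and the `tmp` filter are illustrative; real tools watch the data directory via inotify and stream the matching SSTable files to S3.

```python
# Only these inotify events signal a finished SSTable (per the slide above).
BACKUP_EVENTS = {"IN_MOVED_TO", "IN_CLOSE_WRITE"}

def should_upload(event, filename):
    # Assumption: upload finished SSTable component files (.db) only,
    # skipping in-progress temporary files.
    return (event in BACKUP_EVENTS
            and filename.endswith(".db")
            and "tmp" not in filename)

print(should_upload("IN_MOVED_TO", "ks-cf-ic-5-Data.db"))      # True
print(should_upload("IN_CLOSE_WRITE", "ks-cf-tmp-6-Data.db"))  # False
```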
Cassandra backups to S3 – tools

tablesnap from SimpleGeo
https://github.com/Instagram/tablesnap (most up-to-date fork)
• Three simple Python scripts make up the whole tool (tablesnap, tableslurp, tablechop). They upload SSTables to S3 in real time, restore them, and remove old backup uploads from S3.

Priam from Netflix
https://github.com/Netflix/Priam
• A full-blown web application. Requires a servlet container to run and depends on the Amazon SimpleDB service for distributed token management.
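The retention side of such tooling (what tablechop does) can be sketched as a pure function; the name `expired_backups` and the (key, age-in-days) input shape are illustrative assumptions, not tablechop's actual interface:

```python
def expired_backups(uploads, retention_days):
    # Given (s3_key, age_in_days) pairs for uploaded SSTables, return the
    # keys that have aged past the retention window and can be deleted.
    return [key for key, age in uploads if age > retention_days]

uploads = [("ks-cf-1-Data.db", 45), ("ks-cf-2-Data.db", 10)]
print(expired_backups(uploads, 30))  # ['ks-cf-1-Data.db']
```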