Active Active - C* Behind the Scenes at Netflix

As more and more businesses move from enterprise IT solutions to web-scale cloud solutions to meet growing customer needs, they must be innovative and find ways for their applications and infrastructure to scale rapidly and remain highly available.

High availability is an essential requirement for any online business. Architecting around failures, expecting infrastructure to fail, and staying highly available even when it does, is the key to success. One such effort here at Netflix was the Active-Active implementation, which gave us region resiliency. This presentation gives a brief overview of the Active-Active implementation and how it leveraged Cassandra’s architecture in the backend to achieve that goal. It covers our journey through Active-Active from Cassandra’s perspective, the data validation we did to prove the backend would work without impacting the customer experience, the various problems we ran into (such as long repair times and gc_grace settings), our lessons learned, and what we would do differently next time around.


Active Active - C* Behind the Scenes at Netflix

  1. 1. ABOUT NETFLIX
  2. 2. NETFLIX
  3. 3. ACTIVE - ACTIVE
  4. 4. WHAT IS ACTIVE-ACTIVE Also called dual active, it describes a network of independent processing nodes, each with access to a replicated database, so that every node can serve the single application. In an active-active system all requests are load balanced across all available processing capacity; when a failure occurs on a node, another node in the network takes its place.
  5. 5. DOES AN INSTANCE FAIL? • It can, plan for it • Bad code / configuration pushes • Latent issues • Hardware failure • Test with Chaos Monkey
  6. 6. DOES A ZONE FAIL? • Rarely, but happened before • Routing issues • DC-specific issues • App-specific issues within a zone • Test with Chaos Gorilla
  7. 7. DOES A REGION FAIL? • Full region – unlikely, very rare • Individual services can fail region-wide • Most likely, a region-wide configuration issue • Test with Chaos Kong
  8. 8. EVERYTHING FAILS… EVENTUALLY • Keep your services running by embracing isolation and redundancy • Construct a highly agile and highly available service from ephemeral and assumed broken components
  9. 9. ISOLATION • Changes in one region should not affect others • Regional outage should not affect others • Network partitioning between regions should not affect functionality / operations
  10. 10. REDUNDANCY • Make more than one (of pretty much everything) • Specifically, distribute services across Availability Zones and regions
  11. 11. HISTORY: X-MAS EVE 2012 • Netflix multi-hour outage • US-East-1 regional Elastic Load Balancing issue • “...data was deleted by a maintenance process that was inadvertently run against the production ELB state data”
  12. 12. ACTIVE-ACTIVE ARCHITECTURE
  13. 13. THE PROCESS
  14. 14. IDENTIFYING CLUSTERS FOR AA
  15. 15. SNITCH CHANGES • EC2Snitch (uses private IPs) • EC2MultiRegionSnitch (uses public IPs) • See the cassandra.yaml sketch after this list
  16. 16. PRIAM.MULTIREGION.ENABLE =TRUE tcp 7101-7101 [ ] [10.190.21.36/32, 10.232.200.17/32, 10.33.573.26/32, 10.20.151.165/32, 10.226.99.46/32, 10.244.143.193/32] tcp 7103-7103 [ ] [54.196.221.136/32, 54.202.200.217/32, 54.203.57.226/32, 54.205.151.165/32, 54.226.99.46/32, 54.244.143.193/32]
  17. 17. SPIN UP NODES IN NEW REGION us-east-1 us-west-2 APP
  18. 18. UPDATE KEYSPACE Update keyspace <keyspace> with placement_strategy = 'NetworkTopologyStrategy' and strategy_options = {us-east : 3, us-west-2 : 3}; (us-east : 3 is the existing region and replication factor; us-west-2 : 3 is the new region and replication factor)
  19. 19. REBUILD NEW REGION Run – nodetool rebuild us-east-1 on all us-west-2 nodes
  20. 20. RUN NODETOOL REPAIR (rebuild and repair commands are sketched after this list)
  21. 21. VALIDATION
  22. 22. BENCHMARKING GLOBAL CASSANDRA • Write-intensive test of cross-region replication capacity • 16 x hi1.4xlarge SSD nodes per zone = 96 total, 192 TB of SSD in six locations, up and running Cassandra in 20 minutes • Cassandra replicas in zones A, B, and C of US-East-1 (Virginia) and US-West-2 (Oregon); test load on one side, validation load on the other • 1 million writes at CL.ONE (wait for one replica to ack) • 1 million reads after 500 ms at CL.ONE with no data loss • Inter-region traffic up to 9 Gbit/s at 83 ms • 18 TB backups from S3
  23. 23. TEST FOR THUNDERING HERD
  24. 24. TEST FOR RETRIES FAILURE RETRY
  25. 25. KEY METRICS USED • 99th / 95th Read Latency (Client & C*) • Dropped Metrics on C* • Exceptions on C* • Heap Usage on C* • CPU Usage (Client & C*) • Threads Pending on C*
  26. 26. CONFIGURATION FOR TEST • 24-node C* cluster on SSDs • 220 Client Instances • 70+ JMeter Instances
  27. 27. C* IOPS
  28. 28. TOTAL READ IOPS TOTAL WRITE IOPS
  29. 29. 95th LATENCY 99th LATENCY
  30. 30. CHECK FOR CEILING
  31. 31. NETWORK PARTITION us-east-1 us-west-2
  32. 32. TAKEAWAYS
  33. 33. REPAIRS AFTER EXTENSION ARE PAINFUL !!
  34. 34. TIME TO REPAIR DEPENDS ON • Number of regions • Number of replicas • Data size • Amount of entropy
  35. 35. ADJUST GC_GRACE AFTER EXTENSION • Column family setting • Defined in seconds • Default 10 days • Tweak gc_grace to accommodate the time taken to repair • BEWARE of deleted columns • See the gc_grace sketch after this list
  36. 36. RUNBOOK
  37. 37. PLAN FOR CAPACITY
  38. 38. CONSISTENCY LEVEL • Check the client for the consistency level setting • In a multi-region cluster, QUORUM <> LOCAL_QUORUM • Recommended consistency levels: LOCAL_ONE (CASSANDRA-6202) for reads and LOCAL_QUORUM for writes • For region resiliency, avoid ALL or QUORUM calls • See the cqlsh sketch after this list
  39. 39. HOW DO WE KNOW IT WORKS? CREATE CHAOS!!
  40. 40. Benchmark… Time consuming, but worth it!
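
A minimal sketch of the snitch change on slide 15, assuming a hand-edited cassandra.yaml; in a Priam-managed deployment like ours the property would be set through Priam's configuration instead:

  # cassandra.yaml before the extension: single region, gossip over private IPs
  endpoint_snitch: Ec2Snitch

  # cassandra.yaml after the extension: multi-region, public IPs for cross-region traffic
  endpoint_snitch: Ec2MultiRegionSnitch

Ec2MultiRegionSnitch broadcasts the node's public IP, so the security groups must allow the storage ports across regions, which is what the priam.multiregion.enable rules on slide 16 take care of.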
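
A sketch of the rebuild and repair steps from slides 19 and 20; <keyspace> is a placeholder, and the source datacenter name should be confirmed with nodetool ring or nodetool status before running the rebuild:

  # On every node in the new us-west-2 region, stream existing data from the source region
  nodetool rebuild us-east-1

  # Then run a rolling repair, one node at a time; -pr limits each run to the node's primary ranges
  nodetool repair -pr <keyspace>

The repair afterwards reconciles anything the rebuild or live replication may have missed.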
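
A sketch of the gc_grace change from slide 35, in the same cassandra-cli style as the keyspace update on slide 18; the column family name and the value (20 days here) are illustrative, and the real value should comfortably exceed your worst-case repair time:

  update column family <column_family> with gc_grace = 1728000;

The CQL equivalent is ALTER TABLE <keyspace>.<table> WITH gc_grace_seconds = 1728000. If a full repair cannot finish within gc_grace, tombstones can be purged before they reach every replica and deleted columns can come back, which is the "BEWARE of deleted columns" warning above.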
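
A minimal cqlsh sketch of the consistency levels recommended on slide 38; the table and columns are hypothetical, and most clients would set these levels in driver configuration rather than interactively:

  -- Writes: wait for a quorum of replicas in the local region only
  CONSISTENCY LOCAL_QUORUM;
  INSERT INTO <keyspace>.<table> (id, value) VALUES ('abc', 'xyz');

  -- Reads: a single replica in the local region (LOCAL_ONE, added by CASSANDRA-6202)
  CONSISTENCY LOCAL_ONE;
  SELECT value FROM <keyspace>.<table> WHERE id = 'abc';

Because neither level waits on the remote region, a region outage or a network partition between regions does not block local traffic, which is the point of avoiding ALL and QUORUM.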
