
Architectures for High Availability - QConSF


Architecture talk aimed at a well-informed developer audience (the QConSF "Real Use Cases for NoSQL" track), focused mainly on availability. Skips the Netflix cloud migration material covered in other talks.

  1. Architectural Patterns for High Anxiety Availability – November 2012, Adrian Cockcroft, @adrianco #netflixcloud #qconsf
  2. The Netflix Streaming Service – now in USA, Canada, Latin America, UK, Ireland, Sweden, Denmark, Norway and Finland
  3. US Non-Member Web Site – advertising and marketing driven
  4. Member Web Site – personalization driven
  5. Streaming Device API
  6. Content Delivery Service – distributed storage nodes controlled by Netflix cloud services
  7. November 2012 Traffic
  8. Abstract – Netflix on Cloud: What, Why and When; Globally Distributed Architecture; Benchmarks and Scalability; Open Source Components; High Anxiety
  9. Blah Blah Blah – I'm skipping all the cloud intro etc.; did that last year. Netflix runs in the cloud; if you hadn't figured that out already you aren't paying attention and should go read InfoQ and…
  10. Things we don't do
  11. Things We Do Do… in production at Netflix: Big Data/Hadoop (2009); AWS Cloud (2009); Application Performance Management (2010); Integrated DevOps Practices (2010); Continuous Integration/Delivery (2010); NoSQL, Globally Distributed (2010); Platform as a Service, Micro-Services (2010); Social coding, open development on github (2011)
  12. How Netflix Works – diagram: consumer electronics devices (PC, PS3, TV…) talk to the Streaming API and the Web Site / Discovery API in the AWS cloud (user data, personalization, services, QoS logging, CDN management and steering, content encoding); CDN edge locations (OpenConnect CDN boxes) deliver the DRM-protected stream to the customer device
  13. Web Server Dependencies Flow (home page business transaction as seen by AppDynamics) – each icon is three to a few hundred instances across three AWS zones; the flow starts at the web service and fans out to personalization, the movie group chooser, Cassandra, memcached, and an S3 bucket
  14. Component Micro-Services – test with Chaos Monkey and Latency Monkey
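The core Chaos Monkey idea referenced on this slide is simple enough to sketch: periodically pick a random instance out of a group and terminate it, so every service is forced to prove it survives instance loss. This is a minimal illustrative sketch, not Netflix's actual implementation (the class and method names here are invented for illustration):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Random;

// Minimal sketch of the Chaos Monkey idea: pick one random instance
// from an auto-scaling group and terminate it, forcing services to
// tolerate instance loss in production.
public class ChaosMonkeySketch {
    private final Random random;

    public ChaosMonkeySketch(long seed) {
        this.random = new Random(seed);
    }

    // Returns the instance chosen for termination, or null if the group is empty.
    public String pickVictim(List<String> instances) {
        if (instances.isEmpty()) {
            return null;
        }
        return instances.get(random.nextInt(instances.size()));
    }

    public static void main(String[] args) {
        List<String> group = new ArrayList<>(List.of("i-01", "i-02", "i-03"));
        ChaosMonkeySketch monkey = new ChaosMonkeySketch(42);
        String victim = monkey.pickVictim(group);
        group.remove(victim);
        System.out.println("Terminated " + victim + "; survivors: " + group);
    }
}
```

Latency Monkey follows the same pattern but injects delays and errors instead of terminating instances.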
  15. Three Balanced Availability Zones – test with Chaos Gorilla. Load balancers feed Zones A, B and C, each running Cassandra and EVCache replicas.
  16. Triple Replicated Persistence – Cassandra maintenance affects individual replicas. Load balancers feed Zones A, B and C, each running Cassandra and EVCache replicas.
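The arithmetic behind surviving a zone loss with triple replication is worth making explicit: with replication factor 3 and one replica per zone, a quorum is floor(3/2) + 1 = 2, so quorum reads and writes still succeed when one whole zone is down. A small sketch of that reasoning (names are illustrative, not from any Netflix library):

```java
// Quorum arithmetic behind running on 2 out of 3 zones: with replication
// factor 3 (one replica per zone), a quorum is floor(RF/2) + 1 = 2, so
// quorum operations survive the loss of one entire zone.
public class QuorumMath {
    public static int quorum(int replicationFactor) {
        return replicationFactor / 2 + 1;
    }

    // Can the cluster still serve quorum operations with this many replicas up?
    public static boolean quorumAvailable(int replicationFactor, int replicasUp) {
        return replicasUp >= quorum(replicationFactor);
    }

    public static void main(String[] args) {
        System.out.println("RF=3 quorum = " + quorum(3));              // 2
        System.out.println("one zone down:  " + quorumAvailable(3, 2)); // true
        System.out.println("two zones down: " + quorumAvailable(3, 1)); // false
    }
}
```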
  17. Isolated Regions – US-East and EU-West each have their own load balancers, three zones (A, B, C), and Cassandra replicas.
  18. Failure Modes and Effects:
      Failure Mode        | Probability | Mitigation Plan
      Application failure | High        | Automatic degraded response
      AWS region failure  | Low         | Wait for region to recover
      AWS zone failure    | Medium      | Continue to run on 2 out of 3 zones
      Datacenter failure  | Medium      | Migrate more functions to cloud
      Data store failure  | Low         | Restore from S3 backups
      S3 failure          | Low         | Restore from remote archive
  19. Zone Failure Modes:
      • Power outage – instances lost, ephemeral state lost; clean break and recovery, fail fast, "no route to host"
      • Network outage – instances isolated, state inconsistent; more complex symptoms, recovery issues, transients
      • Dependent service outage – cascading failures, misbehaving instances, human errors; confusing symptoms, recovery issues, byzantine effects
      More detail on this topic at AWS Re:Invent later this month…
  20. Cassandra-backed Micro-Services – a highly scalable, available and durable deployment pattern
  21. Micro-Service Pattern – one keyspace replaces a single table or materialized view. Many different single-function REST clients call a stateless data access REST service (using the Astyanax Cassandra client), which fronts a single-function Cassandra cluster managed by Priam, between 6 and 72 nodes, with an optional datacenter update flow. Each icon represents a horizontally scaled service of three to hundreds of instances deployed over three availability zones. AppDynamics provides service flow visualization.
  22. Stateless Micro-Service Architecture – Linux base AMI (CentOS or Ubuntu); optional Apache frontend, memcached, non-Java apps; Java (JDK 6 or 7); AppDynamics appagent monitoring; Tomcat; application war file with base servlet, platform and client interface jars, Astyanax; healthcheck, status servlets, JMX interface, Servo autoscale; monitoring via AppDynamics machineagent and Epic/Atlas; GC and thread dump logging; log rotation to S3
  23. Astyanax – available at
      • Features: complete abstraction of connection pool from RPC protocol; fluent-style API; operation retry with backoff; token aware
      • Recipes: distributed row lock (without Zookeeper); multi-DC row lock; uniqueness constraint; multi-row uniqueness constraint; chunked and multi-threaded large file storage
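The "operation retry with backoff" feature listed above is a standard pattern: retry a failing operation with exponentially growing delays so transient faults are ridden out without hammering a struggling node. This sketch shows the underlying idea only; it is not Astyanax's actual RetryPolicy API:

```java
import java.util.concurrent.Callable;

// Sketch of retry-with-exponential-backoff: retry a failing operation,
// doubling the delay after each attempt, and rethrow the last failure
// if all attempts are exhausted. Not the Astyanax RetryPolicy API.
public class BackoffRetry {
    public static <T> T execute(Callable<T> op, int maxAttempts, long baseDelayMs)
            throws Exception {
        Exception last = null;
        for (int attempt = 0; attempt < maxAttempts; attempt++) {
            try {
                return op.call();
            } catch (Exception e) {
                last = e;
                // Exponential backoff: baseDelay * 2^attempt (1x, 2x, 4x, ...).
                Thread.sleep(baseDelayMs << attempt);
            }
        }
        throw last;
    }

    public static void main(String[] args) throws Exception {
        int[] failuresLeft = {2}; // simulate two transient failures, then success
        String result = execute(() -> {
            if (failuresLeft[0]-- > 0) {
                throw new RuntimeException("transient failure");
            }
            return "ok";
        }, 5, 1);
        System.out.println(result); // ok
    }
}
```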
  24. Astyanax Query Example – paginate through all columns in a row:

        ColumnList<String> columns;
        int pageSize = 10;
        try {
            RowQuery<String, String> query = keyspace
                .prepareQuery(CF_STANDARD1)
                .getKey("A")
                .setIsPaginating()
                .withColumnRange(new RangeBuilder().setMaxSize(pageSize).build());
            while (!(columns = query.execute().getResult()).isEmpty()) {
                for (Column<String> c : columns) {
                    // process each column here
                }
            }
        } catch (ConnectionException e) {
            // connection-level failure: handle or rethrow
        }
  25. Astyanax → Cassandra Write Data Flows – single region, multiple availability zones, token aware:
      1. Client writes to the local Cassandra coordinator
      2. Coordinator writes to the other zones
      3. Nodes return acks
      4. Data written to internal commit log disks (no more than 10 seconds later)
      If a node goes offline, hinted handoff completes the write when the node comes back up. Token-aware clients can choose to wait for one node, a quorum, or all nodes to ack the write. SSTable disk writes and compactions occur asynchronously.
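The client-visible part of the flow above reduces to ack counting: the coordinator sends the write to all replicas and unblocks the client once the requested consistency level (one, quorum, or all) is satisfied. A small sketch of that decision, with invented names (this is not Cassandra's internal code):

```java
import java.util.List;

// Sketch of the single-region write flow: the coordinator forwards a
// write to all replicas (one per zone) and the client unblocks once
// the requested number of acks has arrived. Replica disk I/O and
// commit-log flushing remain asynchronous.
public class WriteFlowSketch {
    enum ConsistencyLevel { ONE, QUORUM, ALL }

    static int required(ConsistencyLevel cl, int replicas) {
        switch (cl) {
            case ONE:    return 1;
            case QUORUM: return replicas / 2 + 1;
            default:     return replicas; // ALL
        }
    }

    // Given which replicas acked, did the write succeed at this level?
    static boolean writeSucceeds(ConsistencyLevel cl, List<Boolean> replicaAcks) {
        long acks = replicaAcks.stream().filter(a -> a).count();
        return acks >= required(cl, replicaAcks.size());
    }

    public static void main(String[] args) {
        // RF=3, one zone down: zones A and B acked, zone C did not.
        List<Boolean> oneZoneDown = List.of(true, true, false);
        System.out.println(writeSucceeds(ConsistencyLevel.QUORUM, oneZoneDown)); // true
        System.out.println(writeSucceeds(ConsistencyLevel.ALL, oneZoneDown));    // false
    }
}
```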
  26. Data Flows for Multi-Region Writes – token aware, consistency level = local quorum:
      1. Client writes to local replicas
      2. Local write acks returned to client, which continues when 2 of 3 local nodes are committed
      3. Local coordinator writes to remote coordinator (100+ ms latency)
      4. When data arrives, remote coordinator node acks and copies to other remote zones
      5. Remote nodes ack to local coordinator
      6. Data flushed to internal commit log disks (no more than 10 seconds later)
      If a node or region goes offline, hinted handoff completes the write when the node comes back up. Nightly global compare and repair jobs ensure everything stays consistent.
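The key difference from the single-region flow is what the client waits for: with LOCAL_QUORUM, only acks from replicas in the client's own region count toward unblocking it; the copy to the remote region completes asynchronously via the coordinators. A sketch of that distinction (names invented for illustration):

```java
import java.util.Map;

// Sketch of LOCAL_QUORUM semantics from the multi-region flow: the
// client unblocks once a quorum of replicas in its OWN region have
// acked; remote-region replication proceeds asynchronously.
public class LocalQuorumSketch {
    // acksByRegion maps region name -> number of replica acks received there.
    static boolean localQuorumMet(String localRegion,
                                  Map<String, Integer> acksByRegion,
                                  int replicasPerRegion) {
        int quorum = replicasPerRegion / 2 + 1;
        return acksByRegion.getOrDefault(localRegion, 0) >= quorum;
    }

    public static void main(String[] args) {
        // US write: 2 of 3 US replicas acked; the EU copy is still in flight.
        Map<String, Integer> acks = Map.of("us-east", 2, "eu-west", 0);
        System.out.println(localQuorumMet("us-east", acks, 3)); // true
    }
}
```

This is why a region can go fully offline without blocking writes in the surviving region, at the cost of temporary cross-region divergence repaired by hinted handoff and the nightly repair jobs.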
  27. Cassandra Instance Architecture – Linux base AMI (CentOS or Ubuntu); Tomcat and Priam on JDK (Java 7); healthcheck and status; AppDynamics appagent monitoring; Cassandra server; local ephemeral disk space (2TB of SSD or 1.6TB disk) holding commit log and SSTables; monitoring via AppDynamics machineagent and Epic/Atlas; GC and thread dump logging
  28. Priam – Cassandra Automation. Available at
      • Netflix platform Tomcat code
      • Zero-touch auto-configuration
      • State management for Cassandra JVM
      • Token allocation and assignment
      • Broken node auto-replacement
      • Full and incremental backup to S3
      • Restore sequencing from S3
      • Grow/shrink Cassandra "ring"
  29. Cassandra Backup:
      • Full backup – time-based snapshot; SSTables compressed and copied to S3
      • Incremental backup – SSTable write triggers a compressed copy to S3
      • Archive – copy cross-region
  30. Deployment at Netflix – over 50 Cassandra clusters; over 500 nodes (m2.4xlarge and hi1.4xlarge); over 30TB of daily backups; biggest cluster 72 nodes; 1 cluster over 250K writes/s
  31. Cassandra Explorer for Data – open source on github soon
  32. ETL for Cassandra:
      • Data is de-normalized over many clusters
      • Too many to restore from backups for ETL
      • Solution – read backup files using Hadoop
      • Aegisthus – high-throughput raw SSTable processing; re-normalizes many clusters to a consistent view; Extract, Transform, then Load into Teradata
  33. Benchmarks and Scalability
  34. Cloud Deployment Scalability – new autoscaled AMI: zero to 500 instances from 21:38:52 to 21:46:32 (7m40s). Scaled up and down over a few days, 2,176 total instance launches of m2.2xlarge (4 core, 34GB). Launch time distribution: Min. 41.0, 1st Qu. 104.2, Median 149.0, Mean 171.8, 3rd Qu. 215.8, Max. 562.0
  35. Scalability from 48 to 288 nodes on AWS – client writes/s by node count, replication factor = 3: 48 nodes → 174,373; 96 → 366,828; 144 → 537,172; 288 → 1,099,837. Used 288 m1.xlarge instances (4 CPU, 15 GB RAM, 8 ECU) running Cassandra 0.86; the benchmark config only existed for about 1 hr.
  36. "Some people skate to the puck, I skate to where the puck is going to be" – Wayne Gretzky
  37. Cassandra on AWS:
      The Past (m2.4xlarge): storage 2 drives, 1.7TB; CPU 8 cores, 26 ECU; RAM 68GB; network 1Gbit; IOPS ~500; throughput ~100 Mbyte/s; cost $1.80/hr
      The Future (hi1.4xlarge): storage 2 SSD volumes, 2TB; CPU 8 HT cores, 35 ECU; RAM 64GB; network 10Gbit; IOPS ~100,000; throughput ~1 Gbyte/s; cost $3.10/hr
  38. Cassandra Disk vs. SSD Benchmark – same throughput, lower latency, half the cost
  39. Netflix Open Source Strategy:
      • Release PaaS components git-by-git – source at – we build from it; intros and techniques at – blog post or new code every few weeks
      • Motivations – give back to the Apache-licensed OSS community; motivate, retain, and hire top engineers; "peer pressure" code cleanup, external contributions
  40. Instance Creation – build tools and Odin drive the Bakery: application code is baked onto a base AMI to produce an image; Asgard autoscaling scripts then start the ASG/instance, giving a running instance
  41. Application Launch – Governator (Guice) initializes the application; Eureka provides the service registry, Archaius the configuration, and Edda the registry and configuration history; Servo and async logging support the running service
  42. Runtime – client-side components: Astyanax, NIWS LB/REST client, CassJMeter, Dependency Command; server-side components: Priam, Curator, Exhibitor, Explorers; resiliency aids: Chaos Monkey, Latency Monkey, Janitor Monkey
  43. Open Source Projects (legend: Github/Techblog, Apache Contributions, Techblog Post, Coming Soon):
      • Priam – Cassandra as a Service
      • Exhibitor – Zookeeper as a Service
      • Servo and autoscaling scripts
      • Astyanax – Cassandra client for Java
      • Curator – Zookeeper patterns
      • Honu – Log4j streaming to Hadoop
      • CassJMeter – Cassandra test suite
      • EVCache – Memcached as a Service
      • Circuit Breaker – robust service pattern
      • Cassandra multi-region EC2 datastore support
      • Eureka / Discovery – service directory
      • Asgard – AutoScaleGroup-based AWS console
      • Aegisthus – Hadoop ETL for Cassandra
      • Archaius – dynamic properties service
      • Chaos Monkey – robustness verification
      • Edda – queryable config history
      • Explorers
      • Latency Monkey – server-side latency/error injection
      • Governator – library lifecycle and dependency injection
      • Janitor Monkey
      • Odin – workflow orchestration
      • REST Client + mid-tier LB
      • Bakeries and AMI
      • Async logging
      • Configuration REST endpoints
      • Build dynaslaves
  44. Cassandra Next Steps:
      • Migrate production Cassandra to SSD – many clusters done, 100+ SSD nodes running
      • Autoscale Cassandra using Priam – Cassandra 1.2 vnodes make this easier; shrink the Cassandra cluster every night
      • Automated zone and region operations – add/remove zone, split or merge clusters; add/remove region, split or merge clusters
  45. YOLO
  46. Skynet – a Netflix Hackday project that might just terminate the world… (hack currently only implemented in Powerpoint – luckily)
  47. The Plot (kinda):
      • Skynet is a sentient computer
      • Skynet defends itself if you try to turn it off
      • Connor is the guy who eventually turns it off
      • Terminator is the robot sent to kill Connor
  48. The Hacktors:
      • Cass_skynet is a self-managing Cassandra cluster
      • Connor_monkey kills cass_skynet nodes
      • Terminator_monkey kills connor_monkey nodes
  49. The Hacktion:
      • Cass_skynet stores a history of its world and action scripts that trigger from what it sees
      • Action response to losing a node – auto-replace the node and grow the cluster size
      • Action response to losing more nodes – replicate the cluster into a new zone or region
      • Action response to seeing a Connor_monkey – start up a Terminator_monkey
  50. Implementation:
      • Priam – auto-replace missing nodes; grow the cass_skynet cluster in zone, to new zones or regions
      • Cassandra keyspaces – Actions (scripts to be run) and Memory (event log of everything seen)
      • Cron job once a minute – extract actions from Cassandra and execute; log actions and results in Memory
      • Chaos Monkey configuration – Terminator_monkey: pick a zone, kill any connor_monkey; Connor_monkey: kill any cass_skynet or terminator_monkey
  51. "Simulation"
  52. High Anxiety
  53. Takeaway – Netflix has built and deployed a scalable global platform based on Cassandra and AWS. Key components of the Netflix PaaS are being released as open source projects so you can build your own custom PaaS. SSDs in the cloud are awesome…