#lspe Q1 2013 dynamically scaling netflix in the cloud


Meetup presentation on how Netflix dynamically scales in the cloud. It primarily covers AWS autoscaling and provides some "day-in-the-life" data.


  1. Dynamically Scaling Netflix in the Cloud
     Coburn Watson, Manager - Cloud Performance Engineering
  2. Netflix, Inc.
     - World's leading internet television network
     - 33 million subscribers in 40 countries
     - Over a billion hours streamed per month
     - Approximately 33% of all US internet traffic at night
     - Increasing quantity of original content
     - Recent technical notables:
       - Open Source Software
       - Open Connect (homegrown CDN)
  3. About Me
     - Manage the Cloud Performance Engineering team
     - Focused on performance since 2000-ish
       - Large-scale billing applications, eCommerce, datacenter mgmt, etc.
       - Genentech, McKesson, Amdocs, Mercury Int., HP, etc.
     - Passion for tackling performance at cloud scale
     - Looking for great performance engineers: cwatson@netflix.com
  4. First things first
     - ASG = Auto Scaling group
     - AWS description:
       "An Auto Scaling group is a representation of multiple Amazon EC2 instances that share similar characteristics, and that are treated as a logical grouping for the purposes of instance scaling and management."
       "An Auto Scaling group starts by launching the minimum number (or the desired number, if specified) of EC2 instances and then increases or decreases the number of running EC2 instances automatically according to the conditions that you define."
     - Within Netflix, (almost) all services are created as ASGs
       - Asgard (OSS) simplifies this process
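The AWS description above boils down to a single CreateAutoScalingGroup request. A minimal sketch of what that request looks like, assuming the modern boto3 API; all names (ASG, launch configuration, availability zones) are hypothetical, and at Netflix this step is driven through Asgard rather than hand-written API calls:

```python
# Sketch: the parameters behind creating an Auto Scaling group.
# All names here are hypothetical illustrations.

def build_asg_request(name, launch_config, min_size, desired, max_size, zones):
    """Assemble the parameters for a CreateAutoScalingGroup call."""
    return {
        "AutoScalingGroupName": name,
        "LaunchConfigurationName": launch_config,
        "MinSize": min_size,          # ASG never shrinks below this
        "DesiredCapacity": desired,   # instance count launched at creation
        "MaxSize": max_size,          # hard ceiling for scale-up
        "AvailabilityZones": zones,
    }

request = build_asg_request("api-v042", "api-v042-lc", 200, 300, 600,
                            ["us-east-1c", "us-east-1d", "us-east-1e"])
# With AWS credentials configured, this would be submitted via boto3:
#   import boto3
#   boto3.client("autoscaling").create_auto_scaling_group(**request)
```

Asgard's value-add is wrapping exactly this request (plus launch configurations, security groups, and red/black pushes) behind a UI, so service teams rarely touch the raw API.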
  5. Dynamic Scaling @ Netflix
     - EC2 footprint autoscales by 2,500-3,500 instances per day
       - on the order of tens of thousands of EC2 instances overall
     - Largest ASG* spans 200-600 m2.4xlarge instances (64 GB RAM)
     Why:
     - Improved scalability during unexpected workloads
     - Avoids sizing capacity aggressively high
       - each service team determines its own capacity
     - Creates "reserved instance troughs" for batch activity
       - on the order of hundreds of thousands of instance hours weekly
     * largest "autoscaling" ASG
  6. How?
     - Discovery
       - AWS Elastic Load Balancers "speak" autoscaling
       - mid-tier services utilize Eureka (OSS)
     - Leverage native AWS autoscaling capabilities
     - Publish our own metrics up to CloudWatch (Servo, OSS)
     - Stateless services
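Publishing application-level metrics to CloudWatch is what lets alarms fire on request rates rather than just CPU. Servo is a Java library that does this from inside the JVM; a minimal Python sketch of the equivalent PutMetricData payload, where the namespace and metric name are hypothetical:

```python
# Sketch: shipping a custom application metric to CloudWatch so it can
# drive an autoscaling alarm. Namespace/metric names are hypothetical.
import datetime

def build_metric_datum(name, value, unit="Count/Second"):
    """One datapoint in CloudWatch PutMetricData form."""
    return {
        "MetricName": name,
        "Timestamp": datetime.datetime.utcnow(),
        "Value": value,
        "Unit": unit,
    }

payload = {
    "Namespace": "NFLX/ServiceC",   # hypothetical per-service namespace
    "MetricData": [build_metric_datum("numCompleted", 72.0)],
}
# With credentials configured:
#   boto3.client("cloudwatch").put_metric_data(**payload)
```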
  7. How?
     Two types of scaling behavior are exposed in Asgard:
     1. rate-based autoscaling
     2. scheduled-action autoscaling
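The second flavor, scheduled-action autoscaling, adjusts capacity at fixed times of day rather than in response to a metric. A hedged sketch of the underlying PutScheduledUpdateGroupAction parameters, with a hypothetical cron schedule and capacities (AWS recurrence expressions are cron syntax in UTC):

```python
# Sketch: a scheduled scaling action that grows an ASG ahead of a known
# daily traffic peak. All names, times, and sizes are hypothetical.

def build_scheduled_action(asg, name, cron, min_size, desired, max_size):
    """Parameters for a PutScheduledUpdateGroupAction call."""
    return {
        "AutoScalingGroupName": asg,
        "ScheduledActionName": name,
        "Recurrence": cron,          # cron expression, evaluated in UTC
        "MinSize": min_size,
        "DesiredCapacity": desired,
        "MaxSize": max_size,
    }

# Hypothetical evening ramp-up before the streaming peak:
evening = build_scheduled_action("api-v042", "evening-rampup",
                                 "0 23 * * *", 400, 500, 600)
# boto3.client("autoscaling").put_scheduled_update_group_action(**evening)
```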
  8. AWS Autoscaling
     - Define policies on the ASG
       - alarm, scaling unit (percent/amount), cooldown, evaluation interval and period
     - Cooldowns:
       - ASG-level versus policy-level (both exist)
       - cooldown starts when the last instance is ready
       - should be tied closely to application/service startup time
     - Execute load or squeeze tests to measure capacity
       - frequent pushes in an SOA mean per-instance capacity can change frequently
       - (insert here) 10-second primer on squeeze tests
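A rate-based policy is really two pieces: a scaling policy on the ASG and a CloudWatch alarm that triggers it. A minimal sketch of both payloads, assuming thresholds, cooldowns, namespaces, and names that are purely illustrative; the cooldown should roughly cover the service's startup time, per the slide:

```python
# Sketch: rate-based autoscaling = scaling policy + CloudWatch alarm.
# Every name and number below is a hypothetical illustration.

def build_scale_up_policy(asg, cooldown_seconds):
    """Parameters for PutScalingPolicy: add 10% capacity per firing."""
    return {
        "AutoScalingGroupName": asg,
        "PolicyName": "scale-up-on-rps",
        "AdjustmentType": "PercentChangeInCapacity",  # percent, not amount
        "ScalingAdjustment": 10,
        "Cooldown": cooldown_seconds,  # policy-level cooldown (seconds)
    }

def build_alarm(metric, threshold, policy_arn):
    """Parameters for PutMetricAlarm: fire when avg RPS stays high."""
    return {
        "AlarmName": metric + "-high",
        "Namespace": "NFLX/ServiceB",      # hypothetical metric namespace
        "MetricName": metric,
        "Statistic": "Average",
        "Period": 60,                      # evaluation period (seconds)
        "EvaluationPeriods": 5,            # 5 consecutive breaching periods
        "Threshold": threshold,
        "ComparisonOperator": "GreaterThanThreshold",
        "AlarmActions": [policy_arn],      # alarm triggers the policy
    }

policy = build_scale_up_policy("api-v042", cooldown_seconds=600)
alarm = build_alarm("requestCount", 85.0, "policy-arn-placeholder")
```

A symmetric scale-down policy and low-threshold alarm would be defined the same way, typically with a much more conservative threshold.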
  9. In Action
     - Example covers 3 services: 2 edge (A, B), 1 mid-tier (C)
       - C has more upstream services than simply A and B
     - Multiple autoscaling policy types:
       - (A) system load average
       - (B) request-rate based (Tomcat requestCount)
       - (C) request-rate based (internal library numCompleted)
  10. Day in the life: instance counts
      - At peak: 1,948 instances
      - without autoscaling: ~46.8k instance hours
      - with autoscaling: ~31.2k instance hours (~33% reduction in usage)
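The instance-hour figures above can be sanity-checked directly: without autoscaling you provision for peak around the clock, so the baseline is simply the peak count times 24 hours.

```python
# Sanity check of the slide's instance-hour arithmetic.
peak_instances = 1948
without_autoscaling = peak_instances * 24   # peak fleet runs all day
with_autoscaling = 31_200                   # figure taken from the slide

savings = 1 - with_autoscaling / without_autoscaling
print(round(without_autoscaling / 1000, 1))  # -> 46.8 (thousand instance hours)
print(round(savings * 100))                  # -> 33 (% reduction)
```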
  11. Day in the life: request rates
      - Total requests: 4.5x peak versus minimum
      - Per-instance rate stays between 45-90 RPS
  12. Day in the life: latency
      - Response variability is greatest during initial scale-up events
      - Average response time primarily between 75-150 msec
  13. Day in the life: CPU utilization
      - Instance counts 3x, request rate 4.5x (not shown)
      - Avg CPU utilization per instance: ~25-55%*
      * service A is currently resolving a concurrency issue, which limits ideal CPU utilization
  14. Unused capacity
      - Reserved instance "troughs" = spare capacity
        - align services on fewer instance types for fewer, larger pools
      - Current usage
        - stand up a "bonus" EMR cluster in off-peak hours
      - Planned usage
        - a framework is being developed to share unused capacity "fairly" across multiple batch applications
  15. Caveats
      - AWS autoscaling
        - simplified scaling-policy capabilities
        - cooldown is static, not dynamically configurable
      - Application resource profiles can change quickly (SOA)
      - When something goes wrong...
        1. traffic rates can drop quickly
        2. scale-down can kick in
        3. a thundering herd can knock you back down
      - Lock out scale-down quickly
      - Proactively protect yourself with Hystrix (OSS) against downstream service degradation or failure
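Hystrix itself is a Java library; the core idea it applies here is a circuit breaker with fallbacks, so a degraded downstream service sees fast failures instead of a thundering herd of retries. A much-simplified Python analogue (thresholds and the fallback are hypothetical, and real Hystrix adds thread-pool isolation, timeouts, and half-open recovery):

```python
# Sketch of the circuit-breaker idea behind Hystrix, in Python.
# Trip open after repeated failures and serve a fallback instead of
# piling load onto a struggling downstream dependency.

class CircuitBreaker:
    def __init__(self, failure_threshold=5):
        self.failure_threshold = failure_threshold  # hypothetical tuning knob
        self.failures = 0
        self.open = False

    def call(self, func, fallback):
        if self.open:
            return fallback()        # fail fast: protect the dependency
        try:
            result = func()
            self.failures = 0        # a healthy call resets the count
            return result
        except Exception:
            self.failures += 1
            if self.failures >= self.failure_threshold:
                self.open = True     # trip: stop hammering downstream
            return fallback()

breaker = CircuitBreaker(failure_threshold=2)

def flaky():
    raise RuntimeError("downstream degraded")

for _ in range(3):
    print(breaker.call(flaky, lambda: "cached response"))  # fallback each time
```

After the second failure the breaker is open, so the third call never touches the downstream service at all.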
  16. Wrap-up
      - Autoscaling is a big win for Netflix
      - Dynamic scaling affords improved scalability
      - Our open source software simplifies management at scale
      - Next Netflix OSS meetup: Wednesday, March 13th @ Netflix
      - Great projects, stunning colleagues: jobs.netflix.com