
I Don't Test Often ...

How Netflix tests in production to augment more traditional testing methods. This talk covers the Simian Army (Chaos Monkey and friends), code coverage in production, and canary testing.


I Don't Test Often ...

  1. I Don’t Test Often … But When I Do, I Test in Production
  2. Building distributed systems is hard. Testing them exhaustively is even harder.
  3. Gareth Bowles
  4. @garethbowles
  5. Netflix is the world’s leading Internet television network, with more than 57 million members in 50 countries enjoying more than one billion hours of TV shows and movies per month. We account for up to 34% of downstream US internet traffic. Source: http://ir.netflix.com
  6. Diagram: the Netflix API and its dependencies (Personalization Engine, User Info, Movie Metadata, Movie Ratings, Similar Movies, Reviews, A/B Test Engine). 2B requests per day into the Netflix API; 12B outbound requests per day to API dependencies.
  7. A Complex Distributed System
  8. Our Deployment Platform
  9. What AWS Provides • Machine Images (AMI) • Instances (EC2) • Elastic Load Balancers • Security groups / Autoscaling groups • Availability zones and regions
  10. Our system has become so complex that… …no (single) body knows how everything works.
  11. How AWS Can Go Wrong - 1 • Service goes down in one or more availability zones • 6/29/12 - storm related power outage caused loss of EC2 and RDS instances in Eastern US • https://gigaom.com/2012/06/29/some-of-amazon-web-services-are-down-again/
  12. How AWS Can Go Wrong - 2 • Loss of service in an entire region • 12/24/12 - operator error caused loss of multiple ELBs in Eastern US • http://techblog.netflix.com/2012/12/a-closer-look-at-christmas-eve-outage.html
  13. How AWS Can Go Wrong - 3 • Large number of instances get rebooted • 9/25/14 to 9/30/14 - rolling reboot of 1000s of instances to patch a security bug • http://techblog.netflix.com/2014/10/a-state-of-xen-chaos-monkey-cassandra.html
  14. Our Goal is Availability • Members can stream Netflix whenever they want • New users can explore and sign up • New members can activate their service and add devices
  15. http://www.slideshare.net/reed2001/culture-1798664
  16. Freedom and Responsibility • Developers deploy when they want • They also manage their own capacity and autoscaling • And are on-call to fix anything that breaks at 3am!
  17. How the heck do you test this stuff?
  18. Failure is All Around Us • Disks fail • Power goes out - and your backup generator fails • Software bugs are introduced • People make mistakes
  19. Design to Avoid Failure • Exception handling • Redundancy • Fallback or degraded experience (circuit breakers) • But is it enough?
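
      Slide 19 names the circuit breaker pattern without showing one. A minimal sketch of the idea in Python follows; it is illustrative only (the class name, thresholds, and fallback are assumptions, not the library Netflix actually uses).

      ```python
      import time

      class CircuitBreaker:
          """Illustrative circuit breaker: trip after repeated failures,
          serve a fallback while open, and retry after a cooldown."""

          def __init__(self, max_failures=3, reset_after_s=30.0):
              self.max_failures = max_failures
              self.reset_after_s = reset_after_s
              self.failures = 0
              self.opened_at = None  # None means the circuit is closed

          def call(self, fn, fallback):
              # While open, skip the real call until the cooldown has elapsed.
              if self.opened_at is not None:
                  if time.monotonic() - self.opened_at < self.reset_after_s:
                      return fallback()
                  self.opened_at = None  # half-open: allow one trial call
                  self.failures = 0
              try:
                  result = fn()
                  self.failures = 0
                  return result
              except Exception:
                  self.failures += 1
                  if self.failures >= self.max_failures:
                      self.opened_at = time.monotonic()
                  return fallback()

      # Example: degrade to a default response while a dependency is failing
      # (fetch_ratings is a hypothetical call to a downstream service).
      # ratings_breaker = CircuitBreaker()
      # response = ratings_breaker.call(fetch_ratings, lambda: {"ratings": []})
      ```
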
  20. It’s Not Enough • How do we know we’ve succeeded? • Does the system work as designed? • Is it as resilient as we believe? • How do we avoid drifting into failure?
  21. More Testing! • Unit • Integration • Stress • Exhaustive testing to simulate all failure modes
  22. Exhaustive Testing ~ Impossible • Massive, rapidly changing data sets • Internet-scale traffic • Complex interaction and information flow • Independently controlled services • All while innovating and building features
  23. Another Way • Cause failure deliberately to validate resiliency • Test design assumptions by stressing them • Don’t wait for random failure. Remove its uncertainty by forcing it regularly
  24. Introducing the Army
  25. Chaos Monkey
  26. Chaos Monkey • The original Monkey (2009) • Randomly terminates instances in a cluster • Simulates failures inherent to running in the cloud • During business hours • Default for production services
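
      The real Chaos Monkey is the open-sourced tool linked later in the deck (github.com/Netflix/SimianArmy). Purely as a sketch of the behaviour slide 26 describes, here is a hedged Python/boto3 version that terminates one random in-service instance per Auto Scaling group during business hours; the schedule check and the absence of any opt-out logic are simplifications.

      ```python
      import random
      from datetime import datetime

      import boto3  # AWS SDK for Python

      def chaos_monkey_run(region="us-east-1"):
          """Terminate one random in-service instance per Auto Scaling group."""
          now = datetime.now()
          if now.weekday() >= 5 or not (9 <= now.hour < 17):
              return  # only create chaos during business hours, when people can respond

          autoscaling = boto3.client("autoscaling", region_name=region)
          ec2 = boto3.client("ec2", region_name=region)

          # Note: this sketch only looks at the first page of groups.
          for group in autoscaling.describe_auto_scaling_groups()["AutoScalingGroups"]:
              instance_ids = [i["InstanceId"] for i in group["Instances"]
                              if i["LifecycleState"] == "InService"]
              if not instance_ids:
                  continue
              victim = random.choice(instance_ids)
              print(f"Terminating {victim} in {group['AutoScalingGroupName']}")
              ec2.terminate_instances(InstanceIds=[victim])

      if __name__ == "__main__":
          chaos_monkey_run()
      ```
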
  27. What did we do once we were able to handle Chaos Monkey? Bring in bigger Monkeys!
  28. Chaos Gorilla
  29. Chaos Gorilla • Simulate an Availability Zone becoming unavailable • Validate multi-AZ redundancy • Deploy to multiple AZs by default • Run regularly (but not continually!)
  30. Chaos Kong
  31. Chaos Kong • “One louder” than Chaos Gorilla • Simulate an entire region outage • Used to validate our “active-active” region strategy • Traffic has to be switched to the new region • Run once every few months
  32. Latency Monkey
  33. Latency Monkey • Simulate degraded instances • Ensure degradation doesn’t affect other services • Multiple scenarios: network, CPU, I/O, memory • Validate that your service can handle degradation • Find effects on other services, then validate that they can handle it too
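
      Latency Monkey itself works at the instance level inside Netflix's stack. As a rough stand-in for the idea on slide 33, here is a sketch of WSGI middleware that randomly delays or fails a small fraction of requests so callers can prove their timeouts and fallbacks work; the rates and delay are arbitrary assumptions.

      ```python
      import random
      import time

      class LatencyInjector:
          """WSGI middleware that randomly delays or fails a fraction of requests,
          so callers can verify their timeouts and fallbacks actually work."""

          def __init__(self, app, delay_s=2.0, delay_rate=0.05, error_rate=0.01):
              self.app = app
              self.delay_s = delay_s
              self.delay_rate = delay_rate
              self.error_rate = error_rate

          def __call__(self, environ, start_response):
              roll = random.random()
              if roll < self.error_rate:
                  start_response("503 Service Unavailable",
                                 [("Content-Type", "text/plain")])
                  return [b"injected failure"]
              if roll < self.error_rate + self.delay_rate:
                  time.sleep(self.delay_s)  # simulate a slow dependency
              return self.app(environ, start_response)

      # Usage with any WSGI app, e.g. a Flask app object:
      #   app.wsgi_app = LatencyInjector(app.wsgi_app)
      ```
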
  34. Conformity Monkey
  35. Conformity Monkey • Apply a set of conformity rules to all instances • Notify owners with a list of instances and problems • Example rules: standard security groups not applied; instance age is too old; no health check URL
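
      A hedged sketch of the rule-driven scan slide 35 describes, using boto3. The two rules and their thresholds are illustrative stand-ins for Conformity Monkey's rule set, and the notification step is left as a comment.

      ```python
      from datetime import datetime, timezone, timedelta

      import boto3

      MAX_AGE = timedelta(days=60)       # illustrative age threshold
      REQUIRED_GROUP = "standard-sg"     # illustrative required security group

      def find_nonconforming_instances(region="us-east-1"):
          """Return {instance_id: [problems]} for running instances breaking a rule."""
          ec2 = boto3.client("ec2", region_name=region)
          problems = {}
          reservations = ec2.describe_instances(
              Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
          )["Reservations"]
          for reservation in reservations:
              for instance in reservation["Instances"]:
                  issues = []
                  groups = {g["GroupName"] for g in instance["SecurityGroups"]}
                  if REQUIRED_GROUP not in groups:
                      issues.append("standard security group not applied")
                  if datetime.now(timezone.utc) - instance["LaunchTime"] > MAX_AGE:
                      issues.append("instance age is too old")
                  if issues:
                      problems[instance["InstanceId"]] = issues
          return problems

      # Owners would then be notified with the list of instances and problems.
      ```
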
  36. Failure Injection Testing (FIT) • Latency Monkey adds delay / failure on server side of requests • Impacts all calling apps - whether they want to participate or not • FIT decorates requests with failure data • Can limit failures to specific accounts or devices, then dial up • http://techblog.netflix.com/2014/10/fit-failure-injection-testing.html
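
      To make the contrast with Latency Monkey concrete, here is a sketch of the request-decoration pattern slide 36 describes. The header name, payload shape, and use of the `requests` library are assumptions for illustration, not the actual FIT protocol.

      ```python
      import json

      import requests  # third-party HTTP client, used here only for illustration

      FIT_HEADER = "X-Fit-Failure-Context"   # made-up header name

      def call_with_fit(url, account_id, inject_for_accounts,
                        failure="latency:2000ms"):
          """Decorate an outbound request with failure instructions that only
          apply to targeted test accounts; everyone else is untouched."""
          headers = {}
          if account_id in inject_for_accounts:
              headers[FIT_HEADER] = json.dumps({"account": account_id,
                                                "failure": failure})
          return requests.get(url, headers=headers, timeout=5)

      # Server side, each service reads the header and injects the requested
      # failure (delay, error, dropped dependency) only for those requests,
      # letting you start with one test account and gradually dial up the scope.
      ```
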
  37. Try it out! • Open sourced and available at https://github.com/Netflix/SimianArmy and https://github.com/Netflix/security_monkey • Chaos, Conformity, Janitor and Security available now; more to come • VMware as well as AWS
  38. What’s Next? • New failure modes • Run monkeys more frequently and aggressively • Make chaos testing as well understood as regular regression testing
  39. A message from the owners: “Use Chaos Monkey to induce various kinds of failures in a controlled environment.” AWS blog post following the mass instance reboot in Sep 2014: http://aws.amazon.com/blogs/aws/ec2-maintenance-update-2/
  40. Production Code Coverage: If you don’t run code coverage analysis in prod, you’re not doing it properly
  41. What We Get • Real-time code usage patterns • Focus testing by prioritizing frequently executed paths with low test coverage • Identify dead code that can be removed
  42. How We Do It • Use Cobertura as it counts how many times each LOC is executed • Easy to enable - Cobertura JARs included in our base AMI, set a flag to add them to Tomcat’s classpath • Enable on a single instance • Very low performance hit
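
      The deck's setup is Cobertura JARs on the JVM, enabled by a flag on a single instance. The same flag-gated, single-instance idea in Python might look like the sketch below using the `coverage` package; the environment variable and file path are invented for illustration.

      ```python
      import atexit
      import os

      import coverage  # https://coverage.readthedocs.io/

      # Only the one designated instance pays the (small) cost of collection.
      if os.environ.get("ENABLE_PROD_COVERAGE") == "1":
          cov = coverage.Coverage(data_file="/var/log/app/prod-coverage.dat")
          cov.start()

          def _save_coverage():
              cov.stop()
              cov.save()  # analyse offline: hot but untested paths, dead code

          atexit.register(_save_coverage)
      ```
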
  43. Canary Deployments
  44. Canaries • Push changes to a small number of instances • Use Asgard for red / black push • Monitor closely to detect regressions • Automated canary analysis • Automatically cancel deployment if problems occur
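
      "Automated canary analysis" on slide 44 boils down to comparing the canary fleet against the baseline and cancelling the push when the canary looks worse. A simplified sketch follows; real analysis weighs many metrics, and the error-rate threshold here is an arbitrary assumption.

      ```python
      def canary_looks_healthy(baseline_errors, baseline_requests,
                               canary_errors, canary_requests,
                               max_ratio=1.5):
          """Pass the canary only if its error rate is not much worse than baseline."""
          baseline_rate = baseline_errors / max(baseline_requests, 1)
          canary_rate = canary_errors / max(canary_requests, 1)
          # Allow some noise: fail only if the canary is clearly worse.
          return canary_rate <= max(baseline_rate * max_ratio, 0.001)

      # Example: baseline 0.2% errors, canary 1.5% errors -> cancel the deployment.
      if not canary_looks_healthy(200, 100_000, 150, 10_000):
          print("Canary regression detected: cancelling deployment / rolling back")
      ```
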
  45. Closing Thoughts • Don’t be scared to test in production! • You’ll get tons of data that you couldn’t get from test … • … and hopefully sleep better at night
  46. Thanks, QA or the Highway! Email: gbowles@{gmail,netflix}.com Twitter: @garethbowles LinkedIn: www.linkedin.com/in/garethbowles
