Resiliency through failure
Netflix's Approach to Extreme Availability in the Cloud
Ariel Tseitlin
http://www.linkedin.com/in/atseitlin
@atseitlin
About Netflix
Netflix is the world’s leading Internet television network, with more than 36 million members in 40 countries enjoying more than one billion hours of TV shows and movies per month, including original series [1].

[1] http://ir.netflix.com/
A complex distributed system
How Netflix Streaming Works

[Architecture diagram] A customer device (PC, PS3, TV…) goes through three phases: Browse, against the Web Site or Discovery API; Play, against the Streaming API; and Watch, streaming from CDN Edge Locations. AWS Cloud Services host User Data, Personalization, DRM, QoS Logging, CDN Management and Steering, and Content Encoding; OpenConnect CDN Boxes at the CDN Edge Locations deliver the video to Consumer Electronics.
Our goal is availability
• Members can stream Netflix whenever they want
• New users can explore and sign up for the service
• New members can activate their service and add new devices
Failure is all around us
• Disks fail
• Power goes out. And your generator fails.
• Software bugs are introduced
• People make mistakes
Failure is unavoidable
We design around failure
• Exception handling
• Clusters
• Redundancy
• Fault tolerance
• Fall-back or degraded experience (Hystrix)
• All to insulate our users from failure
Is that enough?
It’s not enough
• How do we know if we’ve succeeded?
• Does the system work as designed?
• Is it as resilient as we believe?
• How do we prevent drifting into failure?
The typical answer is…
More testing!
• Unit testing
• Integration testing
• Stress testing
• Exhaustive test suites to simulate and test all failure modes
Can we effectively simulate a large-scale distributed system?
Building distributed systems is hard
Testing them exhaustively is even harder
• Massive data sets with constantly changing shape
• Internet-scale traffic
• Complex interaction and information flow
• Asynchronous nature
• 3rd party services
• All while innovating and building features
Prohibitively expensive, if not impossible, for most large-scale systems
What if we could reduce the variability of failures?
There is another way
• Cause failure to validate resiliency
• Test design assumptions by stressing them
• Don’t wait for random failure. Remove its uncertainty by forcing it periodically
And that’s exactly what we did
Instances fail
Chaos Monkey taught us…
• State is bad
• Clusters are good
• Surviving single instance failure is not enough
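The mechanism itself is simple; the value is in running it relentlessly. Below is a minimal sketch of the idea, assuming the AWS SDK for Java (v1) and default credentials. It is illustrative only, not the actual Simian Army code:

```java
import com.amazonaws.services.ec2.AmazonEC2;
import com.amazonaws.services.ec2.AmazonEC2ClientBuilder;
import com.amazonaws.services.ec2.model.TerminateInstancesRequest;

import java.util.List;
import java.util.Random;

/** Illustrative chaos-monkey-style terminator: kill one random instance in a cluster. */
public class MiniChaosMonkey {
    private final AmazonEC2 ec2 = AmazonEC2ClientBuilder.defaultClient();
    private final Random random = new Random();

    public void unleash(List<String> clusterInstanceIds) {
        if (clusterInstanceIds.isEmpty()) {
            return; // nothing to break
        }
        // Pick a random victim; a stateless, clustered service should shrug this off.
        String victim = clusterInstanceIds.get(random.nextInt(clusterInstanceIds.size()));
        ec2.terminateInstances(new TerminateInstancesRequest().withInstanceIds(victim));
        System.out.println("Terminated " + victim);
    }
}
```

Run on a schedule during business hours, when engineers are around to observe and respond, this turns single-instance failure from a surprise into a routine event.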
Lots of instances fail
Chaos Gorilla
Chaos Gorilla taught us…
• Hidden assumptions on deployment topology
• Infrastructure control plane can be a
bottleneck
• Large scale events are hard to simulate
• Rapidly shifting traffic is error prone
• Smooth recovery is a challenge
• Cassandra works as expected
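The crudest possible zone-outage drill is sketched below, assuming the AWS SDK for Java (v1); the target zone is hypothetical, and this is not how Chaos Gorilla is actually built. It shows why the lessons above surface: every instance in a zone disappears at once, and traffic must shift to the survivors:

```java
import com.amazonaws.services.ec2.AmazonEC2;
import com.amazonaws.services.ec2.AmazonEC2ClientBuilder;
import com.amazonaws.services.ec2.model.DescribeInstancesRequest;
import com.amazonaws.services.ec2.model.Filter;
import com.amazonaws.services.ec2.model.Instance;
import com.amazonaws.services.ec2.model.Reservation;
import com.amazonaws.services.ec2.model.TerminateInstancesRequest;

/** Illustrative zone-outage drill: terminate every running instance in one availability zone. */
public class MiniChaosGorilla {
    public static void main(String[] args) {
        String doomedZone = "us-east-1a"; // hypothetical target zone
        AmazonEC2 ec2 = AmazonEC2ClientBuilder.defaultClient();
        DescribeInstancesRequest query = new DescribeInstancesRequest().withFilters(
                new Filter("availability-zone").withValues(doomedZone),
                new Filter("instance-state-name").withValues("running"));
        // First page of results only; a real tool would paginate and throttle.
        for (Reservation reservation : ec2.describeInstances(query).getReservations()) {
            for (Instance instance : reservation.getInstances()) {
                ec2.terminateInstances(
                        new TerminateInstancesRequest().withInstanceIds(instance.getInstanceId()));
            }
        }
    }
}
```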
What about larger catastrophes?
Anyone remember Sandy?
Chaos Kong (*some day soon*)
The Sick and Wounded
Latency Monkey
Hystrix, RxJava
http://techblog.netflix.com/2012/02/fault-tolerance-in-high-volume.html
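For a concrete feel of the fallback pattern the slide refers to, here is a minimal Hystrix command, assuming Hystrix 1.x; the failing run() is a stand-in for a real remote call:

```java
import com.netflix.hystrix.HystrixCommand;
import com.netflix.hystrix.HystrixCommandGroupKey;

/** A remote call wrapped so failures and timeouts degrade gracefully instead of cascading. */
public class TopPicksCommand extends HystrixCommand<String> {
    private final String userId;

    public TopPicksCommand(String userId) {
        super(HystrixCommandGroupKey.Factory.asKey("Personalization"));
        this.userId = userId;
    }

    @Override
    protected String run() {
        // Stand-in for the real remote call; may throw or exceed the command timeout.
        throw new RuntimeException("personalization service unavailable for " + userId);
    }

    @Override
    protected String getFallback() {
        // Degraded experience: a generic list instead of an error page.
        return "popular-titles-for-everyone";
    }

    public static void main(String[] args) {
        // Prints the fallback, because run() failed.
        System.out.println(new TopPicksCommand("user-123").execute());
    }
}
```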
Latency Monkey taught us…
• Startup resiliency is often missed
• An ongoing, unified approach to runtime dependency management is important (visibility & transparency get missed otherwise)
• Know thy neighbor (unknown dependencies)
• Fallbacks can fail too
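The injection itself can be as simple as a delay added to a fraction of requests. A sketch assuming the javax.servlet API (Servlet 3.x or later); the class, rate, and delay here are hypothetical, not the Simian Army implementation:

```java
import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import java.io.IOException;
import java.util.concurrent.ThreadLocalRandom;

/** Delays a small fraction of requests to exercise client timeouts and fallbacks. */
public class LatencyInjectionFilter implements Filter {
    private static final double IMPACT_RATE = 0.01; // hit 1% of requests
    private static final long MAX_DELAY_MS = 5_000; // add up to 5s of latency

    @Override
    public void doFilter(ServletRequest request, ServletResponse response, FilterChain chain)
            throws IOException, ServletException {
        if (ThreadLocalRandom.current().nextDouble() < IMPACT_RATE) {
            try {
                // Callers with sound timeouts and fallbacks won't notice; the rest will tell you.
                Thread.sleep(ThreadLocalRandom.current().nextLong(MAX_DELAY_MS));
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }
        chain.doFilter(request, response);
    }

    @Override public void init(FilterConfig filterConfig) { }
    @Override public void destroy() { }
}
```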
Entropy
Clutter accumulates
• Complexity
• Cruft
• Vulnerabilities
• Cost
Janitor Monkey
Janitor Monkey taught us…
• Label everything
• Clutter builds up
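A sketch of the "mark" half of the job, assuming the AWS SDK for Java (v1): flag unattached EBS volumes as cleanup candidates. Janitor Monkey itself marks resources and gives owners time to respond before anything is deleted:

```java
import com.amazonaws.services.ec2.AmazonEC2;
import com.amazonaws.services.ec2.AmazonEC2ClientBuilder;
import com.amazonaws.services.ec2.model.Volume;

/** Illustrative "mark" pass: list unattached EBS volumes as cleanup candidates. */
public class MiniJanitor {
    public static void main(String[] args) {
        AmazonEC2 ec2 = AmazonEC2ClientBuilder.defaultClient();
        // First page of results only; a real tool would paginate.
        for (Volume volume : ec2.describeVolumes().getVolumes()) {
            boolean unattached = volume.getAttachments().isEmpty();
            boolean unlabeled = volume.getTags().isEmpty(); // no owner tag: nobody to ask
            if (unattached) {
                System.out.printf("Cleanup candidate: %s (unlabeled=%b)%n",
                        volume.getVolumeId(), unlabeled);
            }
        }
    }
}
```

This is also where "label everything" pays off: an owner tag turns an anonymous orphaned resource into a question you can actually send to someone.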
Ranks of the Simian Army
• Chaos Monkey
• Chaos Gorilla
• Latency Monkey
• Janitor Monkey
• Conformity Monkey
• Circus Monkey
• Doctor Monkey
• Howler Monkey
• Security Monkey
• Chaos Kong
• Efficiency Monkey
Observability is key
• Don’t exacerbate real customer issues with failure exercises
• Deep system visibility is key to root-causing failures and understanding the system
Organizational elements
• Every engineer is an operator of the service
• Each failure is an opportunity to learn
• Blameless culture
Goal is to create a learning organization
Assembling the Puzzle
Open Source Projects
Legend: Github / Techblog · Apache Contributions · Techblog Post · Coming Soon
• Priam — Cassandra as a Service
• Astyanax — Cassandra client for Java
• CassJMeter — Cassandra test suite
• Cassandra — Multi-region EC2 datastore support
• Aegisthus — Hadoop ETL for Cassandra
• AWS Usage — Spend analytics
• Governator — Library lifecycle and dependency injection
• Odin — Cloud orchestration
• Blitz4j — Async logging
• Exhibitor — Zookeeper as a Service
• Curator — Zookeeper Patterns
• EVCache — Memcached as a Service
• Eureka / Discovery — Service Directory
• Archaius — Dynamic Properties Service
• Edda — Config state with history
• Denominator — Portable DNS control
• Ribbon — REST Client + mid-tier LB
• Karyon — Instrumented REST Base Server
• Servo and Autoscaling Scripts
• Genie — Hadoop PaaS
• Hystrix — Robust service pattern
• RxJava — Reactive Patterns
• Asgard — AutoScaleGroup-based AWS console
• Chaos Monkey — Robustness verification
• Latency Monkey
• Janitor Monkey
• Bakeries / Aminator
How does it all fit together?
Our Current Catalog of Releases
Free code available at http://netflix.github.com
Takeaways
Regularly inducing failure in your production environment validates resiliency and increases availability.
Use the NetflixOSS platform to handle the heavy lifting for building large-scale, distributed, cloud-native applications.
Thank you!
Any questions?
Ariel Tseitlin
http://www.linkedin.com/in/atseitlin
@atseitlin
