Applying principles of chaos engineering to Serverless

Chaos engineering is a discipline that focuses on improving system resilience through experiments that expose the inherent chaos and failure modes in our system, in a controlled fashion, before these failure modes manifest themselves like wildfire in production and impact our users.

Netflix is undoubtedly the leader in this field, but much of the publicised tooling and writing focuses on killing EC2 instances, and efforts in the serverless community have been largely limited to moving those tools into AWS Lambda functions.

But how can we apply the same principles of chaos to a serverless architecture built around AWS Lambda functions?

These serverless architectures have more inherent chaos and complexity than their serverful counterparts, and we have less control over their runtime behaviour. In short, there are far more unknown unknowns with these systems.

Can we adapt existing practices to expose the inherent chaos in these systems? What are the limitations and new challenges that we need to consider?


  1. 1. applying principles of chaos engineering to serverless
  2. 2. history of Smallpox: earliest evidence of disease in 3rd Century BC Egyptian Mummy; est. 400K deaths per year in 18th Century Europe
  3. 3. history of Smallpox: earliest evidence of disease in 3rd Century BC Egyptian Mummy; est. 400K deaths per year in 18th Century Europe; 1798 first vaccine developed by Edward Jenner
  4. 4. history of Smallpox: earliest evidence of disease in 3rd Century BC Egyptian Mummy; est. 400K deaths per year in 18th Century Europe; 1798 first vaccine developed by Edward Jenner; 1980 WHO certified global eradication
  5. 5. Vaccination is the most effective method of preventing infectious diseases
  6. 6. stimulates the immune system to recognize and destroy the disease before contracting the disease for real
  7. 7. Chaos Engineering controlled experiments to help us learn about our system’s behaviour and build confidence in its ability to withstand turbulent conditions
  8. 8. Yan Cui http://theburningmonk.com @theburningmonk Principal Engineer @
  9. 9. Yan Cui http://theburningmonk.com @theburningmonk Principal Engineer @
  10. 10. “Netflix for sports” offices in London, Leeds, Katowice and Tokyo
  11. 11. available in Austria, Switzerland, Germany, Japan and Canada Italy coming soon ;-)
  12. 12. available on 30+ platforms
  13. 13. ~500,000 concurrent viewers
  14. 14. “Netflix for sports” offices in London, Leeds, Katowice and Tokyo We’re hiring! Visit engineering.dazn.com to learn more. follow @DAZN_ngnrs for updates about the engineering team.
  15. 15. it’s about building confidence, NOT breaking things
  16. 16. I’m gonna inject you with a deadly disease now
  17. 17. http://principlesofchaos.org
  18. 18. STEP 1. define “Steady State” aka what does a normal, working condition look like?
  19. 19. this is not a steady state
  20. 20. STEP 2. hypothesize steady state will continue in both the control group & the experiment group i.e. you should have a reasonable degree of confidence the system would handle the failure before you proceed with the experiment
  21. 21. explore unknown unknowns away from production
  22. 22. treat production with the care it deserves
  23. 23. the goal is NOT, to actually hurt production
  24. 24. If you know the system would break, and you did it anyway… then it’s NOT a chaos experiment. It’s called being IRRESPONSIBLE.
  25. 25. STEP 3. inject realistic failures e.g. server crash, network error, HD malfunction, etc.
  26. 26. https://github.com/Netflix/SimianArmy
  27. 27. https://github.com/Netflix/SimianArmy http://oreil.ly/2tZU1Sn
  28. 28. STEP 4. disprove hypothesis i.e. look for differences from steady state
  29. 29. if a WEAKNESS is uncovered, IMPROVE it before the behaviour manifests in the system at large
  30. 30. Chaos Engineering controlled experiments to help us learn about our system’s behaviour and build confidence in its ability to withstand turbulent conditions
  31. 31. Chaos Engineering controlled experiments to help us learn about our system’s behaviour and build confidence in its ability to withstand turbulent conditions
  32. 32. communication
  33. 33. ensure everyone knows what you’re doing
  34. 34. ensure everyone knows what you’re doing NO surprises!
  35. 35. communication Timing
  36. 36. run experiments during office hours
  37. 37. AVOID important dates
  38. 38. communication Timing contain Blast radius
  39. 39. smallest change that allows you to detect a signal that steady state is disrupted
  40. 40. rollback at the first sign of TROUBLE!
  41. 41. communication Timing contain Blast radius
  42. 42. don’t try to run before you know how to walk.
  43. 43. by Russ Miles @russmiles source https://medium.com/russmiles/chaos-engineering-for-the-business-17b723f26361
  44. 44. chaos monkey kills an EC2 instance; latency monkey induces artificial delay in APIs; chaos gorilla kills an AWS Availability Zone; chaos kong kills an entire AWS region
  45. 45. there is no server…
  46. 46. there is no server… that you can kill
  47. 47. there is more inherent chaos and complexity in a Serverless architecture
  48. 48. smaller units of deployment but A LOT more of them!
  49. 49. more difficult to harden around boundaries (serverful vs. serverless)
  50. 50. SNS, Kinesis, CloudWatch Events, CloudWatch Logs, IoT, DynamoDB, S3, SES, …
  51. 51. SNS, Kinesis, CloudWatch Events, CloudWatch Logs, IoT, DynamoDB, S3, SES, … more intermediary services, and greater variety too
  52. 52. SNS, Kinesis, CloudWatch Events, CloudWatch Logs, IoT, DynamoDB, S3, SES, … each with its own set of failure modes
  53. 53. serverful vs. serverless: more configurations, more opportunities for misconfiguration
  54. 54. more unknown failure modes in infrastructure that we don’t control
  55. 55. often there’s little we can do when an outage occurs in the platform
  56. 56. improperly tuned timeouts
  57. 57. missing error handling
  58. 58. missing fallback when downstream is unavailable (see the fallback sketch after the slide list)
  59. 59. LATENCY INJECTION
  60. 60. STEP 1. define “Steady State” aka what does a normal, working condition look like?
  61. 61. what metrics do you monitor?
  62. 62. 9X-percentile latency, error count, yield (% of requests completed), harvest (completeness of results) (see the steady-state metrics sketch after the slide list)
  63. 63. STEP 2. hypothesize steady state will continue in both the control group & the experiment group i.e. you should have a reasonable degree of confidence the system would handle the failure before you proceed with the experiment
  64. 64. API Gateway
  65. 65. consider the effect of cold-starts & API Gateway overhead
  66. 66. use short timeout for API calls
  67. 67. the goal of a timeout strategy is to give HTTP requests the best chance to succeed, provided that doing so does not cause the calling function itself to err
  68. 68. fixed timeouts are tricky to get right…
  69. 69. fixed timeouts are tricky to get right… too short and you don’t give requests the best chance to succeed
  70. 70. fixed timeouts are tricky to get right… too long and you run the risk of letting the request time out the calling function
  71. 71. and it gets worse when you make multiple API calls in one function…
  72. 72. set the request timeout based on the amount of invocation time left (see the timeout sketch after the slide list)
  73. 73. log the timeout incident with as much context as possible e.g. timeout value, correlation IDs, request object, …
  74. 74. report custom metrics (see the custom metrics sketch after the slide list)
  75. 75. be mindful when you sacrifice precision for availability, user experience is king
  76. 76. STEP 3. inject realistic failures e.g. server crash, network error, HD malfunction, etc.
  77. 77. where to inject latency?
  78. 78. hypothesis: function has appropriate timeout on its HTTP communications and can degrade gracefully when these requests time out
  79. 79. should also be applied to 3rd party services we depend on, e.g. DynamoDB
  80. 80. what’s the blast radius?
  81. 81. [diagram: public-api-a and public-api-b each call internal-api through an http client]
  82. 82. hypothesis: all functions have appropriate timeout on their HTTP communications to this internal API, and can degrade gracefully when requests are timed out
  83. 83. large blast radius, risky…
  84. 84. could be effective when used away from the production environment, to weed out weaknesses quickly
  85. 85. not priming developers to build more resilient systems
  86. 86. development
  87. 87. development production
  88. 88. Priming (psychology): Priming is a technique whereby exposure to one stimulus influences a response to a subsequent stimulus, without conscious guidance or intention. It is a technique in psychology used to train a person's memory both in positive and negative ways.
  89. 89. make dev environments better resemble the turbulent conditions you should realistically expect your system to survive in production
  90. 90. hypothesis: the client app has appropriate timeout on their HTTP communication with the server, and can degrade gracefully when requests are timed out
  91. 91. STEP 4. disprove hypothesis i.e. look for differences from steady state
  92. 92. how to inject latency?
  93. 93. static weaver (e.g. AspectJ, PostSharp), or dynamic proxies
  94. 94. manually crafted wrapper library
  95. 95. configured in SSM Parameter Store
  96. 96. no injected latency
  97. 97. with injected latency
  98. 98. factory wrapper function (think bluebird’s promisifyAll function) (see the latency injection sketch after the slide list)
  99. 99. ERROR INJECTION (see the error injection sketch after the slide list)
  100. 100. failures are INEVITABLE
  101. 101. the only way to truly know your system’s resilience against failures is to test it through controlled experiments
  102. 102. vaccinate your serverless architecture against failures
  103. 103. Yan Cui http://theburningmonk.com @theburningmonk
  104. 104. @theburningmonk theburningmonk.com github.com/theburningmonk
  105. 105. API Gateway and Kinesis, Authentication & authorisation (IAM, Cognito), Testing, Running & Debugging functions locally, Log aggregation, Monitoring & Alerting, X-Ray, Correlation IDs, CI/CD, Performance and Cost optimisation, Error Handling, Configuration management, VPC, Security, Leading practices (API Gateway, Kinesis, Lambda), Canary deployments http://bit.ly/production-ready-serverless
  106. 106. API Gateway and Kinesis, Authentication & authorisation (IAM, Cognito), Testing, Running & Debugging functions locally, Log aggregation, Monitoring & Alerting, X-Ray, Correlation IDs, CI/CD, Performance and Cost optimisation, Error Handling, Configuration management, VPC, Security, Leading practices (API Gateway, Kinesis, Lambda), Canary deployments http://bit.ly/production-ready-serverless get 40% off with code: ytcui
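
The fallback slide 58 calls for can be as simple as returning a pre-agreed degraded response when a downstream dependency is slow or unavailable. Below is a minimal sketch; the endpoint, the getRecommendations helper and the default payload are all hypothetical, not taken from the talk.

```typescript
// Hypothetical fallback for a downstream dependency (slide 58).
// The endpoint, helper name and default payload are illustrative.
import axios from 'axios';

// pre-agreed degraded response to serve when the downstream call fails
const DEFAULT_RECOMMENDATIONS: string[] = [];

export async function getRecommendations(userId: string): Promise<string[]> {
  try {
    const res = await axios.get(
      `https://internal-api.example.com/recommendations/${userId}`,
      { timeout: 1000 } // don't wait longer than the SLA allows
    );
    return res.data;
  } catch (err) {
    // downstream is slow or unavailable: log the incident and degrade
    // gracefully instead of failing the whole request
    console.error('recommendations unavailable, serving fallback', {
      userId,
      error: (err as Error).message,
    });
    return DEFAULT_RECOMMENDATIONS;
  }
}
```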
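
For step 1 (slides 60-62), the steady state has to be expressed in metrics you already monitor, such as 9X-percentile latency and error count. As one illustration, here is a sketch that reads the p99 latency of an API Gateway API from CloudWatch; the helper name and time window are arbitrary, and the AWS/ApiGateway Latency metric is used only as an example of a steady-state signal.

```typescript
// Sketch: read one steady-state signal (p99 latency of an API Gateway API)
// from CloudWatch. Function and parameter names are illustrative.
import { CloudWatch } from 'aws-sdk';

const cloudwatch = new CloudWatch();

export async function p99Latency(apiName: string, windowMinutes: number): Promise<number | undefined> {
  const end = new Date();
  const start = new Date(end.getTime() - windowMinutes * 60 * 1000);

  const res = await cloudwatch.getMetricStatistics({
    Namespace: 'AWS/ApiGateway',
    MetricName: 'Latency',
    Dimensions: [{ Name: 'ApiName', Value: apiName }],
    StartTime: start,
    EndTime: end,
    Period: windowMinutes * 60,  // one datapoint covering the whole window
    ExtendedStatistics: ['p99'],
  }).promise();

  const datapoint = (res.Datapoints || [])[0];
  return datapoint && datapoint.ExtendedStatistics
    ? datapoint.ExtendedStatistics['p99']
    : undefined;
}
```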
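
Slides 66-73 argue for deriving each HTTP request's timeout from the invocation time the function has left, and for logging timeouts with as much context as possible. Here is a minimal sketch of that idea, assuming axios as the HTTP client; context.getRemainingTimeInMillis() is the real Lambda API, while the buffer value, the correlationId parameter and the httpGet helper are illustrative.

```typescript
// Sketch: give each HTTP request the best chance to succeed without letting
// it time out the calling function (slides 66-73). Only
// context.getRemainingTimeInMillis() is a real Lambda API; the rest is illustrative.
import axios from 'axios';
import { Context } from 'aws-lambda';

const BUFFER_MS = 500; // time reserved for the function to recover and respond after a timeout

export async function httpGet(url: string, context: Context, correlationId: string) {
  // never wait longer than the invocation has left, minus a safety buffer
  const timeout = Math.max(context.getRemainingTimeInMillis() - BUFFER_MS, 100);

  try {
    return await axios.get(url, { timeout });
  } catch (err) {
    // log the timeout incident with as much context as possible (slide 73)
    console.error(JSON.stringify({
      message: 'http request failed or timed out',
      url,
      timeout,
      correlationId,
      error: (err as Error).message,
    }));
    throw err;
  }
}
```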
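
Slide 74 suggests reporting custom metrics when these timeouts and fallbacks kick in. One way to do that is CloudWatch's putMetricData, sketched below with an illustrative namespace and metric name. Calling CloudWatch synchronously adds latency to the invocation, so you may prefer to write the metric to the logs and publish it asynchronously instead.

```typescript
// Sketch: record a custom metric when a timeout fallback kicks in (slide 74).
// The namespace, metric name and dimension are illustrative.
import { CloudWatch } from 'aws-sdk';

const cloudwatch = new CloudWatch();

export async function recordTimeout(functionName: string): Promise<void> {
  await cloudwatch.putMetricData({
    Namespace: 'ChaosExperiments',
    MetricData: [{
      MetricName: 'HttpTimeouts',
      Dimensions: [{ Name: 'FunctionName', Value: functionName }],
      Unit: 'Count',
      Value: 1,
    }],
  }).promise();
}
```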
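
Slides 93-98 describe injecting latency through a manually crafted wrapper (or a factory function in the spirit of bluebird's promisifyAll), with the experiment's configuration held in SSM Parameter Store. The sketch below is one possible shape for that wrapper; the parameter name, the config schema and the use of an axios interceptor are assumptions, not the implementation shown in the talk.

```typescript
// Sketch: config-driven latency injection via a factory wrapper (slides 93-98).
// The SSM parameter name, config shape and axios interceptor are assumptions.
import { SSM } from 'aws-sdk';
import axios, { AxiosInstance } from 'axios';

interface ChaosConfig {
  isEnabled: boolean;
  delayMs: number;
  probability: number; // 0..1, fraction of requests to delay
}

const ssm = new SSM();

export async function loadChaosConfig(): Promise<ChaosConfig> {
  // the chaos experiment is toggled and tuned centrally in Parameter Store
  const res = await ssm.getParameter({ Name: '/chaos/latency-injection' }).promise();
  return JSON.parse(res.Parameter!.Value!) as ChaosConfig;
}

const sleep = (ms: number) => new Promise(resolve => setTimeout(resolve, ms));

// factory: take a plain HTTP client and return one with latency injection mixed in
export function withLatencyInjection(client: AxiosInstance, config: ChaosConfig): AxiosInstance {
  client.interceptors.request.use(async req => {
    if (config.isEnabled && Math.random() < config.probability) {
      console.log(`chaos: injecting ${config.delayMs}ms of latency`);
      await sleep(config.delayMs);
    }
    return req;
  });
  return client;
}

// usage: const http = withLatencyInjection(axios.create(), await loadChaosConfig());
```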
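
The deck introduces error injection (slide 99) without showing code. Here is a minimal sketch under the same config-driven approach as the latency example: wrap the handler so a configurable fraction of invocations fails, which lets you verify that retries, error handling and fallbacks behave as hypothesized. All names here are illustrative.

```typescript
// Sketch: error injection (slide 99). Wrap a handler so a configurable
// fraction of invocations throws. All names are illustrative.
interface ErrorInjectionConfig {
  isEnabled: boolean;
  probability: number; // 0..1, fraction of invocations that should fail
}

export function withErrorInjection<TEvent, TResult>(
  handler: (event: TEvent) => Promise<TResult>,
  config: ErrorInjectionConfig
): (event: TEvent) => Promise<TResult> {
  return async (event: TEvent) => {
    if (config.isEnabled && Math.random() < config.probability) {
      // the injected failure should be easy to identify in the logs
      throw new Error('chaos: injected failure');
    }
    return handler(event);
  };
}
```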
