
Applying principles of chaos engineering to serverless


Chaos engineering is a discipline that focuses on improving system resilience through experiments that expose the inherent chaos and failure modes in our system, in a controlled fashion, before these failure modes manifest themselves like a wildfire in production and impact our users.

Netflix is undoubtedly the leader in this field, but most of the publicised tools and articles focus on killing EC2 instances, and efforts in the serverless community have been largely limited to moving those tools into AWS Lambda functions.

But how can we apply the same principles of chaos to a serverless architecture built around AWS Lambda functions?

These serverless architectures have more inherent chaos and complexity than their serverful counterparts, and we have less control over their runtime behaviour. In short, there are far more unknown unknowns with these systems.

Can we adapt existing practices to expose the inherent chaos in these systems? What are the limitations and new challenges that we need to consider?

Join us in this talk as Yan Cui shares his thought experiments, and actual experiments, in his pursuit to understand how we can apply the principles of chaos to a serverless architecture.


Applying principles of chaos engineering to serverless

  1. 1. APPLYING PRINCIPLES of chaos engineering to SERVERLESS
  2. 2. Yan Cui http://theburningmonk.com @theburningmonk
  3. 3. Offices in London and Katowice
  4. 4. Offices in London and Katowice WE’RE HIRING! https://bit.ly/2HfLzGE
  5. 5. history of Smallpox est. 400K deaths per year in 18th Century Europe. earliest evidence of disease in 3rd Century BC Egyptian Mummy
  6. 6. history of Smallpox est. 400K deaths per year in 18th Century Europe. earliest evidence of disease in 3rd Century BC Egyptian Mummy 1798 first vaccine developed Edward Jenner
  7. 7. 1798 first vaccine developed 1980 history of Smallpox Edward Jenner WHO certified global eradication est. 400K deaths per year in 18th Century Europe. earliest evidence of disease in 3rd Century BC Egyptian Mummy
  8. 8. Vaccination is the most effective method of preventing infectious diseases
  9. 9. stimulates the immune system to recognize and destroy the disease before contracting the disease for real
  10. 10. Chaos Engineering controlled experiments to help us learn about our system’s behaviour and build confidence in its ability to withstand turbulent conditions
  11. 11. it’s about building confidence, NOT breaking things
  12. 12. I’m gonna inject you with a deadly disease now
  13. 13. http://principlesofchaos.org
  14. 14. STEP 1. define “Steady State” a.k.a. what does a normal, working condition look like?
  15. 15. this is not a steady state
  16. 16. STEP 2. hypothesize that steady state will continue in both the control group & the experiment group, i.e. you should have a reasonable degree of confidence that the system would handle the failure before you proceed with the experiment
  17. 17. explore unknown unknowns away from production
  18. 18. treat production with the care it deserves
  19. 19. the goal is NOT, to actually hurt production
  20. 20. If you know the system would break, and you did it anyway… then it’s NOT a chaos experiment. It’s called being IRRESPONSIBLE.
  21. 21. STEP 3. inject realistic failures e.g. server crash, network error, HD malfunction, etc.
  22. 22. https://github.com/Netflix/SimianArmy
  23. 23. https://github.com/Netflix/SimianArmy http://oreil.ly/2tZU1Sn
  24. 24. STEP 4. disprove the hypothesis, i.e. look for deviations from the steady state
  25. 25. if a WEAKNESS is uncovered, improve it before the behaviour manifests in the system at large
  26. 26. Chaos Engineering controlled experiments to help us learn about our system’s behaviour and build confidence in its ability to withstand turbulent conditions
  27. 27. Chaos Engineering controlled experiments to help us learn about our system’s behaviour and build confidence in its ability to withstand turbulent conditions
  28. 28. communication
  29. 29. ensure everyone knows what you’re doing
  30. 30. ensure everyone knows what you’re doing NO surprises!
  31. 31. communication Timing
  32. 32. run experiments during office hours
  33. 33. avoid important dates
  34. 34. communication Timing contain Blast radius
  35. 35. smallest change that allows you to detect a signal that steady state is disrupted
  36. 36. roll back at the first sign of TROUBLE!
  37. 37. communication Timing contain Blast radius
  38. 38. Don’t try to run before you know how to walk.
  39. 39. by Russ Miles @russmiles source https://medium.com/russmiles/chaos-engineering-for-the-business-17b723f26361
  40. 40. chaos monkey kills an EC2 instance latency monkey induces artificial delay in APIs chaos gorilla kills an AWS Availability Zone chaos kong kills an entire AWS region
  41. 41. there is no server…
  42. 42. there is no server… that you can kill
  43. 43. there is more inherent chaos and complexity in a serverless architecture
  44. 44. smaller units of deployment and A LOT more of them!
  45. 45. more difficult to harden around boundaries (serverful vs. serverless)
  46. 46. ? SNS, Kinesis, CloudWatch Events, CloudWatch Logs, IoT, DynamoDB, S3, SES
  47. 47. ? SNS, Kinesis, CloudWatch Events, CloudWatch Logs, IoT, DynamoDB, S3, SES more intermediary services, and greater variety too
  48. 48. ? SNS, Kinesis, CloudWatch Events, CloudWatch Logs, IoT, DynamoDB, S3, SES each with its own set of failure modes
  49. 49. serverful vs. serverless: more configurations, more opportunities for misconfiguration
  50. 50. more unknown failure modes in infrastructure that we don’t control
  51. 51. often there’s little we can do when an outage occurs in the platform
  52. 52. improperly tuned timeouts
  53. 53. missing error handling
  54. 54. missing fallback when downstream is unavailable
  55. 55. LATENCY INJECTION
  56. 56. STEP 1. define “Steady State” a.k.a. what does a normal, working condition look like?
  57. 57. what metrics do you monitor?
  58. 58. 9X-percentile latency, error count, yield (% of requests completed), harvest (completeness of results)
  59. 59. STEP 2. hypothesize that steady state will continue in both the control group & the experiment group, i.e. you should have a reasonable degree of confidence that the system would handle the failure before you proceed with the experiment
  60. 60. API Gateway
  61. 61. consider the effect of cold-starts & API Gateway overhead
  62. 62. use short timeouts for API calls
  63. 63. the goal of a timeout strategy is to give HTTP requests the best chance to succeed, provided that doing so does not cause the calling function itself to err
  64. 64. fixed timeouts are tricky to get right…
  65. 65. fixed timeouts are tricky to get right… too short and you don’t give requests the best chance to succeed
  66. 66. fixed timeouts are tricky to get right… too long and you run the risk of letting the request time out the calling function
  67. 67. and it gets worse when you make multiple API calls in one function…
  68. 68. set the request timeout based on the amount of invocation time left (see the timeout sketch after the transcript)
  69. 69. log the timeout incident with as much context as possible, e.g. timeout value, correlation IDs, request object, …
  70. 70. report custom metrics (see the metrics sketch after the transcript)
  71. 71. be mindful when you sacrifice precision for availability; user experience is king
  72. 72. STEP 3. inject realistic failures e.g. server crash, network error, HD malfunction, etc.
  73. 73. where to inject latency?
  74. 74. hypothesis: function has appropriate timeout on its HTTP communications and can degrade gracefully when these requests time out
  75. 75. this should also be applied to 3rd-party services we depend on, e.g. DynamoDB
  76. 76. what’s the blast radius?
  77. 77. [diagram: public-api-a and public-api-b each call internal-api through an http client]
  78. 78. hypothesis: all functions have appropriate timeouts on their HTTP communications to this internal API, and can degrade gracefully when these requests time out
  79. 79. large blast radius, risky…
  80. 80. could be effective when used away from production environment, to weed out weaknesses quickly
  81. 81. not priming developers to build more resilient systems
  82. 82. development
  83. 83. development production
  84. 84. Priming (psychology): Priming is a technique whereby exposure to one stimulus influences a response to a subsequent stimulus, without conscious guidance or intention. It is a technique in psychology used to train a person's memory both in positive and negative ways.
  85. 85. make dev environments better resemble the turbulent conditions you should realistically expect your system to survive in production
  86. 86. hypothesis: the client app has an appropriate timeout on its HTTP communication with the server, and can degrade gracefully when requests time out
  87. 87. STEP 4. disprove the hypothesis, i.e. look for deviations from the steady state
  88. 88. how to inject latency?
  89. 89. static weaver (e.g. AspectJ, PostSharp), or dynamic proxies
  90. 90. https://theburningmonk.com/2015/04/design-for-latency-issues/
  91. 91. manually crafted wrapper library
  92. 92. configured in SSM Parameter Store
  93. 93. no injected latency
  94. 94. with injected latency
  95. 95. factory wrapper function (think bluebird’s promisifyAll function); see the latency-injection sketch after the transcript
  96. 96. ERROR INJECTION
  97. 97. failures are INEVITABLE
  98. 98. the only way to truly know your system’s resilience against failures is to test it through controlled experiments
  99. 99. vaccinate your serverless architecture against failures
  100. 100. Offices in London and Katowice WE’RE HIRING! https://bit.ly/2HfLzGE
  101. 101. Yan Cui http://theburningmonk.com @theburningmonk
  102. 102. @theburningmonk theburningmonk.com github.com/theburningmonk
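Timeout sketch. Slides 62-70 describe the timeout strategy only in prose: derive each HTTP call's timeout from how much invocation time the function has left, log timeouts with plenty of context, and degrade gracefully. Below is a minimal Node.js (TypeScript) sketch of that idea; the 500 ms buffer, the internal-api URL and the `callWithDynamicTimeout` helper are illustrative assumptions, not code from the talk.

```typescript
// Sketch of slide 68: base the HTTP request timeout on the invocation time
// the function has left, instead of a fixed value.
// Assumes a Node.js 18+ Lambda runtime (global fetch, AbortSignal.timeout)
// and the @types/aws-lambda type definitions.
import type { Context } from "aws-lambda";

const BUFFER_MS = 500; // leave room to log and return a fallback (illustrative value)

async function callWithDynamicTimeout(url: string, context: Context) {
  const timeoutMs = Math.max(context.getRemainingTimeInMillis() - BUFFER_MS, 1);
  try {
    return await fetch(url, { signal: AbortSignal.timeout(timeoutMs) });
  } catch (err) {
    // slide 69: log the timeout incident with as much context as possible
    console.log(JSON.stringify({
      message: "HTTP request failed or timed out",
      url,
      timeoutMs,
      awsRequestId: context.awsRequestId,
      error: (err as Error).name,
    }));
    throw err;
  }
}

// Illustrative handler: degrade gracefully (slide 74) if the downstream call times out.
export const handler = async (_event: unknown, context: Context) => {
  try {
    const res = await callWithDynamicTimeout("https://internal-api.example.com/data", context);
    return { statusCode: 200, body: await res.text() };
  } catch {
    return { statusCode: 200, body: JSON.stringify({ degraded: true, results: [] }) };
  }
};
```

When a function makes several API calls in sequence (slide 67), recomputing `getRemainingTimeInMillis()` before each call means later calls automatically get a tighter budget, so the whole chain still finishes before the calling function itself times out.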
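Metrics sketch. Slide 70's "report custom metrics" can be done without a blocking CloudWatch API call by logging in a format that CloudWatch Logs converts into metrics. The sketch below uses the CloudWatch Embedded Metric Format, which is one option and not necessarily the approach used in the talk; the `ChaosExperiments` namespace and `HttpTimeouts` metric name are made up for illustration.

```typescript
// Sketch for slide 70: emit a custom metric asynchronously by writing a
// structured log line in CloudWatch Embedded Metric Format (EMF).
// Namespace, dimension and metric name are illustrative assumptions.
export function recordTimeoutMetric(functionName: string): void {
  console.log(JSON.stringify({
    _aws: {
      Timestamp: Date.now(),
      CloudWatchMetrics: [{
        Namespace: "ChaosExperiments",
        Dimensions: [["FunctionName"]],
        Metrics: [{ Name: "HttpTimeouts", Unit: "Count" }],
      }],
    },
    FunctionName: functionName,
    HttpTimeouts: 1,
  }));
}
```

Logging the metric rather than calling a metrics API synchronously keeps the hot path fast, which matters when the remaining timeout budget is already tight.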
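Latency-injection sketch. Slides 91-95 outline the mechanism only in broad strokes: a manually crafted wrapper around the HTTP client, toggled by configuration in SSM Parameter Store and exposed through a factory wrapper in the spirit of bluebird's promisifyAll. Here is a minimal TypeScript sketch of that shape; the `/chaos/latency-injection` parameter name and the `{ enabled, probability, delayMs }` config shape are assumptions for illustration.

```typescript
// Sketch of slides 91-95: a factory that wraps any async function and injects
// artificial latency based on config held in SSM Parameter Store.
// Parameter name and config shape are illustrative assumptions.
import { SSMClient, GetParameterCommand } from "@aws-sdk/client-ssm";

type LatencyConfig = { enabled: boolean; probability: number; delayMs: number };

const ssm = new SSMClient({});
const sleep = (ms: number) => new Promise((resolve) => setTimeout(resolve, ms));

async function loadConfig(): Promise<LatencyConfig> {
  const res = await ssm.send(
    new GetParameterCommand({ Name: "/chaos/latency-injection" })
  );
  return JSON.parse(
    res.Parameter?.Value ?? '{"enabled":false,"probability":0,"delayMs":0}'
  );
}

// Factory wrapper (slide 95): returns a function that behaves like the
// original, plus injected latency when the experiment is switched on.
export function withLatencyInjection<TArgs extends unknown[], TResult>(
  fn: (...args: TArgs) => Promise<TResult>
): (...args: TArgs) => Promise<TResult> {
  return async (...args: TArgs) => {
    const cfg = await loadConfig();
    if (cfg.enabled && Math.random() < cfg.probability) {
      await sleep(cfg.delayMs); // the injected latency (slide 94)
    }
    return fn(...args);
  };
}
```

In practice the config would be cached and refreshed periodically rather than fetched on every call; turning the parameter off restores the "no injected latency" steady state of slide 93 without a redeploy.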
