How to build observability into Serverless (BuildStuff 2018)

  1. How to build observability into Serverless Yan Cui @theburningmonk
  2. Abraham Wald
  3. Abraham Wald
  4. Abraham Wald
  5. Abraham Wald Wald noted that the study only considered the aircraft that had survived their missions—the bombers that had been shot down were not present for the damage assessment. The holes in the returning aircraft, then, represented areas where a bomber could take damage and still return home safely.
  6. Abraham Wald Wald noted that the study only considered the aircraft that had survived their missions—the bombers that had been shot down were not present for the damage assessment. The holes in the returning aircraft, then, represented areas where a bomber could take damage and still return home safely.
  7. survivor bias in monitoring
  8. survivor bias in monitoring Only focus on failure modes that we were able to successfully identify through investigation and postmortem in the past. The bullet holes that shot us down and we couldn’t identify stay invisible, and will continue to shoot us down.
  9. What do I mean by “observability”?
  10. Monitoring watching out for known failure modes in the system, e.g. network I/O, CPU, memory usage, …
  11. Observability being able to debug the system, and gain insights into the system’s behaviour
  12. In control theory, observability is a measure of how well internal states of a system can be inferred from knowledge of its external outputs. https://en.wikipedia.org/wiki/Observability
  13. Known Success
  14. Known Success, Known Errors
  15. Known Success, Known Errors: easy to monitor!
  16. Known Success, Known Errors, Known Unknowns
  17. Known Success, Known Errors, Known Unknowns, Unknown Unknowns
  18. Known Success, Known Errors, Known Unknowns, Unknown Unknowns: invisible bullet holes
  19. Known Success, Known Errors, Known Unknowns, Unknown Unknowns
  20. Known Success, Known Errors, Known Unknowns, Unknown Unknowns: only alert on this
  21. Known Success, Known Errors, Known Unknowns, Unknown Unknowns: alert on the absence of this!
  22. Known Success, Known Errors, Known Unknowns, Unknown Unknowns: what went wrong?
  23. “These are the four pillars of the Observability Engineering team’s charter: • Monitoring • Alerting/Visualization • Distributed systems tracing infrastructure • Log aggregation/analytics” - Observability Engineering at Twitter, http://bit.ly/2DnjyuW
  24. microservices death stars circa 2015
  25. mm… I wonder what’s going on here… microservices death stars circa 2015
  26. I got this! microservices death stars circa 2015
  27. About me ▪ Principal Engineer at DAZN ▪ AWS Serverless Hero ▪ Author of Production-Ready Serverless* by Manning ▪ Blogger** ▪ Speaker * https://bit.ly/production-ready-serverless ** https://theburningmonk.com
  28. https://www.ft.com/content/07d375ee-6ee5-11e8-92d3-6c13e5c92914
  29. https://www.theguardian.com/media/2018/may/14/streaming-service-dazn-netflix-sport-us-boxing-eddie-hearn
  30. About DAZN ▪ Available in 7 countries - Austria, Switzerland, Germany, Japan, Canada, Italy and USA ▪ Available on 30+ platforms
  31. About DAZN ▪ ~1,000,000 concurrent viewers at peak
  32. follow @dazneng for updates about the engineering team We’re hiring! Visit engineering.dazn.com to learn more. WE’RE HIRING!
  33. new challenges
  34. NO ACCESS to underlying OS
  35. NOWHERE to install agents/daemons
  36. new challenges: • nowhere to install agents/daemons
  37. user request user request user request user request user request user request user request critical paths: minimise user-facing latency handler handler handler handler handler handler handler
  38. user request user request user request user request user request user request user request critical paths: minimise user-facing latency StatsD handler handler handler handler handler handler handler rsyslog background processing: batched, asynchronous, low overhead
  39. user request user request user request user request user request user request user request critical paths: minimise user-facing latency StatsD handler handler handler handler handler handler handler rsyslog background processing: batched, asynchronous, low overhead NO background processing except what platform provides
  40. new challenges: • no background processing • nowhere to install agents/daemons
  41. EC2 concurrency used to be handled by your code
  42. EC2 Lambda Lambda Lambda Lambda Lambda now, it’s handled by the AWS Lambda platform
  43. EC2 logs & metrics used to be batched here
  44. EC2 Lambda Lambda Lambda Lambda Lambda now, they are batched in each concurrent execution, at best…
  45. HIGHER concurrency to log aggregation/telemetry system
  46. new challenges: • higher concurrency to telemetry system • nowhere to install agents/daemons • no background processing
  47. Lambda cold start
  48. Lambda data is batched between invocations
  49. Lambda idle data is batched between invocations
  50. Lambda idle garbage collection data is batched between invocations
  51. Lambda idle garbage collection data is batched between invocations HIGH chance of data loss
  52. new challenges: • high chance of data loss (if batching) • nowhere to install agents/daemons • no background processing • higher concurrency to telemetry system
  53. Lambda
  54. my code send metrics
  55. my code send metrics
  56. my code send metrics internet internet press button something happens
  57. http://bit.ly/2Dpidje
  58. ? functions are often chained together via asynchronous invocations
  59. ? SNS Kinesis CloudWatch Events CloudWatch Logs IoT DynamoDB S3 SES
  60. ? SNS Kinesis CloudWatch Events CloudWatch Logs IoT DynamoDB S3 SES tracing ASYNCHRONOUS invocations through so many different event sources is difficult
  61. new challenges: • asynchronous invocations • nowhere to install agents/daemons • no background processing • higher concurrency to telemetry system • high chance of data loss (if batching)
  62. “These are the four pillars of the Observability Engineering team’s charter: • Monitoring • Alerting/Visualization • Distributed systems tracing infrastructure • Log aggregation/analytics” - Observability Engineering at Twitter, http://bit.ly/2DnjyuW
  63. LOGGING
  64. 2016-07-12T12:24:37.571Z 994f18f9-482b-11e6-8668-53e4eab441ae GOT is off air, what do I do now?
  65. 2016-07-12T12:24:37.571Z 994f18f9-482b-11e6-8668-53e4eab441ae GOT is off air, what do I do now? UTC Timestamp Request Id your log message
  66. one log group per function one log stream for each concurrent invocation
  67. “logs are not easily searchable in CloudWatch Logs” - me
  68. CloudWatch Logs
  69. CloudWatch Logs is an async event source for Lambda
  70. Concurrent Executions Time regional max concurrency functions that are delivering business value
  71. Concurrent Executions Time regional max concurrency functions that are delivering business value ship logs
  72. either set concurrency limit on the log shipping function (and potentially lose logs due to throttling) or…
  73. 1 shard = 1 concurrent execution i.e. control the no. of concurrent executions with no. of shards
  74. CloudWatch Logs
  75. CloudWatch Logs
  76. use structured logging with JSON
  77. https://stackify.com/what-is-structured-logging-and-why-developers-need-it/ https://blog.treasuredata.com/blog/2012/04/26/log-everything-as-json/
  78. https://www.loggly.com/blog/8-handy-tips-consider-logging-json/
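As a minimal sketch of the idea (not the talk's exact code; the module and field names here are illustrative), a structured JSON logger for Node.js Lambda functions only needs a few lines:

```js
// log.js: illustrative structured JSON logger (not the talk's exact code)
const LogLevels = { DEBUG: 20, INFO: 30, WARN: 40, ERROR: 50 };

// read the threshold on every call so it can be flipped at runtime,
// e.g. by a sampling middleware that temporarily enables DEBUG
const threshold = () => LogLevels[process.env.LOG_LEVEL || 'INFO'];

function log (level, message, params = {}) {
  if (LogLevels[level] < threshold()) return;

  // one JSON object per line: easy to filter and parse in CloudWatch Logs, ELK, etc.
  console.log(JSON.stringify({
    level,
    message,
    timestamp: new Date().toISOString(),
    ...params
  }));
}

module.exports = {
  debug: (msg, params) => log('DEBUG', msg, params),
  info: (msg, params) => log('INFO', msg, params),
  warn: (msg, params) => log('WARN', msg, params),
  error: (msg, params) => log('ERROR', msg, params)
};
```

A call such as log.info('GOT is off air, what do I do now?', { requestId }) then emits a single searchable JSON line instead of free text.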
  79. traditional loggers are too heavy for Lambda
  80. CloudWatch Logs $0.50 per GB ingested $0.03 per GB archived per month
  81. CloudWatch Logs $0.50 per GB ingested $0.03 per GB archived per month 1M invocations of a 128MB function = $0.000000208 * 1M + $0.20 = $0.408
  82. DON’T leave debug logging ON in production
  83. have to redeploy ALL the functions along the call path to collect all relevant debug logs
  84. https://github.com/middyjs/middy
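middy wraps a handler with before/after middleware, which is one way to keep debug logging off in production and still collect debug logs for a small share of invocations without a redeploy. A hedged sketch building on the logger above (hook signatures differ across middy versions, and the sampleLogging name is illustrative):

```js
// sample-logging.js: sketch of a sampling middleware in the middy style
// (assumes the log.js sketch above; hook signatures vary across middy versions)
const sampleLogging = ({ sampleRate = 0.01 } = {}) => {
  let previousLevel;

  const restore = () => {
    if (previousLevel) {
      process.env.LOG_LEVEL = previousLevel;
      previousLevel = undefined;
    }
  };

  return {
    before: async () => {
      // enable DEBUG for roughly sampleRate of invocations
      if (Math.random() < sampleRate) {
        previousLevel = process.env.LOG_LEVEL || 'INFO';
        process.env.LOG_LEVEL = 'DEBUG';
      }
    },
    after: async () => restore(),
    onError: async () => restore()
  };
};

module.exports = sampleLogging;
```

Wired up as middy(handler).use(sampleLogging({ sampleRate: 0.01 })), about 1% of invocations emit debug logs while the rest stay at INFO.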
  85. EC2 Lambda Lambda Lambda Lambda Lambda Concurrency is handled by the AWS Lambda platform
  86. the sampling decision has to be followed by the entire call chain
  87. Initial Request ID User ID Session ID User-Agent Order ID …
  88. every function needs to do the right thing and propagate information such as correlation IDs along to APIs, streams, queues, etc.
  89. invest in tools to make it easy to do the “right thing”
  90. nonintrusive
  91. nonintrusive extensible
  92. nonintrusive extensible consistent
  93. nonintrusive extensible consistent works for streams
  94. EC2 Lambda Lambda Lambda Lambda Lambda Concurrency is handled by the AWS Lambda platform
  95. store correlation IDs in global variable
  96. use middleware to auto-capture incoming correlation IDs
  97. extract correlation IDs from the invocation event, and store them in the correlation-ids module (reset at the start of each invocation)
  98. logger to always include captured correlation IDs
  99. HTTP and AWS SDK clients to auto-forward correlation IDs on
  100. context.awsRequestId get-index
  101. context.awsRequestId x-correlation-id get-index
  102. { "headers": { "x-correlation-id": "…" }, … } get-index
  103. { "body": null, "resource": "/restaurants", "headers": { "x-correlation-id": "…" }, … } get-index get-restaurants
  104. [diagram] get-index captures headers["User-Agent"], headers["Debug-Log-Enabled"] and headers["x-correlation-id"] from the function event into global.CONTEXT as x-correlation-* values, includes them in log.info(…), and forwards them on to get-restaurants
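A hedged sketch of the capture step shown above, using the newer async middy hook signature; the store, middleware and header names are illustrative rather than the talk's exact code:

```js
// Sketch only: a correlation-ids store plus a middy-style capture middleware.
// In a real project these would be separate modules.
const correlationIds = (() => {
  let context = {};
  return {
    clearAll: () => { context = {}; },
    set: (key, value) => {
      const name = key.startsWith('x-correlation-') ? key : `x-correlation-${key}`;
      context[name] = value;
    },
    get: () => ({ ...context })
  };
})();

// capture middleware for API Gateway events (adapt for older middy versions)
const captureCorrelationIds = () => ({
  before: async (request) => {
    correlationIds.clearAll(); // reset per invocation

    const headers = request.event.headers || {};
    // fall back to the Lambda request ID if the caller didn't send one
    correlationIds.set('id', headers['x-correlation-id'] || request.context.awsRequestId);

    // carry upstream decisions (debug sampling, original User-Agent) along the chain
    if (headers['Debug-Log-Enabled']) correlationIds.set('debug-log-enabled', headers['Debug-Log-Enabled']);
    if (headers['User-Agent']) correlationIds.set('user-agent', headers['User-Agent']);
  }
});
```

Forwarding is the mirror image: HTTP and AWS SDK client wrappers merge correlationIds.get() into outgoing headers or message attributes, and the logger spreads the same map into every log entry, so neither step relies on the handler code remembering to do it.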
  105. nonintrusive extensible consistent works for streams
  106. MONITORING
  107. new challenges: • no background processing • nowhere to install agents/daemons
  108. my code send metrics internet internet press button something happens
  109. those extra 10-20ms for sending custom metrics compound when you have microservices and multiple APIs are called to serve a single user request
  110. Amazon found every 100ms of latency cost them 1% in sales. http://bit.ly/2EXPfbA
  111. console.log("hydrating yubls from db…"); console.log("fetching user info from user-api"); console.log("MONITORING|1489795335|27.4|latency|user-api-latency"); console.log("MONITORING|1489795335|8|count|yubls-served"); format: MONITORING | timestamp | metric value | metric type | metric name (the first two lines are logs, the MONITORING lines are metrics)
  112. CloudWatch Logs → AWS Lambda → ELK stack (logs), CloudWatch (metrics)
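A small sketch of a helper that emits custom metrics in the MONITORING|… format above (the helper name is illustrative); the log-shipping function can then split metric lines from ordinary logs:

```js
// Illustrative helper: emit a custom metric as a log line in the
// MONITORING|timestamp|value|type|name format shown above; a downstream
// log-shipping function parses these into CloudWatch metrics and ships the rest to ELK
function recordMetric (name, value, type = 'count') {
  const timestamp = Math.floor(Date.now() / 1000); // epoch seconds, as on the slide
  console.log(`MONITORING|${timestamp}|${value}|${type}|${name}`);
}

// e.g.
recordMetric('user-api-latency', 27.4, 'latency');
recordMetric('yubls-served', 8);
```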
  113. trade-off delay cost concurrency
  114. trade-off delay cost concurrency no latency overhead
  115. API Gateway: send custom metrics asynchronously
  116. API Gateway: send custom metrics asynchronously; SNS, Kinesis, S3, …: send custom metrics as part of the function invocation
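For the asynchronous paths, the extra call adds no user-facing latency, so metrics can simply be published during the invocation. A hedged sketch using the AWS SDK for JavaScript v2 (the namespace and dimension are illustrative):

```js
// Sketch: publish a custom metric directly from an asynchronously invoked function
const AWS = require('aws-sdk');
const cloudwatch = new AWS.CloudWatch();

async function publishMetric (metricName, value, unit = 'Count') {
  await cloudwatch.putMetricData({
    Namespace: 'my-service', // illustrative namespace
    MetricData: [{
      MetricName: metricName,
      Dimensions: [{ Name: 'FunctionName', Value: process.env.AWS_LAMBDA_FUNCTION_NAME }],
      Unit: unit,
      Value: value,
      Timestamp: new Date()
    }]
  }).promise();
}

module.exports = { publishMetric };
```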
  117. TRACING
  118. X-Ray
  119. traces don’t span over async invocations: good for identifying the dependencies of a function, but not good enough for tracing the entire call chain as user requests/data flow through the system via async event sources
  120. don’t span over non-AWS services
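Within those limits, instrumenting a Node.js function for X-Ray usually means wrapping the AWS SDK and HTTPS clients so downstream calls appear as subsegments. A minimal sketch, assuming active tracing is enabled on the function (the table name and handler shape are illustrative):

```js
// Sketch: wrap the AWS SDK and HTTPS module so downstream calls show up
// as X-Ray subsegments (requires active tracing on the function)
const AWSXRay = require('aws-xray-sdk-core');
const AWS = AWSXRay.captureAWS(require('aws-sdk'));
AWSXRay.captureHTTPsGlobal(require('https'));

const dynamodb = new AWS.DynamoDB.DocumentClient();

module.exports.handler = async () => {
  // this call is traced automatically once the SDK is wrapped;
  // the table name env var is illustrative
  const res = await dynamodb.scan({ TableName: process.env.restaurants_table }).promise();
  return { statusCode: 200, body: JSON.stringify(res.Items) };
};
```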
  121. write structured logs
  122. instrument your code
  123. make it easy to do the right thing
  124. API Gateway and Kinesis Authentication & authorisation (IAM, Cognito) Testing Running & Debugging functions locally Log aggregation Monitoring & Alerting X-Ray Correlation IDs CI/CD Performance and Cost optimisation Error Handling Configuration management VPC Security Leading practices (API Gateway, Kinesis, Lambda) Canary deployments http://bit.ly/prod-ready-serverless get 40% off with: ytcui
  125. @theburningmonk theburningmonk.com github.com/theburningmonk API Gateway and Kinesis Authentication & authorisation (IAM, Cognito) Testing Running & Debugging functions locally Log aggregation Monitoring & Alerting X-Ray Correlation IDs CI/CD Performance and Cost optimisation Error Handling Configuration management VPC Security Leading practices (API Gateway, Kinesis, Lambda) Canary deployments http://bit.ly/prod-ready-serverless get 40% off with: ytcui