
Throttling APIs Before They Reach the Application



  1. Throttling APIs Before They Reach the Application, by Aditya Patawari
  2. About Me ● Systems Engineer and DevOps Engineer since 2011. ● Founder and Principal Consultant at DevopsNexus. ● Contributor to projects like Kubernetes and the Fedora Project. ● Author of a couple of tech books. ● Regular speaker at conferences such as Rootconf, FOSDEM, Flock, and FOSSASIA.
  3. What Are We Doing Today? Saving the world from us!
  4. What is a web API? ● Nope, I am not talking about the definition we learned in college. ● Today we’ll focus on a simple model: ○ We send a valid request ○ We receive a valid response
  5. API Abuse ● Someone coded an infinite loop to check a status. ● Someone is not happy with you. ● Bots are too happy with you.
  6. Conventional / Popular Methods ● Middleware and libraries like rack-attack and ratelimit ○ Requests still hit the app ○ Can eat up significant resources and increase cost ● Off-the-shelf WAFs ○ Sometimes they are expensive ○ Sometimes they are less flexible
  7. AWS WAF ● Good news: it can rate limit. ● Bad news: it can only rate limit over a 5-minute window. ● Worse news: it can only rate limit on the basis of IP addresses.
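The AWS WAF limitation described above can be seen in the shape of its rate-based rule: the limit is always evaluated over a rolling 5-minute window, and the aggregation key is the client IP. A sketch of such a rule in WAFv2 JSON form (the rule name, metric name, and limit of 2000 are illustrative values, not from the talk):

```json
{
  "Name": "rate-limit-by-ip",
  "Priority": 1,
  "Statement": {
    "RateBasedStatement": {
      "Limit": 2000,
      "AggregateKeyType": "IP"
    }
  },
  "Action": { "Block": {} },
  "VisibilityConfig": {
    "SampledRequestsEnabled": true,
    "CloudWatchMetricsEnabled": true,
    "MetricName": "rateLimitByIp"
  }
}
```

`Limit` here means "requests per source IP per 5-minute window"; there is no field to change the window or to key on a user identity or customer tier, which is exactly the inflexibility the slide points out.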
  8. Standard Nginx ● Rate limits before requests hit the application ● Can key on IP addresses, basic-auth user names, and other request parameters ● Cannot handle customer tiers
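Standard Nginx rate limiting is configured with the `limit_req_zone` and `limit_req` directives. A minimal sketch (zone names, rates, and the backend upstream are illustrative, not from the talk):

```nginx
# Shared zone keyed by client IP: 10 MB of state,
# at most 5 requests per second per key.
limit_req_zone $binary_remote_addr zone=per_ip:10m rate=5r/s;

# The key can be any variable, e.g. the basic-auth user name.
limit_req_zone $remote_user zone=per_user:10m rate=10r/s;

server {
    location /api/ {
        # Allow short bursts of 10 requests; reject the rest with 429.
        limit_req zone=per_ip burst=10 nodelay;
        limit_req_status 429;
        proxy_pass http://app_backend;
    }
}
```

The key is a fixed expression evaluated per request, which is why plain Nginx can distinguish IPs or user names but cannot express per-customer tiers with different budgets; that gap is what the Lua + Redis approach on the next slide fills.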
  9. Nginx + Lua + Redis ● Nginx receives and responds to the requests ● Lua implements the rate-limiting logic ● Redis keeps track of the current state
  10. The Big Plan ● Assign a number of tokens to each user and store the balance in Redis. ● Each request costs a token. ● At zero balance, the API stops responding.
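The plan above can be sketched as an OpenResty `access_by_lua_block` that debits one token from Redis before Nginx proxies the request. This is a minimal sketch, not the talk's demo code; it assumes balances were preloaded into Redis under hypothetical `tokens:<user>` keys, and it uses the lua-resty-redis client:

```lua
-- access_by_lua_block: charge one token per request.
local redis = require "resty.redis"

local red = redis:new()
red:set_timeout(100)  -- milliseconds

local ok, err = red:connect("127.0.0.1", 6379)
if not ok then
    -- Redis unreachable: fail open here (fail closed is equally valid policy).
    ngx.log(ngx.ERR, "redis connect failed: ", err)
    return
end

-- Identify the caller: basic-auth user if present, else the client IP.
local user = ngx.var.remote_user or ngx.var.remote_addr

-- DECR is atomic, so concurrent requests cannot double-spend a token.
local balance = red:decr("tokens:" .. user)

-- Return the connection to the pool before deciding the request's fate.
red:set_keepalive(10000, 100)

if type(balance) == "number" and balance < 0 then
    return ngx.exit(429)  -- Too Many Requests: balance exhausted
end
```

Because the check runs in Nginx's access phase, a rejected request never reaches the application, and the per-user key makes tiered budgets trivial: a premium customer simply starts with a larger token balance.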
  11. Demo Time
  12. Benchmark ● Sending 10,000 requests with a concurrency of 50. ● 5,000 users with random token balances; the test user has 7,000 tokens. ● proxy_pass to a local server. Results (Plain Nginx vs. Nginx with Lua and Redis): ○ 50% of requests served within: 3 ms vs. 5 ms ○ 99% of requests served within: 28 ms vs. 34 ms ○ Longest request: 48 ms vs. 54 ms ○ Accuracy: N/A vs. 2,982 of 3,000 over-budget requests rejected ○ Total time for all requests: 1.176 s vs. 1.571 s ○ Mean time per request: 5.879 ms vs. 7.853 ms
  13. Questions?
  14. Catch Me! ● Twitter: @adityapatawari ● Email: ● Website: