If you didn’t fail with microservices at least once, you didn’t really try anything new! Even though microservices are an established architectural style in the industry, they still come with their own challenges. This session from nginx.conf 2016 focuses on a topic that is usually overlooked in the early stages of building a microservices architecture: traffic management. It comes into the picture after an SLA is missed, whether the cause is a misbehaving client, a legitimate increase in traffic, or a DDoS attack. We then start asking questions like: How do we ensure a fair-usage policy for clients across microservices? How do we protect clients from an abusive peer that generates a spike in traffic? And how do we protect the microservices themselves from abusive clients?

NGINX comes with rate-limiting options that usually work great on a single node, but extending those capabilities to distributed environments increases the complexity of the solution. Can rate limiting be applied transparently, without a visible impact on latency? Is it easy to scale? Is it reliable?

In this session, Adobe’s Dragos Dascalita Haut introduces an open source solution contributed by Adobe I/O and used successfully in real-life scenarios. The solution is based on an asynchronous communication model that supports high-throughput scenarios with minimal impact on latency. If you’ve had similar problems in the past, or if you’re concerned about how clients interact with your microservices, then this session is for you.
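For context on the single-node starting point the session builds from, here is a minimal sketch of NGINX's built-in rate limiting using the `limit_req` module. The zone name, rate, paths, and upstream address are illustrative assumptions, not taken from the talk:

```nginx
http {
    # Track clients by IP address in a 10 MB shared-memory zone,
    # allowing a steady rate of 10 requests per second per client.
    # The zone name "per_client" and the rate are example values.
    limit_req_zone $binary_remote_addr zone=per_client:10m rate=10r/s;

    upstream backend {
        # Hypothetical microservice backend.
        server 127.0.0.1:8080;
    }

    server {
        listen 80;

        location /api/ {
            # Permit short bursts of up to 20 extra requests; with
            # "nodelay", excess requests are rejected immediately
            # instead of being queued.
            limit_req zone=per_client burst=20 nodelay;
            proxy_pass http://backend;
        }
    }
}
```

Note that `limit_req_zone` keeps its counters in shared memory local to each NGINX instance, which is exactly why this approach works well for one node but, as the session describes, becomes more complex to extend across a distributed fleet.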