Alex Hokanson + Brett Inman, Docker
Microservice architectures can be difficult to implement: how do you route to a service correctly and ensure that traffic is spread across all of its instances? What happens in a cloud environment, where losing and gaining service instances is part of daily operations? How do you configure something to route consistently to your service when you don't even know where it is running? At Docker, we built our own highly available, automated API server on top of HAProxy, deeply integrated with Consul. Our API server acts as a service discovery and load balancing layer to ensure availability in a highly dynamic environment. On top of running such a complex application, we need to support thousands of requests per second while monitoring every request that comes through, which is no small feat!
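The talk does not spell out the wiring, but the classic pattern behind an HAProxy + Consul setup like this is a template that regenerates HAProxy's backend list whenever Consul's view of the service changes. A minimal sketch using consul-template, assuming a service registered in Consul under the hypothetical name "api":

```
# haproxy.cfg.ctmpl -- rendered by consul-template, which re-renders
# (and can reload HAProxy) whenever the "api" instance list changes
backend api_backend
  balance roundrobin{{ range service "api" }}
  server {{ .Node }}_{{ .Port }} {{ .Address }}:{{ .Port }} check{{ end }}
```

Running `consul-template -template "haproxy.cfg.ctmpl:haproxy.cfg:systemctl reload haproxy"` keeps the load balancer in sync, so instances that appear or disappear are added to or dropped from rotation automatically; health checks (`check`) remove unresponsive instances in between reloads.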
Beyond keeping the API server highly available, we also recently migrated it from running natively on Ubuntu 14.04 to running every component in containers, using Kubernetes with Docker Enterprise. The containerization journey brought real benefits, along with new challenges we had not foreseen.