
How Postman adopted Docker


This deck tells the story of how Postman scaled to serve 10+ million requests per hour within a year of starting up, and how Docker became an integral part of its frugal operations.

Published in: Technology


  1. least friction and overhead in getting started and adopting docker in production
  2. About Postman: 3+ million users, 1.5+ million MAU, 30+ member team
  3. Our Stack: 25+ micro services, 10+ million peak req/hr, 140+ GB in/day, nodejs, docker (jan 2017)
  4. postman’s operations middleware
  5. early adopters: late 2014 ⇢ small team ⇢ frugal operations. The early goal was to ship and validate, and to spend less time on operations.
  6. why docker? cloud is a VM ⇢ no chef/puppet expertise ⇢ exactness. Since the cloud is a VM in some form, there is no guarantee that a VM-based system will keep working if a startup needs to switch cloud providers.
  7. operational flexibility: centralised (hub) ⇢ code coupled ⇢ maintainable ⇢ unobtrusive. A common base image is used for all applications; each application’s image is stored alongside its code; the Dockerfile is treated as code (it goes through PRs, etc.); common server setup and standardisation is moved into the base image, with the remaining overrides in each per-service image.
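The base/per-service image split described on the slide above might look like the following sketch; the image names and packages are illustrative, not Postman’s actual setup:

```dockerfile
# base image, maintained centrally and pushed to a shared hub
# (the "example/base" name and package list are hypothetical)
FROM node:6-alpine

# common server setup and standardisation live here once,
# instead of being repeated in every service
RUN apk add --no-cache curl tini
VOLUME /var/log/app
ENTRYPOINT ["tini", "--"]
```

```dockerfile
# per-service Dockerfile, stored alongside the service's code
# and reviewed through the normal PR process
FROM example/base:latest

WORKDIR /usr/src/app
COPY package.json .
RUN npm install --production
COPY . .
CMD ["node", "server.js"]
```

The per-service file stays small because everything shared is inherited; an override here is an explicit, reviewable diff.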
  8. developer prepares code and environment ⇢ CI tests the code using the centrally inherited image ⇢ the image is orchestrated into production services
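The flow on the slide above could be expressed as, for example, a GitLab-style CI file; this is only a sketch with placeholder names, not Postman’s actual pipeline:

```yaml
# hypothetical CI sketch; image and registry names are placeholders
image: example/base:latest        # tests run inside the centrally inherited image

stages:
  - test
  - build

test:
  stage: test
  script:
    - npm install
    - npm test

build:
  stage: build
  script:
    - docker build -t registry.example.com/service-one:$CI_COMMIT_SHA .
    - docker push registry.example.com/service-one:$CI_COMMIT_SHA
```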
  9. zero downtime: hot reboot ⇢ live deployments ⇢ live rollback. There is no need to reboot the entire server: applications are built and deployed as a secondary image and then swapped in.
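The secondary-image swap on the slide above can be sketched as a small shell script; the registry, image and container names are hypothetical, and `DOCKER=echo` gives a dry run so the sequence can be read without a Docker daemon:

```shell
#!/bin/sh
# Sketch of a deploy-by-swap; all names are illustrative.
DOCKER="${DOCKER:-docker}"

deploy() {
  tag="$1"
  # bring up the new version as a secondary container
  $DOCKER pull "registry.example.com/service-one:$tag"
  $DOCKER run -d --name service-one-next "registry.example.com/service-one:$tag"
  # ...health-check service-one-next here before shifting traffic...
  # retire the old container and promote the new one
  $DOCKER stop service-one
  $DOCKER rm service-one
  $DOCKER rename service-one-next service-one
}

# dry run: print the command sequence instead of executing it
DOCKER=echo deploy v2
```

Keeping the previous image around is what makes live rollback cheap: the same swap runs in reverse with the old tag.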
  10. auto instrumentation: log stream (volumes) ⇢ system tuning. Written once in the base image and then re-used across all services.
  11. [architecture diagram: public and private network segments with servers behind a load balancer and an orchestrator; services communicate over a shared communication bus with shared resources and service discovery/registration] We use Beanstalk for orchestration; conceptually, this could be Kubernetes or anything else.
  12. production emulation: faster debugging ⇢ portable configuration (env). Any system (even a local machine) can be connected to any load balancer, which facilitates debugging.
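The portable env configuration mentioned on the slide above might look like this minimal Node.js sketch; the variable names are illustrative, not Postman’s actual configuration:

```javascript
// Env-driven config: the same image runs in production or on a laptop,
// differing only in environment variables. Names here are hypothetical.
function loadConfig(env) {
  return {
    port: parseInt(env.SERVICE_PORT || '8080', 10),
    upstream: env.UPSTREAM_URL || 'http://localhost:9000',
    logDir: env.LOG_DIR || '/var/log/app'
  };
}

// a local machine can point itself at a remote load balancer
// by overriding a single variable
const local = loadConfig({ UPSTREAM_URL: 'https://lb.example.com' });
console.log(local.upstream); // https://lb.example.com
```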
  13. security surface: centrally controlled ⇢ managed updates
  14. Summarising • Just choose docker to get started (even if using minimal features) and see the value in the long run • Start using docker in deployment, keeping individual Dockerfiles in source code • Use a single source image (to inherit from) for all deployments • Configure security, logging and other basics in the source image and keep improving it
  15. our window to developer freedom
  16. fun and freedom: billing notification ⇢ team setup ⇢ one-click docker. Do not worry about how much cost your team is racking up until s**t hits the roof. Once a service has been tested here (or locally), we move it to production instances in AWS.
  17. @shamasis