
The scaling story of Postman



  1. Managing scale with a lean team, leveraging the power of Docker and AWS
  2. About Postman: 3+ million users, 1.5+ million MAU (monthly active users), 30+ member team
  3. Our Stack (Feb 2017): 27+ microservices, 10+ million peak requests/hr, 140+ GB in/day; SailsJS on Node.js, in Docker, managed on AWS
  4. The story of making big decisions while small. Postman started in 2014 ⇢ small team ⇢ frugal operations. The early goal was to ship and validate, and spend less time on operations; by listening to users and iterating, we were already racking up 500k+ downloads of our Chrome app.
  5. The sync service: enabling API collaboration ⇢ SailsJS. At that time, this was the best framework for getting to market fastest. We used the bare minimum the framework needed to get started: one server, MySQL, and Redis. We were trying to validate that collaborative API development was a solution the world needs.
  6. Time to choose. 1.5+ million users ⇢ steady service adoption (growing MAU). We knew we needed to make DevOps easier if we were to get any actual development done at all, walking the tightrope of solving short-term problems while following the long-term vision. This meant a larger server, a load balancer, and constant MySQL and Redis performance tweaking; on came tedious (and manual) deployments, traffic spikes, and processes hanging for no apparent reason.
  7. Beanstalk + Docker: solved all pressing problems ⇢ abstraction to stay flexible. Choosing Docker (even though we did not need it right away) let us prepare for the things we expected we might need, without actually building them at the time. We needed to auto-scale under load, and at that time we felt AWS could give us that quickly: application deployment, infrastructure creation, auto scaling, and auto healing, all in one package.
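Single-container Docker on Beanstalk is driven by a `Dockerrun.aws.json` file in the application bundle. A minimal sketch of what such a file looks like (the image name and port are placeholders, not Postman's actual configuration):

```json
{
  "AWSEBDockerrunVersion": "1",
  "Image": {
    "Name": "example/sync-service:latest",
    "Update": "true"
  },
  "Ports": [
    { "ContainerPort": "3000" }
  ]
}
```

Beanstalk reads this file, pulls the image, maps the container port, and wires the instance into the load balancer and auto scaling group, which is the "all in one package" the slide refers to.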
  8. RDS + ElastiCache: no more resizing disks ⇢ no more tuning configurations. Reliable and automated backups (and restores), encryption of data in transit and at rest, and hassle-free replication. There could be a million reasons why RDS might appear "limiting", but we adopted our product development mantra into DevOps as well and…
  9. Near-zero downtime: hot reboot ⇢ live deployments ⇢ live rollback ⇢ alarms + auto healing. No need to reboot the entire server, just the Docker image: build and deploy the application as a secondary image, then swap. We were prototyping faster than ever. Fridays were back.
  10. THE POSTMAN STACK WAS BORN: the developer prepares code and environment, then CI tests the code using a centrally inherited image; the CI-tested image is orchestrated into production services by Beanstalk.
  11. A simple stack works. Common code repository structure: code + tests + Beanstalk extensions + Dockerfile. We inherited from our own Docker base image, which let us control the base stack, easily permeate stack changes into all microservices, and test against the production base image on CI. The goal was to make our first microservice a fully portable concept, with all information in the code repository.
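The inheritance pattern described above can be sketched as a per-service Dockerfile built on a shared base image (names and paths are illustrative; Postman's actual images are not public):

```dockerfile
# Shared base image pins Node.js, pm2, common Sails hooks, and log/metric agents,
# so a stack change in the base permeates into every service on rebuild.
FROM example/postman-base:latest

# Code + tests + .ebextensions + Dockerfile all travel in one repository
COPY . /usr/src/app
WORKDIR /usr/src/app
RUN npm install --production

# pm2 supervises the Sails app inside the container
CMD ["pm2-docker", "app.js"]
```

CI builds from the same base image as production, which is what makes "test using production base image on CI" possible.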
  12. We gave ourselves a 48-hour challenge: can you release a new production-quality service? Soon we reached 2+ million users and added many more services, including the API documentation service and the API monitoring service.
  13. In an onion shell, inner to outer: Business Logic → SailsJS + hooks, ORM + socket.io & Express → NodeJS + pm2 → Docker → nginx and Beanstalk extensions on EC2 → Elastic Load Balancer, Auto Scaling group, and other AWS resources managed by Beanstalk.
  14. Time to validate: 10x traffic ⇢ 5+ live deployments/day ⇢ difficult to debug.
  15. Centralised and auto-instrumented logging: CloudWatch & ELK with Grafana ⇢ SailsJS hooks, easily added to all services via Beanstalk extensions and the Docker root image. Since we built all services on SailsJS, one hook works everywhere.
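Because every service ran SailsJS, a single hook could instrument them all. A rough sketch of what such a request-logging hook looks like (names and details are illustrative, not Postman's actual code) — Sails runs `routes.before` middleware ahead of user-defined routes:

```javascript
// Sketch of an auto-instrumented request-logging hook in the spirit of
// SailsJS installable hooks (illustrative, not Postman's code).
function requestLoggerHook(sails) {
  return {
    // `routes.before` middleware runs for every endpoint of the service,
    // so shipping this hook in the base image instruments all services.
    routes: {
      before: {
        'all /*': function (req, res, next) {
          const start = Date.now();
          res.on('finish', function () {
            sails.log.info(
              req.method + ' ' + req.url + ' -> ' + res.statusCode +
              ' in ' + (Date.now() - start) + 'ms'
            );
          });
          next();
        }
      }
    },
    initialize: function (cb) {
      sails.log.info('request-logger hook loaded');
      return cb();
    }
  };
}

module.exports = requestLoggerHook;
```

In production the log sink would be a shipper feeding CloudWatch or ELK rather than `sails.log` alone; the point is that the instrumentation lives in the hook, not in each service's code.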
  16. Auto-instrumented monitoring: CloudWatch ⇢ Beanstalk enhanced health checks ⇢ Beanstalk extensions for custom metrics. With Beanstalk extensions, one can customise just about anything, and we auto-added all our alarms and instance CPU, memory, disk, and event-loop monitoring in no time. Grafana did all the reporting for us.
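Beanstalk picks up any `*.config` file under `.ebextensions/` in the application bundle. A sketch of publishing a custom metric on a cron schedule (the namespace, metric name, and file path are made up; Postman's actual extensions are not public):

```yaml
# .ebextensions/custom-metrics.config (illustrative)
files:
  "/etc/cron.d/push-memory-metric":
    mode: "000644"
    owner: root
    group: root
    content: |
      # Every minute, push free memory (MB) to CloudWatch
      * * * * * root aws cloudwatch put-metric-data --namespace Service/Health --metric-name FreeMemoryMB --value $(free -m | awk '/^Mem:/{print $4}') --unit Megabytes
```

Because the extension ships inside the repo (and the base image), every service gets the metric without any per-service setup, and CloudWatch alarms can be attached to it.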
  17. Dogfooding: use newman (the Postman CLI) for API testing on CI platforms; use our own monitoring service to perform complex API health checks; use our own API documentation service to collaborate on microservice development.
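The newman part of that dogfooding is easy to sketch as a CI step that installs newman and runs a Postman collection against a target environment; newman exits non-zero when collection tests fail, so the build fails with it. File names and the Travis-style YAML shape are illustrative:

```yaml
# Illustrative CI config: fail the build when the collection's tests fail
install:
  - npm install -g newman
script:
  - newman run tests/smoke.postman_collection.json -e tests/staging.postman_environment.json
```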
  18. Up next: centralised service discovery ⇢ decentralised message bus ⇢ Amazon Inspector + AWS WAF + AWS Shield.
  19. In short: solve short-term problems while accounting for long-term applications. Use containerisation, or some other way, to find a common entry point and standardisation for all your services. Don't spend time solving operational issues you might never face.
  20. @shamasis — getpostman.com

Editor's Notes

  • speak slowly
    don't digress
    pause before slides
    watch the time
  • Postman is an API development tool helping
    build APIs
    collaborate on API development
    continuously test
    and publish APIs

    3+ million users
    1.5+ million mau
    30+ member team
  • 27+ microservices
    10+ million peak req/hr
    140+ GB in/day
    sailsjs on nodejs
    in docker and managed by aws
    chrome + native apps in major os
    we will outline the decisions we took to get here from ops perspective
  • started 2014, 500k+ app installs
    small team
    focus and iterate
    create value + save operational costs
  • chose the tech we knew best
    developed ops around only what we needed to go live
  • we needed to switch cloud provider
    to speed our dev and
    improve service quality
    we could have lost focus and over-engineered
  • goal was to have a resource formation + …
    beanstalk + docker
    one stop solution for problem at hand
    docker abstracted us enough to reduce risk (if we had to switch again)
    choosing docker was for the long haul (though it was very early)
  • rds took care of
    horizontal and vertical scaling
    security
    performance
    elasticache
  • {{go fast}}
    all goodies with minimal effort
  • {{go fast}}
    one service working, we now needed to replicate
    thus we standardised our stack
  • common repo structure
    ensured replicability
    standard code + tests = less managing/onboarding as team grew
  • {{go fast}}
  • at heart: our business logic as MVCS
    sails js with common hooks + orm + socket.io + express (all from sails)
    NodeJS and pm2
    docker
    nginx + beanstalk extensions on ec2
    autoscaled and load balanced
    accessing RDS and elastic cache
    orchestrated by beanstalk
  • {{go fast}}
  • beanstalk extensions to install log and metric collection
    grafana to visualise
    sails js hooks to auto-implement standard logging for all services
  • {{go fast}}
    beanstalk enhanced health
    beanstalk extensions for custom metrics
  • {{go fast}}
    monitoring
    documentation
    testing
  • {{go fast}}

    centralised service discovery (use something or build one)
    a decentralised message bus for intra-service transactional comm
    (already built a decentralised session service ~ sails hooks allowed us to have a part of one service in every other)
    new aws offerings
    inspector for pen test while in production use
    web application firewall
    aws shield
  • summary
    use docker or some other common entry + ops standardisation (avoid complex ops until you need)
    be practical and not utopian
    build operations accounting product+people
    and not only standards+technology
    solve short-term, account long term
