
JUST EAT: Embracing DevOps



How we make a Windows-based ecommerce platform work (with AWS)
@petemounce & @justeat_tech



  1. JUST EAT: embracing DevOps. Or: how we make a Windows-based ecommerce platform work (with AWS). @petemounce & @justeat_tech
  2. JUST EAT: Who are we?
     ● In business since 2001 in DK, 2005 in UK
     ● Tech team is ~50 people in UK, ~20 people in Ukraine
     ● Cloud native in AWS
       ○ Except for the bits that aren’t (yet)
     ● Very predictable load
     ● ~900 orders/minute at peak in UK
     ● We’re recruiting!
       ○
       ○ platform-services/
       ○ Lots of other roles
  3. JUST EAT: Who are we? Oh, yeah - we do online takeaway. We’re an extra sales channel for our restaurant partners. We do the online part. Challenging! We make this work. On Windows.
  4. What are we? We do high-volume ecommerce on a Windows platform. Most production code is C#, .NET 4 or 4.5. Most automation is Ruby 1.9.x, with some PowerShell. Ongoing legacy transformation; no big rewrites. We’re splitting up a monolithic system into SOA/APIs, incrementally.
  5. Architecture, before AWS
  6. Data centre life, pre-2013
     ● Physical hardware
     ● Snowflake servers - no configuration management tooling
     ● Manual deployments, done by the operations team
     ● No real-time monitoring - SQL queries only
     ● Monolithic applications, not much fast-running test coverage
     … But at least we had source control and decent continuous integration! (since 2010)
  7. Architecture, post AWS migration
  8. Estate & high availability by default. At peak, we run ~500-600 EC2 instances. We migrated from a single data centre in DK to eu-west-1. We run everything multi-AZ, auto-scaling by default. (Almost.)
  9. Delivery pipeline. Very standard; nothing to see here. Multi-tenant: tenants are isolated against bad-neighbour issues, and individually scalable. In practice, this means our tools take a tenant parameter as well as an environment parameter.
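As a sketch of what "tenant parameter as well as environment parameter" looks like in practice, here is a minimal Ruby CLI along those lines. The tool name, flags, and S3 prefix layout are illustrative assumptions, not JUST EAT's actual tooling.

```ruby
#!/usr/bin/env ruby
# Hypothetical sketch: every deployment command takes both a tenant and an
# environment, so one codebase can drive any tenant/environment pair.
require 'optparse'

def parse_deploy_args(argv)
  options = {}
  OptionParser.new do |opts|
    opts.banner = 'Usage: deploy_feature --tenant TENANT --environment ENV'
    opts.on('--tenant TENANT', 'Tenant to deploy for (e.g. uk, dk)') { |t| options[:tenant] = t }
    opts.on('--environment ENV', 'Target environment (e.g. qa, production)') { |e| options[:environment] = e }
  end.parse!(argv)
  unless options[:tenant] && options[:environment]
    raise ArgumentError, 'both --tenant and --environment are required'
  end
  options
end

# Each tenant/environment pair resolves to its own isolated artefact
# location - e.g. an S3 prefix instances pull their packages from.
def artefact_prefix(feature, options)
  "#{feature}/#{options[:environment]}/#{options[:tenant]}"
end
```

For example, `parse_deploy_args(%w[--tenant uk --environment qa])` yields `{tenant: 'uk', environment: 'qa'}`, and the same tool works unchanged for every other pair.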
  10. Tech organisation structure. We stole from AWS - “two-pizza teams” (we understand metrics couched in terms of food). We have a team each for:
      ● consumer web app
      ● consumer native apps (one iOS, one Android)
      ● restaurant apps
      ● business-support apps
      ● APIs (actually, four teams in one unit)
      ● PaaS
        ○ responsible for internal services; monitoring/alerting/logs
        ○ systems automation
  11. Tech culture: “You ship it, you operate it.” Each team owns its own features, infrastructure-up. Minimise dependencies between teams. Each team has autonomy to work on what it wants, within some constraints. Rules:
      ● don’t break backwards compatibility
      ● use what you want - but operate it yourself
      ● other teams must be able to launch & verify your stuff in their environments
  12. But how? Table stakes for this to work (well):
      1. Persistent group chat
      2. Real-time monitoring
      3. Real-time alerting
      4. Centralised logging
      Make it easier to debug in production without a debugger.
  13. Persistent group chat. We use HipChat; you could use IRC / Campfire / Hangouts.
      ● Persistent - jump in, read up
      ● Searchable history
      ● Integrate other tools with it
      ● hubot for fun and profit
        ○ @jebot trg pd emergency with msg “we’re out of champagne in the office fridge”
  14. Real-time monitoring. Microsoft’s SCOM requires an AD, so instead:
      ● Publish OS-level performance counters with perftap - a Windows analogue of collectd that we found and customised
      ● Receive metrics into statsd
      ● Visualise time-series data with graphite
        ○ 10s granularity, retained for 13 months
        ○ AWS CloudWatch gives you 1-minute granularity for 2 weeks
      Addictive!
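The pipeline above hinges on statsd's very simple wire protocol: agents like perftap emit plain-text `name:value|type` datagrams over UDP. A minimal Ruby sketch of that protocol (host, port, and metric names below are illustrative assumptions):

```ruby
require 'socket'

# Format a metric in statsd's "name:value|type" line protocol.
# type is "c" (counter), "g" (gauge) or "ms" (timer).
def statsd_line(name, value, type)
  "#{name}:#{value}|#{type}"
end

# Fire-and-forget a metric over UDP; statsd aggregates and flushes
# the rolled-up values on to graphite.
def send_metric(host, port, name, value, type)
  socket = UDPSocket.new
  socket.send(statsd_line(name, value, type), 0, host, port)
ensure
  socket.close if socket
end

# e.g. send_metric('statsd.internal', 8125, 'orders.placed', 1, 'c')
```

Because the transport is UDP, a dead metrics pipeline never blocks or breaks the application emitting the metrics - a useful property at ~900 orders/minute.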
  15. Real-time alerting. This is the 21st century; emailing someone that their server is down doesn’t cut it. seyren runs our checks, and publishes to:
      ● HipChat
      ● PagerDuty
      ● SMS
      ● statsd event metrics (coming soon, hopefully)
  16. Centralised logging. Windows doesn’t have syslog, and the out-of-the-box EventLog isn’t quite it.
      ● Publish logs via the nxlog agent
      ● Receive logs into a logstash cluster
      ● Filter, transform and enrich into an elasticsearch cluster
      ● Query, visualise and dashboard via kibana
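The first hop of that pipeline might look roughly like the following nxlog agent configuration, shipping the Windows EventLog to a logstash TCP listener as JSON. The host, port, and route names are illustrative assumptions; check the nxlog reference manual for the modules available in your build.

```
<Extension json>
    Module  xm_json
</Extension>

# Read the Windows EventLog (Vista and later).
<Input eventlog>
    Module  im_msvistalog
</Input>

# Serialise each event to JSON and ship it to logstash over TCP.
<Output logstash>
    Module  om_tcp
    Host    logstash.internal
    Port    5140
    Exec    to_json();
</Output>

<Route eventlog_to_logstash>
    Path    eventlog => logstash
</Route>
```

logstash then filters, transforms and enriches each event before indexing it into elasticsearch for kibana to query.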
  17. Without these things, operating a distributed system on Windows is hard. Windows at scale assumes that you have an Active Directory. We don’t, so:
      ● No Windows network load-balancing
      ● No centrally trusted authentication
      ● No central monitoring (SCOM) to harvest performance counters
      ● No easy remote command execution (WinRM wants an AD, too)
      ● Other stuff; these are the highlights
  18. Open source & build vs buy. We treat Microsoft as just another third-party vendor dependency. We lean on open-source libraries and tools a lot.
  19. Anatomy of a feature. We decompose the platform into its component parts. Imaginatively, we call these “platform features”. For example:
      ● consumer web app == publicweb
      ● back office tools == handle, guard
      ● etc.
  20. Platform features. Features are defined by AWS CloudFormation.
      ● Everything is pull-deployment, from S3
      ● No state is kept (for long) on the instance itself
      ● No external actor can tell an instance to do something, beyond what the feature itself allows
      Instances boot, then bootstrap themselves from content in S3, based on CloudFormation::Init metadata.
  21. Platform feature: servers. We have several “baseline” AMIs. These have required system dependencies like the .NET framework, Ruby, 7-zip, etc. Periodically we update them with OS-level patches and roll out new baseline AMIs, deprecating the older ones.
  22. Platform feature: infrastructure. Defined by CloudFormation. Each template stands up everything that feature needs to run, excluding cross-cutting dependencies (like DNS and firewall rules). Mostly standard:
      ● ELB
      ● Auto Scaling group + launch configuration
      ● IAM as necessary
      ● … anything else required by the feature
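A per-feature template along those lines might look like this trimmed CloudFormation fragment. The resource names, AMI ID, instance type, and sizes are placeholders, not JUST EAT's actual templates.

```json
{
  "Resources": {
    "FeatureLoadBalancer": {
      "Type": "AWS::ElasticLoadBalancing::LoadBalancer",
      "Properties": {
        "AvailabilityZones": { "Fn::GetAZs": "" },
        "Listeners": [
          { "LoadBalancerPort": "80", "InstancePort": "80", "Protocol": "HTTP" }
        ]
      }
    },
    "FeatureLaunchConfig": {
      "Type": "AWS::AutoScaling::LaunchConfiguration",
      "Properties": {
        "ImageId": "ami-00000000",
        "InstanceType": "m1.large"
      }
    },
    "FeatureAutoScalingGroup": {
      "Type": "AWS::AutoScaling::AutoScalingGroup",
      "Properties": {
        "AvailabilityZones": { "Fn::GetAZs": "" },
        "LaunchConfigurationName": { "Ref": "FeatureLaunchConfig" },
        "MinSize": "2",
        "MaxSize": "12",
        "LoadBalancerNames": [ { "Ref": "FeatureLoadBalancer" } ]
      }
    }
  }
}
```

Spanning the Auto Scaling group across all availability zones is what gives each feature the multi-AZ, auto-scaling posture described earlier.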
  23. Platform feature: infrastructure
  24. Platform feature: code package.
      ● A standardised package containing:
        ○ built code (website, service, combinations)
        ○ configuration + deltas to run any tenant/environment
        ○ automation to deploy the feature
      ● CloudFormation::Init has a configSet to:
        ○ unzip
        ○ install automation dependencies
        ○ execute the deployment automation
        ○ warm up the feature, post-install
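The configSet steps above could be expressed roughly as the following CloudFormation::Init metadata, which cfn-init executes on boot. The bucket, paths, and commands are hypothetical placeholders for illustration.

```json
{
  "AWS::CloudFormation::Init": {
    "configSets": {
      "deploy": [ "unpack", "install_deps", "run_deploy", "warm_up" ]
    },
    "unpack": {
      "sources": {
        "c:\\feature": "https://s3.amazonaws.com/example-bucket/publicweb/package.zip"
      }
    },
    "install_deps": {
      "commands": {
        "1-bundle": { "command": "bundle install --gemfile c:\\feature\\Gemfile" }
      }
    },
    "run_deploy": {
      "commands": {
        "1-deploy": { "command": "ruby c:\\feature\\deploy.rb --tenant uk --environment production" }
      }
    },
    "warm_up": {
      "commands": {
        "1-warm": { "command": "powershell -Command \"Invoke-WebRequest http://localhost/healthcheck\"" }
      }
    }
  }
}
```

Because the instance pulls the package from S3 and runs these steps itself, no external actor ever needs to push to it - which is exactly the pull-deployment property described on the previous slide.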
  25. What have we gained? Instances are disposable and short-lived.
      ● Enables “shoot it in the head” debugging
      ● Disks no longer ever fill up
      ● Minimal environmental differences
      ● New environment == mostly automated
      ● Infrastructure as code == testable, repeatable - and we do test it!
  26. Culture again: on-call. Teams are on-call for their features. They decide their own rota, with coverage minimums for peak time. But teams must have autonomy to improve their features so they don’t get called as often; otherwise, it’s constant fire-fighting.
  27. Things still break! Page me once, shame on you. Page me twice, shame on me.
      ● Teams do root-cause analysis of the incidents that page them… an operations team / NOC does not
      ● Warn the call centre proactively
      ● Take action proactively
      ● Automate mitigation steps!
      ● Feature toggles: not just for launching new stuff
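A feature toggle used for mitigation can be as simple as a flag consulted at runtime and flippable without a deployment. This Ruby sketch shows the idea; the class and toggle names are illustrative, not JUST EAT's implementation.

```ruby
# Minimal feature-toggle sketch: unknown toggles default to off, and any
# toggle can be flipped at runtime - e.g. by an operator during an incident.
class FeatureToggles
  def initialize(defaults = {})
    @flags = defaults.dup
  end

  def enabled?(name)
    @flags.fetch(name, false)
  end

  # Flip a toggle at runtime - e.g. shed a struggling downstream dependency
  # and serve a degraded-but-working experience instead of an outage.
  def set(name, value)
    @flags[name] = value
  end
end

toggles = FeatureToggles.new('restaurant_reviews' => true)
toggles.set('restaurant_reviews', false)  # mitigation: disable the failing feature
toggles.enabled?('restaurant_reviews')    # => false
```

In a real system the flags would live in shared state (a database or config service) so every instance sees the flip at once, but the contract is the same.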
  28. The role of our PaaS team: enablement.
      ● Run monitoring & alerting
      ● Run centralised logging
      ● Run the deployment service
      ● Apply security updates
  29. Why not Azure / OpenStack et al.? The decision to migrate to AWS was made in late 2011. AWS was more mature than the alternatives at the time, and offered many hosted services on top of its IaaS offering. It still is, even accounting for Azure’s recent advances.
  30. The future. Immutable/golden instances; faster provisioning. Failover to a secondary region (we operate in CA). Always: more test coverage, more confidence. Publish some of our tools as OSS.
  31. The most important things:
      ● Culture
      ● Principles that everyone lives by
      ● Devolve autonomy down to people on the ground
      ● (Tools)
  32. Did we mention we’re hiring? We’re pragmatic. We’re successful. We support each other. We use sharp tools that we pick ourselves, based on merit. Join us!
      ○
      ○ platform-services/
      ○ Lots of other roles
  33. Any questions?