Microservices Without the Macrocost

Lessons learned while implementing microservices at REA.


© All Rights Reserved

Microservices Without the Macrocost: Presentation Transcript

  • Microservices Minus the Macrocost
    Lessons Learned While Building Microservices @ REA
    @brentsnook, github.com/brentsnook
    Friday, 20 September 13
  • MICROSERVICES
    • digestible • disposable • demarcated • decoupled • defenestrable
    - has anyone heard of them before?
    - some words that start with d
    - disposable - if they interact using a common interface (like REST), you are free to implement them how you like. Use them to experiment with different languages. If they get too complex, scrap and rewrite them.
    - digestible - James Lewis has an answer to “how big should a microservice be”: “as big as my head”. You should be able to load one into your head and understand that part of the bigger system. Like zooming in.
    - demarcated - use them to define bounded contexts around particular entities or concepts in a larger system.
    - decoupled - only expose to clients exactly what they need, to reduce coupling.
  • THE GOD DATABASE
    - hands up who hasn’t seen this before
    - multiple clients coupled to a single database schema
    - data warehousing, web-based systems, invoicing and reconciliation systems
    - tens of applications, all depending on one or more chunks of data
    - changes to the schema are hard to manage and prone to error
    - apps talking directly to the database make it hard to manage schema changes
    - data in one lump == tragedy of the commons; nobody really owns the data, and things can start to decay
  • INTRODUCE MICROSERVICES
    - long path to get to where we want
    - first, give the data a definitive owner
    - wedge microservices in between clients and the data they need
    - encapsulate data and only expose what is required
    - standardise communication on REST + JSON
    - clients become adept at speaking REST and manipulating data
    - data may not always live with a particular service, but standardising this makes it easier to switch
    - eventually clients become coupled to the service owning the data instead of the schema
    - to make things better, we had to first make them worse
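    The "encapsulate data, expose only what is required" idea above can be sketched as a tiny Rack-style service. This is a hypothetical illustration, not REA's actual code: the ListingService name, its routes, and its fields are invented for the example.

    ```ruby
    require 'json'

    # Hypothetical sketch: a microservice that owns "listing" data and
    # exposes only the fields clients need, as JSON over HTTP. Clients
    # never see the underlying storage or schema.
    class ListingService
      # Stand-in for the service's private database.
      LISTINGS = {
        '42' => { id: '42', address: '1 Example St', price_cents: 50_000_000 }
      }.freeze

      # Rack interface: takes an env hash, returns [status, headers, body].
      def call(env)
        id = env['PATH_INFO'].sub('/listings/', '')
        listing = LISTINGS[id]
        if listing
          # Expose only what the client requires (no internal price field),
          # reducing coupling to the service's storage layout.
          [200, { 'Content-Type' => 'application/json' },
           [JSON.generate(id: listing[:id], address: listing[:address])]]
        else
          [404, { 'Content-Type' => 'application/json' },
           [JSON.generate(error: 'not found')]]
        end
      end
    end
    ```

    Because the response is plain JSON over HTTP, the storage behind the service could later change without clients noticing, which is exactly the switch the notes describe.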
  • PHASE OUT SCHEMA ACCESS
  • SPLIT DATA
    - data can be partitioned into individual databases owned by the services
    - data has a clear owner!
    - because data is exchanged using HTTP and JSON, clients don’t actually know or care how the data is stored
    - the god database can eventually be taken to the farm
    - this was the journey we embarked on around a year ago
    - I left in early July, but by all reports this is progressing well, though still nowhere near this picture
  • LINEAR COSTS
    • creating a new service
    • provisioning a new set of environments
    • configuring a new set of environments
    • manual deployment
    - what did we learn along the way?
    - microservices are, by definition, fine grained: small things that perform one job well
    - building microservices using our default approach was going to be costly as the number of services grew
    - we had a bunch of linear costs, the types of cost where the total cost increases steadily as more services are added
    - created new services by ripping the parts we needed out of a similar service - time consuming to strip out the cruft from the old project
    - provisioned the environments manually via the existing ops department - lead times and coordination costs
    - deployments were not entirely manual - we automated a certain amount to begin with and automated more as it made sense. You only start to feel the pain around this as you add more services
  • EXPONENTIAL COSTS
    • integration testing new builds
    - integration testing components became an exponential cost
    - we started by building our test environments on AWS
    - instances were cheap, so we tried automated certification builds with a set of environments per component
    - this quickly became unwieldy as the number of components grew
    - end-to-end testing, with a copy of the certification environment per component, quickly became unmanageable
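    A back-of-the-envelope count (my illustration, not from the talk) shows why a certification environment per component stops scaling. Assuming each component's environment must host all the others, the point-to-point integrations grow quadratically and the total instance count grows even faster:

    ```ruby
    # With n components, the distinct point-to-point integrations to cover
    # grow as n(n-1)/2. If every component also gets its own copy of the
    # full certification environment (n components each), instances grow
    # as n * n. These formulas are an illustrative model, not REA's numbers.
    def pairwise_integrations(n)
      n * (n - 1) / 2
    end

    def environment_instances(n)
      n * n # n certification environments, each running all n components
    end

    [5, 10, 20].each do |n|
      puts "#{n} components: #{pairwise_integrations(n)} integrations, " \
           "#{environment_instances(n)} instances"
    end
    ```

    Strictly this growth is quadratic rather than exponential, but the practical effect the notes describe is the same: the cost curve bends upward as components are added, rather than rising steadily.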
  • COST OPPOSES GRANULARITY
    - these costs were affecting how we designed our services
    - when we looked at certain responsibilities in some services, they clearly belonged in their own service
    - we were piggybacking endpoints and capabilities onto existing services to reduce deployment and provisioning costs
    - we understood why microservices should be fine-grained, but couldn’t wear the cost to achieve that goal
    - the answer was to reduce the pull of those forces
  • ECONOMIES OF SCALE
    - we decided to chase economies of scale in the creation and management of our microservices
    - this involves early and ongoing investment in reducing the costs I mentioned earlier
    - growing our capability to spawn and manage new services more cheaply
    - set up a team dedicated to building this capability through tooling and other means
    - teams formally meet several times a week, but often informally several times a day
  • CHEAPER ENVIRONMENTS
    • moved production environments to EC2
    • took on more ops responsibility
    • built tools for: provisioning, deployment, network configuration
    - moved to EC2
    - built command line tools to make provisioning and deployment trivial
    - setting up a new environment took a matter of minutes
    - kept chipping away
  • MICROSERVICE STENCIL
    - git project with a vanilla skeleton for a service
    - standard stack (Rack, Webmachine, Nagios, Nginx, Splunk)
    - standard packaging and deployment (RPM)
    - encoded best practice for project setup
    - spawning a new service took seconds: clone the project
    - continually improved
  • CONSUMER DRIVEN CONTRACTS
    - wanted to focus more on the point-to-point relationships between components, rather than end-to-end testing
    - consumer driven contracts: the consumer of a service specifies what it requires from the provider; the provider will offer a superset of that
    - encode contracts as tests, run them as part of the producer build
    - specific tests failing should tell you that you have broken specific consumers
    - first implemented as unit tests that lived with the producer - these fell out of step with the reality of what the consumer was really interested in
  • github.com/uglyog/pact
    - why not just have contracts driven from real examples?
    - stub out the producer and record what the consumer expected
    - serialise the contract, play it back in the producer build to ensure it is doing the right thing
    - hacked this into the consumer project we had at the time
    - pulled it out into an internal gem, then an external gem
  • PACT FILES
    - consumer build talks to a stubbed-out producer
    - declare interactions, what we expect when we ask the producer for certain things - like record mode
    - record a pact file, a JSON serialisation of the interactions
    - copy this to the producer build and run producer tests to ensure it honours the pact - like playback mode
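    To make "a JSON serialisation of the interactions" concrete, here is the rough shape of a recorded pact file for the TimeConsumer/TimeProvider example on the following slides. The field names are my approximation of the idea, not the exact serialisation format the pact gem produces:

    ```ruby
    require 'json'

    # Illustrative only: a pact file pairs a consumer and a provider and
    # lists the recorded interactions (request expected, response promised).
    pact = {
      consumer: { name: 'TimeConsumer' },
      provider: { name: 'TimeProvider' },
      interactions: [
        {
          description: 'a request for the time',
          request: { method: 'get', path: '/time' },
          response: { status: 200, body: { hour: 10, minute: 45 } }
        }
      ]
    }

    puts JSON.pretty_generate(pact)
    ```

    The consumer build writes this file in record mode; the producer build reads it back and replays each request against the real producer to check the promised responses still hold.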
  • CONSUMER - RECORD PACT

    # spec/time_consumer_spec.rb
    require 'pact/consumer/rspec'
    require 'httparty'

    class TimeConsumer
      include HTTParty
      base_uri 'localhost:1234'

      def get_time
        time = JSON.parse(self.class.get('/time').body)
        "the time is #{time['hour']}:#{time['minute']} ..."
      end
    end

    Pact.service_consumer 'TimeConsumer' do
      has_pact_with 'TimeProvider' do
        mock_service :time_provider do
          port 1234
        end
      end
    end

    describe TimeConsumer do
      context 'when telling the time', :pact => true do
        it 'formats the time with hours and minutes' do
          time_provider.
            upon_receiving('a request for the time').
            with({ method: :get, path: '/time' }).
            will_respond_with({status: 200, body: {'hour' => 10, 'minute' => 45}})
          expect(TimeConsumer.new.get_time).to eql('the time is 10:45 ...')
        end
      end
    end

    https://github.com/brentsnook/pact_examples
  • PRODUCER - PLAY BACK PACT

    # spec/service_providers/pact_helper.rb
    class TimeProvider
      def call(env)
        [
          200,
          {"Content-Type" => "application/json"},
          [{hour: 10, minute: 45, second: 22}.to_json]
        ]
      end
    end

    Pact.service_provider "Time Provider" do
      app { TimeProvider.new }
      honours_pact_with 'Time Consumer' do
        pact_uri File.dirname(__FILE__) + '/../pacts/timeconsumer-timeprovider.json'
      end
    end

    https://github.com/brentsnook/pact_examples
  • WEB OF PACTS/CONTRACTS
    - a web of contracts joining different consumers and producers
    - can use a separate build to publish pact files between consumer and producer builds
    - pretty fast feedback when a consumer expectation is unrealistic or the producer has a regression
    - can replace a lot of automated end-to-end testing, but we also supplement with manual exploratory end-to-end testing
  • SWITCH FROM PREVENTION TO DETECTION
    - Fred George advocates replacing unit tests with monitoring transactions and responding. This still makes me uncomfortable; I’d do both. We didn’t get this far.
  • SO...
    • invest in building economies of scale
    • automating the crap out of things is generally a good way to reduce costs
    • standardise service architecture to save on creation and maintenance costs, BUT
    • don’t forget to use new services/rewrites to experiment with different technologies and approaches
  • WOULD YOU LIKE TO KNOW MORE?
    Microservice Architecture (Fred George): http://www.youtube.com/watch?v=2rKEveL55TY
    Microservices - Java the Unix Way (James Lewis): http://www.infoq.com/presentations/Micro-Services
    How Big Should a Micro-Service Be? (James Lewis): http://bovon.org/index.php/archives/350
    Consumer Driven Contracts (Ian Robinson): http://martinfowler.com/articles/consumerDrivenContracts.html
    github.com/uglyog/pact