building microservices

  1. BUILDING MICROSERVICES Cloud Native patterns, Microservices platforms November 2015 @SteveSfartz
  2. MICROSERVICES • Introduction (why, when, how) • Cloud Native Patterns – Resilience – Service Discovery alternatives – Microservices platforms – Gateway • Organizational changes • Comparing SOA styles • Sum up
  3. 100 - INTRODUCTION TO MICROSERVICES Why, Monolithic vs MicroServices, scaling the Monolith /! If you already know about microservices, jump to slide 28
  4. Microservices (2014) In short, the microservice architectural style is an approach to developing a single application as a suite of small services, each running in its own process and communicating with lightweight mechanisms, often an HTTP resource API. These services are built around business capabilities and independently deployable by fully automated deployment machinery. There is a bare minimum of centralized management of these services, which may be written in different programming languages and use different data storage technologies. Martin Fowler, James Lewis
  5. THE MONOLITHIC STYLE • An application built as a single unit, 3 main parts – client-side UI, a database, and a server-side application – code specific to each layer (UI, business logic, DB) – generally a minimum of 3 languages • This server-side application is a monolith – a single logical executable – any change requires building and deploying a new version • Locally built and tested on devs’ machines • CI/CD pipeline to secure production
  6. SCALING THE MONOLITH • Several Monolith instances behind a load balancer + Quickest way to scale + High availability - Routing traffic complexity - Very large code base - Change cycles tied together - Limited scalability due to lack of modularity (diagram: clients → load balancer → monolith instances)
  7. MODULARIZING THE MONOLITH • Specialized instances of a single codebase – Modular invocation (diagram: clients → load balancers → monolith instances specialized for real-time business logic, authentication and batches) - Re-engineering alternative to a rewrite - Generally does the trick - Prepares for the future by introducing “APIs”
  8. MONOLITHIC VERSUS MICROSERVICES http://martinfowler.com/articles/microservices.html A monolithic application puts all its functionality into a single process… … and scales by replicating the monolith on multiple servers A microservices architecture puts each element of functionality into a separate service… … and scales by distributing these services across servers, replicating as needed
  9. GILT TESTIMONIAL http://www.infoq.com/news/2015/04/scaling-microservices-gilt
  10. GILT TESTIMONIAL http://www.infoq.com/news/2015/04/scaling-microservices-gilt
  11. GOOGLE TRENDS
  12. STATE OF THE ART • Pioneers : Amazon, Netflix, Twitter – Democratized the Microservices architecture style – Started opensourcing their building blocks in 2010 • Today – Large corporations for complex systems that need to evolve frequently – Digital startups to ensure their core business will scale and embrace the long tail
  13. AMAZON DNA « Services by design », Jeff Bezos, 2002
  14. NETFLIX ECOSYSTEM • 100s of microservices • 1,000s of daily production changes • 10,000s of instances • 100,000s of customer interactions per minute • 1,000,000s of customers • 1,000,000,000s of metrics • 10,000,000,000s of hours streamed • 10s of operations engineers http://fr.slideshare.net/AmazonWebServices/dvo203-the-life-of-a-netflix-engineer-using-37-of-the-internet
  15. 200 – DESIGNING FOR MICROSERVICES Common characteristics, State of the art
  16. COMMON CHARACTERISTICS 1. Componentization via Services 2. Organized around Business Capabilities 3. Products not Projects 4. Smart endpoints and dumb pipes 5. Decentralized governance 6. Decentralized data management 7. Infrastructure automation 8. Design for failure 9. Evolutionary design http://martinfowler.com/articles/microservices.html
  17. 1. COMPONENTIZATION VIA SERVICES • Services as components rather than libraries • Services avoid tight coupling by using explicit remote call mechanisms. • Services are independently deployable and scalable • Each service also provides a firm module boundary – allowing for different services to be written in different languages, and be managed by different teams. • A microservice may consist of multiple processes – that will always be developed and deployed together, – Ex : an application process and a database that's only used by that service.
  18. 2. ORGANIZED AROUND BUSINESS CAPABILITIES Any organization that designs a system will produce a design whose structure is a copy of the organization's communication structure. Melvin Conway, 1967.
  19. 2. ORGANIZED AROUND BUSINESS CAPABILITIES • Microservices to solve Conway’s anti-pattern – Cross-functional teams… organized around capabilities
  20. 3. PRODUCTS NOT PROJECTS • Standard project model – deliver pieces of software which are then considered to be completed, – hand over to a maintenance organization and disband the project team • The Microservices style – a team should own a product over its full lifetime – Amazon : You build => You run it
  21. 4. SMART ENDPOINTS AND DUMB PIPES • Be as decoupled and as cohesive as possible – own domain logic, – act more as filters in the classical Unix sense – using simple RESTish protocols and lightweight messaging • Smarts live in the services, not in-between the endpoints – No central tool / bus that includes sophisticated routing, transformations, process, business rules • Pre-requisite : turn the chatty in-process communication of the monolith into coarser-grained network messaging
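A minimal sketch of what a “smart endpoint” can look like in practice, assuming Spring Boot / Spring MVC (the deck does not prescribe a framework): the service owns its domain logic and exposes it over a plain REST resource, with no intelligent bus in between. The Order resource is hypothetical.

```java
// Minimal "smart endpoint" sketch: the service owns its domain logic and exposes it
// over a plain REST resource -- no smart bus in between. Spring MVC is an assumption;
// the Order domain is hypothetical.
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RequestMethod;
import org.springframework.web.bind.annotation.RestController;

@RestController
@RequestMapping("/orders")
public class OrderController {

    @RequestMapping(value = "/{id}", method = RequestMethod.GET)
    public Order getOrder(@PathVariable("id") String id) {
        // Domain logic lives inside the service, not in a central routing/transformation layer
        return new Order(id, "CONFIRMED");
    }

    public static class Order {
        public final String id;
        public final String status;
        public Order(String id, String status) { this.id = id; this.status = status; }
    }
}
```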
  22. 5. DECENTRALIZED GOVERNANCE • The common microservices practice is to choose the best tool for each job – Tools are shared inside and outside the organization – Focus on common problems : data storage, IPC, infrastructure automation – Example : Netflix opensource libraries • Favor independent evolution – Consumer-driven Service contracts – Tolerant Reader pattern • Minimal overheads create an opportunity to have teams responsible for all aspects – build their microservices and operate them with 24/7 SLAs
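A minimal sketch of the Tolerant Reader pattern mentioned above, assuming Jackson 2.x (not prescribed by the deck): the consumer binds only the fields it needs and ignores anything the producer adds later, so both sides can evolve independently. CustomerDto and the sample payload are hypothetical.

```java
// Tolerant Reader sketch with Jackson 2.x: the consumer only binds the fields it
// actually needs and ignores anything new the producer adds later.
// CustomerDto and the payload are hypothetical.
import com.fasterxml.jackson.annotation.JsonIgnoreProperties;
import com.fasterxml.jackson.databind.ObjectMapper;

public class TolerantReaderExample {

    @JsonIgnoreProperties(ignoreUnknown = true)   // unknown fields do not break deserialization
    public static class CustomerDto {
        public String id;
        public String email;
    }

    public static void main(String[] args) throws Exception {
        String payload = "{\"id\":\"42\",\"email\":\"jane@example.com\",\"loyaltyTier\":\"GOLD\"}";
        CustomerDto customer = new ObjectMapper().readValue(payload, CustomerDto.class);
        System.out.println(customer.id + " " + customer.email);  // loyaltyTier is silently ignored
    }
}
```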
  23. 6. DECENTRALIZED DATA MANAGEMENT • No unique data model approach – the conceptual model differs between microservices – Reinforce the separation of concerns • Decentralized data storage – Polyglot persistence is frequent in microservices architectures • Manage inconsistencies via the business practices in place throughout the organization – Common design : reversal processes versus 2PC distributed transactions
  24. 6. DECENTRALIZED DATA MANAGEMENT
  25. 7. INFRASTRUCTURE AUTOMATION • CI/CD, Real-time monitoring, global system and fine grained resources dashboards, • Has become a standard practice – thanks to public cloud providers, – but also opensource tools • a QA stake for monoliths • a pre-requisite for microservices
  26. 8. DESIGN FOR FAILURE • Any service call can fail – async consumption • Best practice : 1 to ZERO sync call – Circuit Breaker pattern • Ex : Netflix Stack (Hystrix / Eureka / Ribbon) – /! provide implementations for various languages • Infrastructure pre-requisites – monitor and restore at scale – simulation tools • Ex : Netflix Simian Army (Chaos Monkey)
  27. 9. EVOLUTIONARY DESIGN • More granular release planning • Goal : change tolerance – Identify change impacts quickly (via automation, service contracts, automated dependency maps) – Fix rather than revert – In case of mandatory breaking change, use versioning • a last resort option in the microservices world
  28. 200 – RESILIENCE Scaling Microservices
  29. SOLID COMMUNICATIONS • Choose a messaging infrastructure adapted to your business – RPC / REST style, – Request/Response, Streaming – HTTP, HTTP2, TCP, UDP • Support async consumption / aggregation – Leverage parallel code structures on your clients – ES7 (BabelJS, Google Traceur), go, Scala, Java8 • Ease consumption via client SDKs – Automated generation from API definitions or IDL – Web APIs : Swagger (OADF), RAML – RPC style : Google gRPC, Twitter Finagle, Apache Thrift
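To illustrate the async consumption / aggregation point with one of the languages listed on the slide (Java 8), here is a hedged sketch using CompletableFuture: two downstream calls run in parallel and their results are merged. fetchProfile and fetchOrders are hypothetical stand-ins for real HTTP client calls.

```java
// Sketch of async consumption / aggregation with Java 8 CompletableFuture:
// two downstream microservices are called in parallel and their results merged.
// fetchProfile / fetchOrders are hypothetical stand-ins for real HTTP client calls.
import java.util.concurrent.CompletableFuture;

public class AsyncAggregation {

    static CompletableFuture<String> fetchProfile(String userId) {
        return CompletableFuture.supplyAsync(() -> "profile-of-" + userId);
    }

    static CompletableFuture<String> fetchOrders(String userId) {
        return CompletableFuture.supplyAsync(() -> "orders-of-" + userId);
    }

    public static void main(String[] args) {
        CompletableFuture<String> page = fetchProfile("42")
                .thenCombine(fetchOrders("42"),
                        (profile, orders) -> profile + " + " + orders);  // merge both responses
        System.out.println(page.join());
    }
}
```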
  30. SOLID COMMUNICATIONS • A basic REST client is not enough for microservices • Provide a larger set of features – load balancing, – fault tolerance, – caching, – service discovery, – multiple transports and protocols (HTTP, TCP, streaming) • Example : Netflix Ribbon
  31. RESILIENCE IN DISTRIBUTED SYSTEMS • Failures are common • An application that depends on 30 services – where each service has 99.99% uptime – 99.99%^30 ≈ 99.7% uptime – 0.3% of 1 billion requests = 3,000,000 failures – 2+ hours downtime/month even if all dependencies have excellent uptime. • Reality is generally worse.
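The slide’s arithmetic, spelled out as a tiny Java check (numbers only; no claim about any particular system):

```java
// The slide's availability math: 30 dependencies at 99.99% uptime each.
public class CompoundAvailability {
    public static void main(String[] args) {
        double perService = 0.9999;                       // 99.99% uptime per dependency
        double overall = Math.pow(perService, 30);        // ~0.997 -> ~99.7% for the whole app
        double failures = (1 - overall) * 1_000_000_000L; // ~3,000,000 failed requests per billion
        System.out.printf("overall=%.4f failures=%.0f%n", overall, failures);
    }
}
```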
  32. WHEN 1 BACKEND BECOMES LATENT
  33. AT 50+ REQ/S, ALL REQUEST THREADS CAN BLOCK IN SECONDS
  34. NETFLIX HYSTRIX • Latency and Fault Tolerance library – designed to isolate points of access to remote systems, services and 3rd party libraries, – stop cascading failure and enable resilience where failure is inevitable. – real time monitoring via Dashboard
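A minimal HystrixCommand sketch using the standard com.netflix.hystrix API: the remote call is isolated in its own thread pool, cut short by a timeout or an open circuit, and replaced by a fallback when it fails. The recommendation service and the fallback value are hypothetical.

```java
// Minimal HystrixCommand sketch: the remote call runs in an isolated thread pool,
// is cut off by a timeout / open circuit, and falls back to a static response.
// The recommendation service and its fallback value are hypothetical.
import com.netflix.hystrix.HystrixCommand;
import com.netflix.hystrix.HystrixCommandGroupKey;

public class RecommendationCommand extends HystrixCommand<String> {

    private final String userId;

    public RecommendationCommand(String userId) {
        super(HystrixCommandGroupKey.Factory.asKey("RecommendationService"));
        this.userId = userId;
    }

    @Override
    protected String run() throws Exception {
        // A real implementation would call the remote recommendation microservice here
        return callRemoteRecommendationService(userId);
    }

    @Override
    protected String getFallback() {
        // Served when the call fails, times out, is rejected, or the circuit breaker is open
        return "top-sellers";
    }

    private String callRemoteRecommendationService(String userId) {
        return "personalized-recommendations-for-" + userId;
    }
}
```

A caller would run it with new RecommendationCommand("42").execute() for a blocking call, or queue() / observe() for asynchronous use.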
  35. HYSTRIX DASHBOARD
  36. HYSTRIX DASHBOARD • Circuit Breaker : Open
  37. HYSTRIX DASHBOARD
  38. 300 – SERVICES DISCOVERY Scaling Microservices
  39. SERVICES DISCOVERY • Typically, a microservice gets created and destroyed often – It must be reconfigured on the fly – In near real-time, others should find it • Routing messages is something of a challenge – Service discovery (and registration) – Load balancing (ingress and egress calls) • Popular configuration services – Apache ZooKeeper, HashiCorp Consul, CoreOS Etcd, Netflix Eureka
  40. INTRODUCING PATTERNS • Cloud Native Applications Architecture Patterns – Core of the solution to build scalable applications with a microservices architecture style • Microservices Discovery patterns 1. Discovery via DNS 2. Discovery via a central Load Balancer 3. Discovery via local Load Balancers 4. Discovery via embedded Load Balancers Cloud Native PATTERN
  41. DISCOVERY VIA A DNS SERVICE • dynamic DNS (BIND-DLZ) • DNS Service Records (SRV) • programmatic DNS-aaS – Amazon Route53, Apache Mesos-DNS • leverage CNAMEs to abstract traffic routing • but… propagation delays, random routing, no prioritization, not consumer oriented → can do the trick for the first 10s of microservices – /! Resilience – /! Slow upscaling, no 1 to ZERO Cloud Native ANTI PATTERN
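For illustration, a hedged sketch of what DNS-based discovery looks like from a Java client, using the JDK’s JNDI DNS provider to read SRV records. The _web._tcp.example.com name is hypothetical; the records would be published by whichever dynamic DNS / DNS-aaS you run (Route53, Mesos-DNS, …).

```java
// Sketch of client-side discovery through DNS SRV records using the JDK's JNDI DNS provider.
// The service name _web._tcp.example.com is hypothetical; real records would be published
// by your dynamic DNS / DNS-aaS (Route53, Mesos-DNS, ...).
import java.util.Hashtable;
import javax.naming.directory.Attribute;
import javax.naming.directory.InitialDirContext;

public class DnsSrvLookup {
    public static void main(String[] args) throws Exception {
        Hashtable<String, String> env = new Hashtable<>();
        env.put("java.naming.factory.initial", "com.sun.jndi.dns.DnsContextFactory");

        InitialDirContext dns = new InitialDirContext(env);
        Attribute srv = dns.getAttributes("_web._tcp.example.com", new String[] { "SRV" })
                           .get("SRV");
        // Each SRV record reads "priority weight port target", e.g. "10 5 8080 node1.example.com."
        for (int i = 0; i < srv.size(); i++) {
            System.out.println(srv.get(i));
        }
    }
}
```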
  42. APACHE MESOS-DNS
  43. DISCOVERY VIA A CENTRAL LOAD BALANCER • either a home-made LB (Nginx, HA-proxy…) – /! Secure your SPOF • or leverage a public cloud provider LB – Rock-solid AWS ELB, automation, model driven, autoscaling – Route53 CNAMEs to ELBs, Private ELB in VPCs • Limited « Smartness » – Version, HTTP header, URI path routing • Limited automation – limited health-check customization, no circuit breaker • falls short for microservices at scale Cloud Native ANTI PATTERN
  44. DOCKER INTERLOCK PLUGIN https://github.com/ehazlett/interlock
  45. DISCOVERY VIA EMBEDDED LOAD BALANCERS • In-process Load Balancer – Embedded in each service consumer – Reads information from the Service Registry + lightweight + composable - language dependency • ex: Netflix Ribbon – java glue around Eureka + Hystrix (+ gateway) Cloud Native PATTERN
  46. NETFLIX RIBBON • In-Process Fault Tolerant Smart Load Balancer – DIY from Netflix github repos – or leverage Spring Cloud http://start.spring.io/ (diagram: dynamic load balancer with a rules library, Eureka client, HTTP/TCP/UDP transport based on RxNetty, Hystrix; microservices register in Netflix Eureka, the service registry, which Ribbon reads)
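A hedged sketch of the “leverage Spring Cloud” option: a @LoadBalanced RestTemplate is backed by Ribbon and Eureka, so calls to a logical service name are resolved and load-balanced in-process. The recommendation-service name and URI are hypothetical.

```java
// Sketch of the embedded load balancer via Spring Cloud: the RestTemplate is backed by
// Ribbon + Eureka, so "recommendation-service" is resolved and load-balanced in-process.
// The service name and URI are hypothetical.
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.client.discovery.EnableDiscoveryClient;
import org.springframework.cloud.client.loadbalancer.LoadBalanced;
import org.springframework.context.annotation.Bean;
import org.springframework.web.client.RestTemplate;

@SpringBootApplication
@EnableDiscoveryClient                 // registers this app in Eureka and enables lookups
public class CatalogApplication {

    @Bean
    @LoadBalanced                      // client-side (Ribbon) load balancing by logical service name
    RestTemplate restTemplate() {
        return new RestTemplate();
    }

    public static void main(String[] args) {
        SpringApplication.run(CatalogApplication.class, args);
    }
}

// Somewhere in a consumer:
//   String reco = restTemplate.getForObject("http://recommendation-service/recommendations/42", String.class);
```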
  47. DISCOVERY VIA LOCAL LOAD BALANCERS • Load balancer component on localhost – also referred to as the « Sidecar » approach – ex: AirBnB SmartStack + leverage HA-proxy / nginx skills + easy to integrate (any language, any IT tools) + TCP, HTTP, HTTP2 support - heavier than in-process load balancers (IPC intermediary) - haphazard load balancing (individual vs sync’ed) - chatty at scale (numerous health checks) → Leverage as an intermediary step Cloud Native PATTERN
  48. AIRBNB SMARTSTACK http://nerds.airbnb.com/smartstack-service-discovery-cloud/
  49. AIRBNB SMARTSTACK 1. MicroS registers to a local Synapse process 2. Local Synapse retrieves dependency locations from ZooKeeper and registers watches 3. Local Synapse creates the local HAProxy configuration 4. MicroS registers to a local Nerve process 5. Local Nerve registers the MicroS into ZooKeeper 6. ZooKeeper fires events to Synapses through watches 7. Each Synapse instance updates the HA-proxy configuration 8. MicroS sends traffic through its local proxy (egress) Note : Can be extended with a reverse proxy for ingress (microservice Gateway Pattern)
  50. 200 – MICROSERVICES PLATFORMS Scaling Microservices at Yelp, Netflix
  51. MEET PAASTA • Yelp’s PaaS for Microservices • integrates existing opensource components – Docker : code delivery and containment – Apache Mesos : code execution and scheduling (incl. Docker) – Marathon for managing long-running services – Chronos for running things on a timer (nightly batches) – SmartStack for service registration and discovery – Sensu for monitoring/alerting – Jenkins (optionally) for continuous deployment • open-sourced on November 10th, 2015 – https://github.com/Yelp/paasta/
  52. PAASTA DEPLOYMENT PIPELINE (diagram: devops package the microservice as a Docker image; soa-configs, versioned in git, describe it; Jenkins (optional) and the PaaSTA CLI drive Apache Mesos + Marathon and Chronos; Airbnb SmartStack handles registration and discovery; Sensu monitoring and Facebook Scribe logging) - Devs prepare a Docker image - DSL to describe how the service is to be run (cpu, mem, instances, nerve, bounce, volumes, monitoring, blacklists, healthchecks) - git commit the desired state (DSL) - PaaSTA CLI to provision your microservices
  53. THE NETFLIX MICROSERVICES JOURNEY • Migration to AWS (AWS re:Invent 2015)
  54. NETFLIX STACK • Solid communications – Hystrix : latency and fault tolerance library – Eureka : registry for resilient mid-tier load balancing and failover – Ribbon : client-side smart load balancer – Servo : monitoring library – EVCache : distributed in-memory data store for AWS EC2 – RxNetty : reactive extension adapter for Netty – Karyon : blueprint of a cloud-ready microservice – Zuul : edge gateway – Falcor : JS library for efficient data fetching
  55. NETFLIX STACK : AUTOMATION TOOLS • Asgard (transitioning to Spinnaker) – specialized AWS console (app deployments, management) • Spinnaker – microservices console (clusters, pipelines) • Atlas – near realtime operational insights • Vector – exposes hand-picked high resolution metrics from Performance Co-Pilot (PCP) hosts • SimianArmy – services (Monkeys) in the cloud for generating various kinds of failures, detecting abnormal conditions, and testing our ability to survive them • Dependencies automatically documented from real traffic analysis
  56. DEPENDENCY GRAPH BASED ON REAL TRAFFIC
  57. VECTOR
  58. NETFLIX FALCOR • JSON Virtual Resources to consume Microservices http://netflix.github.io/falcor/starter/what-is-falcor.html
  59. 200 – MICROSERVICES PLATFORMS Best of breed architecture
  60. MICROSERVICES PLATFORM • Devtime definition – Resources, Ports – Dependencies • Deployment configuration – System desired state – Scalability model • Runtime – Discovery, Monitoring – Fill the gap between current and desired state • Requirements for Cloud Native Microservices – Packager, Deployment pipeline, Scheduler
  61. MICROSERVICES PLATFORM (diagram: a microservices platform ties together the devtime definition and executable artifact, the deployment configuration, and the runtime – gateway, load balancer, discovery, deployment, monitoring with logs, metrics and health checks – on top of containers, VMs, app servers or a PaaS)
  62. MEET NETFLIX SPINNAKER • Multi-cloud continuous delivery platform – Release changes with high velocity and confidence – AWS, Google, Cloud Foundry (Azure coming) • Open-sourced on November 16th, 2015 • Deployment manager + cluster manager
  63. SPINNAKER DEPLOYMENT PIPELINE
  64. SPINNAKER MANUAL STEP
  65. SPINNAKER CLUSTER MANAGEMENT
  66. SPINNAKER INTERNALS http://spinnaker.io/
  67. 300 – MICROSERVICES GATEWAY
  68. MICROSERVICES GATEWAY • Edge service to access back-end microservices – Authentication and Security – Dynamic routing – Insights and Monitoring – Stress testing : gradually increasing the traffic to a cluster in order to gauge performance. – Load Shedding : allocating capacity for each type of request and dropping requests that go over the limit. – Static Response handling : building some responses directly at the edge instead of forwarding them to an internal cluster – Multi-region Resiliency : routing requests across regions in order to move the edge closer to users
  69. MICROSERVICES GATEWAY • Integrate with the other building blocks of your Microservices communication infrastructure – Example : Netflix Zuul • Think also Bridge / Adapter – REST ↔ RPC : gRPC-gateway, Finatra • Lots of building blocks – Embedded gateways (pick your dev language’s favorite gateway) – HA-Proxy plugins – Nginx, Nginx OpenResty plugins – Dedicated process with central configuration
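A small sketch of gateway-side logic with Netflix Zuul, assuming Zuul 1.x as used by Spring Cloud Netflix (the header name is hypothetical): a “pre” filter that tags every request before it is routed to a back-end microservice.

```java
// Sketch of a Zuul edge filter (Spring Cloud Netflix + @EnableZuulProxy assumed elsewhere):
// a "pre" filter that tags every incoming request before it is routed to a back-end microservice.
// The header name is hypothetical.
import com.netflix.zuul.ZuulFilter;
import com.netflix.zuul.context.RequestContext;

public class CorrelationIdFilter extends ZuulFilter {

    @Override
    public String filterType() {
        return "pre";                 // run before the request is routed
    }

    @Override
    public int filterOrder() {
        return 1;
    }

    @Override
    public boolean shouldFilter() {
        return true;                  // apply to every request
    }

    @Override
    public Object run() {
        RequestContext ctx = RequestContext.getCurrentContext();
        ctx.addZuulRequestHeader("X-Correlation-Id",
                java.util.UUID.randomUUID().toString());
        return null;                  // return value is ignored by Zuul
    }
}
```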
  70. MASHAPE KONG
  71. MASHAPE KONG DEMO a programmatic gateway built as Nginx plugins
  72. 200 – ORGANIZE FOR MICROSERVICES
  73. COMMITTED TEAMS • Ownership core to team organization – built into the management of the organization – make sure that teams have sufficient time to truly own the applications they are in charge of • “Products versus Projects” principle • “Functional versus Divisional” organisations – And give them a 360° view on operations
  74. SOUNDCLOUD TESTIMONIAL
  75. SOUNDCLOUD TESTIMONIAL • Actual workflow to go live – 2 months, 11 steps – The main bottleneck is the back-and-forth between front-end and back-end development – Only 11 days are spent on actual development work
  76. SOUNDCLOUD TESTIMONIAL • Decision : pairing back-end and front-end devs – pair fully dedicated to a feature until its completion – Individually, each person ended up spending more time doing work per feature
  77. SOUNDCLOUD TESTIMONIAL Designer, Product manager, and front-end Developer working close to each other
  78. SOUNDCLOUD TESTIMONIAL • The irreducible complexity of the monolith – Why do we need Pull Requests? – Why do people make mistakes so often? – Why is the code base so complex? – Why do we need a single code base to implement the many components? – Why can’t we have economies of scale for multiple, smaller, systems?
  79. SOUNDCLOUD TESTIMONIAL • New features isolated in dedicated microservices, separate from the monolith • New organization with teams of 3 to 4 people • Each team is responsible for deciding whether parts of the Monolith are extracted and rewritten, or kept
  80. COMMITTED TEAMS • Ownership core to team organization – built into the management of the organization – make sure that teams have sufficient time to truly own the applications they are in charge of • “Products versus Projects” principle • “Functional versus Divisional” organisations – And give them a 360° view on operations
  81. FAQ How big should my microservices be? RESTish microservices? Isn’t it SOA? The end of the monoliths?
  82. HOW BIG ? • To ensure ownership, each team encompasses all traditional product functions – functional vs Divisional organisations – Product Management, Development, QA, Documentation, Support • Sizing depends on the system you’re building – Amazon 2PT principle - Two Pizza Teams – 6 to 10 people to build/deploy/maintain a microservice – an average microservice at Gilt consists of 2000 lines of code, 5 source files, and is run on 3 instances in production
  83. RESTISH MICROSERVICES ? • REST – Web lingua franca, 100% interoperable – development cost is generally higher – best practice : provide client SDKs (ex : generated from Swagger/RAML or other API description languages) – performance issues if not well-designed (chattiness) – best practice : experience-based and coarser-grained APIs • RPC – optimized communication via binary formats – automated generation from IDL, polyglot by default – integrated support for multiple scenarios : req/response, streaming, bi-directional streaming
  84. RESTISH MICROSERVICES ? • RPC or REST style depends on the system you’re building and existing skillset • Whatever the style, your microservices architecture MUST provide – Services Discovery, – Reliable Communications, – Operations insights (logs, monitoring, alerts, real time analysis)
  85. GOT IT, BUT ISN’T IT SOA ? • SOA so what ? – Enterprise SOA – Event-driven architecture (Pub/Sub) – Streaming Services (Realtime, TimeSeries) – Container-Services (a la Docker) – Nano-services (a la AWS Lambda) • Simply stated : Microservices are a SOA style for a usually large system whose first goal is scalability → in detail, let’s see how microservices differ from…
  86. MICROSERVICES VS ENTERPRISE SOA • Enterprise SOA is often seen as – multi-year initiatives, costs millions – complex protocols with productivity and interoperability challenges – central governance model that inhibits change • Enterprise SOA is more about integrating siloed monoliths – generally via a smart and centralized service bus • Microservices is scalable SOA – an architectural style to design, develop and deploy a large and complex system, so that it is easier to scale and evolve
  87. VS ENTERPRISE SOA – Microservices differentiators: √ Componentization via Services √ Organized around Business Capabilities • Products not Projects • Smart endpoints and dumb pipes • Decentralized governance • Decentralized data management • Infrastructure automation √ Design for failure √ Evolutionary design
  88. VS CORBA • CORBA is a communication middleware, relying on a common set of interface definitions (IDL) and a shared object request broker • See microservices as CORBA done right – Much more lightweight – Centered on Service Contracts rather than a global homogeneous message infrastructure – Much more scalable, flexible and dynamic thanks to programmatic infrastructure and CI/CD – Built out of well-known OSS components
  89. VS EVENT DRIVEN ARCHITECTURE • EDA fits well with document-oriented systems and information flows • Communication between microservices can be a mix of RPC (i.e., point-to-point calls) and EDA calls • See EDA as a communication pattern for your microservices • Can address choreography, orchestration, pipeline requirements
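As one hedged illustration of EDA-style communication between microservices (the deck does not name a broker; Apache Kafka is an assumption here), a minimal producer publishing a domain event that any interested microservice can consume independently. The topic name and payload are hypothetical.

```java
// Hedged sketch of event-driven communication between microservices:
// the order service publishes an "order-created" event; consumers subscribe independently.
// Kafka is an assumption (not prescribed by the deck); topic name and payload are hypothetical.
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class OrderEventPublisher {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // Fire-and-forget publication: the producer does not know who consumes the event
            producer.send(new ProducerRecord<>("order-created", "order-42",
                    "{\"orderId\":\"42\",\"status\":\"CREATED\"}"));
        }
    }
}
```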
  90. VS STREAMING SERVICES • Streaming services fit well – if you have large volumes (log entries, IoT), – and/or if you aim at real time analysis • Data ingestion endpoint of a microservice – Real-time analysis of mobile app usage (AWS re:Invent 2015 – Zillow)
  91. VS CONTAINER SERVICES • Containers provide infrastructure components to deploy microservices independently of one another • See Container Services as a building block of your global microservices platform – see 200 – Designing for Microservices
  92. VS NANOSERVICES • Nanoservices are small pieces of code, i.e. functions – Example : AWS Lambda, Auth0 webtasks • A microservice may leverage 1+ nanoservices
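A hedged sketch of such a nanoservice using AWS Lambda’s official Java handler interface (RequestHandler from aws-lambda-java-core); the request/response types are hypothetical.

```java
// Sketch of a "nanoservice": a single function deployed on AWS Lambda, implemented with the
// official Java RequestHandler interface. GreetingRequest/GreetingResponse are hypothetical;
// a microservice could call many such functions.
import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;

public class GreetingFunction
        implements RequestHandler<GreetingFunction.GreetingRequest, GreetingFunction.GreetingResponse> {

    public static class GreetingRequest {
        public String name;
    }

    public static class GreetingResponse {
        public String message;
    }

    @Override
    public GreetingResponse handleRequest(GreetingRequest request, Context context) {
        GreetingResponse response = new GreetingResponse();
        response.message = "Hello, " + request.name;
        return response;
    }
}
```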
  93. SUM UP
  94. MICROSERVICES PATTERNS • Solid communications – Fault-tolerant libraries – Service discovery • Committed teams – Devops culture – Code/Test/Deploy/Support, 24/7 – Automation • Ownership – Organisation aligned with the strategy – Insights via real time monitoring
  95. PREPARE YOUR MONOLITHS FOR CHANGE • Velocity of innovation for complex systems – Keep your monolith as is if you don’t need to speed up feature delivery • To prepare for the journey • switch from a layered architecture to internal APIs, • automate integration and deployment, • reorganize from divisional to functional teams committed to the business and owning their code
  96. MICROSERVICES IS A LONG JOURNEY • Several years to implement – Communications, infrastructure, automation, monitoring, team organization • Leverage existing tools and stacks – Infrastructure as a Service and Containers – Continuous delivery platform – Netflix stack, DIY from OSS building blocks
  97. LONG IS THE ROAD TO MICROSERVICES • Where to start from ? – leverage OSS stacks, programmatic IaaS and containers + continuous delivery – mandatory CEO and business teams support • Multiple languages support – provide multiple glues, a la Spring Cloud for JVMs… – leverage « sidecar » approaches, ex : Netflix Prana • « Technical debt » at scale – numerous building blocks requiring continuous updates – automate, automate, automate • Dependency Hell – keep control of your microservices segmentation – dynamic cartography • Product Ownership – is your organisation ready ? – change management
  98. THE END OF MONOLITHS ? http://www.stavros.io/posts/microservices-cargo-cult/
  99. TO GO FURTHER microservices.io
  100. REFERENCES Microservices, Martin Fowler, James Lewis – http://martinfowler.com/articles/microservices.html A day in the life of a Netflix Engineer using 37% of the internet – http://fr.slideshare.net/AmazonWebServices/dvo203-the-life-of-a-netflix-engineer-using-37-of-the-internet DZone – Getting started with Microservices – https://dzone.com/refcardz/getting-started-with-microservices SoundCloud – How we ended up with microservices – http://philcalcado.com/2015/09/08/how_we_ended_up_with_microservices.html gRPC – boilerplate to high-performance scalable APIs – http://fr.slideshare.net/AboutYouGmbH/robert-kubis-grpc-boilerplate-to-highperformance-scalable-apis-codetalks-2015
  101. REFERENCES SmartStack: Service Discovery in the Cloud – http://nerds.airbnb.com/smartstack-service-discovery-cloud/ Mesosphere advanced course – https://open.mesosphere.com/advanced-course/ Opensource Service Discovery – http://jasonwilder.com/blog/2014/02/04/service-discovery-in-the-cloud/ Why Use a Container Sidecar for Microservices? – https://www.voxxed.com/blog/2015/01/use-container-sidecar-microservices/
  102. THANK YOU Stève SFARTZ • technical program manager • enterprise architect, web API specialist, • scalable & interoperable platforms builder • dev & ops teams lead Keeping in touch • steve@sfartz.com • twitter : @SteveSfartz
  103. BACK IN 1981…
  104. … THAT WAS MICRO Sinclair ZX81, introduced in March 1981 US$ 149.95 assembled 500,000 sold in the first 12 months CPU: NEC Z-80A, 3.25 MHz RAM: 1K, 64K max OS: ROM BASIC Display: 22 x 32 text, hooks up to a TV 2 ports: memory, cassette 1 peripheral: Sinclair thermal printer

Editor's Notes

  1. Fail Fast, Fail Silent, Fallback. Wrapping all calls to external systems (or “dependencies”) in command objects that typically execute within a separate thread. Timing-out calls that take longer than thresholds you define. Maintaining a small thread-pool for each dependency; if it becomes full, requests are immediately rejected instead of queued up. Measuring successes, failures (exceptions thrown by the client), timeouts, and thread rejections. Tripping a circuit-breaker to stop all requests to a particular service for a period of time, either manually or automatically (error percentage threshold). Performing fallback logic when a request fails, is rejected, times out, or short-circuits. Monitoring metrics and configuration changes in near real-time.
  2. http://mesosphere.github.io/mesos-dns/ Mesos-DNS supports service discovery in Apache Mesos clusters. It allows applications and services running on Mesos to find each other through the domain name system (DNS), similarly to how services discover each other throughout the Internet. Applications launched by Marathon or Aurora are assigned names like search.marathon.mesos or log-aggregator.aurora.mesos. Mesos-DNS translates these names to the IP address and port on the machine currently running each application. To connect to an application in the Mesos datacenter, all you need to know is its name. Every time a connection is initiated, the DNS translation will point to the right machine in the datacenter. Mesos-DNS is designed to be a minimal, stateless service that is easy to deploy and maintain. The figure below depicts how it works: Mesos-DNS periodically queries the Mesos master(s), retrieves the state of all running tasks from all running frameworks, and generates DNS records for these tasks (A and SRV records). As tasks start, finish, fail, or restart on the Mesos cluster, Mesos-DNS updates the DNS records to reflect the latest state. The configuration of Mesos-DNS is minimal. You simply point it to the Mesos masters at launch. Frameworks do not need to communicate with Mesos-DNS at all. Applications and services running on Mesos slaves can discover the IP addresses and ports of other applications they depend upon by issuing DNS lookup requests or by issuing HTTP request through a REST API. Mesos-DNS replies directly to requests for tasks launched by Mesos. For DNS requests for other hostnames or services, Mesos-DNS uses an external nameserver to derive replies. Alternatively, you can configure your existing DNS server to forward only the requests for Mesos tasks to Mesos-DNS. Mesos-DNS is simple and stateless. It does not require consensus mechanisms, persistent storage, or a replicated log. This is possible because Mesos-DNS does not implement heartbeats, health monitoring, or lifetime management for applications. This functionality is already available by the Mesos master, slaves, and frameworks. Mesos-DNS can be made fault-tolerant by launching with a framework like Marathon, that can monitor application health and re-launch it on failures. On restart after a failure, Mesos-DNS retrieves the latest state from the Mesos master(s) and serves DNS requests without further coordination. It can be easily replicated to improve availability or to load balance DNS requests in clusters with large numbers of slaves.
  3. https://bitcontainer.wordpress.com/2015/09/18/scaling-microservices-with-docker-compose-interlock-and-haproxynginx/
  4. Good to know : other sidecar approaches – Netflix Prana, Docker Ambassador container
  5. Yelp’s platform-as-a-service. It allows developers to declare, in config files, exactly how they want the code in their git repo to be built, deployed, routed, and monitored. PaaSTA powers a significant number of production services at Yelp, and has been running in production for more than 1.5 years. It even powers this very blog!
  6. Facebook Scribe : Scribe is a server for aggregating log data streamed in real time from a large number of servers.
  7. Spinnaker is composed of the following micro services: Clouddriver - encapsulates all cloud operations. Deck - User Interface. Echo - event service that forwards events from Spinnaker. Echo is responsible for triggering pipeline executions and forwarding pipeline events to listeners. Front50 - data store for pipelines, notifications and applications. Gate - service gateway responsible for providing an API to end users and UI. Igor - interface to Jenkins, Stash and GitHub. Orca - orchestration engine responsible for running Spinnaker pipelines and tasks. Rosco - bakery responsible for creating images for deployment. Rush - general purpose scripting engine
  8. Kong demo steps: git clone the Kong Vagrantfile → vagrant up → kong start → proxy a Web API → call the Web API → add a Rate Limiter → call the Web API again → kong stop → vagrant halt
  9. http://www.site-reliability-engineering.info/2014/04/what-is-site-reliability-engineering.html
  10. The actual flow was something like this: 1. Somebody comes up with an idea for a feature. They then write a fairly lightweight spec, with some screen mockups, and store it in a Google Drive document. 2. The spec stays in this document until somebody has time to actually work on it. 3. The very small design team would get the spec and design the user experience for it. This would then become a card in the Trello board owned by the Web team. 4. The card would sit on the Trello board for a while, at least a two-week iteration, until an engineer was free to pick it up. 5. The engineer would start working on it. After converting the design into a proper browser-based experience using fake/static data, the engineer would write down what changes in the Rails API they would need for this experience to work. This would go into Pivotal Tracker, the tool of choice for the App team. 6. The card would sit on Pivotal until somebody from the App team was free to look at it, often taking another two-week iteration. 7. The App team member would write the code, integration tests and everything required to get the API live. Then they would update the Trello issue, letting the Web team know their part was done. 8. The updated Trello card would sit in the backlog for a while more, waiting for the engineer in the Web team to finish whatever they started doing while waiting for the back-end work. 9. The Web team developer would make their client-side code match whatever kinks came from the back-end implementation, and would give the green light for a deploy. 10. As deployments were risky, slow, and painful, the App team would wait for several features to land into the master branch before deploying it to production. This means the feature would be sitting in source control for a couple of days, and that very frequently your feature would be rolled back because of a problem in a completely unrelated part of the code. 11. At some point, the feature finally gets deployed to production.
  11. http://www.site-reliability-engineering.info/2014/04/what-is-site-reliability-engineering.html
  12. Thanks DAD