
Building Applications with the Typesafe Reactive Platform at Skills Matter, London, November 17, 2015


The Typesafe Reactive Platform recently gained some interesting new features, geared towards advanced usage scenarios. We are going to introduce them in some detail and discuss their strengths and limitations.

Split Brain Resolver - define a strategy to survive network partitioning in your Akka Cluster application

Reactive Monitoring - get valuable metrics for your Akka actors

Play User Quota - restrict quality of service for some, in order to provide availability for all



  1. Building Applications with the Typesafe Reactive Platform — Lutz Huehnken, Solutions Architect, @lutzhuehnken
  2. What is the Reactive Platform (RP)? • RP is targeted at enterprises that are launching and maintaining Reactive applications in production. • A curated, certified build for simplified development and deployment. • Additional features not available in the open source projects.
  3. Reactive Platform Versioning — 15v01p05 (year, month, patch) = Scala 2.11.2, Akka 2.3.3, …; 15v09p01 = Scala 2.11.7, Akka 2.3.12, …
  4. Split Brain Resolver (slides courtesy of Konrad Malawski)
  5. Split Brain Resolver • A fundamental problem in all distributed systems. • SBR helps to make decisions; it is not a magic wand. • A set of pre-built strategies for when to down nodes in a cluster. • Strategies: Static Quorum (like ZooKeeper), Keep Majority, Keep Oldest, Keep Referee.
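The active strategy is selected through configuration. A minimal sketch in Akka's HOCON format, using the key names from the Akka documentation (the Split Brain Resolver was later open-sourced into Akka itself; the commercial Reactive Platform build used its own downing-provider class, so treat the exact class name as an assumption):

```hocon
# Enable the Split Brain Resolver as the cluster's downing provider
# (class name as in the open-source Akka docs; the commercial RP differed).
akka.cluster.downing-provider-class = "akka.cluster.sbr.SplitBrainResolverProvider"

akka.cluster.split-brain-resolver {
  # one of the pre-built strategies, e.g. static-quorum, keep-majority, keep-oldest
  active-strategy = keep-majority

  # how long members must be unreachable before the strategy acts
  stable-after = 20s
}
```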
  6. Heartbeats — node A sends and receives heartbeats.
  7. Heartbeats — from one side of a partition: "everyone is down!"
  8. Heartbeats — "`n-1` is down! I'll take over `A`!"
  9. Heartbeats — good if `n-1` really is down; bad if `n-1` is just very unresponsive. Fundamentally, it is hard to distinguish the two states in distributed systems.
  10. Static Quorum — a fixed quorum size (here 3, i.e. at least n/2 + 1); the side that reaches the quorum survives.
  11. Static Quorum — the side that cannot reach the quorum: "we need to down ourselves."
  12. Keep Majority (a.k.a. dynamic quorum).
  13. Keep Majority — the minority side: "we need to down ourselves."
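The core of the Keep Majority rule fits in one line of Scala. This is only an illustration of the decision, not the actual SBR code (which, for instance, breaks ties between equal halves deterministically rather than downing both sides):

```scala
// Keep Majority, sketched: a partition stays up only if it can still
// reach strictly more than half of the cluster's known members.
//   total     = number of cluster members before the partition
//   reachable = members this side can still see (including itself)
def keepMajority(total: Int, reachable: Int): Boolean =
  reachable * 2 > total

// A 5-node cluster splits 3 vs 2: the 3-node side survives,
// the 2-node side downs itself. On an even split both sides would
// down themselves here; the real strategy uses a tie-breaker.
```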
  14. Keep Referee — one node is designated the referee; `down-all-if-less-than-nodes` sets the minimum surviving partition size.
  15. Keep Referee — "can't see the referee node!" — that side downs itself.
  16. Keep Oldest — "can't see the oldest node!" — that side downs itself (`down-if-alone`). The oldest node can change if the "up until now oldest" node leaves the cluster; this is more dynamic than Keep Referee.
  17. Monitoring
  18. Monitoring Async Apps — New Challenges • Context is lost • Stack traces less useful • Expensive to collect all steps
  19. Monitoring Async Apps — Too Much Data • Which actors do we track? • What do we keep? • What do we filter out? • What do we aggregate?
  20. Instrumentation • Instrumented Reactive Platform • Configurable actor metrics • Actor-specific events • Traces across actors
  22. Play User Quotas
  23. Play User Quotas • Control the service level that you provide to your users. • Quotas let you track each user's usage and restrict access when usage exceeds limits that you set.
  24. Play User Quotas — Use Cases • On public websites, to stop users scraping your site. • When you provide a developer API for your website: APIs enforce rate limits, and to get access to higher limits developers need to verify their identity or pay a fee. • In your organization: provision and allocate resources fairly, and give internal users clear feedback when they exceed agreed usage. • To limit how much data users upload in a period of time, preventing your service from being overloaded. • To slow down login attempts or other sensitive actions.
  25. Play User Quotas • Per "account" — can be an IP address, a username, etc. • Works standalone and in a cluster!
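The per-account idea can be sketched as a fixed-window counter keyed by account. This is a generic illustration, not the Reactive Platform's Quota API (which, unlike this sketch, can also replicate usage counts across a cluster); all names below are made up:

```scala
import scala.collection.mutable

// Minimal per-account quota tracker: each account may perform at most
// `limit` actions per fixed time window of `windowMillis` milliseconds.
final class QuotaTracker(limit: Int, windowMillis: Long,
                         now: () => Long = () => System.currentTimeMillis()) {
  private final case class Window(start: Long, count: Int)
  private val windows = mutable.Map.empty[String, Window]

  /** True if the account may perform the action, false if over quota. */
  def tryAcquire(account: String): Boolean = synchronized {
    val t = now()
    val w = windows.get(account) match {
      case Some(win) if t - win.start < windowMillis => win // window still open
      case _                                         => Window(t, 0) // start fresh
    }
    val allowed = w.count < limit
    windows(account) = if (allowed) w.copy(count = w.count + 1) else w
    allowed
  }
}
```

The "account" key is whatever identifies a caller — an IP address, a username, an API key — which matches the "per account" wording on the slide.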
  26. ConductR
  27. Reactive for DevOps — What is ConductR? ConductR is a solution for deploying and managing reactive applications across a cluster of machines.
  28. Reactive for DevOps — Microservice Trade-offs (according to Martin Fowler): • Distribution • Eventual Consistency • Operational Complexity
  29. Reactive for DevOps — Microservice Trade-offs: You need a mature operations team to manage lots of services which are being redeployed regularly. And/or a tool that significantly simplifies that!
  30. Reactive for DevOps — Operational Complexity: So what do you really need in your operations environment to simplify things? • A convenient deployment format (e.g. a single-file format, ensuring consistency) • A convenient interface (to deploy, run, scale) • Service lookup • Resiliency
  31. Reactive for DevOps — Deployment Format: ConductR bundles • contain all library dependencies of your app • and the configuration • a SHA is generated and encoded in the file name • unique identification and a consistency check • easy to create, with sbt or shazar
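Declaring a bundle in sbt (with the sbt-bundle plugin) looks roughly like this. Only `BundleKeys.endpoints` appears in the deck itself; the other key names are recalled from the plugin's documentation and should be treated as assumptions:

```scala
// build.sbt — sketch of sbt-bundle settings (key names are assumptions)
BundleKeys.nrOfCpus := 1.0
BundleKeys.roles := Set("web")
BundleKeys.endpoints := Map(
  "web" -> Endpoint("http", services = Set(URI("http://:9000")))
)
```

The plugin then packages the app, its dependencies, and this configuration into a single archive whose file name embeds the SHA mentioned on the slide.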
  32. Reactive for DevOps — Deploy, Run, Scale: The ConductR control protocol is a REST API — great for automation; building REST clients should be easy enough. In fact, we provide a simple one for the command line:
      conduct load
      conduct run 274dfbc
      conduct run --scale=3 274dfbc
      conduct stop 274dfbc
      conduct unload 274dfbc
  33. Reactive for DevOps — Service Lookup: • Look up a service by its assigned name • No need for additional infrastructure • "static" (fail fast) or "dynamic"
      BundleKeys.endpoints := Map(
        "ferry" -> Endpoint("http", services = Set(URI("http://:9666/ferry"))))
      === LocationService.getLookupUrl("/ferry", "")
  34. Reactive for DevOps — Resilience: Any operations environment can only do so much; there's no magic. ConductR improves resiliency through: • location transparency / proxying • handling node failure • the control protocol
  35. HAProxy integration (incl. re-configuration through cluster events) for resiliency
  36. © Typesafe 2015 – All Rights Reserved