
Akka Streams - From Zero to Kafka


Slides from my madlab presentation on Akka Streams & Reactive Kafka (October 2015), full slides and source here:

Published in: Software


  1. AKKA STREAMS FROM ZERO TO KAFKA Created by Mark Harrison @markglh
  2. HOW IT ALL BEGAN “Reactive Streams is an initiative to provide a standard for asynchronous stream processing with non-blocking back pressure. This encompasses efforts aimed at runtime environments (JVM and JavaScript) as well as network protocols.”
  3. WHY Efficiently processing large indeterminate streams is hard Avoiding blocking is essential to maximise performance Every stage in the stream needs to be able to push and pull We don't want to overload (or starve!) downstream consumers...
  4. HOW Treat data as a stream of elements Asynchronous non-blocking data and demand flows Demand flows upstream, causing data to flow downstream Data flow is therefore restricted by demand Back Pressure!! Demand happens on a separate flow!
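The demand-upstream, data-downstream loop described above can be illustrated with a toy, plain-Scala sketch (no Akka involved; `ToyPublisher` is a made-up name for illustration only, not a Reactive Streams class):

```scala
// Toy model of back pressure: demand flows upstream, data flows downstream.
class ToyPublisher(data: Iterator[Int]) {
  // request(n) models downstream demand: at most n elements are emitted.
  def request(n: Int): Seq[Int] = data.take(n).toList
}

val upstream = new ToyPublisher(Iterator.from(1)) // unbounded source
val batch1 = upstream.request(3) // consumer demands 3 -> List(1, 2, 3)
val batch2 = upstream.request(2) // consumer demands 2 more -> List(4, 5)
// The producer can never outrun demand, so the consumer is never flooded.
```

The point of the sketch: the producer emits nothing until asked, which is exactly what stops a fast source from overloading a slow sink.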
  5. WHAT The Reactive Streams specification is just that A collection of interfaces, methods and protocols Provides example implementations and a TCK for verification Aimed at providing a way to build common implementations
  7. DESIGN PRINCIPLES Explicitness over magic (I'm looking at you, Shapeless!) Fully composable Each component, or set of components, can be combined Each building block is immutable Fully compatible with other Reactive Streams implementations
  9. BUILDING BLOCKS CONT... Source Traditionally known as a producer Supplies messages that will flow downstream Exactly one output stream Sink Traditionally known as a consumer End point of the stream, this is where messages end up
  10. BUILDING BLOCKS CONT... Flow A processing stage in the Stream Used to compose Streams Exactly one input and one output stream See also BidirectionalFlow (two in -> two out)
  11. BUILDING BLOCKS CONT... RunnableGraphs A pre-assembled set of Stream components, packaged into a Graph. All exposed ports are connected (between a Source and Sink) This can then be Materialized
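A sketch of what "can then be Materialized" looks like, assuming the 2015-era scaladsl API used elsewhere in these slides (requires akka-stream on the classpath; names like "demo" are illustrative):

```scala
import akka.actor.ActorSystem
import akka.stream.ActorMaterializer
import akka.stream.scaladsl.{Keep, RunnableGraph, Sink, Source}
import scala.concurrent.Future

implicit val system = ActorSystem("demo")
implicit val materializer = ActorMaterializer()

// A RunnableGraph: a Source fully connected to a Sink, but not yet running.
val graph: RunnableGraph[Future[Int]] =
  Source(1 to 10).toMat(Sink.fold(0)(_ + _))(Keep.right)

// run() materializes the graph: actors are started, elements begin to flow,
// and we get back the materialized value (here, a Future of the sum).
val sum: Future[Int] = graph.run()
```

Note the separation: building the graph is a pure, immutable description; nothing executes until `run()` is called with a Materializer in scope.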
  12. BUILDING BLOCKS CONT... Composite Flows It is possible to wrap several components into more complex ones This composition can then be treated as one block Partial Flow Graphs An incomplete Flow (Graph) Can be used to construct more complex Graphs easily
  13. BUILDING BLOCKS CONT... Materializer Once complete, the flow is Materialized in order to start stream processing Supports fully distributed stream processing Each step must be serializable: immutable values or ActorRefs Fails immediately at runtime if the Graph isn't complete
  14. ERRORS VS FAILURES Errors are handled within the stream as normal data elements Passed using the onNext function Failure means that the stream itself has failed and is collapsing Raises the onError signal Each block in the flow can choose to absorb or propagate the errors Possibly resulting in the complete collapse of the flow
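One way a stage can "absorb" a failure rather than collapse the whole flow is a supervision strategy on the materializer. A hedged sketch, assuming the 2015-era akka.stream.Supervision API (requires akka-stream on the classpath):

```scala
import akka.actor.ActorSystem
import akka.stream.{ActorMaterializer, ActorMaterializerSettings, Supervision}
import akka.stream.scaladsl.{Sink, Source}

implicit val system = ActorSystem("demo")

// Decide per-exception whether to drop the element (Resume) or fail (Stop).
val decider: Supervision.Decider = {
  case _: ArithmeticException => Supervision.Resume
  case _                      => Supervision.Stop
}

implicit val materializer = ActorMaterializer(
  ActorMaterializerSettings(system).withSupervisionStrategy(decider))

// The division by zero for element 0 is absorbed; the stream carries on
// with the remaining elements instead of raising onError downstream.
Source(List(1, 0, 5))
  .map(100 / _)
  .runWith(Sink.foreach(println))
```

With `Supervision.Stop` (the default) the same ArithmeticException would fail the stream and propagate the onError signal downstream.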
  15. FIRST THINGS FIRST We need to create an ActorSystem and Materializer implicit val system = ActorSystem("actors") implicit val materializer = ActorMaterializer()
  16. SIMPLE STREAM Source(1 to 5) .filter(_ < 3) // 1, 2 .map(_ * 2) // 2, 4 .to(Sink.foreach(println)) .run() //prints 2 4
  17. COMPOSING ELEMENTS TOGETHER We can combine multiple components together val nestedSource = Source(1 to 5) .map(_ * 2) val nestedFlow = Flow[Int] .filter(_ <= 4) .map(_ + 2) val sink = Sink.foreach(println) //link up the Flow to a Sink val nestedSink = nestedFlow.to(sink) // Create a RunnableGraph - and run it! Prints 4 6 nestedSource.to(nestedSink).run()
  18. COMPOSING ELEMENTS TOGETHER CONT... Alternatively we could do this, linking them in one step nestedSource .via(nestedFlow) .to(Sink.foreach(println(_)))
  20. GRAPH PROCESSING STAGES Fan Out Broadcast[T] – (1 input, N outputs) Balance[T] – (1 input, N outputs) ... Fan In Merge[In] – (N inputs, 1 output) ... Timer Driven groupedWithin(Int, Duration) Groups elements when either the number or duration is reached (whichever is first). Very useful for batching messages. See the Akka Stream docs for more!
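A usage sketch of the batching stage named above (assumes an implicit ActorSystem and ActorMaterializer in scope, as created on slide 15; the numbers are illustrative):

```scala
import scala.concurrent.duration._
import akka.stream.scaladsl.{Sink, Source}

// Emit a batch when 50 elements have arrived OR 500ms have elapsed,
// whichever comes first: handy for bulk writes to Kafka or a database.
Source(1 to 1000)
  .groupedWithin(50, 500.millis)
  .to(Sink.foreach(batch => println(s"batch of ${batch.size}")))
  .run()
```

The time bound matters for trailing data: without it, a `grouped(50)` batch would sit incomplete until enough elements arrive, whereas `groupedWithin` flushes whatever it has after the duration.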
  22. THE GRAPH DSL Whenever you want to perform multiple operations to control the Flow of a Graph, manually constructing them as above can become very cumbersome and tedious, not to mention hard to maintain. For this reason the Akka team have written a DSL to help write complex Graphs.
  23. THE GRAPH DSL val g = FlowGraph.closed() { implicit builder: FlowGraph.Builder[Unit] => //This provides the DSL import FlowGraph.Implicits._ val in = Source(1 to 3) val out = Sink.foreach(println) //2 outputs, 2 inputs val bcast = builder.add(Broadcast[Int](2)) val merge = builder.add(Merge[Int](2)) val f1, f2, f3, f4 = Flow[Int].map(_ + 10) in ~> f1 ~> bcast ~> f2 ~> merge ~> f3 ~> out bcast ~> f4 ~> merge } //Prints 31 31 32 32 33 33
  24. THE GRAPH DSL CONT...
  25. EXAMPLE - REACTIVE KAFKA The guys at SoftwareMill have implemented a wrapper for Apache Kafka Tried and tested by yours truly
  26. EXAMPLE - REACTIVE KAFKA CONT... Source is a Kafka Consumer Sink is a Kafka Publisher val kafka = new ReactiveKafka() val publisher: Publisher[StringKafkaMessage] = kafka.consume( ConsumerProperties(...) ) val subscriber: Subscriber[String] = kafka.publish( ProducerProperties(...) ) Source(publisher).map(_.message().toUpperCase) .to(Sink(subscriber)).run()
  28. A REAL WORLD EXAMPLE CONT... FlowGraph.closed() { implicit builder: FlowGraph.Builder[Unit] => import FlowGraph.Implicits._ val in = Source(kafkaConsumer) val out = Sink.foreach(println) val bcast = builder .add(Broadcast[StringKafkaMessage](2)) val merge = builder .add(Merge[StringKafkaMessage](2)) val parser1, parser2 = Flow[StringKafkaMessage] .map(...) val group = Flow[StringKafkaMessage].grouped(4) in ~> bcast ~> parser1 ~> merge ~> group ~> out bcast ~> parser2 ~> merge }.run()
  29. IT'S BEEN EMOTIONAL... Slides at Follow me @markglh