Stateful Stream Processing at In-Memory Speed

This presentation describes results from a real-world system in which I used Apache Flink's stateful stream processing capabilities to eliminate both the key-value store bottleneck and the burden of the Lambda Architecture, while also improving accuracy and achieving huge gains in hardware efficiency!

Published in: Data & Analytics
  • Hello! Get Your Professional Job-Winning Resume Here - Check our website! https://vk.cc/818RFv
       Reply 
    Are you sure you want to  Yes  No
    Your message goes here

Transcript

1. Stateful Stream Processing at In-Memory Speed
   Jamie Grier (@jamiegrier, jamie@data-artisans.com)
2. Who am I?
   • Director of Applications Engineering at data Artisans
   • Previously worked on streaming computation at Twitter, Gnip, and Boulder Imaging
   • Involved in various kinds of stream processing for about a decade
   • High-speed video, social media streaming, general frameworks for stream processing
3. Overview
   • In stateful stream processing, the bottleneck has often been the key-value store
   • Accuracy has been sacrificed for speed
   • The Lambda Architecture was developed to address the shortcomings of stream processors
   • Can we remove the key-value store bottleneck and enable processing at in-memory speeds?
   • Can we do this accurately, without the Lambda Architecture?
4. Problem statement
   • Incoming message rate: 1.5 million messages/sec
   • Group by several dimensions and aggregate over 1-hour event-time windows
   • Write hourly time-series data to a database
   • Respond to queries over both historical data and the live, in-flight aggregates
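
The deck never shows code, so here is a minimal sketch of the job slide 4 describes, using Flink's DataStream API: key by the dimensions, then count in 1-hour tumbling event-time windows. The TweetEvent type, the out-of-orderness bound, and the inline source are my assumptions for illustration, not the talk's actual implementation.

    import java.time.Duration;

    import org.apache.flink.api.common.eventtime.WatermarkStrategy;
    import org.apache.flink.api.common.functions.AggregateFunction;
    import org.apache.flink.streaming.api.datastream.DataStream;
    import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
    import org.apache.flink.streaming.api.windowing.assigners.TumblingEventTimeWindows;
    import org.apache.flink.streaming.api.windowing.time.Time;

    public class HourlyAggregationSketch {

        // Hypothetical event type matching the stream shown on slides 5-10.
        public static class TweetEvent {
            public long tweetId;
            public String eventType;  // "url-click", "impression", ...
            public long timestamp;    // event time, epoch millis
        }

        public static void main(String[] args) throws Exception {
            StreamExecutionEnvironment env =
                    StreamExecutionEnvironment.getExecutionEnvironment();

            // Stand-in for the real 1.5M msg/sec source (e.g. a Kafka consumer).
            DataStream<TweetEvent> events = env
                    .fromElements(new TweetEvent())
                    .assignTimestampsAndWatermarks(
                            WatermarkStrategy
                                    .<TweetEvent>forBoundedOutOfOrderness(Duration.ofSeconds(10))
                                    .withTimestampAssigner((e, ts) -> e.timestamp));

            events
                    .keyBy(e -> e.tweetId + "|" + e.eventType)           // group by dimensions
                    .window(TumblingEventTimeWindows.of(Time.hours(1)))  // 1-hour event-time windows
                    .aggregate(new CountAggregate())                     // incremental in-flight count
                    .print();  // the real job writes the hourly rows to a database instead

            env.execute("hourly-aggregation-sketch");
        }

        // Incremental count per key and window; only the final result
        // per hour leaves the stream processor.
        public static class CountAggregate
                implements AggregateFunction<TweetEvent, Long, Long> {
            public Long createAccumulator()         { return 0L; }
            public Long add(TweetEvent e, Long acc) { return acc + 1; }
            public Long getResult(Long acc)         { return acc; }
            public Long merge(Long a, Long b)       { return a + b; }
        }
    }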
5-10. Input and Queries (the same slide, built up step by step across slides 5-10)

   Stream:
   tweet-id: 1, event: url-click,   time: 01:01:03
   tweet-id: 2, event: url-click,   time: 01:01:02
   tweet-id: 1, event: impression,  time: 01:01:01
   tweet-id: 2, event: url-click,   time: 02:02:01
   tweet-id: 1, event: impression,  time: 02:01:02

   Query                                             Result
   tweet-id: 1, event: url-click,   time: 01:00:00   1
   tweet-id: 1, event: *,           time: 01:00:00   2
   tweet-id: *, event: *,           time: 01:00:00   3
   tweet-id: *, event: impression,  time: 02:00:00   1
   tweet-id: 2, event: *,           time: 02:00:00   1
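
The deck doesn't show how the wildcard rows are served. One hedged possibility, consistent with the query table above, is to fan each event out to every roll-up key it affects, so every query (wildcard or not) becomes a single keyed lookup; the rollupKeys helper and the key format are illustrative assumptions:

    import java.util.Arrays;
    import java.util.List;

    public class RollupKeys {
        // One event increments four aggregates, matching the four query
        // shapes on slides 5-10: exact, per-tweet "*", per-event "*", global.
        public static List<String> rollupKeys(long tweetId, String eventType) {
            return Arrays.asList(
                    tweetId + "|" + eventType,  // tweet-id: 1, event: url-click
                    tweetId + "|*",             // tweet-id: 1, event: *
                    "*|" + eventType,           // tweet-id: *, event: impression
                    "*|*");                     // tweet-id: *, event: *
        }
    }

In a Flink job this fan-out would sit in a flatMap ahead of the keyBy; the alternative is to keep only exact keys and let the query layer sum them at read time.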
11. Time Series Data
   [Chart: hourly tweet impressions for Tweet 1 and Tweet 2 across the 01:00:00-04:00:00 windows]
12. Any questions so far?
13-21. Legacy System
   [Diagram, built up step by step across slides 13-21: the Lambda Architecture, with a stream processor on the streaming path and Hadoop on the batch path]
22. Legacy System (Lambda Architecture)
   • Aggregates built directly in the key/value store
   • Read/modify/write for every message
   • Inaccurate: double-counting, lost pre-aggregated data
   • A Hadoop job improves the results after 24 hours
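
For contrast with the prototype later in the deck, here is a sketch of the per-message hot path slide 22 describes. KvStoreClient is a hypothetical stand-in for the store's client API, not a real library:

    // Hypothetical stand-in for the external key/value store client.
    interface KvStoreClient {
        Long get(String key);
        void put(String key, long value);
    }

    public class LegacyHotPath {
        // One network round trip per incoming message: the bottleneck.
        static void onMessage(KvStoreClient store, String aggregateKey) {
            Long current = store.get(aggregateKey);               // remote read
            long updated = (current == null ? 0L : current) + 1;  // modify
            store.put(aggregateKey, updated);                     // remote write
            // A crash or retry between the read and the write loses or
            // double-counts increments: the inaccuracy the slide mentions.
        }
    }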
23. Any questions so far?
24. Goals for Prototype System
   • Feature parity with the existing system
   • Attempt to reduce the hardware footprint by 100x
   • Exactly-once semantics: compute correct results in real time, with or without failures; failures should not lead to missing data or double counting
   • Satisfy real-time queries with low latency
   • One system: no Lambda Architecture!
   • Eliminate the key/value store bottleneck (the big win)
25. My road to Apache Flink
   • I was interested in Google Cloud Dataflow
   • Google nailed the semantics for stream processing
   • Unified batch and stream processing with one model
   • Dataflow didn't exist in open source at the time (or so I thought), and I wanted to build it
   • My wife wouldn't let me quit my job!
   • The Dataflow SDK is now open source as Apache Beam, and Flink is the most complete runner
26. Why Apache Flink?
   • Basically identical semantics to Google Cloud Dataflow
   • Flink is a true fault-tolerant, stateful stream processor
   • Exactly-once guarantees for state updates
   • The state-management features might allow us to eliminate the key-value store
   • Windowing is built in, which makes time series easy
   • Native event-time support / correct time-based aggregations
   • Very fast data shuffling in benchmarks: 83 million msgs/sec on 30 machines
   • Flink "just works" with no tuning, even at scale!
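
A minimal sketch of switching on the machinery behind the exactly-once bullet above; the 60-second interval is an illustrative assumption, not a value from the talk:

    import org.apache.flink.streaming.api.CheckpointingMode;
    import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

    public class CheckpointingSketch {
        public static void main(String[] args) {
            StreamExecutionEnvironment env =
                    StreamExecutionEnvironment.getExecutionEnvironment();
            // Periodic, consistent snapshots of all operator state are what
            // make state updates exactly-once across failures.
            env.enableCheckpointing(60_000);  // checkpoint every 60s (illustrative)
            env.getCheckpointConfig()
               .setCheckpointingMode(CheckpointingMode.EXACTLY_ONCE);
        }
    }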
27-35. Prototype System
   [Diagram, built up step by step across slides 27-35: the streaming pipeline running entirely inside Apache Flink]
36. Prototype System
   We now have a sharded key/value store inside the stream processor.

37. Prototype System
   Why not just query that!

38. Prototype System
   [Diagram adds a Query Service that reads the in-flight aggregates directly from Flink's state]
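
A hedged sketch of what that "key/value store inside the stream processor" looks like in code: keyed state in a process function, partitioned (sharded) by key across the cluster. The deck doesn't show the talk's query-service implementation; the commented-out setQueryable line is how later Flink versions expose such state to an external query client.

    import org.apache.flink.api.common.state.ValueState;
    import org.apache.flink.api.common.state.ValueStateDescriptor;
    import org.apache.flink.api.java.tuple.Tuple2;
    import org.apache.flink.configuration.Configuration;
    import org.apache.flink.streaming.api.functions.KeyedProcessFunction;
    import org.apache.flink.util.Collector;

    // A running count per key, held as Flink-managed keyed state.
    public class RunningCount
            extends KeyedProcessFunction<String, String, Tuple2<String, Long>> {

        private transient ValueState<Long> count;

        @Override
        public void open(Configuration parameters) {
            ValueStateDescriptor<Long> desc =
                    new ValueStateDescriptor<>("count", Long.class);
            // desc.setQueryable("count");  // later Flink: expose to external queries
            count = getRuntimeContext().getState(desc);
        }

        @Override
        public void processElement(String event, Context ctx,
                                   Collector<Tuple2<String, Long>> out) throws Exception {
            // Read/modify/write happens locally at memory speed and is
            // checkpointed for fault tolerance; no external store round trip.
            long c = (count.value() == null ? 0L : count.value()) + 1;
            count.update(c);
            out.collect(Tuple2.of(ctx.getCurrentKey(), c));
        }
    }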
39. Prototype System
   • Eliminates the key-value store bottleneck
   • Eliminates the batch layer
   • No more Lambda Architecture!
   • Real-time queries over in-flight aggregates
   • Hourly aggregates written to the database
40. The Results
   • Uses 0.5% of the resources of the legacy system: a 200x improvement, with zero tuning!
   • Exactly-once analytics in real time
   • Complete elimination of the batch layer and the Lambda Architecture
   • Successfully eliminated the key-value store bottleneck
41. How is a 200x improvement possible?
   • The key is making use of fault-tolerant state inside the stream processor
   • Computation proceeds at in-memory speed
   • No need for network requests to update values in an external store
   • Dramatically less load on the database, because only the completed window aggregates are written there
   • Flink is extremely efficient at network I/O and data shuffling, and has a highly optimized serialization architecture
42. Does this matter at smaller scale?
   • YES it does!
   • Tackle much larger problems on the same hardware investment
   • Exactly-once semantics and state management are important at any scale!
   • Engineering time can be expensive at any scale if things don't "just work"
43. Summary
   • Used Flink's stateful operator features to remove the key/value store bottleneck
   • Dramatic reduction in hardware costs (200x)
   • Maintained feature parity by providing low-latency queries over in-flight aggregates as well as long-term storage of hourly time-series data
   • Actually improved the accuracy of aggregations: exactly-once vs. at-least-once semantics
44. Questions?

45. Thanks!
