Voxxed days thessaloniki 21/10/2016 - Streaming Engines for Big Data


  1. Streaming Engines for Big Data. Spark Streaming: a case study. Stavros Kontopoulos, M.Sc., Senior Software Engineer @ Lightbend. 21st October 2016, Thessaloniki.
  2. Who Am I? Fast Data Team Engineer @ Lightbend. OSS contributor (Apache Spark on Mesos). https://github.com/skonto
  3. Agenda:
     ● A bit of history
     ● Streaming Engines for Big Data
       ○ Key concepts and design considerations
       ○ Modern analysis of infinite streams
       ○ Example streaming engines
       ○ Which one to use?
     ● Spark Streaming: a case study
       ○ DStream API
       ○ Structured Streaming
  4. Who likes history?
  5. Why Streaming?
  6. Big Data - The story
     ● About a decade ago people started looking at the problem of how to process massive data sets (Velocity, Variety, Volume).
     ● The Apache Hadoop project appeared at that time and became the golden solution for batch processing on commodity hardware. It later grew into an ecosystem of several other projects: Pig, Hive, HBase, etc.
     Timeline: 2003 GFS paper; 2004 MapReduce paper; 2006 Hadoop project, 0.1.0 release; 2009 Hadoop sorts 1 petabyte; 2010 Spark on YARN by Cloudera, YARN in production; 2013 HBase, Pig, Hive graduate; 2014 Hadoop 2.4, 2.5, 2.6 releases; 2015 Hadoop 2.7 release.
  7. Big Data - The story. [Diagram: the MapReduce model. Parallel map tasks consume the input, a shuffle stage regroups the intermediate keys, and reduce tasks produce the output.]
  8. Big Data - The story. Hadoop pros/cons:
     ● Batch jobs usually take hours, if not days, to complete; in many applications that is no longer acceptable.
     ● The traditional focus is on throughput rather than latency, and frameworks like Hadoop were designed with that in mind.
     ● Accuracy is the best you can get, since all the data is available.
  9. Big Data - The story
     ● Giuseppe DeCandia et al., "Dynamo: Amazon's Highly Available Key-Value Store" changed the database world in 2007.
     ● NoSQL databases, along with general-purpose systems like Hadoop, solve problems that cannot be solved with traditional RDBMSs.
     ● Technology facts: cheap memory, SSDs, HDDs are the new tape, more CPUs over more powerful CPUs.
  10. Big Data - The story
      ● Disruptive companies need to use ML and the latest information to reach smart decisions sooner.
      ● And so we need streaming in the enterprise. We no longer talk about Big Data only; it's Fast Data first.
      Use cases: searching, recommendations, real-time financial activities, fraud detection.
  11. Big Data - The story. OpsClarity Report Summary:
      ● 92% plan to increase their investment in stream processing applications in the next year
      ● 79% plan to reduce or eliminate investment in batch processing
      ● 32% use real-time analysis to power core customer-facing applications
      ● 44% agreed that it is tedious to correlate issues across the pipeline
      ● 68% identified lack of experience and the underlying complexity of new data frameworks as their barrier to adoption
      http://info.opsclarity.com/2016-fast-data-streaming-applications-report.html
  12. Key Concepts
  13. Streams
      ● A stream is a flow of data: ephemeral data elements flowing from a source to a sink.
      ● Streams become useful when a set of operations/transformations is applied to them.
      ● They can be infinite or finite in size, which translates to the notions of unbounded/bounded data.
  14. Stream Processing. Stream processing: processing done on an (un)bounded data stream; not all data are available. [Diagram: Source → Processing → Sink]
  15. Stream Processing: multiple streams. [Diagram: Source 1 and Source 2 → Processing → Sink]
  16. Stream Processing. Processing can be:
      ● Stream management: connect, iterate, ...
      ● Data manipulation: map, flatMap, ...
      ● Input/Output
      A graph is the abstraction for defining how all the pieces are put together and how data flows between them; some systems use a DAG. [Diagram: a DFS source feeding Map → Reduce → Count Distinct, writing to a DB and a DFS]
  17. Stream Processing - Parallelism. [Diagram: a source feeds a partitioner, which distributes elements across parallel map tasks that feed the sink.]
  18. Stream Processing - Execution Model. Map your graph to an execution plan and run it.
      ● Execution model abstractions: Job, Task, etc. Actors: JobManager, TaskManager.
      ● Where do TaskManagers and Tasks run? Threads, nodes, etc.
      ● Important: code runs close to the data. The task code is serialized and sent over the network along with any dependencies, and the results are communicated back to the application.
  19. Stream vs Batch Processing. Batch processing is processing done on a finite data set with all data available. There are two types of engines, batch and streaming, and each can actually be used for both types of processing!
  20. Streaming Applications: user code that materializes streams and applies stream processing.
  21. Streaming Engines for Big Data. Streaming engines allow building streaming applications. Streaming engines for big data additionally provide:
      ● a rich ecosystem built around them, for example connectors for common sources and outputs to different sinks
      ● fault tolerance, scalability (cluster management support), and management of stragglers
      ● ML, graph, and CEP processing capabilities + API
  22. Streaming Engines for Big Data. A big data system at minimum needs: a data processing framework, e.g. a streaming engine, and a distributed file system.
  23. Designing a Streaming Engine
  24. Design Considerations of a Streaming Engine
      ● Strong consistency. If a machine fails, how are my results affected?
        ○ Exactly-once processing
        ○ Checkpointing
      ● Appropriate semantics for integrating time. What about late data?
      ● API (language support, DAG, SQL support, etc.)
  25. Design Considerations of a Streaming Engine
      ● Execution model: integration with cluster manager(s)
      ● Elasticity: dynamic allocation
      ● Performance: throughput vs latency
      ● Libraries for CEP, graph, ML, and SQL-based processing
  26. Design Considerations of a Streaming Engine
      ● Deployment modes: local vs cluster mode
      ● Streaming vs batch mode: does the code look the same?
      ● Logging
      ● Local state management
      ● Support for session state
  27. Design Considerations of a Streaming Engine
      ● Backpressure
      ● Off-heap memory management
      ● Caching
      ● Security
      ● UI
      ● CLI environment for interactive sessions
  28. State of the Art Stream Analysis
  29. Analyzing Infinite Data Streams. Recent advances in streaming are a result of the pioneering work:
      ○ MillWheel: Fault-Tolerant Stream Processing at Internet Scale, VLDB 2013.
      ○ The Dataflow Model: A Practical Approach to Balancing Correctness, Latency, and Cost in Massive-Scale, Unbounded, Out-of-Order Data Processing, Proceedings of the VLDB Endowment, vol. 8 (2015), pp. 1792-1803.
  30. Analyzing Infinite Data Streams. Two cases for processing:
      ○ Single-event processing: event transformation, triggering an alarm on an error event.
      ○ Event aggregations: summary statistics, group-by, join, and similar queries. For example, compute the average temperature over the last 5 minutes from a sensor data stream.
  31. Analyzing Infinite Data Streams. Event aggregation introduces the concept of windowing with respect to the notion of time selected:
      ○ Event time (the time events happen): important for most use cases where context and correctness matter at the same time. Examples: billing applications, anomaly detection.
      ○ Processing time (the time events are observed during processing): use cases where I only care about what I process in a window. Example: accumulated clicks on a page per second.
      ○ System arrival or ingestion time (the time events arrive at the streaming system).
      Ideally event time equals processing time; in reality, there is skew.
  32. Time in Modern Data Stream Analysis. Windows come in different flavors (see the sketch after this list):
      ● Tumbling windows discretize a stream into non-overlapping windows. E.g. report all distinct users every 10 seconds.
      ● Sliding windows slide over the stream of data. E.g. report all distinct users for the last 10 minutes, every 1 minute.
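To make the two flavors concrete, here is a minimal sketch using Spark's DStream window operations (the engine covered later in the talk); `users` is an assumed DStream[String] of user ids, and the durations mirror the bullet examples:

    import org.apache.spark.streaming.{Minutes, Seconds}
    import org.apache.spark.streaming.dstream.DStream

    // Assumes the window durations are multiples of the stream's batch interval.
    def windowExamples(users: DStream[String]): Unit = {
      // Tumbling window: window length equals the slide interval,
      // so consecutive windows do not overlap.
      users.window(Seconds(10), Seconds(10))
        .transform(_.distinct())   // distinct users every 10 seconds
        .print()

      // Sliding window: a 10-minute window re-evaluated every minute,
      // so consecutive windows overlap.
      users.window(Minutes(10), Minutes(1))
        .transform(_.distinct())   // distinct users over the last 10 minutes
        .print()
    }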
  33. Analyzing Infinite Data Streams
      ● Watermarks indicate that no elements with a timestamp older than or equal to the watermark timestamp should arrive for the specific window of data.
        ○ They allow us to mark late data; late data can either be added to the window or discarded.
      ● Triggers decide when the window is evaluated or purged.
        ○ They allow complex logic for window processing.
  34. Analyzing Infinite Data Streams. Apache Beam is the open source successor of Google's Dataflow. It is becoming the standard API for streaming, providing the advanced semantics that current streaming applications need.
  35. Streaming Engines for Big Data
      OSS:
      ● Apache Flink
      ● Apache Spark Streaming
      ● Apache Storm
      ● Apache Samza
      ● Apache Apex
      ● Apache Kafka Streams (Confluent Platform)
      ● Akka Streams / Gearpump
      ● Apache Beam
      Cloud:
      ● Amazon Kinesis
      ● Google Dataflow
  36. Streaming Engines for Big Data - Pick one. There are many criteria: the use case at hand, existing infrastructure, performance, customer support, cloud vendor, features. Recommended first picks to look at:
      ● Apache Flink for low latency and advanced semantics
      ● Apache Spark for its maturity and rich functionality: ML, SQL, GraphX
      ● Apache Kafka Streams for simple data transformations from and back to Kafka topics
  37. Apache Spark 2.0
  38. Spark in a Nutshell. Apache Spark: a memory-optimized distributed computing framework. It supports caching data in memory to speed up computations.
  39. Spark in a Nutshell - RDDs. Spark represents a bounded dataset as an RDD (Resilient Distributed Dataset). An RDD can be seen as an immutable distributed collection. Two types of operations can be applied to an RDD: transformations like map and actions like collect. Transformations are lazy, while actions trigger computation on the cluster. Operations like groupBy cause a shuffle of data across the network.
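A minimal, self-contained sketch of these RDD concepts; the local master and toy data are assumptions for illustration, not from the slides:

    import org.apache.spark.{SparkConf, SparkContext}

    object RddDemo {
      def main(args: Array[String]): Unit = {
        val sc = new SparkContext(
          new SparkConf().setAppName("rdd-demo").setMaster("local[*]"))

        val rdd = sc.parallelize(1 to 10)      // an immutable distributed collection
        val doubled = rdd.map(_ * 2)           // transformation: lazy, nothing runs yet
        val byParity = doubled.groupBy(_ % 2)  // transformation that shuffles data across the network
        byParity.collect().foreach(println)    // action: triggers the computation on the cluster

        sc.stop()
      }
    }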
  40. Spark in a Nutshell - Deployment Modes [diagram slide]
  41. Spark in a Nutshell - Basic Components [diagram slide]
  42. Spark Batch Sample: Word Count. Code: https://github.com/skonto/talks/tree/master/voxxed-days-thess-2016
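The actual sample lives in the linked repository; a comparable sketch, with a placeholder input path, could look like this:

    import org.apache.spark.{SparkConf, SparkContext}

    object WordCount {
      def main(args: Array[String]): Unit = {
        val sc = new SparkContext(
          new SparkConf().setAppName("word-count").setMaster("local[*]"))

        sc.textFile("input.txt")               // placeholder input path
          .flatMap(_.split("\\s+"))            // split lines into words
          .map(word => (word, 1))
          .reduceByKey(_ + _)                  // sum counts per word (causes a shuffle)
          .collect()                           // action: runs the job
          .foreach(println)

        sc.stop()
      }
    }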
  43. Spark in a Nutshell - Key Features
      ● Dynamic allocation
      ● Memory management (Project Tungsten + off-heap operations)
      ● Cluster managers: YARN, Standalone, Mesos
      ● Language bindings: Scala, Python, Java, R
      ● Micro-batch engine
      ● SQL API, ML library, GraphX
      ● Monitoring UI
  44. Spark Streaming. Two flavors of streaming:
      ● DStream API (Spark 1.x): mature API
      ● Structured Streaming (alpha, Spark 2.0): don't go to production yet. "Based on Spark SQL. The user does not need to reason about streaming end to end."
  45. Spark Streaming DStream API. Discretizes the stream based on the batchDuration (batch interval), which is configured once. Provides exactly-once semantics with the Kafka direct approach for DStreams, or with the WAL enabled for reliable receivers/drivers plus checkpointing for driver context recovery. Many of the transformations and actions available on an RDD are available on a DStream as well.
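A minimal sketch of the DStream micro-batch model; the local master, socket test source, and 5-second batch interval are assumptions for illustration:

    import org.apache.spark.SparkConf
    import org.apache.spark.streaming.{Seconds, StreamingContext}

    object DStreamDemo {
      def main(args: Array[String]): Unit = {
        val conf = new SparkConf().setAppName("dstream-demo").setMaster("local[2]")
        // batchDuration: the stream is discretized into 5-second micro-batches
        val ssc = new StreamingContext(conf, Seconds(5))

        val lines = ssc.socketTextStream("localhost", 9999)  // assumed test source
        lines.flatMap(_.split(" "))
          .map((_, 1))
          .reduceByKey(_ + _)      // RDD-style operations, applied per micro-batch
          .print()

        ssc.start()
        ssc.awaitTermination()
      }
    }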
  46. Spark Structured Streaming
      ● Integrates with the DataFrame and Dataset APIs (Spark SQL) for structured queries.
      ● Allows end-to-end exactly-once for specific sources/sinks (HDFS/S3).
        ○ Requires replayable sources and idempotent sinks.
      ● Input is sent to a query and the output of the query is written to a sink. Two output modes are implemented (a sketch follows below):
        ○ Complete mode: the entire updated result table is written to the external storage. It is up to the storage connector to decide how to handle writing the entire table.
        ○ Append mode: only the new rows appended to the result table since the last trigger are written to the external storage. This is applicable only to queries where existing rows in the result table are not expected to change.
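As an illustration of the output modes, here is a hedged Spark 2.0 sketch of a running word count over text files appearing in a placeholder directory; a running aggregation like this needs complete mode, since existing rows in the result table keep changing:

    import org.apache.spark.sql.SparkSession

    object StructuredCount {
      def main(args: Array[String]): Unit = {
        val spark = SparkSession.builder
          .appName("structured-count").master("local[*]").getOrCreate()
        import spark.implicits._

        // Unbounded input: new text files appearing in a directory (placeholder path)
        val lines = spark.readStream.text("input-dir").as[String]

        val counts = lines.flatMap(_.split(" ")).groupBy("value").count()

        // Complete mode: the entire updated result table is emitted on every trigger
        val query = counts.writeStream
          .outputMode("complete")
          .format("console")
          .start()

        query.awaitTermination()
      }
    }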
  47. Spark Structured Streaming - Not Yet Implemented
      ● More sources/sinks
      ● Watermarks
      ● Late data management
      ● Session state
  48. DStream API Example: reportMax. Core logic: rdd.map(data => data.toInt).max(). Code: https://github.com/skonto/talks/tree/master/voxxed-days-thess-2016
  49. DStream API Example: reportMax with checkpointing. Get or create the streaming context from the checkpoint; all streaming code goes inside the context factory. Code: https://github.com/skonto/talks/tree/master/voxxed-days-thess-2016
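The slide's actual code is in the linked repository; a hedged reconstruction of the pattern it describes (checkpoint directory, socket source, and batch interval are all assumptions) might look like:

    import org.apache.spark.SparkConf
    import org.apache.spark.streaming.{Seconds, StreamingContext}

    object ReportMax {
      val checkpointDir = "/tmp/report-max-checkpoint"  // assumed path

      // All streaming code goes inside the factory, so it is only set up
      // when there is no checkpoint to recover from.
      def createContext(): StreamingContext = {
        val conf = new SparkConf().setAppName("report-max").setMaster("local[2]")
        val ssc = new StreamingContext(conf, Seconds(5))
        ssc.checkpoint(checkpointDir)

        val lines = ssc.socketTextStream("localhost", 9999)  // assumed test source
        lines.foreachRDD { rdd =>
          if (!rdd.isEmpty()) println(s"max = ${rdd.map(_.toInt).max()}")
        }
        ssc
      }

      def main(args: Array[String]): Unit = {
        // Recover the driver context from the checkpoint, or build a fresh one
        val ssc = StreamingContext.getOrCreate(checkpointDir, createContext _)
        ssc.start()
        ssc.awaitTermination()
      }
    }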
  50. Spark SQL - Batch. Code: https://github.com/skonto/talks/tree/master/voxxed-days-thess-2016
  51. Structured Streaming. The mean-computation code is the same as in the batch case: use readStream instead of read and writeStream instead of write. Session creation is the same as in the batch case. Code: https://github.com/skonto/talks/tree/master/voxxed-days-thess-2016
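A hedged sketch of that read/readStream symmetry; the schema, column name, and input path are assumptions, since the actual code lives in the linked repository:

    import org.apache.spark.sql.SparkSession
    import org.apache.spark.sql.functions.avg
    import org.apache.spark.sql.types.{DoubleType, StructType}

    object MeanDemo {
      def main(args: Array[String]): Unit = {
        // Session creation: identical for batch and streaming
        val spark = SparkSession.builder
          .appName("mean-demo").master("local[*]").getOrCreate()

        val schema = new StructType().add("value", DoubleType)  // assumed schema

        // Batch version:
        //   spark.read.schema(schema).json("data-dir").agg(avg("value")).show()

        // Streaming version: the query is unchanged; only read/write
        // become readStream/writeStream.
        val query = spark.readStream.schema(schema).json("data-dir")  // placeholder path
          .agg(avg("value"))
          .writeStream
          .outputMode("complete")  // the running aggregate is updated every trigger
          .format("console")
          .start()

        query.awaitTermination()
      }
    }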
  52. Thank You! Questions?
  53. References
      1. http://data-artisans.com/batch-is-a-special-case-of-streaming/
      2. http://www.slideshare.net/rolandkuhn/reactive-streams
      3. https://www.oreilly.com/ideas/the-world-beyond-batch-streaming-101
      4. https://www.oreilly.com/ideas/the-world-beyond-batch-streaming-102
      5. http://www.slideshare.net/FlinkForward/flink-case-study-capital-one
      6. http://flink.apache.org/poweredby.html
      7. https://en.wikipedia.org/wiki/Apache_Hadoop
      8. http://data-artisans.com/how-apache-flink-enables-new-streaming-applications-part-1/
      9. https://databricks.com/blog/2015/01/15/improved-driver-fault-tolerance-and-zero-data-loss-in-spark-streaming.html
      10. Ellen Friedman & Kostas Tzoumas, Introduction to Apache Flink, O'Reilly 2016
      11. http://spark.apache.org/docs/latest/sql-programming-guide.html
      12. https://spark.apache.org/docs/latest/structured-streaming-programming-guide.html
