Voxxed Days Thessaloniki 21/10/2016 - Streaming Engines for Big Data
1. Streaming Engines for Big Data
Spark Streaming: a case study
Stavros Kontopoulos
Senior Software Engineer @ Lightbend, M.Sc.
21st October 2016, Thessaloniki
#VoxxedDaysThessaloniki
2. Who Am I?
Fast Data Team Engineer @ Lightbend
OSS contributor (Apache Spark on Mesos)
https://github.com/skonto
3. Agenda
● A bit of history...
● Streaming Engines for Big Data
○ Key concepts - Design Considerations
○ Modern analysis of infinite streams
○ Streaming Engines Examples
○ Which one to use?
● Spark Streaming: A Case Study
○ DStream API
○ Structured Streaming
6. Big Data - The story
● About a decade ago, people started looking into the problem of how to process
massive data sets (Velocity, Variety, Volume).
● The Apache Hadoop project appeared at that time and became the de facto
solution for batch processing on commodity hardware. It later grew into an
ecosystem of several other projects: Pig, Hive, HBase, etc.
[Timeline]
2003: GFS paper
2004: MapReduce paper
2006: Hadoop project, 0.1.0 release
2009: Hadoop sorts 1 petabyte
2010: HBase, Pig, Hive graduate
2013: Spark on YARN by Cloudera; YARN in production
2014: Hadoop 2.4, 2.5, 2.6 releases
2015: Hadoop 2.7 release
present
7. Big Data - The story
[Diagram: MapReduce dataflow. Input splits X, Y, Z flow through MAP tasks; a
SHUFFLE phase groups the intermediate keys (A, B); REDUCE tasks produce the
outputs Q, W.]
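To make the phases concrete, here is a minimal word-count sketch of map -> shuffle -> reduce using plain Scala collections (an illustration, not how Hadoop is implemented; a real job runs these phases distributed across a cluster):

// Input records (e.g. lines of text)
val lines = Seq("a b a", "b c")

// MAP: emit (word, 1) pairs from each input record
val mapped = lines.flatMap(_.split(" ").map(word => (word, 1)))

// SHUFFLE: group the intermediate pairs by key
val shuffled = mapped.groupBy { case (word, _) => word }

// REDUCE: aggregate the values for each key
val reduced = shuffled.map { case (word, pairs) => word -> pairs.map(_._2).sum }

println(reduced)  // e.g. Map(a -> 2, b -> 2, c -> 1)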
8. Big Data - The story
Hadoop pros/cons
● Batch jobs usually take hours, if not days, to complete; for many applications
that is no longer acceptable.
● Traditionally the focus has been on throughput rather than latency, and
frameworks like Hadoop were designed with that in mind.
● On the other hand, accuracy is the best you can get, since all the data is
available when the job runs.
9. Big Data - The story
● Giuseppe DeCandia et al., "Dynamo: Amazon's Highly Available Key-Value
Store" changed the database world in 2007.
● NoSQL databases, along with general-purpose systems like Hadoop, solve
problems that cannot be solved with traditional RDBMSs.
● Technology facts: cheap memory, SSDs, HDDs are the new tape, and more CPUs
win over more powerful CPUs.
10. Big Data - The story
● Disruptive companies need to use ML and the latest information to make
smart decisions sooner.
● And so we need streaming in the enterprise... We no longer talk about Big
Data only; it's Fast Data first.
Examples: searching, recommendations, real-time financial activities, fraud detection.
11. Big Data - The story
OpsClarity Report Summary:
● 92% plan to increase their investment in stream processing applications in the
next year
● 79% plan to reduce or eliminate investment in batch processing
● 32% use real-time analysis to power core customer-facing applications
● 44% agreed that it is tedious to correlate issues across the pipeline
● 68% identified lack of experience with, and the underlying complexity of, new
data frameworks as their barrier to adoption
http://info.opsclarity.com/2016-fast-data-streaming-applications-report.html
13. Streams
● A stream is a flow of data. The flow consists of ephemeral data elements
flowing from a source to a sink.
● Streams become useful when a set of operations/transformations is applied
to them.
● Streams can be infinite or finite in size. This translates to the notions of
bounded/unbounded data.
14. Stream Processing
Stream processing: processing done on an (un)bounded data stream, where not all
of the data is available.
[Diagram] Source -> Processing -> Sink
16. Stream Processing
Processing can be…
● Stream management: connect, iterate...
● Data manipulation: map, flatMap...
● Input/output
A graph is the abstraction for defining how all the pieces are put together and how
data flows between them. Some systems use a DAG.
[Diagram: an example dataflow graph. Sources (DFS, DB) feed Map, Reduce, Count,
and Distinct operators; the results are written back to DFS.]
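As an illustration (an assumption, not code from the talk), here is a minimal Spark sketch of such a graph, mirroring the operators in the diagram; the HDFS paths are hypothetical placeholders. Spark builds a DAG of lazy transformations and executes it only when an output action runs:

import org.apache.spark.{SparkConf, SparkContext}

val sc = new SparkContext(
  new SparkConf().setAppName("dataflow-graph").setMaster("local[*]"))

sc.textFile("hdfs:///data/in")             // source: DFS
  .flatMap(_.split("\\s+"))                // map node
  .distinct()                              // distinct node
  .map(word => (word, 1))                  // map node
  .reduceByKey(_ + _)                      // reduce/count node (causes a shuffle)
  .saveAsTextFile("hdfs:///data/out")      // sink: DFS; triggers execution

sc.stop()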
18. Stream Processing - Execution Model
Map your graph to an execution plan and run it.
Execution model abstractions: Job, Task, etc.
Actors: JobManager, TaskManager.
Where do the TaskManager and the Tasks run? Threads, nodes, etc…
Important: code runs close to the data… The task code is serialized and sent over
the network along with any dependencies, and the results are communicated back
to the application...
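A minimal Spark sketch of the idea (illustrative, not code from the talk): the closure passed to map is serialized and shipped to wherever each partition lives, and only the small result of the action travels back to the application:

import org.apache.spark.{SparkConf, SparkContext}

val sc = new SparkContext(
  new SparkConf().setAppName("tasks-demo").setMaster("local[4]"))

// 8 partitions -> 8 tasks; each task runs the closure below on its own
// partition, wherever that partition lives in the cluster.
val data = sc.parallelize(1L to 1000000L, numSlices = 8)

// The function is serialized and shipped to the executors; only the final
// sum travels back over the network to the driver.
val sum = data.map(x => x * x).reduce(_ + _)

println(s"sum of squares = $sum")
sc.stop()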
19. Stream vs Batch Processing
Batch processing is processing done on a finite data set, with all the data available.
There are two types of engines, batch engines and streaming engines, and each can
actually be used for both types of processing!
21. Streaming Engines for Big Data
Streaming engines allow building streaming applications:
[Diagram] Streaming Engine + API -> Streaming App
Streaming engines for Big Data additionally provide:
● A rich ecosystem built around them, for example connectors for common
sources, outputs to different sinks, etc.
● Fault tolerance, scalability (cluster management support), management of
stragglers
● ML, graph, and CEP processing capabilities
22. Streaming Engines for Big Data
A big data system at minimum needs:
● A data processing framework, e.g. a streaming engine.
● A distributed file system.
24. Design Considerations of A Streaming Engine
● Strong consistency. If a machine fails, how are my results
affected?
○ Exactly-once processing
○ Checkpointing
● Appropriate semantics for integrating time. What about late data?
● API (language support, DAG, SQL support, etc.)
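As one concrete instance, here is a minimal sketch of checkpointing with Spark's DStream API, the talk's case study (the checkpoint path and the socket source are hypothetical placeholders):

import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}

val checkpointDir = "hdfs:///checkpoints/my-app"   // hypothetical path

def createContext(): StreamingContext = {
  val conf = new SparkConf().setAppName("checkpointed-app").setMaster("local[2]")
  val ssc = new StreamingContext(conf, Seconds(5))
  ssc.checkpoint(checkpointDir)                    // enable checkpointing
  ssc.socketTextStream("localhost", 9999).count().print()
  ssc
}

// On a restart after failure, the context (and its DStream graph) is
// recovered from the checkpoint instead of being rebuilt from scratch.
val ssc = StreamingContext.getOrCreate(checkpointDir, createContext _)
ssc.start()
ssc.awaitTermination()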
25. Design Considerations of A Streaming Engine
● Execution Model - integration with cluster manager(s)
● Elasticity - Dynamic allocation
● Performance: Throughput vs Latency
● Libraries for CEP, Graph, ML, SQL based processing
26. Design Considerations of A Streaming Engine
● Deployment modes: local vs cluster mode
● Streaming vs batch mode: does the code look the same?
● Logging
● Local state management
● Support for session state
27. Design Considerations of A Streaming Engine
● Backpressure
● Off-heap management
● Caching
● Security
● UI
● CLI environment for interactive sessions
29. Analyzing Infinite Data Streams
● Recent advances in streaming are the result of pioneering work:
○ MillWheel: Fault-Tolerant Stream Processing at Internet Scale, VLDB
2013
○ The Dataflow Model: A Practical Approach to Balancing Correctness,
Latency, and Cost in Massive-Scale, Unbounded, Out-of-Order Data
Processing, Proceedings of the VLDB Endowment, vol. 8 (2015), pp.
1792-1803
30. Analyzing Infinite Data Streams
● Two cases for processing:
○ Single-event processing: event transformation, triggering an alarm on an error event
○ Event aggregations: summary statistics, group-by, join, and similar queries. For example,
compute the average temperature over the last 5 minutes from a sensor data stream.
31. Analyzing Infinite Data Streams
● Event aggregation introduces the concept of windowing with respect to the notion of
time selected:
○ Event time (the time that events happen): important for most use cases where context and
correctness matter at the same time. Examples: billing applications, anomaly detection.
○ Processing time (the time events are observed during processing): use cases where I only care
about what I process in a window. Example: accumulated clicks on a page per second.
○ System arrival or ingestion time (the time that events arrive at the streaming system).
● Ideally event time = processing time. In reality, there is skew.
32. Time in Modern Data Stream Analysis
Windows come in different flavors:
● Tumbling windows discretize a stream into non-overlapping windows.
○ E.g. report all distinct users every 10 seconds.
● Sliding windows slide over the stream of data.
○ E.g. report all distinct users for the last 10 minutes, every 1 minute.
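A minimal DStream sketch of both flavors, assuming processing-time windows and a hypothetical socket source on localhost:9999 (one user id per line):

import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Minutes, Seconds, StreamingContext}

val ssc = new StreamingContext(
  new SparkConf().setAppName("windows").setMaster("local[2]"), Seconds(10))

val users = ssc.socketTextStream("localhost", 9999)

// Tumbling window: non-overlapping 10-second windows
// (window length == slide interval == batch interval here).
users.window(Seconds(10)).transform(_.distinct()).print()

// Sliding window: the last 10 minutes, re-evaluated every 1 minute.
users.window(Minutes(10), Minutes(1)).transform(_.distinct()).print()

ssc.start()
ssc.awaitTermination()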
33. Analyzing Infinite Data Streams
● Watermarks: a watermark indicates that no elements with a timestamp older than
or equal to the watermark timestamp should arrive for the specific window of data.
○ Allows us to mark late data. Late data can either be added to the window or discarded.
● Triggers: decide when the window is evaluated or purged.
○ Allow complex logic for window processing.
34. Analyzing Infinite Data Streams
● Apache Beam is the open-source successor of Google's Dataflow.
● It is becoming the standard API for streaming, providing the advanced semantics
needed by current streaming applications.
36. Streaming Engines for Big Data - Pick one
There are many criteria: the use case at hand, existing infrastructure, performance,
customer support, cloud vendor, features.
Recommendation: look first at:
● Apache Flink for low latency and advanced semantics
● Apache Spark for its maturity and rich set of functionality: ML, SQL, GraphX
● Apache Kafka Streams for simple data transformations from and back to
Kafka topics
38. Spark in a Nutshell
Apache Spark: a memory-optimized distributed computing framework.
It supports caching data in memory to speed up computations.
39. Spark in a Nutshell - RDDs
Spark represents a bounded dataset as an RDD (Resilient Distributed Dataset).
An RDD can be seen as an immutable distributed collection.
Two types of operations can be applied to an RDD: transformations like map
and actions like collect.
Transformations are lazy, while actions trigger computation on the cluster.
Operations like groupBy cause a shuffle of data across the network.
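A minimal sketch of these concepts in local mode (illustrative, not from the slides):

import org.apache.spark.{SparkConf, SparkContext}

val sc = new SparkContext(
  new SparkConf().setAppName("rdd-demo").setMaster("local[*]"))

val rdd = sc.parallelize(1 to 10)          // a bounded dataset as an RDD

val doubled = rdd.map(_ * 2)               // transformation: lazy, nothing runs yet
val byParity = doubled.groupBy(_ % 2)      // transformation: will shuffle data

val result = byParity.collect()            // action: triggers the computation
result.foreach(println)

sc.stop()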
40. Spark in a Nutshell - Deployment Mode
41. Spark in a Nutshell - Basic Components
43. Spark in a Nutshell - Key Features
Dynamic Allocation
Memory management (Project Tungsten + off heap operations)
Cluster managers: YARN, Standalone, Mesos
Scala, Python, Java, R
Micro-batch engine
SQL API, ML library, GraphX
Monitoring UI
44. Spark Streaming
Two flavors of streaming:
● DStream API, Spark 1.x -> a mature API
● Structured Streaming (alpha), Spark 2.0 -> don't go to production yet
“Based on Spark SQL. The user does not need to
reason about streaming end to end.”
45. Spark Streaming DStream API
Discretizes the stream based on batchDuration (the batch interval), which is
configured once.
Provides exactly-once semantics with the Kafka direct stream for DStreams, or
with the WAL (write-ahead log) enabled for reliable receivers/drivers, plus
checkpointing for driver context recovery.
Many of the transformations and actions available on an RDD are available on a
DStream as well.
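A minimal DStream word count illustrating the batch interval and the RDD-style operations (the socket source is a hypothetical placeholder):

import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}

val conf = new SparkConf().setAppName("dstream-wordcount").setMaster("local[2]")
val ssc = new StreamingContext(conf, Seconds(5))   // batchDuration, fixed once

ssc.socketTextStream("localhost", 9999)            // one micro-batch every 5s
  .flatMap(_.split("\\s+"))                        // the familiar RDD-style ops
  .map((_, 1))
  .reduceByKey(_ + _)
  .print()

ssc.start()
ssc.awaitTermination()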
46. Spark Structured Streaming
● Integrates with the DataFrame and Dataset APIs (Spark SQL) for structured queries
● Allows end-to-end exactly-once for specific sources/sinks (HDFS/S3)
○ Requires replayable sources and idempotent sinks
● Input is sent to a query, and the output of the query is written to a sink.
Two output modes are implemented:
● Complete mode: the entire updated Result Table is written to the external storage. It is up to the storage connector to
decide how to handle writing the entire table.
● Append mode: only the new rows appended to the Result Table since the last trigger are written to the external storage.
This is applicable only to queries where existing rows in the Result Table are not expected to change.
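A minimal sketch against the Spark 2.0 API: a streaming word count written in complete mode to the console sink (the socket host/port are hypothetical placeholders):

import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder
  .appName("structured-wordcount")
  .master("local[2]")
  .getOrCreate()
import spark.implicits._

val lines = spark.readStream
  .format("socket")
  .option("host", "localhost")
  .option("port", 9999)
  .load()

val counts = lines.as[String].flatMap(_.split("\\s+")).groupBy("value").count()

val query = counts.writeStream
  .outputMode("complete")   // "append" would only emit new, unchanging rows
  .format("console")
  .start()

query.awaitTermination()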
47. Spark Structured Streaming - Not Yet Implemented
● More Sources/Sinks
● Watermarks
● Late data management
● Session state
51. Structured Streaming
The main code is the same as in the batch case:
● readStream instead of read
● writeStream instead of write
● Session creation is the same as in the batch case
A sketch of this symmetry follows after the link below.
https://github.com/skonto/talks/tree/master/voxxed-days-thess-2016
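A minimal sketch of that symmetry (not taken from the linked repo; paths, schema, and formats are hypothetical placeholders; note that a streaming file source needs its schema up front):

import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.types.{StringType, StructField, StructType}

val spark = SparkSession.builder
  .appName("read-vs-readstream")
  .master("local[2]")
  .getOrCreate()
import spark.implicits._

// Batch version
spark.read.format("json").load("/data/in")
  .filter($"status" === "error")
  .write.format("parquet").save("/data/batch-out")

// Streaming version: the query is identical; only read/write change.
val schema = StructType(Seq(
  StructField("status", StringType), StructField("msg", StringType)))

spark.readStream.format("json").schema(schema).load("/data/in")
  .filter($"status" === "error")
  .writeStream
  .format("parquet")
  .option("checkpointLocation", "/data/chk")
  .start("/data/stream-out")
  .awaitTermination()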