1. Apache Flink
Building a Stream Processor for fast analytics, event-driven applications, event time, and tons of state
Robert Metzger
@rmetzger_
robert@data-artisans.com
3. Agenda for today
§ Streaming history / definition
§ Apache Flink intro
§ Production users and use cases
§ Building blocks
§ APIs
§ (Implementation)
§ Q&A
4. What is Streaming and Stream Processing?
§ The first wave of streaming was the lambda architecture
• Helping batch systems become more real-time
§ The second wave was analytics (real-time and lag-time)
• Based on distributed collections, functions, and windows
§ The next wave is much broader:
a new architecture that brings together analytics and event-driven applications
6. Apache Flink
§ Apache Flink is an open source stream processing framework
• Low latency
• High throughput
• Stateful
• Distributed
§ Developed at the Apache Software Foundation
§ Flink 1.4.0 is the latest release and is used in production
7. What is Apache Flink?
Stateful Computations Over Data Streams
§ Batch Processing: process static and historic data
§ Data Stream Processing: real-time results from data streams
§ Event-driven Applications: data-driven actions and services
8. What is Apache Flink?
[Diagram] Stateful computations over streams, real-time and historic: fast, scalable, fault tolerant, in-memory, event time, large state, exactly-once
§ Inputs: real-time streams and historic data from applications, devices, etc.
§ Outputs: queries, applications, databases, streams, file/object storage
9. Hardened at scale
§ AthenaX Streaming SQL Platform Service: trillions of messages per day
§ Streaming Platform as a Service
§ Fraud detection
§ Streaming Analytics Platform: 100s of jobs, 1000s of nodes, TBs of state; metrics, analytics, real-time ML
§ Streaming SQL as a platform
12. DriveTribe
Social network implemented using event sourcing and CQRS (Command Query Responsibility Segregation) on Kafka/Flink/Elasticsearch/Redis
More: https://data-artisans.com/blog/drivetribe-cqrs-apache-flink
13. Popular Apache Flink use cases
§ Streaming Analytics and Pipelines (e.g., Netflix)
§ Streaming ML and Search (e.g., Alibaba)
§ Streaming SQL Metrics, Analytics, Alerting Platform (e.g., Uber)
§ Streaming Fraud Detection (e.g., ING)
§ CQRS Social Network (e.g., DriveTribe)
§ Streaming Trade Processing (reference upon request)
• Reporting (compliance), position keeping, balance sheets, risk
15. The Core Building Blocks
§ Event Streams: real-time and hindsight
§ State: complex business logic
§ (Event) Time: consistency with out-of-order data and late data
§ Snapshots: forking / versioning / time-travel
18. Stateful Event & Stream Processing
§ Scalable embedded state
§ Accessed at memory speed; scales with the parallel operators
19. Stateful Event & Stream Processing
§ Rolling back computation / re-processing:
• Re-load state
• Reset positions in the input streams
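The rollback idea above can be sketched in a few lines. This is a minimal, self-contained illustration (not Flink's actual classes): a snapshot stores the operator state together with the input position, and recovery re-loads the state and replays the input from that position, so the result after a failure equals the result of an uninterrupted run.

```scala
// Hypothetical sketch of snapshot-based rollback and re-processing.
case class Snapshot(offset: Int, count: Long)

final class CountingOperator(var count: Long = 0L) {
  def process(event: Int): Unit = count += event
  def snapshot(offset: Int): Snapshot = Snapshot(offset, count)
  def restore(s: Snapshot): Int = { count = s.count; s.offset } // returns replay position
}

object RollbackDemo {
  def run(): (Long, Long) = {
    val input = Vector(1, 2, 3, 4, 5)

    // Uninterrupted run.
    val a = new CountingOperator
    input.foreach(a.process)

    // Run that snapshots after two events, then "fails".
    val b = new CountingOperator
    input.take(2).foreach(b.process)
    val snap = b.snapshot(offset = 2)
    input.slice(2, 3).foreach(b.process) // progress lost in the failure

    // Recovery: re-load state, reset the input position, re-process.
    val restored = new CountingOperator
    val resumeAt = restored.restore(snap)
    input.drop(resumeAt).foreach(restored.process)

    (a.count, restored.count) // identical results
  }
}
```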
20. Time: Different Notions of Time
[Diagram: an event flows from the event producer through a message queue into a Flink data source and on to a Flink window operator]
§ Event Time: when the event was produced
§ Broker Time: when the event reached the message queue
§ Ingestion Time: when the event entered Flink
§ Window Processing Time: when the event is processed by the window operator
21. Time: Event Time Example
Star Wars: in processing time (release order: 1977, 1980, 1983, 1999, 2002, 2005, 2015) the episodes arrive as IV, V, VI, I, II, III, VII; in event time (episode order) they are out of order.
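The episode example works because windows are assigned by each element's event timestamp, not by arrival order. A minimal sketch, using the standard tumbling-window formula windowStart(t) = t - (t % size) (the names here are illustrative, not Flink's API):

```scala
// Assign out-of-order events to tumbling event-time windows.
object EventTimeWindows {
  def windowStart(timestamp: Long, size: Long): Long =
    timestamp - (timestamp % size)

  // Count events per 10ms event-time window; the input order does not matter.
  def countPerWindow(timestamps: Seq[Long]): Map[Long, Int] =
    timestamps.groupBy(t => windowStart(t, 10L)).map { case (w, ts) => (w, ts.size) }
}
```

Shuffling the input sequence leaves the per-window counts unchanged, which is exactly the consistency with out-of-order data that event time provides.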
22. Recap: The Core Building Blocks
§ Event Streams: real-time and hindsight
§ State: complex business logic
§ (Event) Time: consistency with out-of-order data and late data
§ Snapshots: forking / versioning / time-travel
24. The APIs
§ Stream SQL
§ Table API (dynamic tables)
§ DataStream API (streams, windows)
§ Process Function (events, state, time)
Layered from high-level analytics (Stream SQL, Table API) down to stateful, event-driven applications (DataStream API, Process Function); all support stream and batch processing.
25. Process Function
class MyFunction extends ProcessFunction[MyEvent, Result] {

  // declare state to use in the program
  lazy val state: ValueState[CountWithTimestamp] =
    getRuntimeContext().getState(…)

  def processElement(event: MyEvent, ctx: Context, out: Collector[Result]): Unit = {
    // work with event and state
    (event, state.value) match { … }

    out.collect(…)   // emit events
    state.update(…)  // modify state

    // schedule a timer callback
    ctx.timerService.registerEventTimeTimer(event.timestamp + 500)
  }

  def onTimer(timestamp: Long, ctx: OnTimerContext, out: Collector[Result]): Unit = {
    // handle callback when the event-/processing-time instant is reached
  }
}
26. Data Stream API
val lines: DataStream[String] = env.addSource(
  new FlinkKafkaConsumer[String](…))

val events: DataStream[Event] = lines.map((line) => parse(line))

val stats: DataStream[Statistic] = events
  .keyBy("sensor")
  .timeWindow(Time.seconds(5))
  .aggregate(new MyAggregationFunction())

stats.addSink(new RollingSink(path))
38. Rescaling State / Elasticity
▪ Similar to consistent hashing
▪ Split the key space into key groups
▪ Assign key groups to tasks
[Diagram: the key space partitioned into key groups #1–#4]
39. Rescaling State / Elasticity
▪ Rescaling changes the key group assignment
▪ Maximum parallelism is defined by the number of key groups
▪ Rescaling happens by restoring a savepoint with the new parallelism
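The key-group scheme above can be sketched as follows. This is a simplified illustration (Flink's real assignment in KeyGroupRangeAssignment additionally murmur-hashes the key's hashCode); the point is that each operator instance owns a contiguous range of key groups, so rescaling only moves whole key groups between tasks.

```scala
// Simplified sketch of key-group based state partitioning.
object KeyGroups {
  // maxParallelism == number of key groups; non-negative modulo.
  def keyGroup(key: Any, maxParallelism: Int): Int =
    ((key.hashCode % maxParallelism) + maxParallelism) % maxParallelism

  // Map a key group to the operator instance that owns it.
  def operatorIndex(keyGroup: Int, maxParallelism: Int, parallelism: Int): Int =
    keyGroup * parallelism / maxParallelism
}
```

With maxParallelism = 128 and parallelism = 2, key groups 0–63 land on task 0 and 64–127 on task 1; restoring with parallelism = 4 splits the same 128 groups into four ranges without re-hashing any individual key.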
41. Time: Different Notions of Time
[Diagram: an event flows from the event producer through a message queue into a Flink data source and on to a Flink window operator]
§ Event Time: when the event was produced
§ Broker Time: when the event reached the message queue
§ Ingestion Time: when the event entered Flink
§ Window Processing Time: when the event is processed by the window operator
54. Queryable State: Implementation
Query: /job/operation/state-name/key
[Diagram: the Task Managers run the job's operators (e.g. window()/sum()) and register their local state in a State Registry; the Job Manager holds the ExecutionGraph and a State Location Server, and handles deploy/status]
(1) The query client asks for the location of the "key-partition" for the "operator" of the "job"
(2) The Job Manager looks up the location in the State Location Server
(3) The Job Manager responds with the location
(4) The client queries the state-name and key directly on the Task Manager that holds the local state
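The four steps can be sketched end to end. All names here are hypothetical stand-ins, not Flink's client API: a location server maps a key's partition to a Task Manager address (steps 1–3), and the client then reads the state directly from that Task Manager's local registry (step 4).

```scala
// Hypothetical sketch of the queryable-state lookup protocol.
final class StateLocationServer(locations: Map[Int, String], numPartitions: Int) {
  // Steps (1)-(3): resolve the Task Manager holding the key's partition.
  def locate(key: String): String =
    locations(((key.hashCode % numPartitions) + numPartitions) % numPartitions)
}

final class TaskManagerStub(localState: Map[String, Long]) {
  // Step (4): answer the query from the locally registered state.
  def query(stateName: String, key: String): Option[Long] =
    localState.get(key)
}

object QueryableStateDemo {
  def lookup(server: StateLocationServer,
             taskManagers: Map[String, TaskManagerStub],
             key: String): Option[Long] = {
    val address = server.locate(key)
    taskManagers(address).query("sum", key)
  }
}
```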