Changelog Stream Processing with Apache Flink
Timo Walther
@twalthr
–
Flink Forward 2022
2022-08-03
About me
Open source
● Long-term committer since 2014 (before ASF)
● Member of the project management committee (PMC)
● Top 5 contributor (commits), top 1 contributor (additions)
● Among core architects of Flink SQL
Career
● Early software engineer @ DataArtisans
● SDK team @ DataArtisans/Ververica (acquisition by Alibaba)
● SQL team lead @ Ververica
● Co-founder @ Immerok
2
What is Apache Flink?
3
Building Blocks for Stream Processing
4
Time
● Synchronize
● Progress
● Wait
● Timeout
● Fast-forward
● Replay
State
● Store
● Buffer
● Cache
● Model
● Grow
● Expire
Streams
● Pipeline
● Distribute
● Join
● Enrich
● Control
● Replay
Snapshots
● Backup
● Version
● Fork
● A/B test
● Time-travel
● Restore
What is Apache Flink used for?
5
Events in: Transactions, Logs, IoT, Interactions, Events, …
Use cases: Analytics, Event-driven Applications, Data Integration / ETL
Sources: Messaging Systems, Files, Databases, Key/Value Stores
Sinks: Applications, Messaging Systems, Files, Databases, Key/Value Stores
Apache Flink’s APIs
6
API Stack
7
(Layer diagram, bottom to top: Dataflow Runtime, Low-Level Stream Operator API; the DataStream API and, via the Optimizer / Planner, the Table / SQL API build on top of it; Stateful Functions builds on the DataStream API.)
DataStream API
8
StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
env.setRuntimeMode(STREAMING);
DataStream<Integer> stream = env.fromElements(1, 2, 3);
stream.executeAndCollect().forEachRemaining(System.out::println);
Properties
● Exposes the building blocks for stream processing
● Arbitrary operator topologies using map(), process(), connect(), ...
● Business logic is written in user-defined functions
● Arbitrary user-defined record types flow in-between
● Conceptually always an append-only / insert-only log!
Output:
1
2
3
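For illustration, a minimal self-contained sketch of these properties; the doubling/labeling logic and the class name are made up for this example and are not from the talk:

import org.apache.flink.api.common.RuntimeExecutionMode;
import org.apache.flink.api.common.typeinfo.Types;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class DataStreamExample {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        env.setRuntimeMode(RuntimeExecutionMode.STREAMING);

        // Business logic lives in user-defined functions; arbitrary record types flow in-between.
        DataStream<Integer> numbers = env.fromElements(1, 2, 3);
        DataStream<String> labeled = numbers
                .map(i -> i * 2).returns(Types.INT)
                .map(i -> "value=" + i).returns(Types.STRING);

        // Conceptually still an insert-only log of records.
        labeled.executeAndCollect().forEachRemaining(System.out::println);
    }
}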
Table / SQL API
9
TableEnvironment env = TableEnvironment.create(EnvironmentSettings.inStreamingMode());
// Programmatic
Table table = env.fromValues(row(1), row(2), row(3));
// SQL
Table table = env.sqlQuery("SELECT * FROM (VALUES (1), (2), (3))");
table.execute().print();
Properties
● Abstracts the building blocks for stream processing
● Operator topology is determined by planner
● Business logic is declared in SQL and/or Table API
● Internal record types flow, Flink’s Row type is exposed in Table API
● Conceptually a table, but a changelog under the hood!
Output:
+----+-------------+
| op |          f0 |
+----+-------------+
| +I |           1 |
| +I |           2 |
| +I |           3 |
DataStream API ↔ Table / SQL API
10
StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
StreamTableEnvironment tableEnv = StreamTableEnvironment.create(env);
// Stream -> Table
DataStream<?> inStream1 = ...
Table appendOnlyTable = tableEnv.fromDataStream(inStream1);
DataStream<Row> inStream2 = ...
Table anyTable = tableEnv.fromChangelogStream(inStream2);
// Table -> Stream
DataStream<T> appendOnlyStream = tableEnv.toDataStream(appendOnlyTable, T.class);
DataStream<Row> changelogStream = tableEnv.toChangelogStream(anyTable);
Mix and match APIs!
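As a sketch of such a round trip (a hypothetical self-contained job; table and field names are chosen for this example): an insert-only stream becomes a dynamic table, a continuous aggregation turns it into an updating table, and toChangelogStream() exposes the resulting +I/-U/+U records again.

import org.apache.flink.api.common.typeinfo.Types;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.table.api.Table;
import org.apache.flink.table.api.bridge.java.StreamTableEnvironment;
import org.apache.flink.types.Row;

public class MixAndMatch {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        StreamTableEnvironment tableEnv = StreamTableEnvironment.create(env);

        // Insert-only stream of (name, amount) records.
        DataStream<Row> transactions = env
                .fromElements(Row.of("Alice", 56), Row.of("Bob", 10), Row.of("Alice", 89))
                .returns(Types.ROW_NAMED(new String[] {"name", "amount"}, Types.STRING, Types.INT));

        // Stream -> Table: the stream becomes an append-only dynamic table.
        tableEnv.createTemporaryView("Transactions", tableEnv.fromDataStream(transactions));

        // A continuous aggregation turns it into an updating table.
        Table revenue = tableEnv.sqlQuery(
                "SELECT name, SUM(amount) AS total FROM Transactions GROUP BY name");

        // Table -> Stream: expose the changelog (+I / -U / +U rows) again.
        DataStream<Row> changelog = tableEnv.toChangelogStream(revenue);
        changelog.print();

        env.execute();
    }
}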
Changelog Stream Processing
11
Change is the law of life and those who look only to
the past or present are certain to miss the future.
John F. Kennedy
Data Processing is a Stream of Changes
12
● Business data is always a stream: bounded or unbounded
● Every record is a changelog entry: insertion as the default
● Batch processing is just a special case in the runtime
(Timeline: a bounded stream has a start and an end of stream in the past; unbounded streams run from the past through now into the future.)
How do I Work with Streams in Flink SQL?
13
● You don’t. You work with dynamic tables!
● A concept similar to materialized views
CREATE TABLE Transactions
(name STRING, amount INT)
WITH (…)

CREATE TABLE Revenue
(name STRING, total INT)
WITH (…)

INSERT INTO Revenue
SELECT name, SUM(amount)
FROM Transactions
GROUP BY name

Transactions:
name | amount
Alice | 56
Bob | 10
Alice | 89

Revenue:
name | total
Alice | 145
Bob | 10
So, is Flink SQL a database? No, bring your own data and systems!
Stream-Table Duality - Basics
14
● A stream is the changelog of a dynamic table
● Sources, operators, and sinks work on changelogs under the hood
● Each component declares the kind of changes it consumes/produces
Changelog modes:
Appending/Insert-only: only +I
Updating: contains -…
Retracting: contains -U
Upserting: never -U, but +U

Change kinds (short name, long name):
+I Insertion: default for scans + output of bounded results.
-U Update Before: retracts a previously emitted result.
+U Update After: updates a previously emitted result. Requires a primary key if -U is omitted for idempotent updates.
-D Delete: removes the last result.
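To make these change kinds tangible: the DataStream ↔ Table bridge exposes them as RowKind flags on each Row. A minimal sketch, assuming a hand-written retracting changelog with the values of the running example:

import org.apache.flink.api.common.typeinfo.Types;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.table.api.Table;
import org.apache.flink.table.api.bridge.java.StreamTableEnvironment;
import org.apache.flink.types.Row;
import org.apache.flink.types.RowKind;

public class RowKindExample {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        StreamTableEnvironment tableEnv = StreamTableEnvironment.create(env);

        // A hand-written retracting changelog: +I, +I, -U, +U.
        DataStream<Row> changelog = env
                .fromElements(
                        Row.ofKind(RowKind.INSERT, "Alice", 56),
                        Row.ofKind(RowKind.INSERT, "Bob", 10),
                        Row.ofKind(RowKind.UPDATE_BEFORE, "Alice", 56),
                        Row.ofKind(RowKind.UPDATE_AFTER, "Alice", 145))
                .returns(Types.ROW_NAMED(new String[] {"name", "total"}, Types.STRING, Types.INT));

        // Interpret the stream as an updating dynamic table and print its changelog.
        Table revenue = tableEnv.fromChangelogStream(changelog);
        revenue.execute().print();
    }
}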
Stream-Table Duality - Example
15
An applied changelog becomes a real (materialized) table.
CREATE TABLE Transactions
(name STRING, amount INT)
WITH (…)

CREATE TABLE Revenue
(name STRING, total INT)
WITH (…)

INSERT INTO Revenue
SELECT name, SUM(amount)
FROM Transactions
GROUP BY name

Transactions (input):
name | amount
Alice | 56
Bob | 10
Alice | 89

Changelog of the aggregation (retracting):
+I[Alice, 56], +I[Bob, 10], -U[Alice, 56], +U[Alice, 145]

Revenue (materialization of the changelog):
name | total
Alice | 145
Bob | 10
Stream-Table Duality - Example
16
An applied changelog becomes a real (materialized) table.
CREATE TABLE Transactions
(name STRING, amount INT)
WITH (…)

CREATE TABLE Revenue
(PRIMARY KEY(name) …)
WITH (…)

INSERT INTO Revenue
SELECT name, SUM(amount)
FROM Transactions
GROUP BY name

Transactions (input):
name | amount
Alice | 56
Bob | 10
Alice | 89

Changelog towards the upsert sink (keyed by name):
+I[Alice, 56], +I[Bob, 10], +U[Alice, 145] (the -U[Alice, 56] message can be dropped)

Revenue (materialization of the changelog):
name | total
Alice | 145
Bob | 10
Save ~50% of traffic if downstream system supports upserting!
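As a sketch of declaring such an upsert sink in SQL (note that the built-in connector identifier is 'upsert-kafka'; topic, broker address, and formats below are placeholders for this example):

import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;

public class UpsertSinkExample {
    public static void main(String[] args) {
        TableEnvironment env = TableEnvironment.create(EnvironmentSettings.inStreamingMode());

        // The primary key tells Flink that -U messages can be dropped and that
        // updates can be applied in place, keyed by 'name'.
        env.executeSql(
            "CREATE TABLE Revenue (" +
            "  name STRING," +
            "  total INT," +
            "  PRIMARY KEY (name) NOT ENFORCED" +
            ") WITH (" +
            "  'connector' = 'upsert-kafka'," +
            "  'topic' = 'revenue'," +
            "  'properties.bootstrap.servers' = 'localhost:9092'," +
            "  'key.format' = 'json'," +
            "  'value.format' = 'json'" +
            ")");
    }
}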
Stream-Table Duality - Propagation
17
● Sources declare their set of emitted changes, i.e. the changelog mode
● The optimizer tracks changelog mode and primary key through the pipeline
● Sinks declare the changes they can digest
CREATE TABLE … (changelog mode, for sources)
… WITH ('connector'='filesystem') → +I
… WITH ('connector'='kafka') → +I
… WITH ('connector'='kafka-upsert') → +I -D
… WITH ('connector'='jdbc') → +I
… WITH ('connector'='kafka', 'format' = 'debezium-json') → +I -U +U -D
Retract vs. Upsert
18
Retract
● No primary key requirements
● Works for almost every external system
● Supports duplicate rows
● In distributed systems often unavoidable
→ most flexible changelog mode
→ default mode
Upsert
● Traffic + computation optimization
● In-place updates (idempotency)
SELECT c, COUNT(*) FROM (
SELECT COUNT(*) AS c
FROM T
GROUP BY user
)
GROUP BY c
(Diagram: the inner per-user counts run in two subtasks and emit +I[…]/+U[1]/+U[2] records; the outer aggregation's subtasks maintain counts per value, e.g. 1=>1 and 2=>1.)
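A minimal runnable sketch of the query above (the column is named name instead of user to avoid the reserved SQL keyword; the data values are made up): without the -U records emitted by the inner aggregation, the outer count per c would silently overcount.

import static org.apache.flink.table.api.Expressions.row;

import org.apache.flink.table.api.DataTypes;
import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.Table;
import org.apache.flink.table.api.TableEnvironment;

public class RetractExample {
    public static void main(String[] args) {
        TableEnvironment env = TableEnvironment.create(EnvironmentSettings.inStreamingMode());

        Table t = env.fromValues(
                DataTypes.ROW(DataTypes.FIELD("name", DataTypes.STRING())),
                row("Alice"), row("Bob"), row("Alice"));
        env.createTemporaryView("T", t);

        // When Alice's inner count goes 1 -> 2, the inner aggregation emits -U[1] and +U[2];
        // the outer aggregation uses the -U to decrement the bucket for c = 1.
        env.sqlQuery(
                "SELECT c, COUNT(*) AS cnt FROM (" +
                "  SELECT COUNT(*) AS c FROM T GROUP BY name" +
                ") GROUP BY c")
            .execute().print();
    }
}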
Changelog Insights – Append-only
19
CREATE TABLE Transaction (tid BIGINT, amount INT);
CREATE TABLE Payment (tid BIGINT, method STRING);
CREATE TABLE Result (tid BIGINT, …); -- accepts all changes
INSERT INTO Result SELECT * FROM Transaction T JOIN Payment P ON T.tid = P.tid;
Sink(table=[Result], changelogMode=[NONE])
+- Join(leftInputSpec=[NoUniqueKey], rightInputSpec=[NoUniqueKey], changelogMode=[I])
:- Exchange(changelogMode=[I])
: +- TableSourceScan(table=[[Transaction]], changelogMode=[I])
+- Exchange(changelogMode=[I])
+- TableSourceScan(table=[[Payment]], changelogMode=[I])
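Plans like this can be printed from the planner itself via ExplainDetail.CHANGELOG_MODE. A sketch, assuming placeholder datagen/blackhole tables so that the statement can be planned:

import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.ExplainDetail;
import org.apache.flink.table.api.TableEnvironment;

public class ChangelogInsights {
    public static void main(String[] args) {
        TableEnvironment env = TableEnvironment.create(EnvironmentSettings.inStreamingMode());

        // Placeholder sources and sink, only needed to make the statement explainable.
        env.executeSql("CREATE TABLE Transaction (tid BIGINT, amount INT) WITH ('connector' = 'datagen')");
        env.executeSql("CREATE TABLE Payment (tid BIGINT, method STRING) WITH ('connector' = 'datagen')");
        env.executeSql("CREATE TABLE Result (tid BIGINT, amount INT, method STRING) WITH ('connector' = 'blackhole')");

        // Print the optimized plan including the changelog mode of every operator.
        System.out.println(env.explainSql(
            "INSERT INTO Result SELECT T.tid, T.amount, P.method " +
            "FROM Transaction T JOIN Payment P ON T.tid = P.tid",
            ExplainDetail.CHANGELOG_MODE));
    }
}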
Changelog Insights – Updating
20
CREATE TABLE Transaction (tid BIGINT, amount INT);
CREATE TABLE Payment (tid BIGINT, method STRING);
CREATE TABLE Result (tid BIGINT, …);
INSERT INTO Result SELECT * FROM Transaction T LEFT JOIN Payment P ON T.tid = P.tid;
Sink(table=[Result], changelogMode=[NONE])
+- Join(leftInputSpec=[NoUniqueKey], rightInputSpec=[NoUniqueKey], changelogMode=[I,UB,UA,D])
:- Exchange(changelogMode=[I])
: +- TableSourceScan(table=[[Transaction]], changelogMode=[I])
+- Exchange(changelogMode=[I])
+- TableSourceScan(table=[[Payment]], changelogMode=[I])
Changelog Insights – Updating with PK
21
CREATE TABLE Transaction (tid BIGINT, amount INT);
CREATE TABLE Payment (tid BIGINT, method STRING);
CREATE TABLE Result (tid BIGINT, …, PRIMARY KEY(tid) NOT ENFORCED);
INSERT INTO Result SELECT * FROM Transaction T LEFT JOIN Payment P ON T.tid = P.tid;
Sink(table=[Result], changelogMode=[NONE], upsertMaterialize=[true])
+- Join(leftInputSpec=[NoUniqueKey], rightInputSpec=[NoUniqueKey], changelogMode=[I,UB,UA,D])
:- Exchange(changelogMode=[I])
: +- TableSourceScan(table=[[Transaction]], changelogMode=[I])
+- Exchange(changelogMode=[I])
+- TableSourceScan(table=[[Payment]], changelogMode=[I])
Changelog Insights – Updating with PK
22
CREATE TABLE Transaction (tid BIGINT, …, PRIMARY KEY(tid) NOT ENFORCED);
CREATE TABLE Payment (tid BIGINT, …, PRIMARY KEY(tid) NOT ENFORCED);
CREATE TABLE Result (tid BIGINT, …, PRIMARY KEY(tid) NOT ENFORCED);
INSERT INTO Result SELECT * FROM Transaction T LEFT JOIN Payment P ON T.tid = P.tid;
Sink(table=[Result], changelogMode=[NONE])
+- Join(leftInputSpec=[UniqueKey], rightInputSpec=[UniqueKey], changelogMode=[I,UA,D])
:- Exchange(changelogMode=[I])
: +- TableSourceScan(table=[[Transaction]], changelogMode=[I])
+- Exchange(changelogMode=[I])
+- TableSourceScan(table=[[Payment]], changelogMode=[I])
Mode Transitions
23
Append-only → Retracting: through an operation
Updating → Retracting: if an operator/sink requires it (ChangelogNormalize)
Retracting → Updating: if the sink requires it (UpsertMaterialize)
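Whether the planner adds the UpsertMaterialize step can also be influenced via configuration. A sketch, assuming the table.exec.sink.upsert-materialize option (values NONE / AUTO / FORCE) available in recent Flink versions:

import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;

public class UpsertMaterializeConfig {
    public static void main(String[] args) {
        TableEnvironment env = TableEnvironment.create(EnvironmentSettings.inStreamingMode());

        // AUTO (default): the planner inserts an UpsertMaterialize step before an upsert sink
        // only if the changelog could arrive out of order per key.
        // NONE / FORCE switch the materialization off or on unconditionally.
        env.getConfig().getConfiguration()
                .setString("table.exec.sink.upsert-materialize", "AUTO");
    }
}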
Mode Transitions – Characteristics
24
Append-only (aka "STREAM")
● Event-time column backed by watermarks
● Highly state efficient due to notion of completeness

Retracting (aka ?)
● Usually no event-time column
● State usage needs to be kept in mind

Updating (aka "TABLE")
● Pure materialized view maintenance
Demo
29
Summary
TLDR
● Flink's SQL engine is a powerful changelog processor
● Flexible tool for integrating systems with different semantics
There is more…
● CDC connector ecosystem
→ 2.6k GitHub stars
https://flink-packages.org/packages/cdc-connectors
● Table Store
a unified storage engine for dynamic tables
→ native changelog support
→ deep integration into Flink SQL "like a DB"
https://flink.apache.org/news/2022/05/11/release-table-store-0.1.0.html
30
Thanks
Timo Walther
@twalthr
mrsql@immerok.com