Query Kafka with SQL
Jove Zhong
Co-Founder and Head of Product, Timeplus
Gang Tao
Co-Founder and CTO, Timeplus
Why, how, and what’s next?
Sep 27, 2023
image credit: teacoffeecup.com
sed 's/coffee/SQL on Kafka/g'
Real-time data is everywhere, at the edge and in the cloud
46 ZB of data created by billions of IoT devices by 2025
30% of data generated will be real-time by 2025
Only 1% of data is analyzed; streaming data remains primarily untapped
Why SQL on Kafka?
Why SQL on Database?
ret = open_database(&(my_stock->inventory_dbp)..);
my_database->get(my_database, NULL, &key, &data, 0);
client.get(key)
update_bins = {'b': u"\ud83d\ude04", 'i': aerospike.null()}
client.put(key, update_bins)
request = new GetItemRequest()
.withKey(key_to_get)
.withTableName(table_name);
SELECT * FROM tab WHERE id='id1'
UPDATE tab SET flag=FALSE WHERE id='id1'
Why SQL on Kafka?
Reliable, Fast, Easy, Powerful, Declarative
Sample Use Cases
FinTech
● Real-time post-trade analytics
● Real-time pricing
DevOps
● Real-time GitHub insights
● Real-time o11y and usage-based pricing
Security Compliance
● SOC 2 compliance
● Container vulnerability monitoring
● Monitor Superblocks user activities
● Protect sensitive info in Slack
IoT
● Real-time fleet monitoring
Customer 360
● Auth0 notifications for new signups
● HubSpot custom dashboards/alerts
● Jitsu clickstream analytics
● Real-time Twitter marketing
Misc
● Wildfire monitoring and alerting
● Data-driven parent
source: https://docs.timeplus.com/showcases
How do you like your coffee?
Flink, ksqlDB, Hazelcast, Druid, Pinot, Trino, ClickHouse, StarRocks, RisingWave, Databend
Streaming Processor | Streaming Database | Real-time Database
Flink SQL
since 2016
Community ☕☕☕☕
Real-time ☕☕☕
Streaming ☕☕☕
Historical ☕
JOIN ☕☕☕☕
Large-scale ☕☕☕☕
Lightweight ☕☕
Easy to use ☕☕
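A minimal Flink SQL sketch for reading a Kafka topic and running a continuous query (topic, broker, and column names here are illustrative, not from the talk):

CREATE TABLE car_live_data (
  cid STRING,
  speed_kmh DOUBLE,
  event_time TIMESTAMP(3),
  WATERMARK FOR event_time AS event_time - INTERVAL '2' SECOND
) WITH (
  'connector' = 'kafka',
  'topic' = 'car_live_data',
  'properties.bootstrap.servers' = 'localhost:9092',
  'properties.group.id' = 'demo',
  'scan.startup.mode' = 'earliest-offset',
  'format' = 'json'
);

-- continuous (unbounded) windowed aggregation over the Kafka topic
SELECT window_start, window_end, count(*)
FROM TABLE(TUMBLE(TABLE car_live_data, DESCRIPTOR(event_time), INTERVAL '1' MINUTE))
GROUP BY window_start, window_end;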
ksqlDB
since 2019
Community ☕☕☕
Real-time ☕☕☕
Streaming ☕☕☕
Historical ☕☕
JOIN ☕☕☕
Large-scale ☕☕
Lightweight ☕☕
Easy to use ☕☕☕
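A comparable ksqlDB sketch (again, topic and column names are illustrative):

CREATE STREAM car_live_data (cid VARCHAR, speed_kmh DOUBLE)
  WITH (KAFKA_TOPIC='car_live_data', VALUE_FORMAT='JSON');

-- push query: keeps running and emits updated counts
SELECT cid, count(*) AS events
FROM car_live_data
GROUP BY cid
EMIT CHANGES;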
A distributed computation and storage platform with no dependency on disk storage: it keeps all of its operational state in the RAM of the cluster.
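Hazelcast exposes Kafka topics to SQL through mappings; a rough sketch of the idea (names and options are illustrative; check the Hazelcast SQL docs for the exact options required by your key/value formats):

CREATE MAPPING car_live_data (cid VARCHAR, speed_kmh DOUBLE)
TYPE Kafka
OPTIONS (
  'valueFormat' = 'json-flat',
  'bootstrap.servers' = 'localhost:9092'
);

-- streaming query over the mapped topic
SELECT cid, speed_kmh FROM car_live_data WHERE speed_kmh > 120;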
Flink, ksqlDB, Hazelcast, Druid, Pinot, Trino
Streaming Processor | Streaming Database | Real-time Database
Apache Pinot:
1. Create a schema JSON (columns, PKs)
2. Create a table configuration JSON (streamType=Kafka)
3. docker run .. apachepinot/pinot:latest AddTable
   -schemaFile /tmp/transcript-schema.json
   -tableConfigFile /tmp/transcript-table-realtime.json
   ..
   -exec

Apache Druid:
1. Load the druid-kafka-indexing-service extension on both the Overlord and the MiddleManagers
2. Create a supervisor-spec.json containing the Kafka supervisor spec
3. curl -X POST -H 'Content-Type: application/json' -d @supervisor-spec.json http://localhost:8090/druid/indexer/v1/supervisor
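Once ingestion is running, both systems answer plain SQL over the resulting table/datasource. A sketch, assuming the Pinot table is named transcript as in the files above and the Druid datasource name comes from the supervisor spec (both names are illustrative):

-- Pinot (query console or SQL REST API); 'transcript' is the table defined above
SELECT count(*) FROM transcript;

-- Druid SQL (query console or the SQL API); datasource name is whatever the supervisor spec defines
SELECT count(*) FROM "my_kafka_datasource"
WHERE __time > CURRENT_TIMESTAMP - INTERVAL '1' HOUR;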
Add a catalog properties file etc/catalog/kafka.properties for the Kafka connector.
$ ./trino --catalog kafka --schema aSchema
trino:aSchema> SELECT count(*) FROM customer;
ClickHouse, StarRocks
Streaming Processor | Streaming Database | Real-time Database
The next-generation streaming database (Kafka + Flink + ClickHouse)
SQL with streaming extensions
[Architecture diagram: data ingestion and a unified query processing pipeline. A Kafka External Stream is ingested and appended into streaming storage and historical storage; queries combine streaming reads and historical reads.]
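In Timeplus Proton, the Kafka External Stream in the diagram is itself created with SQL; a minimal sketch (broker, topic, format, and columns are illustrative):

CREATE EXTERNAL STREAM car_live_data (
  cid string,
  speed_kmh float64
)
SETTINGS type='kafka',
         brokers='localhost:9092',
         topic='car_live_data',
         data_format='JSONEachRow';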
Stream tail:
SELECT * FROM car_live_data

Global aggregation:
SELECT count(*) FROM car_live_data

Window aggregation:
SELECT window_start, count(*)
FROM tumble(car_live_data, 1m)
GROUP BY window_start

Sub-streams:
SELECT cid, speed_kmh,
  lag(speed_kmh) OVER (PARTITION BY cid) AS last_spd
FROM car_live_data

Late event:
SELECT window_start, count(*)
FROM tumble(car_live_data, 5s)
GROUP BY window_start
EMIT AFTER WATERMARK AND DELAY 2s

Time travel:
SELECT * FROM car_live_data
WHERE _tp_time > now() - 1d
Mocha
since 2021
Community ☕☕
Real-time ☕☕☕☕
Streaming ☕☕☕
Historical ☕☕☕☕
JOIN ☕☕☕☕
Large-scale ☕☕
Lightweight ☕☕☕☕
Easy to use ☕☕☕
one more thing
ClickHouse StarRocks
ClickHouse (Kafka table engine):
CREATE TABLE queue2 (
  timestamp UInt64,
  level String,
  message String
) ENGINE = Kafka
SETTINGS
  kafka_broker_list = 'localhost:9092',
  kafka_topic_list = 'topic',
  kafka_group_name = 'group1',
  kafka_format = 'JSONEachRow',
  kafka_num_consumers = 4;

StarRocks (Routine Load):
CREATE ROUTINE LOAD test_db.table102 ON table1
COLUMNS TERMINATED BY ",",
COLUMNS (user_id, user_gender, event_date, event_type)
WHERE event_type = 1
FROM KAFKA
(
  "kafka_broker_list" = "<kafka_broker_host>:<kafka_broker_port>",
  "kafka_topic" = "topic1",
  "property.kafka_default_offsets" = "OFFSET_BEGINNING"
);
ClickHouse features
● Table engine and table function (materialized view pattern sketched below)
● Rich functions and data types
● Not 100% ANSI SQL compatible
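With the Kafka table engine above, the usual pattern from the ClickHouse docs is to attach a materialized view that continuously drains the Kafka engine table into a MergeTree table you can query repeatedly; a sketch reusing queue2 from the DDL above:

CREATE TABLE daily (
  day Date,
  level String,
  total UInt64
) ENGINE = SummingMergeTree
ORDER BY (day, level);

CREATE MATERIALIZED VIEW consumer TO daily AS
  SELECT toDate(toDateTime(timestamp)) AS day, level, count() AS total
  FROM queue2
  GROUP BY day, level;

-- query the materialized data, not the Kafka engine table
SELECT level, sum(total) FROM daily GROUP BY level;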
RisingWave, Databend
Streaming Processor | Streaming Database | Real-time Database
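As one example from this group, RisingWave also wires Kafka in with a SQL DDL; a rough sketch (option names follow the RisingWave Kafka source docs; the exact FORMAT/ENCODE syntax varies by version, and all names here are illustrative):

CREATE SOURCE car_live_data (
  cid varchar,
  speed_kmh double precision
) WITH (
  connector = 'kafka',
  topic = 'car_live_data',
  properties.bootstrap.server = 'localhost:9092',
  scan.startup.mode = 'earliest'
) FORMAT PLAIN ENCODE JSON;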
dozer -c dozer-config.yaml
curl -X POST http://localhost:8080/tout/query --header 'Content-Type: application/json'
Cappuccino
Community ☕☕☕
Real-time ☕☕☕
Streaming ☕☕☕
Historical ☕☕
JOIN ☕☕☕
Large-scale ☕☕☕
Lightweight ☕☕☕☕
Easy to use ☕☕☕
Programming - turn data into insight
1GL - machine language
2GL - assembly language
3GL - imperative language
4GL - declarative language
5GL - intelligent language
[Diagram: languages arranged along a machine-to-human spectrum, turning data into insight.]
Streaming Processor - ETL / Data Pipeline
● SQL as data pipeline
● No data storage
● Unbounded real-time query

Real-time Database - Historical Report / Ad-hoc Analysis
● Mostly leverages Kafka to ingest data
● Federated search/query
  ○ ClickHouse Kafka engine
  ○ Trino
● Bounded batch query, no streaming query

Streaming Database - Hybrid
● Supports Kafka data storage
● Unbounded real-time query
● Combines real-time and historical data
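To make the bounded vs. unbounded distinction concrete, the same count can be written both ways; shown here in Timeplus syntax purely as an illustration (other streaming databases have equivalent constructs):

-- unbounded streaming query: never terminates, keeps emitting the updated count
SELECT count(*) FROM car_live_data;

-- bounded historical query: scans what is already stored once and returns
SELECT count(*) FROM table(car_live_data);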
Query Kafka with SQL: Open Source + Cloud + Source Available
Flink, ksqlDB, Hazelcast, Druid, Pinot, Trino, ClickHouse, StarRocks, RisingWave, Databend
Streaming Processor | Streaming Database | Real-time Database
Flink SQL: Community ☕☕☕☕ · Real-time ☕☕☕ · Streaming ☕☕☕ · Historical ☕ · JOIN ☕☕☕☕ · Large-scale ☕☕☕☕ · Lightweight ☕☕ · Easy to use ☕☕
ksqlDB: Community ☕☕☕ · Real-time ☕☕☕ · Streaming ☕☕☕ · Historical ☕☕ · JOIN ☕☕☕ · Large-scale ☕☕ · Lightweight ☕☕ · Easy to use ☕☕☕
Mocha: Community ☕☕ · Real-time ☕☕☕☕ · Streaming ☕☕☕ · Historical ☕☕☕☕ · JOIN ☕☕☕☕ · Large-scale ☕☕ · Lightweight ☕☕☕☕ · Easy to use ☕☕☕
Cappuccino: Community ☕☕☕ · Real-time ☕☕☕ · Streaming ☕☕☕ · Historical ☕☕ · JOIN ☕☕☕ · Large-scale ☕☕☕ · Lightweight ☕☕☕☕ · Easy to use ☕☕☕
Q+A / Thank you!
Meet us at booth #407
Try Timeplus Proton (Open Source)
Or sign up for a free cloud account
timeplus.com
