1
Kick Your Database to the Curb
Using Kafka Streams Interactive Queries to Enable
Powerful MicroServices
2
Brief Introduction
• Worked at Confluent (Streams Team) 2 years
• Apache Kafka Committer
• Author Kafka Streams in Action
Special thanks to @gamussa!
3
Agenda
• What is State
• Kafka Streams Overview
• Describe Interactive Queries
• Live Demo!
4
Stateful Stream Processing
What is State?
5
Stateful Stream Processing
What is State?
Information your application needs to remember
beyond the scope of a single record
GroupBy Example
public static void main(String[] args) {
int counter = 0;
int sendInterval = 15;
Map<String, Integer> groupByCounts = new HashMap<>();
try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(consumerProperties());
KafkaProducer<String, Integer> producer = new KafkaProducer<>(producerProperties())) {
consumer.subscribe(Arrays.asList("A", "B"));
GroupBy Example
while (true) {
ConsumerRecords<String, String> records =
consumer.poll(Duration.ofSeconds(5));
for (ConsumerRecord<String, String> record : records) {
String key = record.key();
Integer count = groupByCounts.get(key);
if (count == null) {
count = 0;
}
count += 1;
groupByCounts.put(key, count);
}
GroupBy Example
if(counter++ % sendInterval == 0) {
for(Entry<String, Integer> groupedEntry:groupByCounts.entrySet()){
ProducerRecord<String, Integer> producerRecord =
new ProducerRecord<>("group-by-counts",
groupedEntry.getKey(),
groupedEntry.getValue());
producer.send(producerRecord);
}
consumer.commitSync();
}
11
Streams GroupBy
...
stream = streamBuilder.stream(Arrays.asList("A", "B"));
stream.groupByKey()
.count()
.toStream()
.to("output-topic",
Produced.with(Serdes.String(), Serdes.Long()));
14
Stateful Stream Processing
Streams Stateful Operations
• Joins
• Windowing operations
• Aggregation/Reduce
When you use any of these operations, Streams creates a
state store
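Conceptually, the state store a windowed count materializes is a key-value map keyed by (record key, window start). The sketch below is illustrative only, not the Streams API: `WindowedCountSketch` and its method names are hypothetical, and real stores are RocksDB-backed and fault tolerant via changelog topics.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of what a windowed-count state store remembers
// across records: one counter per (key, window start).
public class WindowedCountSketch {
    private final long windowSizeMs;
    private final Map<String, Long> store = new HashMap<>(); // the "state store"

    public WindowedCountSketch(long windowSizeMs) {
        this.windowSizeMs = windowSizeMs;
    }

    // Bump and return the count for the window this timestamp falls into.
    public long process(String key, long timestampMs) {
        long windowStart = timestampMs - (timestampMs % windowSizeMs);
        String storeKey = key + "@" + windowStart;
        long count = store.getOrDefault(storeKey, 0L) + 1;
        store.put(storeKey, count);
        return count;
    }

    public static void main(String[] args) {
        WindowedCountSketch counts = new WindowedCountSketch(60_000);
        counts.process("A", 1_000);
        counts.process("A", 2_000);                   // same 1-minute window
        long inWindow = counts.process("A", 30_000);  // still the same window
        long newWindow = counts.process("A", 61_000); // next window starts fresh
        System.out.println(inWindow + " " + newWindow); // 3 1
    }
}
```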
15
Making Streams Results Queryable
Kafka Streams Application
KAFKA
External Application /
REST Service
16
Making Streams Results Queryable
Kafka Streams Application
KAFKA
External Application /
REST Service
Database
18
Making Streams Queryable
stream.groupByKey()
.count()
.toStream()
.to("output-topic",
Produced.with(Serdes.String(), Serdes.Long()))
..
consumer.poll(Duration.ofSeconds(5));
for (ConsumerRecord<String, String> record : records) {
someService.save(record.key(), record.value())
..
}
..
19
Making Streams State Directly Queryable
...
stream = streamBuilder.stream(Arrays.asList("A", "B"));
stream.groupByKey()
.count(Materialized.as("count-store"))
.toStream()
.to("output-topic",
Produced.with(Serdes.String(), Serdes.Long()));
20
Making Streams State Directly Queryable
Properties props = new Properties();
props.put(StreamsConfig.APPLICATION_ID_CONFIG,
"ks-interactive-stock-analysis-appid");
props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG,
"localhost:9092");
props.put(StreamsConfig.APPLICATION_SERVER_CONFIG,
host + ":" + port);
...
21
Making Streams State Directly Queryable
Kafka Streams Application
KAFKA
Embedded RPC
22
What’s with the APPLICATION_SERVER_CONFIG?
• A single Streams instance doesn’t contain all keys
• Streams will query other instances for store misses
• A single Streams instance can act as a proxy for all instances
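The proxy pattern in these bullets can be sketched in plain Java: each instance asks the metadata layer which host owns a key's partition, serves local keys itself, and forwards the rest. `OwnerLookup` and `routeFor` are illustrative names standing in for `metadataForKey` and the RPC layer; they are not part of the Streams API.

```java
import java.util.Map;

// Hypothetical sketch of the interactive-query proxy pattern.
public class QueryRouter {
    // Stands in for kafkaStreams.metadataForKey(...).hostInfo()
    interface OwnerLookup {
        String hostFor(String key);
    }

    private final String selfHost;
    private final OwnerLookup lookup;

    QueryRouter(String selfHost, OwnerLookup lookup) {
        this.selfHost = selfHost;
        this.lookup = lookup;
    }

    /** Returns "local" when this instance owns the key, else the owner to forward to. */
    String routeFor(String key) {
        String owner = lookup.hostFor(key);
        return selfHost.equals(owner) ? "local" : "forward:" + owner;
    }

    public static void main(String[] args) {
        Map<String, String> owners =
            Map.of("ENERGY", "hostA:4567", "FINANCE", "hostB:4568");
        QueryRouter routerA = new QueryRouter("hostA:4567", owners::get);
        System.out.println(routerA.routeFor("ENERGY"));  // local
        System.out.println(routerA.routeFor("FINANCE")); // forward:hostB:4568
    }
}
```

Because any instance can answer or forward, clients only ever need the address of one instance.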
25
Making Streams Results Queryable
Streams app “A”
Host = hostA:4567
Metadata -> hostB:4568
KAFKA
Streams app “B”
Host = hostB:4568
Metadata -> hostA:4567
26
Topic Partitions and Streams Tasks
Streams app “A”
Host = hostA:4567
Streams app “B”
Host = hostB:4568
State Store State Store
Topic with four partitions
The four partitions become four tasks, so each
Streams application is assigned two
partitions/tasks
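The reason a key's state lives on exactly one instance follows from partitioning: the producer's partitioner maps each key to one partition, and each partition's task (with its state store) runs on one instance. Kafka's default partitioner actually uses murmur2 over the serialized key; `String.hashCode` below is a simplified stand-in to keep the sketch self-contained.

```java
// Simplified sketch of key-to-partition routing (not Kafka's real
// partitioner, which uses murmur2 on the serialized key bytes).
public class PartitionSketch {
    static int partitionFor(String key, int numPartitions) {
        // Mask off the sign bit so the modulo result is non-negative.
        return (key.hashCode() & 0x7fffffff) % numPartitions;
    }

    public static void main(String[] args) {
        int p = PartitionSketch.partitionFor("FINANCE", 4);
        // Every record with key "FINANCE" lands on the same partition...
        System.out.println(p == PartitionSketch.partitionFor("FINANCE", 4)); // true
        // ...so with partitions 0-1 on app A and 2-3 on app B, only one
        // app's state store ever sees that key.
        System.out.println(p < 2 ? "app A holds FINANCE" : "app B holds FINANCE");
    }
}
```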
28
Making Streams Results Queryable
Streams app “A”
Host = hostA:4567
Streams app “B”
Host = hostB:4568
State Store State Store
{"ENERGY":"10000"} written to
partition 0 assigned to App A
{"FINANCE":"11000"} written to
partition 1 assigned to App B
http://hostA:4567?key=FINANCE
29
Example of a Streams RPC
Kafka Streams Application
KAFKA
{ JS }
Demo Time!
30
Embedding the Web Server
KafkaStreams kafkaStreams =
new KafkaStreams(builder.build(), streamsConfig);
InteractiveQueryServer queryServer =
new InteractiveQueryServer(kafkaStreams, hostInfo);
31
Embedding the Web Server
kafkaStreams.setStateListener((newState, oldState) -> {
if (newState == KafkaStreams.State.RUNNING
&& oldState == KafkaStreams.State.REBALANCING) {
queryServer.setReady(true);
} else if (newState != KafkaStreams.State.RUNNING) {
queryServer.setReady(false);
}
});
33
Embedding the Web Server
public void init() {
get("/window/:store/:key/:from/:to", (req, res) -> ready ?
fetchFromWindowStore(req.params()) : STORES_NOT_ACCESSIBLE);
get("/window/:store/:key", (req, res) -> ready ?
fetchFromWindowStore(req.params()) : STORES_NOT_ACCESSIBLE);
get("/kv/:store", (req, res) -> ready ? fetchAllFromKeyValueStore(req.params()) :
STORES_NOT_ACCESSIBLE);
get("/kv/:store/:local", (req, res) -> ready ?
fetchAllFromLocalKeyValueStore(req.params()) : STORES_NOT_ACCESSIBLE);
get("/session/:store/:key",
(req, res) -> ready ?
fetchFromSessionStore(req.params()) : STORES_NOT_ACCESSIBLE);
get("/iq",
(req, res) -> {
res.redirect("interactiveQueriesApplication.html");
return "";
});
34
Embedding the Web Server
fetchFromSessionStore(Map<String, String> params) {
String store = params.get(STORE_PARAM);
String key = params.get(KEY_PARAM);
HostInfo storeHostInfo = getHostInfo(store, key);
if (storeHostInfo.host().equals("unknown")) {
return STORES_NOT_ACCESSIBLE;
}
if (dataNotLocal(storeHostInfo)) {
return fetchRemote(storeHostInfo, "session", params);
}
ReadOnlySessionStore<String, CustomerTransactions> readOnlySessionStore =
kafkaStreams.store(store, QueryableStoreTypes.sessionStore());
35
Embedding the Web Server
getHostInfo(String storeName, String key) {
StreamsMetadata metadata =
kafkaStreams.metadataForKey(storeName, key, stringSerializer);
return metadata.hostInfo();
}
38
Embedding the Web Server
fetchFromSessionStore(Map<String, String> params) {
String store = params.get(STORE_PARAM);
String key = params.get(KEY_PARAM);
HostInfo storeHostInfo = getHostInfo(store, key);
if (storeHostInfo.host().equals("unknown")) {
return STORES_NOT_ACCESSIBLE;
}
if (dataNotLocal(storeHostInfo)) {
return fetchRemote(storeHostInfo, "session", params);
}
// Iterate over readOnlySessionStore and
// store results in a list sessionResults
return gson.toJson(sessionResults);
39
Client View Development
<body>
<h2>Kafka Streams Equities Dashboard Application</h2>
<!-- Other div elements left out for clarity -->
<div id="sessionDiv">
<h3 id="sessionHeader">Customer Session Equity Activity Table</h3>
<table id="sessionTable">
<tr>
<th>Customer Id</th>
<th>Average Equity Transaction Spent Per Session</th>
</tr>
</table>
</div>
</body>
40
Client View Development
<script>
function loadIqTables() {
$.getJSON("/kv/TransactionsBySector", function (response) {
updateTable(response, $('#txnsTable'))
$('#txnsHeader').animate({color:'red'},500).animate({color:'#CCCCCC'}, 500)
})
updateTableWithList("/window/NumberSharesPerPeriod/", symbols,
$('#stockTable'), $('#stockHeader'));
updateTableWithList("/session/CustomerPurchaseSessions/", customers,
$('#sessionTable'), $('#sessionHeader'))
}
setInterval(loadIqTables, 7000);
</script>
42
Security
Kafka Streams Application
KAFKA https://
43
Security
ReadOnlySessionStore<String, CustomerTransactions>
readOnlySessionStore = kafkaStreams.store(
store,
QueryableStoreTypes.sessionStore());
try (KeyValueIterator<Windowed<String>, CustomerTransactions>
iterator = readOnlySessionStore.fetch(key)) {
while (iterator.hasNext()) {
//Transform or mask data here and return sanitized
//data
}
}
return gson.toJson(sanitizedRecords);
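One way to do the masking step in the loop above is a small redaction helper applied to each field before serialization. This is a hypothetical sketch, not code from the talk's repo: `Sanitizer` and `maskAccountId` are illustrative names.

```java
// Hypothetical masking helper for the sanitization step: redact all but
// the last four characters of a sensitive id before returning it.
public class Sanitizer {
    static String maskAccountId(String accountId) {
        if (accountId == null || accountId.length() <= 4) {
            return "****"; // too short (or missing) to reveal anything
        }
        int keep = 4;
        return "*".repeat(accountId.length() - keep)
                + accountId.substring(accountId.length() - keep);
    }

    public static void main(String[] args) {
        System.out.println(maskAccountId("123456789")); // *****6789
    }
}
```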
44
Summary
Interactive Queries is a powerful abstraction that
simplifies stateful stream processing
There are still cases for which an external database or
storage layer might be a better fit
45
Summary
Kafka Streams in Action Examples: https://github.com/bbejeck/kafka-streams-in-action/blob/master/src/main/java/bbejeck/webserver/InteractiveQueryServer.java
Music example: https://github.com/confluentinc/examples/blob/master/kafka-streams/src/main/java/io/confluent/examples/streams/interactivequeries/kafkamusic/KafkaMusicExample.java
Streaming Movie Ratings: https://github.com/confluentinc/demo-scene/tree/master/streams-movie-demo
46
Thanks!
Stay in Touch!
• https://slackpass.io/confluentcommunity
• https://www.confluent.io/blog/
• Twitter @bbejeck
• We are hiring! https://www.confluent.io/careers/
KS19Meetup.
CONFLUENT COMMUNITY DISCOUNT CODE
25% OFF*
*Standard Priced Conference pass
NOMINATE YOURSELF OR A PEER AT
CONFLUENT.IO/NOMINATE
Citi Tech Talk: Messaging ModernizationCiti Tech Talk: Messaging Modernization
Citi Tech Talk: Messaging Modernizationconfluent
 
Citi Tech Talk: Data Governance for streaming and real time data
Citi Tech Talk: Data Governance for streaming and real time dataCiti Tech Talk: Data Governance for streaming and real time data
Citi Tech Talk: Data Governance for streaming and real time dataconfluent
 
Confluent & GSI Webinars series: Session 2
Confluent & GSI Webinars series: Session 2Confluent & GSI Webinars series: Session 2
Confluent & GSI Webinars series: Session 2confluent
 
Data In Motion Paris 2023
Data In Motion Paris 2023Data In Motion Paris 2023
Data In Motion Paris 2023confluent
 
Confluent Partner Tech Talk with Synthesis
Confluent Partner Tech Talk with SynthesisConfluent Partner Tech Talk with Synthesis
Confluent Partner Tech Talk with Synthesisconfluent
 
The Future of Application Development - API Days - Melbourne 2023
The Future of Application Development - API Days - Melbourne 2023The Future of Application Development - API Days - Melbourne 2023
The Future of Application Development - API Days - Melbourne 2023confluent
 
The Playful Bond Between REST And Data Streams
The Playful Bond Between REST And Data StreamsThe Playful Bond Between REST And Data Streams
The Playful Bond Between REST And Data Streamsconfluent
 

More from confluent (20)

Catch the Wave: SAP Event-Driven and Data Streaming for the Intelligence Ente...
Catch the Wave: SAP Event-Driven and Data Streaming for the Intelligence Ente...Catch the Wave: SAP Event-Driven and Data Streaming for the Intelligence Ente...
Catch the Wave: SAP Event-Driven and Data Streaming for the Intelligence Ente...
 
Santander Stream Processing with Apache Flink
Santander Stream Processing with Apache FlinkSantander Stream Processing with Apache Flink
Santander Stream Processing with Apache Flink
 
Unlocking the Power of IoT: A comprehensive approach to real-time insights
Unlocking the Power of IoT: A comprehensive approach to real-time insightsUnlocking the Power of IoT: A comprehensive approach to real-time insights
Unlocking the Power of IoT: A comprehensive approach to real-time insights
 
Workshop híbrido: Stream Processing con Flink
Workshop híbrido: Stream Processing con FlinkWorkshop híbrido: Stream Processing con Flink
Workshop híbrido: Stream Processing con Flink
 
Industry 4.0: Building the Unified Namespace with Confluent, HiveMQ and Spark...
Industry 4.0: Building the Unified Namespace with Confluent, HiveMQ and Spark...Industry 4.0: Building the Unified Namespace with Confluent, HiveMQ and Spark...
Industry 4.0: Building the Unified Namespace with Confluent, HiveMQ and Spark...
 
AWS Immersion Day Mapfre - Confluent
AWS Immersion Day Mapfre   -   ConfluentAWS Immersion Day Mapfre   -   Confluent
AWS Immersion Day Mapfre - Confluent
 
Eventos y Microservicios - Santander TechTalk
Eventos y Microservicios - Santander TechTalkEventos y Microservicios - Santander TechTalk
Eventos y Microservicios - Santander TechTalk
 
Q&A with Confluent Experts: Navigating Networking in Confluent Cloud
Q&A with Confluent Experts: Navigating Networking in Confluent CloudQ&A with Confluent Experts: Navigating Networking in Confluent Cloud
Q&A with Confluent Experts: Navigating Networking in Confluent Cloud
 
Citi TechTalk Session 2: Kafka Deep Dive
Citi TechTalk Session 2: Kafka Deep DiveCiti TechTalk Session 2: Kafka Deep Dive
Citi TechTalk Session 2: Kafka Deep Dive
 
Build real-time streaming data pipelines to AWS with Confluent
Build real-time streaming data pipelines to AWS with ConfluentBuild real-time streaming data pipelines to AWS with Confluent
Build real-time streaming data pipelines to AWS with Confluent
 
Q&A with Confluent Professional Services: Confluent Service Mesh
Q&A with Confluent Professional Services: Confluent Service MeshQ&A with Confluent Professional Services: Confluent Service Mesh
Q&A with Confluent Professional Services: Confluent Service Mesh
 
Citi Tech Talk: Event Driven Kafka Microservices
Citi Tech Talk: Event Driven Kafka MicroservicesCiti Tech Talk: Event Driven Kafka Microservices
Citi Tech Talk: Event Driven Kafka Microservices
 
Confluent & GSI Webinars series - Session 3
Confluent & GSI Webinars series - Session 3Confluent & GSI Webinars series - Session 3
Confluent & GSI Webinars series - Session 3
 
Citi Tech Talk: Messaging Modernization
Citi Tech Talk: Messaging ModernizationCiti Tech Talk: Messaging Modernization
Citi Tech Talk: Messaging Modernization
 
Citi Tech Talk: Data Governance for streaming and real time data
Citi Tech Talk: Data Governance for streaming and real time dataCiti Tech Talk: Data Governance for streaming and real time data
Citi Tech Talk: Data Governance for streaming and real time data
 
Confluent & GSI Webinars series: Session 2
Confluent & GSI Webinars series: Session 2Confluent & GSI Webinars series: Session 2
Confluent & GSI Webinars series: Session 2
 
Data In Motion Paris 2023
Data In Motion Paris 2023Data In Motion Paris 2023
Data In Motion Paris 2023
 
Confluent Partner Tech Talk with Synthesis
Confluent Partner Tech Talk with SynthesisConfluent Partner Tech Talk with Synthesis
Confluent Partner Tech Talk with Synthesis
 
The Future of Application Development - API Days - Melbourne 2023
The Future of Application Development - API Days - Melbourne 2023The Future of Application Development - API Days - Melbourne 2023
The Future of Application Development - API Days - Melbourne 2023
 
The Playful Bond Between REST And Data Streams
The Playful Bond Between REST And Data StreamsThe Playful Bond Between REST And Data Streams
The Playful Bond Between REST And Data Streams
 

Recently uploaded

Axa Assurance Maroc - Insurer Innovation Award 2024
Axa Assurance Maroc - Insurer Innovation Award 2024Axa Assurance Maroc - Insurer Innovation Award 2024
Axa Assurance Maroc - Insurer Innovation Award 2024The Digital Insurer
 
Developing An App To Navigate The Roads of Brazil
Developing An App To Navigate The Roads of BrazilDeveloping An App To Navigate The Roads of Brazil
Developing An App To Navigate The Roads of BrazilV3cube
 
IAC 2024 - IA Fast Track to Search Focused AI Solutions
IAC 2024 - IA Fast Track to Search Focused AI SolutionsIAC 2024 - IA Fast Track to Search Focused AI Solutions
IAC 2024 - IA Fast Track to Search Focused AI SolutionsEnterprise Knowledge
 
Neo4j - How KGs are shaping the future of Generative AI at AWS Summit London ...
Neo4j - How KGs are shaping the future of Generative AI at AWS Summit London ...Neo4j - How KGs are shaping the future of Generative AI at AWS Summit London ...
Neo4j - How KGs are shaping the future of Generative AI at AWS Summit London ...Neo4j
 
Driving Behavioral Change for Information Management through Data-Driven Gree...
Driving Behavioral Change for Information Management through Data-Driven Gree...Driving Behavioral Change for Information Management through Data-Driven Gree...
Driving Behavioral Change for Information Management through Data-Driven Gree...Enterprise Knowledge
 
Slack Application Development 101 Slides
Slack Application Development 101 SlidesSlack Application Development 101 Slides
Slack Application Development 101 Slidespraypatel2
 
Histor y of HAM Radio presentation slide
Histor y of HAM Radio presentation slideHistor y of HAM Radio presentation slide
Histor y of HAM Radio presentation slidevu2urc
 
Strategies for Unlocking Knowledge Management in Microsoft 365 in the Copilot...
Strategies for Unlocking Knowledge Management in Microsoft 365 in the Copilot...Strategies for Unlocking Knowledge Management in Microsoft 365 in the Copilot...
Strategies for Unlocking Knowledge Management in Microsoft 365 in the Copilot...Drew Madelung
 
Boost PC performance: How more available memory can improve productivity
Boost PC performance: How more available memory can improve productivityBoost PC performance: How more available memory can improve productivity
Boost PC performance: How more available memory can improve productivityPrincipled Technologies
 
Presentation on how to chat with PDF using ChatGPT code interpreter
Presentation on how to chat with PDF using ChatGPT code interpreterPresentation on how to chat with PDF using ChatGPT code interpreter
Presentation on how to chat with PDF using ChatGPT code interpreternaman860154
 
WhatsApp 9892124323 ✓Call Girls In Kalyan ( Mumbai ) secure service
WhatsApp 9892124323 ✓Call Girls In Kalyan ( Mumbai ) secure serviceWhatsApp 9892124323 ✓Call Girls In Kalyan ( Mumbai ) secure service
WhatsApp 9892124323 ✓Call Girls In Kalyan ( Mumbai ) secure servicePooja Nehwal
 
The Codex of Business Writing Software for Real-World Solutions 2.pptx
The Codex of Business Writing Software for Real-World Solutions 2.pptxThe Codex of Business Writing Software for Real-World Solutions 2.pptx
The Codex of Business Writing Software for Real-World Solutions 2.pptxMalak Abu Hammad
 
Partners Life - Insurer Innovation Award 2024
Partners Life - Insurer Innovation Award 2024Partners Life - Insurer Innovation Award 2024
Partners Life - Insurer Innovation Award 2024The Digital Insurer
 
Factors to Consider When Choosing Accounts Payable Services Providers.pptx
Factors to Consider When Choosing Accounts Payable Services Providers.pptxFactors to Consider When Choosing Accounts Payable Services Providers.pptx
Factors to Consider When Choosing Accounts Payable Services Providers.pptxKatpro Technologies
 
Finology Group – Insurtech Innovation Award 2024
Finology Group – Insurtech Innovation Award 2024Finology Group – Insurtech Innovation Award 2024
Finology Group – Insurtech Innovation Award 2024The Digital Insurer
 
Handwritten Text Recognition for manuscripts and early printed texts
Handwritten Text Recognition for manuscripts and early printed textsHandwritten Text Recognition for manuscripts and early printed texts
Handwritten Text Recognition for manuscripts and early printed textsMaria Levchenko
 
The Role of Taxonomy and Ontology in Semantic Layers - Heather Hedden.pdf
The Role of Taxonomy and Ontology in Semantic Layers - Heather Hedden.pdfThe Role of Taxonomy and Ontology in Semantic Layers - Heather Hedden.pdf
The Role of Taxonomy and Ontology in Semantic Layers - Heather Hedden.pdfEnterprise Knowledge
 
08448380779 Call Girls In Civil Lines Women Seeking Men
08448380779 Call Girls In Civil Lines Women Seeking Men08448380779 Call Girls In Civil Lines Women Seeking Men
08448380779 Call Girls In Civil Lines Women Seeking MenDelhi Call girls
 
Kalyanpur ) Call Girls in Lucknow Finest Escorts Service 🍸 8923113531 🎰 Avail...
Kalyanpur ) Call Girls in Lucknow Finest Escorts Service 🍸 8923113531 🎰 Avail...Kalyanpur ) Call Girls in Lucknow Finest Escorts Service 🍸 8923113531 🎰 Avail...
Kalyanpur ) Call Girls in Lucknow Finest Escorts Service 🍸 8923113531 🎰 Avail...gurkirankumar98700
 
A Domino Admins Adventures (Engage 2024)
A Domino Admins Adventures (Engage 2024)A Domino Admins Adventures (Engage 2024)
A Domino Admins Adventures (Engage 2024)Gabriella Davis
 

Recently uploaded (20)

Axa Assurance Maroc - Insurer Innovation Award 2024
Axa Assurance Maroc - Insurer Innovation Award 2024Axa Assurance Maroc - Insurer Innovation Award 2024
Axa Assurance Maroc - Insurer Innovation Award 2024
 
Developing An App To Navigate The Roads of Brazil
Developing An App To Navigate The Roads of BrazilDeveloping An App To Navigate The Roads of Brazil
Developing An App To Navigate The Roads of Brazil
 
IAC 2024 - IA Fast Track to Search Focused AI Solutions
IAC 2024 - IA Fast Track to Search Focused AI SolutionsIAC 2024 - IA Fast Track to Search Focused AI Solutions
IAC 2024 - IA Fast Track to Search Focused AI Solutions
 
Neo4j - How KGs are shaping the future of Generative AI at AWS Summit London ...
Neo4j - How KGs are shaping the future of Generative AI at AWS Summit London ...Neo4j - How KGs are shaping the future of Generative AI at AWS Summit London ...
Neo4j - How KGs are shaping the future of Generative AI at AWS Summit London ...
 
Driving Behavioral Change for Information Management through Data-Driven Gree...
Driving Behavioral Change for Information Management through Data-Driven Gree...Driving Behavioral Change for Information Management through Data-Driven Gree...
Driving Behavioral Change for Information Management through Data-Driven Gree...
 
Slack Application Development 101 Slides
Slack Application Development 101 SlidesSlack Application Development 101 Slides
Slack Application Development 101 Slides
 
Histor y of HAM Radio presentation slide
Histor y of HAM Radio presentation slideHistor y of HAM Radio presentation slide
Histor y of HAM Radio presentation slide
 
Strategies for Unlocking Knowledge Management in Microsoft 365 in the Copilot...
Strategies for Unlocking Knowledge Management in Microsoft 365 in the Copilot...Strategies for Unlocking Knowledge Management in Microsoft 365 in the Copilot...
Strategies for Unlocking Knowledge Management in Microsoft 365 in the Copilot...
 
Boost PC performance: How more available memory can improve productivity
Boost PC performance: How more available memory can improve productivityBoost PC performance: How more available memory can improve productivity
Boost PC performance: How more available memory can improve productivity
 
Presentation on how to chat with PDF using ChatGPT code interpreter
Presentation on how to chat with PDF using ChatGPT code interpreterPresentation on how to chat with PDF using ChatGPT code interpreter
Presentation on how to chat with PDF using ChatGPT code interpreter
 
WhatsApp 9892124323 ✓Call Girls In Kalyan ( Mumbai ) secure service
WhatsApp 9892124323 ✓Call Girls In Kalyan ( Mumbai ) secure serviceWhatsApp 9892124323 ✓Call Girls In Kalyan ( Mumbai ) secure service
WhatsApp 9892124323 ✓Call Girls In Kalyan ( Mumbai ) secure service
 
The Codex of Business Writing Software for Real-World Solutions 2.pptx
The Codex of Business Writing Software for Real-World Solutions 2.pptxThe Codex of Business Writing Software for Real-World Solutions 2.pptx
The Codex of Business Writing Software for Real-World Solutions 2.pptx
 
Partners Life - Insurer Innovation Award 2024
Partners Life - Insurer Innovation Award 2024Partners Life - Insurer Innovation Award 2024
Partners Life - Insurer Innovation Award 2024
 
Factors to Consider When Choosing Accounts Payable Services Providers.pptx
Factors to Consider When Choosing Accounts Payable Services Providers.pptxFactors to Consider When Choosing Accounts Payable Services Providers.pptx
Factors to Consider When Choosing Accounts Payable Services Providers.pptx
 
Finology Group – Insurtech Innovation Award 2024
Finology Group – Insurtech Innovation Award 2024Finology Group – Insurtech Innovation Award 2024
Finology Group – Insurtech Innovation Award 2024
 
Handwritten Text Recognition for manuscripts and early printed texts
Handwritten Text Recognition for manuscripts and early printed textsHandwritten Text Recognition for manuscripts and early printed texts
Handwritten Text Recognition for manuscripts and early printed texts
 
The Role of Taxonomy and Ontology in Semantic Layers - Heather Hedden.pdf
The Role of Taxonomy and Ontology in Semantic Layers - Heather Hedden.pdfThe Role of Taxonomy and Ontology in Semantic Layers - Heather Hedden.pdf
The Role of Taxonomy and Ontology in Semantic Layers - Heather Hedden.pdf
 
08448380779 Call Girls In Civil Lines Women Seeking Men
08448380779 Call Girls In Civil Lines Women Seeking Men08448380779 Call Girls In Civil Lines Women Seeking Men
08448380779 Call Girls In Civil Lines Women Seeking Men
 
Kalyanpur ) Call Girls in Lucknow Finest Escorts Service 🍸 8923113531 🎰 Avail...
Kalyanpur ) Call Girls in Lucknow Finest Escorts Service 🍸 8923113531 🎰 Avail...Kalyanpur ) Call Girls in Lucknow Finest Escorts Service 🍸 8923113531 🎰 Avail...
Kalyanpur ) Call Girls in Lucknow Finest Escorts Service 🍸 8923113531 🎰 Avail...
 
A Domino Admins Adventures (Engage 2024)
A Domino Admins Adventures (Engage 2024)A Domino Admins Adventures (Engage 2024)
A Domino Admins Adventures (Engage 2024)
 

Kick your database_to_the_curb_reston_08_27_19

  • 1. 1 Kick Your Database to the Curb Using Kafka Streams Interactive Queries to Enable Powerful MicroServices
  • 2. 2 Brief Introduction • Worked at Confluent (Streams Team) 2 years • Apache Kafka Committer • Author Kafka Streams in Action Special thanks to @gamussa!
  • 3. 3 Agenda • What is State • Kafka Streams Overview • Describe Interactive Queries • Live Demo!
  • 5. 5 Stateful Stream Processing What is State? Information your application needs to remember beyond the scope of a single record
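Before the deck's consumer-based example, the distinction can be sketched with plain Java (not from the slides; the `mask` and `countByKey` methods are illustrative names): a stateless operation needs only the current record, while a stateful one must remember something across records.

```java
import java.util.HashMap;
import java.util.Map;

public class StatefulVsStateless {

    // Stateless: the result depends only on this one record.
    public static String mask(String cardNumber) {
        return "****" + cardNumber.substring(cardNumber.length() - 4);
    }

    // Stateful: the result depends on every record seen so far for this key.
    private static final Map<String, Integer> counts = new HashMap<>();

    public static int countByKey(String key) {
        return counts.merge(key, 1, Integer::sum);
    }

    public static void main(String[] args) {
        System.out.println(mask("4111111111111111")); // no memory needed
        for (String key : new String[]{"A", "B", "A"}) {
            System.out.println(key + " -> " + countByKey(key)); // memory needed
        }
    }
}
```

The `counts` map is exactly the kind of "information beyond a single record" the slide describes; Kafka Streams manages this for you in a state store.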
  • 6. GroupBy Example public static void main(String[] args) { int counter = 0; int sendInterval = 15; Map<String, Integer> groupByCounts = new HashMap<>(); try(..consumer = new KafkaConsumer<>(consumerProperties()); ..producer = new KafkaProducer<>(producerProperties())){ consumer.subscribe(Arrays.asList("A","B"));
  • 7. GroupBy Example while (true) { ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(5)); for (ConsumerRecord<String, String> record : records) { String key = record.key(); Integer count = groupByCounts.get(key); if (count == null) { count = 0; } count += 1; groupByCounts.put(key, count); }
  • 8. GroupBy Example while (true) { ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(5)); for (ConsumerRecord<String, String> record : records) { String key = record.key(); Integer count = groupByCounts.get(key); if (count == null) { count = 0; } count += 1; groupByCounts.put(key, count); }
  • 9. GroupBy Example if(counter++ % sendInterval == 0) { for(Entry<String, Integer> groupedEntry:groupByCounts.entrySet()){ ProducerRecord<String, Integer> producerRecord = new ProducerRecord<>("group-by-counts", groupedEntry.getKey(), groupedEntry.getValue()); producer.send(producerRecord); } consumer.commitSync(); }
  • 10. GroupBy Example if(counter++ % sendInterval == 0) { for(Entry<String, Integer> groupedEntry:groupByCounts.entrySet()){ ProducerRecord<String, Integer> producerRecord = new ProducerRecord<>("group-by-counts", groupedEntry.getKey(), groupedEntry.getValue()); producer.send(producerRecord); } consumer.commitSync(); }
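The counting logic spread across slides 6-10 can be assembled into one stdlib-only sketch, with the Kafka consumer and producer replaced by a plain list so the state-keeping stands alone (the surrounding poll/send/commit plumbing is omitted on purpose):

```java
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class ManualGroupBy {

    public static Map<String, Integer> countByKey(List<String> keys) {
        Map<String, Integer> groupByCounts = new HashMap<>();
        for (String key : keys) {
            // Same null-check-then-increment pattern as the slide
            Integer count = groupByCounts.get(key);
            if (count == null) {
                count = 0;
            }
            count += 1;
            groupByCounts.put(key, count);
        }
        return groupByCounts;
    }

    public static void main(String[] args) {
        // Stand-in for record keys polled from topics "A" and "B"
        List<String> keys = Arrays.asList("ENERGY", "FINANCE", "ENERGY");
        // This map is what the real app periodically publishes downstream
        System.out.println(countByKey(keys));
    }
}
```

Even reduced to this core, the manual version still leaves you owning the map, the flush interval, and the commit semantics, which is the boilerplate the Streams DSL on the next slides removes.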
  • 11. 11 Streams GroupBy ... stream = streamBuilder.stream(Arrays.asList("A", "B")) stream.groupByKey() .count() .toStream() .to("output-topic", Produced.with(Serdes.String(), Serdes.Long()))
  • 12. 12 Streams GroupBy ... stream = streamBuilder.stream(Arrays.asList("A", "B")) stream.groupByKey() .count() .toStream() .to("output-topic", Produced.with(Serdes.String(), Serdes.Long()))
  • 13. 13 Streams GroupBy ... stream = streamBuilder.stream(Arrays.asList("A", "B")) stream.groupByKey() .count() .toStream() .to("output-topic", Produced.with(Serdes.String(), Serdes.Long()))
  • 14. 14 Stateful Stream Processing Streams Stateful Operations • Joins • Windowing operations • Aggregation/Reduce Using any of these operations, Streams creates a state store
  • 15. 15 Making Streams Results Queryable Kafka Streams Application KAFKA External Application / REST Service
  • 16. 16 Making Streams Results Queryable Kafka Streams Application KAFKA External Application / REST Service Database
  • 17. 17 Making Streams Results Queryable Kafka Streams Application KAFKA External Application / REST Service Database
  • 18. 18 Making Streams Queryable stream.groupByKey() .count() .toStream() .to("output-topic", Produced.with(Serdes.String(), Serdes.Long())) .. consumer.poll(Duration.ofSeconds(5)); for (ConsumerRecord<String, String> record : records) { someService.save(record.key(), record.value()) .. } ..
  • 19. 19 Making Streams State Directly Queryable ... stream = streamBuilder.stream(Arrays.asList("A", "B")) stream.groupByKey() .count(Materialized.as("count-store")) .toStream() .to("output-topic", Produced.with(Serdes.String(), Serdes.Long()))
  • 20. 20 Making Streams State Directly Queryable Properties props = new Properties(); props.put(StreamsConfig.APPLICATION_ID_CONFIG, "ks-interactive-stock-analysis-appid"); props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); props.put(StreamsConfig.APPLICATION_SERVER_CONFIG, host+":"+port); ...
  • 21. 21 Making Streams State Directly Queryable Kafka Streams Application KAFKA Embedded RPC
  • 22. 22 What's with APPLICATION_SERVER_CONFIG? • A single Streams instance doesn't contain all keys • Streams will query other instances for store misses • A single Streams instance can act as a proxy for all instances
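Why a single instance can find the right host: records are routed to partitions by hashing the key, and each partition's state store lives on exactly one instance, so hashing the key again identifies the owner. A stdlib-only sketch (real Kafka uses murmur2 on the serialized key, and the partition-to-host assignment below is hypothetical):

```java
import java.util.Map;

public class KeyRouting {

    // Stand-in for Kafka's partitioner: hash the key, mod the partition count
    public static int partitionFor(String key, int numPartitions) {
        return (key.hashCode() & 0x7fffffff) % numPartitions;
    }

    // Hypothetical partition -> host assignment for a two-instance deployment
    public static String hostFor(String key, Map<Integer, String> assignment) {
        return assignment.get(partitionFor(key, assignment.size()));
    }

    public static void main(String[] args) {
        Map<Integer, String> assignment =
                Map.of(0, "hostA:4567", 1, "hostB:4568");
        for (String key : new String[]{"ENERGY", "FINANCE"}) {
            System.out.println(key + " -> " + hostFor(key, assignment));
        }
    }
}
```

In a real application you never do this hashing yourself; `KafkaStreams#metadataForKey` (shown later in the deck) performs the lookup, using APPLICATION_SERVER_CONFIG values as the host metadata.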
  • 23. 24 Making Streams Results Queryable Streams app “A” Host = hostA:4567 KAFKA Streams app “B” Host = hostB:4568
  • 24. 25 Making Streams Results Queryable Streams app “A” Host = hostA:4567 Metadata -> hostB:4568 KAFKA Streams app “B” Host = hostB:4568 Metadata -> hostA:4567
  • 25. 26 Topic Partitions and Streams Tasks Streams app “A” Host = hostA:4567 Streams app “B” Host = hostB:4568 State Store State Store Topic with four partitions Four partitions are converted to 4 tasks so each streams application is assigned 2 partitions/tasks
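The slide's partition/task arithmetic can be sketched as follows (round-robin assignment is an assumption for illustration; the real Streams assignor is more involved, but the ratio is the same: four tasks over two instances gives two each):

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class TaskAssignment {

    // One task per partition, spread round-robin over the instances
    public static Map<String, List<Integer>> assign(int numPartitions,
                                                    List<String> instances) {
        Map<String, List<Integer>> tasks = new HashMap<>();
        for (String instance : instances) {
            tasks.put(instance, new ArrayList<>());
        }
        for (int p = 0; p < numPartitions; p++) {
            tasks.get(instances.get(p % instances.size())).add(p);
        }
        return tasks;
    }

    public static void main(String[] args) {
        // Four partitions, two Streams instances -> two tasks (and state
        // store shards) per instance
        System.out.println(assign(4, List.of("hostA:4567", "hostB:4568")));
    }
}
```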
  • 26. 27 Making Streams Results Queryable Streams app "A" Host = hostA:4567 Streams app "B" Host = hostB:4568 State Store State Store {"ENERGY":"10000"} written to partition 0 assigned to App A {"FINANCE":"11000"} written to partition 1 assigned to App B
  • 27. 28 Making Streams Results Queryable Streams app "A" Host = hostA:4567 Streams app "B" Host = hostB:4568 State Store State Store {"ENERGY":"10000"} written to partition 0 assigned to App A {"FINANCE":"11000"} written to partition 1 assigned to App B http://hostA:4567?key=FINANCE
  • 28. 29 Example of a Streams RPC Kafka Streams Application KAFKA { JS } Demo Time!
  • 29. 30 Embedding the Web Server KafkaStreams kafkaStreams = new KafkaStreams(builder.build(), streamsConfig); InteractiveQueryServer queryServer = new InteractiveQueryServer(kafkaStreams, hostInfo);
  • 30. 31 Embedding the Web Server . kafkaStreams.setStateListener(((newState, oldState) -> { if (newState == KafkaStreams.State.RUNNING && oldState == KafkaStreams.State.REBALANCING) { queryServer.setReady(true); } else if (newState != KafkaStreams.State.RUNNING) { queryServer.setReady(false); } }))
  • 31. 32 Embedding the Web Server . kafkaStreams.setStateListener(((newState, oldState) -> { if (newState == KafkaStreams.State.RUNNING && oldState == KafkaStreams.State.REBALANCING) { queryServer.setReady(true); } else if (newState != KafkaStreams.State.RUNNING) { queryServer.setReady(false); } }))
  • 32. 33 Embedding the Web Server public void init() { get("/window/:store/:key/:from/:to", (req, res) -> ready ? fetchFromWindowStore(req.params()) : STORES_NOT_ACCESSIBLE); get("/window/:store/:key", (req, res) -> ready ? fetchFromWindowStore(req.params()) : STORES_NOT_ACCESSIBLE); get("/kv/:store", (req, res) -> ready ? fetchAllFromKeyValueStore(req.params()) : STORES_NOT_ACCESSIBLE); get("/kv/:store/:local", (req, res) -> ready ? fetchAllFromLocalKeyValueStore(req.params()) : STORES_NOT_ACCESSIBLE); get("/session/:store/:key", (req, res) -> ready ? fetchFromSessionStore(req.params()) : STORES_NOT_ACCESSIBLE); get("/iq", (req, res) -> { res.redirect("interactiveQueriesApplication.html"); return ""; });
  • 33. 34 Embedding the Web Server fetchFromSessionStore(Map<String, String> params) { String store = params.get(STORE_PARAM); String key = params.get(KEY_PARAM); HostInfo storeHostInfo = getHostInfo(store, key); if (storeHostInfo.host().equals("unknown")) { return STORES_NOT_ACCESSIBLE; } if (dataNotLocal(storeHostInfo)) { return fetchRemote(storeHostInfo, "session", params); } ReadOnlySessionStore<String, CustomerTransactions> readOnlySessionStore = kafkaStreams.store(store, QueryableStoreTypes.sessionStore());
  • 34. 35 Embedding the Web Server getHostInfo(String storeName, String key) { StreamsMetadata metadata = kafkaStreams.metadataForKey(storeName, key, stringSerializer); return metadata.hostInfo(); }
  • 35. 36 Embedding the Web Server fetchFromSessionStore(Map<String, String> params) { String store = params.get(STORE_PARAM); String key = params.get(KEY_PARAM); HostInfo storeHostInfo = getHostInfo(store, key); if (storeHostInfo.host().equals("unknown")) { return STORES_NOT_ACCESSIBLE; } if (dataNotLocal(storeHostInfo)) { return fetchRemote(storeHostInfo, "session", params); } ReadOnlySessionStore<String, CustomerTransactions> readOnlySessionStore = kafkaStreams.store(store, QueryableStoreTypes.sessionStore());
  • 36. 37 Embedding the Web Server fetchFromSessionStore(Map<String, String> params) { String store = params.get(STORE_PARAM); String key = params.get(KEY_PARAM); HostInfo storeHostInfo = getHostInfo(store, key); if (storeHostInfo.host().equals("unknown")) { return STORES_NOT_ACCESSIBLE; } if (dataNotLocal(storeHostInfo)) { return fetchRemote(storeHostInfo, "session", params); } ReadOnlySessionStore<String, CustomerTransactions> readOnlySessionStore = kafkaStreams.store( store, QueryableStoreTypes.sessionStore());
  • 37. 38 Embedding the Web Server fetchFromSessionStore(Map<String, String> params) { String store = params.get(STORE_PARAM); String key = params.get(KEY_PARAM); HostInfo storeHostInfo = getHostInfo(store, key); if (storeHostInfo.host().equals("unknown")) { return STORES_NOT_ACCESSIBLE; } if (dataNotLocal(storeHostInfo)) { return fetchRemote(storeHostInfo, "session", params); } // Iterate over readOnlySessionStore and // store results in a list sessionResults return gson.toJson(sessionResults);
  • 38. 39 Client View Development <body> <h2>Kafka Streams Equities Dashboard Application</h2> <!-- Other div elements left out for clarity --> <div id="sessionDiv"> <h3 id="sessionHeader">Customer Session Equity Activity Table</h3> <table id="sessionTable"> <tr> <th>Customer Id</th> <th>Average Equity Transaction Spent Per Session</th> </tr> </table> </div> </body>
  • 39. 40 Client View Development <script> function loadIqTables() { $.getJSON("/kv/TransactionsBySector", function (response) { updateTable(response, $('#txnsTable')) $('#txnsHeader').animate({color:'red'},500).animate({color:'#CCCCCC'}, 500) }) updateTableWithList("/window/NumberSharesPerPeriod/", symbols, $('#stockTable'), $('#stockHeader')); updateTableWithList("/session/CustomerPurchaseSessions/", customers, $('#sessionTable'), $('#sessionHeader')) } setInterval(loadIqTables, 7000); </script>
  • 40. 41 Client View Development <script> function loadIqTables() { $.getJSON("/kv/TransactionsBySector", function (response) { updateTable(response, $('#txnsTable')) $('#txnsHeader').animate({color:'red'},500).animate({color:'#CCCCCC'}, 500) }) updateTableWithList("/window/NumberSharesPerPeriod/", symbols, $('#stockTable'), $('#stockHeader')); updateTableWithList("/session/CustomerPurchaseSessions/", customers, $('#sessionTable'), $('#sessionHeader')) } setInterval(loadIqTables, 7000); </script>
  • 42. 43 Security ReadOnlySessionStore<String, CustomerTransactions> readOnlySessionStore = kafkaStreams.store( store, QueryableStoreTypes.sessionStore()); try (KeyValueIterator<Windowed<String>, CustomerTransactions> iterator = readOnlySessionStore.fetch(key)) { while (iterator.hasNext()) { //Transform or mask data here and return sanitized //data } } return gson.toJson(sanitizedRecords);
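The masking step the security slide hints at ("transform or mask data here") can be sketched in isolation; the field name and masking rule below are illustrative, not from the deck:

```java
public class DataMasking {

    // Replace all but the last four characters before data leaves the service
    public static String maskAccount(String accountNumber) {
        if (accountNumber == null || accountNumber.length() <= 4) {
            return "****";
        }
        return "****" + accountNumber.substring(accountNumber.length() - 4);
    }

    public static void main(String[] args) {
        System.out.println(maskAccount("123456789"));
    }
}
```

Doing this inside the iterator loop, before serializing to JSON, means raw store values never reach the REST response.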
  • 43. 44 Summary Interactive Queries is a powerful abstraction that simplifies stateful stream processing. There are still cases for which an external database or storage layer might be a better fit.
  • 44. 45 Summary Kafka Streams in Action examples: https://github.com/bbejeck/kafka-streams-in-action/blob/master/src/main/java/bbejeck/webserver/InteractiveQueryServer.java Music example: https://github.com/confluentinc/examples/blob/master/kafka-streams/src/main/java/io/confluent/examples/streams/interactivequeries/kafkamusic/KafkaMusicExample.java Streaming Movie Ratings: https://github.com/confluentinc/demo-scene/tree/master/streams-movie-demo
  • 45. 46 Thanks! Stay in Touch! • https://slackpass.io/confluentcommunity • https://www.confluent.io/blog/ • Twitter @bbejeck • We are hiring! https://www.confluent.io/careers/
  • 46. KS19Meetup. CONFLUENT COMMUNITY DISCOUNT CODE 25% OFF* *Standard Priced Conference pass
  • 47. NOMINATE YOURSELF OR A PEER AT CONFLUENT.IO/NOMINATE

Editor's Notes

  1. This is me. I've worked at Confluent for 1.5 years on the Streams team, and I authored the book Kafka Streams in Action. Now let's get started! First let's go over what we are going to cover today
  2. This is me. I've worked at Confluent for 1.5 years on the Streams team, and I authored the book Kafka Streams in Action. Now let's get started! First let's go over what we are going to cover today
  3. A topology is a collection of processing nodes in a graph. A sub-topology is a collection of processing nodes connected by a common input topic. Relationship between tasks, threads, and state stores. Next let's take a look at life before Kafka Streams so we can get a sense of what Kafka Streams is.
  4. A topology is a collection of processing nodes in a graph. A sub-topology is a collection of processing nodes connected by a common input topic. Relationship between tasks, threads, and state stores. Next let's take a look at life before Kafka Streams so we can get a sense of what Kafka Streams is.
  5. Imagine you have a Kafka topic and you need to do a group-by count on it; without Kafka Streams you'd need to do some manual processing to achieve this. Here is the main method, setting up the consumer and producer and subscribing to two topics. This is more boilerplate work that needs to be done.
  6. Next you loop over the retrieved records, do a count by key, and store it in a HashMap. We can see this is an example of needing local state for stream processing. Again, this is boilerplate work with nothing to do with your business logic.
  7. Next you loop over the retrieved records and do a count by key. At this point we are doing the business logic; this is the part we care about: putting the record back in local state.
  8. Then, after a given number of retrievals, you iterate over the results and publish those grouped counts out to a topic for downstream users. Here we are manually keeping track of how often we want to emit records downstream. Not a complex application, but there are a handful of manual steps involved: managing the producer and consumer, deciding when records are emitted, handling commits, etc. Now let's take a look at how we'd solve the same problem in Kafka Streams.
  9. Then, after a given number of retrievals, you iterate over the results and publish those grouped counts out to a topic for downstream users. Not a complex application, but there are a handful of manual steps involved: managing the producer and consumer, deciding when records are emitted, handling commits, etc. Here we also commit only after we've sent records downstream; if a failure occurs beforehand, we'll re-process the data. Now let's take a look at how we'd solve the same problem in Kafka Streams.
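The counting logic these notes describe boils down to a simple keyed accumulation. A minimal plain-Java sketch of just that accumulation (no Kafka; a hypothetical list of record keys stands in for the consumed topic, and `Map.merge` replaces the null-check pattern in the original consumer loop):

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class GroupByCount {

    // Accumulate a count per key, mirroring the groupByCounts HashMap
    // maintained inside the consumer poll loop on the earlier slides.
    static Map<String, Integer> countByKey(List<String> keys) {
        Map<String, Integer> counts = new HashMap<>();
        for (String key : keys) {
            // merge: insert 1 if absent, otherwise add 1 to the existing count
            counts.merge(key, 1, Integer::sum);
        }
        return counts;
    }

    public static void main(String[] args) {
        // Hypothetical keys, as if polled from topics "A" and "B"
        List<String> keys = List.of("a", "b", "a", "c", "a", "b");
        System.out.println(countByKey(keys)); // {a=3, b=2, c=1}
    }
}
```

This is the state the talk is pointing at: the map must survive across poll loops, which is exactly what makes the manual version fragile and what Kafka Streams state stores manage for you.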
  10. This is a basic streams application we'll use as the base of our examples. First we create the stream from two topics, "A" and "B"; we're going to assume the data is coming in with keys. Then we groupByKey so we can count. Notice the Materialized.as, which allows us to name the state store. We then convert our update stream to a record stream, which allows us to write the result stream to the output topic. The Streams DSL gives you a lot of power and flexibility and is concise. This generates a topology: connected processing nodes. NEXT: how can we view the topology generated from this DSL code?
  14. Here the KGroupedStream returned by groupByKey is represented by the arrow going from the KStreamSource node to the KStreamAggregate node. So count creates a KStreamAggregate node, and we can see the associated store over to the right, created from the Materialized.as call.
  19. Now, if we want to view the topology created from our DSL code: build the Topology, then call describe and render it as a string. NEXT: now let's look at the string.
  21. This is how the string topology looks. Notice "Topologies" at the top, implying there can be multiple sub-topologies; here there is one. A sub-topology is created by a source node; we'll see more in a minute. Arrows pointing to the right show the next processor; arrows pointing left show where the processor received its records from. Notice our named store, input topics, and output topic. This is good information, but not very intuitive; we have options for rendering a graph. NEXT: let's take a look at how we can create a graph out of this string.
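As a rough illustration of the string format this note describes, the describe() output for a topology like the one in the talk looks roughly as follows (the generated node names and the store name are illustrative; actual names depend on the DSL calls and the Materialized.as value):

```
Topologies:
   Sub-topology: 0
    Source: KSTREAM-SOURCE-0000000000 (topics: [A, B])
      --> KSTREAM-AGGREGATE-0000000002
    Processor: KSTREAM-AGGREGATE-0000000002 (stores: [counts-store])
      --> KTABLE-TOSTREAM-0000000003
      <-- KSTREAM-SOURCE-0000000000
    Processor: KTABLE-TOSTREAM-0000000003 (stores: [])
      --> KSTREAM-SINK-0000000004
      <-- KSTREAM-AGGREGATE-0000000002
    Sink: KSTREAM-SINK-0000000004 (topic: output-topic)
      <-- KTABLE-TOSTREAM-0000000003
```

The `-->` lines name each node's downstream processors and the `<--` lines its upstream source, which is exactly the left/right arrow reading the note walks through.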
  22. Here the red code corresponds to the top node, KStreamSource-00000000000. The groupByKey call does not create a processing node; it creates an intermediate object you can use to create aggregation operations. So groupByKey is actually represented by the arrow sending records from the source to the KSTREAM-AGGREGATE node.
  29. Thanks for your time! Stay in touch, and use these resources to participate in the community. We have a book signing for Kafka Streams in Action at the Confluent booth at 4:45 PM today; stop by to pick up a signed copy and check out what's going on at Confluent.