The log is a simple idea
Messages are added at the end of the log
[Diagram: an append-only log, "Old" at the head, "New" at the tail where messages are appended]
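A minimal sketch of appending to the log with the Java producer, assuming a local broker at localhost:9092 and a hypothetical topic "events"; each send() appends a record at the "New" end of the topic's log:

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class AppendToLog {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // assumption: local broker
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // Each send appends at the end of the log for topic "events" (a hypothetical name)
            producer.send(new ProducerRecord<>("events", "key-1", "first message"));
            producer.send(new ProducerRecord<>("events", "key-2", "second message"));
        } // close() flushes any buffered records
    }
}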
Consumers have a position all of their own
[Diagram: the same log, Old to New; three consumers, Robin, Viktor, and Ricardo, each scan it from their own position]
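A minimal sketch of those independent positions, assuming the same hypothetical "events" topic: consumers in different groups each track their own committed offset, so each one scans the log from its own place.

import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class IndependentPositions {
    static KafkaConsumer<String, String> consumerFor(String groupId) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // assumption: local broker
        props.put(ConsumerConfig.GROUP_ID_CONFIG, groupId); // each group keeps its own position
        props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest"); // start from "Old" if no position yet
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        return new KafkaConsumer<>(props);
    }

    public static void main(String[] args) {
        // Two consumers in different groups read the same log without affecting each other
        try (KafkaConsumer<String, String> robin = consumerFor("robin");
             KafkaConsumer<String, String> viktor = consumerFor("viktor")) {
            robin.subscribe(Collections.singletonList("events"));
            viktor.subscribe(Collections.singletonList("events"));
            for (ConsumerRecord<String, String> r : robin.poll(Duration.ofSeconds(5))) {
                System.out.printf("robin sees offset %d: %s%n", r.offset(), r.value());
            }
            for (ConsumerRecord<String, String> r : viktor.poll(Duration.ofSeconds(5))) {
                System.out.printf("viktor sees offset %d: %s%n", r.offset(), r.value());
            }
        }
    }
}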
Shard data to get scalability
[Diagram: Producers (1), (2), and (3) send messages to different partitions; the partitions live on different machines across a cluster]
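A minimal sketch of that sharding, assuming the hypothetical "events" topic has several partitions: the default partitioner hashes each record's key, so the same key always lands on the same partition while different keys spread across the cluster.

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.clients.producer.RecordMetadata;
import org.apache.kafka.common.serialization.StringSerializer;

public class ShardedProduce {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // assumption: local broker
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            for (String key : new String[] {"A", "B", "C"}) {
                // The default partitioner hashes the key to pick a stable partition
                RecordMetadata meta = producer.send(new ProducerRecord<>("events", key, "payload")).get();
                System.out.printf("key %s -> partition %d%n", key, meta.partition());
            }
        }
    }
}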
import java.time.Duration;
import java.util.Arrays;
import java.util.HashMap;
import java.util.Map;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.producer.KafkaProducer;

// in-memory store, not persistent: counts are lost if this process restarts
Map<String, Integer> groupByCounts = new HashMap<>();
try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(consumerProperties());
     KafkaProducer<String, Integer> producer = new KafkaProducer<>(producerProperties())) {
    consumer.subscribe(Arrays.asList("A", "B")); // read from topics A and B
    while (true) { // consumer poll loop
        ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(5));
        for (ConsumerRecord<String, String> record : records) {
            String key = record.key();
            Integer count = groupByCounts.get(key);
            if (count == null) {
                count = 0;
            }
            count += 1; // count records grouped by key
            groupByCounts.put(key, count);
        }
    }
} // try-with-resources closes the consumer and producer
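The snippet calls consumerProperties() and producerProperties(), which the deck doesn't show (the producer is opened but unused in this excerpt, presumably to emit counts downstream). A minimal sketch of what those helpers might return, assuming a local broker and a hypothetical group id:

import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.common.serialization.IntegerSerializer;
import org.apache.kafka.common.serialization.StringDeserializer;
import org.apache.kafka.common.serialization.StringSerializer;

// members of the same class as the poll loop above
static Properties consumerProperties() {
    Properties props = new Properties();
    props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // assumption: local broker
    props.put(ConsumerConfig.GROUP_ID_CONFIG, "group-by-counter"); // hypothetical group id
    props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
    props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
    props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
    return props;
}

static Properties producerProperties() {
    Properties props = new Properties();
    props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
    props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
    props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, IntegerSerializer.class.getName()); // producer is <String, Integer>
    return props;
}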
The Confluent Meetup Hub
cnfl.io/meetup-hub
All upcoming meetups, new event email alerts, videos of past meetups, and slides from the talks.
Confluent Community Slack
A vibrant community of over 16,000 members. Come along and discuss Apache Kafka and Confluent Platform on dedicated channels including #ksqlDB, #connect, #clients, and more.
http://cnfl.io/slack
Free eBooks
Designing Event-Driven Systems, Ben Stopford
Kafka: The Definitive Guide, Neha Narkhede, Gwen Shapira, and Todd Palino
Making Sense of Stream Processing, Martin Kleppmann
I ❤ Logs, Jay Kreps
http://cnfl.io/book-bundle