When it absolutely, positively has to be there
Reliability Guarantees in Apache Kafka
@jeffholoman @gwenshap
Gwen Shapira, Confluent
Jeff Holoman, Cloudera
Kafka
 High Throughput
 Low Latency
 Scalable
 Centralized
 Real-time
“If data is the lifeblood of high
technology, Apache Kafka is the
circulatory system”
--Todd Palino
Kafka SRE @ LinkedIn
If Kafka is a critical piece of our pipeline
 Can we be 100% sure that our data will get there?
 Can we lose messages?
 How do we verify?
 Whose fault is it?
Distributed Systems
 Things Fail
 Systems are designed to
tolerate failure
 We must expect failures
and design our code and
configure our systems to
handle them
Data Flow
[Diagram: Client machine — Application Thread → Kafka Client → O/S Socket Buffer → NIC → Network → Broker machine — NIC → O/S Socket Buffer → Broker → Page Cache → Disk. Data is sent asynchronously and an ack or exception comes back via callback; ✗ marks every point along the path where a failure can lose data.]
Data Flow
[Diagram: The consume side of the same path — Broker machine: Disk → Page Cache (a read can miss the cache) → Broker → O/S Socket Buffer → NIC → Network → Client machine: NIC → O/S Socket Buffer → Kafka Client → Application Thread. Consumed offsets are stored in ZooKeeper or in Kafka; ✗ marks the failure points.]
Replication is your friend
 Kafka protects against failures by replicating data
 The unit of replication is the partition
 One replica is designated as the Leader
 Follower replicas fetch data from the leader
 The leader holds the list of “in-sync” replicas
Replication and ISRs
[Diagram: Producers writing to partition leaders spread across Broker 100, Broker 101, and Broker 102]
Topic: my_topic   Partitions: 3   Replicas: 3
Partition: 0   Leader: 100   ISR: 101,102
Partition: 1   Leader: 101   ISR: 100,102
Partition: 2   Leader: 102   ISR: 101,100
ISR
 Two things make a replica in-sync:
- Lag behind the leader
- replica.lag.time.max.ms – replica that didn’t fetch or is behind
- replica.lag.max.messages – will go away in 0.9
- Connection to Zookeeper
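As an illustrative sketch (not the broker's actual code), the in-sync decision can be modeled as two checks: recent fetch progress and a live ZooKeeper connection. The 10-second lag bound below is an assumed stand-in for replica.lag.time.max.ms:

```python
# Toy model of the in-sync check; real brokers track fetch progress per
# follower. replica.lag.time.max.ms is modeled by LAG_MAX_MS (10s here,
# an assumed value for the sketch).
LAG_MAX_MS = 10_000

def is_in_sync(last_caught_up_ms, now_ms, connected_to_zk):
    """A replica stays in the ISR only if it is connected and not too stale."""
    if not connected_to_zk:
        return False
    return (now_ms - last_caught_up_ms) <= LAG_MAX_MS

now = 1_000_000
print(is_in_sync(now - 2_000, now, True))    # fetched recently: True
print(is_in_sync(now - 60_000, now, True))   # a minute behind: False
print(is_in_sync(now - 1_000, now, False))   # lost ZooKeeper: False
```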
Terminology
 Acked
- Producers will not retry sending
- Depends on the producer setting
 Committed
- Consumers can read it
- Only once the message got to all ISRs
 replica.lag.time.max.ms
- how long can a dead replica prevent consumers from reading?
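The "committed" rule can be pictured with a small sketch: the highest readable offset (the high watermark) is bounded by the slowest in-sync replica. This is a simplified model, not broker code:

```python
# Simplified model: a message is "committed" once every in-sync replica
# has it, so the high watermark is the minimum log end offset in the ISR.
def high_watermark(isr_log_end_offsets):
    """Offsets below this value are committed and readable by consumers."""
    return min(isr_log_end_offsets.values())

# The leader and one follower wrote up to offset 102, but a slower
# in-sync follower is still at 101:
print(high_watermark({"r1": 102, "r2": 102, "r3": 101}))  # 101
```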
Replication
 Acks = all
- only waits for in-sync replicas to reply
[Diagram: Replicas 1, 2, and 3 each hold message 100 — all in sync]
Replication
 Replica 3 stopped replicating for some reason
[Diagram: Replicas 1 and 2 now hold messages 100 and 101; Replica 3 still holds only 100. Message 100 is acked under acks = all and “committed”; message 101 is acked under acks = 1 but not “committed”.]
Replication
 One replica drops out of ISR, or goes offline
 All messages are now acked and committed
[Diagram: Replicas 1 and 2 each hold messages 100 and 101]
Replication
 2nd replica drops out, or is offline
[Diagram: Only Replica 1 remains, holding messages 100 through 104]
Replication
 Now we’re in trouble
[Diagram: The last remaining replica fails ✗]
Replication
[Diagram: A stale replica comes back — Replica 3 holds only message 100, Replica 2 holds 100 and 101 — yet all the messages written on the failed leader were “acked” and “committed”.]
So what to do
 Disable Unclean Leader Election
- unclean.leader.election.enable = false
 Set replication factor
- default.replication.factor = 3
 Set minimum ISRs
- min.insync.replicas = 2
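In broker configuration terms, these recommendations are a server.properties fragment (only the settings from this slide are shown; a real file has many more):

```properties
# server.properties — reliability settings from this slide
unclean.leader.election.enable=false
default.replication.factor=3
min.insync.replicas=2
```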
Warning
 min.insync.replicas is applied at the topic level.
 Topics created before the server-level change must have their configuration altered manually
 Before 0.9.0 the topic must always be altered manually (KAFKA-2114)
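For an existing topic, the override can be applied from the command line; the sketch below uses the 0.8.x-era kafka-topics tool, and the topic name and ZooKeeper address are placeholders:

```bash
bin/kafka-topics.sh --zookeeper zkhost:2181 --alter \
  --topic my_topic --config min.insync.replicas=2
```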
Replication
 Replication = 3
 Min ISR = 2
[Diagram: Replicas 1, 2, and 3 each hold message 100]
Replication
 One replica drops out of ISR, or goes offline
[Diagram: Replicas 1 and 2 each hold messages 100 and 101]
Replication
 2nd replica fails out, or is out of sync
[Diagram: Only Replica 1 remains, holding messages 100 through 104 — with min ISR = 2 no longer met, new messages buffer in the producer]
Producer Internals
 Producer sends batches of messages to a buffer
[Diagram: Application threads call send(); messages M0–M3 accumulate into Batches 1–3 in the buffer; the sender drains batches to the broker; on response, a failed batch may be retried, then the Future is updated and the callback is invoked with metadata or an exception]
Basics
 Durability can be configured with the producer configuration
request.required.acks
- 0 The message is written to the network (buffer)
- 1 The message is written to the leader
- all The producer gets an ack after all ISRs receive the data; the message is
committed
 Make sure the producer doesn’t just throw messages away!
- block.on.buffer.full = true
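As a producer configuration fragment (property names as used on this slide, i.e. the 0.8-era producer; the newer Java producer spells the first setting acks):

```properties
# Producer durability settings from this slide
request.required.acks=all
block.on.buffer.full=true
```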
 All calls are non-blocking async
 2 Options for checking for failures:
- Immediately block for response: send().get()
- Do followup work in Callback, close producer after error threshold
- Be careful about buffering these failures. Future work? KAFKA-1955
- Don’t forget to close the producer! producer.close() will block until in-flight txns
complete
 retries (producer config) defaults to 0
 message.send.max.retries (server config) defaults to 3
 In flight requests could lead to message re-ordering
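Why can in-flight requests re-order messages? A toy simulation (a hypothetical helper, not client code) shows it: with more than one request in flight, a failed-and-retried first batch lands after a later batch that succeeded:

```python
# Toy simulation: batches are sent in order, but a batch that fails on its
# first attempt is retried after later in-flight batches have succeeded.
def deliver(batches, fails_once):
    log, retries = [], []
    for batch in batches:
        if batch in fails_once:
            retries.append(batch)   # first attempt failed; retry later
        else:
            log.append(batch)
    log.extend(retries)             # retries land after later batches
    return log

print(deliver(["A", "B"], fails_once={"A"}))  # ['B', 'A'] — re-ordered!
```

With the number of in-flight requests limited to 1, "B" would not be sent until "A" succeeded, preserving order at the cost of throughput.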
Consumer
 Two choices for Consumer API
- Simple Consumer
- High Level Consumer
Consumer Offsets
[Diagram sequence: a topic with six partitions, consumed by one high-level consumer running four threads (Thread 1–4):
- Commit? When is the right time to commit an offset?
- Committing as soon as a message arrives risks loss if a thread fails before the data is processed or persisted
- Auto-commit enabled: offsets are committed for us, but a thread can fail ✗ after an auto-commit that covered unpersisted data
- The consumer then picks up at the last committed offset, leaving gaps in the data
- Commit: an offset commit issued from one thread commits offsets for all of the consumer’s threads
- Final frame: auto-commit DISABLED, four separate consumers (Consumer 1–4), each committing after processing]
Consumer Recommendations
 Set auto.commit.enable = false
 Manually commit offsets after the message data is processed / persisted
consumer.commitOffsets();
 Run each consumer in its own thread
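The commit-after-processing recommendation gives at-least-once behavior, which a small self-contained simulation (not consumer API code) makes concrete: a crash between processing and commit re-delivers a message but never skips one:

```python
# Simulation of commit-after-processing: the offset is committed only once
# the message is processed, so a crash re-delivers but never loses data.
def consume(messages, committed, process, crash_before_commit_at=None):
    for offset in range(committed["offset"], len(messages)):
        process(messages[offset])
        if offset == crash_before_commit_at:
            return                        # crashed before committing this offset
        committed["offset"] = offset + 1  # manual commit, after processing

processed = []
committed = {"offset": 0}
# First run crashes right after processing offset 1, before committing it:
consume(["m0", "m1", "m2"], committed, processed.append, crash_before_commit_at=1)
# On restart we resume from the last committed offset, so "m1" repeats:
consume(["m0", "m1", "m2"], committed, processed.append)
print(processed)  # ['m0', 'm1', 'm1', 'm2'] — a duplicate, but nothing lost
```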
New Consumer!
 No Zookeeper! At all!
 Rebalance listener
 Commit:
- Commit
- Commit async
- Commit(offset)
 Seek(offset)
Exactly Once Semantics
 At most once is easy
 At least once is not bad either – commit after 100% sure data is safe
 Exactly once is tricky
- Commit data and offsets in one transaction
- Idempotent producer
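The "commit data and offsets in one transaction" option can be sketched as follows — an illustrative stand-in in which the results store and the offset live together and are advanced as a unit, so replays are detected:

```python
# Sketch: results and the consumer offset are kept in the same store and
# updated together, so a replayed message (same offset) can be skipped.
class TransactionalSink:
    def __init__(self):
        self.results = []
        self.offset = 0   # stored next to the data, advanced with it

    def apply(self, offset, message):
        if offset < self.offset:
            return                     # replay of an already-applied offset
        # In a real system the next two writes would be one transaction.
        self.results.append(message.upper())
        self.offset = offset + 1

sink = TransactionalSink()
for off, msg in [(0, "a"), (1, "b"), (1, "b"), (2, "c")]:  # offset 1 replayed
    sink.apply(off, msg)
print(sink.results)  # ['A', 'B', 'C'] — exactly-once output despite the replay
```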
Monitoring for Data Loss
 Monitor for producer errors – watch the retry numbers
 Monitor consumer lag – MaxLag or via offsets
 Standard schema:
- Each message should contain timestamp and originating service and host
 Each producer can report message counts and offsets to a special topic
 “Monitoring consumer” reports message counts to another special topic
 “Important consumers” also report message counts
 Reconcile the results
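The reconciliation step can be as simple as diffing per-interval counts (a sketch; the interval keys and numbers are made up):

```python
# Sketch of reconciliation: compare counts reported by producers with
# counts reported by the monitoring consumer, per time interval.
def find_loss(produced_counts, consumed_counts):
    """Return intervals where fewer messages were consumed than produced."""
    return {t: produced_counts[t] - consumed_counts.get(t, 0)
            for t in produced_counts
            if consumed_counts.get(t, 0) < produced_counts[t]}

produced = {"12:00": 1000, "12:01": 1200}
consumed = {"12:00": 1000, "12:01": 1100}
print(find_loss(produced, consumed))  # {'12:01': 100} — investigate!
```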
Be Safe, Not Sorry
 acks = all
 block.on.buffer.full = true
 retries = MAX_INT
 ( max.in.flight.requests.per.connection = 1 )
 producer.close()
 Replication factor >= 3
 min.insync.replicas = 2
 unclean.leader.election.enable = false
 auto.commit.enable = false
 Commit after processing
 Monitor!


Editor's Notes

  • #7–8 Low Level Diagram: Not talking about producer / consumer design yet; maybe this is too low-level though. Show diagram of network send -> OS socket -> NIC -> ---- NIC -> OS socket buffer -> socket -> internal message flow / socket server -> response back to client -> how writes get persisted to disk including OS buffers, async writes etc. Then overlay places where things can fail.
  • #10 Highlight boxes with different color
  • #30 This conceptually is our high-level consumer. In this diagram we have a topic with 6 partitions, and an application running 4 threads.
  • #31 Kafka provides two different paradigms for committing offsets. The first is “auto-committing”, more on this later. The second is to manually commit offsets in your application. But what’s the right time? If we commit offsets as soon as we actually receive a message, we expose ourselves to data loss, as we could have a process, machine or thread failure before we persist or otherwise process our data.
  • #32 So what we’d really like to do is only commit offsets after we’ve done some amount of processing and / or persistence on the data. Typical situations would be, after producing a new message to kafka, or after writing a record to HDFS.
  • #33 So let’s say we have auto-commit enabled, and we are chugging along, counting on the consumer to commit our offsets for us. This is great because we don’t have to code anything, and don’t have to think about the frequency of commits and the impact that might have on our throughput. Life is good. But now we’ve lost a thread or a process. And we don’t really know where we are in the processing, because the last auto-commit committed stuff that we hadn’t actually written to disk.
  • #34 So now we’re in a situation where we think we’ve read all of our data but we will have gaps in data. Note the same risk applies if we lose a partition or broker and get a new leader. OR
  • #35 If we add more consumers in the same group and we rebalance the partition assignment. Imagine a scenario where you are hanging in your processing, or there’s some other reason that you have to exit before persisting to disk, the new consumer added will just pick up from the last committed offset. Yes these are corner cases, but we are talking about things going wrong, and you should consider these cases.
  • #36 Ok so don’t use autocommit if you care about this sort of thing.
  • #37 One other thing to note, is that if you are running some code akin to the ConsumerGroup Example that’s on the wiki, and you are running one consumer with multiple threads, when you issue a commit from one thread, it will commit across all threads. So this isn’t great for all of the reasons that we mentioned a few moments ago.
  • #38 So disable auto commit. Commit after your processing, and run the high level consumer in its own thread.
  • #39 To cement this: Note that a lot of this changes in the next release with the new Consumer, but maybe we will revisit that once it is released!