[Diagram: Broker-1, Broker-2, Broker-3 backed by a data store for longer retention]
• Topics: categories in which the message feed is maintained.
• Producers: processes that publish messages to a Kafka topic.
• Consumers: processes that subscribe to topics and process the feed of published messages.
• Brokers: servers which form a Kafka cluster and act as a data transport channel between producers and consumers.
Simplified View of a Kafka System
[Diagram: Producer 1 and Producer 2 → Zookeeper-coordinated Broker 1, Broker 2, Broker 3 → Consumer 1, Consumer 2, Consumer 3]
Partitions
• Each partition is an ordered, immutable sequence of messages.
• Messages are continuously appended to it.
• Each message in a partition is assigned a unique sequential id number called the offset.
• Any message in a partition can be accessed using this offset.
• Partitioning serves two purposes:
A topic can be divided into multiple partitions, and each partition can be on a different server.
A consumer can consume from multiple partitions at the same time (while maintaining the per-partition ordering guarantee).
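The partition-as-log idea above can be sketched as a toy append-only list whose indices are the offsets. This is a conceptual model only, not Kafka's actual on-disk segment format:

```python
# Toy model of a topic partition: an append-only list where each
# message's index is its offset. Conceptual sketch only, not Kafka's
# real storage format (assumption: messages are raw bytes).
class Partition:
    def __init__(self):
        self._log = []                 # ordered, immutable sequence

    def append(self, message: bytes) -> int:
        self._log.append(message)
        return len(self._log) - 1      # the new message's offset

    def read(self, offset: int) -> bytes:
        return self._log[offset]       # random access by offset

p = Partition()
off_a = p.append(b"first")
off_b = p.append(b"second")
assert (off_a, off_b) == (0, 1)        # offsets are sequential ids
assert p.read(0) == b"first"           # any message reachable by its offset
```

Because the log is append-only, ordering within a partition falls out of the data structure itself.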
Distribution & Replication
• The partitions of the log are distributed over the Kafka cluster.
• Each server handles data and requests for some number of partitions.
• Each partition is replicated for fault tolerance.
• Each partition has one server which acts as the leader.
• The leader handles all read and write requests for the partition.
• Followers passively replicate the leader.
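Distribution and replication can be sketched as spreading each partition's replica list across brokers, with the first replica acting as leader. Round-robin placement is a simplification here; Kafka's real assignment algorithm is more involved:

```python
# Sketch: spread P partitions, each with replication factor N, over the
# brokers. Round-robin placement is an assumed simplification of
# Kafka's real replica-assignment logic.
def assign_replicas(brokers, num_partitions, replication_factor):
    assignment = {}
    for part in range(num_partitions):
        # first broker in the list acts as the partition's leader
        replicas = [brokers[(part + i) % len(brokers)]
                    for i in range(replication_factor)]
        assignment[part] = replicas
    return assignment

layout = assign_replicas(["b1", "b2", "b3"],
                         num_partitions=3, replication_factor=2)
# partition 0: leader b1, follower b2 -- losing any single broker
# (N-1 = 1 failure) still leaves a full copy of every partition
assert layout[0] == ["b1", "b2"]
```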
Producers
• Producers publish data to the topics of their choice.
• A producer can choose the topic partition to which a message should be assigned.
• The partition can be selected in a round-robin manner to balance load.
• Kafka doesn't care about the serialization format: all it needs is a byte array.
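Producer-side partition selection can be sketched as: hash the key when one is given, otherwise rotate round-robin. Kafka's real default partitioner uses murmur2 hashing; the names below are illustrative:

```python
import itertools

# Sketch of producer-side partition selection. Kafka's actual default
# partitioner uses murmur2 hashing; this is a conceptual stand-in.
_rr = itertools.count()

def choose_partition(key, num_partitions):
    if key is None:
        return next(_rr) % num_partitions  # round-robin for keyless messages
    return hash(key) % num_partitions      # same key -> same partition

# Kafka itself only sees byte arrays, so the caller serializes first:
payload = "order-42".encode("utf-8")
assert choose_partition(None, 4) == 0      # round-robin: 0, 1, 2, ...
assert choose_partition(None, 4) == 1
assert choose_partition(b"user-1", 4) == choose_partition(b"user-1", 4)
```

Keyed messages always land on the same partition, which is what preserves per-key ordering.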
Consumers
• Other messaging systems basically follow two models: queuing and publish-subscribe.
• Kafka uses the concept of a consumer group, which generalizes both these models.
• Consumers label themselves with a consumer group name.
• Each message published to a topic is delivered to one consumer instance within each subscribing consumer group.
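The "one instance per subscribing group" rule can be sketched as below: delivery acts like a queue inside a group and like publish-subscribe across groups. Round-robin within a group is an assumption; real Kafka assigns whole partitions to instances:

```python
from itertools import cycle

# Toy delivery rule: each message goes to exactly one instance of every
# subscribing group. Round-robin inside a group is an assumed
# simplification; real Kafka assigns whole partitions to instances.
def deliver(message, groups):
    """groups: {group_name: round-robin iterator over its instances}"""
    return {name: next(instances) for name, instances in groups.items()}

groups = {"billing": cycle(["billing-1", "billing-2"]),
          "audit":   cycle(["audit-1"])}

d1 = deliver("msg-1", groups)
d2 = deliver("msg-2", groups)
assert d1 == {"billing": "billing-1", "audit": "audit-1"}  # one per group
assert d2["billing"] == "billing-2"  # queue behaviour inside a group
assert d2["audit"] == "audit-1"      # broadcast behaviour across groups
```

With one group per consumer you get pure publish-subscribe; with one group holding all consumers you get a pure queue.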
Message Retention
• Unlike other messaging systems, messages are not deleted on consumption.
• Messages are retained for a configurable period of time, after which they are deleted (even if they have not been consumed).
• Consumers can re-consume any chunk of older messages using the message offset.
• Kafka's performance is effectively constant with respect to data size, so a large volume of retained data is not an issue.
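Time-based retention plus re-consumption by offset can be sketched as follows; the class and field names are illustrative, not Kafka's:

```python
# Sketch of time-based retention: entries older than the retention
# period are dropped regardless of whether anyone consumed them, but
# anything still retained can be re-read from its offset. The names
# here are illustrative, not Kafka's.
class RetainedLog:
    def __init__(self, retention_secs):
        self.retention = retention_secs
        self.entries = []              # (offset, timestamp, payload)
        self.next_offset = 0

    def append(self, payload, now):
        self.entries.append((self.next_offset, now, payload))
        self.next_offset += 1

    def expire(self, now):
        self.entries = [e for e in self.entries
                        if now - e[1] <= self.retention]

    def read_from(self, offset):
        return [p for (o, _, p) in self.entries if o >= offset]

log = RetainedLog(retention_secs=60)
log.append(b"old", now=0)
log.append(b"new", now=50)
log.expire(now=100)                  # "old" is past retention: deleted
assert log.read_from(0) == [b"new"]  # surviving messages re-readable by offset
```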
Guarantees
1. Ordering guarantee
• Messages sent by a producer to a particular topic partition will be appended in the order they are sent.
• A consumer instance sees messages in the order they are stored in the log.
2. At least once delivery
3. Fault tolerance
For a topic with replication factor N, up to N-1 server failures will not cause
any data loss.
4. No corruption of data:
• Over the network
• On the disk
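The at-least-once guarantee above implies possible duplicates, which a short sketch makes concrete: the producer retries until it sees an acknowledgement, so a message whose ack (not the write) was lost gets written twice:

```python
# Why "at least once" can mean duplicates: the producer retries until
# acknowledged, so a write whose ack was lost in transit is repeated.
# ack_lost_flags is a test hook standing in for network failures.
def send_with_retries(broker_log, message, ack_lost_flags):
    for ack_lost in ack_lost_flags:
        broker_log.append(message)   # the write itself succeeds
        if not ack_lost:             # ...but the ack may be lost
            return                   # producer stops once acked

log = []
send_with_retries(log, "m1", ack_lost_flags=[True, False])  # first ack lost
assert log == ["m1", "m1"]           # delivered at least once -> duplicate
```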
Misc Design features
1. Stateless broker
• Each consumer maintains its own state(offset)
2. Load balancing
3. Asynchronous send
4. Push/pull model instead of push/push (producers push to brokers; consumers pull from brokers)
5. Consumer Position
6. Offline Data Load
7. Simple API
8. Low Overhead
9. Batch send and receive
10. No message caching in JVM
11. Rely on file system buffering
• mostly sequential access patterns
12. Zero-copy transfer: file->socket
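The zero-copy file→socket path can be demonstrated with Python's `socket.sendfile`, which delegates to the kernel's `sendfile(2)` where available (Kafka does the equivalent via Java's `FileChannel.transferTo`), so file bytes never pass through userspace buffers:

```python
import os
import socket
import tempfile

# Sketch of the file->socket zero-copy path Kafka uses when serving
# consumers: socket.sendfile uses the kernel sendfile(2) syscall where
# available, avoiding a copy through userspace.
def serve_file(sock, path):
    with open(path, "rb") as f:
        return sock.sendfile(f)      # kernel-level file->socket transfer

# Demonstration over a local socket pair:
a, b = socket.socketpair()
with tempfile.NamedTemporaryFile(delete=False) as tmp:
    tmp.write(b"log segment bytes")
    tmp_path = tmp.name
sent = serve_file(a, tmp_path)
a.close()
assert b.recv(1024) == b"log segment bytes"
os.unlink(tmp_path)
b.close()
```

Combined with mostly-sequential reads served from the filesystem cache (features 11 and 12 above), this keeps per-message CPU and copy overhead low.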