Running Kafka At Scale
System Architect @ Confluent
Committer @ Apache Kafka
Software Engineer @ Cloudera
Senior Consultant @ Pythian
So, we are done, right?
When it comes to critical production systems –
Never trust a vendor.
Building a Kafka cluster from the hardware up
What’s Important To You?
Message Retention - Disk size
Message Throughput - Network capacity
Producer Performance - Disk I/O
Consumer Performance - Memory
RAIS - Redundant Array of Inexpensive Servers
Kafka is well-suited to horizontal scaling
Also helps with CPU utilization
• Kafka needs to decompress and recompress every message batch
• KIP-31 will help with this by eliminating recompression
Don’t co-locate Kafka
RAID 10
• Can survive a single disk failure (not RAID 0)
• Provides the broker with a single log directory
• Eats up disk I/O
JBOD
• Gives Kafka all the disk I/O available
• Broker is not smart about balancing partitions
• If one disk fails, the entire broker stops
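Either layout ends up expressed in the broker's log.dirs setting – a minimal sketch, with placeholder paths:
  # RAID 10: one volume, one log directory
  log.dirs=/data/kafka
  # JBOD: one directory per disk; partitions are spread across them
  log.dirs=/data1/kafka,/data2/kafka,/data3/kafka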
Amazon EBS performance works!
Operating System Tuning
• EXT4 or XFS
• Using unsafe mount options
• Dirty Pages
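A rough sketch of this layer, assuming a Linux broker – device, path, and values are illustrative only:
  # /etc/fstab – skip atime updates on the Kafka data volume
  /dev/sdb1  /data/kafka  xfs  noatime  0 0
  # sysctl – start background writeback early, but allow plenty of dirty pages before writers block
  vm.dirty_background_ratio = 5
  vm.dirty_ratio = 60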
Only use JDK 8 now
Keep heap size small
• Even our largest brokers use a 6 GB heap
• Save the rest for page cache
Garbage Collection - G1 all the way
• Basic tuning only
• Watch for humongous allocations
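A sketch of what that looks like through the environment variables Kafka's start scripts read – the flag values are a commonly cited starting point, not a prescription:
  export KAFKA_HEAP_OPTS="-Xms6g -Xmx6g"
  export KAFKA_JVM_PERFORMANCE_OPTS="-server -XX:+UseG1GC \
    -XX:MaxGCPauseMillis=20 -XX:InitiatingHeapOccupancyPercent=35"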
Monitoring the Foundation
Network inbound and outbound
Filehandle usage for Kafka
• Free space - where you write logs, and where Kafka stores messages
• Free inodes
• I/O performance - at least average wait and percent utilization
Broker Ground Rules
• Stick (mostly) with the defaults
• Set default cluster retention as appropriate
• Default partition count should be at least the number of brokers
• Watch the right things
• Don’t try to alert on everything
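A minimal server.properties sketch along these lines – the values are placeholders, not recommendations:
  log.retention.hours=168          # default cluster retention (one week here)
  num.partitions=8                 # default partition count, at least the number of brokers
  default.replication.factor=3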
Triage and Resolution
• Solve problems, don’t mask them
Too Much Information!
Monitoring teams hate Kafka
• Per-Topic metrics
• Per-Partition metrics
• Per-Client metrics
Capture as much as you can
• Many metrics are useful while triaging an issue
Clients want metrics on their own topics
Only alert on what is needed to signal a problem
Bytes In and Out, Messages In
• Why not messages out?
• Partition Count and Leader Count
• Under Replicated Partitions and Offline Partitions
• Network pool, Request pool
• Max Dirty Percent
• Rates and times - total, queue, local, and send
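For reference, a sketch of the broker JMX MBeans these bullets map to (names as of the 0.9/0.10 brokers – check the docs for your version):
  kafka.server:type=BrokerTopicMetrics,name=BytesInPerSec
  kafka.server:type=BrokerTopicMetrics,name=BytesOutPerSec
  kafka.server:type=BrokerTopicMetrics,name=MessagesInPerSec
  kafka.server:type=ReplicaManager,name=PartitionCount
  kafka.server:type=ReplicaManager,name=LeaderCount
  kafka.server:type=ReplicaManager,name=UnderReplicatedPartitions
  kafka.controller:type=KafkaController,name=OfflinePartitionsCount
  kafka.network:type=SocketServer,name=NetworkProcessorAvgIdlePercent
  kafka.server:type=KafkaRequestHandlerPool,name=RequestHandlerAvgIdlePercent
  kafka.network:type=RequestMetrics,name=TotalTimeMs,request=Produce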
Bytes In, Bytes Out
Messages In, Produce Rate, Produce Failure Rate
Fetch Rate, Fetch Failure Rate
Log End Offset
• Why bother?
• KIP-32 will make this unnecessary
Provide this to your customers for them to alert on
Trend cluster utilization and growth over time
Use default configurations for quotas and retention to require customers to talk to you
Monitor request times
• If you are able to develop a consistent baseline, this is early warning
Under Replicated Partitions
Count of the number of partitions that are not fully replicated within the cluster
Also referred to as “replica lag”
Primary indicator of problems within the cluster
Appropriately Sizing Topics
Topics are “Logical” – data modeling is based on data and consumers
Number of partitions:
• How many brokers do you have in the cluster?
• How many consumers do you have?
• Do you have specific partition requirements?
Keeping partition sizes manageable
Don’t have too many partitions
More partitions → higher throughput
• t: target throughput, p: producer throughput per partition, c: consumer throughput per partition
• Required number of partitions = max(t/p, t/c)
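• Worked example (numbers only for illustration): with t = 1 GB/s and p = c = 20 MB/s per partition, max(1000/20, 1000/20) = 50 partitions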
Downside with more partitions
• requires more open file handles
• may increase unavailability
• may increase end-to-end latency
• may require more memory in the client
Rule of thumb
• 2–4K partitions per broker
• 10s of thousands of partitions per cluster
Broker Performance Checks
Are all the brokers in the cluster working?
Are the network interfaces saturated?
• Reelect partition leaders
• Rebalance partitions in the cluster
• Spread out traffic more (increase partitions or brokers)
Is the CPU utilization high? (especially iowait)
• Is another process competing for resources?
• Look for a bad disk
Are you still running 0.8?
Do you have really big messages?
Anatomy of a Produce Request
[Diagram: produce request flow through the broker to other brokers]
Do other replicas need to confirm?
Anatomy of a Fetch Request
Has the consumer property been satisfied?
[Diagram: fetch request flow through the broker to other brokers]
Request Local Time
Request Queue Time
Response Remote Time
How do we know it’s the app?
[Flowchart: try the perf tool from the client, then try the perf tool on the broker – outcomes: “probably the app” or “either the broker …”]
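The perf tools ship with Kafka (exact flags vary a little between versions); a sketch, with the topic name and broker address as placeholders:
  bin/kafka-producer-perf-test.sh --topic perf-test --num-records 1000000 \
      --record-size 1000 --throughput -1 --producer-props bootstrap.servers=broker1:9092
  bin/kafka-consumer-perf-test.sh --topic perf-test --messages 1000000 \
      --broker-list broker1:9092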
Sync = Slow
batch.size vs. linger.ms
• Batch will be sent as soon as it is full
• Therefore small batch size can decrease throughput
• Increase batch size if the producer is running near saturation
• If consistently sending near-empty batches – increasing linger.ms will add a bit of latency, but improve throughput
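A sketch of where these knobs live on the Java producer – the values below are only for illustration:
  import java.util.Properties;
  import org.apache.kafka.clients.producer.KafkaProducer;

  Properties props = new Properties();
  props.put("bootstrap.servers", "broker1:9092");   // placeholder address
  props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
  props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
  props.put("batch.size", "65536");   // larger batches help a producer running near saturation
  props.put("linger.ms", "10");       // wait briefly to fill batches that would otherwise go out nearly empty
  KafkaProducer<String, String> producer = new KafkaProducer<>(props);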
Consumers typically live in “consumer groups”
Partitions in topics are balanced between consumers in groups
[Diagram: partitions in a topic balanced across the consumers in Consumer Group 1]
My Consumer is not just slow – it is hanging!
• There are no messages available (try perf consumer)
• Next message is too large
• Perpetual rebalance
• Not polling enough
• Multiple consumers in same group in same thread
Rebalances are the consumer performance killer
Consumers must keep polling
Or they die.
When consumers die,
the group rebalances.
When the group rebalances,
it does not consume.
fetch.min.bytes vs. fetch.max.wait.ms
• What if the topic doesn’t have much data?
• “Are we there yet?” “and now?”
• Reduce load on broker by letting fetch requests wait a bit for data
• Add latency to increase throughput
• Careful! Don’t fetch more than you can process!
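A sketch of these two settings on the new Java consumer – addresses, group name, and values are placeholders:
  Properties props = new Properties();
  props.put("bootstrap.servers", "broker1:9092");
  props.put("group.id", "my-group");
  props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
  props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
  props.put("fetch.min.bytes", "65536");     // broker holds the fetch until 64 KB is available...
  props.put("fetch.max.wait.ms", "500");     // ...or 500 ms pass, whichever comes first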
Commits take time
• Commit less often
• Commit async
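A sketch of a poll loop that commits asynchronously and less often – assumes a KafkaConsumer<String, String> named consumer that is already subscribed; process() and the 5-second interval are placeholders:
  import org.apache.kafka.clients.consumer.ConsumerRecord;
  import org.apache.kafka.clients.consumer.ConsumerRecords;

  long lastCommit = System.currentTimeMillis();
  while (true) {
      ConsumerRecords<String, String> records = consumer.poll(100);
      for (ConsumerRecord<String, String> record : records) {
          process(record);                              // placeholder handler
      }
      if (System.currentTimeMillis() - lastCommit > 5000) {
          consumer.commitAsync();                       // non-blocking, keeps the poll loop moving
          lastCommit = System.currentTimeMillis();
      }
  }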
• Consumer throughput is often limited by the target
• i.e. you can only write to HDFS so fast (and it ain't fast)
• My SLA is 1GB/s but single-client HDFS writes are 20MB/s
• If each consumer writes to HDFS – you need 50 consumers
• Which means you need 50 partitions
• Except sometimes adding partitions is a bitch
• So do the math first
I need to get data from Dallas to AWS
• Put the consumer far from Kafka
• Because failure to pull data is safer than failure to push
• Tune network parameters in the client, the broker, and both OSes
• Send buffer → bandwidth × delay
This will maximize use of bandwidth.
Note that cheap AWS nodes have low bandwidth
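A rough worked example (numbers illustrative): a 1 Gbit/s link with a 40 ms round trip needs about 125 MB/s × 0.04 s ≈ 5 MB of socket buffer to keep the pipe full. The knobs are send.buffer.bytes / receive.buffer.bytes on the clients, socket.send.buffer.bytes / socket.receive.buffer.bytes on the broker, and net.core.rmem_max / net.core.wmem_max on both operating systems – for example:
  # client (producer or consumer)
  send.buffer.bytes=5242880
  receive.buffer.bytes=5242880
  # broker
  socket.send.buffer.bytes=5242880
  socket.receive.buffer.bytes=5242880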
• Burrow is useful here
• records-per-request / bytes-per-request
Apologies on behalf of the Kafka community –
we forgot to document metrics for the new clients