1. New age Distributed Messaging
Kafka & Concepts explored !!
Dileep Varma Kalidindi
Nov 2014
2. Who Am I?
4/5/2016 Confidential 2
Name: Dileep Varma Kalidindi
Status: Senior Engineer @Responsys (since Apr’14), Circles Team.
Fascination: Problem Solving, Distributed & BigData-churning systems.
Past: 8+yrs with VeriSign, Informatica Labs, NTT Data.
Hobbies: Jumping (Water & Air)
3. What is brewing today?
Responsys Technology Road Map.
Data off the limits - Handling & Processing BigData
Scope for New Age capabilities (in distributed msg’ng) – Architecture peek through
Existing System bottlenecks & shortfalls
Rethinking from fundamentals – Distributed Commit Log
Kafka Messaging – Concept, Architecture, API & Demo
Kafka Internals – ZooKeeper in depth, Atomic broadcast & Quorum
Performance & feature comparisons – Traditional vs New Age
5. Data off the limits – Handling larger Data sets
Kafka on Responsys technology Road map - Antonio
Data evolution from Traditional to BigData
Characterized by Volume, Variety, Velocity, Variability, Veracity & Complexity
Volume -> Quantity of data. Storage & Processing (Hadoop, NoSQL)
Variety -> Diversity of data sets, OLTP, OLAP (NoSQL, NewSQL)
Velocity -> Speed of data handling in real time (Kafka, Storm, Flume)
Deeper market penetration implicitly transforms Data
Our focus is on Velocity
The need of the hour is systems that can handle this – BigData technologies
7. Identifying Scope – Architecture Peek-in
[Architecture diagram: application components (UI, PUB, WS, CN, Bounce, LA, JMS, EC, SPAM, ETL, AB, EMD, CL, PD, ICR, Content, IDDP, Short URL, SUL, DIS, SMS, PG, PUSH, SMSL) feeding databases (EventDB, CustDB, ReportDB, SysAdmDB, AuditDB, UsageDB) and the Data Warehouse – highlighted scope: REAL TIME PROCESSING]
8. Is there a problem with my current system?
Existing systems are good (IBM MQ) in traditional sense.
Delivery guarantees are good for emails – but what about events (PubWeb, Bounce, AB)?
Focus on throughput: existing brokers have limitations.
Scaling, replication, and the cost of cluster maintenance in existing MQs.
Dynamic rebalancing of brokers and consumers.
10. Logs – fundamental system building blocks
• Log (as a foundation) :
Append-only, totally-ordered sequence of records, ordered by time
Each entry gets a unique, sequential id (a clock-decoupled timestamp)
Deterministic: replaying the log reproduces the same state
• Logging (as a core process):
• IS machine-readable logging
Ex: write-ahead logs, commit logs & transaction logs
• IS NOT application logging (human-readable)
Ex: Log4j, SLF4J, etc.
• Backbone of Distributed Messaging, Databases, NoSQL, Key-Value stores, replication, Hadoop, Version Control…
• Logs for Data Integration, Real-time Processing & System building.
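The log abstraction above can be sketched in a few lines of Java (an illustrative toy, not Kafka's actual implementation): appends receive unique, sequential offsets, and reads are a deterministic function of offset.

```java
import java.util.ArrayList;
import java.util.List;

// Minimal append-only log: each record gets a unique, sequential offset
// (a clock-decoupled "timestamp"), and replaying it is deterministic.
class AppendOnlyLog {
    private final List<String> records = new ArrayList<>();

    // Append returns the record's offset: total order by arrival, not wall clock.
    synchronized long append(String record) {
        records.add(record);
        return records.size() - 1;
    }

    // Read is a pure function of the offset, so any reader replaying
    // the log from 0 observes exactly the same sequence.
    String read(long offset) {
        return records.get((int) offset);
    }

    long nextOffset() {
        return records.size();
    }
}
```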
11. Logs – solving problems
• Logs are not new in Databases !!
Started with IBM System R
Physical logging – the changed row values; Logical logging – the SQL statements themselves
Log implementations – from ACID to replication (GoldenGate)
• State Machine Replication Principle
Two identical, deterministic processes that begin in the same state and receive the same inputs in the same order produce the same outputs and end in the same state
• In Distributed Systems they solve core problems
Ordering changes & Distributing data
• Processing and replication
Active – Passive
Active - Active
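The state-machine replication principle can be demonstrated with a toy deterministic counter (a hypothetical example, not from the slides): any replica that replays the same ordered command log ends in the same state.

```java
import java.util.List;

// Toy deterministic state machine: applies "ADD n" / "MUL n" commands
// to a counter. Determinism + identical ordered input => identical state.
class CounterReplica {
    private long state = 0;

    void apply(String command) {
        String[] parts = command.split(" ");
        long n = Long.parseLong(parts[1]);
        if (parts[0].equals("ADD")) state += n;
        else if (parts[0].equals("MUL")) state *= n;
    }

    long state() { return state; }

    // Replaying the same ordered log on any replica yields the same state,
    // which is exactly what log shipping exploits for replication.
    static long replay(List<String> log) {
        CounterReplica r = new CounterReplica();
        for (String cmd : log) r.apply(cmd);
        return r.state();
    }
}
```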
12. Logs – driving architecture
• Log-structured data flow
Cache system
Asynchronous Production & Consumption
• Kafka Log Centric approach:
Not a database, not a log-file collection, not a typical messaging system
• Event driven architecture:
Kafka – event-driven, multi-subscriber system (Topic)
Example – multiple subscribers performing different operations on a single event
15. Introducing Kafka
http://kafka.apache.org/
Originated at LinkedIn, open sourced in early 2011
Implemented in Scala, some Java
9 core committers, plus ~ 20 contributors
Kafka is a distributed, partitioned, replicated commit log
service. A uniquely designed pub-sub messaging system
Designed for:
High throughput to support high-volume event feeds.
Real-time processing of these feeds to create new, derived feeds.
Low-latency delivery to handle traditional messaging use cases.
Fault-tolerance guarantees in the presence of machine failures.
17. Kafka is Amazingly fast – How?
• “Up to 2 million writes/sec on 3 cheap machines”
• Using 3 producers on 3 different machines, 3x async replication
• Only 1 producer per machine because the NIC is already saturated
• Sustained throughput as stored data grows
• Slightly different test config than 2M writes/sec above.
18. Kafka is Amazingly fast – Why?
• Fast writes:
• While Kafka persists all data to disk, essentially all writes go to the
page cache of OS, i.e. RAM.
• Cf. hardware specs and OS tuning (we cover this later)
• Fast reads:
• Very efficient to transfer data from page cache to a network socket
• Linux: sendfile() system call
• Combination of the two = fast Kafka!
• Example (Operations): On a Kafka cluster where the consumers are mostly caught
up you will see no read activity on the disks as they will be serving data entirely
from cache.
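The sendfile() path can be illustrated in Java via FileChannel.transferTo, which maps to sendfile(2) on Linux: the kernel moves bytes from the source file's page cache straight to the target channel without copying through a user-space buffer. The sketch below transfers file-to-file so it runs standalone; Kafka does the same thing toward a network socket.

```java
import java.io.IOException;
import java.nio.channels.FileChannel;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

// Zero-copy style transfer: no read()-into-buffer / write()-back-out cycle
// in user space; the kernel streams directly from the page cache.
class ZeroCopyDemo {
    static long transfer(Path src, Path dst) throws IOException {
        try (FileChannel in = FileChannel.open(src, StandardOpenOption.READ);
             FileChannel out = FileChannel.open(dst, StandardOpenOption.CREATE,
                                                StandardOpenOption.WRITE)) {
            long size = in.size();
            long sent = 0;
            // transferTo may move fewer bytes than requested, so loop until done.
            while (sent < size) {
                sent += in.transferTo(sent, size - sent, out);
            }
            return sent;
        }
    }
}
```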
19. Kafka Core Concepts - A first look
• Who's who
• Producers write data to brokers.
• Consumers read data from brokers.
• All this is distributed.
• The data
• Data is stored in topics.
• Topics are split into partitions, which are replicated
20. Kafka Concepts - Topics
• Topic: feed name to which messages are published
• Example: “pubweb.event.2”
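A producer addresses such a topic by name; a minimal producer configuration might look like the sketch below (the broker address is a placeholder, and the property names follow the newer Java producer client, which has evolved across Kafka versions):

```properties
# Hypothetical minimal producer config (broker host is a placeholder)
bootstrap.servers=broker1:9092
key.serializer=org.apache.kafka.common.serialization.StringSerializer
value.serializer=org.apache.kafka.common.serialization.StringSerializer
# acks=1: wait only for the partition leader to acknowledge the write
acks=1
```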
23. Kafka Concepts - Partitions
• A topic consists of partitions
• Partition: ordered + immutable sequence of messages
that is continually appended to
• The number of partitions per topic is configurable
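Which partition a message lands in is typically decided by hashing its key, so messages with the same key preserve their relative order. Below is a sketch of this idea; Kafka's actual default partitioner differs in hashing details.

```java
// Sketch of key-based partition assignment: the same key always maps
// to the same partition, which preserves per-key ordering.
class Partitioner {
    static int partitionFor(String key, int numPartitions) {
        if (numPartitions <= 0) {
            throw new IllegalArgumentException("numPartitions must be > 0");
        }
        // Mask off the sign bit so the modulo result is non-negative.
        return (key.hashCode() & 0x7fffffff) % numPartitions;
    }
}
```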
24. Kafka Concepts - Partition Offset
• Offset: messages in the partitions are each assigned a unique (per
partition) and sequential id called the offset
• Consumers (e.g. consumer group C1) track their position via (offset, partition, topic) tuples
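A consumer's position is therefore nothing more than a map from (topic, partition) to the next offset to read, as the toy tracker below shows (illustrative only; real clients persist these offsets, e.g. in ZooKeeper in Kafka 0.8).

```java
import java.util.HashMap;
import java.util.Map;

// A consumer's position is a (topic, partition) -> next-offset map.
// "Committing" means persisting this map; rewinding means lowering an entry.
class OffsetTracker {
    private final Map<String, Long> positions = new HashMap<>();

    private static String key(String topic, int partition) {
        return topic + "-" + partition;
    }

    // Next offset to read; a fresh partition starts at 0.
    long position(String topic, int partition) {
        return positions.getOrDefault(key(topic, partition), 0L);
    }

    // Advance past a message consumed at the given offset.
    void advance(String topic, int partition, long consumedOffset) {
        positions.put(key(topic, partition), consumedOffset + 1);
    }
}
```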
25. Kafka Concepts - Partition Replicas
• Replicas: “backups” of a partition
• They exist solely to prevent data loss.
• Replicas are never read from, never written to.
• They do NOT help to increase producer or consumer parallelism!
31. Kafka Core Concepts - Recap
• Who's who
• Producers write data to brokers.
• Consumers read data from brokers.
• All this is distributed.
• The data
• Data is stored in topics.
• Topics are split into partitions, which are replicated
35. Apache ZooKeeper
Apache Kafka uses ZooKeeper to detect crashes, implement topic
discovery, and maintain production & consumption state for topics.
High-performance coordination service for distributed applications.
SoC – Separates Coordination overhead from Application logic.
Centralized service for naming (registry), configuration
management, synchronization, and group membership services.
ZooKeeper is the backbone for HBase, Solr, Facebook's messaging apps & many more distributed apps.
Simple, Replicated, Ordered and Fast
36. Zookeeper- Internals
Znodes
Persistent – exist until explicitly deleted
Ephemeral – scoped to the client session
Reads are served by any node; writes are routed through the leader
Data is stored as byte arrays
Supports watches and notifications
Ensemble – the group of servers available to serve requests
Quorum-based leader election
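Quorum-based election relies on strict majorities: any two majorities of the ensemble intersect, so two leaders cannot be elected for the same epoch. The arithmetic is simple, as this sketch shows:

```java
// ZooKeeper-style majority quorum: an ensemble of n servers makes progress
// only while a strict majority (floor(n/2) + 1) is connected, so it
// tolerates at most floor((n-1)/2) server failures.
class Quorum {
    static int majority(int ensembleSize) {
        return ensembleSize / 2 + 1;
    }

    static int toleratedFailures(int ensembleSize) {
        return ensembleSize - majority(ensembleSize);
    }
}
```

This is why ensembles are usually sized with an odd number of servers: a 4-node ensemble tolerates no more failures than a 3-node one.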
37. ZooKeeper – Guarantees
• Follows principles of ATOMIC broadcast
Sequential Consistency – Updates are applied in order
Atomicity – Updates either succeed or fail
Single system image – Same view of service regardless of ZK server
Reliability – Persistence of updates
Timeliness – System is guaranteed to be up-to-date within time bound
• In Summary - Zookeeper { Leader Activation + Message delivery }
39. Kafka performance – Producer tests
(LinkedIn benchmark test)
• HW set-up with 2 Linux nodes
• Each with eight 2 GHz cores (~16 GHz of processing per machine)
• 16 GB of RAM, 6 disks in RAID 10, and a 1 Gb network connection.
• Producer test
• Single producer sends ~10 million msgs of 200 bytes each
• Kafka tested with batch sizes of 1 and 50; other MQs with no batching
• X-axis – Msg sent to broker, Y-axis – Producer throughput
• Why is the Producer fast?
• No ACK required from the broker
• Batching
• Kafka storage format
40. Kafka performance – Consumer tests
(LinkedIn benchmark test)
• HW set-up with 2 Linux nodes
• Each with eight 2 GHz cores (~16 GHz of processing per machine)
• 16 GB of RAM, 6 disks in RAID 10, and a 1 Gb network connection.
• Consumer test
• Single consumer retrieves 10 million msgs of 200 bytes each
• Each pull request fetches 1,000 msgs (200 KB)
• X-axis – Msg consumed from broker, Y-axis – consumer throughput
• Why is the Consumer fast?
• No delivery-state storage on the broker
• Kafka storage format (less data transmitted)
42. Summary – quick Recap
Importance of Handling & Processing BigData
Scope for introduction in Responsys Architecture
Existing System bottlenecks & shortfalls
Distributed Commit Log
Kafka Messaging
Kafka Internals – ZooKeeper
Performance & feature comparisons – Traditional vs New Age
43. Conclusion – Open ended
• Limitation is on Data – not on Systems
• No need for complete revamp
• Choosing the right systems at the right time is the recipe.
References
1. https://kafka.apache.org/
2. http://zookeeper.apache.org/
3. http://engineering.linkedin.com/kafka/benchmarking-apache-kafka-2-million-writes-second-three-cheap-machines
4. http://engineering.linkedin.com/distributed-systems/log-what-every-software-engineer-should-know-about-real-time-datas-unifying