Admitted Cloud Adoption
Strategy – Phase 1
Publish–Subscribe Model with Kafka
What Is the Pub–Sub Model?
• Publish/subscribe messaging is a pattern that is characterized by the
sender (publisher) of a piece of data (message) not specifically
directing it to a receiver.
• Instead, the publisher classifies the message somehow, and the
receiver (subscriber) subscribes to receive certain classes of
messages.
• Pub/sub systems often have a broker, a central point where messages
are published, to facilitate this.
• Apache Kafka is a publish/subscribe messaging system.
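The decoupling described above can be sketched in a few lines of Python. This is a toy in-memory broker, not Kafka itself; the class and method names are invented for illustration:

```python
from collections import defaultdict

class Broker:
    """Toy in-memory pub/sub broker: publishers never address receivers
    directly; subscribers register interest in a class of messages (topic)."""

    def __init__(self):
        self._subscribers = defaultdict(list)  # topic -> list of callbacks

    def subscribe(self, topic, callback):
        self._subscribers[topic].append(callback)

    def publish(self, topic, message):
        # The publisher only names a topic; the broker fans the message
        # out to every subscriber of that topic.
        for callback in self._subscribers[topic]:
            callback(message)

broker = Broker()
received = []
broker.subscribe("orders", received.append)
broker.publish("orders", {"id": 1})
broker.publish("payments", {"id": 2})  # no subscriber for this topic
```

The publisher knows nothing about who (if anyone) receives the message; that is the core of the pattern.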
Kafka
At the heart of the Kafka messaging system sits a partitioned, replayable log. The
log-structured approach is itself a simple idea: a collection of messages, appended
sequentially to a file. When a service wants to read messages from Kafka, it
“seeks” to the position of the last message it read, then scans sequentially, reading
messages in order while periodically recording its new position in the log.
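That read pattern can be sketched as follows, with an in-memory list standing in for a single log file (names are illustrative, not the Kafka API):

```python
class Log:
    """Toy append-only log: messages are appended sequentially, and each
    reader tracks its own position (offset) rather than the log tracking it."""

    def __init__(self):
        self._messages = []

    def append(self, message):
        self._messages.append(message)
        return len(self._messages) - 1  # offset of the new message

    def read_from(self, offset):
        # "Seek" to a position, then scan sequentially, reading in order.
        return self._messages[offset:]

log = Log()
for m in ["a", "b", "c", "d"]:
    log.append(m)

position = 0
batch = log.read_from(position)
position += len(batch)           # periodically record the new position
assert batch == ["a", "b", "c", "d"]

log.append("e")
assert log.read_from(position) == ["e"]  # resume exactly where we left off
```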
Topic and Partitions
• Messages in Kafka are categorized into topics. The closest analogy for a topic is a database table or a folder
in a filesystem.
• Topics are additionally broken down into a number of partitions. Going back to the “commit log” description,
a partition is a single log.
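Which partition a message lands in is typically decided by hashing the message key, so all messages with the same key stay in one partition (and therefore stay ordered). Kafka's own default partitioner uses a murmur2 hash; the sketch below substitutes a stdlib hash to show the idea:

```python
import hashlib

NUM_PARTITIONS = 4

def partition_for(key: bytes) -> int:
    """Map a message key to a partition deterministically. This stands in
    for Kafka's default partitioner (which uses murmur2); any stable hash
    gives the same property: equal keys always map to the same partition."""
    digest = hashlib.md5(key).digest()
    return int.from_bytes(digest[:4], "big") % NUM_PARTITIONS

p = partition_for(b"customer-42")
assert partition_for(b"customer-42") == p  # same key, same partition
assert 0 <= p < NUM_PARTITIONS
```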
Producer and Consumers
• Producers create new messages. In general, a message will be produced to a specific topic.
• Consumers read messages. The consumer subscribes to one or more topics and reads the messages.
Brokers and Clusters
• A single Kafka server is called a broker. The broker receives messages from producers, assigns offsets to them, and
commits the messages to storage on disk.
• It also services consumers, responding to fetch requests for partitions with the messages that have been
committed to disk.
• Kafka brokers are designed to operate as part of a cluster. Within a cluster of brokers, one broker will also function as the
cluster controller.
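The controller role can be illustrated with a toy election. Real Kafka elects the controller through ZooKeeper (classic mode) or a Raft quorum (KRaft); this sketch just picks the lowest live broker id to show that exactly one broker holds the role at a time, and that the role moves if that broker fails:

```python
def elect_controller(broker_ids):
    """Toy stand-in for controller election: deterministically pick one
    live broker to act as cluster controller. (Illustrative only; not
    how Kafka actually elects its controller.)"""
    return min(broker_ids)

cluster = {0, 1, 2}
assert elect_controller(cluster) == 0
cluster.remove(0)                      # the controller broker fails...
assert elect_controller(cluster) == 1  # ...and another broker takes over
```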
Why Kafka
• Multiple Producers: Kafka is able to seamlessly handle multiple producers, whether those
clients are using many topics or the same topic.
• Multiple Consumers: Kafka is designed for multiple consumers to read any single stream of
messages without interfering with each other.
• Disk-Based Retention: Messages are committed to disk and will be stored with configurable
retention rules. Durable retention means that if a consumer falls behind, there is no danger of losing data.
• Consumers can be stopped, and the messages will be retained in Kafka. This allows them to
restart and pick up processing messages where they left off with no data loss.
• Scalable: Kafka’s flexible scalability makes it easy to handle any amount of data. Users can
start with a single broker as a proof of concept, expand to a small development cluster of
three brokers, and move into production with a larger cluster. Producers, consumers, and
brokers can all be scaled out to handle very large message streams with ease.
• High Performance: All of these features come together to make Apache Kafka a
publish/subscribe messaging system with excellent performance under high load.
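The stop-and-resume point above can be made concrete: because messages are retained in the log, a consumer only needs its last committed offset to pick up where it left off. A toy model (not the Kafka consumer API; names are illustrative):

```python
class Consumer:
    """Toy consumer that tracks a committed offset, so it can be stopped
    and later restarted without losing or reprocessing messages."""

    def __init__(self, log, committed_offset=0):
        self.log = log
        self.offset = committed_offset

    def poll(self):
        batch = self.log[self.offset:]
        self.offset += len(batch)   # "commit" the new position
        return batch

log = ["m1", "m2", "m3"]
c = Consumer(log)
assert c.poll() == ["m1", "m2", "m3"]

saved = c.offset                  # the committed offset survives a restart
log.extend(["m4", "m5"])          # messages retained while consumer is down
restarted = Consumer(log, committed_offset=saved)
assert restarted.poll() == ["m4", "m5"]  # no data loss, no duplicates
```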
How to build a Pub Sub Model with Kafka
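A minimal end-to-end sketch of such a demo might look like the following. It assumes the third-party kafka-python package and a broker reachable at localhost:9092; the topic name and helper functions are invented for illustration, not taken from the slides:

```python
# Hedged sketch: requires `pip install kafka-python` and a running broker
# at localhost:9092 (both are assumptions, not part of the original deck).
import json

def encode(obj):
    """Serialize a message value before producing it."""
    return json.dumps(obj).encode("utf-8")

def decode(raw):
    """Deserialize a message value read from a partition."""
    return json.loads(raw.decode("utf-8"))

def main():
    from kafka import KafkaProducer, KafkaConsumer

    # Producer: publish a message to a topic, without knowing the consumers.
    producer = KafkaProducer(bootstrap_servers="localhost:9092",
                             value_serializer=encode)
    producer.send("demo-topic", {"event": "hello"})
    producer.flush()

    # Consumer: subscribe to the topic and read messages in order.
    consumer = KafkaConsumer("demo-topic",
                             bootstrap_servers="localhost:9092",
                             auto_offset_reset="earliest",
                             value_deserializer=decode,
                             consumer_timeout_ms=5000)
    for record in consumer:
        print(record.value)

if __name__ == "__main__":
    main()
```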
Questions?
