Himani Arora & Prabhat Kashyap
Software Consultant
@_himaniarora @pk_official
Who are we?
Himani Arora
@_himaniarora
Software Consultant @ Knoldus Software LLP
Contributed to Apache Kafka, Jupyter,
Apache CarbonData, Lightbend Lagom, etc.
Currently learning Apache Kafka
Prabhat Kashyap
@pk_official
Software Consultant @ Knoldus Software LLP
Contributed to Apache Kafka, Apache
CarbonData, and Lightbend Templates
Currently learning Apache Kafka
Agenda
● What is stream processing
● Paradigms of programming
● Stream processing with Kafka
● What are Kafka Streams
● Inside Kafka Streams
● Demonstration of stream processing using Kafka Streams
● Overview of Kafka Connect
● Demo with Kafka Connect
What is stream processing?
● Real-time processing of data
● Does not treat data as static tables or files
● Data has to be processed fast, so that a firm can react to
changing business conditions in real time. This is required
for trading, fraud detection, system monitoring, and many
other examples.
● A “too late architecture” cannot realize these use cases.
BIG DATA VERSUS FAST DATA
3 PARADIGMS OF PROGRAMMING
● REQUEST/RESPONSE
● BATCH SYSTEMS
● STREAM PROCESSING
REQUEST/RESPONSE
BATCH SYSTEM
STREAM PROCESSING
STREAM PROCESSING with KAFKA
2 APPROACHES:
● DO IT YOURSELF (DIY!) STREAM PROCESSING
● STREAM PROCESSING FRAMEWORK
DIY STREAM PROCESSING
Major Challenges:
● FAULT TOLERANCE
● PARTITIONING AND SCALABILITY
● TIME
● STATE
● REPROCESSING
STREAM PROCESSING FRAMEWORK
Several stream processing frameworks are already available:
SPARK
STORM
SAMZA
FLINK ETC...
KAFKA STREAMS : ANOTHER WAY OF STREAM PROCESSING
Let’s start with Kafka Streams... but wait, what is KAFKA?
Hello! Apache Kafka
● Apache Kafka is an open source project under the Apache License
2.0.
● Apache Kafka was originally developed by LinkedIn.
● On 23 October 2012, Apache Kafka graduated from the Apache
Incubator to a top-level project.
● Components of Apache Kafka
○ Producer
○ Consumer
○ Broker
○ Topic
○ Data
○ Parallelism
Enterprises that use Kafka
What is Kafka Streams
● It is the Streams API of Apache Kafka, available as a Java library.
● Kafka Streams is built on top of functionality provided by Kafka’s
client libraries.
● It is, by deliberate design, tightly integrated with Apache Kafka.
● It can be used to build highly scalable, elastic, fault-tolerant, distributed
applications and microservices.
● The Kafka Streams API allows you to create real-time applications.
● It is the easiest yet most powerful technology to process data stored
in Kafka.
If we look closer
● A key motivation of the Kafka Streams API is to bring stream processing out of
the Big Data niche into the world of mainstream application development.
● Using the Kafka Streams API you can implement standard Java applications to
solve your stream processing needs.
● Your applications are fully elastic: you can run one or more instances of your
application.
● This is the lightweight and integrative approach of the Kafka Streams
API: “Build applications, not infrastructure!”
● Deployment-wise, you are free to choose any technology that can deploy
Java applications, as sketched below.
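To make this concrete, here is a minimal sketch of such a standard Java application. It assumes a recent Kafka Streams version (the API of the 0.10.x releases this deck targets differed slightly), a broker at localhost:9092, and hypothetical topic names:

```java
import java.util.Properties;

import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.KStream;

public class UppercaseApp {
    public static void main(String[] args) {
        Properties props = new Properties();
        // application.id doubles as the consumer group id and the prefix for local state.
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "uppercase-app");
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // assumed broker
        props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

        StreamsBuilder builder = new StreamsBuilder();
        KStream<String, String> input = builder.stream("input-topic"); // hypothetical topic
        input.mapValues(value -> value.toUpperCase())                  // record-at-a-time transform
             .to("output-topic");                                     // hypothetical topic

        KafkaStreams streams = new KafkaStreams(builder.build(), props);
        streams.start();
        // Elasticity: starting more instances of this same application scales it out.
        Runtime.getRuntime().addShutdownHook(new Thread(streams::close));
    }
}
```

Run it like any other Java application; no separate processing cluster is involved.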
Capabilities of Kafka Streams
● Powerful
○ Makes your applications highly scalable, elastic, distributed,
fault-tolerant.
○ Stateful and stateless processing
○ Event-time processing with windowing, joins, aggregations
● Lightweight
○ Low barrier to entry
○ No processing cluster required
○ No external dependencies other than Apache Kafka
Capabilities of Kafka Streams
● Real-time
○ Millisecond processing latency
○ Record-at-a-time processing (no micro-batching)
○ Seamlessly handles late-arriving and out-of-order data
○ High throughput
● Fully integrated
○ 100% compatible with Apache Kafka 0.10.2 and 0.10.1
○ Easy to integrate into existing applications and microservices
○ Runs everywhere: on-premises, public clouds, private clouds, containers, etc.
○ Integrates with databases through continuous change data capture (CDC) performed by
Kafka Connect
Key concepts of Kafka Streams
● Stateful Stream Processing
● KStream
● KTable
● Time
● Aggregations
● Joins
● Windowing
Key concepts of Kafka Streams
● Stateful Stream Processing
– Some stream processing applications don’t require state – they
are stateless.
– In practice, however, most applications require state – they are
stateful.
– The state must be managed in a fault-tolerant manner.
– An application is stateful whenever, for example, it needs to join,
aggregate, or window its input data.
Key concepts of Kafka Streams
● KStream
– A KStream is an abstraction of a record stream.
– Each data record represents a self-contained datum in the
unbounded data set.
– Using the table analogy, data records in a record stream are
always interpreted as an “INSERT” .
– Let’s imagine the following two data records are being sent to
the stream:
("alice", 1) --> ("alice", 3)
Key concepts of Kafka Streams
● KTable
– A KTable is an abstraction of a changelog stream.
– Each data record represents an update.
– Using the table analogy, data records in a changelog stream are
always interpreted as an “UPDATE”.
– Let’s imagine the following two data records are being sent to
the stream:
("alice", 1) --> ("alice", 3)
Key concepts of Kafka Streams
● Time
– A critical aspect in stream processing is the notion of time.
– Kafka Streams supports the following notions of time:
● Event Time
● Processing Time
● Ingestion Time
– Kafka Streams assigns a timestamp to every data record via
so-called timestamp extractors.
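As an illustration, a sketch of a custom extractor that pulls event time out of the record payload. Order and eventTimeMs() are hypothetical domain names; the extractor would be registered through the default.timestamp.extractor setting.

```java
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.streams.processor.TimestampExtractor;

public class OrderTimestampExtractor implements TimestampExtractor {
    @Override
    public long extract(ConsumerRecord<Object, Object> record, long partitionTime) {
        Object value = record.value();
        if (value instanceof Order) {             // Order: hypothetical domain type
            return ((Order) value).eventTimeMs(); // event time carried in the payload
        }
        return record.timestamp();                // fall back to the record's Kafka timestamp
    }
}
```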
Key concepts of Kafka Streams
● Aggregations
– An aggregation operation takes one input stream or table, and
yields a new table.
– It is done by combining multiple input records into a single
output record.
– In the Kafka Streams DSL, an input stream of an aggregation
operation can be a KStream or a KTable, but the output
stream will always be a KTable.
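Continuing the click-stream fragment above, a sketch of counting clicks per user: a KStream goes in, a continuously updated KTable comes out (the store name is an assumption).

```java
import org.apache.kafka.streams.kstream.Materialized;

// KStream in, KTable out: the count is updated as each input record arrives.
KTable<String, Long> clickCounts =
    clicks.groupByKey()
          .count(Materialized.as("click-counts-store")); // assumed, queryable store name
```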
Key concepts of Kafka Streams
● Joins
– A join operation merges two input streams and/or tables based
on the keys of their data records, and yields a new
stream/table.
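For example, a sketch of a stream-table join, assuming hypothetical orders and customers topics with String keys and values (default serdes as configured in the first example): every order record is enriched with that customer's latest profile.

```java
// Stream-table join: each arriving order is joined against the table's current value.
KStream<String, String> orders   = builder.stream("orders");   // hypothetical topic
KTable<String, String> customers = builder.table("customers"); // hypothetical topic

KStream<String, String> enriched =
    orders.join(customers, (order, customer) -> order + " for " + customer);
```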
Key concepts of Kafka Streams
● Windowing
– Windowing lets you control how to group records that have the same
key for stateful operations such as aggregations or joins into
so-called windows.
– Windows are tracked per record key.
– When working with windows, you can specify a retention period for
the window.
– This retention period controls how long Kafka Streams will wait for
out-of-order or late-arriving data records for a given window.
– If a record arrives after the retention period of a window has passed,
the record is discarded and will not be processed in that window.
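A sketch of windowed counting over the click stream from earlier. Recent Kafka Streams versions express the wait for late records as a grace period (the 0.10.x API this deck targets configured a retention via until()); the window and grace sizes below are arbitrary.

```java
import java.time.Duration;

import org.apache.kafka.streams.kstream.TimeWindows;
import org.apache.kafka.streams.kstream.Windowed;

// Tumbling 5-minute windows that accept records arriving up to 10 minutes late.
KTable<Windowed<String>, Long> windowedCounts =
    clicks.groupByKey()
          .windowedBy(TimeWindows.ofSizeAndGrace(Duration.ofMinutes(5),
                                                 Duration.ofMinutes(10)))
          .count(); // records later than the grace period are dropped from their window
```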
Inside Kafka Streams
Processor Topology
Stream Partitions and Tasks
● Each stream partition is a totally ordered sequence of data records and
maps to a Kafka topic partition.
● A data record in the stream maps to a Kafka message from that topic.
● The keys of data records determine the partitioning of data in both Kafka
and Kafka Streams, i.e., how data is routed to specific partitions within
topics.
Threading Model
● Kafka Streams allows the user to configure the number of threads that
the library can use to parallelize processing within an application
instance.
● Each thread can execute one or more stream tasks with their processor
topologies independently.
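The thread count is a plain configuration setting; a sketch, added to the properties of the first example:

```java
// Four processing threads in this instance; stream tasks are distributed among them.
props.put(StreamsConfig.NUM_STREAM_THREADS_CONFIG, 4);
```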
State
● Kafka Streams provides so-called state stores.
● State can be used by stream processing applications to store and query
data, which is an important capability when implementing stateful
operations.
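A sketch of querying a state store from within the application (so-called interactive queries, available in recent versions), reusing the streams instance from the first example and the assumed store name from the aggregation example:

```java
import org.apache.kafka.streams.StoreQueryParameters;
import org.apache.kafka.streams.state.QueryableStoreTypes;
import org.apache.kafka.streams.state.ReadOnlyKeyValueStore;

// Read-only view over the local "click-counts-store" materialized by count().
ReadOnlyKeyValueStore<String, Long> store =
    streams.store(StoreQueryParameters.fromNameAndType(
        "click-counts-store", QueryableStoreTypes.keyValueStore()));

Long aliceClicks = store.get("alice"); // local lookup; null if this instance has no such key
```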
Backpressure
● Kafka Streams does not use a backpressure mechanism because it
does not need one.
● It uses a depth-first processing strategy.
● Each record consumed from Kafka goes through the whole processor
(sub-)topology for processing and is (possibly) written back to
Kafka before the next record is processed.
● No records are buffered in memory between two connected
stream processors.
● Kafka Streams leverages Kafka’s consumer client behind the scenes.
DEMO
Kafka Streams
HOW TO GET DATA IN AND OUT OF KAFKA?
KAFKA CONNECT
Kafka Connect
● So-called Sources import data into Kafka, and Sinks export data from
Kafka.
● An implementation of a Source or Sink is a Connector, and users deploy
connectors to enable data flows on Kafka.
● All Kafka Connect sources and sinks map to partitioned streams of
records.
● This is a generalization of Kafka’s concept of topic partitions: a stream
refers to the complete set of records, which is split into independent,
infinite sequences of records.
CONFIGURING CONNECTORS
● Connector configurations are key-value mappings.
● For standalone mode these are defined in a properties file and
passed to the Connect process on the command line.
● In distributed mode, they will be included in the JSON payload
sent over the REST API for the request that creates the connector.
CONFIGURING CONNECTORS
A few settings are common to all connectors:
● name - Unique name for the connector. Attempting to register again
with the same name will fail.
● connector.class - The Java class for the connector.
● tasks.max - The maximum number of tasks that should be created for
this connector. The connector may create fewer tasks if it cannot
achieve this level of parallelism.
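For example, a minimal standalone configuration for the file source connector that ships with Kafka; the file path and topic name are assumptions.

```properties
# file-source.properties (standalone mode)
name=local-file-source
connector.class=org.apache.kafka.connect.file.FileStreamSourceConnector
tasks.max=1
file=/tmp/input.txt
topic=connect-test
```

It would be started with something like bin/connect-standalone.sh worker.properties file-source.properties, as the speaker notes below describe.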
REFERENCES
● https://www.slideshare.net/ConfluentInc/demystifying-stream-processing-with-apache-kafka-69228952
● https://www.confluent.io/blog/introducing-kafka-streams-stream-processing-made-simple/
● http://docs.confluent.io/3.2.0/streams/index.html
● http://docs.confluent.io/3.2.0/connect/index.html
Thank You
Editor's Notes
  1. Stream processing works continuously, concurrently, and in a record-by-record fashion, treating data as a continuous, infinite stream integrated from both live and historical sources.
  2. A big data architecture contains several parts. Often, masses of structured and semi-structured historical data are stored in Hadoop (Volume + Variety). On the other side, stream processing is used for fast data requirements (Velocity + Variety). Both complement each other very well. This meetup focuses on real-time and stream processing.
  3. IMAGE SOURCE: https://image.slidesharecdn.com/demystifyingstreamprocessingwithapachekafka-161118053223/95/demystifying-stream-processing-with-apache-kafka-4-638.jpg?cb=1479447621 Request/response is synchronous and tightly coupled. Scaling is possible by adding more instances of the service. It is latency sensitive, and due to the tight coupling it is sensitive to failures.
  4. In a batch system, you send all your inputs in and wait for the system to crunch all that data before it sends all the output back.
  5. Stream processing sits in between request/response and batch systems: you send some inputs in and you get some outputs back, where the definition of "some" is left to the program, and the output is available at variable times too. The big shift is that stream processing knows the data is unbounded and will never be complete. Benefit: it gives the program complete control over the tradeoffs involved (latency, correctness, and cost).
  6. DIY: you take your Kafka libraries and decide to do everything yourself. If you have decided to do this, you should be aware of these hard problems.
  7. Producers publish data to Kafka brokers, and consumers read published data from Kafka brokers. Producers and consumers are totally decoupled, and both run outside the Kafka brokers in the perimeter of a Kafka cluster. A Kafka cluster consists of one or more brokers.
  8. Kafka topics are divided into a number of partitions. Partitions allow you to parallelize a topic by splitting the data in a particular topic across multiple brokers; each partition can be placed on a separate machine to allow multiple consumers to read from a topic in parallel. Consumers can also be parallelized so that multiple consumers can read from multiple partitions in a topic, allowing for very high message processing throughput.
  9. Kafka Connect is a tool for scalably and reliably streaming data between Apache Kafka and other data systems. It makes it simple to quickly define connectors that move large data sets into and out of Kafka. Kafka Connect's scope is narrow: it focuses only on copying streaming data to and from Kafka and does not handle other tasks, such as stream processing.
  10. Standalone: bin/connect-standalone worker.properties connector1.properties [connector2.properties connector3.properties ...] Standalone mode is the simplest mode, where a single process is responsible for executing all connectors and tasks. Since it is a single process, it requires minimal configuration. Distributed mode provides scalability and automatic fault tolerance for Kafka Connect. In distributed mode, you start many worker processes using the same group.id and they automatically coordinate to schedule execution of connectors and tasks across all available workers. Example: curl -X POST -H "Content-Type: application/json" --data '{"name": "local-console-source", "config": {"connector.class":"org.apache.kafka.connect.file.FileStreamSourceConnector", "tasks.max":"1", "topic":"connect-test" }}' http://localhost:8083/connectors. Or, to use a file containing the JSON-formatted configuration: curl -X POST -H "Content-Type: application/json" --data @config.json http://localhost:8083/connectors
  11. Sink connectors also have one additional option to control their input: topics, a list of topics to use as input for this connector.