The document discusses Kafka, an open-source distributed event streaming platform. It provides an introduction to Kafka and describes how it is used by many large companies to process streaming data in real-time. Key aspects of Kafka explained include topics, partitions, producers, consumers, consumer groups, and how Kafka is able to achieve high performance through its architecture and design.
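The partition mechanics mentioned above (keyed records routed deterministically to partitions) can be sketched in a few lines of Python. This is an illustration only: Kafka's default partitioner murmur2-hashes the serialized key, while the hash below is a simple stand-in with the same contract.

```python
def partition_for(key: bytes, num_partitions: int) -> int:
    """Map a record key to a partition deterministically.

    Kafka's default partitioner hashes the serialized key (murmur2)
    modulo the partition count; the stand-in hash here illustrates
    the same contract: equal keys always land on the same partition.
    """
    h = 0
    for b in key:
        h = (h * 31 + b) & 0x7FFFFFFF  # simple stable hash, not murmur2
    return h % num_partitions

# Same key -> same partition, which is what gives Kafka per-key ordering.
p1 = partition_for(b"user-42", 6)
p2 = partition_for(b"user-42", 6)
```

Because ordering is only guaranteed within a partition, choosing the record key effectively chooses the ordering scope.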
Kafka Tutorial - Introduction to Apache Kafka (Part 1), by Jean-Paul Azar
Why is Kafka so fast? Why is Kafka so popular? Why Kafka? This slide deck is a tutorial for the Kafka streaming platform. It covers Kafka architecture with some small examples from the command line. Then we expand on this with a multi-server example to demonstrate failover of brokers as well as consumers. Then it goes through some simple Java client examples for a Kafka Producer and a Kafka Consumer. We have also expanded on the Kafka design section and added references. The tutorial covers Avro and the Schema Registry as well as advanced Kafka Producers.
Jay Kreps is a Principal Staff Engineer at LinkedIn where he is the lead architect for online data infrastructure. He is among the original authors of several open source projects including a distributed key-value store called Project Voldemort, a messaging system called Kafka, and a stream processing system called Samza. This talk gives an introduction to Apache Kafka, a distributed messaging system. It will cover both how Kafka works, as well as how it is used at LinkedIn for log aggregation, messaging, ETL, and real-time stream processing.
Apache Kafka is an open-source message broker project, written in Scala, developed by the Apache Software Foundation. The project aims to provide a unified, high-throughput, low-latency platform for handling real-time data feeds.
Kafka's basic terminology, its architecture, its protocol, and how it works.
Kafka at scale, its caveats, guarantees and use cases offered by it.
How we use it @ZaprMediaLabs.
ksqlDB is a stream processing SQL engine that enables stream processing on top of Apache Kafka. ksqlDB is based on Kafka Streams and provides capabilities for consuming messages from Kafka, analysing them in near real time with a SQL-like language, and producing results back to a Kafka topic. No Java code has to be written, and you can reuse your SQL know-how. This lowers the bar for starting with stream processing significantly.
ksqlDB offers powerful stream processing capabilities such as joins, aggregations, time windows and support for event time. In this talk I will present how KSQL integrates with the Kafka ecosystem and demonstrate how easy it is to implement a solution using ksqlDB for the most part. This will be done in a live demo on a fictitious IoT sample.
A brief introduction to Apache Kafka and its usage as a platform for streaming data. It will introduce some of the newer components of Kafka that help make this possible, including Kafka Connect, a framework for capturing continuous data streams, and Kafka Streams, a lightweight stream processing library.
Kafka Tutorial - basics of the Kafka streaming platform, by Jean-Paul Azar
Introduction to Kafka streaming platform. Covers Kafka Architecture with some small examples from the command line. Then we expand on this with a multi-server example. Lastly, we added some simple Java client examples for a Kafka Producer and a Kafka Consumer. We have started to expand on the Java examples to correlate with the design discussion of Kafka. We have also expanded on the Kafka design section and added references.
Watch this talk here: https://www.confluent.io/online-talks/how-apache-kafka-works-on-demand
Pick up best practices for developing applications that use Apache Kafka, beginning with a high level code overview for a basic producer and consumer. From there we’ll cover strategies for building powerful stream processing applications, including high availability through replication, data retention policies, producer design and producer guarantees.
We’ll delve into the details of delivery guarantees, including exactly-once semantics, partition strategies and consumer group rebalances. The talk will finish with a discussion of compacted topics, troubleshooting strategies and a security overview.
This session is part 3 of 4 in our Fundamentals for Apache Kafka series.
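The delivery guarantees discussed above map to a handful of producer settings. A hedged sketch: the config key names (`acks`, `enable.idempotence`, `transactional.id`) are Kafka's own producer configs, but the values and the transactional id are illustrative only and not tied to a particular client library.

```python
# Producer settings commonly associated with strong delivery guarantees.
# Key names are Kafka producer configs; values are illustrative.
exactly_once_producer_config = {
    "acks": "all",                       # wait for all in-sync replicas to ack
    "enable.idempotence": "true",        # broker de-duplicates retried sends
    "transactional.id": "orders-app-1",  # hypothetical id; enables transactions
}
```

With these set, a send retried after a transient network error is not written twice, and a transactional producer can write to multiple partitions atomically.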
Kafka Streams is a new stream processing library natively integrated with Kafka. It has a very low barrier to entry, easy operationalization, and a natural DSL for writing stream processing applications. As such it is the most convenient yet scalable option to analyze, transform, or otherwise process data that is backed by Kafka. We will provide the audience with an overview of Kafka Streams including its design and API, typical use cases, code examples, and an outlook of its upcoming roadmap. We will also compare Kafka Streams' light-weight library approach with heavier, framework-based tools such as Spark Streaming or Storm, which require you to understand and operate a whole different infrastructure for processing real-time data in Kafka.
Integrating Apache Kafka Into Your Environment, by Confluent
Watch this talk here: https://www.confluent.io/online-talks/integrating-apache-kafka-into-your-environment-on-demand
Integrating Apache Kafka with other systems in a reliable and scalable way is a key part of an event streaming platform. This session will show you how to get streams of data into and out of Kafka with Kafka Connect and REST Proxy, maintain data formats and ensure compatibility with Schema Registry and Avro, and build real-time stream processing applications with Confluent KSQL and Kafka Streams.
This session is part 4 of 4 in our Fundamentals for Apache Kafka series.
Kafka Tutorial, Kafka ecosystem with clustering examples, by Jean-Paul Azar
Why is Kafka so fast? Why is Kafka so popular? Why Kafka? This Introduction to Kafka streaming platform covers Kafka Architecture with many small examples from the command line. Then we expand on this with a multi-server example. We walk you through Consumer failover. Then we walk you through clustering and Kafka Broker failover. It covers consumers, producers, and clustering basics.
Kafka Tutorial: Streaming Data Architecture, by Jean-Paul Azar
Kafka tutorial covers Java examples for Producers and Consumers. Also covers why Kafka is important and what Kafka is. Takes a look at the whole ecosystem around Kafka. Discusses low-level details about Kafka needed for successful deploys and performance tuning like batching, compression, partitioning, and replication.
Kafka Intro With Simple Java Producer Consumers, by Jean-Paul Azar
Introduction to Kafka streaming platform. Covers Kafka Architecture with some small examples from the command line. Then we expand on this with a multi-server example. Lastly, we added some simple Java client examples for a Kafka Producer and a Kafka Consumer.
Unlocking the Power of Apache Kafka: How Kafka Listeners Facilitate Real-time..., by Denodo
Watch full webinar here: https://buff.ly/43PDVsz
In today's fast-paced, data-driven world, organizations need real-time data pipelines and streaming applications to make informed decisions. Apache Kafka, a distributed streaming platform, provides a powerful solution for building such applications and, at the same time, gives the ability to scale without downtime and to work with high volumes of data. At the heart of Apache Kafka lies Kafka Topics, which enable communication between clients and brokers in the Kafka cluster.
Join us for this session with Pooja Dusane, Data Engineer at Denodo where we will explore the critical role that Kafka listeners play in enabling connectivity to Kafka Topics. We'll dive deep into the technical details, discussing the key concepts of Kafka listeners, including their role in enabling real-time communication between consumers and producers. We'll also explore the various configuration options available for Kafka listeners and demonstrate how they can be customized to suit specific use cases.
Attend and Learn:
- The critical role that Kafka listeners play in enabling connectivity in Apache Kafka.
- Key concepts of Kafka listeners and how they enable real-time communication between clients and brokers.
- Configuration options available for Kafka listeners and how they can be customized to suit specific use cases.
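As an illustration of the configuration surface mentioned above, a sketch of broker listener settings in `server.properties`. The key names (`listeners`, `advertised.listeners`, `listener.security.protocol.map`, `inter.broker.listener.name`) are Kafka's; the host names and listener names are placeholders.

```properties
# "listeners" is what the broker binds; "advertised.listeners" is what
# clients are told to connect to (host names below are placeholders).
listeners=INTERNAL://0.0.0.0:9092,EXTERNAL://0.0.0.0:9093
advertised.listeners=INTERNAL://broker-1.internal:9092,EXTERNAL://broker-1.example.com:9093
listener.security.protocol.map=INTERNAL:PLAINTEXT,EXTERNAL:SASL_SSL
inter.broker.listener.name=INTERNAL
```

Separating internal and external listeners like this is a common way to expose a cluster to outside clients with different security settings than broker-to-broker traffic.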
Applying ML on your Data in Motion with AWS and Confluent | Joseph Morais, Co..., by HostedbyConfluent
Event-driven application architectures are becoming increasingly common as a large number of users demand more interactive, real-time, and intelligent responses. Yet it can be challenging to decide how to capture and perform real-time data analysis and deliver differentiating experiences. Join experts from Confluent and AWS to learn how to build Apache Kafka®-based streaming applications backed by machine learning models. Adopting the recommendations will help you establish repeatable patterns for high performing event-based apps.
Typesafe & William Hill: Cassandra, Spark, and Kafka - The New Streaming Data..., by DataStax Academy
Typesafe did a survey of Spark usage last year and found that a large percentage of Spark users combine it with Cassandra and Kafka. This talk focuses on streaming data scenarios that demonstrate how these three tools complement each other for building robust, scalable, and flexible data applications. Cassandra provides resilient and scalable storage, with flexible data format and query options. Kafka provides durable, scalable collection of streaming data with message-queue semantics. Spark provides very flexible analytics, everything from classic SQL queries to machine learning and graph algorithms, running in a streaming model based on "mini-batches", offline batch jobs, or interactive queries. We'll consider best practices and areas where improvements are needed.
Introduction to Kafka Streams Presentation, by Knoldus Inc.
Kafka Streams is a client library providing organizations with a particularly efficient framework for processing streaming data. It offers a streamlined method for creating applications and microservices that must process data in real-time to be effective. Using the Streams API within Apache Kafka, the solution fundamentally transforms input Kafka topics into output Kafka topics. The benefits are important: Kafka Streams pairs the ease of utilizing standard Java and Scala application code on the client end with the strength of Kafka’s robust server-side cluster architecture.
DevOps Fest 2020. Serhii Kalinets. Building Data Streaming Platform with Apac..., by DevOps_Fest
Apache Kafka is all the hype right now. More and more companies are starting to use it as a message bus. However, Kafka can do much more than serve as a transport. Its real power and beauty are revealed when Kafka becomes the central nervous system of your architecture. It is fast, reliable, and flexible enough for a variety of use cases.
In this talk, Serhii shares his experience building a data streaming platform. We will discuss how Kafka works, how it should be configured, and what pitfalls you can run into when Kafka is used suboptimally.
In this Kafka tutorial, we will discuss Kafka architecture. In this Kafka architecture article, we will see the APIs in Kafka. Moreover, we will learn about the Kafka Broker, Kafka Consumer, ZooKeeper, and Kafka Producer. Also, we will see some fundamental concepts of Kafka.
Apache Kafka - Scalable Message-Processing and more!, by Guido Schmutz
Independent of the source of data, the integration of event streams into an enterprise architecture gets more and more important in the world of sensors, social media streams and the Internet of Things. Events have to be accepted quickly and reliably, and they have to be distributed and analysed, often with many consumers or systems interested in all or part of the events. How can we make sure that all these events are accepted and forwarded in an efficient and reliable way? This is where Apache Kafka comes into play: a distributed, highly scalable message broker built for exchanging huge amounts of messages between a source and a target.
This session will start with an introduction into Apache Kafka and present the role of Apache Kafka in a modern data / information architecture and the advantages it brings to the table. Additionally, the Kafka ecosystem will be covered, as well as the integration of Kafka in the Oracle stack, with products such as GoldenGate, Service Bus and Oracle Stream Analytics all being able to act as a Kafka consumer or producer.
Amazon AWS basics needed to run a Cassandra Cluster in AWS, by Jean-Paul Azar
There is a lot of advice on how to configure a Cassandra cluster on AWS. Not every configuration meets every use case.
The best way to know how to deploy Cassandra on AWS is to know the basics of AWS. Part 1: we start by covering AWS as it applies to Cassandra. Later we go into detail with AWS-specific Cassandra concerns.
Event Streaming Architectures with Confluent and ScyllaDB, by ScyllaDB
Jeff Bean will lead a discussion of event-driven architectures, Apache Kafka, Kafka Connect, KSQL and Confluent Cloud. Then we'll talk about some uses of Confluent and Scylla together, including a co-deployment with Lookout, ScyllaDB and Confluent in the IoT space, and the upcoming native connector.
Lesfurest.com invited me to talk about the KAPPA Architecture style during a BBL.
Kappa architecture is a style for real-time processing of large volumes of data, combining stream processing, storage, and serving layers into a single pipeline. It differs from the Lambda architecture, which uses separate batch and stream processing pipelines.
Kafka Tutorial - Introduction to Apache Kafka (Part 2), by Jean-Paul Azar
Similar to Kafka Tutorial - introduction to the Kafka streaming platform (20)
Covers using Kafka MirrorMaker for disaster recovery, scaling reads, and isolating mission-critical clusters. Starts with a description of MirrorMaker and how to use it, then walks through a thorough introduction and example, step by step.
This tutorial covers advanced consumer topics: custom deserializers, using a ConsumerRebalanceListener to rewind to a certain offset, manual assignment of partitions to implement a "priority queue", Java consumer examples for "at least once", "at most once", and "exactly once" message delivery semantics, and a lot more.
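The difference between the delivery semantics listed above comes down to whether the consumer commits its offset before or after processing each record. A toy Python simulation of just that commit-ordering logic (no Kafka client involved; the crash point is contrived for illustration):

```python
class Crash(Exception):
    """Simulated consumer failure."""

def run(records, commit_first):
    """Toy model of commit ordering vs delivery semantics.

    The consumer crashes midway through record index 1, after the first
    of its two steps (commit / process) has run.

    commit_first=True  ~ "at most once": commit before processing,
                         so a crash can lose a record.
    commit_first=False ~ "at least once": process before committing,
                         so a crash can re-deliver a record.
    """
    processed, committed = [], 0
    steps = ["commit", "process"] if commit_first else ["process", "commit"]
    try:
        for i, rec in enumerate(records):
            for step in steps:
                if step == "commit":
                    committed = i + 1          # offset saved
                else:
                    processed.append(rec)      # record handled
                if i == 1 and step == steps[0]:
                    raise Crash()              # die between the two steps
    except Crash:
        pass
    # After a restart, consumption resumes from the committed offset.
    redelivered = records[committed:]
    return processed, redelivered

# At-most-once: "b" was committed but never processed -- lost.
lost_p, lost_r = run(["a", "b", "c"], commit_first=True)
# At-least-once: "b" was processed but not committed -- replayed.
dup_p, dup_r = run(["a", "b", "c"], commit_first=False)
```

"Exactly once" is then the harder case: making the process and commit steps atomic, which in Kafka involves idempotent writes or transactions rather than clever ordering alone.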
In this slide deck we show how to implement a custom Kafka serializer for a producer. We then show how failover behaves under different settings of the broker/topic config min.insync.replicas and the producer config acks (0, 1, -1; i.e. none, leader, all).
The tutorial then shows how to implement Kafka producer batching and compression, and uses the producer metrics API to see how batching and compression improve throughput. It then covers using retries and timeouts, and tests that they work. It explains how the max in-flight messages and retry backoff settings work, and when to use in-flight messaging and when not to.
It goes on to show how to implement a ProducerInterceptor. Lastly, it shows how to implement a custom Kafka partitioner to build a priority queue for important records. Throughout the step-by-step examples, this tutorial shows how to use some of the Kafka tools to do replication verification and to inspect topic partition leadership status.
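The batching, compression, and retry behavior described above is driven by plain producer configuration. A sketch using Kafka's producer config key names; the values are illustrative starting points, not recommendations for every workload:

```python
# Batching/compression and retry settings (Kafka producer config key
# names; values are illustrative, tune for your workload).
batching_producer_config = {
    "batch.size": 65536,         # max bytes per per-partition batch
    "linger.ms": 10,             # wait up to 10 ms to fill a batch
    "compression.type": "lz4",   # whole batches are compressed together
    "retries": 2147483647,       # retry transient send failures
    "retry.backoff.ms": 100,     # pause between retries
    "max.in.flight.requests.per.connection": 5,  # >1 risks reordering
                                 # on retry unless idempotence is enabled
}
```

Larger batches and compression trade a little latency (linger.ms) for substantially higher throughput, which is exactly what the producer metrics API lets you measure.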
Kafka and Avro with Confluent Schema Registry, by Jean-Paul Azar
Covers how to use Kafka and Avro to send records with schema support and Avro serialization, and how to use Avro with Kafka and the Confluent Schema Registry.
Avro Tutorial - Records with Schema for Kafka and Hadoop, by Jean-Paul Azar
Covers how to use Avro to save records to disk. This can be used later to use Avro with Kafka Schema Registry. This provides background on Avro which gets used with Hadoop and Kafka.
Amazon Cassandra Basics & Guidelines for AWS/EC2/VPC/EBS, by Jean-Paul Azar
A comprehensive guide to deploying and configuring Cassandra on AWS/EC2. This guide is accurate and up to date as of 2017. There is a lot of information out there, and some of it is old or just wrong. Examples come from working code. This guide covers Ec2MultiRegionSnitch and EC2Snitch, broadcast address, using KMS to encrypt EBS, SSL config which is required for Ec2MultiRegionSnitch, Ansible, SSH Config, and setting up bastions so you can deploy your cluster on private subnets (NatGateway, how to setup routes, security groups, etc.). We also cover multi-region, multi-DC Cassandra deployments using VPN. We include the advantages and set up for enhanced networking and cluster placement groups in EC2.
We even cover how to setup and use the new EBS elastic volumes and how they benefit Cassandra deploys on AWS. We also cover how to setup Cassandra with systemd, and how to enable CloudWatch monitoring for Cassandra and the Linux OS for metrics and log aggregation.
From NAT setup to how to configure the GC and which EC2 instances to pick, this is the most comprehensive guide to deploying Cassandra on AWS/EC2/VPC.
Kafka Tutorial - introduction to the Kafka streaming platform
1. ™ Cloudurable
Cassandra / Kafka Support in EC2/AWS. Kafka Training, Kafka Consulting
Cassandra and Kafka Support on AWS/EC2
Introduction to Kafka
Support around Cassandra and Kafka running in EC2
2.
3. Kafka growing
Why Kafka? Kafka adoption is on the rise, but why?
4. Kafka growth exploding
❖ 1/3 of all Fortune 500 companies use Kafka
❖ Top ten travel companies, 7 of the top ten banks, 8 of the top ten insurance companies, 9 of the top ten telecom companies
❖ LinkedIn, Microsoft and Netflix process four-comma messages a day with Kafka (1,000,000,000,000)
❖ Real-time streams of data, used to collect big data or to do real-time analysis (or both)
5. Why is Kafka Needed?
❖ Real-time streaming data processed for real-time analytics
❖ Service calls: track every call; IoT sensors
❖ Apache Kafka is a fast, scalable, durable, and fault-tolerant publish-subscribe messaging system
❖ Kafka is often used instead of JMS and AMQP
❖ higher throughput, reliability and replication
❖ Kafka can work in combination with Apache Storm, Apache HBase and Apache Spark for real-time analysis and processing of streaming data
❖ Kafka brokers massive message streams for low-latency analysis in Hadoop or Spark
❖ Kafka Streams (subproject) can be used for real-time analytics
6. Kafka Use Cases
❖ Stream Processing
❖ Website Activity Tracking
❖ Metrics Collection and Monitoring
❖ Log Aggregation
❖ Real-time analytics
❖ Capture and ingest data into Spark / Hadoop
❖ CQRS, replay, error recovery
❖ Guaranteed distributed commit log for in-memory computing
7. Why is Kafka Popular?
❖ Great performance
❖ Stable, reliable durability; publish-subscribe and queue semantics (scales well with N consumer groups); replication; tunable consistency guarantees; ordering preserved at the shard (partition) level
❖ Works well with systems that have data streams to process, aggregate, transform and load into other stores
❖ Most important reason: Kafka's great performance (throughput and latency), obtained through great engineering
❖ Operational simplicity: easy to set up, use and reason about
8. Why is Kafka so fast?
❖ Zero copy: calls the OS kernel directly to move data fast, avoiding copies through application buffers
❖ Batch data in chunks: Kafka batches data end to end, from Producer to file system to Consumer. Batching enables more efficient data compression and reduces I/O latency
❖ Avoids random disk access: the immutable commit log means no slow disk seeking and no random I/O operations; the disk is accessed sequentially
❖ Horizontal scale: thousands of partitions for a single topic, spread over thousands of servers, let Kafka handle massive load
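Kafka's zero-copy path is the kernel's sendfile mechanism, exposed in Java as FileChannel.transferTo; the broker uses it to move log-segment bytes to consumer sockets without copying them through user space. A minimal, stdlib-only sketch of the same call (file-to-file rather than file-to-socket, purely for illustration):

```java
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.channels.FileChannel;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class ZeroCopySketch {
    // Copy channel-to-channel via transferTo: the kernel moves the bytes
    // directly; our code never holds them in an intermediate buffer.
    static long transfer(Path src, Path dst) throws IOException {
        try (FileChannel in = FileChannel.open(src, StandardOpenOption.READ);
             FileChannel out = FileChannel.open(dst, StandardOpenOption.CREATE,
                     StandardOpenOption.WRITE, StandardOpenOption.TRUNCATE_EXISTING)) {
            long transferred = 0;
            long size = in.size();
            // transferTo may move fewer bytes than requested, so loop
            while (transferred < size) {
                transferred += in.transferTo(transferred, size - transferred, out);
            }
            return transferred;
        }
    }

    public static void main(String[] args) throws IOException {
        Path src = Files.createTempFile("records", ".log");
        Files.write(src, "key=1,value=hello\n".getBytes(StandardCharsets.UTF_8));
        Path dst = Files.createTempFile("copy", ".log");
        System.out.println(transfer(src, dst)); // number of bytes moved
    }
}
```

In the broker the target channel is a socket, which is where the savings come from: no JVM heap copy, no user/kernel round trips per chunk.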
9. Kafka Introduction: Kafka messaging
10. What is Kafka?
❖ Distributed streaming platform
❖ Publish and subscribe to streams of records
❖ Fault-tolerant storage
❖ Process records as they occur
11. Kafka Usage
❖ Build real-time streaming data pipelines
❖ Enable in-memory microservices (actors, Akka, Vert.x, Qbit, RxJava)
❖ Build real-time streaming applications that react to streams
❖ Real-time data analytics
❖ Transform, react, aggregate, join real-time data flows
12. Kafka Use Cases
❖ Metrics / KPIs gathering
❖ Aggregate statistics from many sources
❖ Event Sourcing
❖ Used with microservices (in-memory) and actor systems
❖ Commit Log
❖ External commit log for distributed systems: replicate data between nodes, re-sync nodes to restore state
❖ Real-time data analytics, Stream Processing, Log Aggregation, Messaging, Click-stream tracking, Audit trail, etc.
13. Who uses Kafka?
❖ LinkedIn: activity data and operational metrics
❖ Twitter: uses it as part of Storm, their stream-processing infrastructure
❖ Square: Kafka as the bus to move all system events to various Square data centers (logs, custom events, metrics, and so on); outputs to Splunk, Graphite, and Esper-like alerting systems
❖ Spotify, Uber, Tumblr, Goldman Sachs, PayPal, Box, Cisco, CloudFlare, DataDog, LucidWorks, MailChimp, Netflix, etc.
14. Kafka: Topics, Producers, and Consumers
[Diagram: several Producers send records to a Topic in the Kafka Cluster, and several Consumers read records from it]
15. Kafka Fundamentals
❖ Records have a key, value and timestamp
❖ Topic: a stream of records ("/orders", "/user-signups"), a feed name
❖ Log: topic storage on disk
❖ Partition / Segments: parts of a Topic log
❖ Producer API: produce streams of records
❖ Consumer API: consume streams of records
❖ Broker: a Kafka server running in a cluster; the cluster consists of many processes on many servers
❖ ZooKeeper: coordinates brokers and consumers; a consistent file system for configuration information and leader election
16. Kafka Performance Details
❖ A Topic is like a feed name ("/shopping-cart-done", "/user-signups") that Producers write to and Consumers read from
❖ A Topic is associated with a log, which is a data structure on disk
❖ Producers append Records at the end of the Topic log
❖ Meanwhile, many Consumers read from Kafka at their own cadence
❖ Each Consumer (Consumer Group) tracks the offset from where it left off reading
❖ How can Kafka scale if multiple producers and consumers read/write to the same Kafka Topic log?
❖ Sequential writes to the filesystem are fast (700 MB/sec or more)
❖ Kafka scales writes and reads by sharding Topic logs into Partitions (parts of a Topic log)
❖ Topic logs can be split into multiple Partitions on different machines / different disks
❖ Multiple Producers can write to different Partitions of the same Topic
❖ Multiple Consumer Groups can read from different partitions efficiently
❖ Partitions can be distributed over different machines in a cluster
❖ The result: high performance with horizontal scalability and failover
17. Kafka Fundamentals 2
❖ Kafka uses ZooKeeper to form Kafka Brokers into a cluster
❖ Each node in a Kafka cluster is called a Kafka Broker
❖ Partitions can be replicated across multiple nodes for failover
❖ One node/partition replica is chosen as leader
❖ The leader handles all reads and writes of Records for the partition
❖ Writes to a partition are replicated to followers (node/partition pairs)
❖ A follower that is in sync is called an ISR (in-sync replica)
❖ If a partition leader fails, one ISR is chosen as the new leader
18. ZooKeeper does coordination for Kafka Consumers and the Kafka Cluster
[Diagram: Producers write to and Consumers read from a Topic hosted across several Kafka Brokers; ZooKeeper coordinates the brokers]
19. Replication of Kafka Partitions (1 of 2)
[Diagram: Partitions 0-4 spread over Kafka Brokers 0, 1 and 2; the leader replica (red) for Partition 0 is on Broker 0, with follower replicas (blue) on Brokers 1 and 2. 1) The client Producer writes a record to the leader; 2) the leader replicates the record to the followers]
A record is considered "committed" when all ISRs (in-sync replicas) for the partition have written it to their log. Only committed records are readable by consumers.
20. Replication of Kafka Partitions (2 of 2)
[Diagram: the same cluster; another partition can be owned by another leader on another Kafka broker. Leader replicas in red, followers in blue]
21. Kafka Extensions
❖ Streams API: transform, aggregate and process records from a stream and produce derivative streams
❖ Connector API: reusable producers and consumers (e.g., a stream of changes from DynamoDB)
22. Kafka Connectors and Streams
[Diagram: apps act as Producers into and Consumers out of the Kafka Cluster; Connectors link databases to the cluster; Streams applications read from and write back to the cluster]
23. Kafka Polyglot Clients / Wire Protocol
❖ Kafka communication between clients and servers uses a wire protocol over TCP
❖ The protocol is versioned
❖ Maintains backwards compatibility
❖ Many languages supported
24. Topics and Logs
❖ A Topic is a stream of records
❖ Topics are stored in logs
❖ A log is broken up into partitions and segments
❖ A Topic is a category or stream name
❖ Topics are pub/sub
❖ A topic can have zero or many consumer groups (subscribers)
❖ Topics are broken up into partitions for speed and size
25. Topic Partitions
❖ Topics are broken up into partitions
❖ The partition is usually decided by the key of the record
❖ The key of the record determines which partition it goes to
❖ Partitions are used to scale Kafka across many servers
❖ A record is sent to the correct partition by its key
❖ Partitions are used to facilitate parallel consumers
❖ Records are consumed in parallel, up to the number of partitions
26. Partition Log
❖ Order is maintained only within a single partition
❖ A partition is an ordered, immutable sequence of records that is continually appended to: a structured commit log
❖ Producers write at their own cadence, so the order of Records cannot be guaranteed across partitions
❖ A Producer picks the partition so that a Record/message goes to the same given partition based on its data
❖ Example: have all the events for a certain 'employeeId' go to the same partition
❖ If order within a partition is not needed, a 'round robin' partition strategy can be used so Records are evenly distributed across partitions
❖ Records in partitions are assigned a sequential id number called the offset
❖ The offset identifies each record within the partition
❖ Topic Partitions allow a Kafka log to scale beyond a size that will fit on a single server
❖ A topic partition must fit on the servers that host it, but a topic can span many partitions hosted on many servers
❖ Topic Partitions are the unit of parallelism: each consumer in a consumer group works on one partition at a time
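The offset mechanics above can be illustrated with a toy in-memory partition (purely illustrative: a real Kafka partition is a set of append-only segment files on disk, not a Java list):

```java
import java.util.ArrayList;
import java.util.List;

public class PartitionSketch {
    // An ordered, append-only sequence of records; the offset of a record
    // is simply its position in the sequence.
    private final List<String> log = new ArrayList<>();

    // Append returns the offset assigned to the new record
    long append(String record) {
        log.add(record);
        return log.size() - 1;
    }

    // A consumer group tracks its own offset and reads from there
    String read(long offset) {
        return log.get((int) offset);
    }

    public static void main(String[] args) {
        PartitionSketch p = new PartitionSketch();
        System.out.println(p.append("order-1")); // offset 0
        System.out.println(p.append("order-2")); // offset 1
        System.out.println(p.read(0));           // order-1
    }
}
```

Because records are never mutated or removed from the middle, the offset alone is enough for any number of consumer groups to read independently at their own pace.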
28. Kafka Record Retention
❖ The Kafka cluster retains all published records, discarded by policy:
❖ Time based: configurable retention period
❖ Size based
❖ Compaction
❖ e.g., a retention policy of three days, two weeks or a month
❖ A record is available for consumption until discarded by time, size or compaction
❖ Consumption speed is not impacted by size
29. Kafka Consumers / Producers
[Diagram: Producers append records at offsets 0-11 of Partition 0; Consumer Group A and Consumer Group B each read from their own position in the log]
Consumers remember the offset where they left off. Consumer groups each have their own offset.
30. Kafka Partition Distribution
❖ Each partition has a leader server and zero or more follower servers
❖ The leader handles all read and write requests for the partition
❖ Followers replicate the leader and take over if the leader dies
❖ Partitions are also used for parallel consumer handling within a group
❖ Partitions of the log are distributed over the servers in the Kafka cluster, with each server handling data and requests for its share of partitions
❖ Each partition can be replicated across a configurable number of Kafka servers
❖ Replication is used for fault tolerance
31. Kafka Producers
❖ Producers send records to topics
❖ The producer picks which partition to send each record to, per topic
❖ Can be done round-robin
❖ Can be based on priority
❖ Typically based on the key of the record
❖ The Kafka default partitioner for Java uses the hash of the key to choose the partition, or a round-robin strategy if there is no key
❖ Important: the Producer picks the partition
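A sketch of the key-to-partition mapping described above. The real Java DefaultPartitioner hashes the key bytes with murmur2; this illustration substitutes String.hashCode(), so the actual partition numbers will differ from a real broker's, but the property that matters (same key, same partition) holds:

```java
public class PartitionerSketch {
    // Hypothetical stand-in for the default partitioner's hashing step
    // (real clients use murmur2 over the serialized key bytes).
    static int partitionForKey(String key, int numPartitions) {
        // Mask the sign bit so the modulo result is non-negative
        return (key.hashCode() & 0x7fffffff) % numPartitions;
    }

    public static void main(String[] args) {
        // The same key always maps to the same partition, which is what
        // preserves per-key ordering within one partition.
        System.out.println(partitionForKey("employeeId-42", 13));
        System.out.println(partitionForKey("employeeId-42", 13));
    }
}
```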
32. Kafka Consumer Groups
❖ Consumers are grouped into a Consumer Group
❖ A consumer group has a unique id
❖ Each consumer group is a subscriber
❖ Each consumer group maintains its own offset
❖ Multiple subscribers = multiple consumer groups
❖ A Record is delivered to exactly one Consumer in a Consumer Group
❖ Within a group, only one consumer gets a given record
❖ Consumers in a Consumer Group load-balance record consumption
33. Kafka Consumer Groups 2
❖ How does Kafka divide up a topic so multiple Consumers in a consumer group can process it?
❖ Kafka makes you group consumers into a consumer group with a group id
❖ Consumers with the same group id belong to the same Consumer Group
❖ One Kafka broker becomes the group coordinator for a Consumer Group
❖ it assigns partitions when new members arrive (older clients talked directly to ZooKeeper; now the broker does the coordination)
❖ and reassigns partitions when group members leave or the topic changes (config / metadata change)
❖ When a Consumer group is created, its offset is set according to the reset policy of the topic
34. Kafka Consumer Groups 3
❖ If a Consumer fails before sending commit offset XXX to the Kafka broker:
❖ a different Consumer can continue from the last committed offset
❖ some Kafka records could be reprocessed (at-least-once behavior)
❖ The "log end offset" is the offset of the last record written to a log partition, and where Producers write next
❖ The "high watermark" is the offset of the last record that was successfully replicated to all of the partition's followers
❖ A Consumer only reads up to the "high watermark"; it can't read un-replicated data
❖ Only a single Consumer from the same Consumer Group can access a single Partition
❖ If the Consumer Group count exceeds the Partition count:
❖ extra Consumers remain idle; they can be used for failover
❖ If there are more Partitions than Consumer Group instances:
❖ some Consumers will read from more than one partition
35. A 2-server Kafka cluster hosting six partitions (P0-P5)
[Diagram: Server 1 hosts P2, P3, P4; Server 2 hosts P0, P1, P5. Consumer Group A and Consumer Group B each contain three consumers reading from the partitions]
36. Kafka Consumer Consumption
❖ Kafka consumption divides the partitions over the consumer instances
❖ Each Consumer is the exclusive consumer of a "fair share" of partitions
❖ Consumer membership in a group is handled dynamically by the Kafka protocol
❖ If new Consumers join the Consumer group, they get a share of the partitions
❖ If a Consumer dies, its partitions are split among the remaining live Consumers in the group
❖ Order is only guaranteed within a single partition
❖ Since records are typically stored by key into a partition, per-partition order is sufficient for most use cases
37. Kafka vs. JMS Messaging
❖ Kafka is a bit like both Queues and Topics in JMS
❖ Kafka is a queue per consumer in a consumer group, so it load-balances like a JMS queue
❖ Kafka is a topic (pub/sub) in that Consumer Groups act like subscriptions
❖ broadcast to multiple consumer groups
❖ By design, Kafka is better suited for scale due to the partitioned topic log
❖ Also, by moving the position in the log to the client/consumer side of the equation instead of the broker, less tracking is required by the Broker
❖ Kafka handles parallel consumers better
38. Kafka Scalable Message Storage
❖ Kafka acts as a good storage system for records/messages
❖ Records written to Kafka topics are persisted to disk and replicated to other servers for fault tolerance
❖ Kafka Producers can wait on acknowledgement
❖ a write is not complete until it is fully replicated
❖ Kafka's disk structures scale well
❖ writing in large streaming batches is fast
❖ Clients/Consumers control their read position (offset)
❖ Kafka acts like a high-speed file system for commit log storage and replication
39. Kafka Stream Processing
❖ Kafka enables real-time processing of streams
❖ Kafka supports stream processors
❖ A stream processor takes continual streams of records from input topics, performs some processing, transformation or aggregation on the input, and produces one or more output streams
❖ A video player app might take in input streams of videos watched and videos paused, and output a stream of user preferences; it can then gear new video recommendations to recent user activity, or aggregate the activity of many users to see which new videos are hot
❖ The Kafka Streams API solves hard problems with out-of-order records, aggregating across multiple streams, joining data from multiple streams, allowing for stateful computations, and more
❖ The Streams API builds on core Kafka primitives and has a life of its own
40. Using Kafka: Single Node
41. Run Kafka
❖ Run ZooKeeper
❖ Run Kafka Server/Broker
❖ Create Kafka Topic
❖ Run producer
❖ Run consumer
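The command screenshots on the following slides did not survive extraction; the standard quickstart commands from the Kafka distribution cover the same steps (run from the Kafka install directory; the topic name "my-topic" is illustrative, and the flags match the 0.10/0.11-era CLI this deck is from):

```shell
# 1. Start ZooKeeper (uses the sample config shipped with Kafka)
bin/zookeeper-server-start.sh config/zookeeper.properties

# 2. Start a Kafka broker
bin/kafka-server-start.sh config/server.properties

# 3. Create a topic with one partition and no replication
bin/kafka-topics.sh --create --zookeeper localhost:2181 \
    --replication-factor 1 --partitions 1 --topic my-topic

# 4. Run a console producer (each line you type becomes a record)
bin/kafka-console-producer.sh --broker-list localhost:9092 --topic my-topic

# 5. Run a console consumer reading from the beginning of the log
bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 \
    --topic my-topic --from-beginning
```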
42. Run ZooKeeper
43. Run Kafka Server
44. Create Kafka Topic
45. Kafka Producer
46. Kafka Consumer
47. Running Kafka Producer and Consumer
48. Lab 1-A: Use Kafka
Use Kafka to send and receive messages, using the single-server version of Kafka.
49. Using a Kafka Cluster
50. Running Many Nodes
❖ Modify the properties files
❖ Change the port
❖ Change the Kafka log location
❖ Start up many Kafka server instances
❖ Create a replicated topic
51. Leave everything from before running
52. Create two new server.properties files
❖ Copy the existing server.properties to server-1.properties and server-2.properties
❖ Change server-1.properties to use port 9093, broker id 1, and log.dirs "/tmp/kafka-logs-1"
❖ Change server-2.properties to use port 9094, broker id 2, and log.dirs "/tmp/kafka-logs-2"
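A sketch of what the edited server-1.properties ends up containing. The property keys are the real Kafka broker settings; note that depending on Kafka version, the port is set either via the older port property or via listeners as shown here:

```properties
# server-1.properties
# (server-2.properties is analogous: port 9094, broker id 2, kafka-logs-2)
broker.id=1
listeners=PLAINTEXT://:9093
log.dirs=/tmp/kafka-logs-1
```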
53. server-x.properties
54. Start second and third servers
55. Create Kafka replicated topic my-failsafe-topic
56. Start Kafka consumer and producer
57. Kafka consumer and producer running
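The commands behind slides 54-56, reconstructed from the standard Kafka CLI of that era (the exact flags may differ from the deck's lost screenshots):

```shell
# Start the second and third brokers with the new config files
bin/kafka-server-start.sh config/server-1.properties &
bin/kafka-server-start.sh config/server-2.properties &

# Create a topic replicated across all three brokers
bin/kafka-topics.sh --create --zookeeper localhost:2181 \
    --replication-factor 3 --partitions 1 --topic my-failsafe-topic

# Console producer and consumer against the replicated topic
bin/kafka-console-producer.sh \
    --broker-list localhost:9092,localhost:9093,localhost:9094 \
    --topic my-failsafe-topic
bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 \
    --topic my-failsafe-topic --from-beginning
```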
58. Use Kafka Describe Topic
❖ The leader is broker 0
❖ There is only one partition
❖ There are three in-sync replicas (ISRs)
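A hedged reconstruction of the describe command and the shape of the output the slide annotates (exact values come from your own cluster):

```shell
bin/kafka-topics.sh --describe --zookeeper localhost:2181 \
    --topic my-failsafe-topic
# Example output shape:
# Topic: my-failsafe-topic  PartitionCount: 1  ReplicationFactor: 3  Configs:
#   Topic: my-failsafe-topic  Partition: 0  Leader: 0  Replicas: 0,1,2  Isr: 0,1,2
```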
59. Test failover by killing the 1st server
Use Kafka topic describe to see that a new leader was elected: the new leader is broker 2!
60. Lab 2-A: Use Kafka
Use a Kafka Cluster to replicate a Kafka topic log; send and receive messages.
61. Kafka Consumers and Producers
Working with producers and consumers: a step-by-step first example.
62. Objectives: create a Producer and Consumer example
❖ Create a simple example with a Kafka Consumer and a Kafka Producer
❖ Create a new replicated Kafka topic
❖ Create a Producer that uses the topic to send records
❖ Send records with the Kafka Producer
❖ Create a Consumer that uses the topic to receive messages
❖ Process messages from Kafka with the Consumer
63. Create Replicated Kafka Topic
64. Build script
™
Create Kafka Producer to send
records
❖ Specify bootstrap servers
❖ Specify client.id
❖ Specify Record Key serializer
❖ Specify Record Value serializer
66. Cassandra / Kafka Support in EC2/AWS. Kafka Training, Kafka
Consulting
™
Common Kafka imports and
constants
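The configuration steps above can be sketched with java.util.Properties. The property names are the real Kafka producer config keys, and the serializer class names come from the kafka-clients library; the client id and server list are illustrative:

```java
import java.util.Properties;

public class ProducerConfigSketch {
    static Properties producerProps() {
        Properties props = new Properties();
        // Brokers used for the initial connection to the cluster
        props.put("bootstrap.servers",
                "localhost:9092,localhost:9093,localhost:9094");
        // Logical name for this client, shown in broker logs and metrics
        props.put("client.id", "KafkaExampleProducer");
        // How record keys and values are turned into bytes on the wire
        props.put("key.serializer",
                "org.apache.kafka.common.serialization.LongSerializer");
        props.put("value.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");
        return props;
    }

    public static void main(String[] args) {
        // With kafka-clients on the classpath you would then do:
        // Producer<Long, String> producer = new KafkaProducer<>(producerProps());
        System.out.println(producerProps().getProperty("client.id"));
    }
}
```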
67. Create Kafka Producer to send records
68. Send sync records with Kafka Producer
The response RecordMetadata has the 'partition' where the record was written and the 'offset' of the record.
69. Send async records with Kafka Producer
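The sync and async send patterns from these two slides, sketched against the kafka-clients API. This fragment assumes a KafkaProducer<Long, String> named producer built from properties like those on slide 65, plus the kafka-clients dependency and a running broker; the topic name is illustrative:

```java
ProducerRecord<Long, String> record =
        new ProducerRecord<>("my-example-topic", 1L, "value-1");

// Sync: block on the returned future; the RecordMetadata
// carries the partition and offset the record was written to
RecordMetadata metadata = producer.send(record).get();
System.out.printf("partition=%d, offset=%d%n",
        metadata.partition(), metadata.offset());

// Async: pass a callback instead of blocking; the callback
// runs when the broker acknowledges (or rejects) the write
producer.send(record, (meta, exception) -> {
    if (exception != null) {
        exception.printStackTrace();
    } else {
        System.out.printf("partition=%d, offset=%d%n",
                meta.partition(), meta.offset());
    }
});
```

The async form keeps the producer pipeline full, which is what batching and compression rely on for throughput.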
70. Create Consumer using Topic to Receive Records
❖ Specify bootstrap servers
❖ Specify client.id
❖ Specify Record Key deserializer
❖ Specify Record Value deserializer
❖ Specify Consumer Group
❖ Subscribe to Topic
71. Create Consumer using Topic to Receive Records
72. Process messages from Kafka with Consumer
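A sketch of the consumer described above against the kafka-clients API of that era (requires the kafka-clients dependency and a running broker; the topic and group names are illustrative):

```java
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.Consumer;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class ConsumerSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("group.id", "KafkaExampleConsumer"); // the consumer group
        props.put("key.deserializer",
                "org.apache.kafka.common.serialization.LongDeserializer");
        props.put("value.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");

        try (Consumer<Long, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("my-example-topic"));
            while (true) {
                // Blocks up to 1000 ms waiting for records at the current offset
                ConsumerRecords<Long, String> records = consumer.poll(1000);
                for (ConsumerRecord<Long, String> record : records) {
                    System.out.printf("offset=%d key=%d value=%s%n",
                            record.offset(), record.key(), record.value());
                }
            }
        }
    }
}
```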
73. Consumer poll()
❖ The poll() method returns fetched records based on the current partition offset
❖ It is a blocking method, waiting up to a specified time if no records are available
❖ When/if records are available, the method returns straight away
❖ Control the maximum records returned by poll() with props.put(ConsumerConfig.MAX_POLL_RECORDS_CONFIG, 100);
❖ poll() is not meant to be called from multiple threads
74. Running both Consumer and Producer
75. Java Kafka simple example recap
❖ Created a simple example with a Kafka Consumer and a Kafka Producer
❖ Created a new replicated Kafka topic
❖ Created a Producer that uses the topic to send records
❖ Sent records with the Kafka Producer
❖ Created a Consumer that uses the topic to receive messages
❖ Processed records from Kafka with the Consumer
76. Kafka Design: a design discussion of Kafka
77. Kafka Design Motivation
❖ Kafka: a unified platform for handling real-time data feeds/streams
❖ High throughput to support high-volume event streams like log aggregation
❖ Must support real-time analytics
❖ real-time processing of streams to create new, derived streams
❖ this inspired the partitioning and consumer model
❖ Handle large data backlogs: periodic data loads from offline systems
❖ Low-latency delivery to handle traditional messaging use cases
❖ Scale writes and reads via partitioned, distributed commit logs
❖ Fault tolerance in the face of machine failures
❖ The Kafka design is more like a database transaction log than a traditional messaging system
78. Persistence: Embrace the Filesystem
❖ Kafka relies heavily on the filesystem for storing and caching messages/records
❖ Hard drive performance: sequential writes are fast
❖ a JBOD configuration with six 7200 RPM SATA drives in a RAID-5 array does about 600 MB/sec
❖ Sequential reads and writes are predictable and heavily optimized by operating systems
❖ Sequential disk access can be faster than random memory access and SSD
❖ Operating systems use available main memory for disk caching
❖ JVM GC overhead is high for caching objects, whilst OS file caches are almost free
❖ Using the filesystem and relying on the page cache is preferable to maintaining an in-memory cache in the JVM
❖ By relying on the OS page cache, Kafka greatly simplifies its code for cache coherence
❖ Since Kafka disk usage tends toward sequential reads, the OS read-ahead cache keeps pre-populating the page cache
Cassandra, Netty, and Varnish use similar techniques. The above is explained well in the Kafka documentation, and there is a more entertaining explanation on the Varnish site.
79. Long Sequential Disk Access
❖ Like Cassandra, LevelDB, RocksDB, and others, Kafka uses a form of log-structured storage and compaction instead of an on-disk mutable BTree
❖ Kafka uses tombstones instead of deleting records right away
❖ Since disks these days have somewhat unlimited space and are very fast, Kafka can provide features not usually found in a messaging system, like holding on to old messages for a really long time
❖ This flexibility allows for interesting applications of Kafka
80. Kafka Compression
❖ Kafka provides end-to-end batch compression
❖ The bottleneck is not always CPU or disk, but often network bandwidth
❖ especially in cloud and virtualized environments
❖ especially when talking datacenter to datacenter or over a WAN
❖ Instead of compressing records one at a time…
❖ Kafka enables efficient compression of a whole batch / message set
❖ A message batch can be compressed and sent to the Kafka broker/server in one go
❖ The message batch is written in compressed form in the log partition
❖ records don't get decompressed until they reach the consumer
❖ GZIP, Snappy and LZ4 compression codecs are supported
Read more in the Kafka documentation on end-to-end compression.
81. Kafka Producer Load Balancing
❖ The Producer sends records directly to the Kafka broker that is the partition leader
❖ The Producer asks a Kafka broker for metadata about which broker has which topic partition leaders, so no routing layer is needed
❖ The Producer client controls which partition it publishes messages to
❖ Partitioning can be done by key, round-robin, or using a custom semantic partitioner
82. Kafka Producer Record Batching
❖ Kafka producers support record batching
❖ Batching is good for efficient compression and network I/O throughput
❖ Batching can be configured by the size of the batch in bytes
❖ Batches can be auto-flushed based on time
❖ See the code example on the next slide
❖ Batching allows accumulation of more bytes to send, which equates to fewer, larger I/O operations on the Kafka Brokers, and increases compression efficiency
❖ Buffering is configurable and lets you trade additional latency for better throughput
❖ or, in the case of a heavily used system, it can mean both better average throughput and reduced overall latency
QBit, a microservice library, uses message batching in an identical fashion to Kafka to send messages over WebSocket between nodes and from client to QBit server.
83. More Producer Settings for Performance
For higher throughput, the Kafka Producer allows buffering based on time and size. Multiple records can be sent as batches with fewer network requests, which speeds up throughput drastically.
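The settings screenshot did not survive extraction; a sketch of the producer properties that control batching (the keys are the real Kafka producer config names; the values are illustrative, not the deck's):

```java
import java.util.Properties;

public class ProducerBatchingSketch {
    static Properties batchingProps() {
        Properties props = new Properties();
        // Wait up to 50 ms to accumulate a batch before sending (time-based flush)
        props.put("linger.ms", "50");
        // Target batch size per partition, in bytes (size-based flush)
        props.put("batch.size", Integer.toString(16 * 1024));
        // Compress whole batches end to end
        props.put("compression.type", "snappy");
        // Total memory the producer may use for buffering unsent records
        props.put("buffer.memory", Integer.toString(32 * 1024 * 1024));
        return props;
    }

    public static void main(String[] args) {
        System.out.println(batchingProps().getProperty("compression.type"));
    }
}
```

Raising linger.ms and batch.size is the latency-for-throughput tradeoff described on the previous slide.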
84. Stay tuned
❖ More to come
85. References
❖ Learning Apache Kafka, Second Edition, Nishant Garg, 2015, ISBN 978-1784393090, Packt Publishing
❖ Apache Kafka Cookbook, 1st Edition (Kindle Edition), Saurabh Minni, 2015, ISBN 978-1785882449, Packt Publishing
❖ "Kafka Streams for Stream Processing: A few words about how Kafka works", Serban Balamaci, 2017, blog: Plain Ol' Java
❖ Kafka official documentation, 2017
❖ "Why do we need Kafka?", Quora
❖ "Why is Kafka Popular?", Quora
❖ "Why is Kafka so Fast?", Stack Overflow
❖ "Kafka growth exploding", TechRepublic