Kafka is well known for high-throughput ingestion. However, to get the best latency characteristics without compromising throughput and durability, we need to tune Kafka. In this talk, we share our experiences in achieving the optimal combination of latency, throughput, and durability for different scenarios.
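As an illustration of the tradeoffs such tuning involves, here is a minimal sketch of producer tuning profiles. The config keys (`acks`, `linger.ms`, `batch.size`, `compression.type`, `enable.idempotence`) are standard Kafka producer properties, but the groupings and values are illustrative starting points, not recommendations from the talk.

```python
# Sketch: producer settings that trade off latency, throughput, and durability.
# The keys are standard Kafka producer properties; values are illustrative.

PROFILES = {
    "low_latency": {
        "linger.ms": 0,           # send immediately, no batching delay
        "batch.size": 16384,      # small batches
        "compression.type": "none",
        "acks": "1",              # leader acknowledgment only
    },
    "high_throughput": {
        "linger.ms": 20,          # wait briefly to fill larger batches
        "batch.size": 131072,
        "compression.type": "lz4",
        "acks": "1",
    },
    "high_durability": {
        "acks": "all",                 # wait for all in-sync replicas
        "enable.idempotence": "true",  # avoid duplicates on retry
        "linger.ms": 5,
    },
}

def producer_config(profile: str) -> dict:
    """Return a copy of the chosen tuning profile."""
    return dict(PROFILES[profile])
```

A real deployment would start from one of these profiles and then adjust based on measured latency and throughput.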
Getting Started with Confluent Schema Registry (confluent)
Getting started with Confluent Schema Registry, Patrick Druley, Senior Solutions Engineer, Confluent
Meetup link: https://www.meetup.com/Cleveland-Kafka/events/272787313/
Apache Kafka is becoming the message bus for transferring huge volumes of data from various sources into Hadoop.
It's also enabling many real-time system frameworks and use cases.
Managing and building clients around Apache Kafka can be challenging. In this talk, we will go through the best practices for deploying Apache Kafka in production: how to secure a Kafka cluster, how to pick topic partitions, upgrading to newer versions, and migrating to the new Kafka producer and consumer APIs. We will also talk about the best practices involved in running producers and consumers.
In the Kafka 0.9 release, we added SSL wire encryption, SASL/Kerberos for user authentication, and pluggable authorization. Kafka now allows authentication of users and access control over who can read from and write to a Kafka topic. Apache Ranger also uses the pluggable authorization mechanism to centralize security for Kafka and other Hadoop ecosystem projects.
We will showcase an open-sourced Kafka REST API and an admin UI that help users create topics, reassign partitions, issue Kafka ACLs, and monitor consumer offsets.
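To make the ACL idea concrete, here is a toy Python model of topic-level access control. The `AclStore` class and its methods are invented for this sketch; real ACLs are managed through Kafka's authorizer and admin tooling.

```python
# Toy model of Kafka-style topic ACLs: each entry grants a principal an
# operation on a topic. Class and method names are invented for this sketch.

class AclStore:
    def __init__(self):
        self._grants = set()  # (principal, operation, topic) triples

    def allow(self, principal, operation, topic):
        """Record a grant, e.g. allow("User:alice", "Read", "payments")."""
        self._grants.add((principal, operation, topic))

    def is_authorized(self, principal, operation, topic):
        """Deny by default: only explicitly granted triples pass."""
        return (principal, operation, topic) in self._grants

acls = AclStore()
acls.allow("User:alice", "Read", "payments")
acls.allow("User:alice", "Write", "payments")
```

The deny-by-default behavior mirrors how an authorizer rejects any request without a matching grant.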
Jay Kreps is a Principal Staff Engineer at LinkedIn where he is the lead architect for online data infrastructure. He is among the original authors of several open source projects including a distributed key-value store called Project Voldemort, a messaging system called Kafka, and a stream processing system called Samza. This talk gives an introduction to Apache Kafka, a distributed messaging system. It will cover both how Kafka works, as well as how it is used at LinkedIn for log aggregation, messaging, ETL, and real-time stream processing.
In the last few years, Apache Kafka has been used extensively in enterprises for real-time data collecting, delivering, and processing. In this presentation, Jun Rao, Co-founder, Confluent, gives a deep dive on some of the key internals that help make Kafka popular.
- Companies like LinkedIn are now sending more than 1 trillion messages per day to Kafka. Learn about the underlying design in Kafka that leads to such high throughput.
- Many companies (e.g., financial institutions) are now storing mission critical data in Kafka. Learn how Kafka supports high availability and durability through its built-in replication mechanism.
- One common use case of Kafka is for propagating updatable database records. Learn how a unique feature called compaction in Apache Kafka is designed to solve this kind of problem more naturally.
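The compaction idea in the last point can be sketched in a few lines of Python: keep only the latest value per key. This is a simplified model of what the compacted log retains, not Kafka's actual cleaner implementation.

```python
# Simplified model of log compaction: for each key, only the latest value
# survives, so the compacted log acts like a changelog snapshot of an
# updatable table. Not Kafka's actual cleaner implementation.

def compact(log):
    """log: list of (key, value) records in offset order.
    Returns the surviving records, keeping the last value per key."""
    latest = {}
    for key, value in log:
        latest[key] = value  # later offsets overwrite earlier ones
    return list(latest.items())

records = [("user1", "a"), ("user2", "b"), ("user1", "c")]
# compaction keeps user1 -> "c" and user2 -> "b"
```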
Performance Tuning RocksDB for Kafka Streams' State Stores (Dhruba Borthakur, ...) (confluent)
RocksDB is the default state store for Kafka Streams. In this talk, we will discuss how to improve single node performance of the state store by tuning RocksDB and how to efficiently identify issues in the setup. We start with a short description of the RocksDB architecture. We discuss how Kafka Streams restores the state stores from Kafka by leveraging RocksDB features for bulk loading of data. We give examples of hand-tuning the RocksDB state stores based on Kafka Streams metrics and RocksDB’s metrics. At the end, we dive into a few RocksDB command line utilities that allow you to debug your setup and dump data from a state store. We illustrate the usage of the utilities with a few real-life use cases. The key takeaway from the session is the ability to understand the internal details of the default state store in Kafka Streams so that engineers can fine-tune their performance for different varieties of workloads and operate the state stores in a more robust manner.
Why My Streaming Job is Slow - Profiling and Optimizing Kafka Streams Apps (L...) (confluent)
Kafka Streams performance monitoring and tuning is important for many reasons, including identifying bottlenecks, achieving greater throughput, and capacity planning. In this talk we’ll share the techniques we used to achieve greater performance and save on compute, storage, and cost. We’ll cover:
- Identifying design bottlenecks by reviewing logs, metrics, and serdes
- State store access patterns, design, and optimization
- Using profiling tools such as JMX, YourKit, etc.
- Performance tuning of Kafka and Kafka Streams configuration and properties
- JVM optimization for correct heap size and garbage collection strategies
- Functional programming and imperative programming trade-offs
A brief introduction to Apache Kafka and its usage as a platform for streaming data. It will introduce some of the newer components of Kafka that help make this possible, including Kafka Connect, a framework for capturing continuous data streams, and Kafka Streams, a lightweight stream processing library.
Like many other messaging systems, Kafka puts a limit on the maximum message size. A user will fail to produce a message if it is too large. This limit makes a lot of sense, and people usually send to Kafka a reference link that points to a large message stored somewhere else. However, in some scenarios it would be good to be able to send large messages through Kafka without external storage. At LinkedIn, we have a few use cases that can benefit from such a feature. This talk covers our solution for sending large messages through Kafka without additional storage.
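One general way to move a large payload through a size-limited log is to segment it on the producer side and reassemble it on the consumer side. The sketch below illustrates that idea only; it is not LinkedIn's actual implementation, and the names and sizes are assumptions.

```python
# Sketch of the general idea behind sending a large message without
# external storage: split the payload into segments that each fit under
# the broker limit, and reassemble them on the consumer side.
# Illustrative only; not LinkedIn's actual implementation.

import uuid

SEGMENT_BYTES = 900_000  # headroom under a ~1 MB message-size limit

def split_message(payload: bytes):
    """Return segment records carrying a shared message id, the segment
    index, and the total segment count."""
    msg_id = str(uuid.uuid4())
    chunks = [payload[i:i + SEGMENT_BYTES]
              for i in range(0, len(payload), SEGMENT_BYTES)] or [b""]
    return [{"id": msg_id, "seq": i, "total": len(chunks), "data": c}
            for i, c in enumerate(chunks)]

def reassemble(segments):
    """Consumer side: order segments by sequence number and join them."""
    segments = sorted(segments, key=lambda s: s["seq"])
    assert len(segments) == segments[0]["total"], "missing segments"
    return b"".join(s["data"] for s in segments)
```

Keying all segments of one message identically would keep them in order on a single partition.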
Presentation at Strata Data Conference 2018, New York
The controller is the brain of Apache Kafka. A big part of what the controller does is to maintain the consistency of the replicas and determine which replica can be used to serve the clients, especially during individual broker failure.
Jun Rao outlines the main data flow in the controller—in particular, when a broker fails, how the controller automatically promotes another replica as the leader to serve the clients, and when a broker is started, how the controller resumes the replication pipeline in the restarted broker.
Jun then describes recent improvements to the controller that allow it to handle certain edge cases correctly and increase its performance, which allows for more partitions in a Kafka cluster.
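The failover step described above can be modeled in a few lines: on broker failure, the controller removes the broker from the in-sync replica set and, if it was the leader, promotes a surviving in-sync replica. This toy sketch ignores controller epochs and metadata propagation.

```python
# Toy model of the controller's failover step: drop the failed broker
# from the ISR and promote a surviving in-sync replica if the leader died.
# Real election involves ZooKeeper/KRaft metadata and leader epochs.

def elect_leader(partition, failed_broker):
    """partition: {'leader': broker_id, 'isr': [broker_id, ...]}.
    Mutates the partition state; returns the new leader (None if no
    in-sync replica survives)."""
    isr = [b for b in partition["isr"] if b != failed_broker]
    partition["isr"] = isr
    if partition["leader"] == failed_broker:
        partition["leader"] = isr[0] if isr else None
    return partition["leader"]

p = {"leader": 1, "isr": [1, 2, 3]}
```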
Flink Forward San Francisco 2022.
Resource Elasticity is a frequently requested feature in Apache Flink: Users want to be able to easily adjust their clusters to changing workloads for resource efficiency and cost saving reasons. In Flink 1.13, the initial implementation of Reactive Mode was introduced, later releases added more improvements to make the feature production ready. In this talk, we’ll explain scenarios to deploy Reactive Mode to various environments to achieve autoscaling and resource elasticity. We’ll discuss the constraints to consider when planning to use this feature, and also potential improvements from the Flink roadmap. For those interested in the internals of Flink, we’ll also briefly explain how the feature is implemented, and if time permits, conclude with a short demo.
by Robert Metzger
Building distributed systems is challenging. Luckily, Apache Kafka provides a powerful toolkit for putting together big services as a set of scalable, decoupled components. In this talk, I'll describe some of the design tradeoffs when building microservices, and how Kafka's powerful abstractions can help. I'll also talk a little bit about what the community has been up to with Kafka Streams, Kafka Connect, and exactly-once semantics.
Presentation by Colin McCabe, Confluent, Big Data Day LA
Netflix recently changed its data pipeline architecture to use Kafka as the gateway for data collection for all applications, processing hundreds of billions of messages daily. This session will discuss the motivation for moving to Kafka, the architecture, and the improvements we have added to make Kafka work in AWS. We will also share the lessons learned and future plans.
Understanding Metrics for Apache Kafka Monitoring and Optimization Approaches (SANG WON PARK)
As Apache Kafka plays a growing and increasingly important role in big data architectures, concerns about its performance are growing as well.
While working on various projects, I studied the metrics needed to monitor Apache Kafka and summarized the configuration settings used to optimize it.
[Understanding Metrics for Apache Kafka Monitoring and Optimization Approaches]
Covers the metrics needed for Apache Kafka performance monitoring and summarizes how to optimize performance from four perspectives (throughput, latency, durability, and availability). For each of the three modules that make up Kafka (Producer, Broker, Consumer), performance optimization …
[Understanding Metrics for Apache Kafka Monitoring]
To monitor the state of Apache Kafka, you need to look at the metrics produced by four sources: the system (OS), producers, brokers, and consumers.
This article summarizes producer/broker/consumer metrics, focusing on the JMX metrics exposed through the JVM.
It does not cover every metric; it focuses on the ones I found meaningful from my own perspective.
[Optimizing Apache Kafka Performance Configuration]
Performance goals are divided into four categories (throughput, latency, durability, and availability), and the article summarizes which Kafka configuration settings to adjust, and how, for each goal.
After applying the tuned parameters, you should run performance tests and monitor the resulting metrics, optimizing until the setup fits your current workload.
Kafka's basic terminology, its architecture, its protocol, and how it works.
Kafka at scale: its caveats, its guarantees, and the use cases it offers.
How we use it @ZaprMediaLabs.
ksqlDB is a streaming SQL engine that enables stream processing on top of Apache Kafka. ksqlDB is based on Kafka Streams and provides capabilities for consuming messages from Kafka, analysing these messages in near real time with a SQL-like language, and producing results back to a Kafka topic. With that, not a single line of Java code has to be written, and you can reuse your SQL know-how. This lowers the bar for starting with stream processing significantly.
ksqlDB offers powerful stream processing capabilities, such as joins, aggregations, time windows, and support for event time. In this talk I will present how ksqlDB integrates with the Kafka ecosystem and demonstrate how easy it is to implement a solution using ksqlDB for the most part. This will be done in a live demo on a fictitious IoT example.
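As a rough illustration of what a ksqlDB tumbling-window aggregation computes (e.g. a per-key count over one-minute windows), here is a plain-Python simulation; the window size, event timestamps, and keys are illustrative.

```python
# Plain-Python simulation of a tumbling-window count per key, the kind of
# aggregation ksqlDB expresses in SQL. Window size and events are examples.

from collections import defaultdict

WINDOW_MS = 60_000  # one-minute tumbling windows

def tumbling_counts(events):
    """events: iterable of (timestamp_ms, key).
    Returns {(window_start_ms, key): count}."""
    counts = defaultdict(int)
    for ts, key in events:
        window_start = (ts // WINDOW_MS) * WINDOW_MS
        counts[(window_start, key)] += 1
    return dict(counts)

events = [(1_000, "sensor-a"), (59_999, "sensor-a"), (60_001, "sensor-a")]
# the first two events fall in the window starting at 0,
# the third in the window starting at 60000
```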
Exactly-Once Financial Data Processing at Scale with Flink and Pinot (Flink Forward)
Flink Forward San Francisco 2022.
At Stripe we have created a complete end-to-end exactly-once processing pipeline to process financial data at scale, by combining the exactly-once power of Flink, Kafka, and Pinot. The pipeline provides an exactly-once guarantee, end-to-end latency within a minute, deduplication against hundreds of billions of keys, and sub-second query latency against the whole dataset with trillions of rows. In this session we will discuss the technical challenges of designing, optimizing, and operating the whole pipeline, including Flink, Kafka, and Pinot. We will also share our lessons learned and the benefits gained from exactly-once processing.
by Xiang Zhang & Pratyush Sharma & Xiaoman Dong
Kafka Streams State Stores Being Persistent (confluent)
Being Persistent: A Look Into Kafka Streams State Stores, Neil Buesing, Principal Solutions Architect, Rill Data
Meetup link: https://www.meetup.com/TwinCities-Apache-Kafka/events/284002062/
Apache Kafka is an open-source message broker project developed by the Apache Software Foundation written in Scala. The project aims to provide a unified, high-throughput, low-latency platform for handling real-time data feeds.
HBaseCon2017: Improving HBase Availability in a Multi-tenant Environment (HBaseCon)
Infrastructure failures are a given in the cloud, but in a multi-tenant environment separating those failures from usage can be a challenge. I'll be presenting data gathered from over a hundred region server failures at HubSpot along with what we've done to improve our MTTR and what we're contributing back to the community. Covered topics will include separating usage-related failures from infrastructure and hardware failures, as well as steps we've taken to improve MTTR in both scenarios.
HBaseCon 2015: Analyzing HBase Data with Apache Hive (HBaseCon)
Hive/HBase integration extends the familiar analytical tooling of Hive to cover online data stored in HBase. This talk will walk through the architecture for Hive/HBase integration with a focus on some of the latest improvements such as Hive over HBase Snapshots, HBase filter pushdown, and composite keys. These changes improve the performance of the Hive/HBase integration and expose HBase to a larger audience of business users. We'll also discuss our future plans for HBase integration.
[WSO2Con EU 2017] Keynote: Mobile Identity in the Digital Economy (WSO2)
In this slide deck, Marie Austenaa, the vice president and head of personal data and mobile identity at GSMA, will explore mobile identity in the digital economy.
Identity Federation Patterns with WSO2 Identity Server (WSO2)
The rapid growth of organizations, ever changing company policies, mergers and acquisitions often lead to the need for complex identity solutions that require integration with multiple heterogeneous systems. This makes traditional centralized identity management systems no longer viable.
Identity federation is now adopted as a solution for such complex systems. It allows you to link multiple identities that belong to different trust domains by means of a common set of policies, practices and protocols.
Join Darshana and Omindu in this webinar as they explore:
- The challenges of introducing identity federation, with use cases
- How to leverage identity federation patterns to overcome these challenges
[WSO2Con EU 2017] The Win-Win-Win of Water Authority HHNK (WSO2)
WSO2 technology enabled HHNK to create an online portal where citizens can check and pay tax statements directly. The results were amazing: thanks to the increased usability, HHNK received payments on tax statements sooner and irreversibly. Additionally, they spent less effort on collection, received fewer phone calls, and received less mail. Underneath, the WSO2 API Manager, ESB, and DSS did all the intelligent work.
SFBigAnalytics_20190724: Monitor Kafka Like a Pro (Chester Chen)
Kafka operators need to provide guarantees to the business that Kafka is working properly and delivering data in real time, and they need to identify and triage problems so they can solve them before end users notice them. This elevates the importance of Kafka monitoring from a nice-to-have to an operational necessity. In this talk, Kafka operations experts Xavier Léauté and Gwen Shapira share their best practices for monitoring Kafka and the streams of events flowing through it. How to detect duplicates, catch buggy clients, and triage performance issues – in short, how to keep the business’s central nervous system healthy and humming along, all like a Kafka pro.
Speakers: Gwen Shapira, Xavier Leaute (Confluent)
Gwen is a software engineer at Confluent working on core Apache Kafka. She has 15 years of experience working with code and customers to build scalable data architectures. She currently specializes in building real-time reliable data processing pipelines using Apache Kafka. Gwen is an author of “Kafka - the Definitive Guide”, "Hadoop Application Architectures", and a frequent presenter at industry conferences. Gwen is also a committer on the Apache Kafka and Apache Sqoop projects.
One of the first engineers on the Confluent team, Xavier Leaute is responsible for analytics infrastructure, including real-time analytics in Kafka Streams. He was previously a quantitative researcher at BlackRock. Prior to that, he held various research and analytics roles at Barclays Global Investors and MSCI.
Exactly-Once Made Easy: Transactional Messaging Improvement for Usability and... (HostedbyConfluent)
This talk is aimed at developers who are interested in scaling their streaming applications with exactly-once (EOS) guarantees. Since the original release, EOS processing has received wide adoption as a much-needed feature inside the community, and has also exposed various scalability and usability issues when applied in production systems.
To address those issues, we improved on the existing EOS model by integrating static producer transaction semantics with dynamic consumer group semantics. We will take a deep dive into the newly added features (KIP-447), from which the audience will gain more insight into the scalability vs. semantics-guarantee tradeoffs and how Kafka Streams specifically leveraged them to help scale EOS streaming applications written in this library. We will also present how EOS code can be simplified with a plain producer and consumer. Come learn more if you wish to adopt this improved EOS feature and get started on building your own EOS application today!
Exactly-Once Made Easy: Transactional Messaging Improvement for Usability and... (Guozhang Wang)
Since the original release, EOS processing has received wide adoption as a much needed feature inside the community, and has also exposed various scalability and usability issues when applied in production systems.
To address those issues, we improved on the existing EOS model by integrating static producer transaction semantics with dynamic consumer group semantics. We will take a deep dive into the newly added features (KIP-447), from which the audience will gain more insight into the scalability vs. semantics-guarantee tradeoffs and how Kafka Streams specifically leveraged them to help scale EOS streaming applications written in this library.
Building data pipelines is pretty hard! Building a multi-datacenter active-active real time data pipeline for multiple classes of data with different durability, latency and availability guarantees is much harder.
Real time infrastructure powers critical pieces of Uber (think Surge) and in this talk we will discuss our architecture, technical challenges, learnings and how a blend of open source infrastructure (Apache Kafka and Samza) and in-house technologies have helped Uber scale.
Kafka is becoming an ever more popular choice for enabling fast data and streaming. Kafka provides a wide landscape of configuration options for tweaking its performance profile, and understanding the internals of Kafka is critical for picking your ideal configuration. Depending on your use case and data needs, different settings will perform very differently. Let's walk through the performance essentials of Kafka: how your consumer configuration can speed up or slow down the flow of messages from brokers; message keys, their implications, and their impact on partition performance; and how to figure out how many partitions and how many brokers you should have. What affects consumer performance? How do you combine all of these choices and develop the best strategy moving forward? How do you test the performance of Kafka? I will attempt a live demo with the help of Zeppelin to show in real time how to tune for performance.
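On the point about message keys and partitions: the default partitioner hashes the key modulo the partition count, so records with the same key always land on the same partition. The sketch below uses CRC32 as a stable hash purely for illustration; Kafka's Java client actually uses murmur2.

```python
# Sketch of key-based partition assignment: hash the key, take it modulo
# the partition count. Same key -> same partition -> per-key ordering.
# CRC32 here is illustrative; Kafka's Java client uses murmur2.

import zlib

def partition_for(key: bytes, num_partitions: int) -> int:
    """Stable key -> partition mapping (illustrative, not Kafka's own)."""
    return zlib.crc32(key) % num_partitions

# Records sharing a key are routed to a single partition:
p1 = partition_for(b"user-42", 12)
p2 = partition_for(b"user-42", 12)
```

Note the implication for sizing: changing the partition count changes the key-to-partition mapping, which is one reason partition counts deserve up-front thought.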
The purpose of the session is to dive into Apache Kafka, data streaming, and Kafka in the cloud:
- Dive into Apache Kafka
- Data Streaming
- Kafka in the cloud
An intro to Apache Kafka I gave at the Big Data Meetup in Geneva in June 2016. It covers the basics and gets into some more advanced topics, and includes a demo and source code for writing clients and unit tests in Java (GitHub repo on the last slides).
Introducing KRaft: Kafka Without Zookeeper With Colin McCabe | Current 2022 (HostedbyConfluent)
Apache Kafka without Zookeeper is now production ready! This talk is about how you can run without ZooKeeper, and why you should.
Kubernetes @ Squarespace: Kubernetes in the Datacenter (Kevin Lynch)
This talk was presented at SRE NYC Meetup on August 16, 2017 at Squarespace HQ.
https://www.youtube.com/watch?v=UJ1QAKprVr4
As the engineering teams at Squarespace grow, we have been building more and more microservices. However, this has added operational strain as we try to shoehorn a growing, complex dynamic environment into our static data center infrastructure. We needed to rethink how we handle deployments, dependency management, resource allocation, monitoring, and alerting. Docker containerization and Kubernetes orchestration helps us tackle many of these problems, but the journey has been challenging. In this talk, we’ll discuss the challenges of running Kubernetes in a datacenter and how we switched to a more SLA-focused alert structure than per instance health with Prometheus and AlertManager.
Improving Kafka at-least-once performance at UberYing Zheng
At Uber, we are seeing an increasing demand for Kafka at-least-once delivery (asks=all). So far, we are running a dedicated at-least-once Kafka cluster with special settings. With a very low workload, the dedicated at-least-once cluster has been working well for more than a year. When trying to allow at-least-once producing on the regular Kafka clusters, the producing performance was the main concern. We spent some effort on this issue in the recent months, and managed to reduce at-least-once producer latency by about 80% with code changes and configuration tuning. When acks=0, these improvements also help increasing Kafka throughput and reducing Kafka end-to-end latency.
Had a great time yesterday at the Kafka meetup where I spoke about Kafka and its Cost Dimensions. Kafka is known for its low maintenance and low infrastructure costs, but as scale increases, even kafka becomes a significant component.
Understanding which topic contributes to how much compute and network cost is pretty difficult. as currently, only storage cost is easy to figure out for each topic.
Meetup Link: https://lnkd.in/g4QpBJbh
Netflix Keystone Pipeline at Big Data Bootcamp, Santa Clara, Nov 2015Monal Daxini
Keystone - Processing over Half a Trillion events per day with 8 million events & 17 GB per second peaks, and at-least once processing semantics. We will explore in detail how we employ Kafka, Samza, and Docker at scale to implement a multi-tenant pipeline. We will also look at the evolution to its current state and where the pipeline is headed next in offering a self-service stream processing infrastructure atop the Kafka based pipeline and support Spark Streaming.
Kat Grigg, Confluent, Senior Customer Success Architect + Jen Snipes, Confluent, Senior Customer Success Architect
This presentation will cover tips and best practices for Apache Kafka. In this talk, we will be covering the basic internals of Kafka and how these components integrate together including brokers, topics, partitions, consumers and producers, replication, and Zookeeper. We will be talking about the major categories of operations you need to be setting up and monitoring including configuration, deployment, maintenance, monitoring and then debugging.
https://www.meetup.com/KafkaBayArea/events/270915296/
2. What is Apache Kafka?
“Kafka is a high throughput, low latency …”
Performance tuning is
● still very important!
● a case-by-case process based on
○ different data patterns
○ performance objectives
3. Agenda
● The goal of producer performance tuning
● Understand Kafka Producer
● Producer performance tuning
○ ProducerPerformance tool
○ Quantitative analysis using producer metrics
○ Play with a toy example
● Some real world examples
○ Latency when acks=-1
○ Produce when RTT is long
● Q & A
3
5. The goal of producer performance tuning
With a given dataset to send:
● Achieve the throughput and latency goals with guarantees of
○ Durability
○ Ordering
● Focus on the average producer performance
○ 99th-percentile performance sometimes cannot be “tuned”
○ Tuning average performance also helps the 99th-percentile numbers
● Today’s talk
○ Fixed data pattern (1000 bytes of integers ranging between 0 and 50000)
7. Understand the Kafka producer
● org.apache.kafka.clients.producer.KafkaProducer
○ If you are still using the OLD producer, please upgrade!
● Benchmarks in this talk use Kafka 0.10.0
○ No broker-side recompression (KIP-31)
○ 8 bytes of overhead per message due to the introduction of timestamps (KIP-32)
14. Understand the Kafka producer
Tasks done by the user threads:
● Serialization
● Partitioning
● Compression
User: producer.send(new ProducerRecord(“topic0”, “hello”), callback);
(Diagram: the serializer and partitioner map the record to topic=“topic0”, partition=0, value=“hello”; the compressor appends it to a per-partition batch — e.g. batch0, batch1 for topic0-0 — in the record accumulator, and the callback is stored with the batch.)
15. Understand the Kafka producer
Sender:
1. polls batches from the batch queues (one batch / partition)
(Diagram: the sender thread drains one ready batch per partition — topic0-0, topic0-1, topic1-0 — from the record accumulator.)
16. Understand the Kafka producer
Sender:
1. polls batches from the batch queues (one batch / partition)
2. groups batches based on the leader broker
(Diagram: the polled batches are grouped into request 0 for Broker 0 and request 1 for Broker 1.)
17. Understand the Kafka producer
Sender:
1. polls batches from the batch queues (one batch / partition)
2. groups batches based on the leader broker
3. sends the grouped batches to the brokers
4. Pipelining if max.in.flight.requests.per.connection > 1
(Diagram: request 0 and request 1 are in flight to Broker 0 and Broker 1.)
18. Understand the Kafka producer
Sender:
1. polls batches from the batch queues (one batch / partition)
2. groups batches based on the leader broker
3. sends the grouped batches to the brokers
4. Pipelining if max.in.flight.requests.per.connection > 1
(Diagram: Broker 0 and Broker 1 return responses for the in-flight requests.)
19. Understand the Kafka producer
Sender: a batch is ready when one of the following is true:
● batch.size is reached
● linger.ms is reached
● Another batch to the same broker is ready (piggyback)
● flush() or close() is called
20. Understand the Kafka producer
Sender: a batch is ready when one of the following is true:
● batch.size is reached
● linger.ms is reached
● Another batch to the same broker is ready (piggyback)
● flush() or close() is called
(Diagram: the ready batches for topic0-0 and topic0-1 are combined into request 2.)
21. Understand the Kafka producer
(Same readiness conditions as the previous slide; the diagram again shows request 2 being formed from the ready batches.)
22. Understand the Kafka producer
Sender: on receiving the response
● The callbacks are fired in the message sending order
(Diagram: the response for request 2 arrives and the stored callbacks are executed.)
23. batch.size & linger.ms
batch.size is size-based batching
linger.ms is time-based batching
In general, more batching
● ⇒ better compression ratio ⇒ higher throughput
● ⇒ higher latency
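As a rough illustration of this tradeoff (plain Python, not Kafka code; the arrival rate and the helper function are my own assumptions):

```python
# Illustrative sketch: with a steady record arrival rate, a batch is sent
# when either batch.size fills up or linger.ms expires, whichever is first.
def batch_close_time_ms(record_size, records_per_sec, batch_size, linger_ms):
    # Time (ms) to accumulate batch_size bytes at the given arrival rate.
    ms_to_fill = 1000.0 * (batch_size / record_size) / records_per_sec
    return min(ms_to_fill, linger_ms)

# 1000-byte records at 10,000 records/sec (the data pattern in this talk):
small = batch_close_time_ms(1000, 10_000, 16_384, 100)     # 16 KB batch fills fast
large = batch_close_time_ms(1000, 10_000, 1_048_576, 100)  # 1 MB batch: linger.ms wins
print(small, large)  # more batching => the record waits longer in the queue
```

With a small batch the record leaves in under 2 ms; with a batch that never fills, every record waits the full linger.ms.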
24. compression.type
● Compression is usually the dominant cost of producer.send()
● The speed of different compression types differs A LOT
● Compression happens in the user thread, so adding more user threads helps with throughput if compression is slow
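To get a feel for how much codecs can differ, here is a stdlib-only sketch on this talk’s data pattern. Note this is illustrative: Kafka’s own codecs are gzip, snappy, and lz4, and snappy/lz4 are not in the Python standard library.

```python
import bz2, lzma, random, time, zlib

# The talk's data pattern: text of random integers between 0 and 50000.
random.seed(42)
data = b" ".join(str(random.randint(0, 50000)).encode() for _ in range(100_000))

# Compare ratio and speed across three stdlib codecs.
for name, compress in [("zlib", zlib.compress), ("bz2", bz2.compress),
                       ("lzma", lzma.compress)]:
    start = time.perf_counter()
    out = compress(data)
    elapsed = time.perf_counter() - start
    print(f"{name}: ratio={len(out) / len(data):.2f}, time={elapsed * 1000:.1f} ms")
```

The slower codecs typically buy a better ratio; whether that trade pays off depends on where your bottleneck is.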
25. acks
Defines the durability level for producing.
acks    Throughput    Latency    Durability
0       high          low        no guarantee
1       medium        medium     leader
-1      low           high       ISR
26. max.in.flight.requests.per.connection
max.in.flight.requests.per.connection > 1 means pipelining.
In general, pipelining
● gives better throughput
● may cause out-of-order delivery when retries occur
● Excessive pipelining ⇒ drop in throughput
○ lock contention
○ worse batching
We will be using max.in.flight.requests.per.connection=1 for today’s talk.
29. ProducerPerformance Tool
Useful producer performance tuning tool:
● org.apache.kafka.tools.ProducerPerformance
With KAFKA-3554:
./kafka-producer-perf-test.sh --num-records 1000000 --record-size 1000 --topic
becket_test_3_replicas_1_partition --throughput 1000000 --num-threads 2 --value-bound 50000
--producer-props bootstrap.servers=localhost:9092 max.in.flight.requests.per.connection=1
batch.size=100000 compression.type=lz4
● --num-threads : The number of user threads used to send messages.
● --value-bound : The range of the random integers in the messages. This option is useful when compression is used:
different integer ranges simulate different compression ratios.
29
30. Quantitative analysis using producer metrics
The improved ProducerPerformance tool (KAFKA-3554) prints the following
producer metrics.
● Select_Rate_Avg (The rate that the sender thread runs to check if it can send some messages)
● Request_Rate_Avg
● Request_Latency_Avg (Not including the callback execution time)
● Request_Size_Avg (After compression)
● Batch_Size_Avg (After compression)
● Records_Per_Request_Avg
● Record_Queue_Time_Avg
● Compression_Rate_Avg
Note: The metrics need some time (~ 1 min) to become stable.
30
31. Quantitative analysis using producer metrics
./kafka-producer-perf-test.sh --num-records 1000000 --record-size 1000 --topic
becket_test_3_replicas_4_partition --throughput 100000 --num-threads 1 --value-bound 50000
--producer-props bootstrap.servers=localhost:9092 compression.type=gzip max.in.flight.
requests.per.connection=1
1000000 records sent, 10444.300545 records/sec (9.96 MB/sec), 4.19 ms avg latency, 154.00 ms max latency, 4 ms 50th,
6 ms 95th, 6 ms 99th, 22 ms 99.9th.
Select_Rate_Avg: 3114.67
Request_Rate_Avg: 1448.53
Request_Latency_Avg: 2.73
Request_Size_Avg: 5034.47
Batch_Size_Avg: 4941.93
Records_Per_Request_Avg: 7.05
Record_Queue_Time_Avg: 2.45
Compression_Rate_Avg: 0.68
31
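For convenience, a small hypothetical helper (mine, not part of Kafka’s tooling) that parses these “Metric_Name: value” lines into a dict, which the formulas on the following slides can then be checked against:

```python
# Hypothetical helper: parse the metric lines printed by the patched
# ProducerPerformance tool (KAFKA-3554) into a name -> float dict.
def parse_producer_metrics(text):
    metrics = {}
    for line in text.splitlines():
        name, sep, value = line.partition(":")
        if sep:
            try:
                metrics[name.strip()] = float(value.strip())
            except ValueError:
                pass  # skip non-numeric lines
    return metrics

sample = """\
Select_Rate_Avg: 3114.67
Request_Rate_Avg: 1448.53
Request_Latency_Avg: 2.73
Compression_Rate_Avg: 0.68"""
print(parse_producer_metrics(sample))
```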
32. Quantitative analysis using producer metrics
1. throughput_Avg ~= Request_Rate_Avg * Request_Size_Avg / Compression_Rate_Avg
(Same command and producer metrics as slide 31.)
1448.53 * 5034.47 / 0.68 = 10.22 MB/sec
(The gap is due to the request_overhead)
32
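Rule 1 can be sanity-checked in a few lines of Python; the numbers are copied from the metrics on slide 31.

```python
# Rule 1: request rate x request size gives compressed bytes/sec; dividing
# by the compression rate recovers the uncompressed producer throughput.
request_rate_avg = 1448.53       # requests/sec
request_size_avg = 5034.47       # bytes, after compression
compression_rate_avg = 0.68      # compressed size / uncompressed size

throughput = request_rate_avg * request_size_avg / compression_rate_avg  # bytes/sec
# Roughly 10.2 MB/sec, slightly above the measured 9.96 MB/sec: the
# request overhead bytes are counted here but carry no record payload.
print(throughput / (1024 * 1024))
```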
33. Quantitative analysis using producer metrics
2. Request_Size_Avg = Records_Per_Request_Avg * Record_Size * Compression_Rate_Avg +
Request_Overhead
(Same command and producer metrics as slide 31.)
7.05 * 1000 * 0.68 + Request_Overhead = 4794 + Request_Overhead
The Request_Overhead:
● depends on the number of topics and partitions
● usually ranges from dozens of bytes to hundreds of bytes
33
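Rule 2 can likewise be checked by solving for the request overhead, again using the metrics from slide 31:

```python
# Rule 2: request size = records per request x record size x compression
# rate + overhead. Solve for the overhead.
records_per_request_avg = 7.05
record_size = 1000               # --record-size used in the benchmark
compression_rate_avg = 0.68
request_size_avg = 5034.47

payload_bytes = records_per_request_avg * record_size * compression_rate_avg  # 4794.0
request_overhead = request_size_avg - payload_bytes
print(request_overhead)  # about 240 bytes, within the expected range
```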
34. Quantitative analysis using producer metrics
3. Request_Rate_Upper_Limit = (1000 / Request_Latency_Avg) * Num_Brokers
(Same command and producer metrics as slide 31.)
(1000 / 2.73) * 5 = 1813
The gap is:
● due to the producer overhead
● larger when the request rate is higher
34
35. Quantitative analysis using producer metrics
4. latency_avg ~= (Record_Queue_Time_Avg / 2) + Request_Latency_Avg + Callback_Latency
(Same command and producer metrics as slide 31.)
2.45 / 2 + 2.73 + Callback_Latency = 3.96 + Callback_Latency
(The callback latency and other time costs in the ProducerPerformance tool are usually small enough to ignore)
35
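Rule 4, checked with the slide-31 metrics, accounts for almost all of the measured latency:

```python
# Rule 4: a record waits, on average, half of Record_Queue_Time_Avg before
# its batch is sent, then one request round trip.
record_queue_time_avg = 2.45   # ms
request_latency_avg = 2.73     # ms

model_latency = record_queue_time_avg / 2 + request_latency_avg
# About 3.96 ms of the measured 4.19 ms average latency; the remainder
# is callback execution and other small costs in the tool.
print(model_latency)
```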
36. Play with a toy example
(Same command and producer metrics as slide 31.)
● RTT=1.55 ms
● 5 broker cluster
36
37. Play with a toy example
(Same command and producer metrics as slide 31.)
● Network Bandwidth = 1 Gbps:
○ Throughput can be improved
37
38. Play with a toy example
(Same command and producer metrics as slide 31.)
throughput_Avg ~= Request_Rate_Avg * Request_Size_Avg / Compression_Rate_Avg
● RTT = 1.55 ms:
○ Request latency is not bad.
● A request rate of 1448 is not bad
○ 5 brokers * (1000 / 2.73) = 1813 (theoretical upper limit)
● Each request is too small
○ Default batch size = 16 KB
38
39. Play with a toy example
(Same command and producer metrics as slide 31.)
throughput_Avg ~= Request_Rate_Avg * Request_Size_Avg / Compression_Rate_Avg
Ways to increase request size:
1. Add more user threads
2. Increase number of partitions
3. Increase linger.ms (more batching)
39
40. Play with a toy example
(Same command and producer metrics as slide 31.)
throughput_Avg ~= Request_Rate_Avg * Request_Size_Avg / Compression_Rate_Avg
Ways to increase request size:
1. Add more user threads
2. Increase number of partitions
3. Increase linger.ms (more batching)?
40
43. More Batching
More batching does not help!?
The compression ratio is not improved much!
But why does the throughput become worse!?
44. Impact of Batch Size on GZIP Compression
When the batch size is doubled from 16 KB to 32 KB, the time needed to fill the batch almost triples!
(The results are from a JMH micro-benchmark)
batch size (KB)    time to fill up the batch (ns)
16                 1768583.939
32                 5234432.44
64                 13307411.93
128                29539258.9
192                45549678.03
256                61579312.57
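The super-linear growth in the table is easy to check numerically:

```python
# Fill-time figures from the JMH table: doubling the gzip batch size
# from 16 KB to 32 KB roughly triples the time spent filling it.
fill_time_ns = {16: 1768583.939, 32: 5234432.44, 64: 13307411.93,
                128: 29539258.9, 192: 45549678.03, 256: 61579312.57}

ratio_16_to_32 = fill_time_ns[32] / fill_time_ns[16]
print(round(ratio_16_to_32, 2))  # ~2.96, i.e. "almost tripled"
```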
45. Impact of Batch Size on GZIP Compression
When the batch size is doubled from 16 KB to 32 KB, the time needed to fill the batch grows about 2.5x!
(The results are from the ProducerPerformance tool with 1 user thread, 1 partition, linger.ms=20000)
Batch_Size_Avg    Record_Queue_Time_Avg (ms)
15075.84          3.86
30681.25          9.37
61687.88          21.12
124451.51         47.2
248860.43         92.14
498640.58         184.84
46. Impact of Batch Size on GZIP Compression
Different value bounds show similar results.
47. Impact of Batch Size on Compression
Snappy and LZ4 have a similar impact at different batch sizes.
48. When to increase batching
● Compression Ratio Gain > Compression Speed Loss
● Bottleneck is not in the user threads
48
49. Play with a toy example
OK… Let’s increase the number of user threads then.
When there are 10 user threads:
● Throughput drops
● Latency soars
50. Play with a toy example
OK… Let’s increase the number of user threads then.
When there are 10 user threads:
● Throughput drops
● Latency soars
● Lock contention (10 user threads vs. 4 partitions)
● Batches are piling up!
Record Queue Time = 3323.85 ms
51. Play with a toy example
When the topic has 16 partitions:
Batches are still piling up, but it seems much better than before.
Record Queue Time = 252.69 ms
52. Play with a toy example
Let’s try tweaking the batch size.
--num-threads 10, batch.size=16K, 16 partitions
2000000 records sent, 39791.492579 records/sec (37.95 MB/sec), 152.77 ms avg latency, 1182.00 ms max latency, 14 ms
50th, 841 ms 95th, 1128 ms 99th, 1162 ms 99.9th.
(Requests are piling up, so the bottleneck is the sender thread)
--num-threads 10, batch.size=800K, 16 partitions
2000000 records sent, 35647.446752 records/sec (34.00 MB/sec), 20.31 ms avg latency, 502.00 ms max latency, 17 ms
50th, 37 ms 95th, 68 ms 99th, 325 ms 99.9th.
(Requests are no longer piling up; the bottleneck shifted to the user threads)
Usually a bigger batch leads to higher latency.
But increasing the batch size can improve latency by preventing the batches from piling up!
53. Play with a toy example
To improve both latency & throughput? Increase number of partitions again.
--num-threads 10, batch.size=16K
2000000 records sent, 39791.492579 records/sec (37.95 MB/sec), 152.77 ms avg latency, 1182.00 ms max latency, 14 ms
50th, 841 ms 95th, 1128 ms 99th, 1162 ms 99.9th.
--num-threads 10, batch.size=800K
2000000 records sent, 35647.446752 records/sec (34.00 MB/sec), 20.31 ms avg latency, 502.00 ms max latency, 17 ms
50th, 37 ms 95th, 68 ms 99th, 325 ms 99.9th.
--num-threads 10, batch.size=16K, 32 partitions.
2000000 records sent, 43432.939541 records/sec (41.42 MB/sec), 47.29 ms avg latency, 291.00 ms max latency, 22 ms
50th, 225 ms 95th, 246 ms 99th, 270 ms 99.9th.
54. Find the throughput bottleneck
Is the bottleneck in user thread?
● Increase --num-threads and see if throughput increases accordingly
● Pay attention to lock contention
Is the bottleneck in sender thread?
● Is throughput (MB/sec) << network bandwidth?
● Is Record_Queue_Time_Avg large?
● Is Batch_Size_Avg almost equal to batch.size?
Is the bottleneck in broker?
● Is the request latency very large?
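The checklist above can be sketched as a toy decision helper. The method name, thresholds, and inputs here are hypothetical; in practice you would read the corresponding producer metrics (record-queue-time-avg, batch-size-avg) and broker request metrics:

```java
public class BottleneckGuess {
    // Toy encoding of the checklist above; the 100 ms and 0.9 / 0.5
    // thresholds are illustrative only.
    static String guess(double throughputMBps, double bandwidthMBps,
                        double recordQueueTimeAvgMs, double batchSizeAvg,
                        double batchSizeConfig) {
        // Full batches queuing up point at the sender thread.
        boolean senderBusy = recordQueueTimeAvgMs > 100
                && batchSizeAvg > 0.9 * batchSizeConfig;
        if (throughputMBps < 0.5 * bandwidthMBps && senderBusy) {
            return "sender thread";
        }
        // Low throughput without queuing points at the user threads.
        if (throughputMBps < 0.5 * bandwidthMBps) {
            return "user threads";
        }
        return "broker or network";
    }

    public static void main(String[] args) {
        // Batches are full and queue time is high: the sender can't keep up.
        System.out.println(guess(38.0, 120.0, 252.7, 16000, 16384));
    }
}
```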
55. Agenda
● The goal of producer performance tuning
● Understand KafkaProducer
● Producer performance tuning
○ ProducerPerformance tool
○ Quantitative analysis using producer metrics
○ Play with a toy example
● Some real world examples
○ Latency when acks=-1
○ Produce when RTT is long
● Q & A
56. Latency when acks=-1
[Diagram: the producer sends a ProduceRequest for partitions 0-2 to the leader (Broker 0), which replicates to the follower (Broker 1) before sending the ProduceResponse.]
1. [Network] ProduceRequest Send Time
2. [Broker] ProduceRequest Queue Time
3. [Broker] ProduceRequest Local Time
4. [Broker] ProduceRequest Remote Time (Replication Time)
5. [Broker] ProduceResponse Queue Time
6. [Broker] ProduceResponse Send Time
58. Latency when acks=-1
[Diagram: the follower on Broker 1 sends fetch (p0,100; p1,200; p2,50) to the leader on Broker 0; the leader’s log end offsets are p0=105, p1=200, p2=50, while the high watermark of p0 is still 100.]
● The followers fetch the data of the partitions from the leader.
● The leader sends the high watermarks to the In-Sync Replicas (ISR) of all the partitions.
59. Latency when acks=-1
[Diagram: the fetch (p0,100; p1,200; p2,50) was issued before the new data arrived; the leader appends p0 up to offset 105 (and later p1 to 203, p2 to 60), but the high watermarks stay at the old values.]
The high watermarks are not updated immediately after the data is sent to the followers.
62. Latency when acks=-1
[Diagram: only on the next fetch (p0,105; p1,200; p2,50) does the leader advance the high watermark of p0 to 105.]
The high watermark increases lazily, so there is one fetch delay.
64. Latency when acks=-1
[Diagram: p1 (to offset 203) and p2 (to offset 60) from the same ProduceRequest are appended while earlier fetches are still in flight.]
The replication is not ProduceRequest aware.
● The messages produced in the same ProduceRequest may be replicated in multiple fetches.
● ProduceRequests interfere with each other.
65. Latency when acks=-1
[Diagram: the follower issues fetch (p0,105; p1,203; p2,60) after replicating all three partitions; only then does the leader send the ProduceResponse.]
The ProduceResponse is sent only after the high watermarks of all the partitions have passed the offsets of the messages appended by the ProduceRequest.
68. Latency when acks=-1
Replication Time ≈ Num_Fetches * (Local_Time_On_The_Follower + Fetch_Request_Total_Time_On_The_Leader) / num.replica.fetchers
● Increase num.replica.fetchers
○ Parallel fetch
○ Each replica fetcher fetches from a distinct set of partitions.
● Not a perfect solution
○ Diminishing effect
○ Scalability concern
■ Replica fetchers per broker = (Cluster_Size - 1 ) * num.replica.fetchers
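The formula and the scalability concern above can be written out numerically. This is a back-of-the-envelope sketch of the slide’s approximation, not an exact model; the sample numbers are invented:

```java
public class ReplicaFetcherMath {
    // Approximate replication time from the slide's formula: fetches are
    // spread across the replica fetcher threads.
    static double replicationTimeMs(int numFetches, double followerLocalMs,
                                    double leaderFetchTotalMs, int numReplicaFetchers) {
        return numFetches * (followerLocalMs + leaderFetchTotalMs) / numReplicaFetchers;
    }

    // Each broker runs num.replica.fetchers threads per *other* broker,
    // which is why large clusters should raise this setting with care.
    static int fetchersPerBroker(int clusterSize, int numReplicaFetchers) {
        return (clusterSize - 1) * numReplicaFetchers;
    }

    public static void main(String[] args) {
        // 2 fetches, 5 ms local + 15 ms on the leader, 4 fetchers.
        System.out.println(replicationTimeMs(2, 5.0, 15.0, 4));
        // A 31-broker cluster with num.replica.fetchers=4.
        System.out.println(fetchersPerBroker(31, 4));
    }
}
```

The second method shows the diminishing-returns trade-off: doubling num.replica.fetchers doubles the thread count on every broker.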
69. Latency when acks=-1
How many replica fetchers are enough?
● The latency target is met
● If Replica FetchRequest Remote Time > 0
○ The replica fetch requests are waiting for the messages to arrive.
○ Increasing num.replica.fetchers won’t improve latency.
● The number of partitions per replica fetcher is low
70. Agenda
● The goal of producer performance tuning
● Understand KafkaProducer
● Producer performance tuning
○ ProducerPerformance tool
○ Quantitative analysis using producer metrics
○ Play with a toy example
● Some real world examples
○ Latency when acks=-1
○ Produce when RTT is long
● Q & A
71. Produce when RTT is long
We have a cross-ocean pipeline using remote produce:
● RTT is long (~200 ms)
● Bandwidth = 1 Gb/s
● batch.size = 800 KB
● linger.ms = 30 ms
● 64 partitions
● 8 user threads
● Throughput < 1 MB/s
73. Produce when RTT is long
The TCP connection needs larger send and receive buffers.
Broker default: socket.receive.buffer.bytes = 100K
Producer default: send.buffer.bytes = 128K
Theoretical throughput with the default settings = 100 KB / RTT = 500 KB/s
Theoretical best buffer size = RTT * Bandwidth = 0.2 s * 1 Gb/s = 25 MB
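Both numbers on this slide follow from the bandwidth-delay product. A quick sketch of the arithmetic (pure unit math, nothing Kafka-specific):

```java
public class BufferMath {
    // Max TCP throughput when window-limited: one buffer drained per
    // round trip.
    static double throughputBytesPerSec(long bufferBytes, double rttSeconds) {
        return bufferBytes / rttSeconds;
    }

    // Bandwidth-delay product: the buffer size needed to keep the pipe
    // full (bits / 8 = bytes).
    static long bdpBytes(double rttSeconds, double bandwidthBitsPerSec) {
        return (long) (rttSeconds * bandwidthBitsPerSec / 8);
    }

    public static void main(String[] args) {
        // A 100 KB buffer over a 200 ms RTT: ~500 KB/s, matching the slide.
        System.out.println(throughputBytesPerSec(100_000, 0.2));
        // 200 ms RTT at 1 Gb/s: a 25 MB buffer keeps the link busy.
        System.out.println(bdpBytes(0.2, 1e9));
    }
}
```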
75. Produce when RTT is long
Changing the socket buffer size may require raising the OS TCP buffer limits.
# The default setting of the socket receive buffer in bytes.
net.core.rmem_default = 124928
# The maximum receive socket buffer size in bytes.
net.core.rmem_max = 2048000
# The default setting (in bytes) of the socket send buffer.
net.core.wmem_default = 124928
# The maximum send socket buffer size in bytes.
net.core.wmem_max = 2048000
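Assuming the OS limits above have been raised, the matching Kafka settings would look roughly like this. Both keys are the broker and producer configs named earlier on slide 73; the 25 MB value is the bandwidth-delay product and would be sized to your own RTT and bandwidth:

```
# Broker (server.properties): receive buffer for the socket server.
socket.receive.buffer.bytes=25000000

# Producer: TCP send buffer (-1 would fall back to the OS default).
send.buffer.bytes=25000000
```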
79. Lock contention for sender thread
Threads                  2      4      8      10
linger.ms                50     30     20     15
Record_Queue_Time_Avg    50     31.08  21.21  17.27
Difference               0      1.08   1.21   2.27
This table shows some evidence of lock contention in the Kafka producer.
For a topic with 16 partitions, we do the following:
● Set linger.ms so that the different thread counts generate batches of similar size.
● Compare the difference between linger.ms and Record_Queue_Time_Avg
● The bigger the difference is, the worse the lock contention is between the
sender thread and the user threads.
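The “Difference” row is one subtraction per column; a tiny sketch of how the table was derived:

```java
public class QueueTimeGap {
    // Extra queue time beyond linger.ms, attributed here to lock
    // contention between the user threads and the sender thread.
    static double contentionGapMs(double recordQueueTimeAvgMs, double lingerMs) {
        return recordQueueTimeAvgMs - lingerMs;
    }

    public static void main(String[] args) {
        // 10 user threads, linger.ms = 15: about 2.27 ms of contention
        // overhead, the worst case in the table.
        System.out.println(contentionGapMs(17.27, 15));
    }
}
```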