(Jason Gustafson, Confluent) Kafka Summit SF 2018
Kafka has a well-designed replication protocol, but over the years, we have found some extremely subtle edge cases which can, in the worst case, lead to data loss. We fixed the cases we were aware of in version 0.11.0.0, but shortly after that, another edge case popped up and then another. Clearly we needed a better approach to verify the correctness of the protocol. What we found is Leslie Lamport’s specification language TLA+.
In this talk I will discuss how we have stepped up our testing methodology in Apache Kafka to include formal specification and model checking using TLA+. I will cover the following:
1. How Kafka replication works
2. What weaknesses we have found over the years
3. How these problems have been fixed
4. How we have used TLA+ to verify the fixed protocol.
This talk will give you a deeper understanding of Kafka replication internals and its semantics. The replication protocol is a great case study in the complex behavior of distributed systems. By studying the faults and how they were fixed, you will have more insight into the kinds of problems that may lurk in your own designs. You will also learn a little bit of TLA+ and how it can be used to verify distributed algorithms.
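To make the verification idea concrete, here is a deliberately tiny sketch of what a model checker does with a replication protocol. This is not Kafka's actual protocol or its TLA+ specification (the real specs also model high watermarks, leader epochs, and the ISR); it is a minimal Python state-space explorer for a two-replica log that checks the invariant "no committed record is ever lost," with all names and transitions invented for illustration:

```python
from collections import deque

# Toy model of a two-replica log, in the spirit of a TLA+ spec.
# A record counts as "committed" once it is present on both replicas.
# Invariant: committed records are never lost from the leader's log.
# (Kafka's real specs also model high watermarks, leader epochs, and
# the ISR; this is a deliberately tiny illustration.)

MAX_RECORDS = 2

def next_states(state):
    log_a, log_b, leader = state
    logs = (list(log_a), list(log_b))
    follower = 1 - leader
    out = []
    # 1. Produce a new record to the leader (bounded, to keep it finite).
    if len(logs[leader]) < MAX_RECORDS:
        n = [l[:] for l in logs]
        n[leader].append(len(n[leader]))
        out.append((tuple(n[0]), tuple(n[1]), leader))
    # 2. The follower replicates one missing record from the leader.
    if len(logs[follower]) < len(logs[leader]):
        n = [l[:] for l in logs]
        n[follower].append(n[leader][len(n[follower])])
        out.append((tuple(n[0]), tuple(n[1]), leader))
    # 3. The leader fails; the follower is elected leader as-is.
    out.append((log_a, log_b, follower))
    return out

def committed(state):
    log_a, log_b, _ = state
    return log_a[:min(len(log_a), len(log_b))]

def check():
    """BFS over all reachable states; return a counterexample pair
    (state, successor) if a committed record disappears, else None."""
    init = ((), (), 0)
    seen, queue = {init}, deque([init])
    while queue:
        s = queue.popleft()
        for t in next_states(s):
            new_leader_log = (t[0], t[1])[t[2]]
            if committed(s) != new_leader_log[:len(committed(s))]:
                return (s, t)  # committed data was lost
            if t not in seen:
                seen.add(t)
                queue.append(t)
    return None  # the invariant holds in every reachable state
```

Exhaustive exploration is the key point: unlike tests, which exercise a handful of schedules, the checker visits every reachable state, which is exactly how the subtle edge cases mentioned above were hunted down.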
Presentation at Strata Data Conference 2018, New York
The controller is the brain of Apache Kafka. A big part of what the controller does is to maintain the consistency of the replicas and determine which replica can be used to serve the clients, especially during individual broker failure.
Jun Rao outlines the main data flow in the controller—in particular, when a broker fails, how the controller automatically promotes another replica as the leader to serve the clients, and when a broker is started, how the controller resumes the replication pipeline in the restarted broker.
Jun then describes recent improvements to the controller that allow it to handle certain edge cases correctly and increase its performance, which allows for more partitions in a Kafka cluster.
Kafka is a high-throughput, fault-tolerant, scalable platform for building high-volume near-real-time data pipelines. This presentation is about tuning Kafka pipelines for high-performance.
Select configuration parameters and deployment topologies essential to achieving higher throughput and low latency across the pipeline are discussed, along with lessons learned in troubleshooting and optimizing a truly global data pipeline that replicates 100 GB of data in under 25 minutes.
Kafka Tiered Storage | Satish Duggana and Sriharsha Chintalapani, Uber | Hosted by Confluent
Kafka is a vital part of data infrastructure in many organizations. When the Kafka cluster grows and more data is stored in Kafka for a longer duration, several issues related to scalability, efficiency, and operations become important to address. Kafka cluster storage is typically scaled by adding more broker nodes to the cluster. But this also adds unneeded memory and CPU to the cluster, making overall storage less cost-efficient than keeping the older data in external storage.
Tiered storage is introduced to extend Kafka's storage beyond the local storage available on the Kafka cluster by retaining the older data in cheaper stores, such as HDFS, S3, Azure or GCS with minimal impact on the internals of Kafka.
We will talk about:
- How tiered storage addresses the above problems and also brings several other advantages
- The high-level architecture of tiered storage
- Future work planned as part of tiered storage
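As a rough sketch of the idea described above, the write path offloads closed log segments past a local retention boundary to a cheap remote store, and the read path falls through from local segments to remote ones. The class and method names below are invented for illustration; they are not Kafka's actual tiered-storage classes:

```python
# Hypothetical sketch of tiering: closed log segments older than the
# local retention boundary are copied to a cheap remote store (think
# S3/HDFS), and reads fall through from local segments to remote ones.
class TieredLog:
    def __init__(self, local_retention_segments=2):
        self.local = []          # list of (base_offset, records)
        self.remote = {}         # base_offset -> records, the cheap tier
        self.local_retention = local_retention_segments

    def append_segment(self, base_offset, records):
        self.local.append((base_offset, records))
        # Offload the oldest closed segments beyond local retention.
        while len(self.local) > self.local_retention:
            base, recs = self.local.pop(0)
            self.remote[base] = recs   # "upload" to the remote tier

    def read(self, offset):
        # Serve from local segments first, then fall back to remote.
        for base, recs in self.local:
            if base <= offset < base + len(recs):
                return recs[offset - base]
        for base, recs in self.remote.items():
            if base <= offset < base + len(recs):
                return recs[offset - base]
        raise KeyError(offset)
```

The point of the sketch is the decoupling: broker-local disk now only bounds the hot tail of the log, while total retention is bounded by the (much cheaper) remote store.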
Everything You Always Wanted to Know About Kafka’s Rebalance Protocol but Wer... | Confluent
Apache Kafka is a scalable streaming platform with built-in dynamic client scaling. The elastic scale-in/scale-out feature leverages Kafka’s “rebalance protocol” that was designed in the 0.9 release and improved ever since. The original design aims at on-prem deployments of stateless clients. However, it does not always align with modern deployment tools like Kubernetes or with stateful stream processing clients like Kafka Streams. Those shortcomings led to two major recent improvement proposals, namely static group membership and incremental rebalancing (which will hopefully be available in version 2.3). This talk provides a deep dive into the details of the rebalance protocol, starting from its original design in version 0.9 up to the latest improvements and future work. We discuss internal technical details, pros and cons of the existing approaches, and explain how to configure your client correctly for your use case. Additionally, we discuss configuration trade-offs for stateless, stateful, on-prem, and containerized deployments.
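To see why "incremental" matters, consider sticky assignment: prior owners keep as many of their partitions as fairness allows, so only the excess is revoked and reassigned. The following is a minimal plain-Python sketch of that idea, not Kafka's actual CooperativeStickyAssignor (which also handles generations, racks, and subscription differences):

```python
import math

def sticky_assign(partitions, members, previous):
    """Toy sticky assignment: each surviving member keeps its prior
    partitions up to a fair share; only the excess moves. Illustrative
    only; Kafka's real assignors are considerably more involved."""
    assignment = {m: [p for p in previous.get(m, []) if p in partitions]
                  for m in members}
    owned = {p for ps in assignment.values() for p in ps}
    to_place = [p for p in partitions if p not in owned]
    fair_share = math.ceil(len(partitions) / len(members))
    # Revoke only what exceeds the fair share (the "incremental" part):
    # everything else stays put, so those partitions never pause.
    for m in members:
        while len(assignment[m]) > fair_share:
            to_place.append(assignment[m].pop())
    # Hand the freed or unowned partitions to the least-loaded members.
    for p in sorted(to_place):
        least = min(members, key=lambda m: len(assignment[m]))
        assignment[least].append(p)
    return assignment
```

For example, when a second member joins a group whose single member owned four partitions, only two partitions move; under the classic eager protocol, all four would be revoked first and reassigned afterwards.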
Squirreling Away $640 Billion: How Stripe Leverages Flink for Change Data Cap... | Flink Forward
Flink Forward San Francisco 2022.
Being in the payments space, Stripe requires strict correctness and freshness guarantees. We rely on Flink as the natural solution for delivering on this in support of our Change Data Capture (CDC) infrastructure. We heavily rely on CDC as a tool for capturing data change streams from our databases without critically impacting database reliability, scalability, and maintainability. Data derived from these streams is used broadly across the business and powers many of our critical financial reporting systems totalling over $640 Billion in payment volume annually. We use many components of Flink’s flexible DataStream API to perform aggregations and abstract away the complexities of stream processing from our downstreams. In this talk, we’ll walk through our experience from the very beginning to what we have in production today. We’ll share stories around the technical details and trade-offs we encountered along the way.
by Jeff Chao
Capacity Planning Your Kafka Cluster | Jason Bell, Digitalis | Hosted by Confluent
"There's little talk about capacity planning Kafka clusters; it's very much learn as you go, and every cluster is different. In this talk, Kafka DevOps engineer Jason Bell takes you through the things that will help you: broker capacity, how to think about topics, and how the other Confluent components can affect throughput and performance. With a number of production deployments under his watchful gaze for over six years, Jason has plenty of experience, stories, and useful information to share.
By the end of the talk you'll have a good understanding of designing the cluster for various scenarios, where the points of latency are to watch and monitor, and how to prevent teams from breaking the cluster behind your back.
This talk is designed for everyone, from those who are just starting out to those who operate Kafka on a daily basis."
Big Data means big hardware, and the less of it we can use to do the job properly, the better the bottom line. Apache Kafka makes up the core of our data pipelines at many organizations, including LinkedIn, and we are on a perpetual quest to squeeze as much as we can out of our systems, from Zookeeper, to the brokers, to the various client applications. This means we need to know how well the system is running, and only then can we start turning the knobs to optimize it. In this talk, we will explore how best to monitor Kafka and its clients to assure they are working well. Then we will dive into how to get the best performance from Kafka, including how to pick hardware and the effect of a variety of configurations in both the broker and clients. We’ll also talk about setting up Kafka for no data loss.
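On the "no data loss" point above, the commonly cited knobs combine producer acknowledgements with broker-side replication settings. Shown here as an illustrative Python dict of real Kafka property names; the exact values are a durability/latency trade-off, not a universal recipe:

```python
# Illustrative durability-oriented Kafka settings; tune for your own
# latency and availability trade-offs before copying anything verbatim.
producer_config = {
    "acks": "all",                 # wait for all in-sync replicas
    "enable.idempotence": True,    # avoid duplicates on internal retries
    "retries": 2147483647,         # keep retrying transient failures
}
broker_or_topic_config = {
    "replication.factor": 3,                  # survive broker loss
    "min.insync.replicas": 2,                 # acks=all needs >= 2 alive
    "unclean.leader.election.enable": False,  # never elect a stale leader
}
```

The interplay is the important part: acks=all alone is not enough, because with min.insync.replicas=1 a write can be acknowledged by a lone leader and then lost when that leader dies.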
Kat Grigg, Confluent, Senior Customer Success Architect + Jen Snipes, Confluent, Senior Customer Success Architect
This presentation will cover tips and best practices for Apache Kafka. In this talk, we will be covering the basic internals of Kafka and how these components integrate together including brokers, topics, partitions, consumers and producers, replication, and Zookeeper. We will be talking about the major categories of operations you need to be setting up and monitoring including configuration, deployment, maintenance, monitoring and then debugging.
https://www.meetup.com/KafkaBayArea/events/270915296/
Practical learnings from running thousands of Flink jobs | Flink Forward
Flink Forward San Francisco 2022.
Task Managers constantly running out of memory? Flink job keeps restarting from cryptic Akka exceptions? Flink job running but doesn’t seem to be processing any records? We share practical learnings from running thousands of Flink Jobs for different use-cases and take a look at common challenges they have experienced such as out-of-memory errors, timeouts and job stability. We will cover memory tuning, S3 and Akka configurations to address common pitfalls and the approaches that we take on automating health monitoring and management of Flink jobs at scale.
by Hong Teoh & Usamah Jassat
Meta/Facebook's database serving social workloads runs on top of MyRocks (MySQL on RocksDB). This means our performance and reliability depend a lot on RocksDB. Not just MyRocks; we also have other important systems running on top of RocksDB. We have learned many lessons from operating and debugging RocksDB at scale.
In this session, we will offer an overview of RocksDB, key differences from InnoDB, and share a few interesting lessons learned from production.
Apache Kafka is becoming the message bus for transferring huge volumes of data from various sources into Hadoop. It is also enabling many real-time system frameworks and use cases.
Managing and building clients around Apache Kafka can be challenging. In this talk, we will go through best practices for deploying Apache Kafka in production: how to secure a Kafka cluster, how to pick topic partitions, upgrading to newer versions, and migrating to the new Kafka producer and consumer APIs.
We will also talk about best practices for running producers and consumers.
In the Kafka 0.9 release, we added SSL wire encryption, SASL/Kerberos for user authentication, and pluggable authorization. Kafka now allows authentication of users and access control over who can read and write to a Kafka topic. Apache Ranger also uses a pluggable authorization mechanism to centralize security for Kafka and other Hadoop ecosystem projects.
We will showcase an open-sourced Kafka REST API and an admin UI that help users create topics, reassign partitions, issue Kafka ACLs, and monitor consumer offsets.
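For context, a client talking to a cluster secured this way typically carries settings along these lines. The property names are real Kafka client configs, but the file paths, password, and service name shown are placeholders, not values from this talk:

```python
# Illustrative client settings for a Kafka 0.9+ cluster secured with
# SASL/Kerberos and SSL; paths and credentials are placeholders only.
secure_client_config = {
    "security.protocol": "SASL_SSL",        # Kerberos auth over TLS
    "sasl.kerberos.service.name": "kafka",  # broker's Kerberos principal
    "ssl.truststore.location": "/etc/kafka/client.truststore.jks",
    "ssl.truststore.password": "changeit",  # placeholder, not a real secret
}
```

Unauthenticated plaintext clients keep working unless the listeners are restricted, which is why securing a cluster is usually rolled out listener by listener.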
Whoops, The Numbers Are Wrong! Scaling Data Quality @ Netflix | DataWorks Summit
Netflix is a famously data-driven company. Data is used to make informed decisions on everything from content acquisition to content delivery, and everything in-between. As with any data-driven company, it’s critical that data used by the business is accurate. Or, at worst, that the business has visibility into potential quality issues as soon as they arise. But even in the most mature data warehouses, data quality can be hard. How can we ensure high quality in a cloud-based, internet-scale, modern big data warehouse employing a variety of data engineering technologies?
In this talk, Michelle Ufford will share how the Data Engineering & Analytics team at Netflix is doing exactly that. We’ll kick things off with a quick overview of Netflix’s analytics environment, then dig into details of our data quality solution. We’ll cover what worked, what didn’t work so well, and what we plan to work on next. We’ll conclude with some tips and lessons learned for ensuring data quality on big data.
A Thorough Comparison of Delta Lake, Iceberg and Hudi | Databricks
Recently, a set of modern table formats such as Delta Lake, Hudi, and Iceberg has sprung up. Along with the Hive Metastore, these table formats are trying to solve problems that have stood in traditional data lakes for a long time, with declared features like ACID transactions, schema evolution, upsert, time travel, and incremental consumption.
KSQL is an open source streaming SQL engine for Apache Kafka. Come hear how KSQL makes it easy to get started with a wide-range of stream processing applications such as real-time ETL, sessionization, monitoring and alerting, or fraud detection. We'll cover both how to get started with KSQL and some under-the-hood details of how it all works.
With Apache Kafka 0.9, the community has introduced a number of features to make data streams secure. In this talk, we’ll explain the motivation for making these changes, discuss the design of Kafka security, and explain how to secure a Kafka cluster. We will cover common pitfalls in securing Kafka, and talk about ongoing security work.
ksqlDB: A Stream-Relational Database System | Confluent
Speaker: Matthias J. Sax, Software Engineer, Confluent
ksqlDB is a distributed event streaming database system that allows users to express SQL queries over relational tables and event streams. The project was released by Confluent in 2017 and is hosted on GitHub and developed with an open-source spirit. ksqlDB is built on top of Apache Kafka®, a distributed event streaming platform. In this talk, we discuss ksqlDB’s architecture, which is influenced by Apache Kafka and its stream processing library, Kafka Streams. We explain how ksqlDB executes continuous queries while achieving fault tolerance and high availability. Furthermore, we explore ksqlDB’s streaming SQL dialect and the different types of supported queries.
Matthias J. Sax is a software engineer at Confluent working on ksqlDB. He mainly contributes to Kafka Streams, Apache Kafka's stream processing library, which serves as ksqlDB's execution engine. Furthermore, he helps evolve ksqlDB's "streaming SQL" language. In the past, Matthias also contributed to Apache Flink and Apache Storm and he is an Apache committer and PMC member. Matthias holds a Ph.D. from Humboldt University of Berlin, where he studied distributed data stream processing systems.
https://db.cs.cmu.edu/events/quarantine-db-talk-2020-confluent-ksqldb-a-stream-relational-database-system/
Kafka Streams is a new stream processing library natively integrated with Kafka. It has a very low barrier to entry, easy operationalization, and a natural DSL for writing stream processing applications. As such it is the most convenient yet scalable option to analyze, transform, or otherwise process data that is backed by Kafka. We will provide the audience with an overview of Kafka Streams including its design and API, typical use cases, code examples, and an outlook of its upcoming roadmap. We will also compare Kafka Streams' light-weight library approach with heavier, framework-based tools such as Spark Streaming or Storm, which require you to understand and operate a whole different infrastructure for processing real-time data in Kafka.
Performance Tuning RocksDB for Kafka Streams' State Stores (Dhruba Borthakur,... | Confluent
RocksDB is the default state store for Kafka Streams. In this talk, we will discuss how to improve single node performance of the state store by tuning RocksDB and how to efficiently identify issues in the setup. We start with a short description of the RocksDB architecture. We discuss how Kafka Streams restores the state stores from Kafka by leveraging RocksDB features for bulk loading of data. We give examples of hand-tuning the RocksDB state stores based on Kafka Streams metrics and RocksDB’s metrics. At the end, we dive into a few RocksDB command line utilities that allow you to debug your setup and dump data from a state store. We illustrate the usage of the utilities with a few real-life use cases. The key takeaway from the session is the ability to understand the internal details of the default state store in Kafka Streams so that engineers can fine-tune their performance for different varieties of workloads and operate the state stores in a more robust manner.
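A recurring question behind this kind of tuning is how much off-heap memory the state stores will claim. The back-of-the-envelope sketch below is illustrative arithmetic only, using parameter names and default-style sizes chosen for the example rather than measured from any real deployment:

```python
def rocksdb_memory_estimate(num_stores,
                            write_buffer_mb=16,
                            max_write_buffers=3,
                            block_cache_mb=50):
    """Rough upper bound on off-heap memory across all state stores:
    each store can hold several memtables plus its own block cache.
    Illustrative only; real usage also includes index and filter
    blocks, and caches can be shared across stores to cap the total."""
    per_store = write_buffer_mb * max_write_buffers + block_cache_mb
    return num_stores * per_store
```

A Streams instance with many partitions can hold many stores, so the per-store numbers multiply quickly; bounding the total (for example by sharing one cache across stores) is usually the first tuning step.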
Introducing the Apache Flink Kubernetes Operator | Flink Forward
Flink Forward San Francisco 2022.
The Apache Flink Kubernetes Operator provides a consistent approach to manage Flink applications automatically, without any human interaction, by extending the Kubernetes API. Given the increasing adoption of Kubernetes-based Flink deployments, the community has been working on a Kubernetes-native solution as part of Flink that can benefit from the rich experience of community members and ultimately make Flink easier to adopt. In this talk we give a technical introduction to the Flink Kubernetes Operator and demonstrate the core features and use cases through in-depth examples.
by
Thomas Weise
ksqlDB is a stream processing SQL engine, which allows stream processing on top of Apache Kafka. ksqlDB is based on Kafka Streams and provides capabilities for consuming messages from Kafka, analysing these messages in near real time with a SQL-like language, and producing results back to a Kafka topic. Thus, not a single line of Java code has to be written, and you can reuse your SQL know-how. This lowers the bar for getting started with stream processing significantly.
ksqlDB offers powerful stream processing capabilities, such as joins, aggregations, time windows, and support for event time. In this talk I will present how ksqlDB integrates with the Kafka ecosystem and demonstrate how easy it is to implement a solution using ksqlDB for the most part. This will be done in a live demo on a fictitious IoT sample.
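The windowed aggregations such a demo typically runs, for instance a per-device event count over a tumbling window, boil down to grouping by key and window start. Here is a plain-Python sketch of that semantics, independent of ksqlDB's actual engine, with made-up sensor keys:

```python
def tumbling_count(events, window_ms):
    """Count events per (key, window) under tumbling windows.
    events: iterable of (event_time_ms, key) pairs. Each event falls
    into exactly one window, aligned to multiples of window_ms."""
    counts = {}
    for ts, key in events:
        window_start = ts - ts % window_ms
        counts[(key, window_start)] = counts.get((key, window_start), 0) + 1
    return counts
```

Because the grouping uses the event's own timestamp rather than arrival time, late-arriving events still land in the correct window, which is what "support for event time" buys you.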
A brief introduction to Apache Kafka and describe its usage as a platform for streaming data. It will introduce some of the newer components of Kafka that will help make this possible, including Kafka Connect, a framework for capturing continuous data streams, and Kafka Streams, a lightweight stream processing library.
Getting Started with Confluent Schema Registryconfluent
Getting started with Confluent Schema Registry, Patrick Druley, Senior Solutions Engineer, Confluent
Meetup link: https://www.meetup.com/Cleveland-Kafka/events/272787313/
Stephan Ewen - Experiences Running Flink at Very Large Scale | Ververica
This talk shares experiences from deploying and tuning Flink stream processing applications for very large scale. We share lessons learned from users, contributors, and our own experiments about running demanding streaming jobs at scale. The talk will explain what aspects currently render a job particularly demanding, show how to configure and tune a large-scale Flink job, and outline what the Flink community is working on to make the out-of-the-box experience as smooth as possible. We will, for example, dive into analyzing and tuning checkpointing, selecting and configuring state backends, understanding common bottlenecks, and understanding and configuring network parameters.
Disaster Recovery Options Running Apache Kafka in Kubernetes with Rema Subra... | Hosted by Confluent
Active-Active, Active-Passive, and stretch clusters are hallmark patterns that have been the gold standard in Apache Kafka® disaster recovery architectures for years. Moving to Kubernetes requires unpacking these patterns and choosing a configuration that allows you to meet the same RTO and RPO requirements.
In this talk, we will cover how Active-Active/Active-Passive modes for disaster recovery have worked in the past and how the architecture evolves with deploying Apache Kafka on Kubernetes. We'll also look at how stretch clusters sitting on this architecture give a disaster recovery solution that's built-in!
Armed with this information, you will be able to architect your new Apache Kafka Kubernetes deployment (or retool your existing one) to achieve the resilience you require.
by
Thomas Weise
ksqlDB is a stream processing SQL engine, which allows stream processing on top of Apache Kafka. ksqlDB is based on Kafka Stream and provides capabilities for consuming messages from Kafka, analysing these messages in near-realtime with a SQL like language and produce results again to a Kafka topic. By that, no single line of Java code has to be written and you can reuse your SQL knowhow. This lowers the bar for starting with stream processing significantly.
ksqlDB offers powerful capabilities of stream processing, such as joins, aggregations, time windows and support for event time. In this talk I will present how KSQL integrates with the Kafka ecosystem and demonstrate how easy it is to implement a solution using ksqlDB for most part. This will be done in a live demo on a fictitious IoT sample.
A brief introduction to Apache Kafka and describe its usage as a platform for streaming data. It will introduce some of the newer components of Kafka that will help make this possible, including Kafka Connect, a framework for capturing continuous data streams, and Kafka Streams, a lightweight stream processing library.
Getting Started with Confluent Schema Registryconfluent
Getting started with Confluent Schema Registry, Patrick Druley, Senior Solutions Engineer, Confluent
Meetup link: https://www.meetup.com/Cleveland-Kafka/events/272787313/
Stephan Ewen - Experiences running Flink at Very Large ScaleVerverica
This talk shares experiences from deploying and tuning Flink steam processing applications for very large scale. We share lessons learned from users, contributors, and our own experiments about running demanding streaming jobs at scale. The talk will explain what aspects currently render a job as particularly demanding, show how to configure and tune a large scale Flink job, and outline what the Flink community is working on to make the out-of-the-box for experience as smooth as possible. We will, for example, dive into - analyzing and tuning checkpointing - selecting and configuring state backends - understanding common bottlenecks - understanding and configuring network parameters
Disaster Recovery Options Running Apache Kafka in Kubernetes with Rema Subra...HostedbyConfluent
Active-Active, Active-Passive, and stretch clusters are hallmark patterns that have been the gold standard in Apache Kafka® disaster recovery architectures for years. Moving to Kubernetes requires unpacking these patterns and choosing a configuration that allows you to meet the same RTO and RPO requirements.
In this talk, we will cover how Active-Active/Active-Passive modes for disaster recovery have worked in the past and how the architecture evolves with deploying Apache Kafka on Kubernetes. We'll also look at how stretch clusters sitting on this architecture give a disaster recovery solution that's built-in!
Armed with this information, you will be able to architect your new Apache Kafka Kubernetes deployment (or retool your existing one) to achieve the resilience you require.
Catch the Wave: SAP Event-Driven and Data Streaming for the Intelligence Ente...confluent
In our exclusive webinar, you'll learn why event-driven architecture is the key to unlocking cost efficiency, operational effectiveness, and profitability. Gain insights on how this approach differs from API-driven methods and why it's essential for your organization's success.
Unlocking the Power of IoT: A comprehensive approach to real-time insightsconfluent
In today's data-driven world, the Internet of Things (IoT) is revolutionizing industries and unlocking new possibilities. Join Data Reply, Confluent, and Imply as we unveil a comprehensive solution for IoT that harnesses the power of real-time insights.
Workshop híbrido: Stream Processing con Flinkconfluent
El Stream processing es un requisito previo de la pila de data streaming, que impulsa aplicaciones y pipelines en tiempo real.
Permite una mayor portabilidad de datos, una utilización optimizada de recursos y una mejor experiencia del cliente al procesar flujos de datos en tiempo real.
En nuestro taller práctico híbrido, aprenderás cómo filtrar, unir y enriquecer fácilmente datos en tiempo real dentro de Confluent Cloud utilizando nuestro servicio Flink sin servidor.
Industry 4.0: Building the Unified Namespace with Confluent, HiveMQ and Spark...confluent
Our talk will explore the transformative impact of integrating Confluent, HiveMQ, and SparkPlug in Industry 4.0, emphasizing the creation of a Unified Namespace.
In addition to the creation of a Unified Namespace, our webinar will also delve into Stream Governance and Scaling, highlighting how these aspects are crucial for managing complex data flows and ensuring robust, scalable IIoT-Platforms.
You will learn how to ensure data accuracy and reliability, expand your data processing capabilities, and optimize your data management processes.
Don't miss out on this opportunity to learn from industry experts and take your business to the next level.
La arquitectura impulsada por eventos (EDA) será el corazón del ecosistema de MAPFRE. Para seguir siendo competitivas, las empresas de hoy dependen cada vez más del análisis de datos en tiempo real, lo que les permite obtener información y tiempos de respuesta más rápidos. Los negocios con datos en tiempo real consisten en tomar conciencia de la situación, detectar y responder a lo que está sucediendo en el mundo ahora.
Eventos y Microservicios - Santander TechTalkconfluent
Durante esta sesión examinaremos cómo el mundo de los eventos y los microservicios se complementan y mejoran explorando cómo los patrones basados en eventos nos permiten descomponer monolitos de manera escalable, resiliente y desacoplada.
Purpose of the session is to have a dive into Apache, Kafka, Data Streaming and Kafka in the cloud
- Dive into Apache Kafka
- Data Streaming
- Kafka in the cloud
Build real-time streaming data pipelines to AWS with Confluentconfluent
Traditional data pipelines often face scalability issues and challenges related to cost, their monolithic design, and reliance on batch data processing. They also typically operate under the premise that all data needs to be stored in a single centralized data source before it's put to practical use. Confluent Cloud on Amazon Web Services (AWS) provides a fully managed cloud-native platform that helps you simplify the way you build real-time data flows using streaming data pipelines and Apache Kafka.
Q&A with Confluent Professional Services: Confluent Service Meshconfluent
No matter whether you are migrating your Kafka cluster to Confluent Cloud, running a cloud-hybrid environment or are in a different situation where data protection and encryption of sensitive information is required, Confluent Service Mesh allows you to transparently encrypt your data without the need to make code changes to you existing applications.
Citi Tech Talk: Event Driven Kafka Microservicesconfluent
Microservices have become a dominant architectural paradigm for building systems in the enterprise, but they are not without their tradeoffs. Learn how to build event-driven microservices with Apache Kafka
Confluent & GSI Webinars series - Session 3confluent
An in depth look at how Confluent is being used in the financial services industry. Gain an understanding of how organisations are utilising data in motion to solve common problems and gain benefits from their real time data capabilities.
It will look more deeply into some specific use cases and show how Confluent technology is used to manage costs and mitigate risks.
This session is aimed at Solutions Architects, Sales Engineers and Pre Sales, and also the more technically minded business aligned people. Whilst this is not a deeply technical session, a level of knowledge around Kafka would be helpful.
Transforming applications built with traditional messaging solutions such as TIBCO, MQ and Solace to be scalable, reliable and ready for the move to cloud
How can applications built with traditional messaging technologies like TIBCO, Solace and IBM MQ be modernised and be made cloud ready? What are the advantages to Event Streaming approaches to pub/sub vs traditional message queues? What are the strengeths and weaknesses of both approaches, and what use cases and requirements are actually a better fit for messaging than Kafka?
This session will show why the old paradigm does not work and that a new approach to the data strategy needs to be taken. It aims to show how a Data Streaming Platform is integral to the evolution of a company’s data strategy and how Confluent is not just an integration layer but the central nervous system for an organisation
Vous apprendrez également à :
• Créer plus rapidement des produits et fonctionnalités à l’aide d’une suite complète de connecteurs et d’outils de gestion des flux, et à connecter vos environnements à des pipelines de données
• Protéger vos données et charges de travail les plus critiques grâce à des garanties intégrées en matière de sécurité, de gouvernance et de résilience
• Déployer Kafka à grande échelle en quelques minutes tout en réduisant les coûts et la charge opérationnelle associés
Confluent Partner Tech Talk with Synthesisconfluent
A discussion on the arduous planning process, and deep dive into the design/architectural decisions.
Learn more about the networking, RBAC strategies, the automation, and the deployment plan.
GraphRAG is All You need? LLM & Knowledge GraphGuy Korland
Guy Korland, CEO and Co-founder of FalkorDB, will review two articles on the integration of language models with knowledge graphs.
1. Unifying Large Language Models and Knowledge Graphs: A Roadmap.
https://arxiv.org/abs/2306.08302
2. Microsoft Research's GraphRAG paper and a review paper on various uses of knowledge graphs:
https://www.microsoft.com/en-us/research/blog/graphrag-unlocking-llm-discovery-on-narrative-private-data/
State of ICS and IoT Cyber Threat Landscape Report 2024 previewPrayukth K V
The IoT and OT threat landscape report has been prepared by the Threat Research Team at Sectrio using data from Sectrio, cyber threat intelligence farming facilities spread across over 85 cities around the world. In addition, Sectrio also runs AI-based advanced threat and payload engagement facilities that serve as sinks to attract and engage sophisticated threat actors, and newer malware including new variants and latent threats that are at an earlier stage of development.
The latest edition of the OT/ICS and IoT security Threat Landscape Report 2024 also covers:
State of global ICS asset and network exposure
Sectoral targets and attacks as well as the cost of ransom
Global APT activity, AI usage, actor and tactic profiles, and implications
Rise in volumes of AI-powered cyberattacks
Major cyber events in 2024
Malware and malicious payload trends
Cyberattack types and targets
Vulnerability exploit attempts on CVEs
Attacks on counties – USA
Expansion of bot farms – how, where, and why
In-depth analysis of the cyber threat landscape across North America, South America, Europe, APAC, and the Middle East
Why are attacks on smart factories rising?
Cyber risk predictions
Axis of attacks – Europe
Systemic attacks in the Middle East
Download the full report from here:
https://sectrio.com/resources/ot-threat-landscape-reports/sectrio-releases-ot-ics-and-iot-security-threat-landscape-report-2024/
UiPath Test Automation using UiPath Test Suite series, part 3DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 3. In this session, we will cover desktop automation along with UI automation.
Topics covered:
UI automation Introduction,
UI automation Sample
Desktop automation flow
Pradeep Chinnala, Senior Consultant Automation Developer @WonderBotz and UiPath MVP
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
Software Delivery At the Speed of AI: Inflectra Invests In AI-Powered QualityInflectra
In this insightful webinar, Inflectra explores how artificial intelligence (AI) is transforming software development and testing. Discover how AI-powered tools are revolutionizing every stage of the software development lifecycle (SDLC), from design and prototyping to testing, deployment, and monitoring.
Learn about:
• The Future of Testing: How AI is shifting testing towards verification, analysis, and higher-level skills, while reducing repetitive tasks.
• Test Automation: How AI-powered test case generation, optimization, and self-healing tests are making testing more efficient and effective.
• Visual Testing: Explore the emerging capabilities of AI in visual testing and how it's set to revolutionize UI verification.
• Inflectra's AI Solutions: See demonstrations of Inflectra's cutting-edge AI tools like the ChatGPT plugin and Azure Open AI platform, designed to streamline your testing process.
Whether you're a developer, tester, or QA professional, this webinar will give you valuable insights into how AI is shaping the future of software delivery.
Connector Corner: Automate dynamic content and events by pushing a buttonDianaGray10
Here is something new! In our next Connector Corner webinar, we will demonstrate how you can use a single workflow to:
Create a campaign using Mailchimp with merge tags/fields
Send an interactive Slack channel message (using buttons)
Have the message received by managers and peers along with a test email for review
But there’s more:
In a second workflow supporting the same use case, you’ll see:
Your campaign sent to target colleagues for approval
If the “Approve” button is clicked, a Jira/Zendesk ticket is created for the marketing design team
But—if the “Reject” button is pushed, colleagues will be alerted via Slack message
Join us to learn more about this new, human-in-the-loop capability, brought to you by Integration Service connectors.
And...
Speakers:
Akshay Agnihotri, Product Manager
Charlie Greenberg, Host
Dev Dives: Train smarter, not harder – active learning and UiPath LLMs for do...UiPathCommunity
💥 Speed, accuracy, and scaling – discover the superpowers of GenAI in action with UiPath Document Understanding and Communications Mining™:
See how to accelerate model training and optimize model performance with active learning
Learn about the latest enhancements to out-of-the-box document processing – with little to no training required
Get an exclusive demo of the new family of UiPath LLMs – GenAI models specialized for processing different types of documents and messages
This is a hands-on session specifically designed for automation developers and AI enthusiasts seeking to enhance their knowledge in leveraging the latest intelligent document processing capabilities offered by UiPath.
Speakers:
👨🏫 Andras Palfi, Senior Product Manager, UiPath
👩🏫 Lenka Dulovicova, Product Program Manager, UiPath
Search and Society: Reimagining Information Access for Radical FuturesBhaskar Mitra
The field of Information retrieval (IR) is currently undergoing a transformative shift, at least partly due to the emerging applications of generative AI to information access. In this talk, we will deliberate on the sociotechnical implications of generative AI for information access. We will argue that there is both a critical necessity and an exciting opportunity for the IR community to re-center our research agendas on societal needs while dismantling the artificial separation between the work on fairness, accountability, transparency, and ethics in IR and the rest of IR research. Instead of adopting a reactionary strategy of trying to mitigate potential social harms from emerging technologies, the community should aim to proactively set the research agenda for the kinds of systems we should build inspired by diverse explicitly stated sociotechnical imaginaries. The sociotechnical imaginaries that underpin the design and development of information access technologies needs to be explicitly articulated, and we need to develop theories of change in context of these diverse perspectives. Our guiding future imaginaries must be informed by other academic fields, such as democratic theory and critical theory, and should be co-developed with social science scholars, legal scholars, civil rights and social justice activists, and artists, among others.
JMeter webinar - integration with InfluxDB and GrafanaRTTS
Watch this recorded webinar about real-time monitoring of application performance. See how to integrate Apache JMeter, the open-source leader in performance testing, with InfluxDB, the open-source time-series database, and Grafana, the open-source analytics and visualization application.
In this webinar, we will review the benefits of leveraging InfluxDB and Grafana when executing load tests and demonstrate how these tools are used to visualize performance metrics.
Length: 30 minutes
Session Overview
-------------------------------------------
During this webinar, we will cover the following topics while demonstrating the integrations of JMeter, InfluxDB and Grafana:
- What out-of-the-box solutions are available for real-time monitoring JMeter tests?
- What are the benefits of integrating InfluxDB and Grafana into the load testing stack?
- Which features are provided by Grafana?
- Demonstration of InfluxDB and Grafana using a practice web application
To view the webinar recording, go to:
https://www.rttsweb.com/jmeter-integration-webinar
Let's dive deeper into the world of ODC! Ricardo Alves (OutSystems) will join us to tell all about the new Data Fabric. After that, Sezen de Bruijn (OutSystems) will get into the details on how to best design a sturdy architecture within ODC.
2. Overview
● At the heart of Kafka is the log
● Log replication provides high availability
● Kafka has a solid replication protocol
● 99.999% of the time it does the right thing
● This talk is about the remaining 0.001%
25–26. [A (Follower): r0 r1 | B (Leader): r0 r1 r2 | C (Follower): (empty) → r0]
Followers fetch from the leader.
27. [A (Follower): r0 r1 | B (Leader): r0 r1 r2 | C (Follower): r0]
Leader election is handled by a separate component known as the controller.
28. [State in ZooKeeper: Leader=B, Epoch=0, ISR={A, B, C}]
In order to enable election by the controller, we maintain state in ZooKeeper about the in-sync replicas (ISR).
29–31. [A (Follower, epoch=0): r0 r1 | B (Leader, epoch=0): r0 r1 r2 | C (Follower, epoch=0): r0 | Leader=B, Epoch=0, ISR={A, B, C}]
When there is a state change (e.g. a new leader), the controller sends the updated state to all the replicas.
32–33. The high watermark is the largest offset known to be replicated to all members of the ISR.
34. Records below the high watermark are considered “committed” and are visible to consumers.
35. Records above the high watermark are considered uncommitted.
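The high watermark rule above can be sketched in a few lines of Python (a toy model, not Kafka's actual implementation): the leader's high watermark is the minimum log end offset across the ISR, and everything below it is committed.

```python
# Toy model of the high watermark (offsets are list indices; names illustrative).

def high_watermark(log_end_offsets, isr):
    """log_end_offsets: replica name -> next offset to be written."""
    return min(log_end_offsets[r] for r in isr)

def committed(log, hw):
    """Records below the high watermark are committed and consumer-visible."""
    return log[:hw]

# State as in the slides: B leads with r0 r1 r2; A and C have only r0 replicated.
logs = {"A": ["r0"], "B": ["r0", "r1", "r2"], "C": ["r0"]}
ends = {r: len(log) for r, log in logs.items()}
hw = high_watermark(ends, isr={"A", "B", "C"})
assert hw == 1
assert committed(logs["B"], hw) == ["r0"]          # only r0 is committed
```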
36–38. [C: r0 → r0 r1]
As records are replicated, the high watermark moves forward.
39–44. [B (Leader): races ahead to r0 … r6 | A: catches up to r0 … r4 | C: stuck at r0 r1 | ISR shrinks from {A, B, C} to {A, B}]
If a replica falls behind, it can be removed from the ISR by the leader.
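A toy model of the ISR shrink/expand rule on these slides (function names and the `max_lag` threshold are illustrative; modern Kafka uses a time-based check, `replica.lag.time.max.ms`, but a message-count lag keeps the sketch simple):

```python
# Hedged sketch of ISR maintenance: the leader drops a follower that lags too
# far behind and re-adds it once its fetch position reaches the high watermark.

def update_isr(isr, fetch_offsets, leader_end, hw, max_lag):
    isr = set(isr)
    for replica, offset in fetch_offsets.items():
        if replica in isr and leader_end - offset > max_lag:
            isr.discard(replica)          # fell too far behind
        elif replica not in isr and offset >= hw:
            isr.add(replica)              # caught back up to the high watermark
    return isr

isr = update_isr({"A", "B", "C"}, {"A": 5, "C": 2}, leader_end=7, hw=5, max_lag=2)
assert isr == {"A", "B"}                  # C removed from the ISR
isr = update_isr(isr, {"A": 7, "C": 6}, leader_end=7, hw=6, max_lag=2)
assert isr == {"A", "B", "C"}             # C added back
```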
45–47. [C catches up to r0 … r5 | ISR back to {A, B, C}]
An out-of-sync replica that catches up to the high watermark is added back to the ISR.
48–49. Only replicas in the ISR are eligible to become leader.
50–51. [B fails | State becomes Leader=A, Epoch=1, ISR={A, C}]
When a leader fails, the controller will take it out of the ISR and elect a new leader from the remaining ISR.
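The controller's failover step can be modeled like this (an illustrative sketch, not controller code; the choice of which surviving ISR member becomes leader is simplified to "first in the list"):

```python
# Toy model: drop the failed leader from the ISR, elect a surviving ISR
# member as the new leader, and bump the leader epoch.

def elect_leader(state, failed):
    isr = [r for r in state["isr"] if r != failed]
    if not isr:
        raise RuntimeError("no in-sync replica available")
    return {"leader": isr[0], "epoch": state["epoch"] + 1, "isr": isr}

state = {"leader": "B", "epoch": 0, "isr": ["A", "B", "C"]}
state = elect_leader(state, failed="B")
assert state == {"leader": "A", "epoch": 1, "isr": ["A", "C"]}
```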
52. [A is now Leader (epoch=1); B is still marked Leader (epoch=0) but has failed]
The new leader/ISR state is propagated to the remaining replicas.
53. [A appends r7 r8: r0 r1 r2 r3 r4 r7 r8]
The leader can begin accepting writes immediately.
54–56. [C becomes Follower (epoch=1), truncates r5, then fetches r7: r0 … r5 → r0 … r4 → r0 … r4 r7]
Upon becoming a follower, the replica may have uncommitted data which needs to be truncated.
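The pre-fix truncation rule described here is simple: on becoming a follower, a replica drops everything above its local high watermark. As a one-line sketch (offsets equal list indices; the helper name is hypothetical):

```python
# Toy model of the original rule: a new follower truncates to its local
# high watermark before it starts fetching from the new leader.

def become_follower(log, local_hw):
    return log[:local_hw]     # drop the uncommitted suffix

# C had r0..r5 with a local high watermark of 5, so r5 is dropped:
log = ["r0", "r1", "r2", "r3", "r4", "r5"]
assert become_follower(log, local_hw=5) == ["r0", "r1", "r2", "r3", "r4"]
```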
63–64. Every replica tracks the high watermark separately.
65–66. The leader advances its high watermark based on the fetch offsets of replicas.
67–68. [C: r0 → r0 r1]
The leader piggybacks its high watermark onto fetch responses.
69. At any point in time, the follower high watermarks may be a little behind the leader’s.
71–72. [A: r0 … r3 | B (Leader, epoch=0): r0 … r6 | C: r0 … r5]
Replica B fails.
73–74. [State becomes Leader=C, Epoch=1, ISR={A, C}]
Replica B is removed from the ISR and C is elected as the new leader.
75–77. [A truncates to its local high watermark: r0 r1 r2 r3 → r0 r1]
Replica A finds the new leader and truncates its log to the local high watermark.
78–81. [State becomes Leader=A, Epoch=2, ISR={A}; A becomes Leader (epoch=2)]
Before replica A begins fetching, the new leader fails.
82. [A appends r7 r8 r9: r0 r1 r7 r8 r9]
Leader A then begins accepting writes.
83. But r2 and r3 had already been committed to the ISR!
84–87. [B restarts as Follower (epoch=2) and truncates to its local high watermark: r0 … r6 → r0 r1 r2 r3]
Suppose that B eventually gets restarted.
88. [B fetches from A at offset 4 and appends r9: r0 r1 r2 r3 r9, versus A’s r0 r1 r7 r8 r9]
The logs have now diverged.
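The whole failure sequence of slides 71–88 can be replayed with a small model (plain Python lists standing in for logs; not broker code), which makes the divergence explicit:

```python
# Replay of the data-loss scenario: truncating to the local high watermark
# lets a restarted replica's log diverge from the new leader's log.

def truncate_to_hw(log, hw):
    return log[:hw]

def fetch_from(leader_log, follower_log):
    """Follower appends whatever the leader has past the follower's end offset."""
    return follower_log + leader_log[len(follower_log):]

a = ["r0", "r1", "r2", "r3"]                     # replica A, local hw = 2
b = ["r0", "r1", "r2", "r3", "r4", "r5", "r6"]   # replica B, local hw = 4
a = truncate_to_hw(a, 2)       # B fails, C leads; A truncates to its local hw
a += ["r7", "r8", "r9"]        # C fails too; A leads epoch 2 and accepts writes
b = truncate_to_hw(b, 4)       # B restarts and truncates to its local hw
b = fetch_from(a, b)           # B fetches from A starting at offset 4
assert a == ["r0", "r1", "r7", "r8", "r9"]
assert b == ["r0", "r1", "r2", "r3", "r9"]       # diverged at offsets 2 and 3
```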
90. [A: r0 … r3 | B (failed): r0 … r6 | C (Leader, epoch=1): r0 … r5 | Leader=C, Epoch=1, ISR={A, C}]
Replica B has failed and replica A needs to truncate its log.
91–93. A → C: What is the end offset for epoch=0?
C → A: The end offset is 6.
A: Cool, no truncation needed!
94. r0 r1 r2 r3
r0 r1 r2 r3 r4 r5 r6
r0 r1 r2 r3 r4 r5
A
B
C
Leader
(epoch=0)
Follower
(epoch=1)
Leader Epoch ISR
C 1 A, C
Leader
(epoch=1)
95. r0 r1 r2 r3
r0 r1 r2 r3 r4 r5 r6
r0 r1 r2 r3 r4 r5
A
B
C
Leader
(epoch=0)
Follower
(epoch=1)
Leader Epoch ISR
C 1 A, C
Leader
(epoch=1)
96. r0 r1 r2 r3
r0 r1 r2 r3 r4 r5 r6
r0 r1 r2 r3 r4 r5
A
B
C
Leader
(epoch=0)
Follower
(epoch=1)
Leader Epoch ISR
A 2 A
Leader
(epoch=1)
97. r0 r1 r2 r3 r7 r8
r0 r1 r2 r3 r4 r5 r6
r0 r1 r2 r3 r4 r5
A
B
C
Leader
(epoch=0)
Leader
(epoch=2)
Leader Epoch ISR
A 2 A
Leader
(epoch=1)
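The epoch query on slides 91–93 can be sketched roughly as follows. This is an illustrative simplification, not Kafka's actual implementation: a log is modeled as a list of `(epoch, record)` pairs, and the function names are made up for this sketch.

```python
# Sketch of the leader-epoch truncation check (illustrative, not
# Kafka's code). Each replica keeps a log of (epoch, record) entries.

def leader_end_offset_for_epoch(leader_log, epoch):
    """Leader side: the end offset of `epoch` is the first offset of
    the next higher epoch, or the log end offset if `epoch` is the
    leader's latest epoch."""
    for offset, (e, _rec) in enumerate(leader_log):
        if e > epoch:
            return offset
    return len(leader_log)

def truncate_follower(follower_log, leader_log):
    """Follower side: ask the leader where the follower's last epoch
    ended, and drop any local entries past that point."""
    last_epoch = follower_log[-1][0]
    end = leader_end_offset_for_epoch(leader_log, last_epoch)
    return follower_log[:end]

# The state from slides 90-93: leader C has six epoch-0 records, so
# epoch 0 ends at offset 6; follower A's log ends at 4, so it keeps
# everything.
c_log = [(0, r) for r in ["r0", "r1", "r2", "r3", "r4", "r5"]]
a_log = [(0, r) for r in ["r0", "r1", "r2", "r3"]]
print(leader_end_offset_for_epoch(c_log, 0))  # 6
print(truncate_follower(a_log, c_log) == a_log)  # True: no truncation needed
```

The same check is what truncates a restarted follower that has uncommitted records past the end of the leader's matching epoch.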
99. [A: r0 r1 r2 · B: r0 r1 r2 r3 r4 · C: r0 r1 r2 r3 r4; state: Leader=A, Epoch=1, ISR={A, C}] Replica B has failed and replica A has been elected as the new leader.
100–102. [A appends r7 r8 at epoch 1: r0 r1 r2 r7 r8; epoch cache: (epoch=0, offset=0), (epoch=1, offset=3)]
103. Before replica C can truncate its log, it becomes the new leader.
104–105. [state: Leader=C, Epoch=2, ISR={A, C}; C becomes Leader (epoch=2)]
106. [C appends r9: r0 r1 r2 r3 r4 r9; epoch cache gains (epoch=2, offset=5)]
107. A -> C: What is the end offset for epoch=1?
108. C -> A: The end offset is 5.
109. A: Cool, no truncation needed! (A's log also ends at offset 5.)
110–111. [A fetches from C at offset 5 and appends r9: r0 r1 r2 r7 r8 r9; offsets 3 and 4 have silently diverged]
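The failed check on slides 107–109 can be sketched like this (illustrative; the real protocol's request and response formats differ). Comparing only the returned end offset misses the divergence at offsets 3 and 4, which is the back-to-back leader-election problem addressed by KIP-279.

```python
# Sketch (illustrative) of why the end-offset-only comparison on these
# slides misses divergence after two fast leader changes.

def end_offset_for_epoch(log, epoch):
    """Answer used on these slides: the start offset of the first epoch
    larger than the requested one, or the log end offset."""
    for offset, (e, _rec) in enumerate(log):
        if e > epoch:
            return offset
    return len(log)

# A wrote r7 r8 as leader of epoch 1; C kept B's old r3 r4 and then
# wrote r9 as leader of epoch 2.
replica_a = [(0, "r0"), (0, "r1"), (0, "r2"), (1, "r7"), (1, "r8")]
replica_c = [(0, "r0"), (0, "r1"), (0, "r2"), (0, "r3"), (0, "r4"), (2, "r9")]

# A asks C for the end offset of its last epoch (1); C answers 5, which
# equals A's log end offset, so A concludes no truncation is needed...
print(end_offset_for_epoch(replica_c, 1))  # 5
# ...even though offsets 3 and 4 hold different records on each replica:
print(replica_a[3:5] == replica_c[3:5])  # False
```

The fix requires the replicas to compare epochs as well as offsets, iterating backwards through their epoch caches until they find a point of agreement.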
113. [A: r0 r1 r2 r3 · B: r0 r1 r2 r3 r4 r5 r6 · C: r0 r1 r2; state: Leader=B, Epoch=0, ISR={A, B, C}]
114–115. Follower A fails and is removed from the ISR. [state: ISR={B, C}]
116–118. Replica A could not re-register in order to get the latest leader/ISR state, and continued fetching from the current leader. [A's log grows to r0 r1 r2 r3 r4]
119–121. [B fails and C is elected: Leader=C, Epoch=1, ISR={C}; C becomes Leader (epoch=1)]
122–123. [C appends r7 r8 r9: r0 r1 r2 r7 r8 r9]
124–125. Meanwhile, replica A still thought B was the leader and was still trying to make progress.
126–129. [B restarts as Follower (epoch=1) and truncates its log to r0 r1 r2]
130–131. [B catches up to r0 r1 r2 r7 r8 r9 and rejoins the ISR: ISR={B, C}]
132–133. Once back in the ISR, the controller elected B as leader. [state: Leader=B, Epoch=2, ISR={B, C}]
134–135. Suddenly, replica A was able to make progress again! [A fetches r9 at offset 5: r0 r1 r2 r3 r4 r9, diverged from leader B's log]
136. Reflection
● Our mushy brains are not equipped to think about edge cases in distributed systems.
● How do we know that our fixes are not just trading one edge case for another?
● How do we know there are not more edge cases?
138. TLA+/TLC
● TLA+ is a specification language created by Leslie Lamport.
● TLC is a model checker.
● Think “brute force proof by mathematical induction.”
139. Using LaTeX syntax makes model checking just as much fun as writing research papers!
141–142. Model Checklist
● Define the state and how to initialize it
● Define the valid state transitions
● Define expected state invariants
● Run the model to check invariants
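The checklist above is what a model checker like TLC automates: enumerate every state reachable from the initial states via the transition relation, and check the invariant in each. A toy version of that loop (this is a sketch of the idea, not TLC itself) looks like:

```python
# A miniature model checker: breadth-first search over the state graph,
# checking an invariant in every reachable state.
from collections import deque

def check(init_states, next_states, invariant):
    """Returns a violating state, or None if the invariant holds in
    every state reachable from init_states."""
    seen, queue = set(init_states), deque(init_states)
    while queue:
        state = queue.popleft()
        if not invariant(state):
            return state  # counterexample found
        for nxt in next_states(state):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return None

# Tiny example: a counter that may increment or reset, with the
# invariant "counter stays below 5" -- which is violated at 5.
violating = check(
    init_states=[0],
    next_states=lambda s: [s + 1, 0] if s < 10 else [0],
    invariant=lambda s: s < 5,
)
print(violating)  # 5
```

TLC does the same thing over the much larger state space defined by the TLA+ spec, and reports the shortest trace leading to any invariant violation.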
143–157. [A: r0 r1 r2 r3 · B: r0 r1 r2 r3 r4 r5 r6 · C: r0 r1 r2 r3 r4 r5; state: Leader=B, Epoch=0, ISR={A, B, C}] The model state comprises:
1. Records and the log
2. Replica State
3. Quorum State
4. LeaderAndIsr Propagation
162. Model Checklist (as above)
179. Model Checklist (as above)
180–181. Replication Invariant

    StrongIsr == \A r1 \in Replicas:
        \/ ~ReplicaPresumesLeadership(r1)
        \/ LET hw == replicaState[r1].hw
           IN \A r2 \in quorumState.isr:
                  HasMatchingLogsUpTo(r1, r2, hw)

“If any replica is eligible to return data, then that data must be replicated to all members of the current ISR”
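The same invariant can be restated as a Python predicate over a model state (the dictionary layout and field names here are illustrative, chosen to mirror the TLA+ above):

```python
# StrongIsr restated in Python: for every replica that presumes
# leadership, every record below its high watermark must be present on
# all current ISR members.

def has_matching_logs_up_to(state, r1, r2, offset):
    return state["logs"][r1][:offset] == state["logs"][r2][:offset]

def strong_isr(state):
    for r1 in state["replicas"]:
        if not state["presumes_leadership"][r1]:
            continue  # ~ReplicaPresumesLeadership(r1): nothing to check
        hw = state["hw"][r1]
        if not all(has_matching_logs_up_to(state, r1, r2, hw)
                   for r2 in state["isr"]):
            return False
    return True

# A state like the violating one on slide 185: leader C can serve r2
# and r3, but ISR member A no longer has them.
state = {
    "replicas": ["A", "B", "C"],
    "logs": {"A": ["r0", "r1"],
             "B": ["r0", "r1", "r2", "r3", "r4", "r5", "r6"],
             "C": ["r0", "r1", "r2", "r3", "r4", "r5"]},
    "hw": {"A": 2, "B": 4, "C": 4},
    "presumes_leadership": {"A": False, "B": False, "C": True},
    "isr": ["A", "C"],
}
print(strong_isr(state))  # False
```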
182. [A: r0 r1 r2 r3 · B: r0 r1 r2 r3 r4 r5 r6 · C: r0 r1 r2 r3 r4 r5; state: Leader=C, Epoch=1, ISR={A, C}] Leader A had failed and replica C was being elected as the new leader.
183. Upon becoming a follower of C, replica A would truncate its log to the local high watermark.
184. [A truncates its log to r0 r1]
185. This state violates the StrongIsr property because leader C is eligible to return records r2 and r3, though they are not present on A.
186. Model Checklist (as above)
188. [A: r0 r1 · B: r0 r1 r2 r3 r4 r5 r6 · C: r0 r1 r2 r3 r4 r5; state: Leader=B, Epoch=0, ISR={B, C}] The leader is B and replica A is trying to catch up to rejoin the ISR.
189–190. The leader changes to C. [state: Leader=C, Epoch=1, ISR={B, C}]
191–193. Follower A catches up and rejoins the ISR. [A: r0 r1 r2; state: ISR={A, B, C}]
194. This violates StrongIsr because replica B may have returned records r3, r4, and r5, which A does not yet have.
196. [A: r0 r1 r2 · B: r0 r1 r2 r3 r4 r5 r6 · C: r0 r1 r2 r3 r4 r5; state: Leader=C, Epoch=1, ISR={B, C}] After becoming leader, C only knows that the true high watermark is between its own high watermark and the end of its log.
197. So we wait until the follower has reached the starting offset of this leader’s own epoch before allowing it into the ISR.
198. [B truncates to r0 r1 r2 r3 r4 r5; C appends r7 r8 at epoch 1: r0 r1 r2 r3 r4 r5 r7 r8]
199. [A catches up past the start of epoch 1: r0 r1 r2 r3 r4 r5 r7]
200. [state: ISR={A, B, C}]
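The rule on slides 196–200 can be sketched as a simple admission check (illustrative names; not Kafka's code): the new leader starts its epoch at its log end offset, which bounds the region where the true high watermark may hide, and a follower joins the ISR only once it has fetched past that point.

```python
# Sketch of the ISR-admission fix: a newly elected leader only knows
# the true high watermark lies in [its own hw, its log end offset], so
# it starts its epoch at the log end offset and admits a follower only
# once the follower has covered that whole uncertain region.

def start_new_epoch(leader_log_end_offset):
    # The new epoch begins at the leader's log end offset.
    return leader_log_end_offset

def can_rejoin_isr(follower_fetch_offset, epoch_start_offset):
    # A follower below the epoch start might still be missing records
    # that an earlier leader already exposed to consumers.
    return follower_fetch_offset >= epoch_start_offset

epoch_start = start_new_epoch(leader_log_end_offset=6)  # C elected with 6 records
print(can_rejoin_isr(3, epoch_start))  # False: A might miss committed records
print(can_rejoin_isr(6, epoch_start))  # True: A covers the uncertain region
```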
202. [A: r0 r1 r2 r3 · B: r0 r1 r2 r5 r6 · C: r0 r1 r2 r5 r6; state: Leader=B, Epoch=2, ISR={B, C}] Replica A was a zombie which was still fetching from B. After a couple of leader elections, replica B became the leader again.
203. A -> B: Fetch(offset=4, epoch=0)
204. B -> A: You are fenced!
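The fencing on slides 202–204 can be sketched as follows. This is an illustrative simplification (Kafka's actual check compares epoch end offsets in the fetch protocol): the fetch request carries the epoch of the follower's last record, and the leader rejects the fetch if that (offset, epoch) pair is inconsistent with its own log.

```python
# Sketch of epoch-based fetch fencing: the leader validates the
# follower's claimed (offset, epoch) position against its own log.

class FencedError(Exception):
    pass

def handle_fetch(leader_log, fetch_offset, fetch_epoch):
    """leader_log is a list of (epoch, record) pairs; the follower
    claims the record just before fetch_offset was written in
    fetch_epoch."""
    if fetch_offset > len(leader_log):
        raise FencedError("offset beyond the leader's log")
    if fetch_offset > 0 and leader_log[fetch_offset - 1][0] != fetch_epoch:
        raise FencedError("stale or diverged replica")
    return [rec for _epoch, rec in leader_log[fetch_offset:]]

# B's log after a couple of elections: epoch 0 ended at offset 3, so a
# fetch claiming (offset=4, epoch=0) cannot come from a valid follower.
b_log = [(0, "r0"), (0, "r1"), (0, "r2"), (1, "r5"), (2, "r6")]
try:
    handle_fetch(b_log, fetch_offset=4, fetch_epoch=0)  # zombie A's fetch
except FencedError:
    print("A is fenced")  # the zombie cannot append diverging records
print(handle_fetch(b_log, fetch_offset=3, fetch_epoch=0))  # ['r5', 'r6']
```

A fenced follower falls back to the truncation protocol instead of blindly appending whatever comes back at its fetch offset.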
207. Summary
● Distributed systems are subtle, and we are poorly equipped to reason about edge cases.
● Model checking is a systematic approach to finding these edge cases and verifying that our fixes address them.
● All of the replication fixes we know of will be available in Apache Kafka 2.1.0.
208. Note of Caution
● The model is not the implementation.
● The implementation will have complexity that the model cannot capture.
212. [A: r0 r1 r2 · B: r0 r1 r2 r3 · C: r0 r1 r2 r3; state: Leader=C, Epoch=1, ISR={B, C}] B became a zombie while it was the leader for epoch 0.
213. [C appends r7 r8: r0 r1 r2 r3 r7 r8] The new leader will be accepting writes.
214. [B appends r9 r10: r0 r1 r2 r3 r9 r10] The old leader may accept writes as well!
215–216. As long as the old leader cannot advance its high watermark, there is no semantic violation. [the quorum state gains a version column: Leader=C, Epoch=1, ISR={B, C}, Ver=1]
217–218. The controller sends the latest version of the leader and ISR state to replicas in the LeaderAndIsr request. [B is Leader (epoch=0, version=0); C is Leader (epoch=1, version=1)]
219. This allows for CAS updates, which effectively fence replicas that have old state.
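The version-fencing idea on slides 216–219 amounts to a compare-and-swap on the quorum state. A minimal sketch (illustrative class and method names, not Kafka's code):

```python
# Sketch of versioned leader/ISR updates: every change bumps a version,
# and an update is applied only if the writer read the current version.

class QuorumState:
    def __init__(self, leader, epoch, isr):
        self.leader, self.epoch, self.isr = leader, epoch, isr
        self.version = 0

    def cas_update(self, expected_version, leader, epoch, isr):
        """Apply the change only if nothing changed since
        expected_version was read; returns True on success."""
        if expected_version != self.version:
            return False  # writer acting on stale state is fenced
        self.leader, self.epoch, self.isr = leader, epoch, isr
        self.version += 1
        return True

state = QuorumState(leader="C", epoch=1, isr={"B", "C"})
v = state.version
assert state.cas_update(v, "C", 1, {"A", "B", "C"})  # first writer wins
print(state.cas_update(v, "B", 2, {"B"}))  # False: zombie with old version is fenced
```

A zombie replica that last read version 0 can no longer clobber state written at version 1, no matter how delayed its update is.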
228–231. TLA+ Overview

    VARIABLES var1, var2, ...

    Init ==
        /\ var1 = 1
        /\ ...

    Action1 ==
        /\ var1 =< 10
        /\ var1' = var1 + 1
    ...

    Next ==
        \/ Action1
        \/ Action2
        \/ ...

    Spec == Init /\ []Next

    Invariant ==
        /\ var1 >= 1
        /\ ...

Actions like Action1 specify the set of valid state transitions. The specification is the conjunction of the initial state and all the states reachable by repeatedly applying the `Next` state transition. The invariants are defined so that they should hold after every state transition.