Are you using the fastest query tool for Hadoop? We provide and discuss the latest performance results of the industry-standard TPC-H benchmarks executed across an assortment of open source query tools such as Hive (on MR, Tez, LLAP, and Spark), SparkSQL, Presto, and Drill. Additionally, the performance tests utilize a variety of data sizes, popular storage formats such as ORC, Parquet, and Text, and different compression codecs.
Slides for a presentation on ZooKeeper I gave at the Near Infinity (www.nearinfinity.com) 2012 spring conference.
The associated sample code is on GitHub at https://github.com/sleberknight/zookeeper-samples
A Thorough Comparison of Delta Lake, Iceberg and Hudi – Databricks
Recently, a set of modern table formats such as Delta Lake, Hudi, and Iceberg has sprung up. Along with the Hive Metastore, these table formats try to solve long-standing problems of the traditional data lake with declared features like ACID, schema evolution, upsert, time travel, incremental consumption, etc.
Kafka Streams State Stores Being Persistent – confluent
Being Persistent: A Look Into Kafka Streams State Stores, Neil Buesing, Principal Solutions Architect, Rill Data
Meetup link: https://www.meetup.com/TwinCities-Apache-Kafka/events/284002062/
Stephan Ewen - Experiences running Flink at Very Large Scale – Ververica
This talk shares experiences from deploying and tuning Flink stream processing applications at very large scale. We share lessons learned from users, contributors, and our own experiments about running demanding streaming jobs at scale. The talk explains what aspects currently render a job particularly demanding, shows how to configure and tune a large-scale Flink job, and outlines what the Flink community is working on to make the out-of-the-box experience as smooth as possible. We will, for example, dive into: analyzing and tuning checkpointing, selecting and configuring state backends, understanding common bottlenecks, and understanding and configuring network parameters.
Mario Molina, Software Engineer
CDC systems are usually used to identify changes in data sources, and to capture and replicate those changes to other systems. Companies use CDC to sync data across systems, migrate to the cloud, or even apply stream processing, among other things.
In this presentation we’ll see CDC patterns, how to use it in Apache Kafka, and do a live demo!
https://www.meetup.com/Mexico-Kafka/events/277309497/
Squirreling Away $640 Billion: How Stripe Leverages Flink for Change Data Cap... – Flink Forward
Flink Forward San Francisco 2022.
Being in the payments space, Stripe requires strict correctness and freshness guarantees. We rely on Flink as the natural solution for delivering on this in support of our Change Data Capture (CDC) infrastructure. We heavily rely on CDC as a tool for capturing data change streams from our databases without critically impacting database reliability, scalability, and maintainability. Data derived from these streams is used broadly across the business and powers many of our critical financial reporting systems totalling over $640 Billion in payment volume annually. We use many components of Flink’s flexible DataStream API to perform aggregations and abstract away the complexities of stream processing from our downstreams. In this talk, we’ll walk through our experience from the very beginning to what we have in production today. We’ll share stories around the technical details and trade-offs we encountered along the way.
by Jeff Chao
Apache Spark continues to grow in popularity due to advanced analytics/machine learning, high-performance processing, real-time streaming, and multiple-language support. Big Data technology is adding more data processing options to an already long list of legacy databases and file systems. As a result, enterprises continue to look for effective and approachable ways to federate all these data sources to solve business information needs. One under-appreciated feature of Spark is its ability to quickly and powerfully enable federated data access. This presentation will discuss and demonstrate using Spark to query and combine multiple disparate data sources. We will see how to access the various data sources from Spark, normalize them to Spark RDDs, and combine them for processing. The demo will show combining sources such as HDFS, JSON files, HBase, Hive, and PostgreSQL, and writing the result back to a data mart for analysis. We will also show the use of SparkSQL to access federated data in Spark through the Spark Thrift Server using the Tableau BI tool.
Observability for Data Pipelines With OpenLineage – Databricks
Data is increasingly becoming core to many products, whether to provide recommendations for users, to get insights on how they use the product, or to use machine learning to improve the experience. This creates a critical need for reliable data operations and for understanding how data is flowing through our systems. Data pipelines must be auditable, reliable, and run on time. This proves particularly difficult in a constantly changing, fast-paced environment.
Collecting this lineage metadata as data pipelines are running provides an understanding of the dependencies between the many teams consuming and producing data, and of how constant changes impact them. It is the underlying foundation that enables the many use cases related to data operations. The OpenLineage project is an API standardizing this metadata across the ecosystem, reducing complexity and duplicate work in collecting lineage information. It enables the many consumers of lineage in the ecosystem, whether they focus on operations, governance, or security.
Marquez is an open source project, part of the LF AI & Data Foundation, which instruments data pipelines to collect lineage and metadata and enable those use cases. It implements the OpenLineage API and provides context by making dependencies across organizations and technologies visible as they change over time.
Cassandra Data Modeling - Practical Considerations @ Netflix – nkorla1share
The Cassandra community has consistently requested that we cover C* schema design concepts. This presentation goes in depth on the following topics:
- Schema design
- Best Practices
- Capacity Planning
- Real World Examples
Installation of Grafana on Linux; connectivity with the Prometheus database; installation of Prometheus; installation of node_exporter and Tomcat exporter; installation and configuration of Alertmanager. Detailed step-by-step installation and operation.
We present multiple anti-patterns utilizing Redis in unconventional ways to get the maximum out of Apache Spark. All examples presented are tried and tested in production at scale at Adobe. The most common integration is spark-redis, which interfaces with Redis as a DataFrame backing store or as an upstream for Structured Streaming. We deviate from the common use cases to explore where Redis can plug gaps while scaling out high-throughput applications in Spark.
Niche 1: Long-Running Spark Batch Job - Dispatch New Jobs by Polling a Redis Queue
- Why? Custom queries on top of a table; we load the data once and query N times
- Why not Structured Streaming?
- A working solution using Redis
Niche 2: Distributed Counters
- Problems with Spark Accumulators
- Utilizing Redis hashes as distributed counters
- Precautions for retries and speculative execution
- Pipelining to improve performance
Data Quality With or Without Apache Spark and Its Ecosystem – Databricks
A few solutions exist in the open-source community, either in the form of libraries or complete stand-alone platforms, which can be used to assure a certain level of data quality, especially when continuous imports happen. Organisations may consider picking one of the available options: Apache Griffin, Deequ, DDQ, and Great Expectations. In this presentation we'll compare these different open-source products across different dimensions, like maturity, documentation, extensibility, and features like data profiling and anomaly detection.
Where is my bottleneck? Performance troubleshooting in Flink – Flink Forward
Flink Forward San Francisco 2022.
In this talk, we will cover various topics around performance issues that can arise when running a Flink job and how to troubleshoot them. We'll start with the basics, like understanding what the job is doing and what backpressure is. Next, we will see how to identify bottlenecks and which tools or metrics can be helpful in the process. Finally, we will also discuss potential performance issues during the checkpointing or recovery process, as well as some tips and Flink features that can speed up checkpointing and recovery times.
by Piotr Nowojski
Performance Tuning RocksDB for Kafka Streams' State Stores (Dhruba Borthakur,... – confluent
RocksDB is the default state store for Kafka Streams. In this talk, we will discuss how to improve single node performance of the state store by tuning RocksDB and how to efficiently identify issues in the setup. We start with a short description of the RocksDB architecture. We discuss how Kafka Streams restores the state stores from Kafka by leveraging RocksDB features for bulk loading of data. We give examples of hand-tuning the RocksDB state stores based on Kafka Streams metrics and RocksDB’s metrics. At the end, we dive into a few RocksDB command line utilities that allow you to debug your setup and dump data from a state store. We illustrate the usage of the utilities with a few real-life use cases. The key takeaway from the session is the ability to understand the internal details of the default state store in Kafka Streams so that engineers can fine-tune their performance for different varieties of workloads and operate the state stores in a more robust manner.
Big data real-time architectures -
How to do big data processing in real time?
What architectures are out there to support this paradigm?
Which one should we choose?
What advantages and pitfalls do they contain?
Using Apache Arrow, Calcite, and Parquet to Build a Relational Cache – Dremio Corporation
From DataEngConf 2017 - Everybody wants to get to data faster. As we move from more general solutions to specific optimization techniques, the level of performance impact grows. This talk will discuss how layering in-memory caching, columnar storage, and relational caching can combine to provide a substantial improvement in overall data science and analytical workloads. It will include a detailed overview of how you can use Apache Arrow, Calcite, and Parquet to achieve performance improvements of multiple orders of magnitude over what is currently possible.
ODSC 2019: Sessionisation via stochastic periods for root event identification – Kuldeep Jiwani
In today's world, the majority of information is generated by self-sustaining systems like various kinds of bots, crawlers, servers, online services, etc. This information flows on the axis of time and is generated by these actors under some complex logic: for example, a stream of buy/sell order requests from an order gateway in the financial world, a stream of web requests from a monitoring / crawling service on the web, or a hacker's bot sitting on the internet and attacking various computers. Although we may not be able to know the motive or intention behind these data sources, via some unsupervised techniques we can try to infer the pattern or correlate the events based on their multiple occurrences on the axis of time. Associating a chain of events in order of time helps in doing a root event analysis. In certain cases a time-ordered correlation and root event identification is good enough to automatically identify signatures of various malicious actors and take appropriate corrective actions to stop cyber attacks, malicious social campaigns, etc.
Sessionisation is one such unsupervised technique that tries to find the signal in a stream of events associated with a timestamp. In an ideal world this would resolve to finding periods in a mixture of sinusoidal waves. But in the real world this is a much more complex activity, as even the systematic events generated by machines over the internet behave quite erratically. So the notion of a period for a signal also changes in the real world: we can no longer associate it with a single number; it has to be treated as a random variable, with expected values and associated variance. Hence we need to model "stochastic periods" and learn their probability distributions in an unsupervised manner.
The main focus of this talk is to showcase applied data science techniques for discovering stochastic periods. There are many ways to obtain periods in data, so the journey begins with a walk-through of existing techniques like the FFT (Fast Fourier Transform), then discusses Gaussian Mixture Models. After highlighting the shortcomings of these techniques, we will succinctly explain one of the most general non-parametric Bayesian approaches to solving this problem. Without going too deep into the complex math, we will get back to applied data science and discuss a much simpler technique that can solve the same problem if certain assumptions are satisfied.
In this talk we will demonstrate some time-based patterns we discovered while working on a security analytics use case that uses Sessionisation. We will demonstrate such patterns based on an open-source malware attack dataset that is publicly available.
Key concepts explained in the talk: Sessionisation, Bayesian techniques of machine learning, Gaussian Mixture Models, kernel density estimation, FFT, stochastic periods, probabilistic modelling, Bayesian non-parametric methods.
Using Apache Pulsar to Provide Real-Time IoT Analytics on the Edge – DataWorks Summit
The business value of data decreases rapidly after it is created, particularly in use cases such as fraud prevention, cybersecurity, and real-time system monitoring. The high-volume, high-velocity datasets used to feed these use cases often contain valuable, but perishable, insights that must be acted upon immediately.
In order to maximize the value of their data, enterprises must fundamentally change their approach to processing real-time data, focusing on reducing their decision latency on the perishable insights that exist within their real-time data streams, thereby enabling the organization to act upon them while the window of opportunity is open.
Generating timely insights in a high-volume, high-velocity data environment is challenging for a multitude of reasons. First, as the volume of data increases, so does the amount of time required to transmit it back to the datacenter and process it. Second, the higher the velocity of the data, the faster the data and the insights derived from it lose value.
In this talk, we will present a solution based on Apache Pulsar Functions that significantly reduces decision latency by using probabilistic algorithms to perform analytic calculations on the edge.
Data Platform at Twitter: Enabling Real-time & Batch Analytics at Scale – Sriram Krishnan
The Data Platform at Twitter supports engineers and data scientists running batch jobs on Hadoop clusters of several thousand nodes, and real-time jobs on top of systems such as Storm. In this presentation, I discuss the overall Data Platform stack at Twitter. In particular, I talk about enabling real-time and batch analytics at scale with the help of Scalding, which is a Scala DSL for batch jobs using MapReduce; Summingbird, which is a framework for combined real-time and batch processing; and Tsar, which is a framework for real-time time-series aggregations.
Uwe Friedrichsen – Extreme availability and self-healing data with CRDTs – NoSQLmatters
There are several challenges in the NoSQL world. Especially if you have very high availability requirements, you have to accept temporal inconsistencies, which you need to resolve explicitly. This is usually a tough job which requires implementing case-by-case business logic or even bothering the users to decide about the correct state of your data. Wouldn't it be great if we could solve this conflict resolution and data reconciliation process in a generic way at a purely technical level? That's exactly what CRDTs (Conflict-free Replicated Data Types) are about. CRDTs are data structures that are guaranteed to converge to a desired state while enabling extreme availability of the datastore. In this session you will learn what CRDTs are, how to design them, what you can do with them, and what their limitations and tradeoffs are - of course garnished with lots of tips and tricks. Get ready to push the availability of your datastore to the max!
A Production Quality Sketching Library for the Analysis of Big Data – Databricks
In the analysis of big data there are often problem queries that don’t scale because they require huge compute resources to generate exact results, or don’t parallelize well.
Big Data is a new term used in business analytics to identify datasets that we cannot manage with current methodologies or data mining software tools due to their large size and complexity. Big Data mining is the capability of extracting useful information from these large datasets or streams of data. New mining techniques are necessary due to the volume, variability, and velocity of such data.
In this talk, we will focus on advanced techniques in Big Data mining in real time using evolving data stream techniques: using a small amount of time and memory resources, and being able to adapt to changes. We will discuss a social network application of data stream mining to compute user influence probabilities. And finally, we will present the MOA software framework with classification, regression, and frequent pattern methods, and the SAMOA distributed streaming software that runs on top of Storm, Samza and S4.
With the rise of NoSQL databases, consistency models that are less strict than ACID transactions became popular again. After the first enthusiasm, the developer community became aware that those relaxed consistency models hold some new challenges they never knew about in the ACID world. Fortunately, there are some concepts for how to deal with those challenges. This presentation gives a rough introduction to the different consistency models that are available and their characteristics. Then it focuses on two techniques for dealing with relaxed consistency. The first is quorum-based reads and writes, which provide a client-side strong consistency model (even if the database only implements eventual consistency). Then CRDTs (Conflict-free Replicated Data Types) are presented. CRDTs are self-stabilizing data structures designed for environments with extremely high availability requirements - and thus extremely weak consistency guarantees. Even though, as always, the majority of the information is on the voice track, I designed the slides in a way that they also provide some useful information without the voice.
This slide deck explains how best to deal with state in scalable systems, i.e. pushing it to the system boundaries (client, data store) and trying to avoid state in between.
It then picks two scenarios - one in the frontend part and one in the backend part of a system - and shows concrete techniques to deal with them.
The frontend part examines how to deal with the session state of servlet containers in scalable scenarios and introduces the concept of a shared session cache layer. An example implementation using Redis is also shown.
The backend part examines how to deal with potential data inconsistencies that can occur if maximum availability of the data store is required and eventual consistency is used. The usual way is to resolve inconsistencies manually by implementing business-specific logic or - even worse - asking the user to resolve them. A purely technical solution called CRDTs (Conflict-free Replicated Data Types) is then shown. CRDTs, based on sound mathematical concepts, are self-stabilizing data structures that offer a generic way to resolve inconsistencies in an eventually consistent data store. Besides some theory, some examples are shown to give a feeling for how CRDTs work in practice.
Sean Kandel - Data profiling: Assessing the overall content and quality of a ... – huguk
The task of “data profiling”—assessing the overall content and quality of a data set—is a core aspect of the analytic experience. Traditionally, profiling was a fairly cut-and-dried task: load the raw numbers into a stat package, run some basic descriptive statistics, and report the output in a summary file or perhaps a simple data visualization. However, data volumes can be so large today that traditional tools and methods for computing descriptive statistics become intractable; even with scalable infrastructure like Hadoop, aggressive optimization and statistical approximation techniques must be used. In this talk Sean will cover technical challenges in keeping data profiling agile in the Big Data era. He will discuss both research results and real-world best practices used by analysts in the field, including methods for sampling, summarizing and sketching data, and the pros and cons of using these various approaches.
Sean is Trifacta’s Chief Technical Officer. He completed his Ph.D. at Stanford University, where his research focused on user interfaces for database systems. At Stanford, Sean led development of new tools for data transformation and discovery, such as Data Wrangler. He previously worked as a data analyst at Citadel Investment Group.
Similar to Approximation Data Structures for Streaming Applications (20)
Functional Domain Modeling - The ZIO 2 Way – Debasish Ghosh
A principled way to design and implement functional domain models using some of the patterns of domain-driven design. DDD, as the name suggests, is focused on the domain model, and the patterns of architecture that it encourages are also based on how we think of interactions amongst the basic abstractions of the domain. Of course, the primary goal of the talk is to discuss how Scala and ZIO 2 can be a potent combination for realizing the implementation of such models. This is not a talk on FP; the focus will be on how to structure and modularise an application based on some of the patterns of DDD.
Algebraic Thinking for Evolution of Pure Functional Domain Models – Debasish Ghosh
The focus of the talk is to emphasize the importance of algebraic thinking when designing pure functional domain models. The talk begins with the definition of an algebra as consisting of a carrier type, a set of operations/functions, and a set of laws on those operations. Using examples from the standard library, the talk shows how thinking of an abstraction in terms of its algebra is more intuitive than discussing its operational semantics. The talk also discusses the virtues of parametricity and compositionality in designing proper algebras.
Algebras are compositional and help build larger algebras out of smaller ones. We start with base level types available in standard libraries and compose larger programs out of them. We take a real life use case for a domain model and illustrate how we can define the entire model using the power of algebraic composition of the various types. We talk about how to model side-effects as pure abstractions using algebraic effects. At no point we will talk about implementations.
At the end of the talk we will have a working model built completely out of the underlying algebra of the domain language.
Architectural Patterns in Building Modular Domain Models – Debasish Ghosh
The main theme of the talk is how to use algebraic and functional techniques to build modular domain models that are pure and compositional even in the presence of side-effects. I discuss the use of pure algebraic effects to abstract side-effects thereby keeping the model compositional.
This is going to be a discussion about design patterns. But I promise it’s going to be very different from the Gang of Four patterns that we all have used and loved in Java.
It doesn't have any mathematics or category theory - it's about developing an insight that lets you identify code structures that you think may be improved with a beautiful transformation of an algebraic pattern.
In the earlier days of Java coding we used to feel proud when we could locate a piece of code that could be transformed into an abstract factory, with the factory bean injected using Spring DI. The result was that we ended up maintaining not only Java code but quite a bit of XML too, untyped and unsafe. This was the DI pattern in full glory. In this session we will discuss patterns that don't look like external artifacts - they are part of the language, they have some mathematical foundations in the sense that they have an algebra, and they compose organically to evolve larger abstractions.
DSL - expressive syntax on top of a clean semantic model – Debasish Ghosh
Does a DSL mean compromising domain model purity for an ultra-expressive syntax? This presentation discusses how to evolve your DSL syntax as a sublanguage of combinators on top of an expressive domain model.
Code reviews are vital for ensuring good code quality. They serve as one of our last lines of defense against bugs and subpar code reaching production.
Yet, they often turn into annoying tasks riddled with frustration, hostility, unclear feedback and lack of standards. How can we improve this crucial process?
In this session we will cover:
- The Art of Effective Code Reviews
- Streamlining the Review Process
- Elevating Reviews with Automated Tools
By the end of this presentation, you'll know how to organize and improve your code review process.
How Does XfilesPro Ensure Security While Sharing Documents in Salesforce? – XfilesPro
Worried about document security while sharing them in Salesforce? Fret no more! Here are the top-notch security standards XfilesPro upholds to ensure strong security for your Salesforce documents while sharing with internal or external people.
To learn more, read the blog: https://www.xfilespro.com/how-does-xfilespro-make-document-sharing-secure-and-seamless-in-salesforce/
Modern design is crucial in today's digital environment, and this is especially true for SharePoint intranets. The design of these digital hubs is critical to user engagement and productivity enhancement. They are the cornerstone of internal collaboration and interaction within enterprises.
Globus Compute with IRI Workflows - GlobusWorld 2024 – Globus
As part of the DOE Integrated Research Infrastructure (IRI) program, NERSC at Lawrence Berkeley National Lab and ALCF at Argonne National Lab are working closely with General Atomics on accelerating the computing requirements of the DIII-D experiment. As part of the work, the team is investigating ways to speed up the time to solution for many different parts of the DIII-D workflow, including how they run jobs on HPC systems. One of these routes is looking at Globus Compute as a way to replace the current method for managing tasks, and we describe a brief proof of concept showing how Globus Compute could help schedule jobs and be a tool to connect compute at different facilities.
A Comprehensive Look at Generative AI in Retail App Testing.pdf – kalichargn70th171
Traditional software testing methods are being challenged in retail, where customer expectations and technological advancements continually shape the landscape. Enter generative AI—a transformative subset of artificial intelligence technologies poised to revolutionize software testing.
Into the Box Keynote Day 2: Unveiling amazing updates and announcements for modern CFML developers! Get ready for exciting releases and updates on Ortus tools and products. Stay tuned for cutting-edge innovations designed to boost your productivity.
Developing Distributed High-performance Computing Capabilities of an Open Sci... – Globus
COVID-19 had an unprecedented impact on scientific collaboration. The pandemic and its broad response from the scientific community has forged new relationships among public health practitioners, mathematical modelers, and scientific computing specialists, while revealing critical gaps in exploiting advanced computing systems to support urgent decision making. Informed by our team’s work in applying high-performance computing in support of public health decision makers during the COVID-19 pandemic, we present how Globus technologies are enabling the development of an open science platform for robust epidemic analysis, with the goal of collaborative, secure, distributed, on-demand, and fast time-to-solution analyses to support public health.
Designing for Privacy in Amazon Web Services – KrzysztofKkol1
Data privacy is one of the most critical issues that businesses face. This presentation shares insights on the principles and best practices for ensuring the resilience and security of your workload.
Drawing on a real-life project from the HR industry, the various challenges will be demonstrated: data protection, self-healing, business continuity, security, and transparency of data processing. This systematized approach allowed to create a secure AWS cloud infrastructure that not only met strict compliance rules but also exceeded the client's expectations.
Why React Native as a Strategic Advantage for Startup Innovation.pdf – ayushiqss
Do you know that React Native is being increasingly adopted by startups as well as big companies in the mobile app development industry? Big names like Facebook, Instagram, and Pinterest have already integrated this robust open-source framework.
In fact, according to a report by Statista, the number of React Native developers has been steadily increasing over the years, reaching an estimated 1.9 million by the end of 2024. This means that the demand for this framework in the job market has been growing making it a valuable skill.
But what makes React Native so popular for mobile application development? Among other benefits, it offers excellent cross-platform capabilities. With React Native, developers can write code once and run it on both iOS and Android devices, saving time and resources, which leads to shorter development cycles and faster time-to-market for your app.
Take the example of a startup that wanted to release its app on both iOS and Android at once. Using React Native, it managed to create the app and bring it to market within a very short period. This gave it an advantage over competitors through access to a large user base that generated revenue quickly.
First Steps with Globus Compute Multi-User Endpoints – Globus
In this presentation we will share our experiences around getting started with the Globus Compute multi-user endpoint. Working with the Pharmacology group at the University of Auckland, we have previously written an application using Globus Compute that can offload computationally expensive steps in the researcher's workflows, which they wish to manage from their familiar Windows environments, onto the NeSI (New Zealand eScience Infrastructure) cluster. Some of the challenges we have encountered were that each researcher had to set up and manage their own single-user globus compute endpoint and that the workloads had varying resource requirements (CPUs, memory and wall time) between different runs. We hope that the multi-user endpoint will help to address these challenges and share an update on our progress here.
We describe the deployment and use of Globus Compute for remote computation. This content is aimed at researchers who wish to compute on remote resources using a unified programming interface, as well as system administrators who will deploy and operate Globus Compute services on their research computing infrastructure.
Quarkus Hidden and Forbidden Extensions – Max Andersen
Quarkus has a vast extension ecosystem and is known for its subsonic and subatomic feature set. Some of these features are not as well known, and some extensions are less talked about, but that does not make them less interesting - quite the opposite.
Come join this talk to see some tips and tricks for using Quarkus and some of the lesser known features, extensions and development techniques.
In 2015, I used to write extensions for Joomla, WordPress, phpBB3, etc and I ... – Juraj Vysvader
In 2015, I used to write extensions for Joomla, WordPress, phpBB3, etc., and I didn't get rich from it, but it did have 63K downloads (powering possibly tens of thousands of websites).
Strategies for Successful Data Migration Tools.pptx – varshanayak241
Data migration is a complex but essential task for organizations aiming to modernize their IT infrastructure and leverage new technologies. By understanding common challenges and implementing these strategies, businesses can achieve a successful migration with minimal disruption. Data migration tools like Ask On Data play a pivotal role in this journey, offering features that streamline the process, ensure data integrity, and maintain security. With the right approach and tools, organizations can turn the challenge of data migration into an opportunity for growth and innovation.
Providing Globus Services to Users of JASMIN for Environmental Data Analysis – Globus
JASMIN is the UK’s high-performance data analysis platform for environmental science, operated by STFC on behalf of the UK Natural Environment Research Council (NERC). In addition to its role in hosting the CEDA Archive (NERC’s long-term repository for climate, atmospheric science & Earth observation data in the UK), JASMIN provides a collaborative platform to a community of around 2,000 scientists in the UK and beyond, providing nearly 400 environmental science projects with working space, compute resources and tools to facilitate their work. High-performance data transfer into and out of JASMIN has always been a key feature, with many scientists bringing model outputs from supercomputers elsewhere in the UK, to analyse against observational or other model data in the CEDA Archive. A growing number of JASMIN users are now realising the benefits of using the Globus service to provide reliable and efficient data movement and other tasks in this and other contexts. Further use cases involve long-distance (intercontinental) transfers to and from JASMIN, and collecting results from a mobile atmospheric radar system, pushing data to JASMIN via a lightweight Globus deployment. We provide details of how Globus fits into our current infrastructure, our experience of the recent migration to GCSv5.4, and of our interest in developing use of the wider ecosystem of Globus services for the benefit of our user community.
Experience our free, in-depth three-part Tendenci Platform Corporate Membership Management workshop series! In Session 1 on May 14th, 2024, we began with an Introduction and Setup, mastering the configuration of your Corporate Membership Module settings to establish membership types, applications, and more. Then, on May 16th, 2024, in Session 2, we focused on binding individual members to a Corporate Membership and Corporate Reps, teaching you how to add individual members and assign Corporate Representatives to manage dues, renewals, and associated members. Finally, on May 28th, 2024, in Session 3, we covered questions and concerns, addressing any queries or issues you may have.
For more Tendenci AMS events, check out www.tendenci.com/events
7–10. Data Stream Model
• So big that it doesn't fit in a single computer (unbounded)
• So big that a polynomial running time isn't good enough
• An algorithm processing such data can only access data in a single pass
• And yet data needs to be processed with a low-latency feedback loop with the consumers
11. Motivating Use Cases
• Monitor events when a user visits a web site. Event streams drive analytics and generate various metrics on user behaviors
• Traffic monitoring in network routers based on IP addresses - explore heavy hitters (top traffic-intensive IP addresses)
• Processing financial data streams (stock quotes & orders) to facilitate real-time decision making
• Online clustering algorithms - similarity detection in real time
• Real-time anomaly detection on data streams
12. Algorithm Ideas
• Continuous processing of unbounded streams of data
• Single pass over the data
• Memory and time bounded - sublinear space
• Queries may not have to be served with hard accuracy - some tolerance for error is allowed
13. Can we have a deterministic and/or exact algorithm that meets all of these requirements?
14. Distinct Elements Problem
• Input: A stream of integers i₁, …, iₘ ∈ [n], where [n] denotes the set {1, 2, …, n}
• Output: The number of distinct elements seen in the stream
• Goal: Minimize space consumption
15–16. Distinct Elements Problem
• Solution 1: Keep a bit array of length n, initialized to all zeroes. When you see i in the stream, set the i-th bit to 1. Space required: n bits of memory
• Solution 2: Store the whole stream in memory explicitly. Space required: ⌈m log₂ n⌉ bits of memory
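A minimal Scala sketch of Solution 1, assuming the universe size n is known up front (the class and method names are illustrative):

import scala.collection.mutable

// Solution 1: a bit array of length n. Space is n bits no matter how long the stream is.
class NaiveDistinctElements(n: Int) {
  private val seen = new mutable.BitSet(n)

  def update(i: Int): Unit = seen += i   // set the i-th bit when i arrives
  def query(): Int = seen.size           // number of distinct elements seen so far
}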
17. Can we have a deterministic and/or exact algorithm that beats this space bound of min{n, ⌈m log₂ n⌉}?
18. Sublinear with Deterministic & Exact - Possible?
• Each element of the stream can be represented by n bits. The entire stream can then be mapped to {0,1}^n
• Suppose a deterministic & exact algorithm exists that uses s bits of space, where s < n
• Then there must exist some mapping from n-bit strings to s-bit strings, i.e. from {0,1}^n to {0,1}^s
• And this mapping has to be injective (no 2 elements of the domain can map to the same element of the co-domain)
• It can be proved that such a mapping does not exist: by the pigeonhole principle, there cannot be an injective mapping from a larger set to a smaller set
19. There exists NO deterministic and/or exact algorithm that solves the Distinct Elements problem in sublinear space
22–24. Randomized & Approximate
• Estimators - the algorithm returns an estimator in response to a query (Is it unbiased? What is its variance?)
• Error bound - f(x) is accurate up to a certain bound (ε bound)
• Confidence of accuracy - probability that the estimator will be within the above bound (1 − δ)
30. • A Sketch C(X) of some data set X, with respect to some function f, is a compression of X that allows us to compute, or approximately compute, f(X) given access only to C(X)
31–32. Alice / Bob
• Alice has data set X, which is a list of integers; Bob has data set Y, which is a list of integers
• They want to compute f(X, Y) = Σ_{z ∈ X ∪ Y} z
• Each maintains a sketch of their data set as the running sum of the integers
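A toy Scala sketch of the Alice/Bob example, assuming X and Y are disjoint so that the sum over X ∪ Y is just the sum of the two running sums (names are illustrative):

// Each side compresses its list down to a running sum - the sketch C(X) - and discards the raw data.
def sketch(xs: List[Int]): Long = xs.foldLeft(0L)(_ + _)

val cX = sketch(List(3, 1, 4, 1, 5))  // Alice sends only this Long to Bob
val cY = sketch(List(9, 2, 6))
val f  = cX + cY                      // f(X, Y) recovered from the sketches alone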
33. Show Me Some Data!
(Decision chart - source: https://highlyscalable.wordpress.com/2012/05/01/probabilistic-structures-web-analytics-data-mining/ - which structure to use in place of the raw data, depending on the query:)
• Membership query with 4% error - Bloom Filter
• Exact membership query, cardinality estimation - Sorted IDs or Hash Table
• Frequencies of the top-100 most frequent elements with 4% error - Count-Min Sketch
• Top-100 most frequent elements with 4% error - Stream-Summary
• Cardinality estimation with 4% error - LogLog Counter
• Cardinality estimation with 4% error - Linear Counter
• Exact frequency estimation, range query - Sorted Table or Hash Map
34. A Simple Counter
• Use case - monitor a stream of events
• At any point in time, output (an estimate of) the number of events seen so far. You may have to report from multiple counters aggregated by event types
• The idea is to beat O(log₂ n) space. Any trivial algorithm can implement this using log₂ n bits
35. ε-δ Approximation
• Using a suitable sketch, there exists an algorithm that returns an estimator of the counter within a bound of k(1 ± ε) and with a small probability of failure δ
36. Approximate Counting (Morris '78)
Counting Large Number of Events in Small Registers - Robert Morris, CACM, Volume 21, Issue 10, Oct 1978: https://dl.acm.org/citation.cfm?id=359627
P(|ñ − n| > εn) < δ
1. Initialize X ← 0.
2. For each update, increment X with probability 1/2^X.
3. For a query, output ñ = 2^X − 1.
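A direct transcription of the three steps above into Scala - a minimal sketch, not production code; the class name is mine:

import scala.util.Random

// Morris '78: X stores roughly log2 of the true count, so it needs only O(log log n) bits.
class MorrisCounter(rng: Random = new Random()) {
  private var x = 0

  def increment(): Unit =
    if (rng.nextDouble() < math.pow(2.0, -x)) x += 1  // bump X with probability 1/2^X

  def estimate: Long = (1L << x) - 1                  // output ñ = 2^X − 1
}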
37. The steps to analyze this algorithm generalize beautifully to all approximation data structures used to handle streaming data
38. Generalization steps
• Compute the expected value of the estimator. In [Morris '78] we have E[2^X − 1] = n
• Compute the variance of the estimator. In [Morris '78] we have var[2^X − 1] = O(n²)
• Using the median trick - run O(log(1/δ)) independent copies and report the median of their estimates - establish the ε-δ approximation
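The median trick, sketched in Scala on top of the MorrisCounter above: run k independent copies and report the median of their estimates, which drives the failure probability below δ for k = O(log(1/δ)). The copies parameter is illustrative:

// Maintain k independent estimators; answer queries with the median estimate.
class MedianOfMorris(copies: Int) {
  private val counters = Array.fill(copies)(new MorrisCounter())

  def increment(): Unit = counters.foreach(_.increment())

  def estimate: Long = counters.map(_.estimate).sorted.apply(copies / 2)
}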
40. Use Case
• Continuous stream of IP addresses hitting a router
• Updates of the form (i, Δ), which means the count of IP address i has to increase by Δ
• We want an estimate of how many times IP address i has hit the router at any point in time (Frequency Estimation)
41. Count-Min Sketch
(Diagram: a two-dimensional array of counters of width w, with d hash functions, one per row.)
An Improved Data Stream Summary: The Count-Min Sketch and its Applications - Graham Cormode and S. Muthukrishnan (http://dimacs.rutgers.edu/~graham/pubs/papers/cm-full.pdf)
46. Count-Min Sketch Claim
1. For an ε-point query with failure probability δ:
2. query(i) = xᵢ ± ε‖x‖₁ with probability ≥ 1 − δ.
3. Set w = ⌈2/ε⌉ and d = ⌈log₂(1/δ)⌉.
4. Space required is O(ε⁻¹ log₂(1/δ)).
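A minimal Count-Min Sketch in Scala wired to the parameters above (w = ⌈2/ε⌉, d = ⌈log₂(1/δ)⌉). The per-row hash functions here are a simple illustrative choice, not the ones from the paper:

import scala.util.Random

// d rows of w counters; one hash function per row.
class CountMinSketch(eps: Double, delta: Double, seed: Int = 1) {
  private val w = math.ceil(2.0 / eps).toInt
  private val d = math.ceil(math.log(1.0 / delta) / math.log(2.0)).toInt
  private val table = Array.ofDim[Long](d, w)
  private val rng = new Random(seed)
  // Illustrative per-row hash parameters (a_j, b_j) for h_j(i) = (a_j * i + b_j) mod w.
  private val params = Array.fill(d)((rng.nextInt(Int.MaxValue - 1) + 1, rng.nextInt(Int.MaxValue)))

  private def hash(row: Int, i: Int): Int = {
    val (a, b) = params(row)
    (((a.toLong * i + b) % w).toInt + w) % w
  }

  def update(i: Int, by: Long = 1L): Unit =
    (0 until d).foreach(j => table(j)(hash(j, i)) += by)

  // Never underestimates; with probability >= 1 − δ the result is xᵢ ± ε‖x‖₁.
  def query(i: Int): Long =
    (0 until d).map(j => table(j)(hash(j, i))).min
}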
49. Algebra of a Monoid
Given a set A with a binary operation φ : A × A → A:
• Associative: (a φ b) φ c = a φ (b φ c) for all a, b, c ∈ A
• Identity: there is an I ∈ A with a φ I = I φ a = a for all a ∈ A
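The same algebra written as a Scala trait, with the running-sum sketch from the Alice/Bob example as an instance. This is a minimal sketch; Algebird's Monoid, which the CMS code on the following slides relies on, has the same shape:

// A monoid: a carrier type T, an associative binary operation, and an identity element.
trait Monoid[T] {
  def zero: T                    // identity: combine(zero, a) == combine(a, zero) == a
  def combine(a: T, b: T): T     // associative: combine(combine(a, b), c) == combine(a, combine(b, c))
}

// Running sums form a monoid, which is exactly what lets partial sketches
// computed on different machines be merged in any grouping order.
object SumMonoid extends Monoid[Long] {
  def zero: Long = 0L
  def combine(a: Long, b: Long): Long = a + b
}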
50–52. CMS in the Wild
(Diagram: a stream of host IPs hitting the router forms the original DStream over times 1-5; a window-based operation produces a windowed DStream with windows at times 1, 3, and 5. A Frequency Sketch / Heavy Hitter Sketch is maintained for each batch, for each window, and globally; the sketches feed sinks such as Kafka, HDFS, and a dashboard.)
53. Streaming CMS
// CMS parameters
val DELTA = 1E-3
val EPS = 0.01
val SEED = 1

// create CMS
val cmsMonoid = CMS.monoid[String](DELTA, EPS, SEED)
var globalCMS = cmsMonoid.zero

// Generate data stream
val hosts: DStream[String] =
  lines.flatMap(r => LogParseUtil.parseHost(r.value).toOption)

// load data into CMS
val approxHosts: DStream[CMS[String]] = hosts.mapPartitions(ids => {
  val cms = CMS.monoid[String](DELTA, EPS, SEED)
  ids.map(cms.create)
}).reduce(_ ++ _)
54. Streaming CMS
approxHosts.foreachRDD(rdd => {
  if (rdd.count() != 0) {
    val cmsThisBatch: CMS[String] = rdd.first
    globalCMS ++= cmsThisBatch

    val f1ThisBatch = cmsThisBatch.f1
    val freqThisBatch = cmsThisBatch.frequency("world.std.com")

    val f1Overall = globalCMS.f1
    val freqOverall = globalCMS.frequency("world.std.com")
    // ..
  }
})
55. Motivation for Streaming CMS
• Prepare the sketch online on streaming data
• Store it offline for future analytics
• It's a small structure - hence ideal for serialization & storage
• It's a commutative monoid, hence you can distribute many of them across multiple machines, do parallel computations, and aggregate the results
56. Count-Min Sketch - Applications
• AT&T has used it in network switches to perform network analyses on streaming network traffic with limited memory [1]
• Streaming log analysis
• Join size estimation for database query planners
• Heavy hitters:
  • Top-k active users on Twitter
  • Popular products - most-viewed products page
  • Compute frequent search queries
  • Identify heavy TCP flows
  • Identify volatile stocks
[1] G. Cormode, T. Johnson, F. Korn, S. Muthukrishnan, O. Spatscheck, and D. Srivastava. Holistic UDAFs at streaming speeds. In Proceedings of the 2004 ACM SIGMOD International Conference on Management of Data, pages 35–46, 2004.
57. Heavy Hitters Problem
• Using a single pass over a data stream, find all elements with frequencies greater than k percent of the total number of elements seen so far
• Unbounded data stream, so we will have to use sublinear space
• Fact: There is no deterministic algorithm that solves the Heavy Hitters problem in one pass while using sublinear space
• Hence the ε-approximate Heavy Hitters Problem
58. Approximate Heavy Hitters using Count-Min Sketch
(Flow: a data stream of elements feeds a Count-Min Sketch alongside a heap, with N the count seen so far.)
(1) Element Xᵢ arrives
(2) Add Xᵢ to the CMS
(3) Check the estimated frequency of Xᵢ: is it above the threshold?
(4) If yes, add Xᵢ to the heap; if no, do nothing
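A sketch of the flow in the diagram, reusing the illustrative CountMinSketch class from earlier and a simple candidate set in place of the heap. This is the mechanics that Algebird's TopPctCMS on the next slide encapsulates:

import scala.collection.mutable

// (1) element arrives, (2) add to CMS, (3) check its estimated frequency
// against the threshold, (4) keep it as a heavy hitter if it qualifies.
class ApproxHeavyHitters(eps: Double, delta: Double, pct: Double) {
  private val cms = new CountMinSketch(eps, delta)
  private val candidates = mutable.Set.empty[Int]
  private var total = 0L   // N, the count seen so far

  def observe(x: Int): Unit = {
    total += 1
    cms.update(x)
    if (cms.query(x) >= pct * total) candidates += x
  }

  // Re-check on read: the threshold pct * N grows as more elements arrive.
  def heavyHitters: Set[Int] =
    candidates.filter(x => cms.query(x) >= pct * total).toSet
}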
59. Streaming Approximate Heavy Hitters
// create heavy hitter CMS
val approxHH: DStream[TopCMS[String]] = hosts.mapPartitions(ids => {
  val cms = TopPctCMS.monoid[String](DELTA, EPS, SEED, 0.15)
  ids.map(cms.create(_))
}).reduce(_ ++ _)

// analyze in microbatch
approxHH.foreachRDD(rdd => {
  if (rdd.count() != 0) {
    val hhThisBatch: TopCMS[String] = rdd.first
    hhThisBatch.heavyHitters.foreach(println)
  }
})
60. Bloom Filter
• Another sketching data structure (based on hashing)
• Solves the same problem as a hash map but with much less space
• A great tool to have if you want approximate membership queries with sublinear storage
• Can give false positives
61. Bloom Filter - Under the Hood
• Ingredients:
  • Array A of n bits. If we store a dataset S, the number of bits used per object = n/|S|
  • k hash functions h₁, h₂, …, h_k (usually k is small)
• Insert(x): for i = 1, 2, …, k, set A[hᵢ(x)] = 1, irrespective of the previous values of those bits
• Query(x): if A[hᵢ(x)] = 1 for every i = 1, 2, …, k, return true
• No false negatives
• Can have false positives
Space/time trade-offs in hash coding with allowable errors - B. H. Bloom. Communications of the ACM 13(7): 422-426. 1970.
(Figure: Bloom filter example by David Eppstein, originally for a talk at WADS 2007, public domain, via Wikimedia Commons)
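A minimal Bloom filter in Scala mirroring the Insert/Query pseudocode above. Deriving the k hash functions by double hashing (hᵢ(x) = h1(x) + i·h2(x)) is a common implementation choice, not something the slide specifies:

import scala.collection.mutable
import scala.util.hashing.MurmurHash3

// An n-bit array and k hash functions; no false negatives, occasional false positives.
class SimpleBloomFilter(n: Int, k: Int) {
  private val bits = new mutable.BitSet(n)

  private def positions(x: String): Seq[Int] = {
    val h1 = MurmurHash3.stringHash(x, 0)
    val h2 = MurmurHash3.stringHash(x, h1)
    (0 until k).map(i => (((h1 + i * h2) % n) + n) % n)
  }

  def insert(x: String): Unit = positions(x).foreach(bits += _)        // set A[hᵢ(x)] = 1 for all i
  def query(x: String): Boolean = positions(x).forall(bits.contains)   // true iff all k bits are set
}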
62. Bloom Filter as Application State
(Diagram: a data stream flows into a Kafka topic with partitions #1-#3; two instances of the same Kafka Streams application consume it, each with its own local state, with rebalancing between the instances.)
63. Bloom Filter State Store
// Bloom Filter as a StateStore. The only query it supports is membership.
class BFStore[T: Hash128](
  override val name: String,
  val loggingEnabled: Boolean = true,
  val numHashes: Int = 6,
  val width: Int = 32,
  val seed: Int = 1) extends WriteableBFStore[T] with StateStore {

  // monoid!
  private val bfMonoid = new BloomFilterMonoid[T](numHashes, width)

  // initialize
  private[processor] var bf: BF[T] = bfMonoid.zero

  // ..
}
64. Bloom Filter State Store
// Bloom Filter as a StateStore. The only query it supports is membership.
class BFStore[T: Hash128](
  override val name: String,
  val loggingEnabled: Boolean = true,
  val numHashes: Int = 6,
  val width: Int = 32,
  val seed: Int = 1) extends WriteableBFStore[T] with StateStore {
  // ..

  def +(item: T): Unit = bf = bf + item

  def contains(item: T): Boolean = {
    val v = bf.contains(item)
    v.isTrue && v.withProb > ACCEPTABLE_PROBABILITY
  }

  def maybeContains(item: T): Boolean = bf.maybeContains(item)

  def size: Approximate[Long] = bf.size
}
65. BF Store with Kafka Streams Processor
// the Kafka Streams processor that will be part of the topology
class WeblogProcessor extends AbstractProcessor[String, String] {

  // the store instance
  private var bfStore: BFStore[String] = _

  override def init(context: ProcessorContext): Unit = {
    super.init(context)
    // ..
    bfStore = this.context.getStateStore(
      WeblogDriver.LOG_COUNT_STATE_STORE).asInstanceOf[BFStore[String]]
  }

  override def process(dummy: String, record: String): Unit =
    LogParseUtil.parseLine(record) match {
      case Success(r) => {
        bfStore + r.host
        bfStore.changeLogger.logChange(bfStore.changelogKey, bfStore.bf)
      }
      case Failure(ex) => // ..
    }

  // ..
}