Developing big data analytics often involves trial-and-error debugging, due to the unclean nature of datasets or wrong assumptions about the data. Data scientists typically write code that implements a data processing pipeline and test it on their local workstation with a small data sample downloaded from a TB-scale data warehouse. They cross their fingers and hope that the program works on the expensive production cloud. When a job fails or produces a suspicious result, data scientists spend hours guessing at the source of the error and digging through post-mortem logs.
In such cases, data scientists may want to pinpoint the root cause of errors by investigating the subset of corresponding input records. In this talk, we present BigSift, an automated debugger for Apache Spark that data engineers and scientists can use. It takes an Apache Spark program, a user-defined test oracle function, and a dataset as input, and outputs a minimum set of input records that reproduces the same test failure.
BigSift combines insights from automated fault isolation in software engineering and data provenance in database systems to find a minimum set of failure-inducing inputs. It redefines data provenance for the purpose of debugging using a test oracle function and implements several unique optimizations specifically geared toward the iterative nature of automated debugging workloads. BigSift exposes an interactive web interface where a user can monitor a big data analytics job running remotely on the cloud, write a user-defined test oracle function, and then trigger the automated debugging process. BigSift also provides a set of predefined test oracle functions that can be used to explain common types of anomalies in big data analytics. This debugging effort is led by UCLA Professors Miryung Kim and Tyson Condie and has produced several research papers at top software engineering and database conferences.
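To make the inputs concrete: the test oracle is just a user-written predicate over an output record. The Scala snippet below is a minimal sketch of such an oracle, mirroring the threshold-style check used later in the talk's weather example (the 6000 value comes from those slides; the surrounding program is not shown here).

```scala
// A user-defined test oracle: returns true when an output record looks correct.
// BigSift searches for the smallest set of input records whose output makes it return false.
def test(key: String, delta: Float): Boolean = {
  delta < 6000 // e.g. a snowfall delta of 6000 mm or more is treated as a failure
}
```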
Timeli: Believing Cassandra: Our Big-Data Journey To Enlightenment under the Cassandra Paradigm (DataStax Academy)
Thursday, September 24
1:50 PM - 2:30 PM
Ballroom G
It turns out that much can be learned about Cassandra in a year's time, given a high enough pain tolerance on the part of an organization's founders. Join us as we at Timeli.io step you through exactly what happened when we walked into an in-production time series implementation that somehow could not return data in a time series format to its existing customers. We will then discuss how we re-worked that same implementation to be fully functional, and how we started down the road to finding the keys to Cassandra's legendary performance capabilities in a Zookeeper/AMQP/SQL/Tomcat stack. The path for this journey left no block unstumbled, so if your organization still has toes that are left unbruised, this talk could well save you pain.
DataStax: Old Dogs, New Tricks. Teaching your Relational DBA to fetch (DataStax Academy)
Do you love some Cassandra, but that relational brain is still on? You aren't alone. Let's take that OLAP data model and get it OLTP. This will be an updated talk with some of the new features brought to you by Cassandra 3.0. Real techniques to translate application patterns into effective models. Common pitfalls that can slow you down and send you running back to RDBMS land. Don't do it! Finally, if you didn't get it right the first time, I'll show you how to fix that data model without any downtime. Turn a hot cup of fail into a tall glass of awesome!
In this talk I present a regular Lucee CFML script that was written to import several records into a database, and performance-tune it step by step. You will be amazed at what is possible. The code examples can be downloaded here: http://www.rasia.ch/downloads/performance-tuning-through-iterations-code.zip
Oracle trace data collection errors: the story about oceans, islands, and rivers (Cary Millsap)
When you execute a business task on a computer system, you create an experience. The duration of this experience is called response time. The richest and most easily obtained response-time information in the whole Oracle technology stack comes from the Oracle Database tier: Oracle's extended SQL trace data. But in almost 100% of first tries with using trace data, people make a data collection mistake that complicates their analysis. This is the story of that mistake.
Most important "trick" of performance instrumentation (Cary Millsap)
This is the material from my 10-minute TED-style talk delivered 2014-09-29 at OakTable World, held in conjunction with Oracle OpenWorld 2014 in San Francisco. It explains the importance of assigning a unique ID to the Oracle Database code path associated with each performance experience that users can have with your system.
Cassandra Data Modeling - Practical Considerations @ Netflix (nkorla1share)
The Cassandra community has consistently requested that we cover C* schema design concepts. This presentation goes in depth on the following topics:
- Schema design
- Best Practices
- Capacity Planning
- Real World Examples
This presentation covers the SQL functionality added in the past year or so: default expressions, functional key parts, lateral derived tables, CHECK constraints, and JSON and spatial improvements, plus various other small SQL and other improvements.
Third normal form? That’s so 20th century. Learn the newest techniques to make your Cassandra database sing from the rafters in performance and scalability. AND it uses concepts that you already know and apply every day. You can do this. This is the must-see half hour of your professional life! These developers found a new way to work with databases. First you will be shocked, then you will be inspired!
OpenWorld 2018 - Common Application Developer Disasters (Connor McDonald)
Two of the critical requirements of a database are:
- running fast
- preserving data integrity
The database can achieve these things, but only as long as you understand the mechanisms correctly. If you don't, then things can go downhill fast.
Data Lakehouse Symposium | Day 1 | Part 1 (Databricks)
The world of data architecture began with applications. Next came data warehouses. Then text was organized into a data warehouse.
Then one day the world discovered a whole new kind of data that was being generated by organizations. The world found that machines generated data that could be transformed into valuable insights. This was the origin of what is today called the data lakehouse. The evolution of data architecture continues today.
Come listen to industry experts describe this transformation of ordinary data into a data architecture that is invaluable to business. Simply put, organizations that take data architecture seriously are going to be at the forefront of business tomorrow.
This is an educational event.
Several of the authors of the book Building the Data Lakehouse will be presenting at this symposium.
Data Lakehouse Symposium | Day 1 | Part 2 (Databricks)
5 Critical Steps to Clean Your Data Swamp When Migrating Off of Hadoop (Databricks)
In this session, learn how to quickly supplement your on-premises Hadoop environment with a simple, open, and collaborative cloud architecture that enables you to generate greater value with scaled application of analytics and AI on all your data. You will also learn five critical steps for a successful migration to the Databricks Lakehouse Platform along with the resources available to help you begin to re-skill your data teams.
Democratizing Data Quality Through a Centralized Platform (Databricks)
Bad data leads to bad decisions and broken customer experiences. Organizations depend on complete and accurate data to power their business, maintain efficiency, and uphold customer trust. With thousands of datasets and pipelines running, how do we ensure that all data meets quality standards, and that expectations are clear between producers and consumers? Investing in shared, flexible components and practices for monitoring data health is crucial for a complex data organization to rapidly and effectively scale.
At Zillow, we built a centralized platform to meet our data quality needs across stakeholders. The platform is accessible to engineers, scientists, and analysts, and seamlessly integrates with existing data pipelines and data discovery tools. In this presentation, we will provide an overview of our platform’s capabilities, including:
- Giving producers and consumers the ability to define and view data quality expectations using a self-service onboarding portal
- Performing data quality validations using libraries built to work with Spark
- Dynamically generating pipelines that can be abstracted away from users
- Flagging data that doesn’t meet quality standards at the earliest stage and giving producers the opportunity to resolve issues before use by downstream consumers
- Exposing data quality metrics alongside each dataset to provide producers and consumers with a comprehensive picture of health over time
Learn to Use Databricks for Data Science (Databricks)
Data scientists face numerous challenges throughout the data science workflow that hinder productivity. As organizations continue to become more data-driven, a collaborative environment is more critical than ever — one that provides easier access and visibility into the data, reports and dashboards built against the data, reproducibility, and insights uncovered within the data. Join us to hear how Databricks’ open and collaborative platform simplifies data science by enabling you to run all types of analytics workloads, from data preparation to exploratory analysis and predictive analytics, at scale — all on one unified platform.
Why APM Is Not the Same As ML Monitoring (Databricks)
Application performance monitoring (APM) has become the cornerstone of software engineering allowing engineering teams to quickly identify and remedy production issues. However, as the world moves to intelligent software applications that are built using machine learning, traditional APM quickly becomes insufficient to identify and remedy production issues encountered in these modern software applications.
As a lead software engineer at NewRelic, my team built high-performance monitoring systems including Insights, Mobile, and SixthSense. As I transitioned to building ML Monitoring software, I found the architectural principles and design choices underlying APM to not be a good fit for this brand new world. In fact, blindly following APM designs led us down paths that would have been better left unexplored.
In this talk, I draw upon my (and my team’s) experience building an ML Monitoring system from the ground up and deploying it on customer workloads running large-scale ML training with Spark as well as real-time inference systems. I will highlight how the key principles and architectural choices of APM don’t apply to ML monitoring. You’ll learn why, understand what ML Monitoring can successfully borrow from APM, and hear what is required to build a scalable, robust ML Monitoring architecture.
The Function, the Context, and the Data—Enabling ML Ops at Stitch Fix (Databricks)
Autonomy and ownership are core to working at Stitch Fix, particularly on the Algorithms team. We enable data scientists to deploy and operate their models independently, with minimal need for handoffs or gatekeeping. By writing a simple function and calling out to an intuitive API, data scientists can harness a suite of platform-provided tooling meant to make ML operations easy. In this talk, we will dive into the abstractions the Data Platform team has built to enable this. We will go over the interface data scientists use to specify a model and what that hooks into, including online deployment, batch execution on Spark, and metrics tracking and visualization.
Stage Level Scheduling Improving Big Data and AI Integration (Databricks)
In this talk, I will dive into the stage level scheduling feature added to Apache Spark 3.1. Stage level scheduling extends Project Hydrogen by improving big data ETL and AI integration, and it also enables multiple other use cases. It is beneficial any time the user wants to change container resources between stages in a single Apache Spark application, whether those resources are CPU, memory, or GPUs. One of the most popular use cases is enabling end-to-end scalable deep learning and AI to efficiently use GPU resources. In this type of use case, users read from a distributed file system, do data manipulation and filtering to get the data into a format that the deep learning algorithm needs for training or inference, and then send the data into a deep learning algorithm. Using stage level scheduling combined with accelerator-aware scheduling enables users to seamlessly go from ETL to deep learning running on the GPU by adjusting the container requirements for different stages in Spark within the same application. This makes writing these applications easier and can help with hardware utilization and costs.
There are other ETL use cases where users want to change CPU and memory resources between stages, for instance when there is data skew or when the data size is much larger in certain stages of the application. In this talk, I will go over the feature details, cluster requirements, the API, and use cases. I will demo how the stage level scheduling API can be used by Horovod to seamlessly go from data preparation to training using the TensorFlow Keras API on GPUs.
The talk will also touch on other new Apache Spark 3.1 functionality, such as pluggable caching, which can be used to enable faster dataframe access when operating from GPUs.
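As a rough illustration of the API surface described above (a sketch only: the input path, resource amounts, and discovery script are made up, and the ETL and deep learning code are omitted), stage-level scheduling attaches a ResourceProfile to an RDD so that later stages request different containers:

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.resource.{ExecutorResourceRequests, ResourceProfileBuilder, TaskResourceRequests}

val spark = SparkSession.builder.appName("stage-level-scheduling").getOrCreate()
val etlRdd = spark.sparkContext.textFile("hdfs:///data/features") // hypothetical ETL output

// Resource profile for the deep learning stage: GPU executors with more memory.
val execReqs = new ExecutorResourceRequests()
  .cores(4)
  .memory("16g")
  .resource("gpu", 1, "/opt/spark/getGpus.sh") // hypothetical discovery script
val taskReqs = new TaskResourceRequests().cpus(1).resource("gpu", 1)
val dlProfile = new ResourceProfileBuilder().require(execReqs).require(taskReqs).build

// Stages computed from this RDD will request the GPU containers defined above.
val trainingRdd = etlRdd.repartition(8).withResources(dlProfile)
```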
Simplify Data Conversion from Spark to TensorFlow and PyTorch (Databricks)
In this talk, I would like to introduce an open-source tool built by our team that simplifies the data conversion from Apache Spark to deep learning frameworks.
Imagine you have a large dataset, say 20 GBs, and you want to use it to train a TensorFlow model. Before feeding the data to the model, you need to clean and preprocess your data using Spark. Now you have your dataset in a Spark DataFrame. When it comes to the training part, you may have the problem: How can I convert my Spark DataFrame to some format recognized by my TensorFlow model?
The existing data conversion process can be tedious. For example, to convert an Apache Spark DataFrame to a TensorFlow Dataset file format, you need to either save the Apache Spark DataFrame on a distributed filesystem in Parquet format and load the converted data with third-party tools such as Petastorm, or save it directly in TFRecord files with spark-tensorflow-connector and load it back using TFRecordDataset. Both approaches take more than 20 lines of code to manage the intermediate data files, rely on different parsing syntax, and require extra attention for handling vector columns in the Spark DataFrames. In short, all these engineering frictions greatly reduce data scientists’ productivity.
The Databricks Machine Learning team contributed a new Spark Dataset Converter API to Petastorm to simplify this tedious data conversion process. With the new API, it takes a few lines of code to convert a Spark DataFrame to a TensorFlow Dataset or a PyTorch DataLoader with default parameters.
In the talk, I will use an example to show how to use the Spark Dataset Converter to train a Tensorflow model and how simple it is to go from single-node training to distributed training on Databricks.
Scaling your Data Pipelines with Apache Spark on Kubernetes (Databricks)
There is no doubt Kubernetes has emerged as the next generation of cloud-native infrastructure to support a wide variety of distributed workloads. Apache Spark has evolved to run both machine learning and large-scale analytics workloads. There is growing interest in running Apache Spark natively on Kubernetes. By combining the flexibility of Kubernetes with scalable data processing in Apache Spark, you can run data and machine learning pipelines on this infrastructure while effectively utilizing the resources at your disposal.
In this talk, Rajesh Thallam and Sougata Biswas will share how to effectively run your Apache Spark applications on Google Kubernetes Engine (GKE) and Google Cloud Dataproc, and orchestrate the data and machine learning pipelines with managed Apache Airflow on GKE (Google Cloud Composer). The following topics will be covered:
- Understanding key traits of Apache Spark on Kubernetes
- Things to know when running Apache Spark on Kubernetes, such as autoscaling
- Demonstrating analytics pipelines running on Apache Spark orchestrated with Apache Airflow on a Kubernetes cluster
Scaling and Unifying SciKit Learn and Apache Spark Pipelines (Databricks)
Pipelines have become ubiquitous, as the need for stringing multiple functions to compose applications has gained adoption and popularity. Common pipeline abstractions such as “fit” and “transform” are even shared across divergent platforms such as Python Scikit-Learn and Apache Spark.
Scaling pipelines at the level of simple functions is desirable for many AI applications; however, it is not directly supported by Ray’s parallelism primitives. In this talk, Raghu will describe a pipeline abstraction that takes advantage of Ray’s compute model to efficiently scale arbitrarily complex pipeline workflows. He will demonstrate how this abstraction cleanly unifies pipeline workflows across multiple platforms such as Scikit-Learn and Spark, and achieves nearly optimal scale-out parallelism on pipelined computations.
Attendees will learn how pipelined workflows can be mapped to Ray’s compute model and how they can both unify and accelerate their pipelines with Ray.
Sawtooth Windows for Feature Aggregations (Databricks)
In this talk about Zipline, we will introduce a new type of windowing construct called a sawtooth window. We will describe various properties of sawtooth windows that we utilize to achieve online-offline consistency, while still maintaining high throughput, low read latency, and tunable write latency for serving machine learning features. We will also talk about a simple deployment strategy for correcting feature drift due to operations that are not “abelian groups” and that operate over change data.
We want to present multiple anti-patterns utilizing Redis in unconventional ways to get the maximum out of Apache Spark. All examples presented are tried and tested in production at scale at Adobe. The most common integration is spark-redis, which interfaces with Redis as a DataFrame backing store or as an upstream for Structured Streaming. We deviate from the common use cases to explore where Redis can plug gaps while scaling out high-throughput applications in Spark.
Niche 1: Long-Running Spark Batch Job – Dispatch New Jobs by Polling a Redis Queue
- Why? Custom queries on top of a table; we load the data once and query N times
- Why not Structured Streaming?
- Working solution using Redis
Niche 2: Distributed Counters (a sketch follows this list)
- Problems with Spark Accumulators
- Utilize Redis hashes as distributed counters
- Precautions for retries and speculative execution
- Pipelining to improve performance
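To give a flavor of the distributed-counter niche, here is a hedged sketch using the plain Jedis client rather than spark-redis; the Redis host, key names, and CSV layout are assumptions, not from the talk:

```scala
import org.apache.spark.sql.SparkSession
import redis.clients.jedis.Jedis

val spark = SparkSession.builder.appName("redis-counters").getOrCreate()
val events = spark.sparkContext.textFile("hdfs:///data/events") // hypothetical input

// Count records per category across all executors in a Redis hash,
// instead of using Spark accumulators.
events.foreachPartition { part =>
  val jedis = new Jedis("redis-host", 6379) // hypothetical endpoint
  val pipeline = jedis.pipelined()          // batch commands to cut round trips
  part.foreach { line =>
    val category = line.split(",")(0)
    pipeline.hincrBy("event_counts", category, 1L) // hash field acts as a counter
  }
  pipeline.sync() // flush the pipelined increments
  jedis.close()
  // Note: task retries and speculative execution can double-count; the
  // "precautions" item above is about handling exactly that.
}
```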
Re-imagine Data Monitoring with whylogs and Spark (Databricks)
In the era of microservices, decentralized ML architectures, and complex data pipelines, data quality has become a bigger challenge than ever. When data is involved in complex business processes and decisions, bad data can, and will, affect the bottom line. As a result, ensuring data quality across the entire ML pipeline is both costly and cumbersome, while data monitoring is often fragmented and performed ad hoc. To address these challenges, we built whylogs, an open source standard for data logging. It is a lightweight data profiling library that enables end-to-end data profiling across the entire software stack. The library implements a language- and platform-agnostic approach to data quality and data monitoring. It can work with different modes of data operations, including streaming, batch, and IoT data.
In this talk, we will provide an overview of the whylogs architecture, including its lightweight statistical data collection approach and various integrations. We will demonstrate how the whylogs integration with Apache Spark achieves large scale data profiling, and we will show how users can apply this integration into existing data and ML pipelines.
Raven: End-to-end Optimization of ML Prediction Queries (Databricks)
Machine learning (ML) models are typically part of prediction queries that consist of a data processing part (e.g., for joining, filtering, cleaning, featurization) and an ML part invoking one or more trained models. In this presentation, we identify significant and unexplored opportunities for optimization. To the best of our knowledge, this is the first effort to look at prediction queries holistically, optimizing across both the ML and SQL components.
We will present Raven, an end-to-end optimizer for prediction queries. Raven relies on a unified intermediate representation that captures both data processing and ML operators in a single graph structure.
This allows us to introduce optimization rules that:
(i) reduce unnecessary computations by passing information between the data processing and ML operators;
(ii) leverage operator transformations (e.g., turning a decision tree into a SQL expression or an equivalent neural network) to map operators to the right execution engine; and
(iii) integrate compiler techniques to take advantage of the most efficient hardware backend (e.g., CPU, GPU) for each operator.
We have implemented Raven as an extension to Spark’s Catalyst optimizer to enable the optimization of SparkSQL prediction queries. Our implementation also allows the optimization of prediction queries in SQL Server. As we will show, Raven is capable of improving prediction query performance on Apache Spark and SQL Server by up to 13.1x and 330x, respectively. For complex models, where GPU acceleration is beneficial, Raven provides up to 8x speedup compared to state-of-the-art systems. As part of the presentation, we will also give a demo showcasing Raven in action.
Processing Large Datasets for ADAS Applications using Apache Spark (Databricks)
Semantic segmentation is the classification of every pixel in an image/video. The segmentation partitions a digital image into multiple objects to simplify/change the representation of the image into something that is more meaningful and easier to analyze [1][2]. The technique has a wide variety of applications ranging from perception in autonomous driving scenarios to cancer cell segmentation for medical diagnosis.
Exponential growth in the datasets that require such segmentation is driven by improvements in the accuracy and quality of the sensors generating the data extending to 3D point cloud data. This growth is further compounded by exponential advances in cloud technologies enabling the storage and compute available for such applications. The need for semantically segmented datasets is a key requirement to improve the accuracy of inference engines that are built upon them.
Streamlining the accuracy and efficiency of these systems directly affects the value of the business outcome for organizations that are developing such functionalities as a part of their AI strategy.
This presentation details workflows for labeling, preprocessing, modeling, and evaluating performance/accuracy. Scientists and engineers leverage domain-specific features/tools that support the entire workflow from labeling the ground truth, handling data from a wide variety of sources/formats, developing models and finally deploying these models. Users can scale their deployments optimally on GPU-based cloud infrastructure to build accelerated training and inference pipelines while working with big datasets. These environments are optimized for engineers to develop such functionality with ease and then scale against large datasets with Spark-based clusters on the cloud.
Massive Data Processing in Adobe Using Delta Lake (Databricks)
At Adobe Experience Platform, we ingest TBs of data every day and manage PBs of data for our customers as part of the Unified Profile Offering. At the heart of this is complex ingestion of a mix of normalized and denormalized data with various linkage scenarios, powered by a central Identity Linking Graph. This helps power various marketing scenarios that are activated in multiple platforms and channels like email, advertisements, etc. We will go over how we built a cost-effective and scalable data pipeline using Apache Spark and Delta Lake and share our experiences.
- What are we storing?
- Multi Source – Multi Channel Problem
- Data Representation and Nested Schema Evolution
- Performance Trade-Offs with Various Formats
- Go over anti-patterns used (String FTW)
- Data Manipulation using UDFs
- Writer Worries and How to Wipe Them Away
- Staging Tables FTW
- Data Lake Replication Lag Tracking
- Performance Time!
Adjusting primitives for graph: SHORT REPORT / NOTES (Subhajit Sahu)
Graph algorithms, like PageRank …
Compressed Sparse Row (CSR) is an adjacency-list based graph representation that …
Multiply with different modes (map)
1. Performance of sequential execution based vs OpenMP based vector multiply.
2. Comparing various launch configs for CUDA based vector multiply.
Sum with different storage types (reduce)
1. Performance of vector element sum using float vs bfloat16 as the storage type.
Sum with different modes (reduce)
1. Performance of sequential execution based vs OpenMP based vector element sum.
2. Performance of memcpy vs in-place based CUDA based vector element sum.
3. Comparing various launch configs for CUDA based vector element sum (memcpy).
4. Comparing various launch configs for CUDA based vector element sum (in-place).
Sum with in-place strategies of CUDA mode (reduce)
1. Comparing various launch configs for CUDA based vector element sum (in-place).
The Building Blocks of QuestDB, a Time Series Database (javier ramirez)
Talk Delivered at Valencia Codes Meetup 2024-06.
Traditionally, databases have treated timestamps just as another data type. However, when performing real-time analytics, timestamps should be first-class citizens and we need rich time semantics to get the most out of our data. We also need to deal with ever-growing datasets while staying performant, which is as fun as it sounds.
It is no wonder time-series databases are now more popular than ever before. Join me in this session to learn about the internal architecture and building blocks of QuestDB, an open source time-series database designed for speed. We will also review a history of some of the changes we have gone through over the past two years to deal with late and unordered data, non-blocking writes, read replicas, and faster batch ingestion.
Automated Debugging of Big Data Analytics in Apache Spark Using BigSift with Muhammad Ali Gulzar and Miryung Kim
1. Automated Debugging of Big Data Analytics in Apache Spark Using BigSift
Muhammad Ali Gulzar, Miryung Kim
University of California, Los Angeles
2. Big Data Debugging in the Dark
[Diagram: develop a Map/Reduce job locally, hope it works, run it in the cloud, hit a bug, and fall back on guesswork.]
3. BigDebug: Debugging Primitives for Interactive Big Data Processing (ICSE 2016)
Muhammad Ali Gulzar, Matteo Interlandi, Seunghyun Yoo, Sai Deep Tetali, Tyson Condie, Todd Millstein, Miryung Kim
Presented at Spark Summit 2017
4. Interactive Debugging with BigDebug
1. Simulated Breakpoint
2. On-Demand Guarded Watchpoint
3. Crash Culprit Identification
4. Backward and Forward Tracing
5. Titian: Data Provenance Support in Spark (VLDB 2015)
Matteo Interlandi, Kshitij Shah, Sai Deep Tetali, Muhammad Ali Gulzar, Seunghyun Yoo, Miryung Kim, Todd Millstein, Tyson Condie
6. Data Provenance – in SQL

Query:
SELECT time, AVG(temp)
FROM sensors
GROUP BY time

Sensors table:
Tuple-ID | Time | Sensor-ID | Temperature
T1       | 11AM | 1         | 34
T2       | 11AM | 2         | 35
T3       | 11AM | 3         | 35
T4       | 12PM | 1         | 35
T5       | 12PM | 2         | 35
T6       | 12PM | 3         | 100
T7       | 1PM  | 1         | 35
T8       | 1PM  | 2         | 35
T9       | 1PM  | 3         | 80

Result:
Result-ID | Time | AVG(temp)
ID-1      | 11AM | 34.6
ID-2      | 12PM | 56.6   (outlier)
ID-3      | 1PM  | 50     (outlier)

Why do ID-2 and ID-3 have such high values?
7. Step 1: Instrument Workflow in Spark
[Diagram: a two-stage Spark job (lines → errors → codes → pairs → counts → reports) instrumented with Hadoop, Combiner, Reducer, and Stage LineageRDDs; each LineageRDD records an input-ID → output-ID mapping, e.g. input offsets (offset1, offset2, offset3) → record IDs (id1, id2, id3) at the Hadoop LineageRDD, and record-ID sets ({id1, id3} → 400, {id2} → 4) at the Combiner LineageRDD.]
8. Step 2: Example Backward Tracing
[Diagram: backward tracing starts from a faulty output and joins Stage.Input ID with Reducer.Output ID across Worker1, Worker2, and Worker3, recovering the partial-aggregate IDs (e.g. p1 → 400) that produced it.]
9. Step 2: Example Backward Tracing
[Diagram: the trace continues by joining Combiner.Output ID with Reducer.Output ID on each worker, narrowing the trace to the combiner outputs (e.g. { id1, … } → 400) behind the faulty result.]
10. Step 2: Example Backward Tracing
[Diagram: finally, Combiner.Input ID is joined with Hadoop.Output ID, mapping the trace back to the original input offsets (offset1, offset2, offset3), i.e. the failure-inducing input records.]
11. BigSift: Automated Debugging of Big Data Analytics in DISC Applications (SoCC 2017)
Muhammad Ali Gulzar, Matteo Interlandi, Xueyuan Han, Tyson Condie, Miryung Kim
12. Motivating Example for Automated Debugging with BigSift
• Alice writes a Spark program that identifies, for each state in the US, the delta between the minimum and the maximum snowfall reading for each day of any year and for any particular year.
• An input data record that measures 1 foot of snowfall on January 1st of year 1992, in the 99504 zip code (Anchorage, AK) area, appears as:
99504 , 01/01/1992 , 1ft
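The slides do not show Alice's program itself; the Scala sketch below is one way it could look, with hypothetical helpers zipToState and toMillimeters and a made-up input path. (The 21336 mm values in the following slides suggest the faulty version converts inches with the feet factor; the stub here uses the correct factor.)

```scala
import org.apache.spark.{SparkConf, SparkContext}

// Hypothetical sketch of Alice's pipeline (TextFile -> FlatMap -> GroupByKey -> Map).
// zipToState and toMillimeters are stand-in helpers, not taken from the talk.
def zipToState(zip: String): String = if (zip.startsWith("995")) "AK" else "CA"
def toMillimeters(reading: String): Float =
  if (reading.endsWith("ft")) reading.dropRight(2).toFloat * 304.8f
  else if (reading.endsWith("in")) reading.dropRight(2).toFloat * 25.4f
  else reading.dropRight(2).toFloat // "...mm"

val sc = new SparkContext(new SparkConf().setAppName("snowfall-deltas"))

val deltas = sc.textFile("hdfs:///data/snowfall.csv") // hypothetical input path
  .flatMap { line =>
    val Array(zip, date, reading) = line.split(",").map(_.trim)
    val Array(month, day, year)   = date.split("/")
    val mm = toMillimeters(reading)
    // One record keyed by (state, day-of-year) and one keyed by (state, year).
    Seq((s"${zipToState(zip)} , $month/$day", mm), (s"${zipToState(zip)} , $year", mm))
  }
  .groupByKey()
  .map { case (key, readings) => (key, readings.max - readings.min) }
```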
13. Debugging Incorrect Output
Input records:
99504 , 01/01/1992 , 1ft
99504 , 03/01/1992 , 0.1ft
99504 , 01/01/1993 , 70in
99504 , 03/01/1993 , 145mm
99504 , 01/01/1994 , 245mm
99504 , 01/01/1993 , 85mm
90031 , 02/01/1991 , 0mm
Output (after TextFile → FlatMap → GroupByKey → Map):
AK , 01/01 , 21251
AK , 03/01 , 114.5
AK , 1992 , 274.3
AK , 1993 , 21251
AK , 1994 , 0
CA , 02/01 , 0
CA , 1991 , 0
[Diagram: the intermediate FlatMap and GroupByKey records are shown in the slide; the deltas of 21251 for AK , 01/01 and AK , 1993 are suspiciously large.]
Debugging is challenging because the input is very large and the program involves computing min and max and a unit conversion.
14. Data Provenance with Titian [VLDB ‘15]
[Diagram: the same pipeline as before; tracing the faulty outputs backward with data provenance selects every input record that belongs to a faulty key group.]
Data provenance over-approximates the scope of failure-inducing inputs: all records in a faulty key group are marked as faulty.
15. Delta Debugging
• Alice tests the output from different input configurations using a test function:
def test(key: String, delta: Float): Boolean = {
  delta < 6000
}
Run 1: the complete input (all seven records) fails the test.
[Diagram: the full pipeline run over all input records; the outputs AK , 01/01 , 21251 and AK , 1993 , 21251 violate the test.]
16. Delta Debugging
• DD is a systematic debugging procedure guided by a test function. DD is inefficient because it does not eliminate irrelevant input records upfront.
Run 2: the subset {1ft, 0.1ft, 70in} (records identified by their snowfall reading; all are from zip code 99504) fails the test.
17. Delta Debugging
Run 3: the subset {1ft, 0.1ft} passes the test.
18. Delta Debugging
Run 4: the subset {70in} passes the test.
19. Delta Debugging
Run 5: the subset {1ft} passes the test.
20. Delta Debugging
Run 6: the subset {0.1ft} passes the test.
21. Delta Debugging
Run 7: the subset {70in} passes the test.
22. Delta Debugging
Run 8: the subset {0.1ft, 70in} passes the test.
23. Delta Debugging
Run 9: the subset {1ft, 70in} fails the test.
24. Automated Debugging in DISC with BigSift
Input: a Spark program and a test function. Output: minimum fault-inducing input records.
Approach: data provenance + delta debugging, with three optimizations: test predicate pushdown, prioritizing backward traces, and bitmap-based test memoization.
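To make the combination concrete, here is a hypothetical sketch of the outer loop (not BigSift's actual API): data provenance first narrows the input to records that trace back from the faulty output, then a simplified, bisection-style variant of delta debugging shrinks that set further using the user's test function. runSpark and backwardTrace are stand-ins for BigSift internals.

```scala
// Hypothetical sketch of the provenance + delta debugging loop.
def isolate(allRecords: Seq[String],
            runSpark: Seq[String] => Seq[(String, Float)], // re-runs the Spark job on a subset
            backwardTrace: Seq[String] => Seq[String],     // data provenance: records behind faulty outputs
            test: (String, Float) => Boolean): Seq[String] = {

  // A configuration "fails" if any output record violates the test oracle.
  def fails(records: Seq[String]): Boolean =
    runSpark(records).exists { case (k, v) => !test(k, v) }

  // Simplified bisection (full ddmin also increases granularity when both halves pass).
  var current = backwardTrace(allRecords)
  var shrunk = true
  while (shrunk && current.size > 1) {
    shrunk = false
    val (left, right) = current.splitAt(current.size / 2)
    if (fails(left)) { current = left; shrunk = true }
    else if (fails(right)) { current = right; shrunk = true }
  }
  current
}
```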
25. Optimization 1: Test Predicate Pushdown
• Observation: during backward tracing, data provenance traces through all the partitions even though only a few partitions are faulty.
If applicable, BigSift pushes down the test function to test the output of combiners in order to isolate the faulty partitions.
[Diagram: the test function applied to each combiner's output so that only faulty partitions are traced.]
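One way to picture the pushdown, assuming the aggregation is algebraic so a partition-local (combiner-style) aggregate is meaningful to test; this is an illustrative sketch, not BigSift's implementation, and keyed and test are assumed to exist:

```scala
import org.apache.spark.rdd.RDD

// keyed: the (key, value) pairs just before aggregation; test: the user's oracle.
def faultyPartitions(keyed: RDD[(String, Float)],
                     test: (String, Float) => Boolean): Set[Int] =
  keyed
    .mapPartitionsWithIndex { (pid, it) =>
      // Partition-local min/max delta per key, mimicking a combiner.
      val local = it.toSeq.groupBy(_._1).map { case (k, vs) =>
        k -> (vs.map(_._2).max - vs.map(_._2).min)
      }
      // Keep this partition only if one of its local aggregates already fails the test.
      if (local.exists { case (k, delta) => !test(k, delta) }) Iterator.single(pid)
      else Iterator.empty
    }
    .collect()
    .toSet
```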
26. Optimization 2: Prioritizing Backward Traces
• Observation: the same faulty input record may contribute to multiple output records failing the test.
[Diagram: the weather pipeline with two faulty outputs (AK , 01/01 , 21251 and AK , 1993 , 21251) whose backward traces overlap on the same input records.]
In case of multiple faulty outputs, BigSift overlaps two backward traces to minimize the scope of fault-inducing input records.
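As a small illustration of the overlap idea (a sketch only; the offsets are made up and BigSift's internal representation may differ), if each faulty output yields a set of input-record offsets from backward tracing, the intersection is the highest-priority candidate set:

```scala
// Backward traces for two faulty outputs, as sets of input-record offsets.
val traceA: Set[Long] = Set(0L, 2L, 5L) // hypothetical offsets behind faulty output 1
val traceB: Set[Long] = Set(2L, 3L, 5L) // hypothetical offsets behind faulty output 2

val prioritized = traceA intersect traceB                // records implicated in both failures
val remaining   = (traceA union traceB) diff prioritized // debugged only if needed
```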
27. Optimization 3: Bitmap-Based Test Memoization
• Observation: delta debugging may try running a program on the same subset of the input redundantly.
• BigSift leverages bitmaps to compactly encode the offsets of the original input that make up an input subset.
[Diagram: an input subset encoded as the bitmap 0101000011, mapped to a memoized test outcome.]
We use a bitmap-based test memoization technique to avoid redundant testing of the same input dataset.
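A minimal sketch of the memoization idea (assuming subsets are identified by the offsets of the original input records; runTest stands in for re-running the job plus the oracle and is hypothetical):

```scala
import java.util.BitSet
import scala.collection.mutable

// Memoize test outcomes keyed by a bitmap over input offsets,
// so the same input subset is never re-tested.
val memo = mutable.Map.empty[BitSet, Boolean]

def runTestMemoized(offsets: Seq[Int], runTest: Seq[Int] => Boolean): Boolean = {
  val key = new BitSet()
  offsets.foreach(i => key.set(i)) // bit i set <=> input record at offset i is included
  memo.getOrElseUpdate(key, runTest(offsets))
}
```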
29. Debugging Time
Subject Program  | Fault | Original Job (s) | DD (s)  | BigSift (s) | Improvement
Movie Histogram  | Code  | 56.2             | 232.8   | 17.3        | 13.5X
Inverted Index   | Code  | 107.7            | 584.2   | 13.4        | 43.6X
Rating Histogram | Code  | 40.3             | 263.4   | 16.6        | 15.9X
Sequence Count   | Code  | 356.0            | 13772.1 | 208.8       | 66.0X
Rating Frequency | Code  | 77.5             | 437.9   | 14.9        | 29.5X
College Student  | Data  | 53.1             | 235.3   | 31.8        | 7.4X
Weather Analysis | Data  | 238.5            | 999.1   | 89.9        | 11.1X
Transit Analysis | Code  | 45.5             | 375.8   | 20.2        | 18.6X
(Original Job is the running time of the unmodified job; DD and BigSift are debugging times.)
BigSift provides up to a 66X speed-up in isolating the precise fault-inducing input records, in comparison to the baseline DD.
30. Debugging Time
(Same table as slide 29.)
On average, BigSift takes 62% less time to debug a single faulty output than the time taken for a single run on the entire data.
31. Debugging Time
[Chart: number of fault-inducing input records vs. fault localization time (s) for the Sequence Count subject, comparing Delta Debugging, BigSift, Test-Driven Data Provenance, and Data Provenance.]
On average, BigSift takes 62% less time to debug a single faulty output than the time taken for a single run on the entire data.
32. Fault Localizability over Data Provenance
[Chart: number of fault-inducing input records per subject program (Movie Histogram, Inverted Index, Rating Histogram, Sequence Count, Rating Frequency, College Students, Weather Analysis) for Data Provenance, Test-Driven Data Provenance, and BigSift & DD; data provenance alone leaves roughly 10^3 to 10^7 records, while BigSift & DD isolates one or two.]
BigSift leverages DD after DP to continue fault isolation, achieving several orders of magnitude (10^3 to 10^7) better precision.
33. Summary
• BigSift is the first piece of work on automated debugging of big data analytics in DISC.
• BigSift provides 10^3X – 10^7X more precision than data provenance in terms of fault localizability.
• It provides up to a 66X speed-up in debugging time over baseline Delta Debugging.
• In our evaluation we have observed that, on average, BigSift finds the faulty input in 62% less time than the original job execution time.