Machine learning (ML) models are typically part of prediction queries that consist of a data processing part (e.g., for joining, filtering, cleaning, featurization) and an ML part invoking one or more trained models. In this presentation, we identify significant and unexplored opportunities for optimization. To the best of our knowledge, this is the first effort to look at prediction queries holistically, optimizing across both the ML and SQL components.
We will present Raven, an end-to-end optimizer for prediction queries. Raven relies on a unified intermediate representation that captures both data processing and ML operators in a single graph structure.
This allows us to introduce optimization rules that
(i) reduce unnecessary computations by passing information between the data processing and ML operators,
(ii) leverage operator transformations (e.g., turning a decision tree into a SQL expression or an equivalent neural network) to map operators to the right execution engine, and
(iii) integrate compiler techniques to take advantage of the most efficient hardware backend (e.g., CPU, GPU) for each operator.
We have implemented Raven as an extension to Spark’s Catalyst optimizer to enable the optimization of SparkSQL prediction queries. Our implementation also allows the optimization of prediction queries in SQL Server. As we will show, Raven is capable of improving prediction query performance on Apache Spark and SQL Server by up to 13.1x and 330x, respectively. For complex models, where GPU acceleration is beneficial, Raven provides up to 8x speedup compared to state-of-the-art systems. As part of the presentation, we will also give a demo showcasing Raven in action.
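To make rule (ii) concrete, here is a minimal sketch of one such operator transformation, assuming a fitted scikit-learn DecisionTreeClassifier and invented column names (Raven's actual rewrite engine is not public; this only illustrates compiling a tree into a SQL CASE expression):

```python
from sklearn.tree import DecisionTreeClassifier

def tree_to_sql(tree: DecisionTreeClassifier, feature_names: list) -> str:
    """Recursively render a fitted sklearn decision tree as a SQL CASE expression."""
    t = tree.tree_

    def recurse(node):
        if t.children_left[node] == -1:        # leaf: emit the majority class
            return str(int(t.value[node].argmax()))
        col = feature_names[t.feature[node]]
        thr = t.threshold[node]
        left = recurse(t.children_left[node])
        right = recurse(t.children_right[node])
        return f"CASE WHEN {col} <= {thr:.6f} THEN {left} ELSE {right} END"

    return recurse(0)

# Hypothetical usage, where clf is a fitted DecisionTreeClassifier:
# print(tree_to_sql(clf, ["age", "income"]))
```

The resulting expression can be pushed into the data processing engine, letting filters and projections apply before (or instead of) invoking an external ML runtime.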
Vector databases are a new vertical of databases used to index and measure the similarity between different pieces of data. While they work well with structured data, they really shine when used for Vector Similarity Search (VSS) to compare similarity across unstructured data, such as vector embeddings of images, audio, or long pieces of text.
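To make the VSS idea concrete, here is a minimal brute-force similarity search over embeddings with NumPy (a real vector database would use an approximate index such as HNSW; the data here is random and purely illustrative):

```python
import numpy as np

def cosine_top_k(query, embeddings, k=5):
    """Return the indices of the k embeddings most similar to the query vector."""
    q = query / np.linalg.norm(query)
    e = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    scores = e @ q                      # cosine similarity against every row
    return np.argsort(-scores)[:k]

embeddings = np.random.rand(10_000, 384)   # e.g. sentence-embedding vectors
query = np.random.rand(384)
print(cosine_top_k(query, embeddings))
```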
Scalable Monitoring Using Prometheus with Apache Spark Clusters with Diane F... (Databricks)
As Apache Spark applications move to a containerized environment, there are many questions about how best to configure server systems in the container world. In this talk we will demonstrate a set of tools to better monitor performance and identify optimal configuration settings. We will demonstrate how Prometheus, a project that is now part of the Cloud Native Computing Foundation (CNCF: https://www.cncf.io/projects/), can be applied to monitor and archive system performance data in a containerized Spark environment.
In our examples, we will gather Spark metric output through Prometheus and present the data with Grafana dashboards. We will use our examples to demonstrate how performance can be enhanced through different tuned configuration settings. Our demo will show how to configure settings across the cluster as well as within each node.
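As a rough illustration of wiring Spark up for Prometheus scraping (these configuration keys are from Spark 3.x's built-in PrometheusServlet; the talk itself may have used a JMX exporter instead):

```python
from pyspark.sql import SparkSession

# Expose metrics in Prometheus format from the Spark UI
# (Spark 3.0+; executor metrics appear at /metrics/executors/prometheus).
spark = (
    SparkSession.builder
    .appName("prometheus-monitoring-demo")
    .config("spark.ui.prometheus.enabled", "true")
    .config("spark.metrics.conf.*.sink.prometheusServlet.class",
            "org.apache.spark.metrics.sink.PrometheusServlet")
    .config("spark.metrics.conf.*.sink.prometheusServlet.path",
            "/metrics/prometheus")
    .getOrCreate()
)
```

Prometheus then scrapes these endpoints and Grafana dashboards query Prometheus for visualization.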
Emily Denton - Unsupervised Learning of Disentangled Representations from Vid... (Luba Elliott)
This talk by Emily Denton from New York University on "Unsupervised Learning of Disentangled Representations from Video" was presented at the Learning Image Representations event on 30th August at Twitter as part of the Creative AI meetup.
Visual Prompt Tuning (VPT), parameter-efficient fine-tuning
Papers presented so far: https://github.com/Lilcob/-DL_PaperReadingMeeting
Presentation slides: https://www.slideshare.net/taeseonryu/mplug
Hello, this is the Deep Learning Paper Reading Group! The paper we introduce today is 'Visual Prompt Tuning for Transformers with Frozen Weights'.
The paper introduces Visual Prompt Tuning (VPT), an efficient and effective alternative for fine-tuning large-scale Transformer models in vision. VPT introduces a small number of trainable parameters in the input space while keeping the model backbone frozen.
The experiments show that, with this approach, VPT achieves significant performance gains over other parameter-efficient tuning protocols, in many cases even outperforming full fine-tuning while reducing per-task storage costs.
The paper addresses the challenge of adapting large pre-trained Transformers to downstream tasks in terms of both effectiveness and efficiency, and shows that performance on various vision-language tasks can be improved in a more efficient way.
Kyungjin Cho from the image processing group kindly prepared today's detailed review. Thank you in advance for your interest!
https://youtu.be/bVOk-hSYyZw
Image restoration techniques covered include denoising, deblurring, and super-resolution for 3D images and models.
From classical computer vision techniques to contemporary deep-learning-based processing for both ordered and unordered point clouds, depth maps, and meshes.
Unlocking the Power of Lakehouse Architectures with Apache Pulsar and Apache ... (StreamNative)
Lakehouses are quickly growing in popularity as a new approach to data platform architecture, bringing some of the long-established benefits of the OLTP world to OLAP, including transactions, record-level updates/deletes, and change streaming. In this talk, we will discuss Apache Hudi and how it unlocks the possibility of building your own fully open-source lakehouse featuring a rich set of integrations with existing technologies, including Apache Pulsar. In this session, we will present:
- What lakehouses are, and why they are needed.
- What Apache Hudi is and how it works.
- A use case and demo that applies Apache Hudi's DeltaStreamer tool to ingest data from Apache Pulsar.
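For flavor, a minimal PySpark upsert into a Hudi table (option keys follow the Hudi quickstart; the table, fields, and path are invented, and the hudi-spark bundle is assumed to be on the classpath):

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("hudi-upsert-demo").getOrCreate()
df = spark.createDataFrame(
    [("t1", "2022-01-01 00:00:00", 9.5)], ["trip_id", "event_ts", "fare"]
)

hudi_options = {
    "hoodie.table.name": "trips",
    "hoodie.datasource.write.recordkey.field": "trip_id",
    "hoodie.datasource.write.precombine.field": "event_ts",
    "hoodie.datasource.write.operation": "upsert",
}

# Hudi merges incoming records into the table by record key,
# keeping the row with the latest precombine value.
df.write.format("hudi").options(**hudi_options).mode("append").save("/tmp/hudi/trips")
```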
Security is one of the fundamental features for enterprise adoption. Specifically, for SQL users, row/column-level access control is important. However, when a cluster is used as a data warehouse accessed by various user groups in different ways, it is difficult to guarantee data governance in a consistent way. In this talk, we focus on SQL users and discuss how to provide row/column-level access controls with common access control rules throughout the whole cluster across various SQL engines, e.g., Apache Spark 2.1, Apache Spark 1.6, and Apache Hive 2.1. If some of the rules are changed, all engines are controlled consistently in near real-time. Technically, we enable Spark Thrift Server to work with an identity given by the JDBC connection and take advantage of the Hive LLAP daemon as a shared and secured processing engine. We demonstrate row-level filtering, column-level filtering, and various column maskings in Apache Spark with Apache Ranger. We use Apache Ranger as a single security control center.
Evaluation of TPC-H on Spark and Spark SQL in ALOJA (DataWorks Summit)
The evaluation of TPC-H on Spark and Spark SQL in ALOJA was conducted at the Big Data Lab to obtain a master's degree in Management Information Systems at the Johann Wolfgang Goethe University in Frankfurt, Germany. Furthermore, the analysis was partially accomplished in collaboration and close coordination with the Barcelona Supercomputing Center.
The intention of this research was to integrate a TPC-H on Spark Scala benchmark into ALOJA, an open-source public platform for automated and cost-efficient benchmarking, and to evaluate the runtime of Spark Scala, with or without the Hive Metastore, compared to Spark SQL. Various alternative file formats, with different compressions applied to the underlying data, are evaluated along with their impact. The performance evaluation exposed diverse and intriguing outcomes for both benchmarks. Further investigations attempt to detect possible bottlenecks and other irregularities, with the aim of deepening knowledge of Spark's engine by examining the physical plans. Our experiments show, inter alia, that: (1) Spark Scala performs better in the case of heavy expression calculation, and (2) Spark SQL is the better choice in the case of strong data-access locality combined with heavyweight parallel execution. In conclusion, diverse results were observed, with the consequence that each API has its advantages and disadvantages.
Surprisingly, our findings are split between Spark SQL and Spark Scala: contrary to our expectations, Spark Scala did not outperform Spark SQL in all aspects, which supports the idea that the applied optimizations appear to be implemented differently by Spark for its core and for its Spark SQL extension. The API on top of Spark provides extra information about the underlying structured data, which is probably used to perform additional optimizations.
In conclusion, our research demonstrates that there are differences in the generation of query execution plans that go hand-in-hand with similar discoveries leading to inefficient joins, and it underlines the importance of our benchmark for identifying disparities and bottlenecks.
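The Scala-versus-SQL comparison boils down to expressing the same logical plan through two APIs. A PySpark analogue of the idea, using a simplified TPC-H Q6-style filter-aggregate (data path and simplifications are illustrative):

```python
from pyspark.sql import SparkSession
import pyspark.sql.functions as F

spark = SparkSession.builder.appName("tpch-q6-demo").getOrCreate()
lineitem = spark.read.parquet("/data/tpch/lineitem")  # assumed path

# DataFrame-API version (the analogue of the "Spark Scala" benchmark)
df_result = (
    lineitem
    .where(F.col("l_discount").between(0.05, 0.07) & (F.col("l_quantity") < 24))
    .agg(F.sum(F.col("l_extendedprice") * F.col("l_discount")).alias("revenue"))
)

# Spark SQL version of the same query; both compile to Catalyst plans
lineitem.createOrReplaceTempView("lineitem")
sql_result = spark.sql("""
    SELECT SUM(l_extendedprice * l_discount) AS revenue
    FROM lineitem
    WHERE l_discount BETWEEN 0.05 AND 0.07 AND l_quantity < 24
""")
```

Comparing the physical plans of the two results (via `explain()`) is exactly the kind of analysis the study performed.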
Speaker
Raphael Radowitz, Quality Specialist, SAP Labs Korea
Apache Pulsar Development 101 with Python (Timothy Spann)
Apache Pulsar Development 101 with Python PS2022_Ecosystem_v0.0
There is always the fear that a speaker cannot make it, so since I was the MC for the ecosystem track, I put together a backup talk just in case.
Here it is. Never seen or presented.
Data Build Tool (DBT) is an open-source technology for setting up your data lake using best practices from software engineering. This SQL-first technology is a great marriage between Databricks and Delta, allowing you to maintain high-quality data and documentation during the entire data lake lifecycle. In this talk I'll give an introduction to DBT and show how we can leverage Databricks to do the actual heavy lifting. Next, I'll present how DBT supports Delta to enable upserting using SQL. Then we show how we integrate DBT+Databricks into the Azure cloud. Finally, we show how we emit the pipeline metrics to Azure Monitor to make sure that you have observability over your pipeline.
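The Delta upsert that DBT drives is ordinary MERGE INTO SQL. A minimal PySpark sketch, assuming a Delta-enabled session and two invented table names:

```python
from pyspark.sql import SparkSession

# Assumes `target` is a registered Delta table and `updates` holds new records.
spark = SparkSession.builder.appName("delta-upsert-demo").getOrCreate()

spark.sql("""
    MERGE INTO target AS t
    USING updates AS s
    ON t.id = s.id
    WHEN MATCHED THEN UPDATE SET *
    WHEN NOT MATCHED THEN INSERT *
""")
```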
Slides for the Data Syndrome one-hour course on PySpark. Introduces basic operations, Spark SQL, Spark MLlib, and exploratory data analysis with PySpark. Shows how to use pylab with Spark to create histograms.
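One of the course's exercises, sketched: computing a histogram with the RDD API and plotting it with pylab (the dataset and column name are stand-ins):

```python
import pylab as plt
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("histogram-demo").getOrCreate()
df = spark.range(0, 10_000).toDF("value")  # stand-in for a real dataset

# RDD histogram: returns (bucket_boundaries, counts), computed in parallel
buckets, counts = df.select("value").rdd.flatMap(lambda row: row).histogram(20)

width = buckets[1] - buckets[0]
plt.bar(buckets[:-1], counts, width=width)
plt.show()
```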
Don’t Forget About Your Past—Optimizing Apache Druid Performance With Neil Bu... (HostedbyConfluent)
Don’t Forget About Your Past—Optimizing Apache Druid Performance With Neil Buesing | Current 2022
Businesses need to react to results immediately; to achieve this, real-time processing is becoming a requirement in many analytic verticals. But sometimes, the move from batch to real-time can leave you in a pinch. How do you handle and correct mistakes in your data? How do you migrate a new system to real-time along with historical data?
Let’s start with how to run Apache Druid locally in your container-based development environment. While streaming real-time events from Kafka into Druid, an S3-compliant store captures messages via Kafka Connect for historical processing. We will explore the performance implications when the real-time stream of events contains historical data, how that affects performance, and the techniques to prevent those issues, leaving a high-performance analytic platform that supports both real-time and historical processing.
You’ll leave with the tools for doing real-time analytic processing and historical batch processing from a single source of truth. Your Druid cluster will have better rollups (pre-computed aggregates) and fewer segments, which reduces cost and improves query performance.
Massive Data Processing in Adobe Using Delta Lake (Databricks)
At Adobe Experience Platform, we ingest TBs of data every day and manage PBs of data for our customers as part of the Unified Profile offering. At the heart of this is complex ingestion of a mix of normalized and denormalized data with various linkage scenarios, powered by a central Identity Linking Graph. This helps power various marketing scenarios that are activated in multiple platforms and channels like email, advertisements, etc. We will go over how we built a cost-effective and scalable data pipeline using Apache Spark and Delta Lake and share our experiences.
What are we storing?
Multi Source – Multi Channel Problem
Data Representation and Nested Schema Evolution
Performance Trade-Offs with Various Formats
Go over anti-patterns used
(String FTW)
Data Manipulation using UDFs
Writer Worries and How to Wipe them Away
Staging Tables FTW
Datalake Replication Lag Tracking
Performance Time!
Building an ML Platform with Ray and MLflow (Databricks)
A successful machine learning platform allows ML practitioners to focus solely on their experiments and models and minimizes the time it takes to develop ML applications and take them to production. However, building an ML Platform is typically not an easy task due to the many different components involved in the process. In this talk, we will show how two open source projects, Ray (https://ray.io/) and MLflow (https://mlflow.org/), work together to make it easy for ML platform developers to add scaling and experiment management to their platform.
We will first provide an overview of Ray and its native libraries: Ray Tune (https://tune.io) for distributed hyperparameter tuning and Ray Serve (https://docs.ray.io/en/master/serve/index.html) for scalable model serving. Then we will showcase how MLflow provides a perfect solution for managing experiments through integrations with Ray for tracking and model deployment. Finally, we will finish with a demo of an ML platform built on Ray, MLflow, and other open source tools.
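A minimal sketch of the Ray Tune + MLflow pairing described above (the callback import path has moved between Ray versions; shown here as in Ray 1.x, with a toy objective):

```python
from ray import tune
from ray.tune.integration.mlflow import MLflowLoggerCallback  # Ray 1.x path

def train_fn(config):
    # Toy objective: lower (x - 3)^2 is better
    score = (config["x"] - 3) ** 2
    tune.report(loss=score)  # reported metrics are logged to MLflow

tune.run(
    train_fn,
    config={"x": tune.grid_search([0, 1, 2, 3, 4])},
    callbacks=[MLflowLoggerCallback(experiment_name="tune_demo")],
)
```

Each trial becomes an MLflow run, so hyperparameters and metrics are tracked without extra plumbing in the training function.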
Building Robust ETL Pipelines with Apache Spark (Databricks)
Stable and robust ETL pipelines are a critical component of the data infrastructure of modern enterprises. ETL pipelines ingest data from a variety of sources and must handle incorrect, incomplete, or inconsistent records and produce curated, consistent data for consumption by downstream applications. In this talk, we’ll take a deep dive into the technical details of how Apache Spark “reads” data and discuss how Spark 2.2’s flexible APIs, support for a wide variety of data sources, state-of-the-art Tungsten execution engine, and ability to provide diagnostic feedback to users make it a robust framework for building end-to-end ETL pipelines.
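One concrete robustness technique from this space: reading semi-structured data in PERMISSIVE mode so malformed records are quarantined instead of failing the job (the path and schema are illustrative):

```python
from pyspark.sql import SparkSession
from pyspark.sql.types import StructType, StructField, StringType, LongType

spark = SparkSession.builder.appName("robust-etl-demo").getOrCreate()

schema = StructType([
    StructField("id", LongType()),
    StructField("name", StringType()),
    StructField("_corrupt_record", StringType()),  # captures malformed rows
])

raw = (
    spark.read
    .schema(schema)
    .option("mode", "PERMISSIVE")  # keep bad rows instead of failing the job
    .option("columnNameOfCorruptRecord", "_corrupt_record")
    .json("/data/raw/events.json")  # illustrative path
).cache()  # Spark requires caching before filtering on the corrupt-record column

clean = raw.where(raw._corrupt_record.isNull()).drop("_corrupt_record")
quarantine = raw.where(raw._corrupt_record.isNotNull())
```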
Deep Dive into Stateful Stream Processing in Structured Streaming with Tathag... (Databricks)
Stateful processing is one of the most challenging aspects of distributed, fault-tolerant stream processing. The DataFrame APIs in Structured Streaming make it easy for developers to express their stateful logic, either implicitly (streaming aggregations) or explicitly (mapGroupsWithState). However, there are a number of moving parts under the hood which make all the magic possible. In this talk, I will dive deep into different stateful operations (streaming aggregations, deduplication, and joins) and how they work under the hood in the Structured Streaming engine.
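For instance, an implicitly stateful streaming aggregation with a watermark that bounds how much state the engine must keep (the rate source is a stand-in; column names follow its built-in schema):

```python
from pyspark.sql import SparkSession
import pyspark.sql.functions as F

spark = SparkSession.builder.appName("stateful-streaming-demo").getOrCreate()

# Built-in test source producing `timestamp` and `value` columns
events = spark.readStream.format("rate").option("rowsPerSecond", 10).load()

# The watermark lets the engine drop state for windows older than 10 minutes
counts = (
    events
    .withWatermark("timestamp", "10 minutes")
    .groupBy(F.window("timestamp", "5 minutes"))
    .count()
)

query = counts.writeStream.outputMode("update").format("console").start()
```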
DoWhy: An End-to-End Library for Causal Inference (Amit Sharma)
In addition to efficient statistical estimators of a treatment's effect, successful application of causal inference requires specifying assumptions about the mechanisms underlying observed data and testing whether they are valid, and to what extent. However, most libraries for causal inference focus only on the task of providing powerful statistical estimators. We describe DoWhy, an open-source Python library that is built with causal assumptions as its first-class citizens, based on the formal framework of causal graphs to specify and test causal assumptions. DoWhy presents an API for the four steps common to any causal analysis: 1) modeling the data using a causal graph and structural assumptions, 2) identifying whether the desired effect is estimable under the causal model, 3) estimating the effect using statistical estimators, and finally 4) refuting the obtained estimate through robustness checks and sensitivity analyses. In particular, DoWhy implements a number of robustness checks including placebo tests, bootstrap tests, and tests for unobserved confounding. DoWhy is an extensible library that supports interoperability with other implementations, such as EconML and CausalML for the estimation step.
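The four-step API in miniature, using DoWhy's own synthetic-data helper (method names are from DoWhy's documented estimators and refuters):

```python
from dowhy import CausalModel
import dowhy.datasets

data = dowhy.datasets.linear_dataset(
    beta=10, num_common_causes=3, num_samples=5000, treatment_is_binary=True
)

# Step 1: model the data with a causal graph
model = CausalModel(
    data=data["df"],
    treatment=data["treatment_name"],
    outcome=data["outcome_name"],
    graph=data["gml_graph"],
)

# Step 2: identify the target estimand under the graph's assumptions
estimand = model.identify_effect()

# Step 3: estimate the effect with a statistical estimator
estimate = model.estimate_effect(
    estimand, method_name="backdoor.propensity_score_stratification"
)

# Step 4: refute the estimate via a placebo test
refutation = model.refute_estimate(
    estimand, estimate, method_name="placebo_treatment_refuter"
)
print(estimate.value, refutation)
```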
Scaling your Data Pipelines with Apache Spark on Kubernetes (Databricks)
There is no doubt Kubernetes has emerged as the next generation of cloud-native infrastructure to support a wide variety of distributed workloads. Apache Spark has evolved to run both machine learning and large-scale analytics workloads. There is growing interest in running Apache Spark natively on Kubernetes. By combining the flexibility of Kubernetes with the scalable data processing of Apache Spark, you can run any data and machine learning pipelines on this infrastructure while effectively utilizing the resources at your disposal.
In this talk, Rajesh Thallam and Sougata Biswas will share how to effectively run your Apache Spark applications on Google Kubernetes Engine (GKE) and Google Cloud Dataproc, and orchestrate the data and machine learning pipelines with managed Apache Airflow on GKE (Google Cloud Composer). The following topics will be covered:
- Understanding key traits of Apache Spark on Kubernetes
- Things to know when running Apache Spark on Kubernetes, such as autoscaling
- Demonstration of running analytics pipelines on Apache Spark orchestrated with Apache Airflow on a Kubernetes cluster
Biological, chemical and physical properties of molecules are encoded in their molecular structure. The challenge lies in discovering the relationships between the molecular graphs and the measured activity. Where data is measured, collected and curated for a series of compounds there is an opportunity to find the hidden relationships.
Chemical structures come in various shapes and sizes, depending on the scientists or even the algorithms that create them. Though the variability may sometimes seem subtle to a trained chemist's eye, it can introduce inconsistencies that impair chemical search algorithms or model building. Structure normalization is a key component of any cheminformatics workflow, with an often underestimated significance. Finding relationships between chemical structures and their measured properties relies primarily on the representation of the chemical matter. Variability of the calculated features and descriptors for these representations can influence data analysis and the accuracy of the predictions. During the first part of the presentation we will present the effect of chemical normalization on investigating correlations and building predictive models.
The second part of the talk will incorporate the results of model building on 163 ChEMBL targets extracted from the bioactivity benchmark set [1]. Results with different descriptor generation methods, including ECFP fingerprints, MACCS keys, structural properties, geometry properties, and physicochemical properties, will be discussed in detail. This part focuses on summarizing the results of more than 3000 Random Forest models.
Finally, model development for ADMET targets will be highlighted, including hERG cardiotoxicity prediction, permeability, and blood-brain barrier penetration. We will describe how these models can be built, analyzed, optimized, and deployed using our new machine learning platform.
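A toy version of the descriptor-plus-Random-Forest pipeline, using RDKit as a stand-in toolkit (the talk uses the authors' own platform; SMILES strings and activity labels here are fabricated for illustration):

```python
import numpy as np
from rdkit import Chem
from rdkit.Chem import AllChem
from sklearn.ensemble import RandomForestClassifier

smiles = ["CCO", "c1ccccc1", "CC(=O)O", "CCN"]   # illustrative molecules
labels = [0, 1, 0, 1]                            # fabricated activities

def ecfp(smi, radius=2, n_bits=2048):
    """ECFP-style Morgan fingerprint as a numpy bit vector."""
    mol = Chem.MolFromSmiles(smi)
    fp = AllChem.GetMorganFingerprintAsBitVect(mol, radius, nBits=n_bits)
    return np.array(fp)

X = np.vstack([ecfp(s) for s in smiles])
clf = RandomForestClassifier(n_estimators=100).fit(X, labels)
```

Normalizing the input structures before fingerprinting is exactly the step whose impact the first part of the talk quantifies.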
Automation of building reliable models (Eszter Szabó)
The volume and velocity of bioactivity data available from public or in-house sources represent an immense opportunity to be exploited in novel compound design. An ever-wider array of targets with labelled data necessitates efficient solutions for building a large number of individual models. The velocity of data growth makes it possible to achieve higher accuracy through continuous re-training of existing models. Automatic re-training maximizes the applicability domain and minimizes the risk of accuracy drops as a project expands into novel chemical series.
Formal methods help improve the quality and reliability of software by providing proofs of correctness. However, ensuring the correctness of the verification tools that apply these formal methods is itself a much harder problem. A typical way to justify correctness is to provide soundness proofs based on semantic models. For program verifiers these soundness proofs are quite large and complex. In this thesis, we introduce certified reasoning to provide machine-checked proofs of various components of an automated verification system. We develop new certified decision procedures (Omega++) and certified proofs (for compatible sharing) and integrate them with an existing automated verification system (HIP/SLEEK). We show that certified reasoning improves the correctness and expressivity of automated verification without sacrificing performance.
The workshop is an overview of creating predictive models using R. An example data set will be used to demonstrate a typical workflow: data splitting, pre-processing, model tuning, and evaluation. Several R packages will be shown, along with the caret package, which provides a unified interface to a large number of R's modeling functions and enables parallel processing. Participants should have a basic understanding of R data structures and basic language elements (e.g., functions, classes, etc.).
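The workshop itself is in R with caret; as a rough cross-language analogue, the same workflow sketched in Python with scikit-learn on a toy dataset:

```python
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = load_breast_cancer(return_X_y=True)

# Data splitting
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0
)

# Pre-processing + model in one pipeline, tuned over a small grid
pipe = Pipeline([("scale", StandardScaler()), ("svm", SVC())])
grid = GridSearchCV(pipe, {"svm__C": [0.1, 1, 10]}, cv=5, n_jobs=-1)  # parallel, like caret
grid.fit(X_train, y_train)

# Evaluation on held-out data
print(grid.best_params_, grid.score(X_test, y_test))
```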
Static analysis: Around Java in 60 minutes (Andrey Karpov)
Theory
Code quality (bugs, vulnerabilities)
Methodologies of code protection against defects
Code Review
Static analysis and everything related to it
Tools
Existing tools of static analysis
SonarQube
PVS-Studio for Java: what is it?
Several detected examples of code with defects
More about static analysis
Conclusions
Lab 2: Classification and Regression Prediction Models, training and testing ... (Yao Yao)
https://github.com/yaowser/data_mining_group_project
https://www.kaggle.com/c/zillow-prize-1/data
From the Zillow real estate data set of properties in the southern California area, apply the following data cleaning, data analysis, predictive analysis, and machine learning steps:
Lab 2: Classification and Regression Prediction Models, training and testing splits, optimization of K Nearest Neighbors (KD tree), optimization of Random Forest, optimization of Naive Bayes (Gaussian), advantages and model comparisons, feature importance, Feature ranking with recursive feature elimination, Two dimensional Linear Discriminant Analysis
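A condensed scikit-learn sketch of the lab's model-comparison steps (synthetic data stands in for the Zillow set):

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.neighbors import KNeighborsClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.feature_selection import RFE
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

# Optimize KNN (KD-tree) and Random Forest over small grids; Gaussian NB as baseline
knn = GridSearchCV(KNeighborsClassifier(algorithm="kd_tree"),
                   {"n_neighbors": [3, 5, 11]}, cv=5).fit(X_tr, y_tr)
rf = GridSearchCV(RandomForestClassifier(random_state=0),
                  {"n_estimators": [100, 300]}, cv=5).fit(X_tr, y_tr)
nb = GaussianNB().fit(X_tr, y_tr)

for name, m in [("knn", knn), ("rf", rf), ("nb", nb)]:
    print(name, m.score(X_te, y_te))

# Feature ranking with recursive feature elimination
rfe = RFE(LogisticRegression(max_iter=1000), n_features_to_select=5).fit(X_tr, y_tr)
print("RFE ranking:", rfe.ranking_)
```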
EVERYTHING ABOUT STATIC CODE ANALYSIS FOR A JAVA PROGRAMMER (Andrey Karpov)
Nyc open-data-2015-andvanced-sklearn-expanded (Vivian S. Zhang)
Scikit-learn is a machine learning library in Python that has become a valuable tool for many data science practitioners.
This talk will cover some of the more advanced aspects of scikit-learn, such as building complex machine learning pipelines, model evaluation, parameter search, and out-of-core learning.
Apart from metrics for model evaluation, we will cover how to evaluate model complexity and how to tune parameters with grid search and randomized parameter search, and what their trade-offs are. We will also cover out-of-core text feature processing via feature hashing.
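Two of the talk's topics in one compressed sketch: a pipeline tuned with randomized parameter search, and hashing-based text features suitable for out-of-core learning (the documents and labels are toy data):

```python
from scipy.stats import loguniform
from sklearn.feature_extraction.text import HashingVectorizer
from sklearn.linear_model import SGDClassifier
from sklearn.model_selection import RandomizedSearchCV
from sklearn.pipeline import Pipeline

docs = ["spam spam spam", "meeting at noon", "cheap pills", "lunch tomorrow"]
labels = [1, 0, 1, 0]   # toy data

# HashingVectorizer is stateless, so it also works on out-of-core batches
pipe = Pipeline([
    ("hash", HashingVectorizer(n_features=2**18)),
    ("clf", SGDClassifier(loss="log_loss")),
])

search = RandomizedSearchCV(
    pipe, {"clf__alpha": loguniform(1e-6, 1e-2)}, n_iter=5, cv=2, random_state=0
)
search.fit(docs, labels)
print(search.best_params_)
```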
---------------------------------------------------------
Andreas is an Assistant Research Scientist at the NYU Center for Data Science, building a group to work on open-source software for data science. Previously he was a Machine Learning Scientist at Amazon, working on computer vision and forecasting problems. He is one of the core developers of the scikit-learn machine learning library and has maintained it for several years.
Material will be posted here:
https://github.com/amueller/pydata-nyc-advanced-sklearn
Blog:
peekaboo-vision.blogspot.com
Twitter:
https://twitter.com/t3kcit
Running Intelligent Applications inside a Database: Deep Learning with Python... (Miguel González-Fierro)
In this talk we present a new paradigm of computation where the intelligence is computed inside the database. Standard software systems must pull the data out of the database to execute a routine; if the data is big, this data movement introduces inefficiencies. Stored procedures tried to solve this issue in the past by allowing simple functions to be computed inside the database, but only simple routines can be executed.
To showcase the capabilities of our new system, we created a lung cancer detection algorithm using Microsoft's Cognitive Toolkit, also known as CNTK. We used transfer learning between the ImageNet dataset, which contains natural images, and a lung cancer dataset, which contains scans of horizontal sections of the lung for healthy and sick patients. Specifically, a Convolutional Neural Network pretrained on ImageNet is applied to the lung cancer dataset to generate features. Once the features are computed, a boosted tree is used to predict whether the patient has cancer or not.
All of this is computed inside the database, so data movement is minimized. We are even able to execute the algorithm using the GPU of the virtual machine that hosts the database. Using a GPU, we can compute the featurization in less than 1h, in contrast to a CPU, which would take up to 32h. Finally, we set up an API to connect the solution to a web app, where a doctor can analyze the images and get a prediction for a patient.
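The second stage of that pipeline, sketched generically: given features already produced by a pretrained CNN, fit a boosted tree classifier (random features and fabricated labels stand in here; the original used CNTK features inside SQL Server):

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

# Pretend these are 512-dim features from a pretrained CNN's penultimate layer
features = np.random.rand(1000, 512)
has_cancer = np.random.randint(0, 2, size=1000)   # fabricated labels

X_tr, X_te, y_tr, y_te = train_test_split(features, has_cancer, random_state=0)
clf = GradientBoostingClassifier().fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
```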
Data Lakehouse Symposium | Day 1 | Part 1 (Databricks)
The world of data architecture began with applications. Next came data warehouses. Then text was organized into a data warehouse.
Then one day the world discovered a whole new kind of data that was being generated by organizations. The world found that machines generated data that could be transformed into valuable insights. This was the origin of what is today called the data lakehouse. The evolution of data architecture continues today.
Come listen to industry experts describe this transformation of ordinary data into a data architecture that is invaluable to business. Simply put, organizations that take data architecture seriously are going to be at the forefront of business tomorrow.
This is an educational event.
Several of the authors of the book Building the Data Lakehouse will be presenting at this symposium.
Data Lakehouse Symposium | Day 1 | Part 2 (Databricks)
The world of data architecture began with applications. Next came data warehouses. Then text was organized into a data warehouse.
Then one day the world discovered a whole new kind of data that was being generated by organizations. The world found that machines generated data that could be transformed into valuable insights. This was the origin of what is today called the data lakehouse. The evolution of data architecture continues today.
Come listen to industry experts describe this transformation of ordinary data into a data architecture that is invaluable to business. Simply put, organizations that take data architecture seriously are going to be at the forefront of business tomorrow.
This is an educational event.
Several of the authors of the book Building the Data Lakehouse will be presenting at this symposium.
5 Critical Steps to Clean Your Data Swamp When Migrating Off of Hadoop (Databricks)
In this session, learn how to quickly supplement your on-premises Hadoop environment with a simple, open, and collaborative cloud architecture that enables you to generate greater value with scaled application of analytics and AI on all your data. You will also learn five critical steps for a successful migration to the Databricks Lakehouse Platform along with the resources available to help you begin to re-skill your data teams.
Democratizing Data Quality Through a Centralized Platform (Databricks)
Bad data leads to bad decisions and broken customer experiences. Organizations depend on complete and accurate data to power their business, maintain efficiency, and uphold customer trust. With thousands of datasets and pipelines running, how do we ensure that all data meets quality standards, and that expectations are clear between producers and consumers? Investing in shared, flexible components and practices for monitoring data health is crucial for a complex data organization to rapidly and effectively scale.
At Zillow, we built a centralized platform to meet our data quality needs across stakeholders. The platform is accessible to engineers, scientists, and analysts, and seamlessly integrates with existing data pipelines and data discovery tools. In this presentation, we will provide an overview of our platform’s capabilities, including:
Giving producers and consumers the ability to define and view data quality expectations using a self-service onboarding portal
Performing data quality validations using libraries built to work with Spark (see the sketch after this list)
Dynamically generating pipelines that can be abstracted away from users
Flagging data that doesn’t meet quality standards at the earliest stage and giving producers the opportunity to resolve issues before use by downstream consumers
Exposing data quality metrics alongside each dataset to provide producers and consumers with a comprehensive picture of health over time
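A minimal PySpark flavor of such a validation: compute a null-rate expectation and fail fast when it is violated (the threshold, table, and column names are invented; a real platform would emit metrics and gate downstream consumers instead of raising):

```python
from pyspark.sql import SparkSession
import pyspark.sql.functions as F

spark = SparkSession.builder.appName("dq-check-demo").getOrCreate()
df = spark.createDataFrame([(1, "a"), (2, None), (3, "c")], ["id", "name"])

# Expectation: at most 10% of `name` values may be null
null_rate = df.where(F.col("name").isNull()).count() / df.count()
if null_rate > 0.10:
    raise ValueError(f"data quality check failed: name null rate {null_rate:.1%}")
```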
Learn to Use Databricks for Data Science (Databricks)
Data scientists face numerous challenges throughout the data science workflow that hinder productivity. As organizations continue to become more data-driven, a collaborative environment is more critical than ever — one that provides easier access and visibility into the data, reports and dashboards built against the data, reproducibility, and insights uncovered within the data. Join us to hear how Databricks’ open and collaborative platform simplifies data science by enabling you to run all types of analytics workloads, from data preparation to exploratory analysis and predictive analytics, at scale — all on one unified platform.
Why APM Is Not the Same As ML Monitoring (Databricks)
Application performance monitoring (APM) has become the cornerstone of software engineering allowing engineering teams to quickly identify and remedy production issues. However, as the world moves to intelligent software applications that are built using machine learning, traditional APM quickly becomes insufficient to identify and remedy production issues encountered in these modern software applications.
As a lead software engineer at NewRelic, my team built high-performance monitoring systems including Insights, Mobile, and SixthSense. As I transitioned to building ML Monitoring software, I found the architectural principles and design choices underlying APM to not be a good fit for this brand new world. In fact, blindly following APM designs led us down paths that would have been better left unexplored.
In this talk, I draw upon my (and my team’s) experience building an ML Monitoring system from the ground up and deploying it on customer workloads running large-scale ML training with Spark as well as real-time inference systems. I will highlight how the key principles and architectural choices of APM don’t apply to ML monitoring. You’ll learn why, understand what ML Monitoring can successfully borrow from APM, and hear what is required to build a scalable, robust ML Monitoring architecture.
The Function, the Context, and the Data—Enabling ML Ops at Stitch Fix (Databricks)
Autonomy and ownership are core to working at Stitch Fix, particularly on the Algorithms team. We enable data scientists to deploy and operate their models independently, with minimal need for handoffs or gatekeeping. By writing a simple function and calling out to an intuitive API, data scientists can harness a suite of platform-provided tooling meant to make ML operations easy. In this talk, we will dive into the abstractions the Data Platform team has built to enable this. We will go over the interface data scientists use to specify a model and what that hooks into, including online deployment, batch execution on Spark, and metrics tracking and visualization.
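The talk does not publish its API, but the "write a simple function, get platform tooling" idea could look roughly like the following hypothetical interface (every name here is invented for illustration and is not Stitch Fix's actual API):

```python
# Hypothetical illustration only -- not Stitch Fix's actual API.
import functools
from typing import Callable, Dict

REGISTRY: Dict[str, Callable] = {}

def model(name: str, batch: bool = False):
    """Register a plain function so platform tooling can deploy and track it."""
    def decorate(fn):
        # A real platform would wire up online serving, Spark batch execution,
        # and metrics tracking here; we only record the function.
        REGISTRY[name] = fn
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            return fn(*args, **kwargs)
        return wrapper
    return decorate

@model(name="demand_forecast", batch=True)
def predict(features: dict) -> float:
    return 0.5 * features["trailing_demand"]  # the data scientist's logic stays simple
```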
Stage Level Scheduling Improving Big Data and AI Integration (Databricks)
In this talk, I will dive into the stage-level scheduling feature added to Apache Spark 3.1. Stage-level scheduling extends Project Hydrogen by improving big data ETL and AI integration, and it also enables multiple other use cases. It is beneficial any time the user wants to change container resources between stages in a single Apache Spark application, whether those resources are CPU, memory, or GPUs. One of the most popular use cases is enabling end-to-end scalable deep learning and AI to efficiently use GPU resources. In this type of use case, users read from a distributed file system, do data manipulation and filtering to get the data into a format that the deep learning algorithm needs for training or inference, and then feed the data into a deep learning algorithm. Using stage-level scheduling combined with accelerator-aware scheduling enables users to seamlessly go from ETL to deep learning running on the GPU by adjusting the container requirements for different stages in Spark within the same application. This makes writing these applications easier and can help with hardware utilization and costs.
There are other ETL use cases where users want to change CPU and memory resources between stages, for instance when there is data skew or when the data size is much larger in certain stages of the application. In this talk, I will go over the feature details, cluster requirements, the API, and use cases. I will demo how the stage-level scheduling API can be used by Horovod to seamlessly go from data preparation to training using the TensorFlow Keras API on GPUs.
The talk will also touch on other new Apache Spark 3.1 functionality, such as pluggable caching, which can be used to enable faster dataframe access when operating from GPUs.
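The API in question, sketched in PySpark (Spark 3.1+; resource amounts and the discovery script path are illustrative, and switching executor profiles generally requires dynamic allocation on YARN or Kubernetes):

```python
from pyspark.sql import SparkSession
from pyspark.resource import (
    ExecutorResourceRequests, TaskResourceRequests, ResourceProfileBuilder
)

spark = SparkSession.builder.appName("stage-level-demo").getOrCreate()

# ETL stages run on default executors; the training stage asks for GPU executors
gpu_execs = (ExecutorResourceRequests().cores(4).memory("16g")
             .resource("gpu", 1, discoveryScript="/opt/getGpus.sh"))
gpu_tasks = TaskResourceRequests().cpus(1).resource("gpu", 1)
profile = ResourceProfileBuilder().require(gpu_execs).require(gpu_tasks).build

etl_rdd = spark.sparkContext.parallelize(range(1000)).map(lambda x: x * 2)
# Only stages computing this RDD are scheduled with the GPU resource profile
training_rdd = etl_rdd.withResources(profile)
```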
Simplify Data Conversion from Spark to TensorFlow and PyTorch (Databricks)
In this talk, I would like to introduce an open-source tool built by our team that simplifies the data conversion from Apache Spark to deep learning frameworks.
Imagine you have a large dataset, say 20 GB, and you want to use it to train a TensorFlow model. Before feeding the data to the model, you need to clean and preprocess it using Spark. Now you have your dataset in a Spark DataFrame. When it comes to the training part, you may face the problem: how can I convert my Spark DataFrame into a format recognized by my TensorFlow model?
The existing data conversion process can be tedious. For example, to convert an Apache Spark DataFrame to a TensorFlow Dataset file format, you need to either save the Apache Spark DataFrame on a distributed filesystem in Parquet format and load the converted data with third-party tools such as Petastorm, or save it directly in TFRecord files with spark-tensorflow-connector and load it back using TFRecordDataset. Both approaches take more than 20 lines of code to manage the intermediate data files, rely on different parsing syntax, and require extra attention when handling vector columns in the Spark DataFrames. In short, all these engineering frictions greatly reduce data scientists' productivity.
The Databricks Machine Learning team contributed a new Spark Dataset Converter API to Petastorm to simplify these tedious data conversion process steps. With the new API, it takes a few lines of code to convert a Spark DataFrame to a TensorFlow Dataset or a PyTorch DataLoader with default parameters.
In the talk, I will use an example to show how to use the Spark Dataset Converter to train a TensorFlow model and how simple it is to go from single-node training to distributed training on Databricks.
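The converter API from the talk, sketched with Petastorm's documented entry points (the cache path and toy DataFrame are placeholders):

```python
from petastorm.spark import SparkDatasetConverter, make_spark_converter
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("converter-demo").getOrCreate()
spark.conf.set(SparkDatasetConverter.PARENT_CACHE_DIR_URL_CONF,
               "file:///tmp/petastorm_cache")

df = spark.range(0, 1000).withColumnRenamed("id", "feature")
converter = make_spark_converter(df)   # materializes df once, cached as Parquet

with converter.make_tf_dataset(batch_size=32) as dataset:
    # dataset is a tf.data.Dataset of namedtuple batches, ready for model.fit
    for batch in dataset.take(1):
        print(batch)
```

A `make_torch_dataloader` counterpart covers the PyTorch side with the same default-parameter simplicity.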
Scaling and Unifying SciKit Learn and Apache Spark Pipelines (Databricks)
Pipelines have become ubiquitous, as the need for stringing multiple functions to compose applications has gained adoption and popularity. Common pipeline abstractions such as “fit” and “transform” are even shared across divergent platforms such as Python Scikit-Learn and Apache Spark.
Scaling pipelines at the level of simple functions is desirable for many AI applications; however, it is not directly supported by Ray's parallelism primitives. In this talk, Raghu will describe a pipeline abstraction that takes advantage of Ray's compute model to efficiently scale arbitrarily complex pipeline workflows. He will demonstrate how this abstraction cleanly unifies pipeline workflows across multiple platforms such as Scikit-Learn and Spark, and achieves nearly optimal scale-out parallelism on pipelined computations.
Attendees will learn how pipelined workflows can be mapped to Ray’s compute model and how they can both unify and accelerate their pipelines with Ray.
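The core idea, stripped down: express each pipeline stage as a Ray task so independent branches fan out in parallel (the stage functions are toy stand-ins for real fit/transform steps):

```python
import ray

ray.init()

@ray.remote
def fit_transform(data, offset):
    """Stand-in for one pipeline stage's fit/transform work."""
    return [x + offset for x in data]

@ray.remote
def combine(a, b):
    """Stand-in for a stage that joins two upstream branches."""
    return [x + y for x, y in zip(a, b)]

data = list(range(10))
# Two independent branches of the pipeline run concurrently as Ray tasks
branch_a = fit_transform.remote(data, 1)
branch_b = fit_transform.remote(data, 100)

print(ray.get(combine.remote(branch_a, branch_b)))
```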
Sawtooth Windows for Feature Aggregations (Databricks)
In this talk about Zipline, we will introduce a new type of windowing construct called a sawtooth window. We will describe various properties of sawtooth windows that we utilize to achieve online-offline consistency while still maintaining high throughput, low read latency, and tunable write latency for serving machine learning features. We will also talk about a simple deployment strategy for correcting feature drift caused by operations that are not abelian groups operating over change data.
We want to present multiple anti-patterns that utilize Redis in unconventional ways to get the maximum out of Apache Spark. All examples presented are tried and tested in production at scale at Adobe. The most common integration is spark-redis, which interfaces with Redis as a DataFrame backing store or as an upstream for Structured Streaming. We deviate from the common use cases to explore where Redis can plug gaps while scaling out high-throughput applications in Spark.
Niche 1: Long Running Spark Batch Job – Dispatch New Jobs by polling a Redis Queue
· Why?
o Custom queries on top of a table; we load the data once and query N times
· Why not Structured Streaming
· Working Solution using Redis
Niche 2: Distributed Counters
· Problems with Spark Accumulators
· Utilize Redis Hashes as distributed counters (see the sketch after this list)
· Precautions for retries and speculative execution
· Pipelining to improve performance
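A minimal redis-py version of Niche 2, with pipelining (in practice this would run inside Spark tasks via foreachPartition; connection details and key names are placeholders):

```python
import redis

r = redis.Redis(host="localhost", port=6379)

def count_partition(records):
    """Increment per-key counters for one partition in a single round trip."""
    pipe = r.pipeline(transaction=False)   # pipelining amortizes network latency
    for key in records:
        pipe.hincrby("event_counts", key, 1)
    pipe.execute()

# HINCRBY is atomic, but retried or speculatively executed tasks re-run the same
# partition, so production code needs an idempotence guard (e.g. a per-attempt marker).
count_partition(["click", "view", "click"])
```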
Re-imagine Data Monitoring with whylogs and Spark (Databricks)
In the era of microservices, decentralized ML architectures and complex data pipelines, data quality has become a bigger challenge than ever. When data is involved in complex business processes and decisions, bad data can, and will, affect the bottom line. As a result, ensuring data quality across the entire ML pipeline is both costly, and cumbersome while data monitoring is often fragmented and performed ad hoc. To address these challenges, we built whylogs, an open source standard for data logging. It is a lightweight data profiling library that enables end-to-end data profiling across the entire software stack. The library implements a language and platform agnostic approach to data quality and data monitoring. It can work with different modes of data operations, including streaming, batch and IoT data.
In this talk, we will provide an overview of the whylogs architecture, including its lightweight statistical data collection approach and various integrations. We will demonstrate how the whylogs integration with Apache Spark achieves large scale data profiling, and we will show how users can apply this integration into existing data and ML pipelines.
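As a small taste of the library, here is a sketch against the open-source whylogs Python API with a made-up DataFrame; the Spark integration shown in the talk produces the same kind of profiles at cluster scale.

import pandas as pd
import whylogs as why

df = pd.DataFrame({"age": [34, 51, 29], "length_of_stay": [2, 9, 4]})

# Profile the batch: lightweight statistical summaries per column.
results = why.log(df)
profile_view = results.view()

# Inspect the collected metrics (counts, types, distribution sketches).
print(profile_view.to_pandas())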
Processing Large Datasets for ADAS Applications using Apache Spark (Databricks)
Semantic segmentation is the classification of every pixel in an image/video. The segmentation partitions a digital image into multiple objects to simplify/change the representation of the image into something that is more meaningful and easier to analyze [1][2]. The technique has a wide variety of applications ranging from perception in autonomous driving scenarios to cancer cell segmentation for medical diagnosis.
Exponential growth in the datasets that require such segmentation is driven by improvements in the accuracy and quality of the sensors generating the data extending to 3D point cloud data. This growth is further compounded by exponential advances in cloud technologies enabling the storage and compute available for such applications. The need for semantically segmented datasets is a key requirement to improve the accuracy of inference engines that are built upon them.
Improving the accuracy and efficiency of these systems directly affects the business value for organizations that are developing such functionality as part of their AI strategy.
This presentation details workflows for labeling, preprocessing, modeling, and evaluating performance/accuracy. Scientists and engineers leverage domain-specific features/tools that support the entire workflow from labeling the ground truth, handling data from a wide variety of sources/formats, developing models and finally deploying these models. Users can scale their deployments optimally on GPU-based cloud infrastructure to build accelerated training and inference pipelines while working with big datasets. These environments are optimized for engineers to develop such functionality with ease and then scale against large datasets with Spark-based clusters on the cloud.
Machine Learning CI/CD for Email Attack Detection (Databricks)
Detecting advanced email attacks at scale is a challenging ML problem, particularly due to the rarity of attacks, the adversarial nature of the problem, and the scale of data. In order to move quickly and adapt to the newest threats, we needed to build a Continuous Integration / Continuous Delivery pipeline for the entire ML detection stack. Our goal is to enable detection engineers and data scientists to make changes to any part of the stack, including joined datasets for hydration, feature extraction code, and detection logic, and to develop and train ML models.
In this talk, we discuss why we decided to build this pipeline, how it is used to accelerate development and ensure quality, and dive into the nitty-gritty details of building such a system on top of an Apache Spark + Databricks stack.
Jeeves Grows Up: An AI Chatbot for Performance and Quality (Databricks)
Sarah: The CEO-Finance-Report pipeline seems to be slow today. Why?
Jeeves: SparkSQL query dbt_fin_model in CEO-Finance-Report is running 53% slower on 2/28/2021. Data skew issue detected. Issue has not been seen in last 90 days.
Jeeves: Adding 5 more nodes to cluster recommended for CEO-Finance-Report to finish in its 99th percentile time of 5.2 hours.
Who is Jeeves? An experienced Spark developer? A seasoned administrator? No, Jeeves is a chatbot created to simplify data operations management for enterprise Spark clusters. This chatbot is powered by advanced AI algorithms and an intuitive conversational interface that together provide answers to get users in and out of problems quickly. Instead of being stuck to screens displaying logs and metrics, users can now have a more refreshing experience via a two-way conversation with their own personal Spark expert.
We presented Jeeves at Spark Summit 2019. In the two years since, Jeeves has grown up a lot. Jeeves can now learn continuously as telemetry information streams in from more and more applications, especially SQL queries. Jeeves now “knows” about data pipelines that have many components. Jeeves can also answer questions about data quality in addition to performance, cost, failures, and SLAs. For example:
Tom: I am not seeing any data for today in my Campaign Metrics Dashboard.
Jeeves: 3/5 validations failed on the cmp_kpis table on 2/28/2021. Run of pipeline cmp_incremental_daily failed on 2/28/2021.
This talk will give an overview of the newer capabilities of the chatbot, and how it now fits in a modern data stack with the emergence of new data roles like analytics engineers and machine learning engineers. You will learn how to build chatbots that tackle your complex data operations challenges.
Intuitive & Scalable Hyperparameter Tuning with Apache Spark + Fugue (Databricks)
Hyperparameter tuning is critical in model development, and its general form, parameter tuning against an objective function, is also widely used in industry. Apache Spark can handle massive parallelism, and Apache Spark ML is a solid machine learning solution. But we have not seen a general and intuitive distributed parameter-tuning solution based on Apache Spark. Why?
- Not every tuning problem involves Apache Spark ML models. How can Apache Spark handle general models?
- Not every tuning problem is a parallelizable grid or random search. Bayesian optimization is sequential; how can Apache Spark help in this case?
- Not every tuning problem is single-epoch; deep learning is not. How do algorithms such as Hyperband and ASHA fit into Apache Spark?
- Not every tuning problem is a machine learning problem; for example, simulation + tuning is also common. How do we generalize?
In this talk, we are going to show how using Fugue-Tune and Apache Spark together can eliminate these pain points.
- Fugue-Tune, like Fugue, is a “super framework”: an abstraction layer unifying existing solutions such as Hyperopt and Optuna.
- It first models the general tuning problem, independent of machine learning.
- It is designed for both small- and large-scale problems, and it can always fully parallelize the distributable part of a tuning problem.
- It works for both classical and deep learning models. With Fugue, running Hyperband and ASHA becomes possible on Apache Spark.
In the demo, you will see how to do any type of tuning in a consistent, intuitive, scalable, and minimal way, along with a live demonstration of the performance.
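Fugue-Tune layers on top of backends like Hyperopt; as a point of reference, distributed tuning with plain Hyperopt and SparkTrials looks like the sketch below (a toy objective, not the talk's demo).

from hyperopt import fmin, tpe, hp, SparkTrials

def objective(params):
    # Toy objective: minimize a quadratic; a real one would train a model.
    x = params["x"]
    return (x - 3.0) ** 2

search_space = {"x": hp.uniform("x", -10, 10)}

# SparkTrials fans the evaluations out over the cluster's executors.
trials = SparkTrials(parallelism=8)
best = fmin(fn=objective, space=search_space, algo=tpe.suggest,
            max_evals=64, trials=trials)
print(best)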
Opendatabay: Open Data Marketplace (Opendatabay)
Opendatabay.com unlocks the power of data for everyone. The Open Data Marketplace fosters a collaborative hub for data enthusiasts to explore, share, and contribute to a vast collection of datasets.
It is the first open hub for data enthusiasts to collaborate and innovate: a platform to explore, share, and contribute to a vast collection of datasets. Through robust quality control and innovative technologies like blockchain verification, Opendatabay ensures the authenticity and reliability of datasets, empowering users to make data-driven decisions with confidence. It leverages cutting-edge AI technologies to enhance the data exploration, analysis, and discovery experience.
From intelligent search and recommendations to automated data productisation and quotation, Opendatabay's AI-driven features streamline the data workflow. Finding the data you need shouldn't be complex: Opendatabay simplifies the data acquisition process with an intuitive interface and robust search tools. Effortlessly explore, discover, and access the data you need, allowing you to focus on extracting valuable insights. Opendatabay also breaks new ground with dedicated, AI-generated synthetic datasets.
Leverage these privacy-preserving datasets for training and testing AI models without compromising sensitive information. Opendatabay prioritizes transparency by providing detailed metadata, provenance information, and usage guidelines for each dataset, ensuring users have a comprehensive understanding of the data they're working with. By leveraging a powerful combination of distributed ledger technology and rigorous third-party audits, Opendatabay ensures the authenticity and reliability of every dataset. Security is at the core of Opendatabay: the marketplace implements stringent security measures, including encryption, access controls, and regular vulnerability assessments, to safeguard your data and protect your privacy.
Levelwise PageRank with Loop-Based Dead End Handling Strategy: SHORT REPORT ... (Subhajit Sahu)
Abstract — Levelwise PageRank is an alternative method of PageRank computation that decomposes the input graph into a directed acyclic block-graph of strongly connected components and processes them in topological order, one level at a time. This enables the calculation of ranks in a distributed fashion without per-iteration communication, unlike the standard method where all vertices are processed in each iteration. It comes, however, with a precondition: the absence of dead ends in the input graph. Here, the native non-distributed performance of Levelwise PageRank was compared against Monolithic PageRank on a CPU as well as a GPU. To ensure a fair comparison, Monolithic PageRank was also performed on a graph where vertices were split by components. Results indicate that Levelwise PageRank is about as fast as Monolithic PageRank on the CPU, but quite a bit slower on the GPU. The slowdown on the GPU is likely caused by a large number of small workload submissions and is expected to be a non-issue when the computation is performed on massive graphs.
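A minimal sketch of the levelwise idea (not the report's implementation): condense the graph into its DAG of strongly connected components, then finalize ranks one component at a time in topological order, treating contributions from already-finalized upstream vertices as constants. It assumes the stated precondition that the graph has no dead ends.

import networkx as nx

def levelwise_pagerank(G, d=0.85, tol=1e-10, max_iter=100):
    N = G.number_of_nodes()
    base = (1 - d) / N
    rank = {v: 1.0 / N for v in G}
    out = dict(G.out_degree())          # precondition: no dead ends (out > 0)
    C = nx.condensation(G)              # DAG of strongly connected components
    for cid in nx.topological_sort(C):
        members = set(C.nodes[cid]["members"])
        # Upstream vertices are already final, so their contribution is fixed.
        ext = {v: sum(rank[u] / out[u]
                      for u in G.predecessors(v) if u not in members)
               for v in members}
        for _ in range(max_iter):       # iterate only within this component
            new = {v: base + d * (ext[v] + sum(rank[u] / out[u]
                          for u in G.predecessors(v) if u in members))
                   for v in members}
            err = sum(abs(new[v] - rank[v]) for v in members)
            rank.update(new)
            if err < tol * len(members):
                break
    return rank

# Tiny dead-end-free example: two components, {1,2,3} and {4,5}.
print(levelwise_pagerank(nx.DiGraph([(1, 2), (2, 3), (3, 1),
                                     (3, 4), (4, 5), (5, 4)])))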
3. [Figure: roles across the ML lifecycle. The Data Scientist covers data exploration/preparation, data selection/transformation, and model training; the Analyst/Developer covers model scoring and model deployment.]
Use case: length-of-stay in hospital
Model: “Predict length of stay of a patient in the hospital”
Prediction query: “Find pregnant patients that are expected to stay in the hospital more than a week”
4. Prediction Queries: Baseline Approach

[Figure: the app logic in a web server calls a featurization + model container over REST/HTTP, governed by policies, and reaches the DBMS over ODBC.]

Enterprise features:
- Security: data and models sit outside of the DB
- Extra infrastructure
- High TCO
- Lack of tooling/best practices

Performance:
- Data movement
- Latency
- Throughput on batch scoring
5. Prediction Queries: In-Engine Evaluation

[Figure: the app logic in a web server talks to the DBMS over ODBC; the model is evaluated inside the engine.]

Enterprise features:
- Security: data and models stay within the DBMS
- Reuse of existing infrastructure
- Language/tools/best practices
- Low TCO

Performance?
- Up to 13x faster on Spark
- Up to 330x faster on SQL Server
6. Raven: An Optimizer for Prediction Queries in Azure Data

[Figure: data and models flow into Raven, which captures both in a Unified IR.]

M: model pipeline (Data Scientist)

INSERT INTO model (name, model) AS
("duration_of_stay",
"from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.tree import DecisionTreeClassifier
from …
model_pipeline =
    Pipeline([('union', FeatureUnion(…
               ('scaler', StandardScaler()), …))
              ('clf', DecisionTreeClassifier())])");

Q: SQL query invoking the model (Data Analyst)

DECLARE @model varbinary(max) = (
    SELECT model FROM scoring_models
    WHERE model_name = 'duration_of_stay');
WITH data AS (
    SELECT *
    FROM patient_info AS pi
    JOIN blood_tests AS be ON pi.id = be.id
    JOIN prenatal_tests AS pt ON be.id = pt.id
)
SELECT d.id, p.length_of_stay
FROM PREDICT(MODEL = @model, DATA = data AS d)
     WITH(length_of_stay Pred float) AS p
WHERE d.pregnant = 1 AND p.length_of_stay > 7;

[Figure: Unified IR for MQ. Scans of patient_info, blood_tests, and prenatal_tests are joined and concatenated; a FeatureExtractor feeds Categorical Encoding and Rescaling, whose outputs flow into the DecisionTreeClassifier (branching on pregnant, gender, age, bp, ...), with the query's selection σ pregnant = 1 on top.]

[Figure: Optimized plan for MQ. Static analysis plus cross optimization replace the tree with a SQL-inlined model: projections over the three scans, selections σ pregnant = 1, σ age <= 35 / σ age > 35, σ bp > 140, and σ length_of_stay >= 7, a union, and constant leaf predictions (2, 4, 7), e.g.
switch:
    case (bp > 140): 7
    case (120 < bp < 140): 4
    case (bp < 120): 2
An alternative plan translates the model to a NeuralNet; runtime code gen then executes the optimized IR.]
Two takeaways: express data and ML operations in a common graph, and embed high-performance ML inference runtimes within our data engines.
7. Constructing the IR

Raven IR operators:
- Relational algebra
- Linear algebra
- Other ML operators and data featurizers
- UDFs

Static analysis of the prediction query:
- Support for SQL+ML
- Adding support for Python

(A toy illustration of such a unified graph follows.)
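To make the "single graph" idea concrete, here is a toy, hypothetical rendering of a unified IR node in Python; it illustrates the concept only and is not Raven's actual data structure.

from dataclasses import dataclass, field
from typing import List

@dataclass
class Op:
    kind: str                      # e.g. "scan", "join", "filter", "tree"
    args: dict = field(default_factory=dict)
    inputs: List["Op"] = field(default_factory=list)

# Relational and ML operators share one graph, so optimization rules can move
# information across the boundary (e.g. push σ pregnant = 1 below scoring).
scan   = Op("scan",    {"table": "patient_info"})
filt   = Op("filter",  {"pred": "pregnant = 1"}, [scan])
encode = Op("one_hot", {"col": "gender"}, [filt])
score  = Op("tree",    {"model": "duration_of_stay"}, [encode])
print(score)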
8. ML Inference in Azure Data Engines

SQL Server:
- PREDICT statement in SQL Server
- Embedded ONNX Runtime in the engine
- Available in Azure SQL Edge and SQL DW (part of Azure Synapse Analytics)

Spark:
- Introduced a new PREDICT operator
- Similar syntax to SQL Server
- Support for different types of models
9. Raven: An Optimizer for Prediction Queries

Q: “Find pregnant patients expected to stay in the hospital more than a week”

[Figure: the end-to-end flow from slide 6, repeated: M and Q pass through static analysis into the Unified IR for MQ; cross optimization produces the optimized plan (SQL-inlined model or NeuralNet); runtime code gen executes it.]
10. Raven optimizations in practice
1. Predicate-based model pruning
2. Model projection pushdown
3. Model splitting
4. Model-to-SQL translation
5. NN translation
6. Standard DB optimizations
7. Compiler optimizations
11. Raven optimizations: Key Ideas

1. Avoid unnecessary computation: information passing between the model and the data.
2. Pick the right runtime for each operation: translation between data and ML operations (a sketch follows below).
3. Hardware acceleration: translation to tensor computations (Hummingbird).
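To illustrate idea 2, here is a minimal, hypothetical sketch of model-to-SQL translation in the spirit of the switch/case plan shown earlier: it walks a fitted scikit-learn decision tree and emits an equivalent SQL CASE expression. It is an illustration only, not Raven's code generator.

from sklearn.tree import DecisionTreeClassifier

def tree_to_sql(clf, feature_names):
    # Emit a SQL CASE expression equivalent to a fitted decision tree.
    t = clf.tree_

    def walk(node):
        if t.children_left[node] == -1:          # leaf: majority class
            return str(clf.classes_[t.value[node].argmax()])
        col = feature_names[t.feature[node]]
        return ("CASE WHEN %s <= %.6g THEN %s ELSE %s END"
                % (col, t.threshold[node],
                   walk(t.children_left[node]),
                   walk(t.children_right[node])))

    return walk(0)

# Toy data: blood pressure vs. a made-up length-of-stay label.
X = [[110], [130], [150], [125], [145]]
y = [2, 4, 7, 4, 7]
clf = DecisionTreeClassifier(max_depth=2).fit(X, y)
print("SELECT", tree_to_sql(clf, ["bp"]), "AS length_of_stay FROM data")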
12. Performance Evaluation: Raven in Spark (HDI)

[Chart: end-to-end inference query time (seconds, 0-5,000) for SparkML, Sklearn, ONNX Runtime, and Raven across DT-depth5, DT-depth8, LR-.001, and GB-20est, on Hospital (2 billion rows) and Expedia (500 million rows).]

Queries:
SELECT PREDICT(model, col1, …)
FROM Hospital

SELECT PREDICT(model, S.col1, …)
FROM listings S, hotels R1, searches R2
WHERE S.prop_id = R1.prop_id AND S.srch_id = R2.srch_id

Best of Raven:
- Decision Trees (DT) and Logistic Regression (LR): model projection pushdown + ML-to-SQL
- Gradient Boost (GB): model projection pushdown

Takeaway: Raven outperforms the other ML runtimes (SparkML, Sklearn, ONNX Runtime) by up to ~44x.
13. Performance Evaluation: Raven in Spark with GPU

[Chart: end-to-end inference query time (seconds, 0-3,500) for ONNX Runtime, Raven-CPU, and Raven-GPU on Gradient Boost models (Hospital, 200M rows) at 60Est/Dep5, 100Est/Dep4, 100Est/Dep8, and 500Est/Dep8.]

Query:
SELECT PREDICT(model, col1, …) FROM Hospital

Takeaway: Raven + GPU outperforms ONNX Runtime by up to ~8x for complex models.
14. Performance Evaluation: Raven Plans in SQL Server

[Chart: end-to-end time (seconds, log scale, 1-100,000) for MADlib, SQL Server (DOP1), Raven (DOP1), SQL Server (DOP16), and Raven (DOP16) across DT-depth5, DT-depth8, LR-.001, and GB/RF-20est, on hospital (100M rows) and expedia (100M rows); best-case wins of ~230x and ~100x are annotated.]

Best of Raven:
- Decision Trees (DT) and Logistic Regression (LR): model projection pushdown + ML-to-SQL
- Gradient Boost (GB): model projection pushdown

Takeaway: the potential gains with Raven in SQL Server are significantly large.
15. Performance Evaluation: Raven in SQL Server with GPU

[Chart: end-to-end time (seconds, 0-1,400) for Min. CPU-SKL versus GPU-HB on Gradient Boost models (hospital, 100M rows) at depth3-20est, depth5-60est, depth4-100est, depth8-100est, and depth8-500est; annotated speedups range from ~2.6x to ~100x.]

Batch size:
- CPU: minimum query time obtained with the optimal choice of batch size (50K/100K rows)
- GPU: 600K rows

Takeaway: the potential gains with Raven and GPU acceleration are significantly large.
17. Conclusion: in-DBMS model inference

- Raven is the first step in a long journey of incorporating ML inference as a foundational extension of relational algebra and an integral part of SQL query optimizers and runtimes
- A novel Raven optimizer with cross optimizations and operator transformations:
  - Up to 13x performance improvements on Spark
  - Up to 330x performance improvements on SQL Server
- Integration of Raven within Spark and SQL Server
20. Current state of affairs: in-application model inference

Use case: hospital length-of-stay. “Find pregnant patients that are expected to stay in the hospital more than a week.”

[Figure: the application scores the model outside the DBMS.]

Security:
- Data leaves the DB
- Model lives outside of the DB

Performance:
- Data movement
- Use of Python for data operations
21. Raven: In-DBMS model inference
Inference query: SQL + PREDICT (SQL Server 2017 syntax) to combine SQL operations with ML inference; the M and Q shown on slide 6 are statically analyzed as one query MQ.

[Figure: Raven sits inside the DBMS; model and data stay in the engine, and the application submits SQL + ML.]
22. Raven: In-DB model inference

Security:
- Data and models stay within the DB
- Treat models as data

User experience:
- Leverage the maturity of the RDBMS
- Connectivity, tool integration

Can in-DBMS ML inference match (or exceed) the performance of state-of-the-art ML frameworks? Yes, by up to 230x!
23. Cross-optimizations in practice
Cross-IR optimizations and operator transformations:
- Predicate-based model pruning
- Model projection pushdown
- Model splitting
- Model inlining
- NN translation
- Standard DB optimizations
- Compiler optimizations
24. Raven overview
Key ideas:
1. Novel cross-optimizations between SQL and ML operations
2. Combine high-performance ML inference engines with SQL Server
26. Execution modes

- In-process: deep integration of ONNX Runtime in SQL Server.
- Out-of-process: for queries/models not supported by our static analyzer, via sp_execute_external_script (Python, R, Java).
- Containerized: for languages not supported by out-of-process execution.
27. In-process execution

Native PREDICT: execute the model in the same process as SQL Server.
- Rudimentary support since SQL Server 2017 (five hardcoded models)
- Takes advantage of state-of-the-art ML inference engines: compiler optimizations, code generation, hardware acceleration
- SQL Server + ONNX Runtime

Some challenges (see the sketch below):
- Align schemata between the DB and the model
- Transform data to/from tensors (avoid copying)
- Cache inference sessions
- Allow for different ML engines
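For a feel of what the engine drives internally, here is a minimal standalone ONNX Runtime scoring loop in Python; it is an illustration only (SQL Server's integration is native and, per the challenges above, avoids the copies this sketch makes). The model path, input name, and shapes are placeholders.

import numpy as np
import onnxruntime as ort

# Cache the inference session: creating it is expensive, running it is cheap.
sess = ort.InferenceSession("duration_of_stay.onnx")   # placeholder path
input_name = sess.get_inputs()[0].name

def score(batch):
    # Row data must first be laid out as a dense float32 tensor.
    feed = {input_name: batch.astype(np.float32)}
    return sess.run(None, feed)[0]

print(score(np.random.rand(64, 10)))   # 64 rows x 10 features (made up)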
28. Current status

In-process predictions:
- Implementation in SQL Server 2019
- Public preview in Azure SQL DB Edge
- Private preview in Azure SQL DW

Out-of-process predictions:
- ONNX Runtime as an external language (ongoing)
29. Benefits of deep integration

[Chart: total inference time (ms, log scale, 1-10k) versus dataset size (1K-10M rows) for a Random Forest and an MLP, comparing ORT, Raven, and Raven ext.]