Stage Level Scheduling Improving Big Data and AI Integration

Thomas Graves
Software Engineer at NVIDIA
Spark PMC
Agenda
§ Resource Scheduling
§ Stage Level Scheduling
§ Use Case Example
§ Demo
Resource Scheduling On Spark
Resource Scheduling
• Driver
• Cores
• Memory
• Accelerators (GPU/FPGA/etc)
• Executors
• Cores
• Memory (overhead, pyspark, heap, offheap)
• Accelerators (GPU/FPGA/etc)
• Tasks (requirements)
• CPUs
• Accelerators (GPU/FPGA/etc)
Resource Scheduling
• Tasks Per Executor
• Executor Resources / Task Requirements
• Configs
spark.driver.cores=1
spark.executor.cores=4
spark.task.cpus=1
spark.driver.memory=4g
spark.executor.memory=4g
spark.executor.memoryOverhead=2g
spark.driver.resource.gpu.amount=1
spark.driver.resource.gpu.discoveryScript=./getGpuResources.sh
spark.executor.resource.gpu.amount=1
spark.executor.resource.gpu.discoveryScript=./getGpuResources.sh
spark.task.resource.gpu.amount=0.25
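As a sketch, the same settings can be supplied at submit time; the cluster manager and application jar below are placeholders:
spark-submit \
  --master yarn \
  --conf spark.driver.cores=1 \
  --conf spark.driver.memory=4g \
  --conf spark.executor.cores=4 \
  --conf spark.executor.memory=4g \
  --conf spark.executor.memoryOverhead=2g \
  --conf spark.executor.resource.gpu.amount=1 \
  --conf spark.executor.resource.gpu.discoveryScript=./getGpuResources.sh \
  --conf spark.task.cpus=1 \
  --conf spark.task.resource.gpu.amount=0.25 \
  my-app.jar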
Stage Level Scheduling
Overview
[Diagram: a Spark ETL stage running on CPU-only nodes feeding a Spark ML stage running on a node with a GPU]
Stage Level Scheduling
• Stage level resource scheduling (SPARK-27495)
• Specify resource requirements per RDD operation
• Spark dynamically allocates containers to meet resource requirements
• Spark schedules tasks on appropriate containers
• Benefits
• Hardware utilization and cost
• Ease of programming
• Applications no longer need to split ETL and Deep Learning into separate applications
• Pipeline simplification
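A minimal Scala sketch of the idea, previewing the API covered later in the deck; the input path, resource sizes, and the runDeepLearning function are placeholders:
import org.apache.spark.resource.{ExecutorResourceRequests, ResourceProfileBuilder, TaskResourceRequests}

// ETL runs in the default (CPU) profile
val cleaned = spark.read.parquet("/data/input").filter("value IS NOT NULL")

// Build a ResourceProfile asking for GPU containers for the ML stage
val gpuProfile = new ResourceProfileBuilder()
  .require(new ExecutorResourceRequests().cores(4).memory("16g").resource("gpu", 1, "./getGpuResources.sh"))
  .require(new TaskResourceRequests().cpus(1).resource("gpu", 1))
  .build()

// Only the stage(s) computing this RDD run on the GPU containers
cleaned.rdd.withResources(gpuProfile)
  .mapPartitions(iter => runDeepLearning(iter))  // placeholder ML step returning an iterator
  .collect()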
Use Cases
• Beneficial any time the user wants to change container resources between
stages in a single Spark application
• ETL to Deep Learning
• Skewed data
• Data size large in certain stages
• Jobs that use caching can switch to higher memory containers during those stages (see the sketch below)
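A hedged sketch of the caching case, with illustrative sizes; inputRdd is a placeholder:
import org.apache.spark.resource.{ExecutorResourceRequests, ResourceProfileBuilder, TaskResourceRequests}
import org.apache.spark.storage.StorageLevel

// Ask for higher-memory executors only for the stages that build and reuse the cache
val highMemProfile = new ResourceProfileBuilder()
  .require(new ExecutorResourceRequests().cores(4).memory("32g"))
  .require(new TaskResourceRequests().cpus(1))
  .build()

val cachedRdd = inputRdd.withResources(highMemProfile).persist(StorageLevel.MEMORY_ONLY)
cachedRdd.count()  // materializes the cache on the high-memory containers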
Resources Supported
• Executor Resources
• Cores
• Heap Memory
• OffHeap Memory
• Pyspark Memory
• Memory Overhead
• Additional Resources (GPUs, etc)
• Task Resources
• CPUs
• Additional Resources (GPUs, etc)
Requirements
• Spark 3.1.1
• Dynamic Allocation with External Shuffle Service or Shuffle tracking
enabled
• YARN and Kubernetes
• RDD API only
• Scala, Java, Python
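A minimal config sketch for the dynamic allocation requirement on YARN with the external shuffle service (the Kubernetes shuffle-tracking variant is shown later):
spark.dynamicAllocation.enabled=true
spark.shuffle.service.enabled=true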
Implementation Details
• New container acquired with new ResourceProfile
• Does NOT try to fit into existing container with different ResourceProfile
(Future Enhancement)
• Unused containers idle timeout
• Default to one ResourceProfile per stage
• Config to allow multiple ResourceProfiles per stage
• Multiple profiles will be merged with simple max of each resource
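As a sketch, the config below (name taken from the Spark 3.1 configuration docs) is what allows a stage combining RDDs with different ResourceProfiles; conflicting profiles are then merged using the max of each resource:
spark.scheduler.resource.profileMergeConflicts=true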
YARN Implementation Details
• External Shuffle Service and Dynamic Allocation
• YARN Container Priority – ResourceProfile Id becomes container priority
• YARN lower numbers are higher priority
• This may come into effect in job server type scenarios
• GPU and FPGA predefined, other resources require additional
configurations
• Custom resources via spark.yarn.executor.resource.* only apply to the default profile – they do not propagate to other profiles because there is no way to override them
• Discovery script must be accessible – sent with job submission
Kubernetes Implementation Details
• Requires shuffle tracking enabled (spark.dynamicAllocation.shuffleTracking.enabled)
• Executors may not idle timeout if they have shuffle data on the node
• This can result in more cluster resources being used
• spark.dynamicAllocation.shuffleTracking.timeout controls how long such executors are kept (see the config sketch below)
• Pod Template Behavior
• Resource in Pod Template only used in default profile
• Specify all resources needed in the ResourceProfile
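A minimal config sketch for Kubernetes; the timeout value is illustrative:
spark.dynamicAllocation.enabled=true
spark.dynamicAllocation.shuffleTracking.enabled=true
spark.dynamicAllocation.shuffleTracking.timeout=300s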
UI Screen Shots
--executor-cores 2 --conf spark.executor.resource.gpu.amount=1 --conf spark.task.resource.gpu.amount=0.5
UI Screen Shots
API
> import org.apache.spark.resource.{ExecutorResourceRequests, ResourceProfileBuilder,
TaskResourceRequests}
> val rpb = new ResourceProfileBuilder()
> val ereq = new ExecutorResourceRequests()
> val treq = new TaskResourceRequests()
> ereq.cores(4).memory("6g").memoryOverhead("2g").resource("gpu", 2, "./getGpus")
> treq.cpus(4).resource("gpu", 2)
> rpb.require(ereq)
> rpb.require(treq)
> val rp = rpb.build()
// use the ResourceProfile with the RDD
> val mlRdd = df.rdd.withResources(rp)
> mlRdd.mapPartitions { x =>
// feed data into ML and get result
}.collect()
UI Screen Shots
API
> rpb
Profile executor resources: ArrayBuffer(memoryOverhead=name: memoryOverhead, amount:
2048, script: , vendor: , cores=name: cores, amount: 4, script: , vendor: , memory=name:
memory, amount: 6144, script: , vendor: , gpu=name: gpu, amount: 2, script: ./getGpus,
vendor: ), task resources: ArrayBuffer(cpus=name: cpus, amount: 4.0, gpu=name: gpu,
amount: 2.0)
> mlRdd.getResourceProfile
: org.apache.spark.resource.ResourceProfile = Profile: id = 1, executor resources:
memoryOverhead -> name: memoryOverhead, amount: 2048, script: , vendor: ,cores -> name:
cores, amount: 4, script: , vendor: ,memory -> name: memory, amount: 6144, script: ,
vendor: ,gpu -> name: gpu, amount: 2, script: ./getGpus, vendor: , task resources: cpus
-> name: cpus, amount: 4.0,gpu -> name: gpu, amount: 2.0
API - Mutable vs Immutable
> ereq.cores(2).memory("6g").memoryOverhead("2g").resource("gpu", 2, "./getGpus")
> treq.cpus(1).resource("gpu", 1)
> rpb.require(ereq).require(treq)
> val rp = rpb.build()
> rp
: org.apache.spark.resource.ResourceProfile = Profile: id = 2, executor resources: memoryOverhead ->
name: memoryOverhead, amount: 2048, script: , vendor: ,cores -> name: cores, amount: 2, script: , vendor:
,memory -> name: memory, amount: 6144, script: , vendor: ,gpu -> name: gpu, amount: 2, script: ./getGpus,
vendor: , task resources: cpus -> name: cpus, amount: 1.0,gpu -> name: gpu, amount: 1.0
> treq.cpus(2).resource("gpu", 2)
> rpb.require(treq)
> val rpNew = rpb.build()
> rpNew
: org.apache.spark.resource.ResourceProfile = Profile: id = 3, executor resources: memoryOverhead ->
name: memoryOverhead, amount: 2048, script: , vendor: ,cores -> name: cores, amount: 2, script: , vendor:
,memory -> name: memory, amount: 6144, script: , vendor: ,gpu -> name: gpu, amount: 2, script: ./getGpus,
vendor: , task resources: cpus -> name: cpus, amount: 2.0,gpu -> name: gpu, amount: 2.0
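Because the builder and request objects are mutable, a hedged pattern for keeping profiles independent is to create fresh request objects per profile; the sizes below are illustrative:
import org.apache.spark.resource.{ExecutorResourceRequests, ResourceProfile, ResourceProfileBuilder, TaskResourceRequests}

def buildGpuProfile(execCores: Int, taskCpus: Int): ResourceProfile = {
  // Fresh builder and request objects each call, so later edits cannot leak between profiles
  val rpb = new ResourceProfileBuilder()
  rpb.require(new ExecutorResourceRequests().cores(execCores).memory("6g").resource("gpu", 1, "./getGpus"))
  rpb.require(new TaskResourceRequests().cpus(taskCpus).resource("gpu", 1))
  rpb.build()
}

val etlProfile = buildGpuProfile(execCores = 8, taskCpus = 2)  // gets its own profile id
val mlProfile  = buildGpuProfile(execCores = 4, taskCpus = 4)  // unaffected by the first profile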
Use Case Example
End to End Pipeline
ETL Using Rapids Accelerator For Spark
Rapids Accelerator For Spark
• Run Spark on a GPU to accelerate processing
• combines the power of the RAPIDS cuDF library and the scale of the Spark distributed
computing framework
• Spark SQL and DataFrames
• Requires Spark 3.0+
• No user code changes
• If an operation is not supported, it falls back to running on the CPU as normal
• Built-in accelerated shuffle based on UCX that can be configured to leverage GPU-to-GPU communication and RDMA capabilities
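A hedged sketch of enabling the plugin at submit time; the jar names are placeholders and the exact artifacts and config keys depend on the release, so check the RAPIDS Accelerator documentation:
spark-submit \
  --master yarn \
  --jars rapids-4-spark.jar,cudf.jar \
  --conf spark.plugins=com.nvidia.spark.SQLPlugin \
  --conf spark.rapids.sql.enabled=true \
  --conf spark.executor.resource.gpu.amount=1 \
  --conf spark.task.resource.gpu.amount=0.25 \
  my-etl-app.jar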
ETL Technology Stack
[Diagram: the ETL technology stack – on the Python side, Dask cuDF and cuDF/Pandas sit on Python and Cython; on the JVM side, Spark DataFrames (Scala, PySpark) sit on JNI bindings; both paths are built on the cuDF C++ library, CUDA libraries, and CUDA]
Rapids Accelerator For Apache Spark (Plugin)
[Diagram: the RAPIDS Accelerator for Spark plugs into Apache Spark Core beneath the Spark SQL, DataFrame, and Spark Shuffle APIs used by distributed scale-out Spark applications]
• Spark SQL / DataFrame operations:
if gpu_enabled(operation, data_type)
call-out to RAPIDS
else
execute standard Spark operation
• Custom implementation of Spark Shuffle
• Optimized to use RDMA and GPU-to-GPU direct communication
• JNI bindings map from Java/Scala to the RAPIDS C++ and UCX libraries on top of CUDA
Spark SQL & Dataframe Compilation Flow
[Diagram: a DataFrame or SQL query is compiled to a Logical Plan and then a Physical Plan; the RAPIDS SQL Plugin rewrites the Physical Plan into a GPU Physical Plan, operating on RDD[ColumnarBatch] instead of RDD[InternalRow]]

DataFrame API:
bar.groupBy(
    col("product_id"),
    col("ds"))
  .agg(
    (max(col("price")) - min(col("price"))).alias("range"))

Equivalent SQL:
SELECT product_id, ds,
       max(price) - min(price) AS range
FROM bar
GROUP BY product_id, ds
NDS Query 38 Results
• Entire query is GPU accelerated
• CPU cluster – Driver: 1 x m5dn.large; Workers: 8 x m5dn.2xlarge (64-core, 256GB); on-demand cluster cost (US West): $4.488/hr
• GPU cluster – Driver: 1 x m5dn.large; Workers: 8 x g4dn.2xlarge (64-core, 256GB, 8 x T4 GPU); on-demand cluster cost (US West): $6.152/hr
• Query time: 163.0 secs on CPU vs 53.2 secs on GPU – 3x speed-up
• Total cost: $0.20 on CPU vs $0.09 on GPU – 55% cost saving
Deep Learning
Horovod Introduction
• Distributed Deep learning training framework
• TensorFlow, Keras, PyTorch, Apache MXNet
• High Performance features
• NCCL, GPUDirect, RDMA, tensor fusion
• Easy to use
• Just 5 lines of Python
• Open Source
• Linux Foundation AI Foundation
• Easy to install
• pip install horovod
horovod.ai
Demo
End to End Horovod Demo
Future Enhancements
Future Enhancements
• Collect feedback from users
• Allow setting certain configs – like dynamic allocation
• Fitting new ResourceProfiles into existing containers
• Better cleanup of ResourceProfiles
• Catalyst internally
Other Performance Enhancements
Other Enhancements
• Pluggable Caching
• Allows developers to try different caching solutions
• Custom GPU implementation
Questions
Feedback
Your feedback is important to us.
Don’t forget to rate and review the sessions.