5. Accelerator-Aware Scheduling
▪ SPARK-24615
▪ Request resources
▪ Executor
▪ Driver
▪ Task
▪ Resource discovery
▪ API to determine assignment
▪ Supported on YARN, Kubernetes, and Standalone
GPUs are now a schedulable resource
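On the request side, resources are declared through configuration; a minimal spark-submit sketch (the amounts, script path, and `app.jar` are illustrative):

```shell
# Ask for 1 GPU per executor and 1 GPU per task; point Spark at a discovery
# script so it can learn each node's GPU addresses.
spark-submit \
  --conf spark.executor.resource.gpu.amount=1 \
  --conf spark.task.resource.gpu.amount=1 \
  --conf spark.executor.resource.gpu.discoveryScript=/opt/spark/getGpusResources.sh \
  app.jar
```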
7. GPU Discovery Script Example
#!/bin/bash
#
# Outputs a JSON formatted string that is expected by the
# spark.{driver/executor}.resource.gpu.discoveryScript config.
#
# Example output: {"name": "gpu", "addresses":["0","1","2","3","4","5","6","7"]}
ADDRS=$(nvidia-smi --query-gpu=index --format=csv,noheader \
  | sed -e ':a' -e 'N' -e '$!ba' -e 's/\n/","/g')
echo {\"name\": \"gpu\", \"addresses\":[\"$ADDRS\"]}
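For readers less fluent in sed, the JSON document the script emits can be sketched in Python (the GPU indices here are hard-coded stand-ins for nvidia-smi output):

```python
import json

def discovery_output(indices):
    """Build the JSON string Spark expects from a GPU discovery script."""
    return json.dumps({"name": "gpu", "addresses": [str(i) for i in indices]})

print(discovery_output(range(4)))
# → {"name": "gpu", "addresses": ["0", "1", "2", "3"]}
```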
8. GPU Assignments API
// Task API
val context = TaskContext.get()
val resources = context.resources()
val assignedGpuAddrs = resources("gpu").addresses
// Pass assignedGpuAddrs into TensorFlow or other AI code
// Driver API
scala> sc.resources("gpu").addresses
Array[String] = Array(0)
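One common way to "pass assignedGpuAddrs into TensorFlow or other AI code" is to pin the process to those devices via the standard CUDA_VISIBLE_DEVICES variable; a minimal sketch with a hard-coded assignment (the env-var approach is an assumption, not the only option):

```python
import os

def pin_gpus(assigned_addrs):
    """Make CUDA-based frameworks see only the GPUs Spark assigned this task."""
    os.environ["CUDA_VISIBLE_DEVICES"] = ",".join(assigned_addrs)
    return os.environ["CUDA_VISIBLE_DEVICES"]

# e.g. addresses obtained from TaskContext.get().resources()("gpu").addresses
print(pin_gpus(["0", "1"]))  # → 0,1
```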
11. Stage Level Scheduling
▪ SPARK-27495
▪ Specify resource requirements per RDD operation
▪ Spark dynamically allocates containers to meet resource requirements
▪ Spark schedules tasks on appropriate containers
▪ Coming soon in Spark 3.1
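Spark 3.1 exposes this through ResourceProfile objects attached to RDDs; without a cluster handy, the core idea, matching a stage's requirements against a container's resources, can be sketched as (profile contents are illustrative):

```python
def satisfies(executor_resources, profile):
    """Toy version of the stage-level check: can this container run the stage?"""
    return all(executor_resources.get(r, 0) >= amt for r, amt in profile.items())

etl_profile = {"cpus": 4}               # data preparation: CPU only
train_profile = {"cpus": 4, "gpu": 1}   # model training: needs a GPU

cpu_container = {"cpus": 8}
gpu_container = {"cpus": 8, "gpu": 2}

print(satisfies(cpu_container, train_profile))  # → False
print(satisfies(gpu_container, train_profile))  # → True
```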
12. SQL Columnar Processing
▪ SPARK-27396
▪ Catalyst API for columnar processing
▪ Plugins can modify query plan with columnar operations
▪ Plan nodes can exchange RDD[ColumnarBatch] instead of RDD[Row]
▪ Enables efficient processing by vectorized accelerators
▪ SIMD
▪ FPGA
▪ GPU
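The payoff of exchanging RDD[ColumnarBatch] is data layout: a batch holds each column as one contiguous vector, which is what SIMD units and GPUs consume efficiently. A toy illustration of the two layouts (sample data invented):

```python
# A row layout stores one record per element; a columnar batch stores each
# column contiguously, so an operator can stream a whole column through a
# vector unit without per-row dispatch.
rows = [{"price": 3.0}, {"price": 5.0}, {"price": 2.0}]

columnar_batch = {"price": [3.0, 5.0, 2.0]}  # one contiguous vector per column

row_total = sum(r["price"] for r in rows)    # row-at-a-time
col_total = sum(columnar_batch["price"])     # one pass over a vector
print(row_total == col_total)  # → True
```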
13. Spark 3
Spark 3 with Project Hydrogen
[Diagram, Spark 2.x: data sources feed a Spark-orchestrated, CPU-powered cluster for data preparation; results pass through shared storage to a separate GPU-powered cluster running XGBoost | TensorFlow | PyTorch for model training.]
[Diagram, Spark 3: data sources feed a single Spark-orchestrated, GPU-powered cluster that handles both data preparation (Spark) and model training (XGBoost | TensorFlow | PyTorch).]
14. Spark 3 with Project Hydrogen
▪ Single pipeline
▪ Ingest
▪ Data preparation
▪ Model Training
▪ Infrastructure is consolidated and simplified
▪ ETL can be GPU-accelerated
Enabling end-to-end acceleration
[Diagram, Spark 3: data sources feed a single Spark-orchestrated, GPU-powered cluster running both data preparation (Spark) and model training (XGBoost | TensorFlow | PyTorch).]
23. RAPIDS Accelerator for Apache Spark (Plugin)
[Diagram: distributed scale-out Spark applications run on Apache Spark Core (Spark SQL | DataFrame | Spark Shuffle); the RAPIDS Accelerator for Apache Spark sits underneath, with JNI bindings mapping Java/Scala calls onto the RAPIDS C++ and UCX libraries, which run on CUDA.]
Per-operator planning rule:
if gpu_enabled(op, data_type)
    call-out to RAPIDS
else
    execute standard Spark op
● Custom implementation of Spark Shuffle
● Optimized to use RDMA and GPU-to-GPU direct communication
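The planning rule shown above can be made concrete with a toy sketch: each operator runs on the GPU only when both the op and its data type are supported, and otherwise the standard Spark CPU implementation is kept (the supported set below is illustrative, not the plugin's actual table):

```python
# Hypothetical (op, type) pairs the "plugin" can accelerate.
GPU_OPS = {("sum", "double"), ("max", "double"), ("concat", "string")}

def place_operator(op, data_type):
    """Decide at planning time whether an operator calls out to the GPU."""
    if (op, data_type) in GPU_OPS:
        return "RAPIDS GPU " + op
    return "Spark CPU " + op      # unsupported combination: CPU fallback

print(place_operator("sum", "double"))   # → RAPIDS GPU sum
print(place_operator("sum", "decimal"))  # → Spark CPU sum
```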
26. What We Support
and growing…
!
%
&
*
+
-
/
<
<=
<=>
=
==
>
>=
^
abs
acos
and
asin
atan
avg
bigint
boolean
cast
cbrt
ceil
ceiling
coalesce
concat
cos
cosh
cot
count
cube
current_date
current_timestamp
date
datediff
day
dayofmonth
degrees
double
e
exp
explode*
expm1
first
first_value
float
floor
from_unixtime
hour
if
ifnull
in
initcap
input_file_block_length
input_file_block_start
input_file_name
int
isnan
isnotnull
isnull
last
last_value
length
lcase
like
ln
locate
log
log10
log1p
log2
lower
ltrim
max
mean
min
minute
mod
monotonically_increasing_id
month
nanvl
negative
not
now
nullif
nvl
nvl2
or
pi
posexplode*
position
pow
power
radians
rand*
regexp_replace*
replace
rint
rollup
row_number
rtrim
second
shiftleft
shiftright
shiftrightunsigned
sign
signum
sin
sinh
smallint
spark_partition_id
sqrt
string
substr
substring
sum
tan
tanh
timestamp
tinyint
trim
ucase
upper
when
window
year
|
~
CSV Reading*
Orc Reading
Orc Writing
Parquet Reading
Parquet Writing
ANSI casts
TimeSub for time ranges
startswith
endswith
contains
limit
order by
group by
filter
union
repartition
equi-joins
select
27. Is This a Silver Bullet?
No
▪ Small amounts of data
▪ Few hundred MB per partition for GPU
▪ Cache coherent processing
▪ Data Movement
▪ Slow I/O (networking, disks, etc.)
▪ Going back and forth to the CPU (UDFs)
▪ Shuffle
▪ Limited GPU Memory
[Chart: approximate bandwidth in MB/s, log scale, rising from Spinning Disk, SSD, and 10GigE at the low end, through NVMe, PCIe gen3, PCIe gen4, and DDR4-3200 DIMM (typical PC RAM), up to NVLink and CPU Cache.]
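The "few hundred MB per partition" guidance above can be turned into a rough repartition heuristic; a sketch (the 512 MB target is an assumption, tune per workload):

```python
def gpu_partition_count(total_bytes, target_bytes=512 * 1024 * 1024):
    """Pick a partition count so each GPU batch amortizes transfer cost."""
    return max(1, -(-total_bytes // target_bytes))  # ceiling division

print(gpu_partition_count(10 * 1024**3))  # → 20 partitions for 10 GiB
```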
28. But It Can Be Amazing
▪ High cardinality data
▪ Joins
▪ Aggregates
▪ Sort
▪ Window operations
▪ Especially on large windows
▪ Aggregate with lots of distinct operations
▪ Complicated processing
▪ Transcoding
▪ Encoding and compressing Parquet and ORC is expensive
▪ Parsing CSV is expensive
What the SQL plugin excels at
30. Spark SQL & DataFrame Compilation Flow
DataFrame
Logical Plan
Physical Plan
RDD[InternalRow]
bar.groupBy(
    col("product_id"),
    col("ds"))
  .agg(
    (max(col("price")) -
     min(col("price"))).alias("range"))
SELECT product_id, ds,
  max(price) - min(price) AS range
FROM bar GROUP BY product_id, ds
QUERY
CPU PHYSICAL PLAN
31. Spark SQL & DataFrame Compilation Flow
DataFrame
Logical Plan
Physical Plan
RDD[InternalRow]
bar.groupBy(
    col("product_id"),
    col("ds"))
  .agg(
    (max(col("price")) -
     min(col("price"))).alias("range"))
SELECT product_id, ds,
  max(price) - min(price) AS range
FROM bar GROUP BY product_id, ds
QUERY
GPU PHYSICAL PLAN
Physical Plan
RDD[ColumnarBatch]
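Independent of which physical plan runs, the query computes the per-(product_id, ds) price range; in plain Python terms (the sample rows are invented):

```python
from collections import defaultdict

bar = [
    ("p1", "2020-05-01", 3.0),
    ("p1", "2020-05-01", 7.5),
    ("p2", "2020-05-01", 4.0),
]

# Group rows by (product_id, ds), then compute max(price) - min(price).
groups = defaultdict(list)
for product_id, ds, price in bar:
    groups[(product_id, ds)].append(price)

ranges = {k: max(v) - min(v) for k, v in groups.items()}
print(ranges[("p1", "2020-05-01")])  # → 4.5
```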
32. Spark SQL & DataFrame Compilation Flow
CPU PHYSICAL PLAN:
Read Parquet File → First Stage Aggregate → Shuffle Exchange (combine shuffle data) → Second Stage Aggregate → Write Parquet File
GPU PHYSICAL PLAN:
Read Parquet File → First Stage Aggregate → Convert to Row Format → Shuffle Exchange (combine shuffle data) → Convert to Row Format → Second Stage Aggregate → Write Parquet File
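The "Convert to Row Format" nodes in the GPU plan are inserted mechanically wherever a columnar operator borders a row-based one; a toy sketch of that planner pass (operator names and the plan encoding are illustrative):

```python
def insert_conversions(plan):
    """Walk (op, is_columnar) pairs and add a conversion at each boundary."""
    out, prev_columnar = [], False   # assume the pipeline starts row-based
    for op, columnar in plan:
        if columnar != prev_columnar:
            out.append("ConvertToColumnar" if columnar else "ConvertToRow")
        out.append(op)
        prev_columnar = columnar
    return out

plan = [("ReadParquet", True), ("Aggregate", True), ("Shuffle", False)]
print(insert_conversions(plan))
# → ['ConvertToColumnar', 'ReadParquet', 'Aggregate', 'ConvertToRow', 'Shuffle']
```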
33. ETL Technology Stack
[Diagram: two stacks on a shared base. Python side: Dask cuDF → cuDF, Pandas → Python / Cython. Spark side: Spark DataFrame, Scala, PySpark → Java / JNI bindings. Both sit on cuDF C++ → CUDA Libraries → CUDA.]
36. Databricks Demo Results
“The more you buy, the more you save” – Jensen H Huang, CEO NVIDIA
ETL Time (seconds): CPU (12 × r4.2xlarge) 1,736 vs GPU (12 × p3.2xlarge) 423 → 4x speed-up
ETL Cost (AWS + DBU): CPU $8.03 vs GPU $6.81 → 18% cost savings*
* Costs based on Databricks Standard edition
37. T4 Cluster Setup
EC2
V100 is optimized for ML/DL training; T4 is a better fit for SQL processing
Driver (ran on one of the worker nodes)
Workers:
▪ 12 – g4dn.2xlarge
▪ 32GB Memory
▪ 1 - T4
▪ 8 cores
38. Coming Soon….T4 GPUs on Databricks
Same speed-up as V100 but more savings
ETL Time (seconds): CPU (12 × r4.2xlarge) 1,736 vs GPU (12 × g4dn.2xlarge) 457 → 3.8x speed-up
ETL Cost (AWS + DBU): CPU $8.03 vs GPU $3.76 → 50% cost savings*
* Costs based on AWS T4 GPU instance market price & V100 GPU price on Databricks Standard edition
39. RAPIDS Accelerator on AWS
▪ ~3.5x Speed-up
▪ ~40% Cost Savings
Based on TPCx-BB-like queries #5 & #22 with 1TB scale factor input
ETL Time (seconds): Q5: CPU 221 vs GPU 61; Q22: CPU 82.68 vs GPU 26.83
CPU: 12 × m5dn.2xlarge (8-core, 32GB)
GPU: 12 × g4dn.2xlarge (8-core, 32GB, 1× T4 GPU)
49. What’s Next
▪ Open Source (DONE)
▪ https://github.com/NVIDIA/spark-rapids
▪ https://nvidia.github.io/spark-rapids/
▪ Nested types
▪ Arrays
▪ Structs
▪ Maps
▪ Decimal type
▪ More operators
▪ GPU Direct Storage
▪ Time zone support for timestamps
▪ Only UTC supported now
▪ Higher order functions
▪ UDFs
(Items above span both “coming soon” and “further out”)
50. Where to Get More Information
▪ https://NVIDIA.com/Spark
▪ Please use the “Contact Us” link to get in touch with NVIDIA’s Spark team
▪ https://github.com/NVIDIA/spark-rapids
▪ https://nvidia.github.io/spark-rapids/
▪ Listen to Adobe’s Email Marketing Intelligent Services Use-Case
▪ Free e-book at NVIDIA.com/Spark-Book