Deep Dive into the New Features
of Upcoming Apache Spark 3.0
Xiao Li gatorsmile
June 2020
Wenchen Fan cloud-fan
• Open Source Team at
• Apache Spark Committer and PMC
About Us
Xiao Li (Github: gatorsmile) Wenchen Fan (Github: cloud-fan)
Unified data analytics platform for accelerating innovation across
data science, data engineering, and business analytics
Original creators of popular data and machine learning open source projects
Global company with 5,000 customers and 450+ partners
3400+ Resolved JIRAs in Spark 3.0 RC2
Adaptive Query
Execution
Dynamic Partition
Pruning
Query Compilation
Speedup
Join Hints
Performance
Richer APIs
Accelerator-aware
Scheduler
Built-in
Functions
pandas UDF
Enhancements
DELETE/UPDATE/
MERGE in Catalyst
Reserved
Keywords
Proleptic Gregorian
Calendar
ANSI Store
Assignment
Overflow
Checking
SQL Compatibility
Built-in Data Sources
Parquet/ORC Nested
Column Pruning
Parquet: Nested
Column Filter
Pushdown
CSV Filter
Pushdown
New Binary
Data Source
Data Source V2 API +
Catalog Support
Java 11 Support
Hadoop 3
Support
Hive 3.x Metastore
Hive 2.3 Execution
Extensibility and Ecosystem
Structured
Streaming UI
DDL/DML
Enhancements
Observable
Metrics
Event Log
Rollover
Monitoring and Debuggability
Performance
Achieve high performance for interactive, batch, streaming and ML workloads
Adaptive Query
Execution
Dynamic Partition
Pruning
Query Compilation
Speedup
Join Hints
Spark Catalyst Optimizer
Spark 1.x, Rule
Spark 2.x, Rule + Cost
Query Optimization in Spark 2.x
▪ Missing statistics
Expensive statistics collection
▪ Out-of-date statistics
Compute and storage separated
▪ Suboptimal Heuristics
Local
▪ Misestimated costs
Complex environments
User-defined functions
Spark Catalyst Optimizer
Spark 1.x, Rule
Spark 2.x, Rule + Cost
Spark 3.0, Rule + Cost + Runtime
adaptive planning
Based on runtime statistics of the finished plan nodes, re-optimize the execution plan for the remaining query stages
Adaptive Query Execution [AQE]
Blog post: https://databricks.com/blog/2020/05/29/adaptive-query-execution-speeding-up-spark-sql-at-runtime.html
Based on runtime statistics of the finished plan nodes, AQE can:
• Convert Sort Merge Join to Broadcast Hash Join
• Shrink the number of reducers
• Handle skew join
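The first of these re-optimizations can be sketched in plain Python (a simulation of the decision, not Spark's implementation; the 10 MB threshold mirrors the default `spark.sql.autoBroadcastJoinThreshold`, and the 30 MB / 8 MB figures are the example sizes used on the later slides):

```python
# Plain-Python sketch of AQE's join re-planning: the static planner
# compares an *estimated* size against the broadcast threshold, while
# AQE re-checks with the *actual* size measured after the shuffle
# stage finishes.
BROADCAST_THRESHOLD_MB = 10  # mirrors spark.sql.autoBroadcastJoinThreshold

def choose_join(build_side_mb):
    if build_side_mb <= BROADCAST_THRESHOLD_MB:
        return "broadcast hash join"
    return "sort merge join"

estimated_mb, actual_mb = 30, 8    # planner estimate vs runtime statistic
print(choose_join(estimated_mb))   # sort merge join      (static plan)
print(choose_join(actual_mb))      # broadcast hash join  (AQE re-plan)
```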
Adaptive Query Execution
One of the Most Popular Performance Tuning Tips
▪ Choose Broadcast Hash Join?
▪ Increase “spark.sql.autoBroadcastJoinThreshold”?
▪ Use “broadcast” hint?
However
▪ Hard to tune
▪ Hard to maintain over time
▪ OOM…
Why Doesn't Spark Make the Best Choice Automatically?
▪ Inaccurate/missing statistics;
▪ File is compressed; columnar store;
▪ Complex filters; black-box UDFs;
▪ Complex query fragments…
Convert Sort Merge Join to Broadcast Hash Join
[Plan diagrams: at planning time the two join sides are estimated at 100 MB and 30 MB, so the static plan is a sort-merge join (Scan → Filter → Shuffle → Sort in stages 1 and 2). When the shuffle stages finish, the actual sizes turn out to be 86 MB and 8 MB; the 8 MB side fits under the broadcast threshold, so AQE re-optimizes the remaining plan into a broadcast hash join.]
One More Popular Performance Tuning Tip
▪ Tuning spark.sql.shuffle.partitions
▪ Default magic number: 200 !?!
However
▪ Too small: GC pressure; disk spilling
▪ Too large: Inefficient I/O; scheduler pressure
▪ Hard to tune over the whole query plan
▪ Hard to maintain over time
Dynamically Coalesce Shuffle Partitions
[Plan diagrams: the query starts with a shuffle of 50 partitions; after stage 1 executes, AQE sees the actual output sizes and inserts a Coalesce node that merges them into 5 partitions before the Sort.]
Set the initial partition number high to accommodate the largest data size of the entire query execution. Automatically coalesce partitions if needed after each query stage.
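The coalescing step can be sketched as follows (a simulation, not Spark's implementation; the 64 MB target mirrors the default `spark.sql.adaptive.advisoryPartitionSizeInBytes`):

```python
# Sketch of "dynamically coalesce shuffle partitions": merge adjacent
# small shuffle partitions until each coalesced partition approaches a
# target size.
def coalesce(sizes_mb, target_mb=64):
    groups, current, current_size = [], [], 0
    for i, size in enumerate(sizes_mb):
        current.append(i)
        current_size += size
        if current_size >= target_mb:      # group is big enough; close it
            groups.append(current)
            current, current_size = [], 0
    if current:                            # leftover partial group
        groups.append(current)
    return groups

# 50 tiny (~5 MB) shuffle partitions collapse into a handful of reducers
sizes = [5] * 50
print(len(coalesce(sizes)))   # 4
```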
Another Popular Performance Tuning Tip
▪ Symptoms of data skew
▪ Frozen/long-running tasks
▪ Disk spilling
▪ Low resource utilization in most nodes
▪ OOM
▪ Various workarounds
▪ Find the skew values and rewrite the queries
▪ Add extra skew keys (salting)…
Data Skew in Sort Merge Join
[Diagram: Tables A and B are shuffled and sorted into four partitions each; Table A's Part 0 is far larger than the rest, so the merge-join task for partition 0 dominates the stage while the other three tasks finish quickly.]
Dynamically Optimize Skew Joins
[Plan diagrams: after stages 1 and 2 execute, AQE detects the skewed partition from the runtime shuffle statistics and rewrites the sort-merge join to read it through skew readers on both sides.]
• Detect skew from partition sizes using runtime statistics
• Split skew partitions into smaller sub-partitions
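These two steps can be sketched in plain Python (a simulation; the factor-of-5 and 256 MB defaults mirror `spark.sql.adaptive.skewJoin.skewedPartitionFactor` and `skewedPartitionThresholdInBytes`, and the 64 MB split target is illustrative):

```python
# Sketch of AQE skew handling: a partition is skewed when it is both
# several times larger than the median partition and above an absolute
# size floor; skewed partitions are split into target-sized sub-tasks.
from statistics import median

def split_skewed(sizes_mb, factor=5, floor_mb=256, target_mb=64):
    med = median(sizes_mb)
    tasks = []
    for i, size in enumerate(sizes_mb):
        if size > factor * med and size > floor_mb:
            splits = -(-size // target_mb)            # ceiling division
            tasks.extend((i, s) for s in range(splits))
        else:
            tasks.append((i, 0))                      # normal partition
    return tasks

sizes = [30, 30, 300, 30]       # partition 2 is heavily skewed
print(len(split_skewed(sizes))) # 8: three normal tasks + five sub-tasks
```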
[Diagram: Table A's skewed Part 0 is split into three sub-partitions (Splits 0–2), and Table B's Part 0 is duplicated once per split; each split is sorted and merge-joined against its copy of B's Part 0, while the remaining partitions are merge-joined as before.]
Adaptive Query Execution
Dynamic Partition Pruning
• Avoid scanning partitions of one table based on the query results of other query fragments.
• Important for star-schema queries.
• Significant speedup in TPC-DS.
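The mechanism can be sketched in plain Python (a simulation of the idea, not Spark's implementation; the table contents are made up for illustration):

```python
# Sketch of dynamic partition pruning: evaluate the dimension-side
# filter first, collect the surviving join keys, and scan only the
# matching fact-table partitions. Dicts stand in for partitioned files.
fact_partitions = {                 # t1, partitioned by pKey
    0: ["row"] * 1000,
    1: ["row"] * 1000,
    2: ["row"] * 1000,
    3: ["row"] * 1000,
}
dim_rows = [(0, 0), (1, 1), (2, 2), (3, 3)]   # t2 rows: (id, pKey)

# Broadcast side: SELECT pKey FROM t2 WHERE id < 2
dpp_filter = {pkey for (tid, pkey) in dim_rows if tid < 2}

# Fact side scans only partitions whose key survives the filter
scanned = {k: v for k, v in fact_partitions.items() if k in dpp_filter}
print(sorted(scanned))   # [0, 1] -- only 2 of 4 partitions are scanned
```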
Dynamic Partition Pruning
60 / 102 TPC-DS queries: a speedup between 2x and 18x
Example: t1 is a large fact table with many partitions; t2 is a dimension table with a filter t2.id < 2.

SELECT t1.id, t2.pKey
FROM t1
JOIN t2
ON t1.pKey = t2.pKey
AND t2.id < 2

[Plan diagrams: the static optimizer pushes the filter t2.id < 2 down into the scan of t2, but still scans all partitions of t1. Dynamic partition pruning reuses the filtered dimension side as a filter on the fact table, t1.pKey IN (SELECT t2.pKey FROM t2 WHERE t2.id < 2), so the scan of t1 reads only the required partitions (t1.pKey in DPPFilterResult). Result: 90+% less file scan, 33x faster.]
Optimizer Hints
▪ Join hints influence the optimizer's choice of join strategy:
▪ Broadcast hash join
▪ Sort-merge join (new)
▪ Shuffle hash join (new)
▪ Shuffle-and-replicate nested loop join (new)
▪ Should be used with extreme caution.
▪ Difficult to manage over time.
▪ Broadcast Hash Join
SELECT /*+ BROADCAST(a) */ id FROM a JOIN b ON a.key = b.key
▪ Sort-Merge Join
SELECT /*+ MERGE(a, b) */ id FROM a JOIN b ON a.key = b.key
▪ Shuffle Hash Join
SELECT /*+ SHUFFLE_HASH(a, b) */ id FROM a JOIN b ON a.key = b.key
▪ Shuffle Nested Loop Join
SELECT /*+ SHUFFLE_REPLICATE_NL(a, b) */ id FROM a JOIN b
How to Use Join Hints?
Broadcast Hash Join
Requires one side to be small. No shuffle, no sort, very fast.
Sort-Merge Join
Robust. Can handle any data size. Needs to shuffle and sort data; slower in most cases when the table size is small.
Shuffle Hash Join
Needs to shuffle data but no sort. Can handle large tables, but can still OOM if the data is skewed.
Shuffle Nested Loop Join
Doesn't require join keys.
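The guidance above can be condensed into a rough decision sketch (illustrative only; the size thresholds below are assumptions, not Spark's actual selection logic):

```python
# Rough sketch of when each hinted join strategy pays off.
# Thresholds (10 MB broadcast, 500 MB hash-build) are illustrative.
def pick_join(left_mb, right_mb, has_equi_keys):
    small = min(left_mb, right_mb)
    if not has_equi_keys:
        return "SHUFFLE_REPLICATE_NL"  # nested loop: no join keys needed
    if small <= 10:
        return "BROADCAST"             # one tiny side: no shuffle, no sort
    if small <= 500:
        return "SHUFFLE_HASH"          # shuffles, but skips the sort
    return "MERGE"                     # sort-merge: robust at any size

print(pick_join(10_000, 5, True))      # BROADCAST
print(pick_join(10_000, 8_000, True))  # MERGE
```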
Enable new use cases and simplify the Spark application development
Accelerator-aware
Scheduler
Built-in
Functions
pandas UDF
enhancements
DELETE/UPDATE/
MERGE in Catalyst
Richer APIs
Evolution of Python UDFs in Spark:
▪ v0.7 (2013): Python lambda functions for RDDs
▪ v1.2 (2014): Python UDF for SQL
▪ v2.0 (2016): session-specific Python UDF
▪ v2.1 (2017): Java UDF in the Python API
▪ v2.3/2.4 (2018): pandas UDF
▪ v3.0 (2019/2020): new pandas UDF with Python type hints
Python type hints replace the separate pandas UDF type declarations:
▪ Scalar Pandas UDF (pandas.Series → pandas.Series): introduced in Spark 2.3, expressed with Python type hints in Spark 3.0
▪ Grouped Map Pandas Function API (pandas.DataFrame → pandas.DataFrame): introduced in Spark 2.3, expressed with Python type hints in Spark 3.0
▪ Grouped Aggregate Pandas UDF (pandas.Series → scalar): introduced in Spark 2.4, expressed with Python type hints in Spark 3.0
New Pandas Function APIs in Spark 3.0:
▪ Map Pandas UDF
▪ Cogrouped Map Pandas UDF
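The type-hint mechanism can be sketched without pyspark or pandas installed (a minimal simulation: the `Series`/`DataFrame` classes below are stand-ins for the pandas types, and `infer_udf_type` mimics how Spark 3.0 infers the UDF variant from the hints instead of an explicit `PandasUDFType` argument):

```python
# Sketch of Spark 3.0's type-hint-based pandas UDF classification.
from typing import get_type_hints

class Series: pass       # stand-in for pandas.Series
class DataFrame: pass    # stand-in for pandas.DataFrame

def infer_udf_type(func):
    hints = dict(get_type_hints(func))
    ret = hints.pop("return", None)
    args = list(hints.values())
    if args == [Series] and ret is Series:
        return "SCALAR"         # pandas.Series -> pandas.Series
    if args == [DataFrame] and ret is DataFrame:
        return "GROUPED_MAP"    # pandas.DataFrame -> pandas.DataFrame
    if args == [Series] and ret not in (Series, DataFrame):
        return "GROUPED_AGG"    # pandas.Series -> scalar
    raise TypeError("unsupported pandas UDF signature")

# Spark 2.3 style needed @pandas_udf("long", PandasUDFType.SCALAR);
# in Spark 3.0 the hints alone carry that information:
def plus_one(s: Series) -> Series:
    return s   # body elided; real code would return s + 1

print(infer_udf_type(plus_one))  # SCALAR
```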
Accelerator-aware Scheduling
▪ Widely used to accelerate specialized workloads, e.g., deep learning and signal processing.
▪ Supports Standalone, YARN and Kubernetes.
▪ Supports GPUs now; FPGAs, TPUs, etc. in the future.
▪ Requires specifying the needed resources via configs.
▪ Application level for now; job/stage/task-level support will come in the future.
The workflow (User ↔ Spark ↔ Cluster Manager):
0. Auto-discover resources.
1. Submit an application with resource requests.
2. Pass resource requests to the cluster manager.
3. Allocate executors with resource isolation.
4. Register executors.
5. Submit a Spark job.
6. Schedule tasks on available executors.
7. Dynamic allocation.
8. Retrieve assigned resources and use them in tasks.
9. Monitor and recover failed executors.
Discover and request accelerators
Admin can specify a script to auto-discover accelerators (SPARK-27024)
● spark.driver.resource.${resourceName}.discoveryScript
● spark.executor.resource.${resourceName}.discoveryScript
● e.g., `nvidia-smi --query-gpu=index ...`
User can request accelerators at application level (SPARK-27366)
● spark.executor.resource.${resourceName}.amount
● spark.driver.resource.${resourceName}.amount
● spark.task.resource.${resourceName}.amount
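Put together, a submission requesting one GPU per executor and per task might look like this (a sketch: the config keys are the real ones listed above, but the discovery-script path and application name are illustrative):

```shell
spark-submit \
  --conf spark.driver.resource.gpu.amount=1 \
  --conf spark.executor.resource.gpu.amount=1 \
  --conf spark.task.resource.gpu.amount=1 \
  --conf spark.executor.resource.gpu.discoveryScript=/opt/spark/getGpus.sh \
  app.py
```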
Retrieve assigned accelerators
Users can retrieve assigned accelerators from the task context (SPARK-27366):

context = TaskContext.get()
assigned_gpu = context.resources()["gpu"].addresses[0]
with tf.device(assigned_gpu):
    # training code ...
Cluster manager support
▪ Standalone: SPARK-27360
▪ YARN: SPARK-27361
▪ Kubernetes: SPARK-27362
▪ Mesos (not started): SPARK-27363
Web UI for accelerators
32 New Built-in Functions
Make monitoring and debugging Spark applications more comprehensive and stable
Structured
Streaming UI
DDL/DML
Enhancements
Observable
Metrics
Event Log
Rollover
Monitoring and Debuggability
Structured Streaming UI
New Command EXPLAIN FORMATTED
*(1) Project [key#5, val#6]
+- *(1) Filter (isnotnull(key#5) AND (key#5 = Subquery scalar-subquery#15, [id=#113]))
: +- Subquery scalar-subquery#15, [id=#113]
: +- *(2) HashAggregate(keys=[], functions=[max(key#21)])
: +- Exchange SinglePartition, true, [id=#109]
: +- *(1) HashAggregate(keys=[], functions=[partial_max(key#21)])
: +- *(1) Project [key#21]
: +- *(1) Filter (isnotnull(val#22) AND (val#22 > 5))
: +- *(1) ColumnarToRow
: +- FileScan parquet default.tab2[key#21,val#22] Batched: true, DataFilters: [isnotnull(val#22), (val#22 >
5)], Format: Parquet, Location: InMemoryFileIndex[file:/user/hive/warehouse/tab2], PartitionFilters: [],
PushedFilters: [IsNotNull(val), GreaterThan(val,5)], ReadSchema: struct<key:int,val:int>
+- *(1) ColumnarToRow
+- FileScan parquet default.tab1[key#5,val#6] Batched: true, DataFilters: [isnotnull(key#5)], Format: Parquet,
Location: InMemoryFileIndex[file:/user/hive/warehouse/tab1], PartitionFilters: [], PushedFilters: [IsNotNull(key)],
ReadSchema: struct<key:int,val:int>
* Project (4)
+- * Filter (3)
+- * ColumnarToRow (2)
+- Scan parquet default.tab1 (1)
(1) Scan parquet default.tab1
Output [2]: [key#5, val#6]
Batched: true
Location: InMemoryFileIndex [file:/user/hive/warehouse/tab1]
PushedFilters: [IsNotNull(key)]
ReadSchema: struct<key:int,val:int>
(2) ColumnarToRow [codegen id : 1]
Input [2]: [key#5, val#6]
(3) Filter [codegen id : 1]
Input [2]: [key#5, val#6]
Condition : (isnotnull(key#5) AND (key#5 = Subquery scalar-subquery#27, [id=#164]))
(4) Project [codegen id : 1]
Output [2]: [key#5, val#6]
Input [2]: [key#5, val#6]
EXPLAIN FORMATTED
SELECT *
FROM tab1
WHERE key = (SELECT max(key)
             FROM tab2
             WHERE val > 5)
(5) Scan parquet default.tab2
Output [2]: [key#21, val#22]
Batched: true
Location: InMemoryFileIndex [file:/user/hive/warehouse/tab2]
PushedFilters: [IsNotNull(val), GreaterThan(val,5)]
ReadSchema: struct<key:int,val:int>
(6) ColumnarToRow [codegen id : 1]
Input [2]: [key#21, val#22]
(7) Filter [codegen id : 1]
Input [2]: [key#21, val#22]
Condition : (isnotnull(val#22) AND (val#22 > 5))
===== Subqueries =====
Subquery:1 Hosting operator id = 3 Hosting Expression = Subquery scalar-subquery#27, [id=#164]
* HashAggregate (11)
+- Exchange (10)
+- * HashAggregate (9)
+- * Project (8)
+- * Filter (7)
+- * ColumnarToRow (6)
+- Scan parquet default.tab2 (5)
(8) Project [codegen id : 1]
Output [1]: [key#21]
Input [2]: [key#21, val#22]
(9) HashAggregate [codegen id : 1]
Input [1]: [key#21]
Keys: []
Functions [1]: [partial_max(key#21)]
Aggregate Attributes [1]: [max#35]
Results [1]: [max#36]
(10) Exchange
Input [1]: [max#36]
Arguments: SinglePartition, true, [id=#160]
(11) HashAggregate [codegen id : 2]
Input [1]: [max#36]
Keys: []
Functions [1]: [max(key#21)]
Aggregate Attributes [1]: [max(key#21)#33]
Results [1]: [max(key#21)#33 AS max(key)#34]
DDL/DML
Enhancements
Observable Metrics
A flexible way to monitor data quality.
Reduce the time and complexity of enabling applications that were written for other
relational database products to run in Spark SQL.
Reserved Keywords
in Parser
Proleptic Gregorian
Calendar
ANSI Store
Assignment
Overflow
Checking
SQL Compatibility
ANSI store assignment + overflow check
A safer way to do table insertion and avoid bad data.
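The behavior can be sketched in plain Python (a simulation, not Spark's implementation; TINYINT is assumed to be Spark's ByteType with range -128..127, and the legacy branch models the old wrap-around cast):

```python
# Sketch of ANSI store assignment with overflow checking: storing an
# out-of-range value into a TINYINT column raises in ANSI mode, while
# the legacy cast silently wraps two's-complement style.
def store_tinyint(value, ansi=True):
    if -128 <= value <= 127:
        return value
    if ansi:
        raise ArithmeticError(f"Casting {value} to TINYINT causes overflow")
    return ((value + 128) % 256) - 128   # legacy wrap-around

print(store_tinyint(100))                # 100
print(store_tinyint(1000, ansi=False))   # -24 (silently corrupted)
try:
    store_tinyint(1000)                  # ANSI mode rejects the bad value
except ArithmeticError as e:
    print(e)
```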
Enhance the performance and functionalities of the built-in data sources
Parquet/ORC Nested
Column Pruning
Parquet: Nested Column
Filter Pushdown
New Binary
Data Source
CSV Filter
Pushdown
Built-in Data Sources
Better performance for nested fields
▪ Skip reading useless data blocks when only a few inner fields are selected.
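Nested-column pruning can be sketched as computing the minimal nested schema a query needs (a toy model: dicts stand in for a Parquet/ORC struct schema, with `None` marking leaf fields):

```python
# Sketch of nested-column pruning: given the fields a query actually
# selects, derive the minimal nested schema to read from storage.
def prune_schema(schema, selected):
    """schema: dict mapping column -> sub-schema dict, or None for a leaf."""
    pruned = {}
    for path in selected:
        top, _, rest = path.partition(".")
        if rest and isinstance(schema.get(top), dict):
            sub = prune_schema(schema[top], [rest])
            pruned.setdefault(top, {}).update(sub)
        else:
            pruned[top] = schema.get(top)
    return pruned

full = {"id": None, "name": {"first": None, "last": None}}
# Selecting only name.first reads one leaf instead of the whole struct
print(prune_schema(full, ["name.first"]))  # {'name': {'first': None}}
```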
Better performance for nested fields
▪ Skip reading useless data blocks when there are predicates on inner fields.
Improve the plug-in interface and extend the deployment environments
Data Source V2 API +
Catalog Support
Hive 3.x Metastore
Hive 2.3 Execution
Hadoop 3
Support
Java 11
Support
Extensibility and Ecosystem
Catalog plugin API
Users can register customized catalogs and use Spark to
access/manipulate table metadata directly.
JDBC data source v2 is coming in Spark 3.1
To developers: When to use Data Source V2?
▪ Pick V2 if you want to provide catalog functionalities, as V1 doesn't have that ability.
▪ Pick V2 if you want to support both batch and streaming, as V1 uses separate APIs for batch and streaming, which makes it hard to reuse code.
▪ Pick V2 if you are sensitive to scan performance, as V2 lets you report data partitioning to skip shuffles and implement a vectorized reader for better performance.
Note that the Data Source V2 API is not yet as stable as V1!
Spark 3.0 Builds
• Only builds with Scala 2.12
• Deprecates Python 2 (already EOL)
• Can build with various Hadoop/Hive versions
– Hadoop 2.7 + Hive 1.2
– Hadoop 2.7 + Hive 2.3 (supports Java 11) [Default]
– Hadoop 3.2 + Hive 2.3 (supports Java 11)
• Supports the following Hive metastore versions:
– "0.12", "0.13", "0.14", "1.0", "1.1", "1.2", "2.0", "2.1", "2.2", "2.3", "3.0", "3.1"
32 New Built-in Functions
▪ map
Documentation
• Web UI
• SQL reference
• Migration guide
• Semantic versioning guidelines
Try Databricks Runtime 7.0 Beta for free: https://databricks.com/try-databricks
Thank you for your
contributions!
 
Using Delta Lake to Transform a Legacy Apache Spark to Support Complex Update...
Using Delta Lake to Transform a Legacy Apache Spark to Support Complex Update...Using Delta Lake to Transform a Legacy Apache Spark to Support Complex Update...
Using Delta Lake to Transform a Legacy Apache Spark to Support Complex Update...
 
Cost-Based Optimizer in Apache Spark 2.2
Cost-Based Optimizer in Apache Spark 2.2 Cost-Based Optimizer in Apache Spark 2.2
Cost-Based Optimizer in Apache Spark 2.2
 
Cost-Based Optimizer in Apache Spark 2.2 Ron Hu, Sameer Agarwal, Wenchen Fan ...
Cost-Based Optimizer in Apache Spark 2.2 Ron Hu, Sameer Agarwal, Wenchen Fan ...Cost-Based Optimizer in Apache Spark 2.2 Ron Hu, Sameer Agarwal, Wenchen Fan ...
Cost-Based Optimizer in Apache Spark 2.2 Ron Hu, Sameer Agarwal, Wenchen Fan ...
 
Optimize the Large Scale Graph Applications by using Apache Spark with 4-5x P...
Optimize the Large Scale Graph Applications by using Apache Spark with 4-5x P...Optimize the Large Scale Graph Applications by using Apache Spark with 4-5x P...
Optimize the Large Scale Graph Applications by using Apache Spark with 4-5x P...
 
Deep Dive into the New Features of Apache Spark 3.1
Deep Dive into the New Features of Apache Spark 3.1Deep Dive into the New Features of Apache Spark 3.1
Deep Dive into the New Features of Apache Spark 3.1
 
Advanced Analytics using Apache Hive
Advanced Analytics using Apache HiveAdvanced Analytics using Apache Hive
Advanced Analytics using Apache Hive
 
Optimising Geospatial Queries with Dynamic File Pruning
Optimising Geospatial Queries with Dynamic File PruningOptimising Geospatial Queries with Dynamic File Pruning
Optimising Geospatial Queries with Dynamic File Pruning
 
sql_bootcamp.pdf
sql_bootcamp.pdfsql_bootcamp.pdf
sql_bootcamp.pdf
 
How to use Impala query plan and profile to fix performance issues
How to use Impala query plan and profile to fix performance issuesHow to use Impala query plan and profile to fix performance issues
How to use Impala query plan and profile to fix performance issues
 
2017 AWS DB Day | Amazon Redshift 소개 및 실습
2017 AWS DB Day | Amazon Redshift  소개 및 실습2017 AWS DB Day | Amazon Redshift  소개 및 실습
2017 AWS DB Day | Amazon Redshift 소개 및 실습
 
Scalable data pipeline
Scalable data pipelineScalable data pipeline
Scalable data pipeline
 
Mutable data @ scale
Mutable data @ scaleMutable data @ scale
Mutable data @ scale
 
Experiences Migrating Hive Workload to SparkSQL with Jie Xiong and Zhan Zhang
Experiences Migrating Hive Workload to SparkSQL with Jie Xiong and Zhan ZhangExperiences Migrating Hive Workload to SparkSQL with Jie Xiong and Zhan Zhang
Experiences Migrating Hive Workload to SparkSQL with Jie Xiong and Zhan Zhang
 
Jboss World 2011 Infinispan
Jboss World 2011 InfinispanJboss World 2011 Infinispan
Jboss World 2011 Infinispan
 

More from Databricks

DW Migration Webinar-March 2022.pptx
DW Migration Webinar-March 2022.pptxDW Migration Webinar-March 2022.pptx
DW Migration Webinar-March 2022.pptx
Databricks
 
Data Lakehouse Symposium | Day 1 | Part 1
Data Lakehouse Symposium | Day 1 | Part 1Data Lakehouse Symposium | Day 1 | Part 1
Data Lakehouse Symposium | Day 1 | Part 1
Databricks
 
Data Lakehouse Symposium | Day 1 | Part 2
Data Lakehouse Symposium | Day 1 | Part 2Data Lakehouse Symposium | Day 1 | Part 2
Data Lakehouse Symposium | Day 1 | Part 2
Databricks
 
Data Lakehouse Symposium | Day 2
Data Lakehouse Symposium | Day 2Data Lakehouse Symposium | Day 2
Data Lakehouse Symposium | Day 2
Databricks
 
Data Lakehouse Symposium | Day 4
Data Lakehouse Symposium | Day 4Data Lakehouse Symposium | Day 4
Data Lakehouse Symposium | Day 4
Databricks
 
5 Critical Steps to Clean Your Data Swamp When Migrating Off of Hadoop
5 Critical Steps to Clean Your Data Swamp When Migrating Off of Hadoop5 Critical Steps to Clean Your Data Swamp When Migrating Off of Hadoop
5 Critical Steps to Clean Your Data Swamp When Migrating Off of Hadoop
Databricks
 
Democratizing Data Quality Through a Centralized Platform
Democratizing Data Quality Through a Centralized PlatformDemocratizing Data Quality Through a Centralized Platform
Democratizing Data Quality Through a Centralized Platform
Databricks
 
Learn to Use Databricks for Data Science
Learn to Use Databricks for Data ScienceLearn to Use Databricks for Data Science
Learn to Use Databricks for Data Science
Databricks
 
Why APM Is Not the Same As ML Monitoring
Why APM Is Not the Same As ML MonitoringWhy APM Is Not the Same As ML Monitoring
Why APM Is Not the Same As ML Monitoring
Databricks
 
The Function, the Context, and the Data—Enabling ML Ops at Stitch Fix
The Function, the Context, and the Data—Enabling ML Ops at Stitch FixThe Function, the Context, and the Data—Enabling ML Ops at Stitch Fix
The Function, the Context, and the Data—Enabling ML Ops at Stitch Fix
Databricks
 
Stage Level Scheduling Improving Big Data and AI Integration
Stage Level Scheduling Improving Big Data and AI IntegrationStage Level Scheduling Improving Big Data and AI Integration
Stage Level Scheduling Improving Big Data and AI Integration
Databricks
 
Simplify Data Conversion from Spark to TensorFlow and PyTorch
Simplify Data Conversion from Spark to TensorFlow and PyTorchSimplify Data Conversion from Spark to TensorFlow and PyTorch
Simplify Data Conversion from Spark to TensorFlow and PyTorch
Databricks
 
Scaling your Data Pipelines with Apache Spark on Kubernetes
Scaling your Data Pipelines with Apache Spark on KubernetesScaling your Data Pipelines with Apache Spark on Kubernetes
Scaling your Data Pipelines with Apache Spark on Kubernetes
Databricks
 
Scaling and Unifying SciKit Learn and Apache Spark Pipelines
Scaling and Unifying SciKit Learn and Apache Spark PipelinesScaling and Unifying SciKit Learn and Apache Spark Pipelines
Scaling and Unifying SciKit Learn and Apache Spark Pipelines
Databricks
 
Sawtooth Windows for Feature Aggregations
Sawtooth Windows for Feature AggregationsSawtooth Windows for Feature Aggregations
Sawtooth Windows for Feature Aggregations
Databricks
 
Redis + Apache Spark = Swiss Army Knife Meets Kitchen Sink
Redis + Apache Spark = Swiss Army Knife Meets Kitchen SinkRedis + Apache Spark = Swiss Army Knife Meets Kitchen Sink
Redis + Apache Spark = Swiss Army Knife Meets Kitchen Sink
Databricks
 
Re-imagine Data Monitoring with whylogs and Spark
Re-imagine Data Monitoring with whylogs and SparkRe-imagine Data Monitoring with whylogs and Spark
Re-imagine Data Monitoring with whylogs and Spark
Databricks
 
Raven: End-to-end Optimization of ML Prediction Queries
Raven: End-to-end Optimization of ML Prediction QueriesRaven: End-to-end Optimization of ML Prediction Queries
Raven: End-to-end Optimization of ML Prediction Queries
Databricks
 
Processing Large Datasets for ADAS Applications using Apache Spark
Processing Large Datasets for ADAS Applications using Apache SparkProcessing Large Datasets for ADAS Applications using Apache Spark
Processing Large Datasets for ADAS Applications using Apache Spark
Databricks
 
Machine Learning CI/CD for Email Attack Detection
Machine Learning CI/CD for Email Attack DetectionMachine Learning CI/CD for Email Attack Detection
Machine Learning CI/CD for Email Attack Detection
Databricks
 

More from Databricks (20)

DW Migration Webinar-March 2022.pptx
DW Migration Webinar-March 2022.pptxDW Migration Webinar-March 2022.pptx
DW Migration Webinar-March 2022.pptx
 
Data Lakehouse Symposium | Day 1 | Part 1
Data Lakehouse Symposium | Day 1 | Part 1Data Lakehouse Symposium | Day 1 | Part 1
Data Lakehouse Symposium | Day 1 | Part 1
 
Data Lakehouse Symposium | Day 1 | Part 2
Data Lakehouse Symposium | Day 1 | Part 2Data Lakehouse Symposium | Day 1 | Part 2
Data Lakehouse Symposium | Day 1 | Part 2
 
Data Lakehouse Symposium | Day 2
Data Lakehouse Symposium | Day 2Data Lakehouse Symposium | Day 2
Data Lakehouse Symposium | Day 2
 
Data Lakehouse Symposium | Day 4
Data Lakehouse Symposium | Day 4Data Lakehouse Symposium | Day 4
Data Lakehouse Symposium | Day 4
 
5 Critical Steps to Clean Your Data Swamp When Migrating Off of Hadoop
5 Critical Steps to Clean Your Data Swamp When Migrating Off of Hadoop5 Critical Steps to Clean Your Data Swamp When Migrating Off of Hadoop
5 Critical Steps to Clean Your Data Swamp When Migrating Off of Hadoop
 
Democratizing Data Quality Through a Centralized Platform
Democratizing Data Quality Through a Centralized PlatformDemocratizing Data Quality Through a Centralized Platform
Democratizing Data Quality Through a Centralized Platform
 
Learn to Use Databricks for Data Science
Learn to Use Databricks for Data ScienceLearn to Use Databricks for Data Science
Learn to Use Databricks for Data Science
 
Why APM Is Not the Same As ML Monitoring
Why APM Is Not the Same As ML MonitoringWhy APM Is Not the Same As ML Monitoring
Why APM Is Not the Same As ML Monitoring
 
The Function, the Context, and the Data—Enabling ML Ops at Stitch Fix
The Function, the Context, and the Data—Enabling ML Ops at Stitch FixThe Function, the Context, and the Data—Enabling ML Ops at Stitch Fix
The Function, the Context, and the Data—Enabling ML Ops at Stitch Fix
 
Stage Level Scheduling Improving Big Data and AI Integration
Stage Level Scheduling Improving Big Data and AI IntegrationStage Level Scheduling Improving Big Data and AI Integration
Stage Level Scheduling Improving Big Data and AI Integration
 
Simplify Data Conversion from Spark to TensorFlow and PyTorch
Simplify Data Conversion from Spark to TensorFlow and PyTorchSimplify Data Conversion from Spark to TensorFlow and PyTorch
Simplify Data Conversion from Spark to TensorFlow and PyTorch
 
Scaling your Data Pipelines with Apache Spark on Kubernetes
Scaling your Data Pipelines with Apache Spark on KubernetesScaling your Data Pipelines with Apache Spark on Kubernetes
Scaling your Data Pipelines with Apache Spark on Kubernetes
 
Scaling and Unifying SciKit Learn and Apache Spark Pipelines
Scaling and Unifying SciKit Learn and Apache Spark PipelinesScaling and Unifying SciKit Learn and Apache Spark Pipelines
Scaling and Unifying SciKit Learn and Apache Spark Pipelines
 
Sawtooth Windows for Feature Aggregations
Sawtooth Windows for Feature AggregationsSawtooth Windows for Feature Aggregations
Sawtooth Windows for Feature Aggregations
 
Redis + Apache Spark = Swiss Army Knife Meets Kitchen Sink
Redis + Apache Spark = Swiss Army Knife Meets Kitchen SinkRedis + Apache Spark = Swiss Army Knife Meets Kitchen Sink
Redis + Apache Spark = Swiss Army Knife Meets Kitchen Sink
 
Re-imagine Data Monitoring with whylogs and Spark
Re-imagine Data Monitoring with whylogs and SparkRe-imagine Data Monitoring with whylogs and Spark
Re-imagine Data Monitoring with whylogs and Spark
 
Raven: End-to-end Optimization of ML Prediction Queries
Raven: End-to-end Optimization of ML Prediction QueriesRaven: End-to-end Optimization of ML Prediction Queries
Raven: End-to-end Optimization of ML Prediction Queries
 
Processing Large Datasets for ADAS Applications using Apache Spark
Processing Large Datasets for ADAS Applications using Apache SparkProcessing Large Datasets for ADAS Applications using Apache Spark
Processing Large Datasets for ADAS Applications using Apache Spark
 
Machine Learning CI/CD for Email Attack Detection
Machine Learning CI/CD for Email Attack DetectionMachine Learning CI/CD for Email Attack Detection
Machine Learning CI/CD for Email Attack Detection
 

Recently uploaded

社内勉強会資料_LLM Agents                              .
社内勉強会資料_LLM Agents                              .社内勉強会資料_LLM Agents                              .
社内勉強会資料_LLM Agents                              .
NABLAS株式会社
 
Predicting Product Ad Campaign Performance: A Data Analysis Project Presentation
Predicting Product Ad Campaign Performance: A Data Analysis Project PresentationPredicting Product Ad Campaign Performance: A Data Analysis Project Presentation
Predicting Product Ad Campaign Performance: A Data Analysis Project Presentation
Boston Institute of Analytics
 
Q1’2024 Update: MYCI’s Leap Year Rebound
Q1’2024 Update: MYCI’s Leap Year ReboundQ1’2024 Update: MYCI’s Leap Year Rebound
Q1’2024 Update: MYCI’s Leap Year Rebound
Oppotus
 
一比一原版(UVic毕业证)维多利亚大学毕业证成绩单
一比一原版(UVic毕业证)维多利亚大学毕业证成绩单一比一原版(UVic毕业证)维多利亚大学毕业证成绩单
一比一原版(UVic毕业证)维多利亚大学毕业证成绩单
ukgaet
 
Business update Q1 2024 Lar España Real Estate SOCIMI
Business update Q1 2024 Lar España Real Estate SOCIMIBusiness update Q1 2024 Lar España Real Estate SOCIMI
Business update Q1 2024 Lar España Real Estate SOCIMI
AlejandraGmez176757
 
一比一原版(IIT毕业证)伊利诺伊理工大学毕业证成绩单
一比一原版(IIT毕业证)伊利诺伊理工大学毕业证成绩单一比一原版(IIT毕业证)伊利诺伊理工大学毕业证成绩单
一比一原版(IIT毕业证)伊利诺伊理工大学毕业证成绩单
ewymefz
 
一比一原版(BU毕业证)波士顿大学毕业证成绩单
一比一原版(BU毕业证)波士顿大学毕业证成绩单一比一原版(BU毕业证)波士顿大学毕业证成绩单
一比一原版(BU毕业证)波士顿大学毕业证成绩单
ewymefz
 
Algorithmic optimizations for Dynamic Levelwise PageRank (from STICD) : SHORT...
Algorithmic optimizations for Dynamic Levelwise PageRank (from STICD) : SHORT...Algorithmic optimizations for Dynamic Levelwise PageRank (from STICD) : SHORT...
Algorithmic optimizations for Dynamic Levelwise PageRank (from STICD) : SHORT...
Subhajit Sahu
 
Levelwise PageRank with Loop-Based Dead End Handling Strategy : SHORT REPORT ...
Levelwise PageRank with Loop-Based Dead End Handling Strategy : SHORT REPORT ...Levelwise PageRank with Loop-Based Dead End Handling Strategy : SHORT REPORT ...
Levelwise PageRank with Loop-Based Dead End Handling Strategy : SHORT REPORT ...
Subhajit Sahu
 
一比一原版(TWU毕业证)西三一大学毕业证成绩单
一比一原版(TWU毕业证)西三一大学毕业证成绩单一比一原版(TWU毕业证)西三一大学毕业证成绩单
一比一原版(TWU毕业证)西三一大学毕业证成绩单
ocavb
 
一比一原版(CU毕业证)卡尔顿大学毕业证成绩单
一比一原版(CU毕业证)卡尔顿大学毕业证成绩单一比一原版(CU毕业证)卡尔顿大学毕业证成绩单
一比一原版(CU毕业证)卡尔顿大学毕业证成绩单
yhkoc
 
Opendatabay - Open Data Marketplace.pptx
Opendatabay - Open Data Marketplace.pptxOpendatabay - Open Data Marketplace.pptx
Opendatabay - Open Data Marketplace.pptx
Opendatabay
 
Investigate & Recover / StarCompliance.io / Crypto_Crimes
Investigate & Recover / StarCompliance.io / Crypto_CrimesInvestigate & Recover / StarCompliance.io / Crypto_Crimes
Investigate & Recover / StarCompliance.io / Crypto_Crimes
StarCompliance.io
 
FP Growth Algorithm and its Applications
FP Growth Algorithm and its ApplicationsFP Growth Algorithm and its Applications
FP Growth Algorithm and its Applications
MaleehaSheikh2
 
一比一原版(CBU毕业证)不列颠海角大学毕业证成绩单
一比一原版(CBU毕业证)不列颠海角大学毕业证成绩单一比一原版(CBU毕业证)不列颠海角大学毕业证成绩单
一比一原版(CBU毕业证)不列颠海角大学毕业证成绩单
nscud
 
一比一原版(QU毕业证)皇后大学毕业证成绩单
一比一原版(QU毕业证)皇后大学毕业证成绩单一比一原版(QU毕业证)皇后大学毕业证成绩单
一比一原版(QU毕业证)皇后大学毕业证成绩单
enxupq
 
Tabula.io Cheatsheet: automate your data workflows
Tabula.io Cheatsheet: automate your data workflowsTabula.io Cheatsheet: automate your data workflows
Tabula.io Cheatsheet: automate your data workflows
alex933524
 
Innovative Methods in Media and Communication Research by Sebastian Kubitschk...
Innovative Methods in Media and Communication Research by Sebastian Kubitschk...Innovative Methods in Media and Communication Research by Sebastian Kubitschk...
Innovative Methods in Media and Communication Research by Sebastian Kubitschk...
correoyaya
 
Jpolillo Amazon PPC - Bid Optimization Sample
Jpolillo Amazon PPC - Bid Optimization SampleJpolillo Amazon PPC - Bid Optimization Sample
Jpolillo Amazon PPC - Bid Optimization Sample
James Polillo
 
【社内勉強会資料_Octo: An Open-Source Generalist Robot Policy】
【社内勉強会資料_Octo: An Open-Source Generalist Robot Policy】【社内勉強会資料_Octo: An Open-Source Generalist Robot Policy】
【社内勉強会資料_Octo: An Open-Source Generalist Robot Policy】
NABLAS株式会社
 

Recently uploaded (20)

社内勉強会資料_LLM Agents                              .
社内勉強会資料_LLM Agents                              .社内勉強会資料_LLM Agents                              .
社内勉強会資料_LLM Agents                              .
 
Predicting Product Ad Campaign Performance: A Data Analysis Project Presentation
Predicting Product Ad Campaign Performance: A Data Analysis Project PresentationPredicting Product Ad Campaign Performance: A Data Analysis Project Presentation
Predicting Product Ad Campaign Performance: A Data Analysis Project Presentation
 
Q1’2024 Update: MYCI’s Leap Year Rebound
Q1’2024 Update: MYCI’s Leap Year ReboundQ1’2024 Update: MYCI’s Leap Year Rebound
Q1’2024 Update: MYCI’s Leap Year Rebound
 
一比一原版(UVic毕业证)维多利亚大学毕业证成绩单
一比一原版(UVic毕业证)维多利亚大学毕业证成绩单一比一原版(UVic毕业证)维多利亚大学毕业证成绩单
一比一原版(UVic毕业证)维多利亚大学毕业证成绩单
 
Business update Q1 2024 Lar España Real Estate SOCIMI
Business update Q1 2024 Lar España Real Estate SOCIMIBusiness update Q1 2024 Lar España Real Estate SOCIMI
Business update Q1 2024 Lar España Real Estate SOCIMI
 
一比一原版(IIT毕业证)伊利诺伊理工大学毕业证成绩单
一比一原版(IIT毕业证)伊利诺伊理工大学毕业证成绩单一比一原版(IIT毕业证)伊利诺伊理工大学毕业证成绩单
一比一原版(IIT毕业证)伊利诺伊理工大学毕业证成绩单
 
一比一原版(BU毕业证)波士顿大学毕业证成绩单
一比一原版(BU毕业证)波士顿大学毕业证成绩单一比一原版(BU毕业证)波士顿大学毕业证成绩单
一比一原版(BU毕业证)波士顿大学毕业证成绩单
 
Algorithmic optimizations for Dynamic Levelwise PageRank (from STICD) : SHORT...
Algorithmic optimizations for Dynamic Levelwise PageRank (from STICD) : SHORT...Algorithmic optimizations for Dynamic Levelwise PageRank (from STICD) : SHORT...
Algorithmic optimizations for Dynamic Levelwise PageRank (from STICD) : SHORT...
 
Levelwise PageRank with Loop-Based Dead End Handling Strategy : SHORT REPORT ...
Levelwise PageRank with Loop-Based Dead End Handling Strategy : SHORT REPORT ...Levelwise PageRank with Loop-Based Dead End Handling Strategy : SHORT REPORT ...
Levelwise PageRank with Loop-Based Dead End Handling Strategy : SHORT REPORT ...
 
一比一原版(TWU毕业证)西三一大学毕业证成绩单
一比一原版(TWU毕业证)西三一大学毕业证成绩单一比一原版(TWU毕业证)西三一大学毕业证成绩单
一比一原版(TWU毕业证)西三一大学毕业证成绩单
 
一比一原版(CU毕业证)卡尔顿大学毕业证成绩单
一比一原版(CU毕业证)卡尔顿大学毕业证成绩单一比一原版(CU毕业证)卡尔顿大学毕业证成绩单
一比一原版(CU毕业证)卡尔顿大学毕业证成绩单
 
Opendatabay - Open Data Marketplace.pptx
Opendatabay - Open Data Marketplace.pptxOpendatabay - Open Data Marketplace.pptx
Opendatabay - Open Data Marketplace.pptx
 
Investigate & Recover / StarCompliance.io / Crypto_Crimes
Investigate & Recover / StarCompliance.io / Crypto_CrimesInvestigate & Recover / StarCompliance.io / Crypto_Crimes
Investigate & Recover / StarCompliance.io / Crypto_Crimes
 
FP Growth Algorithm and its Applications
FP Growth Algorithm and its ApplicationsFP Growth Algorithm and its Applications
FP Growth Algorithm and its Applications
 
一比一原版(CBU毕业证)不列颠海角大学毕业证成绩单
一比一原版(CBU毕业证)不列颠海角大学毕业证成绩单一比一原版(CBU毕业证)不列颠海角大学毕业证成绩单
一比一原版(CBU毕业证)不列颠海角大学毕业证成绩单
 
一比一原版(QU毕业证)皇后大学毕业证成绩单
一比一原版(QU毕业证)皇后大学毕业证成绩单一比一原版(QU毕业证)皇后大学毕业证成绩单
一比一原版(QU毕业证)皇后大学毕业证成绩单
 
Tabula.io Cheatsheet: automate your data workflows
Tabula.io Cheatsheet: automate your data workflowsTabula.io Cheatsheet: automate your data workflows
Tabula.io Cheatsheet: automate your data workflows
 
Innovative Methods in Media and Communication Research by Sebastian Kubitschk...
Innovative Methods in Media and Communication Research by Sebastian Kubitschk...Innovative Methods in Media and Communication Research by Sebastian Kubitschk...
Innovative Methods in Media and Communication Research by Sebastian Kubitschk...
 
Jpolillo Amazon PPC - Bid Optimization Sample
Jpolillo Amazon PPC - Bid Optimization SampleJpolillo Amazon PPC - Bid Optimization Sample
Jpolillo Amazon PPC - Bid Optimization Sample
 
【社内勉強会資料_Octo: An Open-Source Generalist Robot Policy】
【社内勉強会資料_Octo: An Open-Source Generalist Robot Policy】【社内勉強会資料_Octo: An Open-Source Generalist Robot Policy】
【社内勉強会資料_Octo: An Open-Source Generalist Robot Policy】
 

Deep Dive into the New Features of Apache Spark 3.0

  • 1. Deep Dive into the New Features of Upcoming Apache Spark 3.0 Xiao Li gatorsmile June 2020 Wenchen Fan cloud-fan
  • 2. • Open Source Team at • Apache Spark Committer and PMC About Us Xiao Li (Github: gatorsmile) Wenchen Fan (Github: cloud-fan)
  • 3. Unified data analytics platform for accelerating innovation across data science, data engineering, and business analytics Original creators of popular data and machine learning open source projects Global company with 5,000 customers and 450+ partners
  • 5. Performance: Adaptive Query Execution, Dynamic Partition Pruning, Query Compilation Speedup, Join Hints. Richer APIs: Accelerator-aware Scheduler, Built-in Functions, pandas UDF Enhancements, DELETE/UPDATE/MERGE in Catalyst. SQL Compatibility: Reserved Keywords, Proleptic Gregorian Calendar, ANSI Store Assignment, Overflow Checking. Built-in Data Sources: Parquet/ORC Nested Column Pruning, Parquet Nested Column Filter Pushdown, CSV Filter Pushdown, New Binary Data Source. Extensibility and Ecosystem: Data Source V2 API + Catalog Support, Java 11 Support, Hadoop 3 Support, Hive 3.x Metastore, Hive 2.3 Execution. Monitoring and Debuggability: Structured Streaming UI, DDL/DML Enhancements, Observable Metrics, Event Log Rollover.
  • 6. Performance Achieve high performance for interactive, batch, streaming and ML workloads Adaptive Query Execution Dynamic Partition Pruning Query Compilation Speedup Join Hints
  • 7. Performance Achieve high performance for interactive, batch, streaming and ML workloads Adaptive Query Execution Dynamic Partition Pruning Join Hints Query Compilation Speedup
  • 8. Spark Catalyst Optimizer Spark 1.x, Rule Spark 2.x, Rule + Cost
  • 9. Query Optimization in Spark 2.x ▪ Missing statistics Expensive statistics collection ▪ Out-of-date statistics Compute and storage separated ▪ Suboptimal Heuristics Local ▪ Misestimated costs Complex environments User-defined functions
  • 10. Spark Catalyst Optimizer Spark 1.x, Rule Spark 2.x, Rule + Cost Spark 3.0, Rule + Cost + Runtime
  • 11. Adaptive Query Execution [AQE]: adaptive planning that re-optimizes the execution plan of the remaining queries based on statistics of the finished plan nodes.
  • 12. Blog post: https://databricks.com/blog/2020/05/29/adaptive- query-execution-speeding-up-spark-sql-at-runtime.html Based on statistics of the finished plan nodes, re-optimize the execution plan of the remaining queries • Convert Sort Merge Join to Broadcast Hash Join • Shrink the number of reducers • Handle skew join Adaptive Query Execution
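The AQE features listed above are controlled by session configuration flags. A minimal configuration sketch for a Spark 3.0 session (the flag names below are the Spark 3.0 AQE settings; adjust values to your workload):

```python
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("aqe-demo")
    # Master switch for Adaptive Query Execution (off by default in 3.0)
    .config("spark.sql.adaptive.enabled", "true")
    # Coalesce small shuffle partitions after each query stage
    .config("spark.sql.adaptive.coalescePartitions.enabled", "true")
    # Split skewed partitions in sort-merge joins
    .config("spark.sql.adaptive.skewJoin.enabled", "true")
    .getOrCreate()
)
```

With these flags set, the sort-merge-to-broadcast conversion, partition coalescing, and skew-join handling described in the following slides all happen at runtime, with no query changes.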
  • 13. One of the Most Popular Performance Tuning Tips ▪ Choose Broadcast Hash Join? ▪ Increase “spark.sql.autoBroadcastJoinThreshold”? ▪ Use “broadcast” hint? However ▪ Hard to tune ▪ Hard to maintain over time ▪ OOM…
  • 14. Why Is Spark Not Making the Best Choice Automatically? ▪ Inaccurate/missing statistics; ▪ Compressed files; columnar storage; ▪ Complex filters; black-box UDFs; ▪ Complex query fragments…
  • 15. Estimate size: 30 MB Actual size: 8 MB Convert Sort Merge Join to Broadcast Hash Join Sort Merge Join Filter Scan Shuffle Sort Scan Shuffle Sort Stage 1 Stage 2 Estimate size: 100 MB Execute Sort Merge Join Filter Scan Shuffle Sort Scan Shuffle Sort Stage 1 Stage 2 Actual size: 86 MB Optimize Broadcast Hash Join Filter Scan Shuffle Broadcast Scan Shuffle Stage 1 Stage 2 Actual size: 86 MB Actual size: 8 MB
  • 16. One More Popular Performance Tuning Tip ▪ Tuning spark.sql.shuffle.partitions ▪ Default magic number: 200 !?! However ▪ Too small: GC pressure; disk spilling ▪ Too large: Inefficient I/O; scheduler pressure ▪ Hard to tune over the whole query plan ▪ Hard to maintain over time
  • 17. Dynamically Coalesce Shuffle Partitions Filter Scan Execute Shuffle (50 part.) Sort Stage 1 Optimize Filter Scan Shuffle (50 part.) Sort Stage 1 Filter Scan Shuffle (50 part.) Sort Stage 1 Coalesce (5 part.) Set the initial partition number high to accommodate the largest data size of the entire query execution. Automatically coalesce partitions if needed after each query stage.
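The coalescing rule can be illustrated with a stdlib-only sketch (an illustration, not Spark's implementation): greedily merge adjacent shuffle partitions until each coalesced partition reaches a target size, mirroring what AQE does once a stage's actual partition sizes are known.

```python
def coalesce_partitions(sizes, target):
    """Greedily merge adjacent shuffle partitions until each coalesced
    group is at least `target` bytes/MB (illustrative only)."""
    groups, current, current_size = [], [], 0
    for size in sizes:
        current.append(size)
        current_size += size
        if current_size >= target:
            groups.append(current)
            current, current_size = [], 0
    if current:                      # leftover small tail
        groups.append(current)
    return groups

# Seven shuffle-partition sizes in MB, coalesced toward 64 MB each:
groups = coalesce_partitions([12, 8, 30, 20, 70, 5, 9], 64)
print(groups)  # [[12, 8, 30, 20], [70], [5, 9]]
```

Seven tiny partitions become three reasonably sized ones, which is exactly the "50 part. → Coalesce (5 part.)" transformation on the slide, in miniature.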
  • 18. Another Popular Performance Tuning Tip ▪ Symptoms of data skew ▪ Frozen/long-running tasks ▪ Disk spilling ▪ Low resource utilization in most nodes ▪ OOM ▪ Various ways ▪ Find the skew values and rewrite the queries ▪ Adding extra skew keys…
  • 19. TABLE A Table A - Part 0 Table A - Part 1 Table B - Part 0 TABLE B Data Skew in Sort Merge Join Shuffle Sort Table B - Part 1 Table A - Part 2 Table B - Part 2 Table A - Part 3 Table B - Part 3
  • 20. Table A - Part 0 Table A - Part 1 Table B - Part 0 Data Skew in Sort Merge Join Sort Merge-Join Table B - Part 1 Table A - Part 2 Table B - Part 2 Table A - Part 3 Table B - Part 3 Table A – Sorted Part 0 Table B – Sorted Part 0 Table B – Sorted Part 1 Table B – Sorted Part 2 Table B – Sorted Part 3 Table A – Sorted Part 1 Table A – Sorted Part 2 Table A – Sorted Part 3 Merge-Join Merge-Join Merge-Join
  • 21. Dynamically Optimize Skew Joins Sort Merge Join Filter Scan Execute Shuffle Sort Scan Shuffle Sort Sort Merge Join Filter Scan Shuffle Sort Scan Shuffle Sort Stage 1 Stage 2 Stage 1 Stage 2 Optimize Sort Merge Join Filter Scan Shuffle Sort Scan Shuffle Sort Stage 1 Stage 2 Skew Reader Skew Reader • Detect skew from partition sizes using runtime statistics • Split skew partitions into smaller sub-partitions
  • 22. TABLE A Table A - Part 1 Table B - Part 0 TABLE B Shuffle Sort Table B - Part 1 Table A - Part 2 Table B - Part 2 Table A - Part 3 Table B - Part 3 Table B - Part 0 Table B - Part 0 Table A - Part 0 – Split 0 Table A - Part 0 – Split 1 Table A - Part 0 – Split 2 Dynamically Optimize Skew Joins
  • 23. Table A - Part 1 Table B - Part 0 Sort Table B - Part 1 Table A - Part 2 Table B - Part 2 Table A - Part 3 Table B - Part 3 Table B - Part 0 Table B - Part 0 Table A - Part 0 – Split 0 Table A - Part 0 – Split 1 Table A - Part 0 – Split 2 Table A - Part 1 [Sorted] TabB.P0.S1 [Sorted] Table B - Part 1 [Sorted] Table A - Part 2 [Sorted] Table B - Part 2 [Sorted] Table A - Part 3 [Sorted] Table B - Part 3 [Sorted] TabB.P0.S1 [Sorted] TabB.P0.S0 [Sorted] TabA.P0.S0 [Sorted] TabA.P0.S1 [Sorted] TabA.P0.S2 [Sorted] Merge-Join Merge-Join Merge-Join Merge-Join Merge-Join Merge-Join
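The skew handling in slides 21–23 can be sketched in plain Python (an illustration, not Spark's code): any partition on one side whose size exceeds a threshold is split into sub-partitions, and each sub-partition is joined against a copy of the matching partition on the other side.

```python
def plan_skew_join(a_sizes, b_sizes, threshold):
    """Return (A-partition-or-split, B-partition) join task pairs,
    splitting any A partition larger than `threshold` (illustrative only)."""
    tasks = []
    for i, a_size in enumerate(a_sizes):
        if a_size > threshold:
            # Split the skewed partition into roughly threshold-sized chunks;
            # the matching B partition is read once per chunk.
            n_splits = -(-a_size // threshold)   # ceiling division
            for s in range(n_splits):
                tasks.append((f"A{i}.split{s}", f"B{i}"))
        else:
            tasks.append((f"A{i}", f"B{i}"))
    return tasks

# Partition 0 of A (300 MB) is skewed; all others are ~40-50 MB:
tasks = plan_skew_join([300, 40, 50, 45], [40, 42, 38, 41], threshold=100)
print(tasks)
# [('A0.split0', 'B0'), ('A0.split1', 'B0'), ('A0.split2', 'B0'),
#  ('A1', 'B1'), ('A2', 'B2'), ('A3', 'B3')]
```

Instead of one frozen task joining A0 with B0, the skewed partition produces three balanced tasks, matching the "Split skew partitions into smaller sub-partitions" step on slide 21.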
  • 25. Performance Achieve high performance for interactive, batch, streaming and ML workloads Adaptive Query Execution Dynamic Partition Pruning Join Hints Query Compilation Speedup
  • 26. Dynamic Partition Pruning • Avoid partition scanning based on the query results of the other query fragments. • Important for star-schema queries. • Significant speedup in TPC-DS.
  • 27. Dynamic Partition Pruning 60 of the 102 TPC-DS queries get a speedup between 2x and 18x
  • 28. t1: a large fact table with many partitions t2.id < 2 t2: a dimension table with a filter SELECT t1.id, t2.pKey FROM t1 JOIN t2 ON t1.pKey = t2.pKey AND t2.id < 2 t1.pKey = t2.pKey Dynamic Partition Pruning Project Join Filter Scan Scan Optimize
  • 29. SELECT t1.id, t2.pKey FROM t1 JOIN t2 ON t1.pKey = t2.pKey AND t2.id < 2 Dynamic Partition Pruning Scan all the partitions of t1 Filter pushdown t1.pkey IN ( SELECT t2.pKey FROM t2 WHERE t2.id < 2) t2.id < 2 Project Join Filter + Scan Filter Optimize Scan t1.pKey = t2.pKey t1: a large fact table with many partitions t2.id < 2 t2: a dimension table with a filter t1.pKey = t2.pKey Project Join Filter Scan Scan Optimize
  • 30. Dynamic Partition Pruning Scan all the partitions of t1 t2.id < 2 Project Join Filter + Scan Filter Scan t1.pKey = t2.pKey Scan the required partitions of t2 t1.pKey in DPPFilterResult
  • 31. Dynamic Partition Pruning Optimize Scan the required partitions of t1 t2.id < 2 Project Join Filter + Scan Filter + Scan Scan the required partitions of t2 t1.pKey in DPPFilterResult Scan all the partitions of t1 t2.id < 2 Project Join Filter + Scan Filter Scan t1.pKey = t2.pKey Scan the required partitions of t2 t1.pKey in DPPFilterResult
  • 32. Dynamic Partition Pruning 33X faster, 90+% less file scan Optimize Optimize Scan the required partitions of t1 t2.id < 2 Project Join Filter + Scan Scan the required partitions of t2 t1.pKey in DPPFilterResult Scan all the partitions of t1 t2.id < 2 Project Join Filter + Scan Filter Scan t1.pKey = t2.pKey Scan the required partitions of t2 t1.pKey in DPPFilterResult Filter + Scan
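The effect of the rewritten plan can be shown with a stdlib-only sketch (hypothetical tables t1/t2 mirroring the slides): the dimension-side filter runs first and yields the set of surviving partition keys, and the fact table then scans only the partitions whose key is in that set.

```python
# Hypothetical partitioned fact table t1: pKey -> rows (id values)
t1_partitions = {0: [1, 2], 1: [3, 4], 2: [5, 6], 3: [7, 8]}
# Hypothetical dimension table t2: rows of (id, pKey)
t2 = [(0, 1), (1, 3), (5, 0), (9, 2)]

# Step 1: evaluate the dimension filter t2.id < 2 first
dpp_filter = {pkey for (tid, pkey) in t2 if tid < 2}   # -> {1, 3}

# Step 2: scan only fact partitions whose pKey survives the filter,
# i.e. t1.pKey IN (SELECT t2.pKey FROM t2 WHERE t2.id < 2)
scanned = {k: rows for k, rows in t1_partitions.items() if k in dpp_filter}
print(sorted(scanned))       # [1, 3] -- only 2 of 4 partitions scanned
```

Half the fact-table partitions are never read; on real star-schema data with hundreds of partitions, this is where the "90+% less file scan" figure comes from.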
  • 33. Performance Achieve high performance for interactive, batch, streaming and ML workloads Adaptive Query Execution Dynamic Partition Pruning Join Hints Query Compilation Speedup
  • 34. Optimizer Hints ▪ Join hints influence the optimizer to choose the join strategies ▪ Broadcast hash join ▪ Sort-merge join NEW ▪ Shuffle hash join NEW ▪ Shuffle nested loop join NEW ▪ Should be used with extreme caution. ▪ Difficult to manage over time.
  • 35. ▪ Broadcast Hash Join SELECT /*+ BROADCAST(a) */ id FROM a JOIN b ON a.key = b.key ▪ Sort-Merge Join SELECT /*+ MERGE(a, b) */ id FROM a JOIN b ON a.key = b.key ▪ Shuffle Hash Join SELECT /*+ SHUFFLE_HASH(a, b) */ id FROM a JOIN b ON a.key = b.key ▪ Shuffle Nested Loop Join SELECT /*+ SHUFFLE_REPLICATE_NL(a, b) */ id FROM a JOIN b How to Use Join Hints?
  • 36. Broadcast Hash Join Requires one side to be small. No shuffle, no sort, very fast. Sort-Merge Join Robust. Can handle any data size. Needs to shuffle and sort data, slower in most cases when the table size is small. Shuffle Hash Join Needs to shuffle data but no sort. Can handle large tables, but will OOM too if data is skewed. Shuffle Nested Loop Join Doesn’t require join keys.
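The trade-offs above can be condensed into a toy strategy chooser (a sketch of the decision logic, not Spark's actual planner): broadcast when one side is small, fall back to the robust sort-merge join otherwise, and use the nested-loop strategy only when there are no equi-join keys.

```python
def choose_join(left_mb, right_mb, has_join_keys,
                broadcast_threshold_mb=10):
    """Toy join-strategy selection, loosely mirroring the trade-offs
    on slide 36 (illustrative only, not Spark's planner)."""
    if not has_join_keys:
        return "shuffle_replicate_nl"   # no join keys: nested loop
    if min(left_mb, right_mb) <= broadcast_threshold_mb:
        return "broadcast_hash"         # small side: no shuffle, no sort
    return "sort_merge"                 # robust default for large tables

print(choose_join(8, 2000, True))      # broadcast_hash
print(choose_join(500, 2000, True))    # sort_merge
print(choose_join(500, 2000, False))   # shuffle_replicate_nl
```

The hints on slide 35 exist precisely because the real decision depends on estimated sizes that can be wrong; a hint overrides the chooser when you know better.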
  • 37. Enable new use cases and simplify the Spark application development Accelerator-aware Scheduler Built-in Functions pandas UDF enhancements DELETE/UPDATE/ MERGE in Catalyst Richer APIs
  • 38. Enable new use cases and simplify the Spark application development Richer APIs Accelerator-aware Scheduler Built-in Functions pandas UDF enhancements DELETE/UPDATE/ MERGE in Catalyst
  • 39. Evolution of Python UDFs in Spark (2013–2020): Python lambda functions for RDDs (V 0.7) → Python UDF for SQL (V 1.2) → session-specific Python UDF (V 2.0) → JAVA UDF in Python API (V 2.1) → new Pandas UDF (V 2.3/2.4) → Python type hints (V 3.0)
  • 40. Scalar Pandas UDF [pandas.Series to pandas.Series]: SPARK 2.3 vs. SPARK 3.0 with Python type hints
  • 41. Grouped Map Pandas Function API [pandas.DataFrame to pandas.DataFrame]: SPARK 2.3 vs. SPARK 3.0 with Python type hints
  • 42. Grouped Aggregate Pandas UDF [pandas.Series to Scalar]: SPARK 2.4 vs. SPARK 3.0 with Python type hints
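In Spark 3.0 the pandas UDF variant is inferred from the function's Python type hints rather than from an explicit PandasUDFType argument. A stdlib-only sketch of that idea (not Spark's implementation; it matches string annotations, so no pandas import is needed):

```python
def classify_pandas_udf(func):
    """Classify a UDF by its type-hint shape, in the spirit of how
    Spark 3.0 replaces PandasUDFType with hints (illustrative only)."""
    ann = dict(func.__annotations__)
    ret = ann.pop("return", None)
    args = list(ann.values())
    if args == ["pd.Series"] and ret == "pd.Series":
        return "SCALAR"          # pandas.Series -> pandas.Series
    if args == ["pd.DataFrame"] and ret == "pd.DataFrame":
        return "GROUPED_MAP"     # pandas.DataFrame -> pandas.DataFrame
    if args == ["pd.Series"] and ret == "float":
        return "GROUPED_AGG"     # pandas.Series -> scalar
    return "UNKNOWN"

# Hypothetical UDFs; annotations are plain strings for this sketch.
def plus_one(s: "pd.Series") -> "pd.Series": ...
def normalize(df: "pd.DataFrame") -> "pd.DataFrame": ...
def mean_udf(s: "pd.Series") -> "float": ...

print(classify_pandas_udf(plus_one))   # SCALAR
print(classify_pandas_udf(normalize))  # GROUPED_MAP
print(classify_pandas_udf(mean_udf))   # GROUPED_AGG
```

The point of the change on slides 40–42: the signature itself now documents which UDF type you are writing, so the redundant enum argument goes away.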
  • 43. New Pandas UDF Types
  • 44. Map Pandas UDF Cogrouped Map Pandas UDF New Pandas Function APIs
  • 45. Enable new use cases and simplify the Spark application development Accelerator-aware Scheduler Built-in Functions pandas UDF enhancements DELETE/UPDATE/ MERGE in Catalyst Richer APIs
  • 46. Accelerator-aware Scheduling ▪ Widely used to accelerate special workloads, e.g., deep learning and signal processing ▪ Supports Standalone, YARN and K8S ▪ Supports GPUs now; FPGAs, TPUs, etc. in the future ▪ Required resources are specified via configs ▪ Application level for now; job/stage/task level will be supported in the future
  • 47. The workflow (User / Spark / Cluster Manager) 0. Auto-discover resources. 1. Submit an application with resource requests. 2. Pass resource requests to the cluster manager. 3. Allocate executors with resource isolation. 4. Register executors. 5. Submit a Spark job. 6. Schedule tasks on available executors. 7. Dynamic allocation. 8. Retrieve assigned resources and use them in tasks. 9. Monitor and recover failed executors.
  • 48. Discover and request accelerators Admin can specify a script to auto-discover accelerators (SPARK-27024) ● spark.driver.resource.${resourceName}.discoveryScript ● spark.executor.resource.${resourceName}.discoveryScript ● e.g., `nvidia-smi --query-gpu=index ...` User can request accelerators at application level (SPARK-27366) ● spark.executor.resource.${resourceName}.amount ● spark.driver.resource.${resourceName}.amount ● spark.task.resource.${resourceName}.amount
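A discovery script must print a single JSON object in Spark's ResourceInformation format: a resource name plus the list of discovered addresses. Below is a minimal sketch (Spark ships a more thorough example script for GPUs; the exact nvidia-smi parsing here is an assumption):

```python
#!/usr/bin/env python3
# Sketch of a GPU discovery script, pointed to by
# spark.{driver,executor}.resource.gpu.discoveryScript.
# Spark expects one JSON object on stdout, e.g.
#   {"name": "gpu", "addresses": ["0", "1"]}
import json
import subprocess

def discover_gpus() -> str:
    try:
        # Ask nvidia-smi for the GPU indices, one per line.
        out = subprocess.run(
            ["nvidia-smi", "--query-gpu=index", "--format=csv,noheader"],
            capture_output=True, text=True, check=True).stdout
        addresses = [line.strip() for line in out.splitlines() if line.strip()]
    except (OSError, subprocess.CalledProcessError):
        addresses = []  # no GPUs (or no nvidia-smi) on this host
    return json.dumps({"name": "gpu", "addresses": addresses})

if __name__ == "__main__":
    print(discover_gpus())
```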
  • 49. Retrieve assigned accelerators User can retrieve assigned accelerators from the task context (SPARK-27366) context = TaskContext.get() assigned_gpu = context.resources()["gpu"].addresses[0] with tf.device(assigned_gpu): # training code ...
  • 51. Web UI for accelerators
  • 53. 32 New Built-in Functions
  • 57. Make monitoring and debugging Spark applications more comprehensive and stable Structured Streaming UI DDL/DML Enhancements Observable Metrics Event Log Rollover Monitoring and Debuggability
  • 61. New Command EXPLAIN FORMATTED *(1) Project [key#5, val#6] +- *(1) Filter (isnotnull(key#5) AND (key#5 = Subquery scalar-subquery#15, [id=#113])) : +- Subquery scalar-subquery#15, [id=#113] : +- *(2) HashAggregate(keys=[], functions=[max(key#21)]) : +- Exchange SinglePartition, true, [id=#109] : +- *(1) HashAggregate(keys=[], functions=[partial_max(key#21)]) : +- *(1) Project [key#21] : +- *(1) Filter (isnotnull(val#22) AND (val#22 > 5)) : +- *(1) ColumnarToRow : +- FileScan parquet default.tab2[key#21,val#22] Batched: true, DataFilters: [isnotnull(val#22), (val#22 > 5)], Format: Parquet, Location: InMemoryFileIndex[file:/user/hive/warehouse/tab2], PartitionFilters: [], PushedFilters: [IsNotNull(val), GreaterThan(val,5)], ReadSchema: struct<key:int,val:int> +- *(1) ColumnarToRow +- FileScan parquet default.tab1[key#5,val#6] Batched: true, DataFilters: [isnotnull(key#5)], Format: Parquet, Location: InMemoryFileIndex[file:/user/hive/warehouse/tab1], PartitionFilters: [], PushedFilters: [IsNotNull(key)], ReadSchema: struct<key:int,val:int>
  • 62. * Project (4) +- * Filter (3) +- * ColumnarToRow (2) +- Scan parquet default.tab1 (1) (1) Scan parquet default.tab1 Output [2]: [key#5, val#6] Batched: true Location: InMemoryFileIndex [file:/user/hive/warehouse/tab1] PushedFilters: [IsNotNull(key)] ReadSchema: struct<key:int,val:int> (2) ColumnarToRow [codegen id : 1] Input [2]: [key#5, val#6] (3) Filter [codegen id : 1] Input [2]: [key#5, val#6] Condition : (isnotnull(key#5) AND (key#5 = Subquery scalar-subquery#27, [id=#164])) (4) Project [codegen id : 1] Output [2]: [key#5, val#6] Input [2]: [key#5, val#6] EXPLAIN FORMATTED SELECT * FROM tab1 WHERE key = (SELECT max(key) FROM tab2 WHERE val > 5)
  • 63. (5) Scan parquet default.tab2 Output [2]: [key#21, val#22] Batched: true Location: InMemoryFileIndex [file:/user/hive/warehouse/tab2] PushedFilters: [IsNotNull(val), GreaterThan(val,5)] ReadSchema: struct<key:int,val:int> (6) ColumnarToRow [codegen id : 1] Input [2]: [key#21, val#22] (7) Filter [codegen id : 1] Input [2]: [key#21, val#22] Condition : (isnotnull(val#22) AND (val#22 > 5)) ===== Subqueries ===== Subquery:1 Hosting operator id = 3 Hosting Expression = Subquery scalar-subquery#27, [id=#164] * HashAggregate (11) +- Exchange (10) +- * HashAggregate (9) +- * Project (8) +- * Filter (7) +- * ColumnarToRow (6) +- Scan parquet default.tab2 (5) (8) Project [codegen id : 1] Output [1]: [key#21] Input [2]: [key#21, val#22] (9) HashAggregate [codegen id : 1] Input [1]: [key#21] Keys: [] Functions [1]: [partial_max(key#21)] Aggregate Attributes [1]: [max#35] Results [1]: [max#36] (10) Exchange Input [1]: [max#36] Arguments: SinglePartition, true, [id=#160] (11) HashAggregate [codegen id : 2] Input [1]: [max#36] Keys: [] Functions [1]: [max(key#21)] Aggregate Attributes [1]: [max(key#21)#33] Results [1]: [max(key#21)#33 AS max(key)#34]
  • 65. A flexible way to monitor data quality. Observable Metrics
  • 66. Reduce the time and complexity of enabling applications that were written for other relational database products to run in Spark SQL. Reserved Keywords in Parser Proleptic Gregorian Calendar ANSI Store Assignment Overflow Checking SQL Compatibility
  • 68. A safer way to do table insertion and avoid bad data. ANSI store assignment + overflow check
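The behavioral difference can be sketched in pure Python (this simulates the semantics, it is not Spark code; the relevant Spark 3.0 configs are spark.sql.storeAssignmentPolicy, whose default is ANSI, and spark.sql.ansi.enabled for arithmetic overflow checking):

```python
INT_MIN, INT_MAX = -2**31, 2**31 - 1

def store_as_int(value: int, ansi: bool) -> int:
    # Sketch of inserting a bigint value into an int column.
    # ANSI store assignment fails at runtime on overflow;
    # the legacy behavior silently wraps the low 32 bits.
    if INT_MIN <= value <= INT_MAX:
        return value
    if ansi:
        raise ArithmeticError(f"Casting {value} to int causes overflow")
    # legacy: two's-complement wrap-around
    wrapped = value & 0xFFFFFFFF
    return wrapped - 2**32 if wrapped > INT_MAX else wrapped
```

With ANSI mode on, the bad row fails loudly instead of silently writing a wrapped value into the table.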
  • 70. Enhance the performance and functionalities of the built-in data sources Parquet/ORC Nested Column Pruning Parquet: Nested Column Filter Pushdown New Binary Data Source CSV Filter Pushdown Built-in Data Sources
  • 72. ▪ Skip reading useless data blocks when only a few inner fields are selected. Better performance for nested fields
  • 73. ▪ Skip reading useless data blocks when there are predicates on nested fields. Better performance for nested fields
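Both optimizations are gated by configs; the names below are those I believe ship in Spark 3.0 (treat them as a sketch and check the migration guide for your version):

```properties
# Prune nested (struct) fields that are never selected
spark.sql.optimizer.nestedSchemaPruning.enabled=true
# Push down filters on nested fields to Parquet
spark.sql.optimizer.nestedPredicatePushdown.enabled=true
```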
  • 74. Improve the plug-in interface and extend the deployment environments Data Source V2 API + Catalog Support Hive 3.x Metastore Hive 2.3 Execution Hadoop 3 Support Java 11 Support Extensibility and Ecosystem
  • 76. Catalog plugin API Users can register customized catalogs and use Spark to access/manipulate table metadata directly. JDBC data source v2 is coming in Spark 3.1
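Registering a custom catalog is done purely through configuration; the catalog name `my_catalog`, the class name, and the option key below are hypothetical placeholders (the class must implement Spark's CatalogPlugin interface):

```properties
# Register a custom catalog named "my_catalog"
spark.sql.catalog.my_catalog=com.example.MyCatalogPlugin
# Options are passed through to the plugin
spark.sql.catalog.my_catalog.option1=value1
```

Once registered, tables in the catalog are addressable with three-part names, e.g. `SELECT * FROM my_catalog.db.table`.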
  • 77. To developers: when to use Data Source V2? ▪ Pick V2 if you want to provide catalog functionality, as V1 doesn’t have such ability. ▪ Pick V2 if you want to support both batch and streaming, as V1 uses separate APIs for batch and streaming, which makes it hard to reuse code. ▪ Pick V2 if you are sensitive to scan performance, as V2 allows you to report data partitioning to skip shuffles and to implement vectorized readers for better performance. ▪ Caveat: the Data Source V2 API is not yet as stable as V1!
  • 79. Spark 3.0 Builds • Only builds with Scala 2.12 • Deprecates Python 2 (already EOL) • Can build with various Hadoop/Hive versions – Hadoop 2.7 + Hive 1.2 – Hadoop 2.7 + Hive 2.3 (supports Java 11) [Default] – Hadoop 3.2 + Hive 2.3 (supports Java 11) • Supports the following Hive metastore versions: – "0.12", "0.13", "0.14", "1.0", "1.1", "1.2", "2.0", "2.1", "2.2", "2.3", "3.0", "3.1"
  • 80. Documentation • Web UI • SQL reference • Migration guide • Semantic versioning guidelines
  • 95. Adaptive Query Execution Dynamic Partition Pruning Query Compilation Speedup Join Hints Performance Richer APIs Accelerator-aware Scheduler Built-in Functions pandas UDF Enhancements DELETE/UPDATE/ MERGE in Catalyst Reserved Keywords Proleptic Gregorian Calendar ANSI Store Assignment Overflow Checking SQL Compatibility Built-in Data Sources Parquet/ORC Nested Column Pruning Parquet: Nested Column Filter Pushdown CSV Filter Pushdown New Binary Data Source Data Source V2 API + Catalog Support Java 11 Support Hadoop 3 Support Hive 3.x Metastore Hive 2.3 Execution Extensibility and Ecosystem Structured Streaming UI DDL/DML Enhancements Observable Metrics Event Log Rollover Monitoring and Debuggability
  • 96. Try Databricks Runtime 7.0 Beta For Free https://databricks.com/try-databricks
  • 97. Thank you for your contributions!