Boosting Spark Performance:
An Overview of Techniques
Ahsan Javed Awan
Motivation
About me
● Erasmus Mundus Joint Doctoral Fellow at KTH Sweden and UPC Spain.
● Visiting Researcher at Barcelona Supercomputing Center.
● Speaker at Spark Summit Europe 2016.
● Author of the Licentiate Thesis “Performance Characterization of In-Memory Data Analytics with Apache Spark”.
● https://www.kth.se/profile/ajawan/
Motivation
Why should you listen?
● What's new in Apache Spark 2.0
● Phase 1: Memory Management and Cache-aware algorithms
● Phase 2: Whole-stage Codegen and Columnar In-Memory Support
● How to get better performance by:
● Choosing and tuning the GC
● Using multiple executors, each with a heap size of at most 32 GB
● Exploiting data locality on DRAM nodes
● Turning off hardware prefetchers
● Keeping Hyper-Threading on
Motivation
Apache Spark Philosophy?
Motivation
Cont...
[Figure: cloud-scaling schemes, contrasting scale-up frameworks (Phoenix++, Metis, Ostrich, etc.) with scale-out frameworks (Hadoop, Spark, Flink, etc.)]
*Source: http://navcode.info/2012/12/24/cloud-scaling-schemes/
Motivation
Cont..
*Source: SGI
● Exponential increase in core count.
● A mismatch between the characteristics of emerging big data workloads and the underlying hardware.
● Newer promising technologies (Hybrid Memory Cubes, NVRAM, etc.).
● Clearing the Clouds, ASPLOS '12
● Characterizing Data Analysis Workloads, IISWC '13
● Understanding the Behavior of In-Memory Computing Workloads, IISWC '14
Substantially improve the memory and CPU efficiency of Spark
backend execution and push performance closer to the limits of
modern hardware.
Goals of Project Tungsten
Phase 1 (Foundation): Memory Management, Code Generation, Cache-aware Algorithms
Phase 2 (Order-of-magnitude Faster): Whole-stage Codegen, Vectorization
Cont..
Perform explicit memory management instead of relying on Java objects
• Reduce memory footprint
• Eliminate garbage collection overheads
• Use sun.misc.Unsafe-based rows and off-heap memory
Code generation for expression evaluation
• Reduce virtual function calls and interpretation overhead
Cache-conscious sorting
• Reduce bad memory access patterns
Summary of Phase I
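As a minimal sketch of the explicit memory management idea above (this is not Spark's actual UnsafeRow code; the record layout and values are invented for illustration):

import sun.misc.Unsafe

// Grab the Unsafe instance reflectively (its constructor is private).
val f = classOf[Unsafe].getDeclaredField("theUnsafe")
f.setAccessible(true)
val unsafe = f.get(null).asInstanceOf[Unsafe]

// One off-heap record (id: Long, score: Double) = 16 bytes:
// no Java object headers, and invisible to the garbage collector.
val addr = unsafe.allocateMemory(16)
unsafe.putLong(addr, 42L)        // id at offset 0
unsafe.putDouble(addr + 8, 4.1)  // score at offset 8
println(unsafe.getLong(addr) + " " + unsafe.getDouble(addr + 8))
unsafe.freeMemory(addr)          // memory lifetime is managed manually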
Which Benchmarks?
Our Hardware Configuration
Which Machine?
Intel's Ivy Bridge Server
Performance of Cache-aware Algorithms?
DataFrames exhibit 25% fewer back-end bound stalls, 64% fewer DRAM-bound stalled cycles, 25% less bandwidth consumption, and 10% less starvation of execution resources.
It is difficult to get order-of-magnitude performance speedups with profiling techniques:
• For a 10x improvement, you would need to find top hotspots that add up to 90% of the runtime and make them instantaneous
• For 100x, 99%
Instead, look bottom-up: how fast should it run?
Phase 2
Example query plan, Scan → Filter → Project → Aggregate, for:
select count(*) from store_sales where ss_item_sk = 1000
Cont..
The standard for 30 years: almost all databases do it. Each operator is an “iterator” that consumes records from its input operator.
class Filter(
    child: Operator,
    predicate: (Row => Boolean))
  extends Operator {
  def next(): Row = {
    // Pull from the child until a row satisfies the predicate
    // (null signals that the input is exhausted).
    var current = child.next()
    while (current != null && !predicate(current)) {
      current = child.next()
    }
    current
  }
}
Volcano Iterator Model
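For completeness, here is a minimal sketch of the pieces the Filter above assumes: a Row stand-in, a Scan operator over an in-memory array, and a pull-based driver loop. All of these names are invented for illustration; they are not Spark's classes.

case class Row(ssItemSk: Long)

trait Operator {
  def next(): Row // null signals end of input
}

class Scan(rows: Array[Row]) extends Operator {
  private var i = 0
  def next(): Row =
    if (i < rows.length) { val r = rows(i); i += 1; r } else null
}

// Pull-based driver: count the rows where ss_item_sk == 1000.
val filter = new Filter(new Scan(Array(Row(1), Row(1000), Row(7), Row(1000))),
                        (r: Row) => r.ssItemSk == 1000)
var count = 0L
var row = filter.next()
while (row != null) { count += 1; row = filter.next() }
println(count) // 2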
select count(*) from store_sales
where ss_item_sk = 1000
long count = 0;
for (ss_item_sk in store_sales) {
  if (ss_item_sk == 1000) {
    count += 1;
  }
}
Hand Written Code
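The same loop as runnable Scala, with storeSales standing in for the single in-memory column (hypothetical data):

val storeSales: Array[Long] = Array(1L, 1000L, 7L, 1000L)
var count = 0L
var i = 0
while (i < storeSales.length) { // tight loop: no virtual calls, data stays in registers
  if (storeSales(i) == 1000L) count += 1
  i += 1
}
println(count) // 2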
Volcano model: 13.95 million rows/sec
Hand-written code (by a "college freshman"): 125 million rows/sec, roughly 9x the throughput
Note: end-to-end, single thread, single column, and data originated in Parquet on disk.
Volcano Model vs Hand-Written Code
Volcano Model
1. Too many virtual function calls
2. Intermediate data in memory (or L1/L2/L3 cache)
3. Can't take advantage of modern CPU features: no loop unrolling, SIMD, pipelining, prefetching, branch prediction, etc.
Hand-written code
1. No virtual function calls
2. Data in CPU registers
3. Compiler loop unrolling, SIMD, pipelining
Volcano vs Hand-Written Code
Fuse operators together so the generated code looks like hand-optimized code:
- Identify chains of operators (“stages”)
- Compile each stage into a single function
- Get the functionality of a general-purpose execution engine with the performance of a system hand-built just to run your query
Whole-Stage Codegen
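Spark 2.0+ can show you the result; a sketch of inspecting the fused, generated code for the running example (assumes a SparkSession named spark and a registered store_sales table):

import org.apache.spark.sql.execution.debug._

val q = spark.sql("select count(*) from store_sales where ss_item_sk = 1000")
q.explain()       // fused stages appear as WholeStageCodegen nodes, marked with *
q.debugCodegen()  // dumps the generated Java source for each fused stage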
In-memory row format: (1, john, 4.1), (2, mike, 3.5), (3, sally, 6.4)
In-memory column format: (1, 2, 3), (john, mike, sally), (4.1, 3.5, 6.4)
Columnar In-Memory
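A minimal sketch of the two layouts in code (types and data invented for illustration):

// Row layout: one object per record; the values of a column are scattered.
case class Record(id: Int, name: String, score: Double)
val rows = Seq(Record(1, "john", 4.1), Record(2, "mike", 3.5), Record(3, "sally", 6.4))

// Column layout: one dense array per column; scanning a single column
// touches contiguous memory, which caches and vectorizes well.
val ids    = Array(1, 2, 3)
val names  = Array("john", "mike", "sally")
val scores = Array(4.1, 3.5, 6.4)
val avg    = scores.sum / scores.length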
1. More efficient: denser storage, regular data access, easier to index into
2. More compatible: most high-performance external systems are already columnar (NumPy, TensorFlow, Parquet); zero serialization/copy to work with them
3. Easier to extend: can process encoded data directly
Why Columnar?
Parquet reader: 11 million rows/sec
Vectorized Parquet reader: 90 million rows/sec, roughly 8x the throughput
Note: end-to-end, single thread, single column, and data originated in Parquet on disk.
Phase 1 (Spark 1.4 - 1.6): Memory Management, Code Generation, Cache-aware Algorithms
Phase 2 (Spark 2.0+): Whole-stage Code Generation, Columnar In-Memory Support
Both whole-stage codegen [SPARK-12795] and the vectorized Parquet reader [SPARK-12992] are enabled by default in Spark 2.0+.
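Both features can also be toggled through SQL conf, for example when isolating a regression (assumes a SparkSession named spark):

spark.conf.set("spark.sql.codegen.wholeStage", "true")             // on by default in 2.0+
spark.conf.set("spark.sql.parquet.enableVectorizedReader", "true") // on by default in 2.0+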
Operator Benchmarks (Cost/Row, ns): 5-30x speedups.
1. SPARK-16026: Cost-Based Optimizer
- Leverage table/column-level statistics to optimize joins and aggregates
- Statistics Collection Framework (Spark 2.1)
- Cost-Based Optimizer (Spark 2.2)
2. Boosting Spark's Performance on Many-Core Machines
- Qifan's talk today at 2:55pm (Research Track)
- In-memory, single-node shuffle
3. Improving the quality of generated code and better integration with the in-memory column format in Spark
Spark 2.1, 2.2 and beyond
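The statistics the cost-based optimizer leverages are collected explicitly; a sketch of the relevant commands, reusing the table and column from the running example:

spark.sql("ANALYZE TABLE store_sales COMPUTE STATISTICS")                        // table-level stats
spark.sql("ANALYZE TABLE store_sales COMPUTE STATISTICS FOR COLUMNS ss_item_sk") // column-level stats
spark.conf.set("spark.sql.cbo.enabled", "true")                                  // Spark 2.2+, off by default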
Motivation
The choice of garbage collector impacts the data processing capability of the system.
The improvement in DPS ranges from 1.4x to 3.7x on average with Parallel Scavenge as compared to G1.
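A sketch of applying the collector choice to Spark executors (the conf keys and the HotSpot flag are standard; the heap size is illustrative):

val conf = new org.apache.spark.SparkConf()
  .set("spark.executor.extraJavaOptions", "-XX:+UseParallelGC") // Parallel Scavenge
  .set("spark.executor.memory", "28g")                          // keep each heap under 32 GB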
Our Approach
Multiple small executors instead of a single large executor
Multiple small executors can provide up to a 36% performance gain.
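A sketch of the idea on a single scale-up node running YARN (all numbers are illustrative, not the exact evaluated configuration):

val conf = new org.apache.spark.SparkConf()
  .set("spark.executor.instances", "4") // several small executors...
  .set("spark.executor.cores", "6")     // ...instead of one large one
  .set("spark.executor.memory", "24g")  // each heap stays well under 32 GB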
Our Approach
NUMA Awareness
NUMA awareness results in a 10% speedup on average.
Our Approach
Hyper-Threading is effective
Hyper-Threading reduces DRAM-bound stalls by 50%.
Our Approach
Disable next-line prefetchers
Disabling next-line prefetchers can improve performance by 15%.
Further Reading
● Performance Characterization of In-Memory Data Analytics on a Modern Cloud Server, in 5th IEEE Conference on Big Data and Cloud Computing, 2015 (Best Paper Award).
● How Data Volume Affects Spark Based Data Analytics on a Scale-up Server, in 6th Workshop on Big Data Benchmarks, Performance Optimization and Emerging Hardware (BPOE), held in conjunction with VLDB 2015, Hawaii, USA.
● Micro-architectural Characterization of Apache Spark on Batch and Stream Processing Workloads, in 6th IEEE Conference on Big Data and Cloud Computing, 2016.
● Node Architecture Implications for In-Memory Data Analytics in Scale-in Clusters, in 3rd IEEE/ACM Conference on Big Data Computing, Applications and Technologies, 2016.
● Implications of In-Memory Data Analytics with Apache Spark on Near Data Computing Architectures (under submission).
THANK YOU.
Acknowledgements:
Sameer Agarwal for the Project Tungsten slides.
