Kazuaki Ishizaki
IBM Research – Tokyo
@kiszk
Demystifying
DataFrame and Dataset
#SFdev20
About Me – Kazuaki Ishizaki
• Researcher at IBM Research in compiler optimizations
• Working on the IBM Java virtual machine for over 20 years
– In particular, just-in-time compiler
• Contributor to SQL package in Apache Spark
• Committer of GPUEnabler
– An Apache Spark plug-in for executing GPU code on Spark
• https://github.com/IBMSparkGPU/GPUEnabler
• Homepage: http://ibm.biz/ishizaki
• Github: https://github.com/kiszk, Twitter: @kiszk
Spark 2.2 makes Dataset Faster
Compared to Spark 2.0 (also 2.1)
• 1.6x faster for map() with scalar variable
• 4.5x faster for map() with primitive array
ds = (1 to ...).toDS.cache
ds.map(value => value + 1)
ds = Seq(Array(…), Array(…), …).toDS.cache
ds.map(array => Array(array(0)))
These speedups come from enhancements in the Catalyst optimizer and code generation
Why Does It Matter?
• Application programmers
– ML pipelines will become faster
• Library developers
– You will want to use Dataset instead of RDD
• SQL package developers
– You will learn the details of how Dataset works
DataFrame, Dataset, and RDD
• DataFrame (DF) – based on SQL operations:
df = (1 to 100).toDF.cache
df.selectExpr("value + 1")
• Dataset (DS) – based on lambda expressions:
ds = (1 to 100).toDS.cache
ds.map(value => value + 1)
• RDD – based on lambda expressions:
rdd = sc.parallelize((1 to 100)).cache
rdd.map(value => value + 1)
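As a self-contained reference, here is a minimal runnable version of the three variants above (a sketch of ours: the SparkSession setup and the actions that force execution are assumptions, not part of the slides):

import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().master("local[1]").appName("df-ds-rdd").getOrCreate()
import spark.implicits._
val sc = spark.sparkContext

val df = (1 to 100).toDF("value").cache   // DataFrame: SQL-style operation
df.selectExpr("value + 1").show(3)

val ds = (1 to 100).toDS.cache            // Dataset: typed lambda expression
ds.map(value => value + 1).show(3)

val rdd = sc.parallelize(1 to 100).cache  // RDD: lambda expression, no Catalyst
rdd.map(value => value + 1).take(3).foreach(println)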
How DF, DS, and RDD Work
• We expect the same performance on DF and DS
From "Structuring Apache Spark 2.0: SQL, DataFrames, Datasets and Streaming" by Michael Armbrust
[Diagram: DF and DS queries both go through Catalyst, which emits the same generated Java code]
// generated Java code
// for DF and DS
while (itr.hasNext()) {
  Row r = (Row)itr.next();
  int v = r.getInt(0);
  …
}
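To see the Java code Catalyst actually generates for a query, the debug helpers in Spark 2.x can print it; a minimal sketch (assuming the spark session and implicits from the earlier snippet are in scope):

import org.apache.spark.sql.execution.debug._   // adds debugCodegen() to DataFrame/Dataset

(1 to 100).toDF("value").selectExpr("value + 1").debugCodegen()  // prints whole-stage generated code
(1 to 100).toDS.map(value => value + 1).debugCodegen()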
Dataset for Int Value is Slow on Spark 2.0
• 1.7x slower for an addition to a scalar int value
df.selectExpr("value + 1")
ds.map(value => value + 1)
[Chart: relative execution time over DataFrame, shorter is better; Dataset is 1.7x slower than DataFrame]
All experiments are done using a 16-core Intel Xeon E5-2667 v3 3.2GHz machine with 384GB memory, Ubuntu 16.04, OpenJDK 1.8.0u212-b13 with a 256GB heap, and Spark master=local[1]; Spark branch-2.0 and branch-2.2; ds/df/rdd are cached
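The deck does not show the benchmark harness itself; the following is a hypothetical sketch of how such a comparison could be reproduced (the time helper, the data size, and the use of count() to force execution are all our assumptions):

def time[A](label: String)(body: => A): A = {
  val start = System.nanoTime()
  val result = body
  println(f"$label: ${(System.nanoTime() - start) / 1e6}%.1f ms")
  result
}

val n = 1000000
val df = (1 to n).toDF("value").cache(); df.count()  // materialize the cache first
val ds = (1 to n).toDS.cache();          ds.count()

time("DataFrame") { df.selectExpr("value + 1").count() }
time("Dataset")   { ds.map(value => value + 1).count() }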
Dataset for Int Array is Extremely Slow on Spark 2.0
• 54x slower for creating a new array from a primitive int array
df = Seq(Array(…), Array(…), …).toDF("a").cache
df.selectExpr("Array(a[0])")
ds = Seq(Array(…), Array(…), …).toDS.cache
ds.map(a => Array(a(0)))
[Chart: relative execution time over DataFrame, shorter is better; Dataset is 54x slower than DataFrame]
SPARK-16070 pointed out this performance problem
Outline
• How are DataFrame and Dataset executed?
• Deep dive into two cases
– Why Dataset was slow
– How Spark 2.2 improves performance
• Deep dive into another case
– Why Dataset is still slow
– How future Spark will improve performance
• How is Dataset used in ML pipelines?
• Next Steps
Simple Generated Code for int with DF
• Generated code performs `+` on an int
– Uses a strongly typed int variable
// df: DataFrame for Int …
df.selectExpr("value + 1")
Catalyst generates this Java code:
while (itr.hasNext()) {        // execute a row
  // get a value from a row in DF
  int value = ((Row)itr.next()).getInt(0);
  // compute a new value
  int mapValue = value + 1;
  // store the new value to a row in DF
  outRow.write(0, mapValue);
  append(outRow);
}
Weird Generated Code with DS
• Generated code performs four data conversions (boxing/unboxing the argument and the result)
– Uses the standard API with the generic method apply(Object)
// ds: Dataset[Int] …
ds.map(value => value + 1)
Catalyst generates this Java code:
while (itr.hasNext()) {
  int value = ((Row)itr.next()).getInt(0);
  Object objValue = new Integer(value);              // box the argument
  int mapValue = ((Integer)
    mapFunc.apply(objValue)).intValue();             // unbox the result
  outRow.write(0, mapValue);
  append(outRow);
}
Scalac generates these functions from the lambda expression:
Object apply(Object obj) {
  int value = ((Integer) obj).intValue();            // unbox the argument
  return new Integer(apply$II(value));               // box the result
}
int apply$II(int value) { return value + 1; }
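For intuition, here is a small Scala sketch of the two call paths (illustrative only; note that the specialized method scalac actually emits is named apply$mcII$sp, which the deck abbreviates as apply$II):

val f: Int => Int = value => value + 1
val viaSpecialized: Int = f(1)                       // stays on primitive int
val viaGeneric: Any = f.asInstanceOf[Any => Any](1)  // boxes the argument; the result comes back as a boxed Integer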
Simple Generated Code with DS on Spark 2.2
• SPARK-19008 enables the generated code to use the int value directly
– Uses a non-standard API with the type-specialized method apply$II(int)
// ds: Dataset[Int] …
ds.map(value => value + 1)
Catalyst generates this Java code:
while (itr.hasNext()) {
  int value = ((Row)itr.next()).getInt(0);
  int mapValue = mapFunc.apply$II(value);
  outRow.write(0, mapValue);
  append(outRow);
}
Scalac generates this function from the lambda expression:
int apply$II(int value) { return value + 1; }
Dataset is Not Slow on Spark 2.2
• Improves performance by 1.6x compared to Spark 2.0
– DataFrame is now only 9% faster than Dataset
df.selectExpr("value + 1")
ds.map(value => value + 1)
[Chart: relative execution time over DataFrame, shorter is better; Dataset is 1.6x faster than on Spark 2.0]
RDD/DF/DS Achieve Close Performance on Spark 2.2
• RDD is 5% faster than DataFrame, which is 9% faster than Dataset
rdd.map(value => value + 1)
df.selectExpr("value + 1")
ds.map(value => value + 1)
[Chart: relative execution time over DataFrame, shorter is better; RDD, DataFrame, and Dataset are within roughly 10% of each other]
Simple Generated Code for Array with DF
• All operations access data in the internal data format for arrays, ArrayData (devised in Project Tungsten)
// df: DataFrame for Array(Int) …
df.selectExpr("Array(a[0])")
Catalyst generates this Java code; inArray.getInt(0) and outArray.setInt(0, …) operate directly on the internal format:
ArrayData inArray, outArray;
while (itr.hasNext()) {
  inArray = ((Row)itr.next()).getArray(0);
  int value = inArray.getInt(0);               // a[0]
  outArray = new UnsafeArrayData(); ...
  outArray.setInt(0, value);                   // Array(a[0])
  outArray.writeToMemory(outRow);
  append(outRow);
}
Lambda Expression with DS Requires Java Objects
• Needs serialization/deserialization (Ser/De) between the internal data format and Java objects for the lambda expression
// ds: Dataset[Array(Int)] …
ds.map(a => Array(a(0)))
Catalyst generates this Java code; the lambda body runs as Java bytecode on Java objects, so data is deserialized before it and serialized after it:
ArrayData inArray, outArray;
while (itr.hasNext()) {
  inArray = ((Row)itr.next()).getArray(0);
  // deserialization: internal data format -> Java object
  int[] mapArray = Array(a(0));   // Java bytecode for the lambda expression
  // serialization: Java object -> internal data format
  append(outRow);
}
Extremely Weird Code for Array with DS on Spark 2.0
• Data conversion is too slow (it copies data while creating Java objects)
• Element-wise copy is slow (it copies each element with a null check)
// ds: Dataset[Array(Int)] …
ds.map(a => Array(a(0)))
Catalyst generates this Java code:
ArrayData inArray, outArray;
while (itr.hasNext()) {
  inArray = ((Row)itr.next()).getArray(0);
  // De: element-wise data copy …
  Object[] tmp = new Object[inArray.numElements()];
  for (int i = 0; i < tmp.length; i++) {
    tmp[i] = (inArray.isNullAt(i)) ? null : inArray.getInt(i);
  }
  ArrayData array = new GenericIntArrayData(tmp);
  // … then data conversion
  int[] javaArray = array.toIntArray();
  // the lambda expression: int[] mapArray = Array(a(0));
  int[] mapArray = (int[])mapFunc.apply(javaArray);
  // Ser: data conversion …
  outArray = new GenericArrayData(mapArray);
  // … then element-wise data copy
  for (int i = 0; i < outArray.numElements(); i++) {
    if (outArray.isNullAt(i)) {
      arrayWriter.setNullInt(i);
    } else {
      arrayWriter.write(i, outArray.getInt(i));
    }
  }
  append(outRow);
}
Simple Generated Code for Array on Spark 2.2
• SPARK-15985, SPARK-17490, and SPARK-15962 simplify Ser/De by using bulk data copy
– Data conversion and element-wise copy are no longer used
– Bulk copy, which copies the whole array at once like memcpy(), is faster
// ds: Dataset[Array(Int)] …
ds.map(a => Array(a(0)))
Catalyst generates this Java code:
while (itr.hasNext()) {
  inArray = ((Row)itr.next()).getArray(0);
  int[] javaArray = inArray.toIntArray();        // bulk data copy
  int[] mapArray = (int[])mapFunc.apply(javaArray);
  outArray = UnsafeArrayData
    .fromPrimitiveArray(mapArray);               // bulk data copy
  outArray.writeToMemory(outRow);
  append(outRow);
}
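As a micro-illustration of why the bulk path wins (a generic JVM example of ours, not Spark code): copying a primitive array element by element pays a per-element load, store, and check, while System.arraycopy degenerates to a single memcpy()-like block move:

val src = Array.tabulate(1 << 20)(identity)

// element-wise copy: one load, one check, and one store per element
val dst1 = new Array[Int](src.length)
var i = 0
while (i < src.length) { dst1(i) = src(i); i += 1 }

// bulk copy: a single block move, analogous to toIntArray()/fromPrimitiveArray() above
val dst2 = new Array[Int](src.length)
System.arraycopy(src, 0, dst2, 0, src.length)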
Dataset for Array is No Longer Extremely Slow over DataFrame
• Improves performance by 4.5x compared to Spark 2.0
– DataFrame is still 12x faster than Dataset
df = Seq(Array(…), Array(…), …).toDF("a").cache
df.selectExpr("Array(a[0])")
ds = Seq(Array(…), Array(…), …).toDS.cache
ds.map(a => Array(a(0)))
[Chart: relative execution time over DataFrame, shorter is better; Dataset improved by 4.5x but remains 12x slower than DataFrame]
Would this overhead be negligible if map() were much more computation-intensive?
RDD and DataFrame for Array Achieve Close Performance
• DataFrame is 1.2x faster than RDD
rdd = sc.parallelize(Seq(Array(…), Array(…), …)).cache
rdd.map(a => Array(a(0)))
df = Seq(Array(…), Array(…), …).toDF("a").cache
df.selectExpr("Array(a[0])")
ds = Seq(Array(…), Array(…), …).toDS.cache
ds.map(a => Array(a(0)))
[Chart: relative execution time over DataFrame, shorter is better; RDD at 1.2x and Dataset at 12x relative to DataFrame]
Enable Lambda Expressions to Use the Internal Data Format
• Our prototype modifies the Java bytecode of the lambda expression so that it accesses the internal data format directly
– We improved performance by avoiding Ser/De
// ds: Dataset[Array(Int)] …
ds.map(a => Array(a(0)))
With our prototype, Catalyst generates this Java code; the bytecode of apply(Object) is rewritten into applyTungsten(ArrayData), which operates on the internal data format:
while (itr.hasNext()) {
  inArray = ((Row)itr.next()).getArray(0);
  outArray = mapFunc.applyTungsten(inArray);
  outArray.writeToMemory(outRow);
  append(outRow);
}
"Accelerating Spark Datasets by inlining deserialization", our academic paper at IPDPS 2017
Outline
• How are DataFrame and Dataset executed?
• Deep dive into two cases
– Why Dataset was slow
– How Spark 2.2 improves performance
• Deep dive into another case
– Why Dataset is still slow
• How is Dataset used in ML pipelines?
• Next Steps
Dataset for Two Scalar Adds is Slow on Spark 2.2
• DataFrame and Dataset are faster than RDD ☺
• Dataset is 1.1x slower than DataFrame ☹
df.selectExpr("value + 1")
  .selectExpr("value + 2")
ds.map(value => value + 1)
  .map(value => value + 2)
rdd.map(value => value + 1)
   .map(value => value + 2)
[Chart: relative execution time over DataFrame, shorter is better; Dataset at 1.1x and RDD at 1.4x relative to DataFrame]
Adds are Combined with DF
• Spark 2.2 understands the operations in selectExpr() and combines the two adds into a single 'value + 3'
// df: DataFrame for Int …
df.selectExpr("value + 1")
  .selectExpr("value + 2")
Catalyst generates this Java code:
while (itr.hasNext()) {
  Row row = (Row)itr.next();
  int value = row.getInt(0);
  int mapValue = value + 3;
  projectRow.write(0, mapValue);
  append(projectRow);
}
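This fusion can be observed with explain(): the optimized plan contains a single projection computing value + 3 (a sketch of ours; the AS value aliases are our addition, so that the second selectExpr can still refer to the column by name):

df.selectExpr("value + 1 AS value")
  .selectExpr("value + 2 AS value")
  .explain(true)
// The optimized logical plan shows one Project computing value + 3,
// the result of Catalyst collapsing the two projections and folding the constants.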
Two Method Calls for the Adds Remain with DS
• Spark 2.2 cannot understand lambda expressions stored as Java bytecode
// ds: Dataset[Int] …
ds.map(value => value + 1)
  .map(value => value + 2)
Catalyst generates this Java code:
while (itr.hasNext()) {
  int value = ((Row)itr.next()).getInt(0);
  int mapValue = mapFunc1.apply$II(value);
  mapValue = mapFunc2.apply$II(mapValue);
  outRow.write(0, mapValue);
  append(outRow);
}
Scalac generates these functions from the lambda expressions:
class MapFunc1 {
  int apply$II(int value) { return value + 1; }}
class MapFunc2 {
  int apply$II(int value) { return value + 2; }}
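Until the bytecode analysis on the next slide lands, a practical workaround (our suggestion, not from the deck) is to fuse such lambdas by hand, so only one method call remains in the generated loop:

// Instead of two chained maps …
ds.map(value => value + 1).map(value => value + 2)
// … write one fused lambda:
ds.map(value => value + 3)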
Adds Will Be Combined on Future Spark 2.x
• SPARK-14083 will allow future Spark to understand the Java bytecode of lambda expressions and to combine them
// ds: Dataset[Int] …
ds.map(value => value + 1)
  .map(value => value + 2)
Catalyst would generate this Java code:
while (itr.hasNext()) {
  int value = ((Row)itr.next()).getInt(0);
  int mapValue = value + 3;
  outRow.write(0, mapValue);
  append(outRow);
}
Dataset will become faster
Outline
• How are DataFrame and Dataset executed?
• Deep dive into two cases
– Why Dataset was slow
– How Spark 2.2 improves performance
• Deep dive into another case
– Why Dataset is still slow
• How is Dataset used in ML pipelines?
• Next Steps
How Dataset is Used in ML Pipelines
• The ML pipeline is constructed by using DataFrame/Dataset
• The algorithm still operates on data in an RDD
– Conversion overhead between Dataset and RDD
ML pipeline:
val trainingDS: Dataset = ...
val tokenizer = new Tokenizer()
val hashingTF = new HashingTF()
val lr = new LogisticRegression()
val pipeline = new Pipeline().setStages(
  Array(tokenizer, hashingTF, lr))
val model = pipeline.fit(trainingDS)
Machine learning algorithm (LogisticRegression):
def fit(ds: Dataset[_]) = {
  ...
  train(ds)
}
def train(ds: Dataset[_]): LogisticRegressionModel = {
  val instances: RDD[Instance] =
    ds.select(...).rdd.map { // convert DS to RDD
      case Row(...) => ...
    }
  instances.persist(...)
  instances.treeAggregate(...)
}
Derived from https://databricks.com/blog/2015/01/07/ml-pipelines-a-new-high-level-api-for-mllib.html
If We Use Dataset for the Machine Learning Algorithm
• The ML pipeline is constructed by using DataFrame/Dataset
• The algorithm also operates on data in DataFrame/Dataset
– No conversion between Dataset and RDD
– Optimizations are applied by Catalyst
– Control flow can be written as ordinary Scala code
ML pipeline:
val trainingDataset: Dataset = ...
val tokenizer = new Tokenizer()
val hashingTF = new HashingTF()
val lr = new LogisticRegression()
val pipeline = new Pipeline().setStages(
  Array(tokenizer, hashingTF, lr))
val model = pipeline.fit(trainingDataset)
Machine learning algorithm (LogisticRegression):
def train(ds: Dataset[_]): LogisticRegressionModel = {
  val instances: Dataset[Instance] =
    ds.select(...).map { // stay in Dataset, no .rdd
      case Row(...) => ...
    }
  instances.persist(...)
  instances.treeAggregate(...)
}
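A hedged sketch of the typed variant with a concrete row type (the Instance case class here is our hypothetical stand-in for MLlib's internal class; select(...).as[...] is the typed way to stay in a Dataset, assuming the spark session from earlier is in scope):

import org.apache.spark.sql.{DataFrame, Dataset}
import spark.implicits._

// Hypothetical typed record standing in for MLlib's internal Instance class
case class Instance(label: Double, weight: Double, feature: Double)

// No .rdd: rows stay in Spark's internal format and Catalyst keeps optimizing
def toInstances(df: DataFrame): Dataset[Instance] =
  df.selectExpr("label", "weight", "feature").as[Instance]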
Next Steps
• Achieve further performance improvements for Dataset
– Implement Java bytecode analysis and modification
• Exploit optimizations in Catalyst (SPARK-14083)
• Reduce the overhead of data conversion (our paper)
• Exploit SIMD/GPU in Spark by using simple generated code
– http://www.spark.tc/simd-and-gpu/
Takeaway
• Dataset on Spark 2.2 is much faster than on Spark 2.0/2.1
• How Dataset was executed and how it was improved:
– Data conversion
– Serialization/deserialization
– Java bytecode of lambda expressions
• We continue to improve the performance of Dataset
• Start using Dataset today!
Thank you
@kiszk on twitter
@kiszk on github
http://ibm.biz/ishizaki
• @sameeragarwal, @hvanhovell, @gatorsmile,
@cloud-fan, @ueshin, @davies, @rxin,
@joshrosen, @maropu, @viirya
• Spark Technology Center