Transactional writes to cloud storage
Eric Liang
6/7/2017
2
About Databricks
TEAM: Started the Spark project (now Apache Spark) at UC Berkeley in 2009
PRODUCT: Unified Analytics Platform
MISSION: Making Big Data Simple
3
Overview
Deep dive on challenges in writing to Cloud storage (e.g. S3) with Spark
• Why transactional writes are important
• Cloud storage vs HDFS
• Current solutions are tuned for HDFS
• What we are doing about this
4
What are the requirements for writers?
Spark: unified analytics stack
Common end-to-end use case we've seen: chaining Spark jobs
ETL => analytics
ETL => ETL
Requirements for writes come from downstream Spark jobs
5
Writer example (ETL)
spark.read.csv("s3://source")
.groupBy(...)
.agg(...)
.write.mode("append")
.parquet("s3://dest")
Extract → Transform → Load
6
Reader example (analytics)
spark.read.parquet("s3://dest")
.groupBy(...)
.agg(...)
.show()
Extract → Transform
7
Reader requirement #1
• With chained jobs, failures of upstream jobs shouldn't corrupt data
(Diagram: a writer job fails partway through writing to the distributed filesystem; downstream reads must not see the partial output.)
8
Reader requirement #2
• Don't want to read outputs of in-progress jobs
(Diagram: readers query the distributed filesystem while another job's write is still in progress; they should not see its output.)
9
Reader requirement #3
• Data shows up (eventual consistency)
(Diagram: a read against eventually consistent Cloud Storage returns FileNotFoundException for a file that was just written; the data must eventually show up.)
10
Summary of requirements
Writes should be committed transactionally by Spark
Writes commit atomically
Reads see consistent snapshot
11
How Spark supports this today
• Spark's commit protocol (inherited from Hadoop) ensures reliable output for jobs
• HadoopMapReduceCommitProtocol:
1. Spark stages output files to a temporary location
2. Commit? If yes, move the staged files to their final locations; if no, abort and delete the staged files
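As a rough illustration, here is a minimal sketch of this stage-then-move pattern, using local java.nio file operations as a stand-in for the distributed filesystem (the names and layout are invented for illustration, not the protocol's actual paths):

import java.nio.file.{Files, Path, StandardCopyOption}

// Sketch of the stage-then-move commit pattern (not the actual Hadoop code).
def runJob(dest: Path, taskOutputs: Seq[(String, Array[Byte])]): Unit = {
  val staging = Files.createTempDirectory("staging-")
  try {
    // 1. Tasks write their output files to a temporary location.
    val staged = taskOutputs.map { case (name, bytes) =>
      Files.write(staging.resolve(name), bytes)
    }
    // 2. Job commit: move each staged file to its final location.
    //    On HDFS each move is a fast, atomic metadata operation.
    staged.foreach { f =>
      Files.move(f, dest.resolve(f.getFileName), StandardCopyOption.ATOMIC_MOVE)
    }
  } catch {
    case e: Exception =>
      // 3. Abort: delete staged files so readers never see partial output.
      Files.list(staging).forEach(p => Files.delete(p))
      throw e
  }
}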
12
Spark commit protocol
(Timeline diagram: the driver starts the job, tasks 1–7 run across four executors, and the driver performs the commit once all tasks have finished.)
13
Commit on HDFS
• HDFS is the Hadoop distributed filesystem
• Highly available, durable
• Supports fast atomic metadata operations
• e.g. file moves
• HadoopMapReduceCommitProtocol uses a series of file moves for job commit
• on HDFS: good performance
• close to transactional in practice
14
Does it meet requirements?
Spark commit on HDFS:
1. No partial results on failures => yes*
2. Don't see results of in-progress jobs => yes*
3. Data shows up => yes
* window of failure during commit is small
15
What about the Cloud?
Reliable output works on HDFS, but what about the Cloud?
• Option 1: run HDFS worker on each node (e.g. EC2 instance)
• Option 2: Use Cloud-native storage (e.g. S3)
– Challenge: systems like S3 are not true filesystems
16
Object stores as Filesystems
• Not so hard to provide a Filesystem API over object stores such as S3
• e.g. the S3A Filesystem
• Traditional Hadoop applications / Spark continue to work over Cloud storage using these adapters
• What do you give up (transactionality, performance)?
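For example (a sketch; the bucket and paths are placeholders), switching a job onto cloud storage through such an adapter is just a matter of using its URI scheme:

// Hypothetical bucket: the S3A adapter exposes S3 through the Hadoop
// Filesystem API, so ordinary Spark reads and writes work unchanged.
val df = spark.read.csv("s3a://my-bucket/source")
df.write.mode("append").parquet("s3a://my-bucket/dest")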
17
The remainder of this talk
1. Why HDFS has no place in the Cloud
2. Tradeoffs when using existing Hadoop commit protocols with Cloud storage adapters
3. The new commit protocol we built that provides transactional writes in Databricks
18
Evaluating storage systems
1. Cost
2. SLA (availability and durability)
3. Performance
Let's compare HDFS and S3
19
(1) Cost
• Storage cost per TB-month
• HDFS: $103 with d2.8xl dense storage instances
• S3: $23
• Human cost
• HDFS: team of Hadoop engineers or vendor
• S3: $0
• Elasticity: pay only for the storage you use, instead of provisioning HDFS clusters for peak capacity
• All told, Cloud storage is likely >10x cheaper
20
(2) SLA
• Amazon claims 99.99% availability and 99.999999999% durability
• Our experience:
• S3 has only been down twice in the last 6 years
• Never lost data
• Most Hadoop clusters have uptime <= 99.9%
21
(3) Performance
• Raw read/write performance
• HDFS offers higher per-node throughput with disk locality
• S3 decouples storage from compute
– performance can scale to your needs
• Metadata performance
• S3: Listing files much slower
– Better w/scalable partition handling in Spark 2.1
• S3: File moves require copies (expensive!)
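One way to see the metadata gap (a sketch; the paths are placeholders): the same Hadoop FileSystem.rename() call that is an O(millisecond) metadata update on HDFS is implemented by the S3A adapter as a server-side copy of every byte followed by a delete of the source object.

import java.net.URI
import org.apache.hadoop.fs.{FileSystem, Path}

val conf = spark.sparkContext.hadoopConfiguration
val fs = FileSystem.get(new URI("s3a://my-bucket"), conf)
// Fast atomic metadata operation on HDFS; copy + delete on S3A.
fs.rename(new Path("s3a://my-bucket/tmp/part-00000"),
          new Path("s3a://my-bucket/final/part-00000"))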
22
Cloud storage is preferred
• Cloud-native storage wins in cost and SLA
• better price-performance ratio
• more reliable
• However it brings challenges for ETL
• mostly around transactional write requirement
• what is the right commit protocol?
https://databricks.com/blog/2017/05/31/top-5-reasons-for-choosing-s3-over-hdfs.html
23
Evaluating commit protocols
• Two commit protocol variations in use today:
• HadoopMapReduceCommitProtocol V1 (as described previously)
• HadoopMapReduceCommitProtocol V2
• (A third, DirectOutputCommitter, is deprecated and has been removed from Spark)
• Which one is most suitable for Cloud?
24
Evaluation Criteria
1. Performance – how fast is the protocol at committing files?
2. Transactionality – can a job cause partial or corrupt results to be visible to readers?
25
Performance Test
Command to write out 100 files to S3:
// version is "1" or "2", selecting the commit algorithm under test
spark.range(10e6.toLong)
  .repartition(100)
  .write.mode("append")
  .option("mapreduce.fileoutputcommitter.algorithm.version", version)
  .parquet(s"s3://tmp/test-$version")
26
Performance Test
Time to write out 100 files to S3: (results chart omitted; V2 is almost five times faster than V1, as analyzed below)
27
Hadoop Commit V1
(Timeline diagram: tasks 1–6 run on four executors; at the end, the driver performs the job commit, moving the outputs of tasks 1–6.)
28
Hadoop Commit V2
(Timeline diagram: tasks 1–6 run on four executors; each successful task's outputs are moved to their final locations as soon as possible, with no separate job commit step.)
29
Performance Test
• The V1 commit protocol is slow
• it moves each file serially on the driver
• moves require a copy in S3
• The V2 Hadoop commit protocol is almost five times faster than V1
• parallel move on executors
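For reference, the same standard Hadoop configuration key used in the test above can also be set once per session rather than per write (a sketch; "2" selects the V2 protocol):

// Select commit algorithm V2 for all subsequent writes in this session.
spark.sparkContext.hadoopConfiguration
  .set("mapreduce.fileoutputcommitter.algorithm.version", "2")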
30
Transactionality
• Example: What happens if a job fails due to a bad task?
• Common situation if certain input files have bad records
• The job should fail cleanly
• No partial outputs visible to readers
• Let's evaluate Hadoop commit protocols {V1, V2} again
31
Fault Test
Simulate a job failure due to bad records:
spark.range(10000)
  .repartition(7)
  .map { i =>
    if (i == 123) throw new RuntimeException("oops!")
    else i
  }
  .write
  .option("mapreduce.fileoutputcommitter.algorithm.version", version)
  .mode("append")
  .parquet(s"s3://tmp/test-$version")
32
Fault Test
Did the failed job leave behind any partial results? (Results chart omitted; the outcome is analyzed on the next slide.)
33
Fault Test
• The V2 protocol left behind ~8k garbage rows for the failed job
• This is duplicate data that must be "manually" cleaned up before re-running the job
• Empirically, then: while V2 is faster, it also leaves behind partial results on job failures, breaking the transactionality requirement
34
Concurrent Readers Test
• Wait, is V1 really transactional?
spark.range(1e9.toLong)
  .repartition(70)
  .write.mode("append")
  .parquet("s3://test")
Meanwhile, a concurrent reader repeatedly counts:
spark.read.parquet("s3://test").count()
> 0
> 0
> 14285714
> 814285714
> 1000000000
35
Transactionality Test
• V1 isn't very "transactional" on Cloud storage either
• Long commit phase
• What if driver fails?
• What if another job reads concurrently?
• On HDFS the window of vulnerability is much smaller because file moves are O(milliseconds)
36
The problem with Hadoop commit
• Key design assumption of Hadoop commit V1: file moves are fast
• V2 is a workaround for Cloud storage, but sacrifices transactionality
• This tradeoff isn't fundamental
• How can we get both strong transactionality guarantees and the best performance?
37
Ideal commit protocol?
(Timeline diagram: tasks 1–6 run on four executors, followed by an instantaneous commit on the driver.)
38
No compromises with DBIO transactional commit
• When a user writes a file in a job:
• Embed a unique txn id in the file name
• Write the file directly to its final location
• Mark the txn id as committed if the job commits
• When a user is listing files:
• Ignore the file if it has a txn id that is uncommitted
(Diagram: DBIO mediating between Spark and Cloud Storage; a sketch of the scheme follows.)
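A minimal sketch of this idea (not the actual DBIO implementation; the file naming and the in-memory committed-transaction set are invented for illustration, whereas DBIO tracks commit status durably):

import java.nio.file.{Files, Path}
import java.util.UUID
import scala.collection.JavaConverters._

// Invented stand-in for DBIO's transaction state: committed txn ids.
val committed = scala.collection.mutable.Set[String]()

def transactionalWrite(dest: Path, outputs: Seq[Array[Byte]]): Unit = {
  val txnId = UUID.randomUUID().toString
  // Write files directly to their final location, txn id in the name.
  outputs.zipWithIndex.foreach { case (bytes, i) =>
    Files.write(dest.resolve(s"part-$i-$txnId.parquet"), bytes)
  }
  // Marking the txn id committed is the one atomic step; until it
  // happens, readers treat the files as invisible.
  committed += txnId
}

// Listing filters out files whose embedded txn id is uncommitted.
def listVisible(dest: Path): Seq[Path] =
  Files.list(dest).iterator().asScala.toSeq.filter { p =>
    committed.exists(id => p.getFileName.toString.contains(id))
  }

Because the only commit-time mutation is marking the txn id committed, a failed job leaves files behind but never exposes them to readers; the leftovers can be vacuumed later.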
39
DBIO transactional commit (diagram slide; protocol illustration not captured in this transcript)
40
Fault analysis (slide contents not captured in this transcript)
41
Concurrent Readers Test
• DBIO provides atomic commit
spark.range(1e9.toLong)
  .repartition(70)
  .write.mode("append")
  .parquet("s3://test")
Meanwhile, a concurrent reader repeatedly counts:
spark.read.parquet("s3://test").count()
> 0
> 0
> 0
> 0
> 1000000000
42
Other benefits of DBIO commit
• High performance
• Strong correctness guarantees
+ Safe Spark task speculation
+ Atomically add and remove sets of files
+ Enhanced consistency
43
Implementation challenges
• Challenge: compatibility with external systems like Hive
• Solution:
• relaxed transactionality guarantees: if you read from an external system, you might read uncommitted files as before
• %sql VACUUM command to clean up files (also auto-vacuum); see the example below
• Challenge: how to reliably track transaction status
• Solution:
• hybrid of metadata files and service component
• more data warehousing features in the future
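For example (table name hypothetical; exact syntax per the Databricks documentation of that release), leftover uncommitted files can be cleaned up explicitly from a notebook:

%sql
VACUUM my_table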
44
Rollout status
Enabled by default starting with Spark 2.1-db5 in Databricks
In the process of a gradual rollout
>100 customers already using DBIO transactional commit
45
Summary
• Cloud Storage >> HDFS, however
• Different design required for commit protocols
• No compromises with DBIO transactional commit
See our engineering blog post for more details
https://databricks.com/blog/2017/05/31/transactional-writes-cloud-storage.html
UNIFIED ANALYTICS PLATFORM
Try Apache Spark in Databricks!
• Collaborative cloud environment
• Free version (community edition)
DATABRICKS RUNTIME 3.0
• Apache Spark - optimized for the cloud
• Caching and optimization layer - DBIO
• Enterprise security - DBES
Try for free today.
databricks.com