Dustin Vannoy
Data Engineer
Cloud + Streaming
Azure Databricks with
Delta Lake
Dustin Vannoy
Data Engineering Consultant
Co-founder Data Engineering San Diego
/in/dustinvannoy
@dustinvannoy
dustin@dustinvannoy.com
Technologies
• Azure & AWS
• Spark
• Kafka
• Python
Modern Data Systems
• Data Lakes
• Analytics in Cloud
• Streaming
© Microsoft Azure + AI Conference All rights reserved.
Agenda
 Intro to Spark + Azure Databricks
 Delta Lake Overview
 Delta Lake in Action
 Schema Enforcement
 Time Travel
 MERGE, DELETE, OPTIMIZE
Intro to Spark & Azure Databricks
Overview and Databricks workspace walk through
Why Spark?
Big data and the cloud changed our mindset. We want tools that scale easily as data size grows. Spark is a leader in data processing that scales across many machines. It can run on Hadoop but is faster and easier than MapReduce.
Benefit of horizontal scaling
(diagram: Traditional vs. Distributed/Parallel scaling)
What is Spark?
 Fast, general purpose engine for large-scale data processing
 Replaces MapReduce as Hadoop parallel programming API
 Many options:
 Yarn / Spark Cluster / Local
 Scala / Python / Java / R
 Spark Core / SQL / Streaming / ML / Graph
Simple code, parallel compute
Spark consists of a programming API and execution engine
(diagram: a Master node coordinating four Worker nodes)
from pyspark.sql import SparkSession
from pyspark.sql.functions import col

spark = SparkSession.builder.getOrCreate()

song_df = (
    spark.read
    .option('sep', '\t')
    .option('inferSchema', 'true')
    .csv('/databricks-datasets/songs/data-001/part-0000*')
)

tempo_df = song_df.select(
    col('_c4').alias('artist_name'),
    col('_c14').alias('tempo'),
)

avg_tempo_df = (
    tempo_df
    .groupBy('artist_name')
    .avg('tempo')
    .orderBy('avg(tempo)', ascending=False)
)

avg_tempo_df.show(truncate=False)
Spark’s Strengths
 Data pipelines and analytics
 Batch or streaming
 SparkSQL
 Machine learning
 Uses memory to speed up processing
 Large community, many examples and tutorials
Demo: Databricks Workspace
Delta Lake Overview
Why use it and how to start
Spark is powerful, but...
 Not ACID compliant – too easy to get corrupted data
 Schema mismatches – no validation on write
 Many small files written, which are inefficient to read
 Reads too much data (no indexes, only partitions)
ACID
 Atomicity – all or nothing
 Consistency – data always in valid state
 Isolation – uncommitted operations don’t impact other reads/writes
 Durability – committed data is never lost
ACID compliance would give us the ability to update and delete!
Small File Problem
 Too much metadata
 Too many file open/close operations
 Compression not as effective
 Bad if using MapReduce to read
We typically fix this with scheduled file compaction jobs; the difficulty is avoiding interference with new write operations.
Partitions
 Typically Spark reads all data in a table/directory before applying filters
 Folder partitioning is used to allow some filter pushdown
 Limited to one fixed partition scheme to allow skipping reads
 Must use low-cardinality columns for partitioning
With databases we used to just add indexes and update statistics to improve seeks
Delta Lake Concepts
Reference: delta.io
ACID Transactions
Atomicity, Consistency, and Isolation all improved
Reminder: ACID
 Atomicity – all or nothing
 Consistency – data always in valid state
 Isolation – uncommitted operations don’t impact other reads/writes
 Durability – committed data is never lost
ACID Transaction Support
“Serializable isolation levels ensure that readers never see inconsistent data”
– Delta Lake documentation
Schema Enforcement
How to use schema validation and schema merge
Schema validation by default
 Delta validates the incoming schema by default
 Writes fail on mismatch
 Or, opt in to schema evolution with the schema merge option
Time Travel
Data version history in Delta
Delta Log
“The transaction log is the mechanism through which Delta Lake is able to offer the guarantee of atomicity.”
Reference: Databricks Blog: Unpacking the Transaction Log
Demo: Delta capabilities
Final thoughts
Delta Lake delivers some powerful capabilities
Delta Lake addresses
 ACID compliance
 Schema enforcement
 Compacting files
 Performance optimizations
References
 Video - Simplify and Scale Data Engineering Pipelines with Delta Lake - Amanda Moran
 Video - Building Data Intensive Application on Top of Delta Lakes
 Video - Why do we need Delta Lake for Spark? - Learning Journal
 Databricks Blog: Unpacking the Transaction Log
 Databricks Delta Lake - James Serra
 Databricks Delta Technical Guide - Jan 2019
 Productionizing Machine Learning with Delta Lake
Please use EventsXD to fill out a session evaluation.
Thank you!

More Related Content

What's hot

[DSC Europe 22] Lakehouse architecture with Delta Lake and Databricks - Draga...
[DSC Europe 22] Lakehouse architecture with Delta Lake and Databricks - Draga...[DSC Europe 22] Lakehouse architecture with Delta Lake and Databricks - Draga...
[DSC Europe 22] Lakehouse architecture with Delta Lake and Databricks - Draga...DataScienceConferenc1
 
Scaling and Modernizing Data Platform with Databricks
Scaling and Modernizing Data Platform with DatabricksScaling and Modernizing Data Platform with Databricks
Scaling and Modernizing Data Platform with DatabricksDatabricks
 
Building Lakehouses on Delta Lake with SQL Analytics Primer
Building Lakehouses on Delta Lake with SQL Analytics PrimerBuilding Lakehouses on Delta Lake with SQL Analytics Primer
Building Lakehouses on Delta Lake with SQL Analytics PrimerDatabricks
 
Apache Iceberg Presentation for the St. Louis Big Data IDEA
Apache Iceberg Presentation for the St. Louis Big Data IDEAApache Iceberg Presentation for the St. Louis Big Data IDEA
Apache Iceberg Presentation for the St. Louis Big Data IDEAAdam Doyle
 
Architect’s Open-Source Guide for a Data Mesh Architecture
Architect’s Open-Source Guide for a Data Mesh ArchitectureArchitect’s Open-Source Guide for a Data Mesh Architecture
Architect’s Open-Source Guide for a Data Mesh ArchitectureDatabricks
 
Considerations for Data Access in the Lakehouse
Considerations for Data Access in the LakehouseConsiderations for Data Access in the Lakehouse
Considerations for Data Access in the LakehouseDatabricks
 
Build Real-Time Applications with Databricks Streaming
Build Real-Time Applications with Databricks StreamingBuild Real-Time Applications with Databricks Streaming
Build Real-Time Applications with Databricks StreamingDatabricks
 
Hudi architecture, fundamentals and capabilities
Hudi architecture, fundamentals and capabilitiesHudi architecture, fundamentals and capabilities
Hudi architecture, fundamentals and capabilitiesNishith Agarwal
 
Simplify CDC Pipeline with Spark Streaming SQL and Delta Lake
Simplify CDC Pipeline with Spark Streaming SQL and Delta LakeSimplify CDC Pipeline with Spark Streaming SQL and Delta Lake
Simplify CDC Pipeline with Spark Streaming SQL and Delta LakeDatabricks
 
Free Training: How to Build a Lakehouse
Free Training: How to Build a LakehouseFree Training: How to Build a Lakehouse
Free Training: How to Build a LakehouseDatabricks
 
Data Lakehouse Symposium | Day 4
Data Lakehouse Symposium | Day 4Data Lakehouse Symposium | Day 4
Data Lakehouse Symposium | Day 4Databricks
 
Achieving Lakehouse Models with Spark 3.0
Achieving Lakehouse Models with Spark 3.0Achieving Lakehouse Models with Spark 3.0
Achieving Lakehouse Models with Spark 3.0Databricks
 
Apache Iceberg - A Table Format for Hige Analytic Datasets
Apache Iceberg - A Table Format for Hige Analytic DatasetsApache Iceberg - A Table Format for Hige Analytic Datasets
Apache Iceberg - A Table Format for Hige Analytic DatasetsAlluxio, Inc.
 
Productizing Structured Streaming Jobs
Productizing Structured Streaming JobsProductizing Structured Streaming Jobs
Productizing Structured Streaming JobsDatabricks
 
Large Scale Lakehouse Implementation Using Structured Streaming
Large Scale Lakehouse Implementation Using Structured StreamingLarge Scale Lakehouse Implementation Using Structured Streaming
Large Scale Lakehouse Implementation Using Structured StreamingDatabricks
 
How to build a streaming Lakehouse with Flink, Kafka, and Hudi
How to build a streaming Lakehouse with Flink, Kafka, and HudiHow to build a streaming Lakehouse with Flink, Kafka, and Hudi
How to build a streaming Lakehouse with Flink, Kafka, and HudiFlink Forward
 
Building Data Quality pipelines with Apache Spark and Delta Lake
Building Data Quality pipelines with Apache Spark and Delta LakeBuilding Data Quality pipelines with Apache Spark and Delta Lake
Building Data Quality pipelines with Apache Spark and Delta LakeDatabricks
 
Delta from a Data Engineer's Perspective
Delta from a Data Engineer's PerspectiveDelta from a Data Engineer's Perspective
Delta from a Data Engineer's PerspectiveDatabricks
 
Deep Dive into Spark SQL with Advanced Performance Tuning with Xiao Li & Wenc...
Deep Dive into Spark SQL with Advanced Performance Tuning with Xiao Li & Wenc...Deep Dive into Spark SQL with Advanced Performance Tuning with Xiao Li & Wenc...
Deep Dive into Spark SQL with Advanced Performance Tuning with Xiao Li & Wenc...Databricks
 

What's hot (20)

[DSC Europe 22] Lakehouse architecture with Delta Lake and Databricks - Draga...
[DSC Europe 22] Lakehouse architecture with Delta Lake and Databricks - Draga...[DSC Europe 22] Lakehouse architecture with Delta Lake and Databricks - Draga...
[DSC Europe 22] Lakehouse architecture with Delta Lake and Databricks - Draga...
 
Scaling and Modernizing Data Platform with Databricks
Scaling and Modernizing Data Platform with DatabricksScaling and Modernizing Data Platform with Databricks
Scaling and Modernizing Data Platform with Databricks
 
Building Lakehouses on Delta Lake with SQL Analytics Primer
Building Lakehouses on Delta Lake with SQL Analytics PrimerBuilding Lakehouses on Delta Lake with SQL Analytics Primer
Building Lakehouses on Delta Lake with SQL Analytics Primer
 
Apache Iceberg Presentation for the St. Louis Big Data IDEA
Apache Iceberg Presentation for the St. Louis Big Data IDEAApache Iceberg Presentation for the St. Louis Big Data IDEA
Apache Iceberg Presentation for the St. Louis Big Data IDEA
 
Architect’s Open-Source Guide for a Data Mesh Architecture
Architect’s Open-Source Guide for a Data Mesh ArchitectureArchitect’s Open-Source Guide for a Data Mesh Architecture
Architect’s Open-Source Guide for a Data Mesh Architecture
 
Considerations for Data Access in the Lakehouse
Considerations for Data Access in the LakehouseConsiderations for Data Access in the Lakehouse
Considerations for Data Access in the Lakehouse
 
Build Real-Time Applications with Databricks Streaming
Build Real-Time Applications with Databricks StreamingBuild Real-Time Applications with Databricks Streaming
Build Real-Time Applications with Databricks Streaming
 
Hudi architecture, fundamentals and capabilities
Hudi architecture, fundamentals and capabilitiesHudi architecture, fundamentals and capabilities
Hudi architecture, fundamentals and capabilities
 
Simplify CDC Pipeline with Spark Streaming SQL and Delta Lake
Simplify CDC Pipeline with Spark Streaming SQL and Delta LakeSimplify CDC Pipeline with Spark Streaming SQL and Delta Lake
Simplify CDC Pipeline with Spark Streaming SQL and Delta Lake
 
Architecting a datalake
Architecting a datalakeArchitecting a datalake
Architecting a datalake
 
Free Training: How to Build a Lakehouse
Free Training: How to Build a LakehouseFree Training: How to Build a Lakehouse
Free Training: How to Build a Lakehouse
 
Data Lakehouse Symposium | Day 4
Data Lakehouse Symposium | Day 4Data Lakehouse Symposium | Day 4
Data Lakehouse Symposium | Day 4
 
Achieving Lakehouse Models with Spark 3.0
Achieving Lakehouse Models with Spark 3.0Achieving Lakehouse Models with Spark 3.0
Achieving Lakehouse Models with Spark 3.0
 
Apache Iceberg - A Table Format for Hige Analytic Datasets
Apache Iceberg - A Table Format for Hige Analytic DatasetsApache Iceberg - A Table Format for Hige Analytic Datasets
Apache Iceberg - A Table Format for Hige Analytic Datasets
 
Productizing Structured Streaming Jobs
Productizing Structured Streaming JobsProductizing Structured Streaming Jobs
Productizing Structured Streaming Jobs
 
Large Scale Lakehouse Implementation Using Structured Streaming
Large Scale Lakehouse Implementation Using Structured StreamingLarge Scale Lakehouse Implementation Using Structured Streaming
Large Scale Lakehouse Implementation Using Structured Streaming
 
How to build a streaming Lakehouse with Flink, Kafka, and Hudi
How to build a streaming Lakehouse with Flink, Kafka, and HudiHow to build a streaming Lakehouse with Flink, Kafka, and Hudi
How to build a streaming Lakehouse with Flink, Kafka, and Hudi
 
Building Data Quality pipelines with Apache Spark and Delta Lake
Building Data Quality pipelines with Apache Spark and Delta LakeBuilding Data Quality pipelines with Apache Spark and Delta Lake
Building Data Quality pipelines with Apache Spark and Delta Lake
 
Delta from a Data Engineer's Perspective
Delta from a Data Engineer's PerspectiveDelta from a Data Engineer's Perspective
Delta from a Data Engineer's Perspective
 
Deep Dive into Spark SQL with Advanced Performance Tuning with Xiao Li & Wenc...
Deep Dive into Spark SQL with Advanced Performance Tuning with Xiao Li & Wenc...Deep Dive into Spark SQL with Advanced Performance Tuning with Xiao Li & Wenc...
Deep Dive into Spark SQL with Advanced Performance Tuning with Xiao Li & Wenc...
 

Similar to Delta Lake with Azure Databricks

Getting Started with Delta Lake on Databricks
Getting Started with Delta Lake on DatabricksGetting Started with Delta Lake on Databricks
Getting Started with Delta Lake on DatabricksKnoldus Inc.
 
Spark Streaming with Azure Databricks
Spark Streaming with Azure DatabricksSpark Streaming with Azure Databricks
Spark Streaming with Azure DatabricksDustin Vannoy
 
Self-service Big Data Analytics on Microsoft Azure
Self-service Big Data Analytics on Microsoft AzureSelf-service Big Data Analytics on Microsoft Azure
Self-service Big Data Analytics on Microsoft AzureCloudera, Inc.
 
Azure + DataStax Enterprise (DSE) Powers Office365 Per User Store
Azure + DataStax Enterprise (DSE) Powers Office365 Per User StoreAzure + DataStax Enterprise (DSE) Powers Office365 Per User Store
Azure + DataStax Enterprise (DSE) Powers Office365 Per User StoreDataStax Academy
 
Google take on heterogeneous data base replication
Google take on heterogeneous data base replication Google take on heterogeneous data base replication
Google take on heterogeneous data base replication Svetlin Stanchev
 
A deep dive into running data analytic workloads in the cloud
A deep dive into running data analytic workloads in the cloudA deep dive into running data analytic workloads in the cloud
A deep dive into running data analytic workloads in the cloudCloudera, Inc.
 
How to Build Multi-disciplinary Analytics Applications on a Shared Data Platform
How to Build Multi-disciplinary Analytics Applications on a Shared Data PlatformHow to Build Multi-disciplinary Analytics Applications on a Shared Data Platform
How to Build Multi-disciplinary Analytics Applications on a Shared Data PlatformCloudera, Inc.
 
Data Engineering with Databricks Presentation
Data Engineering with Databricks PresentationData Engineering with Databricks Presentation
Data Engineering with Databricks PresentationKnoldus Inc.
 
Standing on the Shoulders of Open-Source Giants: The Serverless Realtime Lake...
Standing on the Shoulders of Open-Source Giants: The Serverless Realtime Lake...Standing on the Shoulders of Open-Source Giants: The Serverless Realtime Lake...
Standing on the Shoulders of Open-Source Giants: The Serverless Realtime Lake...HostedbyConfluent
 
Data platform modernization with Databricks.pptx
Data platform modernization with Databricks.pptxData platform modernization with Databricks.pptx
Data platform modernization with Databricks.pptxCalvinSim10
 
Multidisziplinäre Analyseanwendungen auf einer gemeinsamen Datenplattform ers...
Multidisziplinäre Analyseanwendungen auf einer gemeinsamen Datenplattform ers...Multidisziplinäre Analyseanwendungen auf einer gemeinsamen Datenplattform ers...
Multidisziplinäre Analyseanwendungen auf einer gemeinsamen Datenplattform ers...Cloudera, Inc.
 
2014.11.14 Data Opportunities with Azure
2014.11.14 Data Opportunities with Azure2014.11.14 Data Opportunities with Azure
2014.11.14 Data Opportunities with AzureMarco Parenzan
 
By Popular Demand: The Rise of Elastic SQL
By Popular Demand: The Rise of Elastic SQLBy Popular Demand: The Rise of Elastic SQL
By Popular Demand: The Rise of Elastic SQLNuoDB
 
Part 2: A Visual Dive into Machine Learning and Deep Learning 

Part 2: A Visual Dive into Machine Learning and Deep Learning 
Part 2: A Visual Dive into Machine Learning and Deep Learning 

Part 2: A Visual Dive into Machine Learning and Deep Learning 
Cloudera, Inc.
 
Azure Data Factory ETL Patterns in the Cloud
Azure Data Factory ETL Patterns in the CloudAzure Data Factory ETL Patterns in the Cloud
Azure Data Factory ETL Patterns in the CloudMark Kromer
 
SQL Saturday Redmond 2019 ETL Patterns in the Cloud
SQL Saturday Redmond 2019 ETL Patterns in the CloudSQL Saturday Redmond 2019 ETL Patterns in the Cloud
SQL Saturday Redmond 2019 ETL Patterns in the CloudMark Kromer
 
Delivering Data Democratization in the Cloud with Snowflake
Delivering Data Democratization in the Cloud with SnowflakeDelivering Data Democratization in the Cloud with Snowflake
Delivering Data Democratization in the Cloud with SnowflakeKent Graziano
 

Similar to Delta Lake with Azure Databricks (20)

Getting Started with Delta Lake on Databricks
Getting Started with Delta Lake on DatabricksGetting Started with Delta Lake on Databricks
Getting Started with Delta Lake on Databricks
 
Spark Streaming with Azure Databricks
Spark Streaming with Azure DatabricksSpark Streaming with Azure Databricks
Spark Streaming with Azure Databricks
 
Self-service Big Data Analytics on Microsoft Azure
Self-service Big Data Analytics on Microsoft AzureSelf-service Big Data Analytics on Microsoft Azure
Self-service Big Data Analytics on Microsoft Azure
 
Azure + DataStax Enterprise (DSE) Powers Office365 Per User Store
Azure + DataStax Enterprise (DSE) Powers Office365 Per User StoreAzure + DataStax Enterprise (DSE) Powers Office365 Per User Store
Azure + DataStax Enterprise (DSE) Powers Office365 Per User Store
 
How to Win When Migrating to Azure
How to Win When Migrating to AzureHow to Win When Migrating to Azure
How to Win When Migrating to Azure
 
Google take on heterogeneous data base replication
Google take on heterogeneous data base replication Google take on heterogeneous data base replication
Google take on heterogeneous data base replication
 
A deep dive into running data analytic workloads in the cloud
A deep dive into running data analytic workloads in the cloudA deep dive into running data analytic workloads in the cloud
A deep dive into running data analytic workloads in the cloud
 
How to Build Multi-disciplinary Analytics Applications on a Shared Data Platform
How to Build Multi-disciplinary Analytics Applications on a Shared Data PlatformHow to Build Multi-disciplinary Analytics Applications on a Shared Data Platform
How to Build Multi-disciplinary Analytics Applications on a Shared Data Platform
 
Data Engineering with Databricks Presentation
Data Engineering with Databricks PresentationData Engineering with Databricks Presentation
Data Engineering with Databricks Presentation
 
Vue d'ensemble Dremio
Vue d'ensemble DremioVue d'ensemble Dremio
Vue d'ensemble Dremio
 
Standing on the Shoulders of Open-Source Giants: The Serverless Realtime Lake...
Standing on the Shoulders of Open-Source Giants: The Serverless Realtime Lake...Standing on the Shoulders of Open-Source Giants: The Serverless Realtime Lake...
Standing on the Shoulders of Open-Source Giants: The Serverless Realtime Lake...
 
Data platform modernization with Databricks.pptx
Data platform modernization with Databricks.pptxData platform modernization with Databricks.pptx
Data platform modernization with Databricks.pptx
 
Multidisziplinäre Analyseanwendungen auf einer gemeinsamen Datenplattform ers...
Multidisziplinäre Analyseanwendungen auf einer gemeinsamen Datenplattform ers...Multidisziplinäre Analyseanwendungen auf einer gemeinsamen Datenplattform ers...
Multidisziplinäre Analyseanwendungen auf einer gemeinsamen Datenplattform ers...
 
Cloud Computing & Cloud Storage
Cloud Computing & Cloud Storage Cloud Computing & Cloud Storage
Cloud Computing & Cloud Storage
 
2014.11.14 Data Opportunities with Azure
2014.11.14 Data Opportunities with Azure2014.11.14 Data Opportunities with Azure
2014.11.14 Data Opportunities with Azure
 
By Popular Demand: The Rise of Elastic SQL
By Popular Demand: The Rise of Elastic SQLBy Popular Demand: The Rise of Elastic SQL
By Popular Demand: The Rise of Elastic SQL
 
Part 2: A Visual Dive into Machine Learning and Deep Learning 

Part 2: A Visual Dive into Machine Learning and Deep Learning 
Part 2: A Visual Dive into Machine Learning and Deep Learning 

Part 2: A Visual Dive into Machine Learning and Deep Learning 

 
Azure Data Factory ETL Patterns in the Cloud
Azure Data Factory ETL Patterns in the CloudAzure Data Factory ETL Patterns in the Cloud
Azure Data Factory ETL Patterns in the Cloud
 
SQL Saturday Redmond 2019 ETL Patterns in the Cloud
SQL Saturday Redmond 2019 ETL Patterns in the CloudSQL Saturday Redmond 2019 ETL Patterns in the Cloud
SQL Saturday Redmond 2019 ETL Patterns in the Cloud
 
Delivering Data Democratization in the Cloud with Snowflake
Delivering Data Democratization in the Cloud with SnowflakeDelivering Data Democratization in the Cloud with Snowflake
Delivering Data Democratization in the Cloud with Snowflake
 

Recently uploaded

Amazon TQM (2) Amazon TQM (2)Amazon TQM (2).pptx
Amazon TQM (2) Amazon TQM (2)Amazon TQM (2).pptxAmazon TQM (2) Amazon TQM (2)Amazon TQM (2).pptx
Amazon TQM (2) Amazon TQM (2)Amazon TQM (2).pptxAbdelrhman abooda
 
办理(UWIC毕业证书)英国卡迪夫城市大学毕业证成绩单原版一比一
办理(UWIC毕业证书)英国卡迪夫城市大学毕业证成绩单原版一比一办理(UWIC毕业证书)英国卡迪夫城市大学毕业证成绩单原版一比一
办理(UWIC毕业证书)英国卡迪夫城市大学毕业证成绩单原版一比一F La
 
From idea to production in a day – Leveraging Azure ML and Streamlit to build...
From idea to production in a day – Leveraging Azure ML and Streamlit to build...From idea to production in a day – Leveraging Azure ML and Streamlit to build...
From idea to production in a day – Leveraging Azure ML and Streamlit to build...Florian Roscheck
 
Predicting Salary Using Data Science: A Comprehensive Analysis.pdf
Predicting Salary Using Data Science: A Comprehensive Analysis.pdfPredicting Salary Using Data Science: A Comprehensive Analysis.pdf
Predicting Salary Using Data Science: A Comprehensive Analysis.pdfBoston Institute of Analytics
 
RABBIT: A CLI tool for identifying bots based on their GitHub events.
RABBIT: A CLI tool for identifying bots based on their GitHub events.RABBIT: A CLI tool for identifying bots based on their GitHub events.
RABBIT: A CLI tool for identifying bots based on their GitHub events.natarajan8993
 
Generative AI for Social Good at Open Data Science East 2024
Generative AI for Social Good at Open Data Science East 2024Generative AI for Social Good at Open Data Science East 2024
Generative AI for Social Good at Open Data Science East 2024Colleen Farrelly
 
How we prevented account sharing with MFA
How we prevented account sharing with MFAHow we prevented account sharing with MFA
How we prevented account sharing with MFAAndrei Kaleshka
 
RS 9000 Call In girls Dwarka Mor (DELHI)⇛9711147426🔝Delhi
RS 9000 Call In girls Dwarka Mor (DELHI)⇛9711147426🔝DelhiRS 9000 Call In girls Dwarka Mor (DELHI)⇛9711147426🔝Delhi
RS 9000 Call In girls Dwarka Mor (DELHI)⇛9711147426🔝Delhijennyeacort
 
04242024_CCC TUG_Joins and Relationships
04242024_CCC TUG_Joins and Relationships04242024_CCC TUG_Joins and Relationships
04242024_CCC TUG_Joins and Relationshipsccctableauusergroup
 
B2 Creative Industry Response Evaluation.docx
B2 Creative Industry Response Evaluation.docxB2 Creative Industry Response Evaluation.docx
B2 Creative Industry Response Evaluation.docxStephen266013
 
High Class Call Girls Noida Sector 39 Aarushi 🔝8264348440🔝 Independent Escort...
High Class Call Girls Noida Sector 39 Aarushi 🔝8264348440🔝 Independent Escort...High Class Call Girls Noida Sector 39 Aarushi 🔝8264348440🔝 Independent Escort...
High Class Call Girls Noida Sector 39 Aarushi 🔝8264348440🔝 Independent Escort...soniya singh
 
Customer Service Analytics - Make Sense of All Your Data.pptx
Customer Service Analytics - Make Sense of All Your Data.pptxCustomer Service Analytics - Make Sense of All Your Data.pptx
Customer Service Analytics - Make Sense of All Your Data.pptxEmmanuel Dauda
 
Dubai Call Girls Wifey O52&786472 Call Girls Dubai
Dubai Call Girls Wifey O52&786472 Call Girls DubaiDubai Call Girls Wifey O52&786472 Call Girls Dubai
Dubai Call Girls Wifey O52&786472 Call Girls Dubaihf8803863
 
NLP Project PPT: Flipkart Product Reviews through NLP Data Science.pptx
NLP Project PPT: Flipkart Product Reviews through NLP Data Science.pptxNLP Project PPT: Flipkart Product Reviews through NLP Data Science.pptx
NLP Project PPT: Flipkart Product Reviews through NLP Data Science.pptxBoston Institute of Analytics
 
办理学位证中佛罗里达大学毕业证,UCF成绩单原版一比一
办理学位证中佛罗里达大学毕业证,UCF成绩单原版一比一办理学位证中佛罗里达大学毕业证,UCF成绩单原版一比一
办理学位证中佛罗里达大学毕业证,UCF成绩单原版一比一F sss
 
Kantar AI Summit- Under Embargo till Wednesday, 24th April 2024, 4 PM, IST.pdf
Kantar AI Summit- Under Embargo till Wednesday, 24th April 2024, 4 PM, IST.pdfKantar AI Summit- Under Embargo till Wednesday, 24th April 2024, 4 PM, IST.pdf
Kantar AI Summit- Under Embargo till Wednesday, 24th April 2024, 4 PM, IST.pdfSocial Samosa
 
ASML's Taxonomy Adventure by Daniel Canter
ASML's Taxonomy Adventure by Daniel CanterASML's Taxonomy Adventure by Daniel Canter
ASML's Taxonomy Adventure by Daniel Cantervoginip
 
Consent & Privacy Signals on Google *Pixels* - MeasureCamp Amsterdam 2024
Consent & Privacy Signals on Google *Pixels* - MeasureCamp Amsterdam 2024Consent & Privacy Signals on Google *Pixels* - MeasureCamp Amsterdam 2024
Consent & Privacy Signals on Google *Pixels* - MeasureCamp Amsterdam 2024thyngster
 
INTERNSHIP ON PURBASHA COMPOSITE TEX LTD
INTERNSHIP ON PURBASHA COMPOSITE TEX LTDINTERNSHIP ON PURBASHA COMPOSITE TEX LTD
INTERNSHIP ON PURBASHA COMPOSITE TEX LTDRafezzaman
 

Recently uploaded (20)

Amazon TQM (2) Amazon TQM (2)Amazon TQM (2).pptx
Amazon TQM (2) Amazon TQM (2)Amazon TQM (2).pptxAmazon TQM (2) Amazon TQM (2)Amazon TQM (2).pptx
Amazon TQM (2) Amazon TQM (2)Amazon TQM (2).pptx
 
办理(UWIC毕业证书)英国卡迪夫城市大学毕业证成绩单原版一比一
办理(UWIC毕业证书)英国卡迪夫城市大学毕业证成绩单原版一比一办理(UWIC毕业证书)英国卡迪夫城市大学毕业证成绩单原版一比一
办理(UWIC毕业证书)英国卡迪夫城市大学毕业证成绩单原版一比一
 
From idea to production in a day – Leveraging Azure ML and Streamlit to build...
From idea to production in a day – Leveraging Azure ML and Streamlit to build...From idea to production in a day – Leveraging Azure ML and Streamlit to build...
From idea to production in a day – Leveraging Azure ML and Streamlit to build...
 
Predicting Salary Using Data Science: A Comprehensive Analysis.pdf
Predicting Salary Using Data Science: A Comprehensive Analysis.pdfPredicting Salary Using Data Science: A Comprehensive Analysis.pdf
Predicting Salary Using Data Science: A Comprehensive Analysis.pdf
 
RABBIT: A CLI tool for identifying bots based on their GitHub events.
RABBIT: A CLI tool for identifying bots based on their GitHub events.RABBIT: A CLI tool for identifying bots based on their GitHub events.
RABBIT: A CLI tool for identifying bots based on their GitHub events.
 
Generative AI for Social Good at Open Data Science East 2024
Generative AI for Social Good at Open Data Science East 2024Generative AI for Social Good at Open Data Science East 2024
Generative AI for Social Good at Open Data Science East 2024
 
How we prevented account sharing with MFA
How we prevented account sharing with MFAHow we prevented account sharing with MFA
How we prevented account sharing with MFA
 
RS 9000 Call In girls Dwarka Mor (DELHI)⇛9711147426🔝Delhi
RS 9000 Call In girls Dwarka Mor (DELHI)⇛9711147426🔝DelhiRS 9000 Call In girls Dwarka Mor (DELHI)⇛9711147426🔝Delhi
RS 9000 Call In girls Dwarka Mor (DELHI)⇛9711147426🔝Delhi
 
04242024_CCC TUG_Joins and Relationships
04242024_CCC TUG_Joins and Relationships04242024_CCC TUG_Joins and Relationships
04242024_CCC TUG_Joins and Relationships
 
B2 Creative Industry Response Evaluation.docx
B2 Creative Industry Response Evaluation.docxB2 Creative Industry Response Evaluation.docx
B2 Creative Industry Response Evaluation.docx
 
High Class Call Girls Noida Sector 39 Aarushi 🔝8264348440🔝 Independent Escort...

Delta Lake with Azure Databricks

  • 1. Dustin Vannoy Data Engineer Cloud + Streaming Azure Databricks with Delta Lake
  • 2. Dustin Vannoy Data Engineering Consultant Co-founder Data Engineering San Diego /in/dustinvannoy @dustinvannoy dustin@dustinvannoy.com Technologies • Azure & AWS • Spark • Kafka • Python Modern Data Systems • Data Lakes • Analytics in Cloud • Streaming
  • 3. © Microsoft Azure + AI Conference All rights reserved. Agenda  Intro to Spark + Azure Databricks  Delta Lake Overview  Delta Lake in Action  Schema Enforcement  Time Travel  MERGE, DELETE, OPTIMIZE
  • 4. © Microsoft Azure + AI Conference All rights reserved. Intro to Spark & Azure Databricks Overview and Databricks workspace walk through
  • 5. Why Spark? Big data and the cloud changed our mindset. We want tools that scale easily as data size grows. Spark is a leader in data processing that scales across many machines. It can run on Hadoop but is faster and easier than MapReduce.
  • 6. Benefit of horizontal scaling Traditional Distributed (Parallel)
  • 7. © Microsoft Azure + AI Conference All rights reserved. What is Spark?  Fast, general purpose engine for large-scale data processing  Replaces MapReduce as Hadoop parallel programming API  Many options:  Yarn / Spark Cluster / Local  Scala / Python / Java / R  Spark Core / SQL / Streaming / ML / Graph
  • 8. © Microsoft Azure + AI Conference All rights reserved. Simple code, parallel compute Spark consists of a programming API and execution engine Worker Worker Worker Worker Master
from pyspark.sql import SparkSession
from pyspark.sql.functions import col

spark = SparkSession.builder.getOrCreate()

song_df = spark.read \
    .option('sep', '\t') \
    .option('inferSchema', 'true') \
    .csv('/databricks-datasets/songs/data-001/part-0000*')

tempo_df = song_df.select(
    col('_c4').alias('artist_name'),
    col('_c14').alias('tempo'),
)

avg_tempo_df = tempo_df \
    .groupBy('artist_name') \
    .avg('tempo') \
    .orderBy('avg(tempo)', ascending=False)

avg_tempo_df.show(truncate=False)
  • 9. © Microsoft Azure + AI Conference All rights reserved. Spark’s Strengths  Data pipelines and analytics  Batch or streaming  SparkSQL  Machine learning  Uses memory to speed up processing  Large community, many examples and tutorials
  • 11. © Microsoft Azure + AI Conference All rights reserved. Delta Lake Overview Why use it and how to start
  • 12. © Microsoft Azure + AI Conference All rights reserved. Spark is powerful, but...  Not ACID compliant – too easy to get corrupted data  Schema mismatches – no validation on write  Small files written, not efficient for reading  Reads too much data (no indexes, only partitions)
  • 13. © Microsoft Azure + AI Conference All rights reserved. ACID  Atomicity – all or nothing  Consistency – data always in valid state  Isolation – uncommitted operations don’t impact other reads/writes  Durability – committed data is never lost ACID compliance would give us ability to update and delete!
  • 14. © Microsoft Azure + AI Conference All rights reserved. Small File Problem  Too much metadata  Too many file open/close operations  Compression not as effective  Bad if using MapReduce to read We fix this with scheduled file compaction jobs; the difficulty is avoiding interference with new write operations
  • 15. © Microsoft Azure + AI Conference All rights reserved. Partitions  Typically Spark reads all data in a table/directory before applying filters  Folder partitioning used to allow some filter pushdown  Limited to one fixed partition scheme to allow skipping reads  Must use low cardinality columns for partitioning We used to just add indexes and run statistics to improve seeks
  • 17. © Microsoft Azure + AI Conference All rights reserved. ACID Transactions Atomicity, Consistency, and Isolation all improved
  • 18. © Microsoft Azure + AI Conference All rights reserved. Reminder: ACID  Atomicity – all or nothing  Consistency – data always in valid state  Isolation – uncommitted operations don’t impact other reads/writes  Durability – committed data is never lost
  • 19. © Microsoft Azure + AI Conference All rights reserved. ACID Transaction Support “Serializable isolation levels ensure that readers never see inconsistent data” - Delta Lake Documentation
  • 20. © Microsoft Azure + AI Conference All rights reserved. Schema Enforcement How to use schema validation and schema merge
  • 21. © Microsoft Azure + AI Conference All rights reserved. Schema validation by default  Delta defaults to validating schema  Fails on mismatch  Or, set schema merge option
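A sketch of the validation behavior described above, as it would run in a Databricks notebook (the table path and `new_df` DataFrame are hypothetical):

```python
# Append with a mismatched schema fails by default (schema enforcement).
new_df.write.format('delta').mode('append').save('/mnt/delta/events')
# Raises an AnalysisException reporting a schema mismatch.

# Opting in to schema evolution merges the new columns instead of failing.
new_df.write.format('delta') \
    .option('mergeSchema', 'true') \
    .mode('append') \
    .save('/mnt/delta/events')
```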
  • 22. © Microsoft Azure + AI Conference All rights reserved. Time Travel Data version history in Delta
  • 23. © Microsoft Azure + AI Conference All rights reserved. Delta Log “The transaction log is the mechanism through which Delta Lake is able to offer the guarantee of atomicity.” Reference: Databricks Blog: Unpacking the Transaction Log
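The versions recorded in the transaction log are what make time travel possible; a sketch of reading them back (table path is hypothetical, and this assumes a Spark session with Delta Lake available):

```python
# Read the table as of an earlier version number...
v1_df = spark.read.format('delta') \
    .option('versionAsOf', 1) \
    .load('/mnt/delta/events')

# ...or as of a point in time.
ts_df = spark.read.format('delta') \
    .option('timestampAsOf', '2020-01-01T00:00:00') \
    .load('/mnt/delta/events')

# Inspect the transaction log history behind those versions.
spark.sql("DESCRIBE HISTORY delta.`/mnt/delta/events`").show()
```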
  • 25. © Microsoft Azure + AI Conference All rights reserved. Final thoughts Delta Lake delivers some powerful capabilities
  • 26. © Microsoft Azure + AI Conference All rights reserved. Delta Lake addresses  ACID compliance  Schema enforcement  Compacting files  Performance optimizations
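The MERGE, DELETE, and OPTIMIZE operations listed in the agenda can be sketched as Spark SQL against a registered Delta table (table and column names here are hypothetical, and OPTIMIZE/ZORDER is a Databricks feature):

```python
# Upsert: update rows that match on the key, insert the rest.
spark.sql("""
  MERGE INTO events AS t
  USING updates AS s
    ON t.event_id = s.event_id
  WHEN MATCHED THEN UPDATE SET *
  WHEN NOT MATCHED THEN INSERT *
""")

# ACID delete -- not possible on plain Parquet files.
spark.sql("DELETE FROM events WHERE event_date < '2019-01-01'")

# Compact small files; ZORDER co-locates data to improve skipping.
spark.sql("OPTIMIZE events ZORDER BY (event_id)")
```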
  • 27. © Microsoft Azure + AI Conference All rights reserved. References  Video - Simplify and Scale Data Engineering Pipelines with Delta Lake - Amanda Moran  Video - Building Data Intensive Application on Top of Delta Lakes  Video - Why do we need Delta Lake for Spark? - Learning Journal  Databricks Blog: Unpacking the Transaction Log  Databricks Delta Lake - James Serra  Databricks Delta Technical Guide - Jan 2019  Productionizing Machine Learning with Delta Lake
  • 28. © Microsoft Azure + AI Conference All rights reserved. Please use EventsXD to fill out a session evaluation. Thank you!

Editor's Notes

  1. With the shift to data lakes that use distributed file storage as the foundation, we have been missing the reliability that relational databases provide. Databricks Delta is a data management system focused on bringing more reliability and performance into our data lakes. It sits on top of existing storage, and the API is very similar to reading and writing files from Spark already. This session will present an overview of Delta Lake, why it may be a better option than standard data lake storage, and how you can use it from Azure Databricks. We will work through demos that showcase the key benefits of Delta Lake: 1. ACID transactions 2. Schema enforcement and evolution 3. Time travel (data versioning)
  2. Let’s think about the benefit of parallel processing, often referred to as distributed systems. The idea is actually very easy to understand. If we had a task such as counting all the people at a concert, you could have one person who is really good at counting do it, and if the venue is small enough they will do just fine. But the job will be completed faster if you have many people counting and combining the results at the end. Sure, there is a little more organization needed, but if you need to count the attendees at a Beyonce concert you could just hire a lot of people to do the job. And if one of them gets distracted by the music, you can send whoever finishes first in to take over counting that section. We call this capability “Horizontal Scaling” because if our data processing system is not powerful enough to do the work, we add more computers to help out rather than replacing the single server with a more powerful server. Distributed computing and parallel processing are not new concepts, few things in computing are, but what if you had an easy way to tell all the workers what to do without having to micro-manage to avoid two people counting the same section? That is where new programming models and frameworks stepped in over the last 10 years and gave us the beloved buzz word “Big Data”. Spark is not the only option here, but it has a lot of strengths and is often chosen over the traditional single machine processing options.
  3. A fast and general engine for large-scale data processing, uses memory to provide benefit Often replaces MapReduce as the parallel programming API on Hadoop; the way it handles data (RDDs) provides one performance benefit, and use of memory when possible provides another large performance benefit Can run on Hadoop (using Yarn) but also as a separate Spark cluster. Local is possible as well but reduces the performance benefits…I find it’s still a useful API though Run Java, Scala, Python, or R. If you don’t already know one of those languages really well, I recommend trying it in Python and Scala and picking whichever is easiest for you. Several modules for different use cases with a similar API, so you can swap between modes relatively easily. For example, we have both streaming and batch sources of some data and we reuse the rest of the Spark processing transformations.
  4. In the day to day we will talk about writing Spark code and also refer to running the code on the Spark cluster. There are actually quite a few options for how to do either of these things, but a quick look at Spark code that uses Spark DataFrames in Python. And then whatever cluster we run it on will have a concept of a master node and worker nodes, as well as some storage that is often a hybrid of local storage on the workers plus a distributed file system like Hadoop’s HDFS, Amazon S3, or Azure Data Lake Storage. If you don’t follow all those terms, it’s ok. There is plenty of time to build up to those concepts after you start learning to write spark code and run it in a simple Spark environment. We will cover that in other videos.
  5. So we sort of get what Spark is, we saw a small code sample and discussed how a cluster exists to run the code on. Let’s go back to a higher level and talk about Spark’s strengths.
  6. Quick overview of important databricks workspace segments – Clusters, Tables, Notebooks Open create_parquet_tables notebook and run first few commands as examples of working without delta
  7. Atomicity – a typical Spark save does not use locking and is not atomic, so it could leave incomplete changes behind and corrupt data. Overwrite will remove data before loading new data, so typically not an issue. With append mode the default committer should have atomicity, but some of the faster committers don’t guarantee atomicity. - Learning Journal, Delta Lake for Apache Spark video on YouTube Consistency – with a typical Spark overwrite there is a time where no files exist, and if failure happens at that point you are left in an invalid state. Isolation – an operation that is in progress (not committed) should not impact the results of other reads or writes...do not want dirty reads. A typical database offers different isolation levels, but Spark doesn’t have specific isolation options such as read committed and serializable. Task-level and job-level commits exist, but the lack of atomicity in the write leaves this not fully working. Durability – typically not an issue, though lack of commit can lead to issues here as well
  10. Quote and image from Databricks blog post by Burak Yavuz, Michael Armbrust and Brenner Heintz -> https://databricks.com/blog/2019/08/21/diving-into-delta-lake-unpacking-the-transaction-log.html
  11. Demo notebook create_delta_tables Show bad data when running one set of writes from one source, then run from second source Same example with delta destination to show failure Same example but tweaked to allow schema merge Show transaction log files Demo of file where data was streamed in, show by timestamp and version