Cloud storage is evolving rapidly, and our Azure Storage portfolio has added many new industry-leading capabilities. In this session you will learn the do's and don'ts of building data lakes on Azure Data Lake Storage. You will learn about the commonly used patterns, how to set up your accounts and pipelines to maximize performance, how to organize your data, and the various options to secure access to your data. We will also cover customer use cases and highlight planned enhancements and upcoming features.
Data Quality Patterns in the Cloud with Azure Data Factory (Mark Kromer)
This is my slide presentation from Pragmatic Works' Azure Data Week 2019: Data Quality Patterns in the Cloud with Azure Data Factory using Mapping Data Flows
Azure Data Factory Data Wrangling with Power Query (Mark Kromer)
ADF has embedded Power Query in Data Factory for a code-free / data-first data wrangling experience. Use the Power Query spreadsheet-style interface in your data factory to explore and prep your data, then execute your M script at scale on ADF's Spark data flow integration runtimes.
Microsoft Ignite AU 2017 - Orchestrating Big Data Pipelines with Azure Data F... (Lace Lofranco)
Data orchestration is the lifeblood of any successful data analytics solution. Take a deep dive into Azure Data Factory's data movement and transformation activities, particularly its integration with Azure's Big Data PaaS offerings such as HDInsight, SQL Data Warehouse, Data Lake, and AzureML. Participants will learn how to design, build and manage big data orchestration pipelines using Azure Data Factory, and how it stacks up against similar Big Data orchestration tools such as Apache Oozie.
Video of presentation:
https://channel9.msdn.com/Events/Ignite/Australia-2017/DA332
In this introductory session, we dive into the inner workings of the newest version of Azure Data Factory (v2) and take a look at the components and principles that you need to understand to begin creating your own data pipelines. See the accompanying GitHub repository @ github.com/ebragas for code samples and ADFv2 ARM templates.
Here are the slides for my talk "An intro to Azure Data Lake" at Techorama NL 2018. The session was held on Tuesday October 2nd from 15:00 - 16:00 in room 7.
Azure Data Factory is one of the newer data services in Microsoft Azure and is part of the Cortana Analytics Suite, providing data orchestration and movement capabilities.
This session will describe the key components of Azure Data Factory and take a look at how you create data transformation and movement activities using the online tooling. Additionally, the new tooling that shipped with the recently updated Azure SDK 2.8 will be shown in order to provide a quickstart for your cloud ETL projects.
Integration Monday - Analysing StackExchange data with Azure Data Lake (Tom Kerkhove)
Big data is the new big thing, where storing the data is the easy part; gaining insights from your pile of data is something different.
Based on a data dump of the well-known StackExchange websites, we will store & analyse 150+ GB of data with Azure Data Lake Store & Analytics to gain some insights about their users. After that we will use Power BI to give an at-a-glance overview of our learnings.
If you are a developer who is interested in big data, this is your time to shine! We will use our existing SQL & C# skills to analyse everything without having to worry about running clusters.
Streaming Real-time Data to Azure Data Lake Storage Gen 2 (Carole Gunst)
Check out this presentation to learn the basics of using Attunity Replicate to stream real-time data to Azure Data Lake Storage Gen2 for analytics projects.
Analyzing StackExchange data with Azure Data Lake (BizTalk360)
Big data is the new big thing, where storing the data is the easy part; gaining insights from your pile of data is something different. Based on a data dump of the well-known StackExchange websites, we will store & analyse 150+ GB of data with Azure Data Lake Store & Analytics to gain some insights about their users. After that we will use Power BI to give an at-a-glance overview of our learnings.
If you are a developer who is interested in big data, this is your time to shine! We will use our existing SQL & C# skills to analyse everything without having to worry about running clusters.
AWS March 2016 Webinar Series - Building Your Data Lake on AWS (Amazon Web Services)
Uncovering new, valuable insights from big data requires organizations to collect, store, and analyze increasing volumes of data from multiple, often disparate sources at disparate points in time. This makes it difficult to handle big data with data warehouses or relational database management systems alone.
A Data Lake allows you to store massive amounts of data in its original form, without the need to enforce a predefined schema, enabling a far more agile and flexible architecture, which makes it easier to gain new types of analytical insights from your data.
In this webinar, we will introduce key concepts of a Data Lake and present aspects related to its implementation. We will discuss critical success factors, pitfalls to avoid as well as operational aspects such as security, governance, search, indexing and metadata management.
Learning Objectives:
• Learn how AWS can help enable a Data Lake architecture
• Understand some of the key architectural considerations when building a Data Lake
• Hear some of the important Data Lake implementation considerations
Who Should Attend:
• Data architects, data scientists, advanced AWS developers
"Conceptually, a data lake is a flat data store to collect data in its original form, without the need to enforce a predefined schema. Instead, new schemas or views are created “on demand”, providing a far more agile and flexible architecture while enabling new types of analytical insights. AWS provides many of the building blocks required to help organizations implement a data lake. In this session, we will introduce key concepts for a data lake and present aspects related to its implementation. We will discuss critical success factors, pitfalls to avoid as well as operational aspects such as security, governance, search, indexing and metadata management. We will also provide insight on how AWS enables a data lake architecture.
A data lake is a flat data store to collect data in its original form, without the need to enforce a predefined schema. Instead, new schemas or views are created ""on demand"", providing a far more agile and flexible architecture while enabling new types of analytical insights. AWS provides many of the building blocks required to help organizations implement a data lake. In this session, we introduce key concepts for a data lake and present aspects related to its implementation. We discuss critical success factors and pitfalls to avoid, as well as operational aspects such as security, governance, search, indexing, and metadata management. We also provide insight on how AWS enables a data lake architecture. Attendees get practical tips and recommendations to get started with their data lake implementations on AWS."
Antoine Genereux takes us through a detailed overview of the database solutions available on the AWS Cloud, addressing the needs and requirements of customers at all levels. He also discusses business intelligence and analytics solutions.
A data lake can be used as a source for both structured and unstructured data - but how? We'll look at using open standards including Spark and Presto with Amazon EMR, Amazon Redshift Spectrum and Amazon Athena to process and understand data.
Speakers:
Neel Mitra - Solutions Architect, AWS
Roger Dahlstrom - Solutions Architect, AWS
A data lake can be used as a source for both structured and unstructured data - but how? We'll look at using open standards including Spark and Presto with Amazon EMR, Amazon Redshift Spectrum and Amazon Athena to process and understand data.
Level: Intermediate
Speakers:
Tony Nguyen - Senior Consultant, ProServe, AWS
Hannah Marlowe - Consultant - Federal, AWS
Data Analytics Week at the San Francisco Loft
Using Data Lakes
A data lake can be used as a source for both structured and unstructured data - but how? We'll look at using open standards including Spark and Presto with Amazon EMR, Amazon Redshift Spectrum and Amazon Athena to process and understand data.
Speakers:
John Mallory - Principal Business Development Manager Storage (Object), AWS
Hemant Borole - Sr. Big Data Consultant, AWS
Is the traditional data warehouse dead? (James Serra)
With new technologies such as Hive LLAP or Spark SQL, do I still need a data warehouse or can I just put everything in a data lake and report off of that? No! In the presentation I’ll discuss why you still need a relational data warehouse and how to use a data lake and a RDBMS data warehouse to get the best of both worlds. I will go into detail on the characteristics of a data lake and its benefits and why you still need data governance tasks in a data lake. I’ll also discuss using Hadoop as the data lake, data virtualization, and the need for OLAP in a big data solution. And I’ll put it all together by showing common big data architectures.
Today organizations find themselves in a data rich world with a growing need for increased agility and accessibility of all this data for analysis and deriving keen insights to drive strategic decisions. Creating a data lake helps you to manage all the disparate sources of data you are collecting (in its original format) and extract value. In this session, learn how to architect and implement a data lake in the AWS Cloud. Learn about best practices as we walk through architectural blueprints.
Serverless Big Data Analytics with Amazon Athena and QuickSight (Amazon Web Services)
Check out how you can easily query raw data in various formats in Amazon S3, transform it into a canonical form, analyze it, and build dashboards to get more insights from your data.
Azure Days 2019: Business Intelligence auf Azure (Marco Amhof & Yves Mauron) - Trivadis
In this session we present a project in which we built a comprehensive BI system for and in the Azure cloud using Azure Blob Storage, Azure SQL, Azure Logic Apps and Azure Analysis Services. We report on the challenges, how we solved them, and which learnings and best practices we took away.
AWS re:Invent 2016: How to Build a Big Data Analytics Data Lake (LFS303) - Amazon Web Services
For discovery-phase research, life sciences companies have to support infrastructure that processes millions to billions of transactions. The advent of a data lake to accomplish such a task is showing itself to be a stable and productive data platform pattern to meet the goal. We discuss how to build a data lake on AWS, using services and techniques such as AWS CloudFormation, Amazon EC2, Amazon S3, IAM, and AWS Lambda. We also review a reference architecture from Amgen that uses a data lake to aid in their Life Science Research.
Move your on-prem data to a Lake in the Cloud (CAMMS)
With the boom in data, both in volume and complexity, the trend is to move data to the cloud. Where and how do we do this? Azure gives you the answer. In this session, I will give you an introduction to Azure Data Lake and Azure Data Factory, and explain why they are a good fit for the type of problem we are talking about. You will learn how large datasets can be stored in the cloud, and how you can transport your data to this store. The session will briefly cover Azure Data Lake as the modern warehouse for data in the cloud.
Sql Bits 2020 - Designing Performant and Scalable Data Lakes using Azure Data Lake Storage
1. Designing performant and scalable data lakes using Azure Data Lake Storage
Rukmani Gopalan (@RukmaniGopalan)
2. Agenda
• Data Lake Concepts and Patterns
• Designing your data lake: set up, organize data, secure data, manage cost
• Optimizing your data lake: achieve the best performance and scale
3. Traditional on-prem analytics pipeline
[Diagram] Multiple operational databases and business/custom apps feed an enterprise data warehouse through ETL jobs; further ETL loads data marts that serve reporting, analytics, and data mining.
4. Modern data warehouse
[Diagram] Logs (structured), media and files (unstructured), and business/custom apps (structured) flow through four stages: ingest with Azure Data Factory, store in Azure Data Lake Storage, prep & train with Azure Databricks and Azure Synapse Analytics, and model & serve with Azure Synapse Analytics and Power BI.
5. Advanced Analytics
[Diagram] The same ingest / store / prep & train / model & serve pipeline (Azure Data Factory, Azure Data Lake Storage, Azure Databricks, Azure Synapse Analytics), now serving apps through Cosmos DB in addition to Power BI.
6. Realtime Analytics
[Diagram] Adds sensors and IoT (unstructured) as a source, ingested through a message broker alongside Azure Data Factory; the rest of the pipeline (Azure Data Lake Storage, Azure Databricks, Azure Synapse Analytics, Cosmos DB, Power BI, apps) is unchanged.
7. A "no-compromises" Data Lake: secure, performant, massively-scalable Data Lake storage that brings the cost and scale profile of object storage together with the performance and analytics feature set of data lake storage.
Azure Data Lake Storage
• SCALABLE - no limits on data store size; global footprint (50 regions)
• FAST - optimized for Spark and Hadoop analytic engines; atomic directory operations mean jobs complete faster
• SECURE - support for fine-grained ACLs, protecting data at the file and folder level; multi-layered protection via at-rest Storage Service encryption and Azure Active Directory integration
• MANAGEABLE - automated lifecycle policy management; object-level tiering
• COST EFFECTIVE - object store pricing levels; file system operations minimize the transactions required for job completion
• INTEGRATION READY - tightly integrated with Azure end-to-end analytics solutions
8. Azure Data Lake Storage
Cloud storage platform with first-class file/folder semantics and support for multiple protocols and cost/performance tiers. Built on object storage.
Common Blob Storage foundation: object tiering and lifecycle policy management; AAD integration, RBAC, and storage account security; HA/DR support through ZRS and RA-GRS.
• Blob API - object data: server backups, archive storage, semi-structured data
• ADLS API - analytics data: Hadoop file system, file and folder hierarchy, granular ACLs, atomic file transactions
• NFS v3 (preview) - file data: HPC data, applications using NFS v3 against large sequentially read data sets
9. Data Lake Architecture - Summary
• Store large volumes of multi-structured data in its native format
• Defer the work to 'schematize' until after value & requirements are known (schema-on-read; see the sketch below)
• Extract high-value insights from the multi-structured data
• Build intelligent business scenarios based on the insights
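To make schema-on-read concrete, here is a minimal PySpark sketch: raw JSON lands in the lake untouched, and a schema is applied only when the data is read for analysis. The abfss:// path and field names are illustrative assumptions, not from the deck.

```python
# Minimal schema-on-read sketch with PySpark: raw JSON files sit in the lake
# in their native form; a schema is applied at read time, not at ingestion.
from pyspark.sql import SparkSession
from pyspark.sql.types import (StructType, StructField, StringType,
                               DoubleType, TimestampType)

spark = SparkSession.builder.appName("schema-on-read").getOrCreate()

schema = StructType([
    StructField("sensorId", StringType()),
    StructField("eventTime", TimestampType()),
    StructField("temperature", DoubleType()),
])

# Apply the schema when reading, after value & requirements are known.
readings = (spark.read
            .schema(schema)
            .json("abfss://raw@mydatalake.dfs.core.windows.net/sensors/"))
readings.show()
```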
10. Designing Your Data Lake
• How do I set up my data lake?
• How do I organize my data?
• How do I secure my data?
• How do I manage cost?
11. How do I set up my data lake?
• Centralized vs federated implementation
• Data management and administration - done by a central team vs by business units/domains
• Blueprint approach to federated data lakes with centralized governance - flexible, with a single or multiple storage accounts
12. Recommendations
• Isolate development vs pre-production and production data lakes
• Identify logical datasets, resources and management needs - this drives the centralized vs federated approach (business unit boundaries, regional boundaries)
• Promote sharing data/insights across business units - beware of data silos
13. How do I organize my data?
Azure Data Lake Storage hierarchy (a code sketch follows):
• Storage account - the Azure resource that contains data objects
• Container - organizes within the storage account; contains a set of files/folders
• Folder/directory - organizes within a container; contains a set of files/folders; Hadoop file system friendly
• File - holds data that can be read or written
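This hierarchy maps directly onto the azure-storage-file-datalake SDK. A minimal sketch, assuming a hypothetical account named mydatalake and placeholder container/folder/file names:

```python
# Hedged sketch of the account -> container -> directory -> file hierarchy
# using the azure-storage-file-datalake SDK. All names are placeholders.
from azure.identity import DefaultAzureCredential
from azure.storage.filedatalake import DataLakeServiceClient

service = DataLakeServiceClient(
    account_url="https://mydatalake.dfs.core.windows.net",
    credential=DefaultAzureCredential(),  # AAD auth, as the deck recommends
)

# Container (file system) -> folder -> file
filesystem = service.create_file_system("raw")
directory = filesystem.create_directory("sensors/2020/06")
file_client = directory.create_file("readings.csv")

data = b"sensorId,eventTime,temperature\ns1,2020-06-01T00:00:00Z,21.5\n"
file_client.append_data(data, offset=0, length=len(data))
file_client.flush_data(len(data))
```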
14. Recommendations
• Organize data based on semantic structure as well as desired access control
• Separate the different zones into different accounts, containers or folders depending on business need
15. How do I secure my data?
• Perimeter/network: service endpoints, private endpoints
• Authentication: Azure Active Directory (recommended), shared keys, SAS tokens
• Authorization: RBACs (coarse-grained), POSIX ACLs (fine-grained)
• Data protection: encryption on-the-wire with HTTPS; encryption at rest with service- and customer-managed keys; diagnostic logs
16. A Little More on Authorization
RBACs and ACLs integrated with AAD:
• RBACs - at storage account and container scope
• ACLs - at file and folder scope
Other access mechanisms (not recommended):
• Shared keys - disable if not needed (preview)
• SAS tokens - short-lived access
17. Recommendations
• Use service or private endpoints for network security
• Use Azure Active Directory authentication to manage access
• Use RBACs for coarse-grained access (at storage account or container level) and ACLs for fine-grained access control (at file or folder level); see the sketch below
• AAD groups greatly simplify your access management
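A minimal sketch of these recommendations with the azure-storage-file-datalake SDK: AAD authentication plus a folder-level POSIX ACL granted to an AAD group. The group object ID, account name, and paths are illustrative placeholders:

```python
# Hedged sketch: granting an AAD group fine-grained POSIX ACLs on a folder.
from azure.identity import DefaultAzureCredential
from azure.storage.filedatalake import DataLakeServiceClient

service = DataLakeServiceClient(
    account_url="https://mydatalake.dfs.core.windows.net",
    credential=DefaultAzureCredential(),
)
directory = service.get_file_system_client("raw").get_directory_client("sensors")

# Give an analysts AAD group (placeholder object ID) read + execute on the
# folder; keep owner rwx and deny everyone else.
acl = ("user::rwx,group::r-x,other::---,"
       "group:00000000-0000-0000-0000-000000000000:r-x")
directory.set_access_control(acl=acl)

# Apply the same ACL to everything already under the folder.
directory.update_access_control_recursive(acl=acl)
```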
18. How do I manage cost?
• Choose the right set of features for your business - cost vs benefit
• E.g. redundancy options - weigh the criticality of geo-redundancy for production vs dev environments: single region (LRS, ZRS) vs dual region (GRS, (RA-)GRS, GZRS)
19. How do I manage cost? (Continued…)
• Control data growth - minimize the risk of a data swamp
• Workspace data management
• Leverage lifecycle management policies: tiering, retention
20. Recommendations
• Choose the features of data lake storage based on business need - pre-prod and development environment needs might vary from production environment needs
• Leverage lifecycle management policies for better data management (a sketch follows)
• Move data to a cooler tier if not actively used - be aware of higher transaction costs and minimum retention policies
• Use retention policies to delete data that is not needed
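A sketch of a lifecycle management policy, expressed as the JSON body that Azure Storage management policies accept and built here in Python; the prefix and day thresholds are illustrative assumptions:

```python
# Hedged sketch of a lifecycle management policy: tier down, then delete.
# Apply the generated file with, e.g.:
#   az storage account management-policy create --account-name mydatalake \
#       --resource-group my-rg --policy @policy.json
import json

policy = {
    "rules": [
        {
            "enabled": True,
            "name": "cool-then-delete-raw",
            "type": "Lifecycle",
            "definition": {
                "filters": {
                    "blobTypes": ["blockBlob"],
                    "prefixMatch": ["raw/sensors"],  # placeholder prefix
                },
                "actions": {
                    "baseBlob": {
                        # Tier data to cool once it has not been modified
                        # for 90 days (watch transaction/retention costs)...
                        "tierToCool": {"daysAfterModificationGreaterThan": 90},
                        # ...and delete it after a year, per retention policy.
                        "delete": {"daysAfterModificationGreaterThan": 365},
                    }
                },
            },
        }
    ]
}

with open("policy.json", "w") as f:
    json.dump(policy, f, indent=2)
```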
21. How do I optimize my data lake?
Goal: optimize for performance AND scale as the data and applications continue to grow on the data lake.
The basic considerations are:
• Optimize for high throughput - target at least a few MBs per transaction (the higher the better)
• Optimize data access patterns - reduce unnecessary scanning of files, read only the data you need to read, and write efficiently so that downstream applications that read the data benefit (see the sketch below)
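A minimal PySpark sketch of "read only the data you need": with Parquet, projecting columns and filtering early lets the engine prune columns and push predicates down to the files instead of scanning everything. Paths and column names are illustrative:

```python
# Hedged sketch: column projection and predicate pushdown on Parquet reads.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("pruned-read").getOrCreate()

hot = (spark.read
       .parquet("abfss://curated@mydatalake.dfs.core.windows.net/sensors/")
       .select("sensorId", "temperature")      # column projection
       .where(F.col("temperature") > 250))     # predicate pushed to the files

hot.write.mode("overwrite").parquet(
    "abfss://curated@mydatalake.dfs.core.windows.net/hot-sensors/")
```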
22. File size and format
• Too many small files adversely impact performance
• Choosing the right format means better performance AND lower cost
• Parquet - integrated optimizations with Azure Synapse Analytics and Azure Databricks
Recommendations:
• Modify the source to ingest larger files into the data lake
• Coalesce and convert to the right format (e.g. Parquet) in the curation phase of your analytics pipelines (a sketch follows)
• Realtime analytics pipelines (e.g. sensor data in an IoT application) - microbatch for larger writes
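A sketch of that curation step, assuming many small CSV files in a raw zone; the paths and the target file count of 8 are illustrative:

```python
# Hedged sketch: coalesce many small CSV files into fewer, larger Parquet
# files during curation. Tune the coalesce count to your data volume.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("curate").getOrCreate()

raw = spark.read.option("header", True).csv(
    "abfss://raw@mydatalake.dfs.core.windows.net/sensors/2020/06/")

(raw.coalesce(8)        # fewer, larger output files
    .write
    .mode("overwrite")
    .parquet("abfss://curated@mydatalake.dfs.core.windows.net/sensors/2020/06/"))
```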
23. Partition your data for optimized access
Partition based on consumption patterns for optimized performance.
[Diagram] Example: sensor readings with columns Sensor ID, Year, Temperature, Humidity and Pressure, partitioned by the columns queries filter on.
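A sketch of partitioning on write with PySpark, assuming queries usually filter on sensor and year (the columns from the slide's example); paths are placeholders:

```python
# Hedged sketch: write with a partition layout that matches the query pattern.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("partition").getOrCreate()

readings = spark.read.parquet(
    "abfss://curated@mydatalake.dfs.core.windows.net/sensors/")

(readings.write
    .partitionBy("sensorId", "year")   # folder-per-value: sensorId=.../year=...
    .mode("overwrite")
    .parquet("abfss://curated@mydatalake.dfs.core.windows.net/sensors-by-id/"))
```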
24. Query Acceleration (Preview)
Optimize access to structured data by filtering data directly in the storage service: single-file predicate evaluation and column projection offload work from analytics engines.
E.g.:
SELECT _1, _3 FROM BlobStorage WHERE _14 < 250 AND _16 > '2019-07-01'
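The same query can be issued from Python through the azure-storage-blob SDK's query_blob; a minimal sketch, with the CSV dialect, container, and blob names as illustrative assumptions:

```python
# Hedged sketch: query acceleration from Python. Filtering and projection
# happen inside the storage service; only matching rows/columns come back.
from azure.identity import DefaultAzureCredential
from azure.storage.blob import BlobServiceClient, DelimitedTextDialect

service = BlobServiceClient(
    account_url="https://mydatalake.blob.core.windows.net",
    credential=DefaultAzureCredential(),
)
blob = service.get_blob_client("curated", "sensors/readings.csv")

dialect = DelimitedTextDialect(delimiter=",", quotechar='"', has_header=False)
reader = blob.query_blob(
    "SELECT _1, _3 FROM BlobStorage WHERE _14 < 250 AND _16 > '2019-07-01'",
    blob_format=dialect,
    output_format=dialect,
)
print(reader.readall().decode("utf-8"))
```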
25. Guidance from experts
Microsoft Docs - explore overviews, tutorials, code samples, and more:
• Azure Data Lake Storage: https://docs.microsoft.com/azure/storage/blobs/data-lake-storage-introduction
• Azure Data Lake Storage guidance document: https://aka.ms/adls/guidancedoc
• Azure Synapse Analytics: https://docs.microsoft.com/azure/synapse-analytics
An Azure Virtual Network (VNet) is a representation of your own network in the cloud. It is a logical isolation of the Azure cloud dedicated to your subscription. ... When you create a VNet, your services and VMs within your VNet can communicate directly and securely with each other in the cloud.
Symptom: Job latencies
Investigation: storage request throttling
Root cause: too many read operations against storage - a large number of row groups in the Databricks Delta Parquet files resulted in lots of read operations
Solution: adjusted the parquet.block.size config value to reduce the number of row groups per Parquet file (a sketch follows); job runtimes reduced by 3x
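A sketch of that tuning in PySpark; the 128 MB value is illustrative, and note that reaching the Hadoop configuration through sparkContext._jsc is a common but semi-internal route:

```python
# Hedged sketch: tune parquet.block.size (the Parquet row-group size) so
# written files contain fewer, larger row groups and trigger fewer reads.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("rowgroup-tuning").getOrCreate()

# parquet.block.size is a Hadoop configuration read by the Parquet writer.
spark.sparkContext._jsc.hadoopConfiguration().setInt(
    "parquet.block.size", 128 * 1024 * 1024)  # illustrative 128 MB

df = spark.read.parquet("abfss://curated@mydatalake.dfs.core.windows.net/events/")
df.write.mode("overwrite").parquet(
    "abfss://curated@mydatalake.dfs.core.windows.net/events-tuned/")
```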
Symptom: Job timeouts
Investigation: transaction and throughput peaks with a bursty load pattern; storage request throttling
Root cause: data cleanup during SLA job execution; a large number of partitions (tens of thousands)
Solution: reduced the number of partitions to 250 (a sketch follows); reduced the number of delete operations while the SLA job is running
Best practice: your partitioning strategy must align with your query pattern
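A sketch of collapsing an over-partitioned layout by rewriting with a coarser partition key; the columns, paths, and the idea that a single date key lands near the case study's 250 partitions are illustrative assumptions:

```python
# Hedged sketch: rewrite an over-partitioned dataset with one coarse key
# (e.g. date) instead of thousands of key combinations.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("departition").getOrCreate()

events = spark.read.parquet(
    "abfss://curated@mydatalake.dfs.core.windows.net/events-by-sensor-hour/")

(events.write
    .partitionBy("eventDate")   # one coarse key, aligned with the query pattern
    .mode("overwrite")
    .parquet("abfss://curated@mydatalake.dfs.core.windows.net/events-by-date/"))
```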