The document discusses best practices and performance tuning for U-SQL in Azure Data Lake. It provides an overview of U-SQL query execution, including the job scheduler, query compilation process, and vertex execution model. The document also covers techniques for analyzing and optimizing U-SQL job performance, including analyzing the critical path, using heat maps, optimizing AU usage, addressing data skew, and query tuning techniques like data loading tips, partitioning, predicate pushing and column pruning.
Best Practices and Performance Tuning of U-SQL in Azure Data Lake (SQL Konferenz 2018)
1. Best Practices and Performance Tuning of U-SQL in Azure Data Lake
Michael Rys
Principal Program Manager, Microsoft
@MikeDoesBigData, usql@microsoft.com
2. Session Objectives And Takeaways
• Session Objective(s):
• Understand some common best practices to make your U-SQL scale
• Understand the U-SQL Query execution
• Be able to understand and improve U-SQL job performance/cost
• Be able to understand and improve the U-SQL query plan
• Know how to write more efficient U-SQL scripts
• Key Takeaways:
• U-SQL is designed for scale-out
• U-SQL provides scalable execution of user code
• U-SQL has a tool set that can help you analyze and improve your scalability, cost and performance
3. Agenda
• Job Execution Experience and Investigations
Query Execution
Stage Graph
Dryad crash course
Job Metrics
Resource Planning
Job Parallelism and AUs
• Job Performance Analysis
Analyze the critical path
Heat Map
Critical Path
Data Skew
• Cost Optimizations
AU usage
AU modeller
• Tuning / Optimizations
Data Loading Tips
Data Partitioning (Files and Tables)
INSERT optimizations
Partition Elimination
Predicate Pushing
Column Pruning
Addressing Data Skew
Some Data Hints
UDOs can be evil
5. Expression-flow Programming Style
• Automatic "in-lining" of U-SQL expressions – the whole script leads to a single execution model.
• Execution plan that is optimized out-of-the-box and w/o user intervention.
• Per-job and user-driven level of parallelization.
• Detailed visibility into execution steps, for debugging.
• Heatmap-like functionality to identify performance bottlenecks.
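A minimal sketch of what this looks like in practice, using the standard SearchLog sample schema (path and columns are illustrative): the rowset expressions below are not executed one at a time, but composed and optimized into a single job plan.

// Illustrative paths and schema; sketch of expression-flow composition
@searchlog =
    EXTRACT UserId int, Region string, Duration int
    FROM "/input/SearchLog.tsv"
    USING Extractors.Tsv();

@regions =
    SELECT Region, SUM(Duration) AS TotalDuration
    FROM @searchlog
    GROUP BY Region;

@filtered =
    SELECT * FROM @regions
    WHERE TotalDuration > 200;   // in-lined into the aggregation plan, not run as a separate step

OUTPUT @filtered
TO "/output/regions.csv"
USING Outputters.Csv();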
6. Overall U-SQL Batch Job Execution Lifetime
[Architecture diagram: the author submits the script through the Front-End Service into the Job Scheduler & Queue; the Compiler and Optimizer produce an optimized plan with stage codegen (C++/C# compilation), using the U-SQL catalog; the Job Manager schedules vertices onto YARN containers, where the U-SQL runtime executes them against local storage and the Data Lake Store, and the results are consumed.]
7. Overall U-SQL Batch Job Execution Lifetime – Phases
[Same architecture diagram, annotated with the job phases: Preparation Phase (compilation and optimization, vertex codegen with C++/C# compilation against the U-SQL catalog), Queueing, Execution Phase (the Job Manager schedules vertices onto YARN containers, where the U-SQL runtime runs against local storage and the Data Lake Store), and Finalization Phase.]
8. U-SQL Preparation into Vertex (.NET example)
[Diagram: compilation and optimization turn the script into algebra plus generated C# and C++ code; the compilation output in the job folder (managed dlls, native dlls, additional non-dll files and deployed resources), together with REFERENCE ASSEMBLY content from the U-SQL metadata service, DEPLOY RESOURCE files from ADLS (or WASB), and the system files (built-in runtimes, core DLLs, OS), is deployed to the vertices.]
13. Stage Details
• Super Vertex = Stage
• 6307 pieces of work: the same code applied to different data partitions
• AVG vertex execution time
• 4.4 billion rows of data read & written
16. AU allocation vs Job Parallelism/Scale Out
• Job’s inherent scale out is determined by
• Size of input data plus data partitioning
• Operations
• Cross join vs parallelizable equi-join
• Order By: Global order vs local order
• REDUCE ALL vs REDUCE ON part_key
• Hints:
• Data size hints
• Partition plan hint (works only if the plan space contains an option for that plan)
• AU allocation does not specify Job Scale Out but specifies upper limit
• Increasing the AU does not make your job necessarily faster
• If you over-specify AU allocation, you will pay for overallocation!!!
24. Data Storage
• Files
  • Unstructured data in files
  • Files are split into 250MB extents
  • 4 extents per vertex -> 1GB per vertex
  • Different file content formats:
    • Splittable formats are parallelizable: row-oriented (CSV etc.), where data does not span extents
    • Non-splittable formats cannot be parallelized: XML/JSON have to be processed in a single-vertex extractor with atomicFileProcessing=true
  • Use file sets to provide semantic partition pruning
• Tables
  • Clustered index (row-oriented) storage
  • Vertical and horizontal partitioning
  • Statistics for the optimizer (CREATE STATISTICS)
  • Native scalar value serialization
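As a hedged illustration of the non-splittable case: assuming the sample JSON extractor from the U-SQL samples library (Microsoft.Analytics.Samples.Formats, which declares atomic file processing) has been registered in the catalog, each JSON file is then read as a whole in a single vertex; the path and columns are hypothetical.

// Assumes the sample format assemblies are registered; hypothetical path/schema
REFERENCE ASSEMBLY [Newtonsoft.Json];
REFERENCE ASSEMBLY [Microsoft.Analytics.Samples.Formats];

@impressions =
    EXTRACT ClientId int,
            Market string
    FROM "/input/impressions/{*}.json"
    USING new Microsoft.Analytics.Samples.Formats.Json.JsonExtractor();  // atomic file processing: no extent-level parallelism within a file

OUTPUT @impressions
TO "/output/impressions.csv"
USING Outputters.Csv();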
26. Real-Time Data Ingestion
Promises of the Data Lake:
1. Store everything and anything in its original form, figure it out later
2. BYO compute (Hadoop, ADLA, other) to a shared collection of data
3. Massive scale, no limits, never run out of storage
No 1 is where we see most customers get into trouble!
27. Real-Time Data Ingestion
Many customers take the “original form” a little too literally
“The original form of my data is the event… I need to keep it”
But
• Discrete database transactions have been occurring forever, one row at a time
• We store them in singular (table) structures for efficiency
• We can still find a discrete record in its absolute original form
• We should apply similar optimizations to Data Lakes!!!
29. Real-Time Data Ingestion
Moral of the story…
• Small files are suboptimal in every scenario
• Consider alternatives to concatenate into larger files:
• Offline outside of Azure
• Event Hubs Capture
• Stream Analytics
• … or ADLA fast file sets to compact most recent deltas
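A minimal sketch of such a compaction job, assuming hypothetical paths and a date-based folder layout: a file set EXTRACT reads the many small event files for one day, and a single OUTPUT rewrites them as one larger file.

// Hypothetical paths/schema; compacting one day's small files into one larger file
@events =
    EXTRACT ClientId int,
            EventTime DateTime,
            Payload string,
            Date DateTime            // virtual column bound from the folder structure
    FROM "/events/raw/{Date:yyyy}/{Date:MM}/{Date:dd}/{*}.csv"
    USING Extractors.Csv();

@oneday =
    SELECT ClientId, EventTime, Payload
    FROM @events
    WHERE Date == DateTime.Parse("2018-02-21");   // partition elimination: only that day's files are read

OUTPUT @oneday
TO "/events/compacted/2018-02-21.csv"
USING Outputters.Csv();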
30. // Unstructured files (24 hours of daily log impressions)
@Impressions =
    EXTRACT ClientId int, Market string, OS string, ...
    // Unnamed wildcard: extracts all files in the folder; the post-filter pays the I/O cost to drop most data
    // FROM @"wasb://ads@wcentralus/2015/10/30/{*}.nif"
    // Named virtual column: PE pushes the WHERE predicate into the EXTRACT, which now only reads "en" files
    FROM @"wasb://ads@wcentralus/2015/10/30/{Market}_{*}.nif"
    ;
// Filter by Market
@US =
    SELECT * FROM @Impressions
    WHERE Market == "en"
    ;

Partition Elimination
• Works even with unstructured files!
• Leverage named virtual columns; avoid unnamed {*}
• WHERE predicates on named virtual columns bind the PE range during compilation time
• Named virtual columns without a predicate = warning
• Design directories/files with PE in mind
• Design for elimination early in the tree, not in the leaves

(Example folder ../2015/10/30/ contains en_10.0.nif, en_8.1.nif, de_10.0.nif, de_8.1.nif, jp_7.0.nif, …)

U-SQL Optimizations
Partition Elimination – Unstructured Files
31. FileSet optimizations
• Use
SET @@FeaturePreviews="InputFileGrouping:on";
to group small files into single vertex (instead of one file per vertex)
• Also use to speed up preparation time and produce faster code:
SET @@FeaturePreviews =
"FileSetV2Dot5:on,AsyncCompilerStoreAccess:on";
(will be default with next refresh and not needed afterwards)
32. Cost of small files with custom extractor
• Custom code deployment during vertex creation dominates vertex execution time over actual work!
[Vertex timeline screenshot: vertex creation time (deploying vertex content) dwarfs the actual vertex execution work.]
37. Table Partitioning and Distribution
Fine grained (horizontal) partitioning/distribution
• Distributes within a partition (together with clustering) to keep same data values close
• Choose for: join alignment, partition size, filter selectivity, partition elimination
Coarse grained (vertical) partitioning
• Based on partition keys
• Partition is addressable in language
• Query predicates will allow partition pruning
• Choose for: data life cycle management, partition elimination
Distribution schemes and when to use them:
• HASH(keys): Automatic hash for fast item lookup
• DIRECT HASH(id): Exact control of hash bucket value
• RANGE(keys): Keeps ranges together
• ROUND ROBIN: To get equal distribution (if others give skew)
38. Partitions, Distributions and Clusters
TABLE T ( id …
        , C …
        , date DateTime, …
        , INDEX i CLUSTERED (id, C)
          PARTITIONED BY (date)
          DISTRIBUTED BY HASH(id) INTO 4
        )
[Diagram – LOGICAL view: each partition (PARTITION (@date1), (@date2), (@date3)) is split into hash distributions 1-4 on id, so rows with the same id value land in the same distribution bucket of their partition. PHYSICAL view: each partition is stored as its own file under /catalog/…/tables/Guid(T)/ as Guid(T.p1).ss, Guid(T.p2).ss, Guid(T.p3).ss.]
39. Benefits of Tables
Benefits of Table clustering and distribution
• Faster lookup of data provided by distribution and clustering when the right distribution/cluster is chosen
• Data distribution provides better localized scale out
• Used for filters, joins and grouping
Benefits of Table partitioning
• Provides data life cycle management ("expire" old partitions): partition on the date/time dimension
• Partial re-computation of data at partition level
• Query predicates can provide partition elimination
Do not use when…
• No filters, joins and grouping
• No reuse of the data for future queries
If in doubt: use sampling (e.g., SAMPLE ANY(x)) and test.
40. Benefits of Distribution in Tables
Benefits
• Design for the most frequent/costly queries
• Manage data skew in a partition/table
• Manage parallelism in querying (by number of distributions)
• Minimize data movement in joins
• Provide distribution seeks and range scans for query predicates (distribution bucket elimination)
Distribution in tables is mandatory; choose according to the desired benefits.
41. Benefits of Clustered Index in Distribution
Benefits
• Design for the most frequent/costly queries
• Manage data skew in a distribution bucket
• Provide locality of same data values
• Provide seeks and range scans for query predicates (index lookup)
The clustered index in tables is mandatory; choose according to the desired benefits.
Pro Tip:
Distribution keys should be a prefix of the clustered index keys, especially for RANGE distribution. The optimizer will then make use of global ordering: if you make the RANGE distribution key a prefix of the index key, U-SQL will repartition on demand to align any UNION ALLed or JOINed tables or partitions. Split points of table distribution partitions are chosen independently, so any partitioned table can do UNION ALL in this manner if the data is to be processed subsequently on the distribution key.
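A hedged sketch of that pro tip (table and column names are hypothetical): the RANGE distribution key EventDate is a prefix of the clustered index key (EventDate, EventId), so the optimizer can exploit the global ordering.

// Hypothetical table; distribution key is a prefix of the clustered index key
CREATE TABLE RangeEvents
(
    EventDate DateTime,
    EventId long,
    Payload string,
    INDEX cix CLUSTERED (EventDate, EventId)
    DISTRIBUTED BY RANGE (EventDate)
);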
42. Benefits of Partitioned Tables
Benefits
• Partitions are addressable
• Enables finer-grained data lifecycle management at partition level
• Manage parallelism in querying by number of partitions
• Query predicates provide partition elimination
• The predicate has to be constant-foldable
Use partitioned tables for
• Managing large amounts of incrementally growing structured data
• Queries with strong locality predicates (point in time, for a specific market, etc.)
• Managing windows of data (provide data for the last x months for processing)
43. Partitioned tables
Use partitioned tables for querying parts of large amounts of incrementally growing structured data.
Get partition elimination optimizations with the right query predicates.

Creating a partitioned table
CREATE TABLE vehiclesP(vehicle_id int, event_date DateTime, lat float, long float
                      , INDEX idx CLUSTERED (vehicle_id ASC)
                        PARTITIONED BY(event_date) DISTRIBUTED BY HASH (vehicle_id) INTO 4);

Creating partitions
DECLARE @pdate1 DateTime = new DateTime(2014, 9, 14, 00,00,00,00,DateTimeKind.Utc);
DECLARE @pdate2 DateTime = new DateTime(2014, 9, 15, 00,00,00,00,DateTimeKind.Utc);
ALTER TABLE vehiclesP ADD PARTITION (@pdate1), PARTITION (@pdate2);

Loading data into partitions dynamically
DECLARE @date1 DateTime = DateTime.Parse("2014-09-14");
DECLARE @date2 DateTime = DateTime.Parse("2014-09-16");
INSERT INTO vehiclesP ON INTEGRITY VIOLATION IGNORE
SELECT vehicle_id, event_date, lat, long FROM @data
WHERE event_date >= @date1 AND event_date <= @date2;
Filters and inserts clean data only, ignores "dirty" data.

Loading data into partitions statically
ALTER TABLE vehiclesP ADD PARTITION (@pdate1), PARTITION (@baddate);
INSERT INTO vehiclesP ON INTEGRITY VIOLATION MOVE TO PARTITION (@baddate)
SELECT vehicle_id, event_date, lat, long FROM @data
WHERE event_date >= @date1 AND event_date <= @date2;
Filters and inserts clean data only, puts "dirty" data into a special partition.
44. Avoid Table Fragmentation and Over-partitioning!
What is Table Fragmentation?
• ADLS is an append-only store!
• Every INSERT statement creates a new file (INSERT fragment)
Why is it bad?
• Every INSERT fragment contains data in its own distribution buckets, thus query processing loses the ability to get "localized" fast access
• Query generation has to read from many files now -> slow preparation phase that may time out
• Reading from too many files is disallowed: current LIMIT: 3000 table partitions and INSERT fragments per job!
What if I have to add data incrementally?
• Batch inserts into the table
• Use ALTER TABLE REBUILD / ALTER TABLE REBUILD PARTITION regularly to reduce fragmentation and keep performance (see the sketch below)
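A minimal sketch of that compaction step, reusing the vehiclesP table and partition value from the previous slide (assumed to already exist):

DECLARE @pdate1 DateTime = new DateTime(2014, 9, 14, 00,00,00,00,DateTimeKind.Utc);

// Rewrites the INSERT fragments of one partition into a compact, well-distributed layout
ALTER TABLE vehiclesP REBUILD PARTITION (@pdate1);

// Or compact the whole table in one go:
// ALTER TABLE vehiclesP REBUILD;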
45. @Impressions =
    SELECT * FROM searchDM.SML.PageView(@start, @end) AS PageView
    OPTION(SKEWFACTOR(Query)=0.5)
;
// Q1(A,B) – input must be distributed on (Query) or (ClientId) or (Query, ClientId)
@Sessions =
    SELECT ClientId, Query, SUM(PageClicks) AS Clicks
    FROM @Impressions
    GROUP BY Query, ClientId
;
// Q2(B) – input must be distributed on (Query)
@Display =
    SELECT * FROM @Sessions
    INNER JOIN @Campaigns ON @Sessions.Query == @Campaigns.Query
;

Data Distribution
• The optimizer wants to distribute only once, but Query could be skewed
• Re-distributing is very expensive
• Many U-SQL operators can handle multiple distribution choices
• The optimizer bases its decision upon estimations; wrong statistics may result in worse query performance

U-SQL Optimizations
Distributions – Minimize (re)partitions
46. // Unstructured (24 hours of daily log impressions)
@Huge = EXTRACT ClientId int, ...
    FROM @"wasb://ads@wcentralus/2015/10/30/{*}.nif"
;
// Small subset (i.e.: ForgetMe opt-out)
@Small = SELECT * FROM @Huge
    WHERE Bing.ForgetMe(x,y,z)
    OPTION(ROWCOUNT=500)
;
// Result (not enough info to determine a simple broadcast join without the hint)
@Remove = SELECT * FROM Bing.Sessions
    INNER JOIN @Small ON Sessions.Client == @Small.Client
;

• Broadcast JOIN, right? The optimizer has no stats that @Small is small…
• With the OPTION(ROWCOUNT=500) hint, broadcast is now a candidate.
• Wrong statistics may result in worse query performance => CREATE STATISTICS

U-SQL Optimizations
Distribution - Cardinality
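For catalog tables (as opposed to rowsets hinted with ROWCOUNT), a hedged sketch of the CREATE STATISTICS pointer above, so the optimizer has real cardinality information; the database, table and column names are hypothetical:

// Hypothetical database/table/column; gives the optimizer cardinality information for joins
CREATE STATISTICS stats_Sessions_Client
ON MyDB.dbo.Sessions (Client) WITH FULLSCAN;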
47. Write the Right Query!
• Do I really need a CROSS JOIN?
Example: Get a daily “expansion” of my fact validity interval
Approach 1 (standard text-book solution):
Join fact table with daily date dimension table using daily dimension between
begin and end of fact
=> Does not scale: CROSS JOIN on single node
Approach 2:
Cross apply daily expansion expression to each fact row
=> Scales very well since only depends on a row
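A hedged sketch of approach 2, with a hypothetical fact schema (FactId, ValidFrom, ValidTo) and assuming the default System.Linq namespace import: each row expands itself into one row per day of its validity interval, so no cross join with a date dimension is needed.

// Hypothetical fact rows; per-row expansion instead of a CROSS JOIN
@facts =
    SELECT * FROM (VALUES
        (1, DateTime.Parse("2018-01-01"), DateTime.Parse("2018-01-03")),
        (2, DateTime.Parse("2018-02-10"), DateTime.Parse("2018-02-11"))
    ) AS F(FactId, ValidFrom, ValidTo);

@daily =
    SELECT FactId, ValidDay
    FROM @facts
    CROSS APPLY
        EXPLODE(new SQL.ARRAY<DateTime>(
            Enumerable.Range(0, (int)(ValidTo - ValidFrom).TotalDays + 1)
                      .Select(i => ValidFrom.AddDays(i)))) AS D(ValidDay);

OUTPUT @daily
TO "/output/fact_days.csv"
USING Outputters.Csv();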
49. What is Data Skew?
• Some data points are much more common than others
• Data may be distributed such that all rows that match a certain key go to a single vertex
• Result: imbalanced execution, vertex time-out
[Bar chart: Population by State (0 to 40,000,000) – a few states account for most of the population, illustrating a heavily skewed key.]
50. Low Distinctiveness Keys
• Keys with small selectivity can lead to large vertices even without skew

@rows =
    SELECT Gender, AGG<MyAgg>(…) AS Result
    FROM @HugeInput
    GROUP BY Gender;

[Diagram: @HugeInput collapses into just two vertices – Vertex 0 (Gender==Male) and Vertex 1 (Gender==Female).]
51. Why is this a problem?
• Vertexes have a 5 hour runtime limit!
• Your UDO or join may excessively allocate memory.
• Your memory usage may not be obvious due to garbage collection
52. Addressing Data Skew / Low Distinctiveness
• Improve data partition sizes:
  • Find more fine-grained keys, e.g., states plus congressional districts or ZIP codes
  • If no fine-grained keys can be found or they are too fine-grained: use ROUND ROBIN distribution
• Write queries that can handle data skew:
  • Use filters that prune skew out early
• Use data hints to identify skew and "low distinctness" in keys (see the sketch below):
  • SKEWFACTOR(columns) = x provides a hint that the given columns have a skew factor x between 0 (no skew) and 1 (very heavy skew)
  • DISTINCTVALUE(columns) = n lets you specify how many distinct values the given columns have (n > 1)
• Implement the aggregation/reducer recursively if possible
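A hedged sketch of the DISTINCTVALUE hint in the style of the earlier SKEWFACTOR example, with hypothetical paths and columns: the hint tells the optimizer that Gender has only two distinct values, so the GROUP BY cannot scale out further than that.

// Hypothetical path/schema; DISTINCTVALUE hint on the producing rowset
@input =
    EXTRACT UserId int, Gender string, Duration int
    FROM "/input/users/{*}.csv"
    USING Extractors.Csv();

@hinted =
    SELECT * FROM @input
    OPTION(DISTINCTVALUE(Gender)=2);

@rows =
    SELECT Gender, SUM(Duration) AS TotalDuration
    FROM @hinted
    GROUP BY Gender;

OUTPUT @rows
TO "/output/by_gender.csv"
USING Outputters.Csv();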
55. // Metrics per domain
@Metric =
REDUCE @Impressions ON UrlDomain
USING new Bing.TopNReducer(count:10)
;
// …
Inherent Data Skew
[SqlUserDefinedReducer(IsRecursive = true)]
public class TopNReducer : IReducer
{
public override IEnumerable<IRow>
Reduce(IRowset input, IUpdatableRow output)
{
// Compute TOP(N) per group
// …
}
}
Recursive
• Allow multi-stage aggregation trees
• Requires same schema (input => output)
• Requires associativity:
• R(x, y) = R( R(x), R(y) )
• Default = non-recursive
• User code has to honor recursive semantics
www.bing.com
brought to a single vertex
U-SQL Partitioning
Data Skew – Recursive Reducer
57. // Bing impressions
@Impressions = SELECT * FROM
searchDM.SML.PageView(@start, @end) AS PageView
;
// Compute sessions
@Sessions =
REDUCE @Impressions ON Client, Market
READONLY Market
USING new Bing.SessionReducer(range : 30)
;
// Users metrics
@Metrics =
SELECT * FROM @Sessions
WHERE
Market == "en-us"
;
// …
U-SQL Optimizations
Predicate pushing – UDO pass-through columns
58. // Bing impressions
@Impressions = SELECT * FROM
searchDM.SML.PageView(@start, @end) AS PageView
;
// Compute page views
@Impressions =
PROCESS @Impressions
READONLY Market
PRODUCE Client, Market, Header string
USING new Bing.HtmlProcessor()
;
@Sessions =
REDUCE @Impressions ON Client, Market
READONLY Market
USING new Bing.SessionReducer(range : 30)
;
// Users metrics
@Metrics =
SELECT * FROM @Sessions
WHERE
Market == "en-us"
;
U-SQL Optimizations
Predicate pushing – UDO row level processors
public abstract class IProcessor : IUserDefinedOperator
{
/// <summary/>
public abstract IRow Process(IRow input, IUpdatableRow output);
}
public abstract class IReducer : IUserDefinedOperator
{
/// <summary/>
public abstract IEnumerable<IRow> Reduce(IRowset input, IUpdatableRow output);
}
59. // Bing impressions
@Impressions = SELECT Client, Market, Html FROM
    searchDM.SML.PageView(@start, @end) AS PageView
;
// Compute page views
@Impressions =
    PROCESS @Impressions
    PRODUCE Client, Market, Header string
    USING new Bing.HtmlProcessor()
;
// Users metrics
@Metrics =
    SELECT * FROM @Sessions
    WHERE
    Market == "en-us"
    && Header.Contains("microsoft.com")     // C# semantics: short-circuits and preserves evaluation order
    // AND Header.Contains("microsoft.com") // relational semantics: the optimizer may reorder and push the predicate
;
U-SQL Optimizations
Predicate pushing – relational vs. C# semantics
60. // Bing impressions
@Impressions = SELECT * FROM
    searchDM.SML.PageView(@start, @end) AS PageView
;
// Compute page views
@Impressions =
    PROCESS @Impressions
    PRODUCE *
    REQUIRED ClientId, HtmlContent(Header, Footer)
    USING new Bing.HtmlProcessor()
;
// Users metrics
@Metrics =
    SELECT ClientId, Market, Header FROM @Sessions
    WHERE
    Market == "en-us"
;

Column Pruning
• Minimize I/O (data shuffling)
• Minimize CPU (complex processing, html)
• Requires dependency knowledge: R(D*) = Input ( Output )
• Default: no pruning
• User code has to honor reduced columns

[Diagram: out of a wide rowset (columns A, B, C, D, …, up to 1000), only the C(lientId), H(eader) and M(arket) columns flow through the stages.]

U-SQL Optimizations
Column Pruning and dependencies
61. UDO Tips and Warnings
• Tips when using UDOs:
  • READONLY clause to allow pushing predicates through UDOs
  • REQUIRED clause to allow column pruning through UDOs
  • PRESORT on REDUCE if you need global order
  • Hint cardinality if the optimizer chooses the wrong plan
• Warnings and better alternatives:
  • Use SELECT with UDFs instead of PROCESS
  • Use user-defined aggregators instead of REDUCE
  • Learn to use windowing functions (OVER expression) (see the sketch below)
• Good use-cases for PROCESS/REDUCE/COMBINE:
  • The logic needs to dynamically access the input and/or output schema, e.g., create a JSON doc for the data in the row where the columns are not known a priori
  • Your UDF-based solution creates too much memory pressure and you can write your code more memory-efficiently in a UDO
  • You need an ordered aggregator or produce more than one row per group
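A hedged sketch of the windowing-function alternative to the earlier TopNReducer, reusing the @Impressions rowset from the previous slides with hypothetical Url and PageClicks columns: ROW_NUMBER() over a partition by domain yields the top 10 URLs per domain without any custom code.

// Assumes @Impressions with UrlDomain, Url, PageClicks columns (hypothetical)
@ranked =
    SELECT UrlDomain,
           Url,
           PageClicks,
           ROW_NUMBER() OVER (PARTITION BY UrlDomain ORDER BY PageClicks DESC) AS rn
    FROM @Impressions;

@top10PerDomain =
    SELECT UrlDomain, Url, PageClicks
    FROM @ranked
    WHERE rn <= 10;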
62. Resources
• Blogs and community page:
• http://usql.io (U-SQL Github)
• http://blogs.msdn.microsoft.com/azuredatalake/
• http://blogs.msdn.microsoft.com/mrys/
• https://channel9.msdn.com/Search?term=U-SQL#ch9Search
• Documentation, presentations and articles:
• http://aka.ms/usql_reference
• https://docs.microsoft.com/en-us/azure/data-lake-analytics/data-lake-analytics-u-sql-programmability-guide
• https://docs.microsoft.com/en-us/azure/data-lake-analytics/
• https://msdn.microsoft.com/en-us/magazine/mt614251
• https://msdn.microsoft.com/magazine/mt790200
• http://www.slideshare.net/MichaelRys
• Getting Started with R in U-SQL
• https://docs.microsoft.com/en-us/azure/data-lake-analytics/data-lake-analytics-u-sql-python-extensions
• ADL forums and feedback
• https://social.msdn.microsoft.com/Forums/azure/en-US/home?forum=AzureDataLake
• http://stackoverflow.com/questions/tagged/u-sql
• http://aka.ms/adlfeedback
Continue your education at Microsoft Virtual Academy online.