Snowflake SnowPro Certification Exam Cheat Sheet by Jeno Yamma
// Snowflake - General //
Introduction
- Analytic data warehouse
- SaaS offering
- No hardware (or installation or updates/patch)
- No ongoing maintenance or tuning
- Can’t run privately (on-prem or hosted)
- Runs completely on the cloud
- Either AWS, Azure or GCP (NO ON-PREM or HYBRID!)
- Has its own VPC!
- Decoupled compute and storage (scaled compute does not need scaled storage)
Pricing
- Unit costs for Credits and data storage determined by region (not cloud platform)
- Cost of Warehouse used
- Minimum 60s billed on resume, then billed per second after
- Cost of storage (Temp, Transient, Perm Tables & time-travel data & fail-safe)
- Calculated based on daily average storage used
- Pricing model = on demand or discounted upfront
- Just remember:
- >= Enterprise = 90 days time travel (default: 1 day for all) + materialized views + multi-cluster warehouses
- >= Business Critical = lots more security (HIPAA, SOC 1&2, PCI DSS)
- VPS edition = own cloud services layer (not shared with other accounts)
Supported Regions
- Multi-region accounts aren't supported; each SF account is in a single region
- 17 Regions in total
- AWS - 9 regions (Asia Pacific-3, EU-2, North America-4)
- GCP - 1 (Asia Pacific-0, EU-0, North America-1) >> in preview
- Azure - 7 (Asia Pacific-2, EU-1, North America-4)
// Connecting to SF //
- Web-based UI (see above for usage, capabilities, restrictions)
- Command line client (SnowSQL)
- ODBC and JDBC (you have to download the driver!)
- Native Connectors (Python, Spark & Kafka)
- Third party connectors (e.g. Matillion)
- Others (Node.js, .Net)
Snowflake Address
https://{account_identifier}.snowflakecomputing.com
Account identifier:
- AWS = account_name.region
- GCP/Azure = account_name.region.gcp or account_name.region.azure
e.g. https://pp12345.ap-southeast-2.snowflakecomputing.com
// Architecture //
Overview
- Hybrid of shared-disk db and shared-nothing db
- Uses central data repo for persisted storage - All compute nodes have data access
- Using MPP clusters to process queries - Each node stores portion of data locally
Architectural layers
Storage
- All data stored as an internal, optimised, compressed columnar format (micro-partitions - represents logical structure of table)
- Can’t access the data directly, only through Snowflake (SQL etc)
Query Processing
- Where the processing of query happens
- Uses virtual warehouses - an MPP compute cluster
- Each cluster = Independent - doesn't share resources with other vwh and doesn't impact others
Cloud Services
- Command centre of SF - coordinates and ties all activities together within SF
- Gets provisioned by Snowflake and within AWS, Azure or GCP
- You don’t have access to the build of this or to do any modifications
- Instance is shared with other SF accounts unless you have VPS SF Edition
- Services that this layer takes care of:
- Authentication
- Infrastructure management
- Metadata management
- Query parsing and optimisation
- Access control
Caches
- Snowflake Caches different data to improve query performance and assist in reducing cost
>Metadata Cache - Cloud Services layer
- Improves compile time for queries against commonly used tables
>Result Cache - Cloud Services layer
- Holds the query results
- If customers run the exact same query within 24 hours, the result cache is used and no warehouse is required to be active
>Local Disk Cache or Warehouse Cache - Warehouse (Query Processing) layer
- Caches the data used by the SQL query in its local SSD and memory
- This improves query performance if the same data was used (less time to fetch remotely)
- Cache is deleted when the Warehouse is suspended
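Example - the result cache in action (a minimal sketch; database/table names are illustrative):
    -- First run executes on an active warehouse
    SELECT COUNT(*) FROM my_db.public.orders;
    -- An identical re-run within 24 hours is served from the result cache (no active warehouse needed)
    SELECT COUNT(*) FROM my_db.public.orders;
    -- Disable the result cache per session, e.g. for benchmarking
    ALTER SESSION SET USE_CACHED_RESULT = FALSE;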
// Data Loading //
File Location
- On Local
- On Cloud
- AWS (Can load directly from S3 into SF)
- Azure (Can load directly from Blob Storage into SF)
- GCP (Can load directly from GCS into SF)
File Type
- Structured
- Delimited files (CSV, TSV etc)
- Semi-structured
- JSON (SF can auto detect if Snappy compressed)
- ORC (SF can auto detect if Snappy compressed or zlib)
- Parquet (SF can auto detect if Snappy compressed)
- XML (in preview)
- * Note: If files = uncompressed, they are gzipped on load to SF (can disable)
>Compression
- SF auto compresses data from local fs to gzip - can change (AUTO_COMPRESS)
- Specify compression type on loading compressed data
>Encryption for load
- Loading unencrypted files
- SF auto encrypts files using 128-bit keys! (or 256-bit keys - requires configuring)
- Loading encrypted files
- Provide your key to SF on load
Best Practice
>File Sizing
- ~10-100 MB file, compressed - Optimises parallel operations for data load
- Aggregate smaller files - Minimises processing overhead
- Split up larger files into smaller files - Distributes workload among servers in active vwh
- # data files processed in parallel depends on capacity of servers in vwh
- Parquet: >3GB compressed could time out - split into 1GB chunks
>Semi Structured Sizing
- VARIANT data type has 16 MB compressed size limit per row
- For JSON or Avro, outer array structure can be removed using STRIP_OUTER_ARRAY.
>Continuous Data Load File Sizing
- Load new data within a minute after file notification sent, longer if file is large
- Large compute is required (decompress, decrypt, transform etc).
- If > 1min to accumulate MBs of data in source app, create data file once per min
- Lowers cost, improve performance (load latency)
- Creating smaller files and staging them in cloud storage more than once per minute = disadvantages:
- Reduction in latency between staging and loading data != guaranteed
- Overhead to manage files in internal load queue - increases utilisation cost
- Overhead charges apply if more than 1000 files in queue
Planning Data Load
Dedicating Separate Warehouse
- Load of large data = affects query performance
- Use separate Warehouse
- # data files processed in parallel determined by servers in the warehouse.
- Split large data to scale linearly, use of small vwh should be sufficient
Staging Data
> Organising Data by Path - Snowflake Staging
- Both internal and external ref can include path (prefix) in the cloud storage
- SF recommends (for file organisation in Cloud Storage):
1. Partition into logical paths - include identifying details such as date
2. Organising files by path = allows copying a fraction of the partitioned data into SF in one command
- Result: Easier to execute concurrent COPY statements - takes advantage of parallel operations
- Named stage operation:
- Staging = A place where the location/path of the data is stored to assist in processing the upload of files
- Remember: Files uploaded to Snowflake Staging area using PUT are automatically encrypted with 128-bit or 256-bit (CLIENT_ENCRYPTION_KEY_SIZE to specify) keys!
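A minimal staging sketch (stage name and file path are illustrative; PUT runs from a client such as SnowSQL, not the WebUI):
    CREATE STAGE my_csv_stage FILE_FORMAT = (TYPE = 'CSV' SKIP_HEADER = 1);
    -- File is gzip-compressed client-side and encrypted automatically on upload
    PUT file:///tmp/data/orders.csv @my_csv_stage AUTO_COMPRESS = TRUE;
    LIST @my_csv_stage;  -- confirm the staged file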
Loading Data
> COPY command - parallel execution
- Supports loading by Internal Stage or S3 Bucket path
- Specifying specific list of files to upload (1000 files max at a time)
- Identify files through pattern matching (regex)
- Validating Data:
- Use VALIDATION_MODE - validates errors on load - does not load into the table
- ON_ERROR to specify the action to take on error
- When COPY command runs, SF sets load status in the table’s metadata
- Prevents parallel COPY from loading the same file
- On completion, SF sets the load status, including for files that failed
- Metadata (expires after 64 days):
- Name of file
- File size
- ETag of the file
- # rows parsed in file
- Timestamp of last load
- Information on errors during load
- SF recommends removing the data from the Stage once the load is completed to avoid reloading again - use REMOVE command (or specify PURGE in the COPY argument)
- Improves performance - doesn’t have to scan processed files
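A minimal COPY sketch showing validation and error handling (table, stage and pattern are illustrative):
    -- Dry run: report errors without loading anything
    COPY INTO my_db.public.orders
    FROM @my_csv_stage
    PATTERN = '.*orders.*[.]csv[.]gz'
    FILE_FORMAT = (TYPE = 'CSV' SKIP_HEADER = 1)
    VALIDATION_MODE = RETURN_ERRORS;
    -- Real load: skip files containing errors, purge staged files once loaded
    COPY INTO my_db.public.orders
    FROM @my_csv_stage
    PATTERN = '.*orders.*[.]csv[.]gz'
    FILE_FORMAT = (TYPE = 'CSV' SKIP_HEADER = 1)
    ON_ERROR = SKIP_FILE
    PURGE = TRUE;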
> Note for Semi-structured
- SF loads ss data into a VARIANT type column
- Can load ss data into multiple columns, but the ss data must be stored as fields in structured data
- Use FLATTEN to explode compounded values into multiple rows
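A minimal FLATTEN sketch (table/column names are illustrative; assumes raw_json(v VARIANT) holds one JSON document per row):
    SELECT r.v:order_id::STRING AS order_id,
           f.value:sku::STRING  AS sku
    FROM raw_json r,
         LATERAL FLATTEN(input => r.v:line_items) f;  -- one output row per array element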
>Data Transformation during load
- Supported
- Sequence, substring, to_binary, to_decimal
- Commands not supported
- WHERE, FLATTEN, JOIN, GROUP BY, DISTINCT (not fully supported)
- VALIDATION_MODE (if any aggregation applied)
- CURRENT_TIME (will result in the same time)
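A minimal transform-during-load sketch using only supported functions (names are illustrative):
    COPY INTO my_db.public.users (id, country_code)
    FROM (
      SELECT t.$1, SUBSTR(t.$3, 1, 2)  -- positional column references into the staged file
      FROM @my_csv_stage t
    )
    FILE_FORMAT = (TYPE = 'CSV');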
Snowflake Stage
Types of Stages
- Default: each table and user are allocated an internal stage
- Specify Internal Stage in PUT command when uploading file to SF
- Specify the Same Stage in COPY INTO when loading data into a table
> User Stage (Ref: @~)
- Accessed by a single user but data needs copying to multiple tables
- Can’t be altered or dropped
- Can’t set file format, need to specify in COPY command to table
> Table Stage (Ref: @%)
- Accessed by multiple users and copied to a single table
- Can’t be altered or dropped
- Can’t set file format, need to specify in COPY command to table
- No transformation while loading
Internal Named Stages (Ref: @)
- A Database object
- Can load data into any tables (Needs user with privilege)
- Ownership of stage can be transferred
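Quick stage-reference sketch (file/table/stage names are illustrative):
    PUT file:///tmp/orders.csv @~/staged;        -- user stage
    PUT file:///tmp/orders.csv @%orders;         -- table stage
    PUT file:///tmp/orders.csv @my_named_stage;  -- internal named stage
    COPY INTO orders FROM @%orders FILE_FORMAT = (TYPE = 'CSV');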
AWS - Bulk Load Loading from S3
- SF uses S3 Gateway Endpoint. If bucket region = SF region - no route through public internet.
>Securely access S3
1. Configure storage integration object - delegates authentication responsibility for external cloud storage to a SF identity and IAM
2. Configure AWS IAM role -> allow S3 bucket access
3. Configure IAM user, provide Key and Secret Key
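A minimal sketch of step 1 (role ARN, bucket and object names are placeholders):
    CREATE STORAGE INTEGRATION s3_int
      TYPE = EXTERNAL_STAGE
      STORAGE_PROVIDER = 'S3'
      ENABLED = TRUE
      STORAGE_AWS_ROLE_ARN = 'arn:aws:iam::123456789012:role/snowflake_access'
      STORAGE_ALLOWED_LOCATIONS = ('s3://my-bucket/load/');
    CREATE STAGE s3_stage
      URL = 's3://my-bucket/load/'
      STORAGE_INTEGRATION = s3_int;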
Snowpipe - Incremental Load
>Introduction
- Enables you to load data as soon as it is in the stage (uses COPY command)
- Uses Pipe (first-class SF object) which contains the COPY command
- Can DROP, CREATE, ALTER, DESCRIBE and SHOW PIPES
- Think of this as SF way of streaming data into the table as long as you have created a Named Stage.
- Generally loads older files first but doesn’t guarantee - Snowpipe appends to queue.
- Has loading metadata to avoid duplicated loads
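A minimal pipe sketch (names are illustrative; AUTO_INGEST relies on cloud event notifications, e.g. S3 -> SQS):
    CREATE PIPE my_db.public.orders_pipe
      AUTO_INGEST = TRUE
      AS
      COPY INTO my_db.public.orders
      FROM @my_db.public.s3_stage
      FILE_FORMAT = (TYPE = 'JSON');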
>Billing and Usage
- User doesn’t need to worry about managing vwh
- Charges based on resource usage - consume credit only when active
- View charges and usage
- Web UI - Account > Billing & Usage
- SQL - Information Schema > PIPE_USAGE_HISTORY
>Automating Snowpipe
- Check out their AWS, GCP, Azure, and REST API article
Querying from Staged File - Query data outside SF
- SF allows you to query data directly in external locations as long as you have created a stage for it
- SF recommends using simple queries only
- Performance = impacted since the data is not compressed and columnarised to SF standard
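A minimal sketch of querying a staged file directly (stage and named file format are illustrative):
    SELECT t.$1, t.$2
    FROM @my_csv_stage (FILE_FORMAT => 'my_csv_format') t;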
// Monitoring //
Resource Monitor
- Helps control the cost and unexpected spike in credit usage
- Set Actions (or triggers) to Notify and/or Suspend if credit usage is above certain threshold
- Set intervals of monitoring
- Created by users with admin roles (ACCOUNTADMIN)
- If monitor suspends, any warehouse assigned to the monitor cannot be resumed until:
1. Next interval starts
2. Monitor is dropped
3. Credit quota is increased
4. Warehouse is no longer assigned to the monitor
5. The credit threshold triggering the Suspend action is modified
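A minimal monitor sketch, run as ACCOUNTADMIN (quota and thresholds are illustrative):
    CREATE RESOURCE MONITOR monthly_quota
      WITH CREDIT_QUOTA = 100
      FREQUENCY = MONTHLY
      START_TIMESTAMP = IMMEDIATELY
      TRIGGERS ON 75 PERCENT DO NOTIFY
               ON 100 PERCENT DO SUSPEND;
    ALTER WAREHOUSE my_wh SET RESOURCE_MONITOR = monthly_quota;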
// Unloading Data //
Introduction
- Allows you to export your data out of SF
- Uses COPY INTO <location> (either external or SF stage)
- If unloading to a SF stage, use GET command to download the files
- Use SELECT and other full SQL syntax to build data for export
- Can specify MAX_FILE_SIZE to split into multiple files by chunks
Format and Restrictions
- Allowed formats for export (only UTF-8 as encoding allowed)
- Delimited Files (CSV etc)
- JSON
- Parquet
- Allowed compression for export
- gzip (default)
- bzip2
- Brotli
- Zstandard
- Encryption for export
- Internal Stage - 128-bit or 256-bit - gets decrypted when downloaded
- External - Customer supply encryption key
Considerations
- Empty String and Null
- FIELD_OPTIONALLY_ENCLOSED_BY
- EMPTY_FIELD_AS_NULL
- NULL_IF
- Unloading a single file
- MAX_FILE_SIZE default = 16 MB
- Maximum is 5 GB for AWS/GCP and 256 MB for Azure
- Unloading Relational table to JSON
- Use OBJECT_CONSTRUCT in the COPY command to convert rows of a relational table to a VARIANT column, then unload the data as per usual
Unloading into a SF Stage / Unloading into S3:
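Minimal sketches of both paths (stage/table names are illustrative):
    -- Unload into an internal stage, then download with GET
    COPY INTO @my_unload_stage/result/
    FROM (SELECT * FROM my_db.public.orders)
    FILE_FORMAT = (TYPE = 'CSV' COMPRESSION = 'GZIP');
    GET @my_unload_stage/result/ file:///tmp/downloads/;
    -- Unload straight to S3 via a stage backed by a storage integration
    COPY INTO @s3_stage/unload/
    FROM my_db.public.orders
    MAX_FILE_SIZE = 104857600;  -- bytes (~100 MB output chunks)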
// Virtual Warehouse //
Cluster of resources
- Provides CPU, Mem and Temp Storage
- Is active on usage of SELECT and DML (DELETE, INSERT, COPY INTO etc)
- Can be stopped at any time and resized at any time (even while running)
- Running queries do not get affected, only new queries
- Warehouse sizes = T-shirt sizes (generally query performance scales linearly with vwh size)
- X-Small - 4X-Large
- When creating a vwh you can specify:
- Auto-suspend (default: 10mins) - suspends warehouse if inactive for a certain time
- Auto-resume - auto-resumes the warehouse whenever a statement that requires an active vwh is submitted
- Query Caching
- vwh maintains cache of table data - Improves query performance
- The Larger the vwh, larger the cache - Cache dropped when vwh suspended
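A minimal warehouse sketch (name and values are illustrative):
    CREATE WAREHOUSE etl_wh
      WAREHOUSE_SIZE = 'MEDIUM'
      AUTO_SUSPEND = 300          -- seconds of inactivity before suspending
      AUTO_RESUME = TRUE
      INITIALLY_SUSPENDED = TRUE;
    ALTER WAREHOUSE etl_wh SET WAREHOUSE_SIZE = 'LARGE';  -- resize any time, even while running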
Choosing VWH
- SF recommends experimenting by running the same queries on different vwh
- Monitoring Warehouse Load - WebUI: In Warehouse (view queued and running queries)
Query Processing and Concurrency
- When queries are submitted, the vwh allocates and reserves resources
- Query is queued if vwh doesn't have enough resources
- STATEMENT_QUEUED_TIMEOUT_IN_SECONDS
- STATEMENT_TIMEOUT_IN_SECONDS
- Both can be used to control query processing and concurrency
- Size and complexity of query determines the concurrency (also # queries being executed)
Multi-cluster warehouse
- Up to 10 server clusters
- Auto-suspend and auto-resume apply to the whole multi-cluster warehouse, not individual clusters
- Can resize anytime
- Multi-cluster mode
- Maximised - same min and max # clusters
- SF starts all clusters at start - good for steady, expected workloads
- Auto-scale - different min and max # clusters
- To help control this, SF provides scaling policies: Standard favours starting clusters to minimise queuing; Economy favours conserving credits by keeping running clusters fully loaded (see the sketch below)
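A minimal multi-cluster sketch (name and values are illustrative):
    CREATE WAREHOUSE bi_wh
      WAREHOUSE_SIZE = 'LARGE'
      MIN_CLUSTER_COUNT = 1
      MAX_CLUSTER_COUNT = 4        -- min != max -> Auto-scale mode
      SCALING_POLICY = 'STANDARD';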
// Tables in Snowflake //
Micro-partition
- All SF tables are divided into micro-partitions
- Each micro-partition is compressed columnar data
- Max size of 16 MB compressed (50-500 MB of uncompressed data)
- Stored on logical hard-drive
- Data only accessible via query through SF, not directly
- They are immutable, can't be changed
- Data is partitioned in order of ingestion
- SF uses natural clustering to colocate columns with the same value or similar range
- Results in non-overlapping micro-partitions and least clustering depth
- Improves query performance by avoiding scanning unnecessary micro-partitions
- Metadata of the micro-partitions is stored in the Cloud Services Layer
- Customers can query this without needing an active vwh
- Range of values for each column
- Number of Distinct Values
- Count NULL
>Clustering Key
- SF recommends the use of a clustering key once your table grows really large (multi-terabyte range)
- Especially when data does not cluster optimally
- Clustering keys are a subset of columns designed to colocate the table's data in the same micro-partitions
- Improves query performance on large tables by skipping data that is not part of the filtering predicate
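A minimal clustering sketch (table and columns are illustrative):
    ALTER TABLE my_db.public.events CLUSTER BY (event_date, country);
    -- Inspect how well the table clusters on those columns
    SELECT SYSTEM$CLUSTERING_INFORMATION('my_db.public.events', '(event_date, country)');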
Zero-copy Cloning
- Allows customers to clone their tables
- SF references the original data (micro-partitions) - hence zero-copy (no additional storage cost)
- When a change is made to the cloned table (new data added, deleted or updated), a new micro-partition is created (incurs storage cost)
- Cloned objects do not inherit the source's granted privileges
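A minimal cloning sketch (table names are illustrative):
    CREATE TABLE orders_dev CLONE orders;
    -- Clone as of a point in time (cloning combined with Time Travel)
    CREATE TABLE orders_yesterday CLONE orders AT (OFFSET => -86400);  -- 24 hours ago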
Types of Tables (Internal)
>Temporary Tables
- Used to store data temporarily
- Non-permanent and exists only within the session - data is purged after session ends
- Not visible to other users and not recoverable
- Contributes to overall storage
- Belongs to DB and Schema - Can have the same name as another non-temp table within the same DB and Schema
- Default 1 day Time Travel
>Transient Tables (or DB and Schema)
- Persists until dropped
- Have all functionality as permanent table but no Fail-safe mode (no FS storage cost)
- Default 1 day Time Travel
>Permanent Table
- Have 7 days fail safe
- Default 1 day Time Travel (up to 90 days for >=Enterprise edition)
>External Table
- Allows access to data stored in External Stage
>Creating tables
- SF doesn't enforce any constraints (primary key, unique, foreign key) besides the NOT NULL constraint
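A minimal sketch of the internal table types (names/columns are illustrative):
    CREATE TEMPORARY TABLE tmp_load (id INT, payload VARIANT);      -- session-scoped, purged at session end
    CREATE TRANSIENT TABLE staging_orders (id INT, amount NUMBER);  -- persists, but no Fail-safe
    CREATE TABLE orders (id INT, amount NUMBER)
      DATA_RETENTION_TIME_IN_DAYS = 90;  -- permanent; 90-day Time Travel needs >= Enterprise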
Types of Views
>Non-materialized views (views)
- Named definition of a query
- Results are not stored
>Materialized views
- Results are stored
- Faster performance than normal views
- Contributes towards storage cost
>Secure views
- Both non-materialized and materialized views can be defined as Secure views
- Improves data privacy - hides view definition and details from unauthorised viewers
- Performance is impacted as internal optimisations are bypassed
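A minimal sketch of the view flavours (names are illustrative; materialized views need >= Enterprise):
    CREATE VIEW v_orders AS
      SELECT id, amount FROM orders;       -- definition only, computed at query time
    CREATE MATERIALIZED VIEW mv_daily AS
      SELECT TO_DATE(created_at) AS d, SUM(amount) AS total
      FROM orders
      GROUP BY TO_DATE(created_at);        -- results stored and maintained by SF
    CREATE SECURE VIEW sv_orders AS
      SELECT id FROM orders;               -- definition hidden from unauthorised users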
// Time Travel and Fail Safe //
Introduction
>Time Travel (0-90 days)
- Allows you to query, clone or restore historical data in tables, schemas or dbs for up to 90 days
- Can use this to access snapshot of data at a point in time
- Useful if data was deleted, dropped or updated.
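A minimal Time Travel sketch (names, offsets and timestamps are illustrative):
    SELECT * FROM orders AT (OFFSET => -300);  -- table as of 5 minutes ago
    SELECT * FROM orders BEFORE (TIMESTAMP => '2020-05-01 10:00:00'::TIMESTAMP_LTZ);
    UNDROP TABLE orders;  -- restore a dropped table within the retention window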
>Fail safe (7 days)
- Allows disaster recovery of historical data - Only accessible by Snowflake!
// Data Sharing //
Introduction
- Only the ACCOUNTADMIN role can provision (create) shares
- The sharer is the Provider while the user of the shared data is the Consumer
- Secure data sharing enables account-to-account sharing of Database, Tables and Views
- No actual data is copied or transferred
- Sharing accomplished using SF service layer and metadata store
- Charge is compute resources used to query shared data
- User creates share of DB then grants object level access
- read-only db is created on consumer side
- Access can be revoked at any time
- Type of share includes Share and Reader
- No limit on number of shares or accounts but 1 DB per share
Share
- Between SF accounts
- Each share consists of
- Privilege that grant db access
- Privilege that grant access to specific object (tables, views)
- Consumer accounts with which db and object are shared
- Consumers with their own SF account can perform DML operations in their own account (the shared db itself is read-only)
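A minimal share sketch (database, share and account identifiers are placeholders):
    -- Provider side
    CREATE SHARE sales_share;
    GRANT USAGE ON DATABASE sales_db TO SHARE sales_share;
    GRANT USAGE ON SCHEMA sales_db.public TO SHARE sales_share;
    GRANT SELECT ON TABLE sales_db.public.orders TO SHARE sales_share;
    ALTER SHARE sales_share ADD ACCOUNTS = xy12345;
    -- Consumer side: mount the share as a read-only database
    CREATE DATABASE shared_sales FROM SHARE provider_acct.sales_share;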
Reader Account
- Share data with users who aren't on SF
- Objects can only be read not modified (no DML) - consumer can only consume the data
// Access Control within Snowflake //
Network Policy
- Allows access based on an IP whitelist and/or restriction via an IP blacklist
- Apply through SQL or WebUI
- Only ACCOUNTADMIN or SECURITYADMIN can modify, drop or create these
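A minimal policy sketch (IP ranges are illustrative):
    CREATE NETWORK POLICY corp_only
      ALLOWED_IP_LIST = ('192.168.1.0/24')
      BLOCKED_IP_LIST = ('192.168.1.99');
    ALTER ACCOUNT SET NETWORK_POLICY = corp_only;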
PrivateLink
- If time permits briefly read through these (AWS and Azure)
MFA
- Powered by Duo Security and enrolled on all accounts; customers need to enable it
- Recommend enabling MFA on ACCOUNTADMIN
- Use with WebUI, SnowSQL, ODBC, JDBC, Python Connector
Federated Auth and SSO
- User authenticate through external SAML 2.0-compliant identity provider (IdP)
- Users don’t have to log into Snowflake directly
- As per basic introduction to SSO, a token is passed to the application the user is trying to log into, which in turn will open access when verified
Access Control Models
SF approach to access control uses the following models:
- Discretionary Access Control (DAC) - All objects have an owner; the owner can grant access to their objects
- Role-based Access Control (RBAC) - Privileges are assigned to roles, then roles to users
- Roles are entities that contain granted privileges (level of access to objects), which are in turn granted to users or other roles
System Defined Roles
>ACCOUNTADMIN
- encapsulates SECURITYADMIN and SYSADMIN
>SECURITYADMIN
- Creates, modifies and drops users, roles, network policies, monitors and grants
>SYSADMIN
- Has privileges to create vwh, db and their objects
- Recommend assigning all custom roles to SYSADMIN so it has control over all objects created (create role hierarchy based on this)
>PUBLIC
- Automatically grant to all users
- Can own securable objects
- Used where explicit access control is not required
>Custom Roles
- Created by SECURITYADMIN
- Assigned to SYSADMIN
- If not assigned, only roles with MANAGE GRANTS can modify grants on it.
- Create custom roles with least privilege and roll them up
- Visit “Access Control Privileges” in SF Doc and have a quick skim over the privileges
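A minimal custom-role sketch (role, warehouse, database and user names are illustrative):
    USE ROLE SECURITYADMIN;
    CREATE ROLE analyst;
    GRANT USAGE ON WAREHOUSE bi_wh TO ROLE analyst;
    GRANT USAGE ON DATABASE sales_db TO ROLE analyst;
    GRANT SELECT ON ALL TABLES IN SCHEMA sales_db.public TO ROLE analyst;
    GRANT ROLE analyst TO USER jdoe;
    GRANT ROLE analyst TO ROLE SYSADMIN;  -- roll custom roles up to SYSADMIN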
// Data Security within Snowflake //
> End-to-end Encryption
- Snowflake encrypts all data by default, no additional cost
- No one other than the customer or runtime components can read the data
- Data is always protected as it is in an encrypted state
- Client Side Encryption:
- Customer provides encrypted data to the external staging area (S3 etc)
- Provide SF with the encryption master key when creating the Named Stage
- Or: customer provides unencrypted data to the SF internal staging area and SF will automatically encrypt the data
>Key Rotation
- Snowflake encrypts all data by default and keys are rotated regularly
- Uses a Hierarchical Key Model for its Encryption Key Management - comprising:
- Root Key
- Account Master Key (auto-rotate if >30 days old)
- Table Master Key (auto-rotate if >30 days old)
- File Keys
>Tri-Secret Secure and Customer-managed keys (Business Critical Edition)
- Combines the customer key with Snowflake's maintained key
- Creates a composite master key, then uses it to encrypt all data in the account
- If the customer key or the Snowflake key is revoked, data cannot be decrypted