This document provides an introduction to data lakes and discusses key aspects of creating a successful data lake. It defines different stages of data lake maturity from data puddles to data ponds to data lakes to data oceans. It identifies three key prerequisites for a successful data lake: having the right platform (such as Hadoop) that can handle large volumes and varieties of data inexpensively, obtaining the right data such as raw operational data from across the organization, and providing the right interfaces for business users to access and analyze data without IT assistance.
Data Lakehouse, Data Mesh, and Data Fabric (r1) - James Serra
So many buzzwords of late: Data Lakehouse, Data Mesh, and Data Fabric. What do all these terms mean and how do they compare to a data warehouse? In this session I’ll cover all of them in detail and compare the pros and cons of each. I’ll include use cases so you can see what approach will work best for your big data needs.
Wallchart - Data Warehouse Documentation Roadmap - David Walker
All projects need documentation and many companies provide templates as part of a methodology. This document describes the templates, tools and source documents used by Data Management & Warehousing. It serves two purposes:
• For projects using other methodologies or creating their own set of documents to use as a checklist. This allows the project to ensure that the documentation covers the essential areas for describing the data warehouse.
• To demonstrate our approach to our clients by describing the templates and deliverables that are produced.
Documentation, methodologies and templates are inherently both incomplete and flexible. Projects may wish to add, change, remove or ignore any part of any document. Some may also believe that aspects of one document would sit better in another. If this is the case then users of this document and these templates are encouraged to change them to fit their needs.
Data Management & Warehousing believes that the approach or methodology for building a data warehouse should be to use a series of guides and checklists. This ensures that small teams of relatively skilled resources developing the system can cover all aspects of the project whilst being free to deal with the specific issues of their environment to deliver exceptional solutions, rather than a rigid methodology that ensures that large teams of relatively unskilled staff can meet a minimum standard.
Embarking on building a modern data warehouse in the cloud can be an overwhelming experience due to the sheer number of products that can be used, especially when the use cases for many products overlap others. In this talk I will cover the use cases of many of the Microsoft products that you can use when building a modern data warehouse, broken down into four areas: ingest, store, prep, and model & serve. It’s a complicated story that I will try to simplify, giving blunt opinions of when to use what products and the pros/cons of each.
This presentation was part of the IDS Webinar on Data Governance. It gives a brief overview of the history on Data Governance, describes how governing data has to be further developed in the era of business and data ecosystems, and outlines the contribution of the International Data Spaces Association on the topic.
Building the Data Lake with Azure Data Factory and Data Lake Analytics - Khalid Salama
In essence, a data lake is a commodity distributed file system that acts as a repository to hold raw data file extracts of all the enterprise source systems, so that it can serve the data management and analytics needs of the business. A data lake system provides the means to ingest data, perform scalable big data processing, and serve information, in addition to managing, monitoring, and securing the environment. In these slides, we discuss building data lakes using Azure Data Factory and Data Lake Analytics. We delve into the architecture of the data lake and explore its various components. We also describe the various data ingestion scenarios and considerations. We introduce the Azure Data Lake Store, then we discuss how to build an Azure Data Factory pipeline to ingest data into the lake. After that, we move into big data processing using Data Lake Analytics, and we delve into U-SQL.
Data Warehousing Trends, Best Practices, and Future Outlook - James Serra
Over the last decade, the 3Vs of data - Volume, Velocity & Variety - have grown massively. The Big Data revolution has completely changed the way companies collect, analyze & store data. Advancements in cloud-based data warehousing technologies have empowered companies to fully leverage big data without heavy investments in terms of either time or resources. But that doesn't mean building and managing a cloud data warehouse comes without challenges. From deciding on a service provider to the design architecture, deploying a data warehouse tailored to your business needs is a strenuous undertaking. Looking to deploy a data warehouse to scale your company's data infrastructure, or still on the fence? In this presentation you will gain insights into the current Data Warehousing trends, best practices, and future outlook. Learn how to build your data warehouse with the help of real-life use cases and discussion of commonly faced challenges. In this session you will learn:
- Choosing the best solution - Data Lake vs. Data Warehouse vs. Data Mart
- Choosing the best Data Warehouse design methodologies: Data Vault vs. Kimball vs. Inmon
- Step by step approach to building an effective data warehouse architecture
- Common reasons for the failure of data warehouse implementations and how to avoid them
Data Warehouse or Data Lake, Which Do I Choose? - DATAVERSITY
Today’s data-driven companies have a choice to make – where do we store our data? As the move to the cloud continues to be a driving factor, the choice becomes either the data warehouse (Snowflake et al.) or the data lake (AWS S3 et al.). There are pros and cons to each approach. While the data warehouse gives you strong data management and analytics, it doesn't do well with semi-structured and unstructured data, tightly couples storage and compute, and comes with expensive vendor lock-in. On the other hand, data lakes allow you to store all kinds of data and are extremely affordable, but they're only meant for storage and by themselves provide no direct value to an organization.
Enter the Open Data Lakehouse, the next evolution of the data stack that gives you the openness and flexibility of the data lake with the key aspects of the data warehouse like management and transaction support.
In this webinar, you’ll hear from Ali LeClerc who will discuss the data landscape and why many companies are moving to an open data lakehouse. Ali will share more perspective on how you should think about what fits best based on your use case and workloads, and how some real world customers are using Presto, a SQL query engine, to bring analytics to the data lakehouse.
Building an Effective Data Warehouse Architecture - James Serra
Why use a data warehouse? What is the best methodology to use when creating a data warehouse? Should I use a normalized or dimensional approach? What is the difference between the Kimball and Inmon methodologies? Does the new Tabular model in SQL Server 2012 change things? What is the difference between a data warehouse and a data mart? Is there hardware that is optimized for a data warehouse? What if I have a ton of data? During this session James will help you to answer these questions.
Every business today wants to leverage data to drive strategic initiatives with machine learning, data science and analytics — but runs into challenges from siloed teams, proprietary technologies and unreliable data.
That’s why enterprises are turning to the lakehouse because it offers a single platform to unify all your data, analytics and AI workloads.
Join our How to Build a Lakehouse technical training, where we'll explore how to use Apache Spark™, Delta Lake, and other open source technologies to build a better lakehouse. This virtual session will include concepts, architectures and demos.
Here’s what you’ll learn in this 2-hour session:
How Delta Lake combines the best of data warehouses and data lakes for improved data reliability, performance and security
How to use Apache Spark and Delta Lake to perform ETL processing, manage late-arriving data, and repair corrupted data directly on your lakehouse
This presentation explains the basics of the ETL (Extract-Transform-Load) concept in relation to data solutions such as data warehousing, data migration, and data integration. CloverETL is presented in detail as an example of an enterprise ETL tool. It also covers the typical phases of data integration projects.
Differentiate Big Data vs Data Warehouse use cases for a cloud solution - James Serra
It can be quite challenging keeping up with the frequent updates to the Microsoft products and understanding all their use cases and how all the products fit together. In this session we will differentiate the use cases for each of the Microsoft services, explaining and demonstrating what is good and what isn't, in order for you to position, design and deliver the proper adoption use cases for each with your customers. We will cover a wide range of products such as Databricks, SQL Data Warehouse, HDInsight, Azure Data Lake Analytics, Azure Data Lake Store, Blob storage, and AAS as well as high-level concepts such as when to use a data lake. We will also review the most common reference architectures (“patterns”) witnessed in customer adoption.
The data lake has become extremely popular, but there is still confusion on how it should be used. In this presentation I will cover common big data architectures that use the data lake, the characteristics and benefits of a data lake, and how it works in conjunction with a relational data warehouse. Then I’ll go into details on using Azure Data Lake Store Gen2 as your data lake, and various typical use cases of the data lake. As a bonus I’ll talk about how to organize a data lake and discuss the various products that can be used in a modern data warehouse.
Big data architectures and the data lake - James Serra
With so many new technologies it can get confusing on the best approach to building a big data architecture. The data lake is a great new concept, usually built in Hadoop, but what exactly is it and how does it fit in? In this presentation I'll discuss the four most common patterns in big data production implementations, the top-down vs bottoms-up approach to analytics, and how you can use a data lake and a RDBMS data warehouse together. We will go into detail on the characteristics of a data lake and its benefits, and how you still need to perform the same data governance tasks in a data lake as you do in a data warehouse. Come to this presentation to make sure your data lake does not turn into a data swamp!
Data modelling for the business - half-day workshop presented at the Enterprise Data & Business Intelligence conference in London on November 3rd, 2014
chris.bradley@dmadvisors.co.uk
In this session, Sergio covered the Lakehouse concept and how companies implement it, from data ingestion to insight. He showed how you could use Azure Data Services to speed up your Analytics project from ingesting, modelling and delivering insights to end users.
Video and slides synchronized, mp3 and slide download available at URL https://bit.ly/2OUz6dt.
Chris Riccomini talks about the current state-of-the-art in data pipelines and data warehousing, and shares some of the solutions to current problems dealing with data streaming and warehousing. Filmed at qconsf.com.
Chris Riccomini works as a Software Engineer at WePay.
In this presentation I've shown the difference between data and Big Data: how Big Data is generated, the opportunities it brings, the problems that occur with Big Data and their solutions, Big Data tools, what data science is and how it relates to Big Data, and the data scientist vs. data analyst roles. It closes with a real-life scenario in which Big Data, data scientists, and data analysts work together.
Metadata management is critical for organizations looking to understand the context, definition and lineage of key data assets. Data models play a key role in metadata management, as many of the key structural and business definitions are stored within the models themselves. Can data models replace traditional metadata solutions? Or should they integrate with larger metadata management tools & initiatives?
Join this webinar to discuss opportunities and challenges around:
How data modeling fits within a larger metadata management landscape
When can data modeling provide “just enough” metadata management
Key data modeling artifacts for metadata
Organization, Roles & Implementation Considerations
Making Data Timelier and More Reliable with Lakehouse Technology - Matei Zaharia
Enterprise data architectures usually contain many systems—data lakes, message queues, and data warehouses—that data must pass through before it can be analyzed. Each transfer step between systems adds a delay and a potential source of errors. What if we could remove all these steps? In recent years, cloud storage and new open source systems have enabled a radically new architecture: the lakehouse, an ACID transactional layer over cloud storage that can provide streaming, management features, indexing, and high-performance access similar to a data warehouse. Thousands of organizations including the largest Internet companies are now using lakehouses to replace separate data lake, warehouse and streaming systems and deliver high-quality data faster internally. I’ll discuss the key trends and recent advances in this area based on Delta Lake, the most widely used open source lakehouse platform, which was developed at Databricks.
Hadoop was born out of the need to process Big Data. Today data is being generated like never before, and it is becoming difficult to store and process this enormous volume and large variety of data; this is where Big Data technology comes in. Today the Hadoop software stack is the go-to framework for large-scale, data-intensive storage and compute solutions for Big Data analytics applications. The beauty of Hadoop is that it is designed to process large volumes of data on clustered commodity computers working in parallel. Distributing data that is too large across the nodes in a cluster solves the problem of data sets too large to be processed on a single machine.
Using Data Lakes to Sail Through Your Sales Goals - IrshadKhan682442
To know more visit here: https://www.denave.com/resources/ebooks/using-data-lakes-to-sail-through-your-sales-goals/
The volume, variety, velocity, and veracity of big data are getting more complex with each passing day. The way data is stored, processed, managed, and shared with decision-makers is affected by this complexity, and to tackle it, a revolutionary approach to data management has come into the picture: the data lake.
WHAT IS A DATA LAKE? Know DATA LAKES & SALES ECOSYSTEM - Rajaraj64
As the name suggests, a data lake is a large reservoir of data – structured or unstructured – fed through disparate channels. The data flows into the lake in an ad hoc manner; however, owing to a predefined set of rules or schema, correlations across the data are established automatically to help with the extraction of meaningful information.
For more information visit:- https://bit.ly/3lMLD1h
Enterprise Data Lake: How to Conquer the Data Deluge and Derive Insights that Matter

Data can be traced from various consumer sources. Managing data is one of the most serious challenges faced by organizations today. Organizations are adopting the data lake model because lakes provide raw data that users can use for data experimentation and advanced analytics. A data lake can be a merging point of new and historic data, drawing correlations across all data using advanced analytics. A data lake can support self-service data practices, tapping undiscovered business value from new as well as existing data sources. Furthermore, a data lake can aid in modernizing data warehousing, analytics, and data integration. However, lakes also face hindrances such as immature governance, user skills, and security.

This white paper presents the opportunities laid down by the data lake and advanced analytics, as well as the challenges in integrating, mining, and analyzing the data collected from these sources. It goes over the important characteristics of the data lake architecture and the Data and Analytics as a Service (DAaaS) model. It also delves into the features of a successful data lake and its optimal design, covering how data, applications, and analytics are strung together to speed up the insight-brewing process with a powerful architecture for mining and analyzing unstructured data: the data lake.
Modern Integrated Data Environment - Whitepaper | Qubole - Vasu S
A whitepaper about building a modern data platform for data-driven organisations, using a cloud data warehouse within a modern data platform architecture.
https://www.qubole.com/resources/white-papers/modern-integrated-data-environment
Data lakes are central repositories that store large volumes of structured, unstructured, and semi-structured data. They are ideal for machine learning use cases and support SQL-based access and programmatic distributed data processing frameworks. Data lakes can store data in the same format as its source systems or transform it before storing it. They support native streaming and are best suited for storing raw data without an intended use case. Data quality and governance practices are crucial to avoid a data swamp. Data lakes enable end-users to leverage insights for improved business performance and enable advanced analytics.
Optimising Data Lakes for Financial Services - Andrew Carr
By using a data lake, you can potentially do more with your company’s data than ever before.
You can gather insights by combining previously disparate data sets, optimise your operations, and build new products. However, how you design the architecture and implementation can significantly impact the results. In this white paper, we propose a number of ways to tackle such challenges and optimise the data lake to ensure it fulfils its desired function.
Big Data and BI Tools - BI Reporting for Bay Area Startups User Group - Scott Mitchell
This presentation was presented at the July 8th 2014 user group meeting for BI Reporting for Bay Area Start Ups
Content - Creation Infocepts/DWApplications
Presented by: Scott Mitchell - DWApplications
The Evolving Role of the Data Engineer - Whitepaper | Qubole - Vasu S
A whitepaper about how the evolving data engineering profession helps data-driven companies work smarter and lower cloud costs with Qubole.
https://www.qubole.com/resources/white-papers/the-evolving-role-of-the-data-engineer
Techniques to optimize the pagerank algorithm usually fall into two categories. One is to try reducing the work per iteration, and the other is to try reducing the number of iterations. These goals are often at odds with one another. Skipping computation on vertices which have already converged has the potential to save iteration time. Skipping in-identical vertices, with the same in-links, helps reduce duplicate computations and thus could help reduce iteration time. Road networks often have chains which can be short-circuited before pagerank computation to improve performance; final ranks of chain nodes can be easily calculated. This could reduce both the iteration time and the number of iterations. If a graph has no dangling nodes, pagerank of each strongly connected component can be computed in topological order. This could help reduce the iteration time and the number of iterations, and also enable multi-iteration concurrency in pagerank computation. The combination of all of the above methods is the STICD algorithm. [sticd] For dynamic graphs, unchanged components whose ranks are unaffected can be skipped altogether.
Empowering the Data Analytics Ecosystem: A Laser Focus on Value
The data analytics ecosystem thrives when every component functions at its peak, unlocking the true potential of data. Here's a laser focus on key areas for an empowered ecosystem:
1. Democratize Access, Not Data:
Granular Access Controls: Provide users with self-service tools tailored to their specific needs, preventing data overload and misuse.
Data Catalogs: Implement robust data catalogs for easy discovery and understanding of available data sources.
2. Foster Collaboration with Clear Roles:
Data Mesh Architecture: Break down data silos by creating a distributed data ownership model with clear ownership and responsibilities.
Collaborative Workspaces: Utilize interactive platforms where data scientists, analysts, and domain experts can work seamlessly together.
3. Leverage Advanced Analytics Strategically:
AI-powered Automation: Automate repetitive tasks like data cleaning and feature engineering, freeing up data talent for higher-level analysis.
Right-Tool Selection: Strategically choose the most effective advanced analytics techniques (e.g., AI, ML) based on specific business problems.
4. Prioritize Data Quality with Automation:
Automated Data Validation: Implement automated data quality checks to identify and rectify errors at the source, minimizing downstream issues.
Data Lineage Tracking: Track the flow of data throughout the ecosystem, ensuring transparency and facilitating root cause analysis for errors.
5. Cultivate a Data-Driven Mindset:
Metrics-Driven Performance Management: Align KPIs and performance metrics with data-driven insights to ensure actionable decision making.
Data Storytelling Workshops: Equip stakeholders with the skills to translate complex data findings into compelling narratives that drive action.
Benefits of a Precise Ecosystem:
Sharpened Focus: Precise access and clear roles ensure everyone works with the most relevant data, maximizing efficiency.
Actionable Insights: Strategic analytics and automated quality checks lead to more reliable and actionable data insights.
Continuous Improvement: Data-driven performance management fosters a culture of learning and continuous improvement.
Sustainable Growth: Empowered by data, organizations can make informed decisions to drive sustainable growth and innovation.
By focusing on these precise actions, organizations can create an empowered data analytics ecosystem that delivers real value by driving data-driven decisions and maximizing the return on their data investment.
1. Introduction to data lake
Prepared by Dr. Swarnalatha K.S, Professor, Dept. of ISE, NMIT
2. Introduction to Data Lakes: Data-driven decision making is changing how we work and live. From data science, machine learning, and advanced analytics to real-time dashboards, decision makers are demanding data to help make decisions. Companies like Google, Amazon, and Facebook are data-driven juggernauts that are taking over traditional businesses by leveraging data. Financial services organizations and insurance companies have always been data driven, with quants and automated trading leading the way. The Internet of Things (IoT) is changing manufacturing, transportation, agriculture, and healthcare. From governments and corporations in every vertical to non-profits and educational institutions, data is being seen as a game changer.

Artificial intelligence and machine learning are permeating all aspects of our lives. The world is bingeing on data because of the potential it represents. We even have a term for this binge: big data, defined by Doug Laney of Gartner in terms of the three Vs (volume, variety, and velocity), to which he later added a fourth and, in my opinion, the most important V: veracity.
3. With so much variety, volume, and velocity, the old systems and processes are no longer able to support the data needs of the enterprise. Veracity is an even bigger problem for advanced analytics and artificial intelligence, where the principle of "GIGO" (garbage in = garbage out) is even more critical: it is virtually impossible to tell whether the data was bad and caused bad decisions in statistical and machine learning models, or the model itself was bad.
4. Data Lake Maturity

The data lake is a relatively new concept, so it is useful to define some of the stages of maturity you might observe and to clearly articulate the differences between these stages:

A data puddle is basically a single-purpose or single-project data mart built using big data technology. It is typically the first step in the adoption of big data technology. The data in a data puddle is loaded for the purpose of a single project or team. It is usually well known and well understood, and the reason that big data technology is used instead of traditional data warehousing is to lower cost and provide better performance.

A data pond is a collection of data puddles. It may be like a poorly designed data warehouse, which is effectively a collection of colocated data marts, or it may be an offload of an existing data warehouse. While lower technology costs and better scalability are clear and attractive benefits, these constructs still require a high level of IT participation. Furthermore, data ponds limit data to only that needed by the project, and use that data only for the project that requires it. Given the high IT costs and limited data availability, data ponds do not really help us with the goals of democratizing data usage or driving self-service and data-driven decision making for business users.

A data lake is different from a data pond in two important ways. First, it supports self-service, where business users are able to find and use data sets that they want to use without having to rely on help from the IT department. Second, it aims to contain data that business users might possibly want, even if there is no project requiring it at the time.
5. A data ocean expands self-service data and data-driven decision making to all enterprise data, wherever it may be, regardless of whether it was loaded into the data lake or not.

Figure 1-1 illustrates the differences between these concepts. As maturity grows from a puddle to a pond to a lake to an ocean, the amount of data and the number of users grow, sometimes quite dramatically. The usage pattern moves from one of high-touch IT involvement to self-service, and the data expands beyond what's needed for immediate projects.
6. The key difference between the data pond and the data lake is the focus. Data ponds provide a less expensive and more scalable technology alternative to existing relational data warehouses and data marts. Whereas the latter are focused on running routine, production-ready queries, data lakes enable business users to leverage data to make their own decisions by doing ad hoc analysis and experimentation with a variety of new types of data and tools, as illustrated in Figure 1-2.

Before we get into what it takes to create a successful data lake, let's take a closer look at the two maturity stages that lead up to it.
7. Creating a Successful Data Lake

So what does it take to have a successful data lake? As with any project, aligning it with the company's business strategy and having executive sponsorship and broad buy-in are a must. In addition, based on discussions with dozens of companies deploying data lakes with varying levels of success, three key prerequisites can be identified:
• The right platform
• The right data
• The right interfaces
8. The Right Platform

Big data technologies like Hadoop and cloud solutions like Amazon Web Services (AWS), Microsoft Azure, and Google Cloud Platform are the most popular platforms for a data lake. These technologies share several important advantages:

Volume. These platforms were designed to scale out; in other words, to scale indefinitely without any significant degradation in performance.

Cost. We have always had the capacity to store a lot of data on fairly inexpensive storage, like tapes, WORM disks, and hard drives. But not until big data technologies did we have the ability to both store and process huge volumes of data so inexpensively, usually at one-tenth to one-hundredth the cost of a commercial relational database.

Variety. These platforms use filesystems or object stores that allow them to store all sorts of files: Hadoop HDFS, MapR FS, AWS's Simple Storage Service (S3), and so on. Unlike a relational database that requires the data structure to be predefined (schema on write), a filesystem or an object store does not really care what you write. Of course, to meaningfully process the data you need to know its schema, but that's only when you use the data.
9. This approach is called schema on read, and it's one of the important advantages of big data platforms, enabling what's called "frictionless ingestion." In other words, data can be loaded with absolutely no processing, unlike in a relational database, where data cannot be loaded until it is converted to the schema and format expected by the database.
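To make the schema-on-read contrast concrete, here is a minimal sketch using PySpark; the file path, field names, and schema are invented for illustration, not taken from the slides:

# Schema on read: the raw JSON landed in the lake untouched; a schema is
# declared only now, at query time. Path and fields are illustrative.
from pyspark.sql import SparkSession
from pyspark.sql.types import (StructType, StructField, StringType,
                               DoubleType, TimestampType)

spark = SparkSession.builder.appName("schema-on-read").getOrCreate()

raw_path = "/datalake/raw/clickstream/2024-01-15/events.json"  # hypothetical

schema = StructType([
    StructField("user_id", StringType()),
    StructField("page", StringType()),
    StructField("duration_sec", DoubleType()),
    StructField("ts", TimestampType()),
])

events = spark.read.schema(schema).json(raw_path)  # schema applied on read
events.groupBy("page").count().show()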
Because our requirements and the world we live in are in flux, it is critical to make sure that the data we have can be used to help with our future needs. Today, if data is stored in a relational database, it can be accessed only by that relational database. Hadoop and other big data platforms, on the other hand, are very modular.

The same file can be used by various processing engines and programs: from Hive queries (Hive provides a SQL interface to Hadoop files) to Pig scripts to Spark and custom MapReduce jobs, all sorts of different tools and systems can access and use the same files. Because big data technology is evolving rapidly, this gives people confidence that any future projects will still be able to access the data in the data lake.
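As a small illustration of that modularity, the same file can be opened by more than one engine. This sketch assumes a Parquet file at an invented path and reads it with both Spark and pandas:

# One file, many engines: anything that speaks Parquet can read it.
import pandas as pd
from pyspark.sql import SparkSession

path = "/datalake/gold/sales_daily.parquet"  # hypothetical location

# Engine 1: Spark, for distributed processing.
spark = SparkSession.builder.appName("multi-engine").getOrCreate()
spark.read.parquet(path).show(5)

# Engine 2: pandas, for local exploration of the very same file.
print(pd.read_parquet(path).head())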
The Right Data

Most data collected by enterprises today is thrown away. Some small percentage is aggregated and kept in a data warehouse for a few years, but most detailed operational data, machine-generated data, and old historical data is either aggregated or thrown away altogether. That makes it difficult to do analytics. For example, if an analyst recognizes the value of some data that was traditionally thrown away, it may take months or even years to accumulate enough history of that data to do meaningful analytics. The promise of the data lake, therefore, is to be able to store as much data as possible for future use.
10. So, the data lake is sort of like a piggy bank (Figure 1-4): you often don't know what you are saving the data for, but you want it in case you need it one day. Moreover, because you don't know how you will use the data, it doesn't make sense to convert or treat it prematurely. You can think of it like traveling with your piggy bank through different countries, adding money in the currency of the country you happen to be in at the time and keeping the contents in their native currencies until you decide what country you want to spend the money in; you can then convert it all to that currency, instead of needlessly converting your funds (and paying conversion fees) every time you cross a border. To summarize, the goal is to save as much data as possible in its native format.

Figure 1-4. A data lake is like a piggy bank, allowing you to keep the data in its native or raw format
11. Another challenge with getting the right data is data silos. Different departments might hoard their data, both because it is difficult and expensive to provide and because there is often a political and organizational reluctance to share.

In a typical enterprise, if one group needs data from another group, it has to explain what data it needs; the group that owns the data then narrows the request as much as possible and takes as long as it can get away with to provide the data. This extra work is often used as an excuse to not share data.

With a data lake, because the lake consumes raw data through frictionless ingestion (basically, it's ingested as is, without any processing), that challenge (and excuse) goes away. A well-governed data lake is also centralized and offers a transparent process to people throughout the organization about how to obtain data, so ownership becomes much less of a barrier.
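Frictionless ingestion can be as simple as copying source extracts into a date-partitioned raw zone with no transformation at all. The following is a sketch under assumed paths, not a prescribed layout:

# Land source files in the raw zone exactly as received, partitioned by
# source system and arrival date. No parsing, no conversion; all
# interpretation is deferred to read time. Paths are illustrative.
import shutil
from datetime import date
from pathlib import Path

def ingest_raw(source_file: str, source_system: str,
               lake_root: str = "/datalake/raw") -> Path:
    """Copy a source extract into the raw zone, unmodified."""
    target_dir = Path(lake_root) / source_system / date.today().isoformat()
    target_dir.mkdir(parents=True, exist_ok=True)
    target = target_dir / Path(source_file).name
    shutil.copy2(source_file, target)  # byte-for-byte copy, no processing
    return target

# e.g. ingest_raw("/exports/crm/accounts.csv", "crm")
#      -> /datalake/raw/crm/<today>/accounts.csv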
The Right Interface

Once we have the right platform and we've loaded the data, we get to the more difficult aspects of the data lake, where most companies fail: choosing the right interface. To gain wide adoption and reap the benefits of helping business users make data-driven decisions, the solutions companies provide must be self-service, so their users can find, understand, and use the data without needing help from IT. IT will simply not be able to scale to support such a large user community and such a large variety of data.

There are two aspects to enabling self-service: providing data at the right level of expertise for the users, and ensuring the users are able to find the right data.
12. Providing data at the right level of expertise

To get broad adoption for the data lake, we want everyone from data scientists to business analysts to use it. However, when considering such divergent audiences with different needs and skill levels, we have to be careful to make the right data available to the right user populations.

For example, analysts often don't have the skills to use raw data. Raw data usually has too much detail, is too granular, and frequently has too many quality issues to be easily used. For instance, if we collect sales data from different countries that use different applications, that data will come in different formats with different fields (e.g., one country may have sales tax whereas another doesn't) and different units of measure (e.g., lb versus kg, $ versus €).

In order for the analysts to use this data, it has to be harmonized (put into the same schema with the same field names and units of measure) and frequently also aggregated, to daily sales per product or per customer. In other words, analysts want "cooked" prepared meals, not raw data.
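As a sketch of that harmonization step, the following pandas snippet aligns two country feeds to one schema and aggregates to daily totals; every feed, field name, rate, and value here is invented for illustration:

# Harmonizing two country feeds: align field names and units, convert
# currency, then aggregate to daily sales per product.
import pandas as pd

us = pd.DataFrame({"prod": ["A", "B"], "sale_date": ["2024-01-15"] * 2,
                   "amount_usd": [100.0, 250.0], "weight_lb": [2.2, 4.4]})
de = pd.DataFrame({"produkt": ["A", "B"], "datum": ["2024-01-15"] * 2,
                   "betrag_eur": [90.0, 200.0], "gewicht_kg": [1.0, 2.0]})

EUR_TO_USD = 1.10   # illustrative rate
LB_TO_KG = 0.4536

us_h = pd.DataFrame({"product": us["prod"], "date": us["sale_date"],
                     "amount_usd": us["amount_usd"],
                     "weight_kg": us["weight_lb"] * LB_TO_KG})
de_h = pd.DataFrame({"product": de["produkt"], "date": de["datum"],
                     "amount_usd": de["betrag_eur"] * EUR_TO_USD,
                     "weight_kg": de["gewicht_kg"]})

# "Cooked" data for analysts: one schema, daily totals per product.
daily = (pd.concat([us_h, de_h])
           .groupby(["date", "product"], as_index=False)
           .agg(total_usd=("amount_usd", "sum"),
                total_kg=("weight_kg", "sum")))
print(daily)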
Data scientists, on the other hand, are the complete opposite. For them, cooked data often loses the golden nuggets that they are looking for. For example, if they want to see how often two products are bought together, but the only information they can get is daily totals by product, data scientists will be stuck. They are like chefs who need raw ingredients to create their culinary or analytic masterpieces.
13. Roadmap to Data Lake Success

Now that we know what it takes for a data lake to be successful and what pitfalls to look out for, how do we go about building one? Usually, companies follow this process:
• Stand up the infrastructure (get the Hadoop cluster up and running).
• Organize the data lake (create zones for use by various user communities and ingest the data).
• Set the data lake up for self-service (create a catalog of data assets, set up permissions, and provide tools for the analysts to use; a minimal catalog sketch follows below).
• Open the data lake up to the users.
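The catalog mentioned in the self-service step can start very small. Here is a sketch of what a catalog entry might record, with all fields and values invented for illustration rather than taken from any real catalog product:

# A minimal data-catalog entry: enough metadata for an analyst to find a
# data set and judge whether it fits. Fields are illustrative.
from dataclasses import dataclass, field

@dataclass
class CatalogEntry:
    name: str                 # e.g. "sales_daily"
    zone: str                 # raw / gold / work / sensitive
    path: str                 # location in the lake
    owner: str                # who to ask about this data set
    description: str
    tags: list[str] = field(default_factory=list)

catalog = [
    CatalogEntry("sales_daily", "gold", "/datalake/gold/sales_daily.parquet",
                 "sales-eng", "Daily sales per product, harmonized", ["sales"]),
]

def search(entries: list[CatalogEntry], term: str) -> list[CatalogEntry]:
    """Naive keyword search over names, descriptions, and tags."""
    t = term.lower()
    return [e for e in entries
            if t in e.name.lower() or t in e.description.lower()
            or any(t in tag.lower() for tag in e.tags)]

# e.g. search(catalog, "sales") -> [CatalogEntry(name='sales_daily', ...)]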
14. Standing Up a Data Lake

When I started writing this book back in 2015, most enterprises were building on-premises data lakes using either open source or commercial Hadoop distributions. By 2018, at least half of enterprises were either building their data lakes entirely in the cloud or building hybrid data lakes that are both on premises and in the cloud. Many companies have multiple data lakes, as well.

All this variety is leading companies to redefine what a data lake is. We're now seeing the concept of a logical data lake: a virtual data lake layer across multiple heterogeneous systems. The underlying systems can be Hadoop, relational, or NoSQL databases, on premises or in the cloud.

Figure 1-7 compares the three approaches. All of them offer a catalog that the users consult to find the data assets they need. These data assets either are already in the Hadoop data lake or get provisioned to it, where the analysts can use them.
15. Organizing the Data Lake

Most data lakes that I have encountered are organized roughly the same way, into various zones:
• A raw or landing zone, where data is ingested and kept as close as possible to its original state.
• A gold or production zone, where clean, processed data is kept.
• A dev or work zone, where the more technical users such as data scientists and data engineers do their work. This zone can be organized by user, by project, by subject, or in a variety of other ways. Once the analytics work performed in the work zone gets productized, it is moved into the gold zone.
• A sensitive zone that contains sensitive data.

Figure 1-8 illustrates this organization.
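To tie the zones together, here is a sketch of one possible directory layout plus a helper that promotes a productized data set from the work zone to the gold zone; the paths and names are illustrative assumptions, not a standard:

# One possible layout for the zones described above, plus a helper that
# "productizes" a work-zone data set by promoting it to the gold zone.
import shutil
from pathlib import Path

LAKE = Path("/datalake")
ZONES = {
    "raw": LAKE / "raw",             # data kept close to its original state
    "gold": LAKE / "gold",           # clean, processed, production data
    "work": LAKE / "work",           # data scientists' and engineers' workspace
    "sensitive": LAKE / "sensitive", # restricted data, tighter permissions
}

def promote_to_gold(dataset: str, project: str) -> Path:
    """Move a productized data set from the work zone to the gold zone."""
    src = ZONES["work"] / project / dataset
    dst = ZONES["gold"] / dataset
    dst.parent.mkdir(parents=True, exist_ok=True)
    shutil.move(str(src), str(dst))
    return dst

# e.g. promote_to_gold("churn_scores.parquet", "churn-model")
#      -> /datalake/gold/churn_scores.parquet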