Data Warehousing Concepts and Design
Introduction & Ground Rules
Objectives
What is Business Intelligence (BI)? Business Intelligence is the process of transforming relevant business data into information, information into knowledge and, through iterative discovery, knowledge into intelligence.
Objective of Business Intelligence: BI can be defined as making 'decisions based on data'. The objective of BI is to transform large volumes of data into useful information, moving up the pyramid from data to information to knowledge to intelligence, where value rises as volume falls.
Evolution of BI (EIS, MIS, DSS, BI)
Information
Features of OLTP Systems
Why are OLTP systems not suitable for analysis?
OLTP: supports day-to-day operations; data stored at transaction level; islands of operational systems; normalized database design.
Analytical reporting: needs historical information to analyze; data required at summary level; data needs to be integrated; dimensional database design.
OLTP Versus Data Warehouse
Property | OLTP | Data Warehouse
Response Time | Sub-seconds to seconds | Seconds to hours
Operations | Data goes in; DML | Data goes out; primarily read-only
Age of Data | Current; 30-60 days or 1-2 years | Historical; snapshots over time (quarter, month, etc.)
Data Organization | Application | Subject, time
Size | Small to large; few MB to GB | Large to very large; few GB to TB
OLTP Versus Data Warehouse
Property | OLTP | Data Warehouse
Data Sources | Operational, internal | Operational, internal, external
Activities | Processes | Analysis
Grain | Atomic (detail), transaction level, highest granularity | Atomic and/or summarized (aggregate), lower granularity
No. of Records | One record at a time | Thousands to millions of records
Database Design | Normalized | De-normalized, star schema
OLTP Versus Data Warehouse
Data Extract Processing: operational systems feed extracts that are delivered to decision makers.
Issues with Data Extract Programs: the extract explosion, as extracts of operational systems multiply on the way to decision makers.
Data Quality Issues with Extract Processing
Data Warehousing and Business Intelligence
Advances Enabling Data Warehousing
Definition of a Data Warehouse: a subject-oriented, integrated, time-variant, and nonvolatile collection of data in support of management's decision-making process (Inmon).
Data Warehouse Properties: subject-oriented, integrated, time-variant, nonvolatile.
Subject-Oriented: data from OLTP applications (equity plans, shares, insurance, loans, savings) is reorganized around the data warehouse subject, such as customer financial information.
Integrated: data about the same entity (customer, savings, current accounts, loans) is consolidated from multiple OLTP applications into the data warehouse.
Time-Variant
Nonvolatile: operational systems insert, update, delete, and read data; the warehouse is only loaded and read.
Changing Warehouse Data: operational databases feed the warehouse through a first-time load, followed by periodic refreshes and eventual purge or archive.
Goals of a Data Warehouse
Usage Curves
User Expectations
Enterprisewide Data Warehouse
Data Warehouse Vocabulary
Data Marts
Data Warehouses Versus Data Marts
Property | Data Warehouse | Data Mart
Scope | Enterprise | Department
Subjects | Multiple subjects | Single subject, LOB
Data Sources | Many | Few
Implementation Time | Months to years | Months
Size | 100 GB to > 1 TB | < 100 GB
Initial Effort, Cost, Risk | Higher | Lower
Approach | Top-down | Bottom-up
Next Level of Migration | Data mart | Data warehouse
Data Warehouses Versus Data Marts
Dependent Data Mart: source systems (operational systems, legacy data, external data, flat files) feed the data warehouse, which in turn feeds the departmental data marts (marketing, sales, finance, HR).
Independent Data Mart: source systems (operational systems, legacy data, external data, flat files) feed a stand-alone data mart (for example, sales or marketing) directly, without a central data warehouse.
Warehouse Development Approaches
Top-Down Approach: build the data warehouse, then build the data marts.
Top-Down Approach: source systems (operational systems, legacy data, external data, flat files) feed the data warehouse, which then feeds the data marts (marketing, sales, finance, HR).
Bottom-Up Approach: build the data marts, then build the data warehouse.
Bottom-Up Approach: source systems (operational systems, legacy data, external data) feed the data marts (marketing, sales, finance), which are then consolidated into the data warehouse.
Hybrid Approach: start by designing the data warehouse and data mart models synchronously; build out the first two to three data marts that are mutually exclusive and critical; backfill a data warehouse behind the data marts; then build the enterprise model and move atomic data into the data warehouse.
Federated Approach: this approach is referred to as 'an architecture of architectures'. It emphasizes the need to integrate new and existing heterogeneous BI environments.
Data Warehouse Components: source systems (operational, external, legacy), staging area, presentation area (data warehouse, data marts, ODS), and access tools, with a metadata repository spanning the environment.
Examining Data Sources
Production Data: for example IMS, DB2, Oracle, Sybase, Informix, VSAM, SAP, Dun and Bradstreet Financials, Oracle Financials, Baan, PeopleSoft.
Archive Data: moved from the operational databases into the warehouse database.
Internal Data: for example planning, accounting, and marketing data feeding the warehouse database.
External Data: purchased databases, economic forecasts, and competitive information from sources such as A.C. Nielsen, IRI, IMRB, ORG-MARG, Barron's, Dun and Bradstreet, and the Wall Street Journal.
Extraction, Transformation and Loading (ETL)
Extraction, Transformation and Loading (ETL): data moves from the source through the staging area to the target.
Staging Models
Remote Staging Model: the staging area sits either within the warehouse environment or in its own independent environment; in both cases data is extracted from the operational system, transformed in the staging area, and loaded into the warehouse.
On-site Staging Model: data is extracted and transformed in a staging area alongside the operational system, then loaded into the warehouse.
Extraction Methods
ETL Techniques
Mapping Data: metadata maps source fields to staging fields; for example, File A (F1 = 123, F2 = Bloggs, F3 = 10/12/56) maps to Staging File One (Number = USA123, Name = Mr. Bloggs, DOB = 10-Dec-56).
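A minimal sketch of such a mapping in SQL, assuming hypothetical staging tables FILE_A(F1, F2, F3) and STAGING_FILE_ONE(CUST_NUMBER, CUST_NAME, DOB); the actual mapping rules would come from the metadata:

-- Hypothetical field mapping from a source extract to a staging table
INSERT INTO staging_file_one (cust_number, cust_name, dob)
SELECT 'USA' || f1,                  -- derive the standard key by prefixing a country code
       f2,                           -- straight column rename: F2 becomes the name field
       TO_DATE(f3, 'DD/MM/YY')       -- convert the source date string into a DATE
FROM   file_a;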
Transformation Routines
Transforming Data: Problems and Solutions
Data Anomalies: the same customer appears under several names and addresses, for example:
CUSNUM | NAME | ADDRESS
90233479 | Oracle Limited | 100 N.E. 1st St.
90233489 | Oracle Computing | 15 Main Road, Ft. Lauderdale
90234889 | Oracle Corp. UK | 15 Main Road, Ft. Lauderdale, FLA
90345672 | Oracle Corp UK Ltd | 181 North Street, Key West, FLA
Multipart Keys Problem: a single code embeds several attributes; for example, a product code such as 12 M 65431 3 45 combines a country code, sales territory, product number, and salesperson code.
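A sketch of splitting such a key during staging, assuming a hypothetical SOURCE_PRODUCTS table and assumed character positions within the code:

-- Hypothetical decomposition of a multipart product code like '12M65431345' (positions assumed)
SELECT product_code,
       SUBSTR(product_code, 1, 2)  AS country_code,
       SUBSTR(product_code, 3, 1)  AS sales_territory,
       SUBSTR(product_code, 4, 5)  AS product_number,
       SUBSTR(product_code, 9, 3)  AS salesperson_code
FROM   source_products;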
Multiple Local Standards Problem: sources use different units of measure (cm versus inches), currencies (USD, GBP, FF), and date formats (DD/MM/YY, MM/DD/YY, DD-Mon-YY) that must be standardized.
Multiple Source Files Problem: logic is needed to detect the correct source among multiple source files before the data is transformed.
Missing Values Problem: supply a default when a value is missing, for example: if NULL then field = 'A'.
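A minimal sketch of that rule in Oracle SQL, assuming a hypothetical STAGING_CUSTOMERS table with a STATUS_CODE column:

-- Replace missing values with the default 'A' while reading from staging
SELECT cust_id,
       NVL(status_code, 'A') AS status_code
FROM   staging_customers;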
Duplicate Values Problem: the same entity (for example, ACME Inc) exists in more than one source; an outer join in each direction combined with UNION (which removes duplicate rows) matches the two sources:
SQL> SELECT ...
     FROM   table_a, table_b
     WHERE  table_a.key (+) = table_b.key
     UNION
     SELECT ...
     FROM   table_a, table_b
     WHERE  table_a.key = table_b.key (+);
Element Names Problem: the same entity is named differently across sources (Customer, Client, Contact Name).
Element Meaning Problem: the same element name (p_no) means different things in different sources: product number, purchase order number, or policy number.
Input Format Problem: sources differ in character sets (ASCII versus EBCDIC), value formats (12373 versus "123-73"), languages and encodings, and free-text descriptions (ACME Co., Beer (Pack of 8)).
Referential Integrity Problem: rows reference lookup values that do not exist. For example, with departments 10, 20, 30, 40:
Emp | Name | Department
1099 | Smith | 10
1289 | Jones | 20
1234 | Doe | 50
6786 | Harris | 60
Doe and Harris reference departments (50, 60) that are missing from the department table.
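A sketch of an orphan-row check in SQL, assuming hypothetical EMPLOYEES and DEPARTMENTS staging tables:

-- Find employees whose department does not exist in the department table
SELECT e.emp, e.name, e.department
FROM   employees e
WHERE  NOT EXISTS (SELECT 1
                   FROM   departments d
                   WHERE  d.department = e.department);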
Name and Address Problem: names and addresses arrive as a single free-text field ('Mr. J. Smith, 100 Main St., Bigtown, County Luth, 23565') that must be parsed into components (Name = Mr. J. Smith, Street = 100 Main St., Town = Bigtown, County = County Luth, Code = 23565), and different databases format the same name differently (Database 1: NAME 'HARRY H. ENFIELD', LOCATION M300; Database 2: NAME 'ENFIELD, HARRY H', LOCATION 300).
Transformation Timing and Location
Adding a Date Stamp: Fact Tables and Dimensions
Sales Fact Table: Item_id, Store_id, Time_key, Sales_dollars, Sales_units
Item Table: Item_id, Dept_id, Time_key
Store Table: Store_id, District_id, Time_key
Product Table: Product_id, Time_key, Product_desc
Time Table: Week_id, Period_id, Year_id, Time_key
Summarizing Data: data is summarized in the staging area between the operational databases and the warehouse database.
Loading Data into the Warehouse: extract from the operational databases, transform in the staging area, then transport and load into the warehouse database.
Load Window Requirements: load windows are scheduled around the user access period over the 24-hour day.
Planning the Load Window
Initial Load and Refresh
Data Refresh Models: snapshots of the operational databases are taken at successive points in time (T1, T2, T3).
Post-Processing of Loaded Data: after extract, transform, and load, post-processing in the warehouse includes creating indexes, generating keys, summarizing, and filtering.
Unique Indexes: disable constraints, load the data, then re-enable constraints and create the index; catch errors and reprocess any rows that violate uniqueness.
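A rough Oracle sketch of that sequence, assuming a hypothetical SALES_FACT table, a SALES_FACT_UK unique constraint, and an EXCEPTIONS table created beforehand (for example with utlexcpt.sql):

-- Disable the unique constraint before the bulk load
ALTER TABLE sales_fact DISABLE CONSTRAINT sales_fact_uk;

-- ... bulk load the data here ...

-- Re-enable it; ROWIDs of duplicate rows are written to the EXCEPTIONS table for reprocessing
ALTER TABLE sales_fact ENABLE CONSTRAINT sales_fact_uk
  EXCEPTIONS INTO exceptions;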
Creating Derived Keys: generate a warehouse (derived) key instead of reusing the production key, for example by combining the production key (109908) with an instance or version number (01).
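A minimal sketch of surrogate-key generation with an Oracle sequence, assuming hypothetical CUSTOMER_DIM and STAGING_CUSTOMERS tables:

-- Generate warehouse keys from a sequence; keep the production key for traceability
CREATE SEQUENCE customer_key_seq START WITH 1;

INSERT INTO customer_dim (customer_key, source_customer_id, customer_name)
SELECT customer_key_seq.NEXTVAL,
       s.customer_id,
       s.customer_name
FROM   staging_customers s;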
 
Metadata Users: the metadata repository serves end users, developers, and IT professionals.
Metadata Documentation Approaches
Data Warehouse Design
Business Requirements Drive the Design Process: inputs include existing metadata, the production ERD model, business requirements, and research.
Perform Strategic Analysis: position candidate subject areas on a matrix of business benefit (low to high) against feasibility (low to high).
Using a Business Process Matrix (DW Bus Architecture): business processes (sales, returns, inventory) in the rows against business dimensions (customer, date, product, channel, promotion) in the columns.
Conformed Dimensions
Determine the Grain YEAR? QUARTER? MONTH? WEEK? DAY?
Documenting the Granularity: from low-level grain (transaction-level data) to high-level grain (summary data).
Defining Time Granularity: the fiscal time hierarchy runs Fiscal Year, Fiscal Quarter, Fiscal Month, Fiscal Week, Day, with the current dimension grain and a planned future dimension grain at different levels.
Identify the Facts and Dimensions: separate the facts (measures) from the dimensions.
Data Warehouse Environment Data Structures
Star Schema: a central Sales fact surrounded by Customer, Location, Supplier, and Product dimensions.
Star Schema Model: a central fact table with denormalized dimensions.
Sales Fact Table: Product_id, Store_id, Item_id, Day_id, Sales_amount, Sales_units, ...
Product Table: Product_id, Product_desc, ...
Time Table: Day_id, Month_id, Year_id, ...
Item Table: Item_id, Item_desc, ...
Store Table: Store_id, District_id, ...
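A typical star-join query against this model, offered only as a sketch (assuming the tables are named SALES_FACT, TIME_DIM, and STORE_DIM, and that the filter value is illustrative):

-- Constrain the dimensions, join them to the fact on their keys, and aggregate the measures
SELECT t.year_id,
       s.district_id,
       SUM(f.sales_amount) AS total_sales_amount,
       SUM(f.sales_units)  AS total_sales_units
FROM   sales_fact f,
       time_dim   t,
       store_dim  s
WHERE  f.day_id   = t.day_id
AND    f.store_id = s.store_id
AND    t.year_id  = 2001            -- hypothetical filter
GROUP  BY t.year_id, s.district_id;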
Fact Table Characteristics: for example, the Sales fact table (Product_id, Store_id, Item_id, Day_id, Sales_amount, Sales_units, ...).
Dimension Table Characteristics: for example, the Item table (Item_id, Item_desc, ...).
Advantages of Using a Star Dimensional Model
Base and Derived Data: in the payroll table, Salary and Comm are base data and Comp (Salary + Comm) is derived.
Emp_FK | Month_FK | Salary | Comm | Comp
101 | 05 | 1,000 | 0 | 1,000
102 | 05 | 1,500 | 100 | 1,600
103 | 05 | 1,000 | 200 | 1,200
104 | 05 | 1,500 | 1,000 | 2,500
Translating Business Measures into a Fact Table
Business Measure | Fact | Type
Number of Items | Number of Items | Base
Amount | Item Amount | Base
Cost | Item Cost | Base
Profit | Profit | Derived
Snowflake Schema Model
Sales Fact Table: Item_id, Store_id, Product_id, Week_id, Sales_amount, Sales_units
Item Table: Item_id, Item_desc, Dept_id
Dept Table: Dept_id, Dept_desc, Mgr_id
Mgr Table: Dept_id, Mgr_id, Mgr_name
Product Table: Product_id, Product_desc
Time Table: Week_id, Period_id, Year_id
Store Table: Store_id, Store_desc, District_id
District Table: District_id, District_desc
Snowflake Model: an Order fact (History_FK, Customer_FK, Product_FK, Channel_FK, Item_nbr, Item_desc, Quantity, Discnt_price, Unit_price, Order_amt, ...) joined to Customer History (History_PK), Customer (Customer_PK), Product (Product_PK), Channel (Channel_PK, Web_PK, Channel_desc), and Web (Web_PK, Web_url) tables.
Snowflake Schema Model: dimension hierarchies are normalized into separate tables, for example Country, State, County, City.
Constellation Configuration Atomic   fact
Fact Table Measures
Additive: can be added across all dimensions.
Semiadditive: can be added along some dimensions.
Nonadditive: cannot be added along any dimension.
More on Factless Fact Tables: a fact table holding only foreign keys (Emp_FK, Sal_FK, Age_FK, Ed_FK, Grade_FK) to the Employee, Salary, Age, Education, and Grade dimensions (PK = primary key, FK = foreign key).
Factless Fact Tables
Bracketed Dimensions
Bracketing Dimensions: the Income fact (Customer_PK, Bracket_FK) references the Customer dimension (Customer_PK, Bracket_FK) and the Bracket dimension:
Bracket_PK | Income (10Ks) | Marital Status | Gender | Age
1 | 60-90 | Single | Male | <21
2 | 60-90 | Single | Male | 21-35
3 | 60-90 | Single | Male | 35-55
4 | 60-90 | Single | Male | >55
5 | 60-90 | Single | Female | <21
6 | 60-90 | Single | Female | 21-35
Identifying Analytical Hierarchies: business hierarchies describe organizational structure and logical parent-child relationships within the data. The Store dimension (Store ID, Store Desc, Location, Size, Type, District ID, District Desc, Region ID, Region Desc) carries the organization hierarchy Region, District, Store.
Multiple Hierarchies: a single Store dimension (adding City ID, City Desc, County ID, County Desc, State ID, State Desc) can support both the organization hierarchy (Region, District, Store) and a geography hierarchy (State, County, City, Store).
Multiple Time Hierarchies: a fiscal time hierarchy (Fiscal year, Fiscal quarter, Fiscal month, Fiscal week) and a calendar time hierarchy (Calendar year, Calendar quarter, Calendar month, Calendar week).
Drilling Up and Drilling Down: navigating the market hierarchy from Group to Region (1, 2) to District (1-4) to Store (1-6) and back.
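One way to produce the drill-up totals in SQL is GROUP BY ROLLUP; this is a sketch assuming hypothetical SALES_FACT and STORE_DIM tables that carry the region/district/store hierarchy:

-- Totals at store, district, region, and grand-total levels in a single pass
SELECT s.region_id,
       s.district_id,
       s.store_id,
       SUM(f.sales_amount) AS sales_amount
FROM   sales_fact f,
       store_dim  s
WHERE  f.store_id = s.store_id
GROUP  BY ROLLUP (s.region_id, s.district_id, s.store_id);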
Drilling Across: comparing the same measure through different hierarchies, for example stores larger than 20,000 sq. ft. viewed through the market hierarchy (Group, Region, District, Store) and through a city hierarchy (City, Store).
Using Time in the Data Warehouse: where should the element of time be stored, in the Time dimension or in the Sales fact?
Date Dimension
Applying the Changes to Data
OLAP Models
Slowly Changing Dimensions (SCDs)
Type I - Overwriting a Record: the existing row is updated in place and the old value is lost; for example, customer 42135 John Doe changes from Single to Married.
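A minimal Type 1 sketch in SQL, assuming a hypothetical CUSTOMER_DIM table keyed on CUSTOMER_KEY:

-- Overwrite the changed attribute; no history of the previous value is kept
UPDATE customer_dim
SET    marital_status = 'Married'
WHERE  customer_key   = '42135';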
Type II - Adding a New Record: a new row with a new key is added so history is preserved; for example, 42135 John Doe (Single) is kept and 42135_01 John Doe (Married) is inserted.
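A minimal Type 2 sketch against the same hypothetical CUSTOMER_DIM table:

-- Keep the existing row (42135, Single) and insert a new version with its own key
INSERT INTO customer_dim (customer_key, customer_name, marital_status)
VALUES ('42135_01', 'John Doe', 'Married');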
Type III - Adding a Current Field: the row keeps the original value in one column and the current value, with an effective date, in another; for example, 42135 John Doe shows original status Single and current status Married effective 1-Jan-01.
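A minimal Type 3 sketch, again assuming the hypothetical CUSTOMER_DIM table; the added column names are illustrative:

-- Add columns for the current value and its effective date, then populate them
ALTER TABLE customer_dim ADD (current_marital_status VARCHAR2(10),
                              status_effective_date  DATE);

UPDATE customer_dim
SET    current_marital_status = 'Married',
       status_effective_date  = TO_DATE('01-JAN-2001', 'DD-MON-YYYY')
WHERE  customer_key = '42135';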
Maintain History: keep a HIST_CUST table alongside the CUSTOMER dimension, with the Sales fact also joined to Product and Time.
Versioning: carry a version number on both the dimension and the fact.
Sales fact (Sales.CustId | Version | Sales): 1234 | 1 | $11,000; 1234 | 2 | $12,000
Customer dimension (Customer.CustId | Version | Name): 1234 | 1 | Comer; 1234 | 2 | Comer
Rapidly Changing Dimensions (RCDs): split rapidly changing attributes into a mini dimension.
Demographics Key | Age | Children | Income
1 | 20-24 | 0 | <20,000
2 | 20-24 | 1-2 | 20,000-30,000
3 | 20-24 | >2 | >30,000
4 | 25-30 | 0 | <20,000
5 | 25-30 | 1-2 | 20,000-30,000
Junk Dimension: combine miscellaneous low-cardinality flags into a single dimension.
Junk Key | Payment Type | Order Type | Order Mode
1 | Cash | Normal | Web
2 | Cash | Urgent | Web
3 | Credit | Normal | Fax
4 | Credit | Urgent | Fax
Secret of Success Think big, start small!
Thank you

Editor's Notes

  1. Course: Data Warehousing Concepts & Design. Owner: Technical Training Cell (TTC), Mastek Ltd. Document Version: 1.0
  2. What Is Business Intelligence (BI)? Business Intelligence is the process of transforming relevant business data into information, information into knowledge and through iterative discoveries turning knowledge into Intelligence. Data Represents reality, facts and figures. Also referred to as ‘raw data’. Information It is the relevant data processed and interpreted. Knowledge It refers to the meaning and understanding that results from the information processed.
  3. Objective of BI: BI can also be defined as making 'Decisions based on Data'. The purpose of BI is to turn large volumes of data into information and to link bits of information together within a decision context, producing knowledge that can be used for decision making.
  4. Evolution of BI EIS applications were developed by the IS team and written in 3GL, 4GL, C++, or some other structured programming language. These were predefined, restrictive queries that were delivered in tabular or chart form. Generally, information provided was limited. (1GL: Machine language 2GL: Assembly 3GL: C, C++, Cobol, Java 4GL: VB, PB, SQL, Access 5GL: Prolog) MIS provided management with reports to assess the performance of the business. Reports were submitted as a request to MIS team and were delivered to users after a period of time- few days, weeks or even months. DSS applications (VisiCalc, Lotus 1-2-3, VPP, Excel) were the first generation of packaged software that provided dynamically generated SQL enabling users to extract data from relational databases. This data was relevant to their business needs and focus. Primarily used by analysts, BI, the next generation of DSS, provides the capability to format reports easily. Additionally, multiple sources and multiple subject matters can be used simultaneously to provide an accurate assessment of the business. What-if scenarios can be run against the multi-dimensional models and Web-enabled applications provide decentralization of the warehouse.
  5. Information: It is one of the most important assets of the organization. Information in an organization exists in two different forms: operational systems (OLTP) and the data warehouse (DWH). The two systems have different business needs, purposes, and users.
  6. Features of OLTP Systems OLTP systems handle day-to-day transactions and operations of the business. They are high performance, high throughput Systems. They run mission critical applications. OLTP systems store, update and retrieve Operational Data. Operational Data is the data that runs the business. Accounting system, banking software, payroll package, Order-processing system, SAP, airline reservation system are some typical examples of OLTP systems. OLTP systems are built over a period of time. They are not built under a single project. They evolve/change as business grows and new lines of business get added. They are built using different tools and technologies. E.g. A bank could start its operations with savings and current account only and later with market changes and time, might add Demat, Forex and home loans services to its operations. All these systems could get built at different intervals in isolation.
  7. Why OLTP Is Not Suitable for Analytical Reporting: Operational systems largely exist to support transactions. Decision support involves complex analysis and is different from OLTP. Most OLTP transactions require a single record in a database to be located and updated, or the addition of one or more new records. The solution is to separate the two systems, production systems and analytical systems, which is done by building a data warehouse.
  8. Data Warehouse Versus Online Transaction Processing (OLTP) Response Time and Data Operations Data warehouses are constructed for very different reasons than online transactional processing (OLTP) systems. OLTP systems are optimized for getting data in—for storing data as a transaction occurs. Data warehouses are optimized for getting data out—for providing quick response for analysis purposes. Because there tends to be a high volume of activity in the OLTP environment, rapid response is critical; whereas, data warehouse applications are analytical rather than operational. Therefore slower performance is acceptable. Nature of Data The data stored in each database varies in nature: the data warehouse contains snapshots of data over time to support time-series analysis whereas, the OLTP system stores very detailed data for a short time such as 30 to 60 days or 1 year to 2 years. The data is current in OLTP. In a warehouse, data is Historical.
  9. Data Warehouse Versus Online Transaction Processing (OLTP) (continued) Data Organization The data warehouse is subject specific and supports analysis so data is arranged accordingly. In order for the OLTP system to support sub-second response, the data must be arranged to optimize the application. For example, an order entry system may have tables that hold each of the elements of the order whereas a data warehouse may hold the same data but arrange it by subject such as customer, product, and so on. Size The size of the operational database is few MB to GB. In a data warehouse, size of the database ranges from GB to TB. Data Sources Because the data warehouse is created to support analytical activities, data from a variety of sources can be integrated. The operational data store of the OLTP system holds only internal data or data necessary to capture the operation or transaction.
  10. Data Warehouse Versus Online Transaction Processing (OLTP) (continued) Activities OLTP is process oriented, designed to handle the day-to-day operations of the business. A data warehouse is subject oriented, built to handle querying and analysis. No. of records An OLTP system deals with one transaction or one record at a time. However, a query fired against a data warehouse may involve scanning a few thousand to millions of records. Grain The grain of data in an OLTP system is the most granular. Data exists at detail or transaction level (atomic). In a data warehouse, data could exist at atomic and/or aggregate level depending on the business needs. Database Design OLTP systems are fully normalized and are designed to store operational data, one transaction at a time. A data warehouse is designed differently: it is in dimensional model form (generally a star schema) with highly de-normalized dimension tables. This design enables complex query analysis. Note: Dimensional modeling is covered later in the session.
  11. Data Extract Processing Extract processing was a logical progression from decision support systems. It was seen as a way to move the data from the high-performance, high throughput online transaction processing systems onto client machines that are dedicated to analysis. Extract processing also gave the user ownership of the data. DSS and Degradation The problem of performance degradation was partially solved by using extract processing techniques that select data from one environment and transport it to another environment for user access (a data extract). Data Extract Program The data extract program searches through files and databases, gathering data according to specific criteria. The data is then placed into a separate set of files, which resides on another environment, for use by analysts for decision support activities.
  12. Management Issues with Data Extract Programs Although the principle of extracts appears logical, and to some degree represents a model similar to the way a data warehouse works, there are problems with processing extracts . Extract programs may become the source for other extracts, and extract management can become a full-time task for information systems departments. In some companies hundreds of extract programs are run at any time. Productivity Issues Extract effort is duplicated, because multiple extracts access the same data and use mainframe resources unnecessarily. The program designed to access the extracted data must encompass all technologies that are employed by the source data. There is no common metadata providing a standard way of extracting, integrating, and using the data.
  13. Data Quality Issues with Extract Processing Following are the data quality issues in an extract processing environment: The data has no time basis and users cannot compare query results with confidence. The data extracts may have been taken at a different point-in-time. Each data extract may use a different algorithm for calculating derived and computed values. This makes the data difficult to evaluate, compare, and communicate by managers who may not know the methods or algorithms that are used to create the data extract or reports. Data extract programs may use different levels of extraction. Access to external data may not be consistent, and the granularity of the external data may not be well defined. Data sources may be difficult to identify, and data elements may be repeated on many extracts. The data field names and values may have different meanings in the various systems in the enterprise (lack of semantic integrity). There are no data correction rules to ensure that the extracted data is correct and clean. The reports provide data rather than information, and no drill-down capability.
  14. Data Warehousing and Business Intelligence For the past decade, there has been a transition from decision support using data extracts to decision support using the data warehouse. A data warehouse is a strategic collection of all types of data in support of the decision-making process at all levels of an enterprise. A Data warehouse is a single data store created for two primary reasons: Analytical reporting and Decision support.
  15. Technology Needed to Support the Business Needs Today’s information technology climate provides you with cost-effective computing resources in the hardware and software arena, Internet and intranet solutions, and databases that can hold very large volumes of data for analysis, using a multitude of data access technologies. Technological Advances Enabling Data Warehousing Technology (specifically open systems technology) is making it affordable to analyze vast amounts of data, and hardware solutions are now more cost-effective. Recent advances in parallelism have benefited all aspects of computing: Hardware environment Operating system environment Database management systems and all associated database operations Query tools and techniques and Applications
  16. W. H. Inmon states that “A data warehouse is a subject oriented, integrated, nonvolatile, and time variant collection of data in support of management’s decisions”. Subject-oriented - Data that gives information about a particular subject instead of about a company’s on-going process or operations. Integrated - Data that is gathered into the data warehouse from a variety of sources and merged into a coherent whole. Time-variant - All data in the data warehouse is identified with a particular time period. Non-volatile - Data is read-only. Data is stable in a data warehouse. More data is added, but data is never removed. This enables management to gain a consistent picture of the business.
  17. Data Warehouse Properties Bill Inmon’s definition of a data warehouse makes reference to the main properties of a data warehouse: Subject-oriented Integrated Nonvolatile Time-variant Let us understand and discuss these properties one by one in detail.
  18. Subject-oriented data is organized around major subject areas of an enterprise, and is useful for an enterprisewide understanding of those subjects. For example, a banking operational system keeps independent records of customer savings, loans, and other transactions. A warehouse pulls this independent data together to provide financial information. You can access subject-oriented data related to any major subject area of an enterprise: Customer financial information Toll calls made in the telecommunications industry Airline passenger booking information Insurance claim data While the data in an OLTP system is stored to support a specific business process (for example, order entry, campaign management, and so on) as efficiently as possible, data in a data warehouse is stored based on common subject areas (for example, customer, product, and so on). That is because the complete set of questions to be posed to a data warehouse is never known. Every question the data warehouse answers spawns new questions. Thus, the focus of the design of a data warehouse is providing users easy access to the data so that current and future questions can be answered.
  19. Integrated In many organizations, data resides in diverse independent systems, making it difficult to integrate into one set of meaningful information for analysis. A key characteristic of a warehouse is that data is completely integrated. Data is stored in a globally acceptable manner, even when the underlying source data is stored differently. The transformation and integration process can be time consuming and costly. Data Consistency You must deal with data inconsistencies and anomalies before the data is loaded into the warehouse. Consistency is applied to naming conventions, measurements, encoding structures, and physical attributes of the data. Data Redundancy Data Redundancy at the detail level in the warehouse is eliminated. However, selective redundancy in the form of aggregates and summaries is required to improve the performance of queries especially drill down analysis. In many implementations, about 70% or more of the warehouse development resources are used in developing extraction, transformation, and Loading (ETL) routines.
  20. Time-Variant Warehouse data is by nature historical; it does not usually contain real-time transactional data. Data is represented over a long time horizon, from 2 to 10 years, compared with 1 to 3 months (sometimes 1 to 2 years) of data for a typical operational system. The data allows for analysis of past and present trends, and for forecasting using “what-if” scenarios. Time Element The data warehouse always contains a key element of time, such as quarter, month, week, or day, that determines when the data was loaded. The date may be a single snapshot date, such as 10-JAN-02, or a range, such as 01-JAN-02 to 31-JAN-02. Snapshots by Time Period Warehouse data is a series of snapshots by time periods that do not change.
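To illustrate the time element, the sketch below shows how an analytical query constrains warehouse data by a snapshot date range. The table and column names (sales_fact, snapshot_date, sales_amount) are assumed for illustration only, not part of the course material.
-- Query one month of warehouse snapshots (hypothetical fact table).
SELECT snapshot_date,
       SUM(sales_amount) AS total_sales
FROM   sales_fact
WHERE  snapshot_date BETWEEN TO_DATE('01-JAN-02', 'DD-MON-YY')
                         AND TO_DATE('31-JAN-02', 'DD-MON-YY')
GROUP BY snapshot_date
ORDER BY snapshot_date;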
  21. Nonvolatile Typically, data in the data warehouse is read-only (from end-user or business user perspective). Data is loaded into the data warehouse for the first-time load, and then refreshed regularly. Warehouse operations typically involve: Loading the initial set of warehouse data (often called the first-time load) Refreshing the data regularly (called the refresh cycle) Accessing the Data Once a snapshot of data is loaded into the warehouse, it rarely changes. Therefore, data manipulation is not a consideration at the physical design level. The physical warehouse is optimized for data retrieval and analysis. Refresh Cycle The data in the warehouse is refreshed; that is, snapshots are added. The refresh cycle is determined by the business users. A refresh cycle need not be the same as the grain (level at which the data is stored) of the data for that cycle. For example, you may choose to refresh the warehouse weekly, but the grain of the data may be daily.
  22. Changing Warehouse Data The following operations are typical of a data warehouse: The initial set of data is loaded into the warehouse, often called the first-time load. This is the data by which you will measure the business, and the data containing the criteria by which you will analyze the business. Frequent snapshots of core data warehouse data are added (more occurrences), according to the refresh cycle, using data from the multiple source systems. Warehouse data may need to be changed for a number of reasons: The data that you are using to analyze the business may change, so the data warehouse must be kept up-to-date to keep it accurate. The business determines how much historical data is needed for analysis, say five years’ worth. Older data is either archived or purged. Inappropriate or inaccurate data values may be deleted from or migrated out of the data warehouse.
  23. As seen above, successful data warehousing demands a bevy of technical and business skills. Being only a DBA or a technician does not help. According to Ralph Kimball, the skill set an individual needs to successfully implement a data warehouse is: ½ MBA and ½ DBA.
  24. Usage Curves Operational systems and data warehouses have different usage curves. An operational system has a more predictable usage curve, whereas the warehouse has a less predictable, more varied, and random usage curve. Access to the warehouse varies not just on a daily basis, but may even be affected by forces such as seasonal variations. For this reason, you cannot expect the operational system to handle heavy analytical queries (DSS) and at the same time be able to handle the load of transactions for the minute-by-minute processing that requires fast transaction rates.
  25. User Expectations The difference in response time may be significant between a data warehouse and a client-server environment fronted by personal computers. You must control the user’s expectations regarding response. Set reasonable and achievable targets for query response time, which can be assessed and proved in the first increment of development. You can then define, specify, and agree on Service Level Agreements (SLA). If users are accustomed to fast PC-based systems, they may find the warehouse excessively slow. However, it is up to those educating the users to ensure that they are aware of just how big the warehouse is, how much data there is, and of what benefit the information is to both the user and the business. Exponential Growth and Use Once implemented, data warehouses continue to grow in size. Each time the warehouse is refreshed more data is added, deleted, or archived. The refresh happens on a regular cycle. Successful data warehouses grow very quickly. Once the success of the warehouse is proven, its use increases dramatically and it often grows faster than expected.
  26. Enterprisewide Data Warehouse To summarize, an enterprisewide warehouse stores data from all subject areas within the business for analysis by end users. The scope of the warehouse is the entire business and all operational aspects within the business. An enterprisewide warehouse is normally (and should be) created through a series of incrementally developed solutions. Never create an enterprisewide data warehouse under one project umbrella; it will not work. With an enterprisewide data warehouse, all users access the warehouse, which provides: A single source of corporate enterprisewide data A single source of synchronized data in the enterprisewide warehouse for each subject area A single point for distribution of data to dependent data marts
  27. Grain of Data - Granularity Granularity is usually mentioned in the context of dimensional data structures (i.e., facts and dimensions) and refers to the level of detail in a given fact table. The more detail there is in the fact table, the higher its granularity and vice versa. The meaning of a single row in a fact table is referred to as a grain of data. Fact table It is similar to the transaction table. It stores the facts or measures of the business. Facts are generally numeric and additive. E.g.: SALES table in dimensional model. Dimension table It is similar to the master table. It stores the textual descriptors of the business. E.g.: CUSTOMER, PRODUCT table in dimensional model. Note : Dimensional modeling is covered in detail later in the session.
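As a minimal sketch of the fact and dimension structures described above, the SQL below creates a simplified sales fact table and two dimension tables. The object names (sales_fact, customer_dim, product_dim) and their columns are illustrative assumptions, not a prescribed design.
-- Dimension tables hold the textual descriptors of the business.
CREATE TABLE customer_dim (
  customer_key   NUMBER PRIMARY KEY,
  customer_name  VARCHAR2(100),
  city           VARCHAR2(50)
);
CREATE TABLE product_dim (
  product_key    NUMBER PRIMARY KEY,
  product_name   VARCHAR2(100),
  category       VARCHAR2(50)
);
-- Fact table: one row per product, per customer, per day (the grain),
-- with numeric, additive measures.
CREATE TABLE sales_fact (
  sale_date      DATE,
  customer_key   NUMBER REFERENCES customer_dim (customer_key),
  product_key    NUMBER REFERENCES product_dim (product_key),
  quantity_sold  NUMBER,
  sales_amount   NUMBER(12,2)
);
Adding more detail (for example, an individual line-item key) would raise the granularity of the fact table; summarizing to monthly rows would lower it.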
  29. Data Warehouse Versus Data Mart Definition Data mart is a subset of data warehouse fact and summary data that provides users with information specific to their departmental requirements. It can be a subject-oriented data warehouse for functional or departmental information needs, or it can be a mini enterprisewide data warehouse combining data from multiple subject areas and acting as a kernel to feed the enterprise warehouse. Scope A data warehouse deals with multiple subject areas and is typically implemented and controlled by a central organizational unit such as the Corporate Information Technology group. It is often called a central or enterprise data warehouse. Subjects A data mart is a departmental form of a data warehouse designed for a single line of business (LOB) or functional area such as sales, finance, or marketing.
  30. Data Warehouse Versus Data Mart (continued) Data Source A data warehouse typically assembles data from multiple source systems. A data mart typically assembles data from fewer sources. Implementation Time Data marts are typically less complex than data warehouses and are therefore typically easier to build and maintain. A data mart can be built as a “proof of concept” (POC) step toward the creation of an enterprisewide warehouse. Note Size is not necessarily a differentiating factor when comparing a data warehouse to a data mart. In fact, data marts are sometimes as big as or even bigger than the enterprise data warehouse because they leverage third-party data sources. The main difference to highlight is that data marts are departmental in focus. When there exists a single enterprise DW, the system design is centralized; when there exists an enterprise DW with several dependent DMs, the system design is multi-tiered; when there exist several independent enterprise DWs or DMs, the system design is distributed.
  31. Dependent Data Marts Data marts can be categorized into two types: dependent and independent. The categorization is based primarily on the data source that feeds the data mart. Dependent Data Mart Dependent data marts have the following characteristics: The source is the warehouse. Dependent data marts rely on the data warehouse for content. The extraction, transformation, and loading (ETL) process is easy. Dependent data marts draw data from a central data warehouse that has already been created. Thus, the main effort in building a mart, the data cleansing and extraction, has already been performed. The dependent data mart simply requires data to be moved from one database to another. The data mart is part of the enterprise plan. Dependent data marts are usually built to achieve improved performance and availability, better control, and lower telecommunication costs resulting from local access to data relevant to a specific department.
  32. Independent Data Marts Independent data marts are stand-alone systems built from scratch that draw data directly from operational or external sources of data. Independent data marts have the following characteristics: The sources are operational systems and external sources. The ETL process is difficult. Because independent data marts draw data from unclean or inconsistent data sources, efforts are directed towards error processing and integration of data. The data mart is built to satisfy analytical needs. The creation of independent data marts is often driven by the need for a quick solution to analysis demands. Note Many dependent data marts still obtain some of their internal and external data outside of the data warehouse. Independent data marts are often not seen as a good solution and should be avoided for a number of reasons (for example, different answers to the same business question from multiple data marts, duplication of ETL, and so on).
  33. Warehouse Development Approaches The most challenging aspect of data warehousing lies not in its technical difficulty, but in choosing the best approach to data warehousing for your company’s structure and culture, and dealing with the organizational and political issues that will inevitably arise during implementation. Among the different approaches to developing a data warehouse are: Top-down approach Bottom-up approach Hybrid approach Federated approach There is still a great deal of confusion about the similarities and differences among these architectures; this is especially true of the “top-down” and “bottom-up” approaches.
  34. Top-Down Approach Also referred to as the “Big-Bang” approach. The “top-down” and “bottom-up” approaches have existed the longest and occupy the polar ends of the development spectrum. The two most influential approaches are championed by industry heavyweights Bill Inmon and Ralph Kimball, both prolific authors and consultants in the data warehousing field. Inmon, who is credited with coining the term “data warehousing” in the early 1990s, advocates a top-down approach, in which companies first build a data warehouse followed by data marts. Kimball’s approach, on the other hand, is often called bottom-up because it starts and ends with data marts, negating the need for a physical data warehouse altogether.
  35. Top-Down Approach (Continued) In the top-down approach, the data warehouse holds atomic or transaction data that is extracted from one or more source systems and integrated within a normalized, enterprise data model. From there, the data is summarized, dimensionalized, and distributed to one or more “dependent” data marts. These data marts are “dependent” because they derive all their data from a centralized data warehouse. Sometimes, organizations supplement the data warehouse with a staging area to collect and store source system data before it can be moved and integrated within the data warehouse. A separate staging area is particularly useful if there are numerous source systems, large volumes of data, or small batch windows with which to extract data from source systems. The major benefit of a “top-down” approach is that it provides an integrated, flexible architecture to support downstream analytic data structures. First, this means the data warehouse provides a departure point for all data marts, enforcing consistency and standardization so that organizations can achieve a single version of the truth. Second, the atomic data in the warehouse lets organizations re-purpose that data in any number of ways to meet new and unexpected business needs. For example, a data warehouse can be used to create rich data sets for statisticians, deliver operational reports, or support operational data stores (ODS) and analytic applications. Moreover, users can query the data warehouse if they need cross-functional or enterprise views of the data. On the downside, a top-down approach may take longer and cost more to deploy than other approaches, especially in the initial increments. This is because organizations must create a reasonably detailed enterprise data model as well as the physical infrastructure to house the staging area, data warehouse, and the marts before deploying their applications or reports. (Of course, depending on the size of an implementation, organizations can deploy all three “tiers” within a single database.) This initial delay may cause some groups with their own IT budgets to build their own analytic applications. Also, it may not be intuitive or seamless for end users to drill through from a data mart to a data warehouse to find the details behind the summary data in their reports.
  36. Bottom-Up Approach In a bottom-up approach, the goal is to deliver business value by deploying dimensional data marts as quickly as possible. Unlike the top-down approach, these data marts contain all the data—both atomic and summary—that users may want or need, now or in the future. Data is modeled in a star schema design to optimize usability and query performance. Each data mart builds on the next, reusing dimensions and facts so users can query across data marts, if desired, to obtain a single version of the truth as well as both summary and atomic data. The “bottom-up” approach consciously tries to minimize back-office operations, preferring to focus an organization’s effort on developing dimensional designs that meet end-user requirements. The “bottom-up” staging area is non-persistent, and may simply stream flat files from source systems to data marts using the file transfer protocol. In most cases, dimensional data marts are logically stored within a single database. This approach minimizes data redundancy and makes it easier to extend existing dimensional models to accommodate new subject areas.
  37. Bottom-Up Approach This approach is similar to the top-down approach but the emphasis is on the data rather than the business benefit. This is a “proof of concept” type of approach, therefore appealing to IT. Both approaches advocate building a robust enterprise architecture that adapts easily to changing business needs and delivers a single version of the truth. In some cases, the differences are more semantic than substantive in nature. For example, both approaches collect data from source systems into a single data store, from which data marts are populated. But while “top-down” subscribers call this a data warehouse, “bottom-up” adherents often call this a “staging area.”
  38. Hybrid Approach It attempts to capitalize on the speed and user-orientation of the “bottom-up” approach without sacrificing the integration enforced by a data warehouse in a “top down” approach. Pieter Mimno is currently the most vocal proponent of this approach. The hybrid approach recommends spending about two weeks developing an enterprise model before developing the first data mart. The first several data marts are also designed concurrently. After deploying the first few “dependent” data marts, an organization then backfills a data warehouse behind the data marts, instantiating the “fleshed out” version of the enterprise data model. The organization then transfers atomic data from the data marts to the data warehouse and consolidates redundant data feeds, saving the organization time, money, and processing resources. Organizations typically backfill a data warehouse once business users request views of atomic data across multiple data marts. Once the DM is proven to be a good investment, we have to avoid building too many DMs (generally no more than two to three) without implementing an enterprise DW. The main reason for this is to be able to create a common data model for the enterprise, by which smaller DMs can be refreshed, maintained and tuned for performance. It is here that we combine the strength of the top-down approach by reducing the data replication from the operational systems. Furthermore, it is essential to centralize the data extraction and removal process by having one common, shared repository. Data cleansing and transformation, which account for most of the unexpected cost, can now be minimized. At the end, this translates to less maintenance and better performance.
  39. Federated Approach The federated approach, as defined by its most vocal proponent, Doug Hackney, is not a methodology or architecture per se, but a concession to the natural forces that undermine the best-laid plans for deploying a perfect system. A federated approach rationalizes the use of whatever means possible to integrate analytical resources to meet changing needs or business conditions. Hackney says the federated approach is “an architecture of architectures.” It recommends how to integrate a multiplicity of heterogeneous data warehouses, data marts, and packaged applications that companies have already deployed and will continue to implement in spite of the IT group’s best effort to enforce standards and adhere to a specific architecture. The major problem with the federated approach is that it is not well documented. There are only a few columns written on the subject. But perhaps this is enough, as it doesn’t prescribe a specific end-state or approach. Another potential problem is that without a specific architecture in mind, a federated approach can perpetuate the continued decentralization and fragmentation of analytical resources, making it harder to deliver an enterprise view in the end. Also, integrating metadata is a pernicious problem in a heterogeneous, ever-changing environment.
  40. Typical Data Warehouse Components 1. Source Systems Source systems may be in the form of data existing in: Production operational systems within an organization Archives Internal files not directly associated with company operational systems, such as individual spreadsheets and workbooks External data from outside the company 2. Data Staging Area The data staging area is analogous to a kitchen of a restaurant where raw food products are transformed into a fine meal. In a data warehouse, raw operational data is transformed into a warehouse deliverable analytical data fit for query and consumption. It is both a storage area and a set of processes commonly known as Extract-Transformation-Load (ETL). It is off limits to business users and it is not suitable for querying and reporting. ETL is covered in detail later in the session. It may be an operational data store (ODS) environment, a set of flat files, a series of tables in a relational database server, or proprietary data structures used by data staging tools. A normalized database for data staging area is acceptable.
  41. Operational Data Store (ODS) An ODS is an integrated database of operational data. The ODS is a subject-oriented, integrated, current-valued, and volatile collection of detailed data that provides a true enterprise view of information. An ODS is frequently updated (synchronously, hourly, daily, or in batch). The frequency of update and degree of integration depend on the specific business requirements. The detailed current information in the ODS is transactional in nature. An ODS may contain 30 to 60 days of information. The ODS can also provide a stepping-stone to feed operational data into the data warehouse. The data warehouse industry has developed the ODS, usually presenting it as running on a separate platform, to satisfy tactical decision-making requirements. An ODS can deliver operational reports, especially when the legacy systems and the OLTP systems do not provide adequate operational reports. It relieves the production system of reporting and analysis demands and provides access to current data. The ODS can be a third physical system sitting between the operational systems and the data warehouse, or a partition of the data warehouse itself. The ODS is optional, and it can be transient or persistent. 3. Data Presentation The data presentation area is where the data is stored and optimized for direct querying, reporting, and analysis. It consists of the data warehouse or a series of integrated data marts. For the business community, the presentation area is the data warehouse. Data in the presentation area is strictly in dimensional form. It must contain detailed, atomic data, and it must adhere to the data warehouse bus architecture. While the logical design of the presentation area is a dimensional model, the physical implementation may be in an RDBMS or an MDDB. If an RDBMS is used, data is stored in tables referred to as star schemas. If an MDDB is used, data is stored in cubes. 4. Data Access Tools Data access tools are used by the business community to query the data warehouse’s presentation area. They may be ad hoc query tools, data mining tools, or application tools. Users may need to perform simple to complex business modeling, complex drill-down, simple queries on prepared summary information, what-if analysis, trend analysis and forecasting, and data mining using data spanning several time periods. Metadata Metadata is information about the actual data itself. Metadata explains what data exists, where it is located, and how to access it. It comes in many shapes and forms, and it supports the data warehouse’s technical, administrative, and business user groups.
  42. Examining Data Sources The data source systems may comprise data existing in: Production - refers to operational systems running on various operating system platforms, File systems (flat-files, VSAM, ISAM), Database systems (Oracle, SQL, Sybase), vertical applications (SAP, PS, Baan, Oracle Financials) Archives - Historical data useful for first-time-load. May require unique transformations and clear details of the changes must be maintained in metadata Internal – refers to files not directly associated with company operational systems, such as individual spreadsheets and workbooks. Internal data must be transformed, documented in metadata, and mapped between the source and target databases. External – refers to data collected from outside the company. External data is important to compare the performance of your business against others. There are many sources for external data: Periodicals and reports External syndicated data feeds Competitive analysis information Newspapers/ Purchased marketing, competitive, and customer related data Free data from the Web
  43. Production Data Production data may come from a multitude of different sources: Operating system platforms File systems (flat files, VSAM, and ISAM) Database systems, for example, Oracle, DB2, dBase, Informix, and so on Vertical applications, such as Oracle Financials, SAP, PeopleSoft, Baan, and Dun and Bradstreet (D&B) (D&B is the leading provider of global business information, tools, and insight that enable customers to decide with confidence, providing them with quality information whenever and wherever they need it)
  44. Archive Data Archive data may be useful to the enterprise in supplying historical data. Historical data is needed if analysis over long periods of time is to be achieved. Archive data is not used consistently as a source for the warehouse; for example, it would not be used for regular data refreshes. However, for the initial implementation of a data warehouse (and for the first-time load), archived data is an important source of historical data. You need to consider this carefully when planning the data warehouse. How much historical data do you have available for the data warehouse? How much effort is necessary to transform it into an acceptable format? The archive data may need some careful and unique transformations, and clear details of the changes must be maintained in metadata.
  45. Internal Data Internal data may be information prepared by planning, sales, or marketing organizations that contains data such as budgets, forecasts, or sales quotas. The data contains figures (numbers) that are used across the enterprise for comparison purposes. The data is maintained using software packages such as spreadsheets and word processors and uploaded into the warehouse. Internal data is treated like any other source system data. It must be transformed, documented in metadata, and mapped between the source and target databases.
  46. External Data External data is important if you want to compare the performance of your business against others. There are many sources for external data: Periodicals and reports External syndicated data feeds Competitive analysis information Newspapers Purchased marketing, competitive, and customer related data Free data from the Web You must consider the following issues with external data: Frequency: There is no real pattern like that of internal data. Constant monitoring is required to determine when it is available. Format: The data may be different in format than internal data, and the granularity of the data may be an issue. In order to make it useful to the warehouse a certain amount of reformatting may be required. In addition, you may find that external data, particularly that available on the Web, comes with digital audio data, picture image data, and digital video data. These present an interesting challenge for storage and speed of access. Predictability: External data is not predictable; it can come from any source at any time, in any format, on any medium.
  47. Extraction, Transformation, and Loading (ETL) In the next few slides, we discuss the ETL process in detail.
  48. Extraction, Transformation, and Loading (ETL) These processes are fundamental to the creation of quality information in the data warehouse. You take data from source systems; clean, verify, validate, and convert it into a consistent state; then move it into the warehouse. Extraction: The process of selecting specific operational attributes from the various operational systems. Transformation: The process of integrating, verifying, validating, cleaning, and time stamping the selected data into a consistent and uniform format for the target databases. Rejected data is returned to the data owner for correction and reprocessing. Loading : The process of moving data from an intermediate storage area into the target warehouse database.
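The sketch below shows the three steps in miniature using Oracle-style SQL. The staging table (stg_orders), the warehouse table (wh_orders), and the database link to the source system are assumptions made for the example; the real routines would be far richer.
-- Extract: copy the needed attributes from the operational system into a staging table.
INSERT INTO stg_orders (order_id, customer_id, order_date, amount)
SELECT order_id, customer_id, order_date, amount
FROM   orders@oltp_link                          -- database link to the source (assumed)
WHERE  order_date >= TO_DATE('01-APR-02', 'DD-MON-YY');
-- Transform and load: clean, standardize, time-stamp, and move into the warehouse table.
INSERT INTO wh_orders (order_id, customer_id, order_date, amount, load_date)
SELECT order_id,
       UPPER(TRIM(customer_id)),                 -- standardize the key format
       order_date,
       NVL(amount, 0),                           -- handle missing measures
       SYSDATE                                   -- stamp the load time
FROM   stg_orders;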
  49. Staging Models Out of the possible staging models listed in the slide, the model you choose depends on operational and warehouse requirements, system availability, connectivity bandwidth, gateway access, and volume of data to be moved or transformed.
  50. Remote Staging Model In this model, the staging area is not part of the Operational environment. You may choose to extract the data from the operational environment and transport it into the warehouse environment for transformation processing. The other option is to have a separate data staging area, which is neither a part of the operational system nor a part of the warehouse environment, which eliminates the negative impact on the performance of the warehouse.
  51. On-site Staging Model Alternatively, you may choose to perform the cleansing, transformation, and summarization processes locally in the operational environment and then load to the warehouse. This model may conflict with the day-to-day working of the operational system. If chosen, this model’s process should be executed when the operational system is idle or less heavily used.
  52. Extraction Methods The extraction method that you choose is highly dependent on the source system and the business needs in the target data warehouse environment. In addition, the estimated amount of the data to be extracted and the stage in the ETL process (initial load or maintenance of data) may also impact the decision of how to extract, from a logical and a physical perspective. Basically, you have to decide how to extract data logically and physically. Logical Extraction Methods Full Extraction: The data is extracted completely from the source system. Since this extraction reflects all the data currently available on the source system, there’s no need to keep track of changes to the data source since the last successful extraction. The source data will be provided as-is and no additional logical information (for example, timestamps) is necessary on the source site. Incremental Extraction: At a specific point in time, only the data that has changed since a well-defined event back in history will be extracted. This event may be the last time of extraction or a more complex business event like the last booking day of a fiscal period. To capture this delta, it must be possible to identify all the information that has changed since that specific event.
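A minimal sketch of incremental extraction driven by a last-update timestamp follows; the etl_control bookkeeping table and the column names are assumptions made for illustration.
-- Incremental extraction: pick up only rows changed since the last successful extract.
SELECT o.*
FROM   orders o
WHERE  o.last_update_ts > (SELECT last_extract_time
                           FROM   etl_control
                           WHERE  source_name = 'ORDERS');
-- After a successful run, advance the bookmark for the next cycle.
UPDATE etl_control
SET    last_extract_time = SYSTIMESTAMP
WHERE  source_name = 'ORDERS';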
  53. Physical Extraction Methods Depending on the chosen logical extraction method and the capabilities and restrictions on the source side, the extracted data can be physically extracted by two mechanisms. The data can either be extracted online from the source system or from an offline structure. Such an offline structure might already exist or it might be generated by an extraction routine. Online Extraction: The data is extracted directly from the source system. The extraction process can connect directly to the source system to access the source tables or to an intermediate system that stores the data in a preconfigured manner (for example, snapshot logs or change tables). Note that the intermediate system is not necessarily physically different from the source system. Offline Extraction: The data is not extracted directly from the source system but is staged explicitly outside the original source system. The data may already have one of the following structures: flat files, redo and archive logs, (Oracle-specific) dump files, and so on.
  54. Extraction Techniques You can extract data from different source systems to the warehouse in different ways: Programmatically, using languages such as COBOL, C, C++, PL/SQL, or Java Using a gateway to access data sources: This method is acceptable only for small amounts of data; otherwise, the network traffic becomes unacceptably high. Using in-house developed tools. Alternatively, though it is expensive, you can use a vendor’s data extraction tool. Note: The Oracle Open Gateway product set (previously known as SQL*Connect) is used to access data from non-Oracle databases and even non-relational sources like flat files.
  55. Mapping Data Once you have determined your business subjects for the warehouse, you need to determine the required attributes from the source systems. On an attribute-by-attribute basis you must determine how the source data maps into the data warehouse, and what, if any, transformation rules to apply. This is known as mapping. There are mapping tools available. Mapping information should be maintained in the metadata that is server (RDBMS) resident, for ease of access, maintenance, and clarity.
  56. Transformation Routines Transformation process uses many transformation routines, to eliminate the inconsistencies and anomalies from the extracted data. These transformation routines are designed to perform the following tasks: Cleaning the data, also referred to as data cleansing or scrubbing Adding an element of time (timestamp) to the data, if it does not already exist Translating the formats of external and purchased data into something meaningful for the warehouse Merging rows or records in files Integrating all the data into files and formats to be loaded into the warehouse Transformation can be performed: Before the data is loaded into the warehouse In parallel (On larger databases, there is not enough time to perform this process as a single threaded process.) The transformation process should be self-documenting, generate summary statistics, and process exceptions. Note: The terms scrubbing , cleaning , cleansing , and data re-engineering are used interchangeably.
  57. Transforming Data: Problems and Solutions The factors listed in the slide can potentially cause problems in the transformation process. Each of these problems, and the probable solutions are discussed in the following pages.
  58. Data Anomalies Reasons for Data Anomalies One of the causes of inconsistencies within internal data is that in-house system development takes place over many years, often with different software and development standards for each implementation. There may be no consistent policy for the software used in the corporate environment. Systems may be upgraded or changed over the years. Each system may represent data in different ways. Source Data Anomalies Many potential problems can exist with source data: No unique key for individual records Anomalies within data fields, such as differences between naming and coding (data type) conventions Differences in the interpreted meaning of the data by different user groups Spelling errors and other textual inconsistencies (this is particularly relevant in the area of customer names and addresses)
  59. Transforming Data: Problems and Solutions Multipart Keys Problem Many older operational systems used record key structures that had a built-in meaning. Solution To allow for decision support reporting, these keys must be broken down into atomic values. In the example, the key contains four atomic values. Key Code: 12M65431345 Where: 12 is the country code M is the sales territory 654313 is the product code 45 is the salesperson
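A sketch of how the example key above can be broken into its atomic values during transformation; legacy_orders and key_code are assumed names for the source table and column.
-- Decompose the multipart key 12M65431345 using the positions from the example.
SELECT SUBSTR(key_code, 1, 2)  AS country_code,     -- '12'
       SUBSTR(key_code, 3, 1)  AS sales_territory,  -- 'M'
       SUBSTR(key_code, 4, 6)  AS product_code,     -- '654313'
       SUBSTR(key_code, 10, 2) AS salesperson       -- '45'
FROM   legacy_orders;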
  60. Multiple Local Standards Problem This is particularly relevant for values entered in different countries. For example, some countries use imperial measurements and others metric; currencies and date formats differ; currency values and character sets may vary; and numeric precision values may differ. Currency values are often stored in two formats, a local currency such as sterling, French francs, or Australian dollars, and a global currency such as U.S. dollars. Solution Typically, you use tools or filters to preprocess this data into a suitable format for the database, with the logic needed to interpret and reconstitute a value. You might employ steps that are similar to those identified for multiple encoding. You may consider revising source applications to eliminate these inconsistencies early on.
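The sketch below shows one way such local formats might be normalized during transformation. The staging table (stg_intl_orders), the exchange-rate table (fx_rates), and the assumed source formats (DD/MM/YYYY dates, a comma decimal separator with no thousands separators) are all illustrative assumptions.
-- Normalize locally formatted values and store both local and global currency amounts.
SELECT s.order_id,
       TO_DATE(s.order_date_txt, 'DD/MM/YYYY')        AS order_date,    -- local date format to DATE
       TO_NUMBER(REPLACE(s.amount_txt, ',', '.'))     AS amount_local,  -- comma decimals to dot
       TO_NUMBER(REPLACE(s.amount_txt, ',', '.')) * f.rate_to_usd
                                                      AS amount_usd     -- convert to the global currency
FROM   stg_intl_orders s
JOIN   fx_rates f ON f.currency_code = s.currency_code;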
  61. Multiple Files Problem The source of information may be one file for one condition, and a set of files for another. Logic (normally procedural) must be in place to detect the right source. The complexity of integrating data is greatly increased according to the number of data sources being integrated. For example, if you are integrating data from two sources, there is a single point of integration where conflicts must be resolved. Integrate from three sources, and there are three points of conflict. Four sources provide six conflict points. The number of pairwise conflict points grows as n(n-1)/2, so the problem grows rapidly with the number of sources. Solution This is a complex problem that requires the use of tools or well-documented transformation mechanisms. Try not to integrate all the sources in the first instance. Start with two or three and then enhance the program to incorporate more sources. Build on your learning experiences.
  62. Missing Values Problem Null and missing values are always an issue. NULL values may be valid entries where NULLs are allowed; otherwise, NULLs indicate missing values. Solution You must examine each occurrence of the condition to determine validity and decide whether these occurrences must be transformed; that is, identify whether a NULL is valid or invalid (missing data). You may choose to: Ignore the missing data. If the volume of records is relatively small, it may have little impact overall. Wait to extract the data until you are sure that missing values are entered from the operational system. Mark rows when extracted, so that on the next extract you can select only those rows not previously extracted. It does involve the overhead of SELECT and UPDATE , and if the extracted data forms the basis of a summary table, these need re-creating. Extract data only when it is time-stamped.
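A small sketch of how missing values might first be assessed and then defaulted during the load; the table names and the sentinel "unknown" date are assumptions chosen for illustration, and whether to default, ignore, or defer the rows remains a business decision.
-- Assess how widespread the missing values are before deciding on a treatment.
SELECT COUNT(*)                    AS total_rows,
       COUNT(*) - COUNT(ship_date) AS rows_missing_ship_date
FROM   stg_orders;
-- One possible treatment: load with explicit defaults so analysts can recognize "unknown" rows.
INSERT INTO wh_orders (order_id, ship_date, amount)
SELECT order_id,
       NVL(ship_date, TO_DATE('01-JAN-1900', 'DD-MON-YYYY')),  -- sentinel "unknown" date
       NVL(amount, 0)
FROM   stg_orders;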
  63. Duplicate Values Problem You must eliminate duplicate values, which invariably exist. This can be time-consuming, although it is a simple task to perform. Solution You can use standard SQL self-join techniques or RDBMS constraint utilities to eliminate duplicates.
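A common self-join technique for removing duplicates, sketched below with assumed table and key names; it keeps one row per business key and deletes the rest.
-- Keep the first physical row per customer_id and delete the duplicates (Oracle ROWID idiom).
DELETE FROM stg_customers a
WHERE  a.ROWID > (SELECT MIN(b.ROWID)
                  FROM   stg_customers b
                  WHERE  b.customer_id = a.customer_id);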
  64. Element Names Problem Individual attributes, columns, or fields may vary in their naming conventions from one source to another. These variations need to be eliminated to ensure that one naming convention is applied to each value in the warehouse. If you are employing independent data marts, then you should ensure that the ETL solution is mirrored; should you plan to employ the data marts dependently in the future, they will all refer to the same object. Solution You must obtain agreement from all relevant user groups on renaming conventions, and rename the elements accordingly. Document the changes in metadata. The programs you use determine the solution. For example, if you are using the SQL CREATE TABLE AS SELECT (CTAS) statement, the new column names are specified in that statement.
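A minimal CTAS sketch of the renaming solution; the legacy source table (cust_mast), its column names, and the target table are assumptions for illustration.
-- Rename differently named source columns to the agreed warehouse names in one step.
CREATE TABLE wh_customer AS
SELECT cust_no        AS customer_id,
       cust_nm        AS customer_name,
       cust_addr_line AS address_line1
FROM   cust_mast;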
  65. Element Meaning Problem The element meaning is often interpreted differently by different user groups. The variations in naming conventions typically drive this misinterpretation. Solution It is a difficult problem, often political, but you must ensure that the meaning is clear. By documenting the meaning in metadata, you can solve this problem.
  66. Input Format Problem Input formats vary considerably. For example, one entry may accept alphanumeric data, so the format may be “123-73”. Another entry may accept numeric data only, so the format may be “12373”. You may also need to convert from ASCII to EBCDIC, or even convert complex character sets such as Hebrew, Arabic, or Japanese. Solution First, ensure that you document the original and the resulting formats. Your program (or tool) must then convert those data types either dynamically or through a series of transforms into one acceptable format. You can use Oracle SQL*Loader to perform certain transformations, such as EBCDIC to ASCII conversions and assigning values to default or NULL values.
  67. Referential Integrity Problem If the constraints at the application or database level have in the past been less than accurate, child and parent record relationships can suffer; orphaned records can exist. You must understand data relationships built into legacy systems. The biggest problem encountered here is that they are often undocumented. You must gain the support of users and technicians to help you with analysis and documentation of the source data. Solution This cleaning task is time-consuming and requires business experience to resolve the inconsistencies. You can use SQL anti-join query techniques, server constraint utilities, or dedicated tools to eliminate these inconsistencies.
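An anti-join sketch for locating orphaned records before the load; stg_sales and stg_customers are assumed staging tables, with customer_id as the parent key.
-- Find child rows whose customer key has no matching parent row.
SELECT f.*
FROM   stg_sales f
WHERE  NOT EXISTS (SELECT 1
                   FROM   stg_customers c
                   WHERE  c.customer_id = f.customer_id);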
  68. Name and Address Problem One of the largest areas of concern, with regard to data quality, is how name and address information is held, and how to transform it. Name and address information has historically suffered from a lack of legacy standards. This information has been stored in many different formats, sometimes dependent upon the software or even the data processing center used. Some of the following data inconsistencies may appear: No unique key Missing data values (NULLs) Personal and commercial names mixed Different addresses for same member Different names and spelling for same member Many names on one line One name on two lines The data may be in a single field of no fixed format (example shown in the slide) Each component of an address may be in a specific field (example shown in the slide)
  69. Transformation Timing and Location You need to consider carefully when and where you perform transformation. You must perform transformation before the data is loaded into the warehouse or in parallel; on larger databases, there is not enough time to perform this process as a single-threaded process. Consider the different places and points in time where transformation may take place. Transformation Points On the operational platform: This approach transforms the data on the operational platform, where the source data resides. The negative impact of this approach is that the transformation operation conflicts with the day-to-day working of the operational system. If it is chosen, the process should be executed when the operational system is idle or less utilized. The impact of this approach is so great that it is very unlikely to be employed. In a separate staging area: This approach transforms data in a separate computing environment, the staging area, where summary data may also be created. This is a common approach because it does not affect either the operational or warehouse environment.
  70. Adding a Date Stamp: Fact Tables and Dimensions Fact Table Data Assume that you need to add the next set of records from the source systems to your fact table. You need to determine which records are to be moved into the fact table. You have added data for March 2002. Now you need to add data for April 2002. You need to find a mechanism to stamp records so that you pick up only April 2002 records for the next refresh. You might choose from a number of techniques to time-stamp data: Code application or database triggers at the operational level, which can then be extracted using date selection criteria Perform a comparison of tables, original and new, to identify differences Maintain a table containing copies of changed records to be loaded You must decide which are the best techniques for you to use according to your current system implementations. Dimension Table Data The techniques discussed for fact tables can also be used with dimensions. Dimensions change and you can employ one of the different techniques to trap changes.
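Two of the time-stamping techniques mentioned above, sketched with assumed table names: a date-based selection that picks up only the April 2002 fact records, and a snapshot comparison that detects changed dimension rows.
-- Technique 1: date-based selection for the next refresh (April 2002 only).
SELECT *
FROM   orders
WHERE  order_date >= TO_DATE('01-APR-02', 'DD-MON-YY')
AND    order_date <  TO_DATE('01-MAY-02', 'DD-MON-YY');
-- Technique 2: compare the current snapshot with the previous one to find changed rows
-- (both tables are assumed to have identical structure).
SELECT * FROM customer_snapshot_current
MINUS
SELECT * FROM customer_snapshot_previous;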
  71. Summarizing Data Creating summary data is essential for a data warehouse to perform well. Here summarization is classified under transformation because you are changing (transforming) the way the data exists in the source system, to be used with the data warehouse. You can summarize the data: At the time of extraction in batch routines: This reduces the amount of work performed by the data warehouse server, as all the effort is concentrated on the source systems. However, summarizing at this time increases: the complexity and time taken to perform the extract, the number of files created, the number of load routines, and the complexity of the scheduling process After the data is loaded into the warehouse server: In this process the fact data is queried, summarized, and placed into the requisite summary fact table. This method reduces the complexity and time taken for the extract tasks. However, it places all the CPU and I/O intensive work on the warehouse server, thus increasing the time that the warehouse is unavailable to the users.
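A sketch of summarization performed after the load, building a monthly summary from the detailed fact table; the table and column names follow the earlier illustrative star schema and are assumptions.
-- Build a monthly summary table inside the warehouse once the detailed data is loaded.
CREATE TABLE sales_month_summary AS
SELECT TRUNC(sale_date, 'MM') AS sale_month,
       product_key,
       SUM(quantity_sold)     AS total_quantity,
       SUM(sales_amount)      AS total_amount
FROM   sales_fact
GROUP  BY TRUNC(sale_date, 'MM'), product_key;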
  72. Loading Data into the Warehouse The acronym ETL for “Extraction, Transformation, and Loading” is perhaps too simplistic, because it omits the transportation phase and implies that each of these processes are essentially distinct. This may not be true always; sometimes, the entire process, including data loading, is referred to as ETL. The transportation process involves moving the data from source data stores or an intermediate staging area to the data warehouse database. The loading process loads the data into the target warehouse database in the target system server. Transportation is often one of the simpler portions of the ETL process, and is often integrated with the loading process. These processes comprise a series of actions, such as moving the data and loading data into tables. There may also be some processing of objects after the load, often referred to as post-load processing. Moving and loading of the data can be a time-consuming task, depending upon the volumes of data, the hardware, the connectivity setup, and whether parallel operations are in place. The time period within which the warehouse system can perform the load is called the load window. Loading should be scheduled and prioritized. You should also ensure that the loading process is automated as much as possible.
  73. Load Window Requirements The load window is the amount of time available to extract, transform, load, postload process data, and make the data warehouse available to the user. The load performs many sequential tasks that take time to execute. You must ensure that every event that occurs during the load window is planned, tested, proven, and constantly monitored. Poor planning will extend the load time and prevent users from accessing the data when it is needed. Careful planning, defining, testing, and scheduling is critical.
  74. Planning the Load Window Load Window Strategy The load time is dependent upon a number of factors, such as data volumes, network capacity, and load utility capabilities. You must not forget that the aim is to ensure the currency of data for the users, who require access to the data for analysis. To work out an effective load window strategy, consider the user requirements first, and then work out the load schedule backward from that point. Determining the Load Window Once the user access time is defined, you can establish the load cycles. Some of the processes overlap to enable all processes to run within the window. More realistically, almost 24-hour access is required. This means the load window is significantly smaller than the example shown here. In that event, you need to consider how to process the refresh and keep users presented with current, realistic data. This is where you can use partitioning strategies.
  75. Initial Load and Refresh Initial Load: The initial load (also called the first-time load) is a single event that occurs before implementation. It populates the data warehouse database with as much data as needed or available. The first-time load moves data in the same way as the regular refresh. However, the complexity of the task is greater than that of the refresh cycle, because of: Data volumes that may be very large (the company may decide to load the last five years of data, which may comprise millions of rows; the time taken to load the data may be days rather than hours) The task of populating all fact tables, all dimension tables, and any other ancillary tables that you may have created, such as reference tables With all the issues surrounding the initial load, it is a task not to be considered lightly. Refresh: After the first-time load, the refresh is performed on a regular basis according to a cycle determined by the refresh policy. The cycle may be daily, weekly, monthly, quarterly, or any other business period. The refresh is a simpler task than the first-time load for these reasons: There is less fact data to load. You are moving a new snapshot of data, not all fact data, into the data warehouse. There is no dimension data to load (unless your model has changed, which would be an exception). There may be some dimensional data changes to incorporate. This can be complex.
  76. Data Refresh Models: Extract Processing Environment First, to ensure that you understand how the warehouse data presentation differs from nonwarehouse data presentation, consider how up-to-date data is presented to users in two different decision support environments: a simple extract processing environment and a data warehouse environment. Extract Processing Environment A snapshot of operational data is taken at regular time intervals: T1, T2, and T3. At each interval a new snapshot of the database is created and presented to the user; the old snapshot is purged.
  77. Data Refresh Models: Warehouse Processing Environment Warehouse Environment An initial snapshot is taken and the database is loaded with data. At regular time intervals, T1, T2, and T3, a delta database or file is created and the warehouse is refreshed. A delta contains only the changes made to operational data that need to be reflected in the data warehouse. The warehouse fact data is refreshed according to the refresh cycle that is determined by user requirements analysis. The warehouse dimension data is updated to reflect the current state of the business, only when changes are detected in the source systems. The older snapshot of data is not removed, ensuring that the warehouse contains the historical data that is needed for analysis. The oldest snapshots are archived or purged only when the data is not required any longer.
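To make the contrast concrete, here is a minimal sketch in Python of a warehouse-style refresh that appends a delta of changed fact rows instead of replacing the existing snapshot. The table and column names (warehouse_facts, load_date) are illustrative assumptions, not from the course material.

```python
from datetime import date

# Existing warehouse fact data: history is retained across refresh cycles.
warehouse_facts = [
    {"product_id": 1, "amount": 100.0, "load_date": date(2001, 1, 31)},
    {"product_id": 2, "amount": 250.0, "load_date": date(2001, 1, 31)},
]

# Delta produced from the operational system: only rows changed or added
# since the last refresh cycle.
delta = [
    {"product_id": 1, "amount": 120.0, "load_date": date(2001, 2, 28)},
    {"product_id": 3, "amount": 75.0,  "load_date": date(2001, 2, 28)},
]

def refresh(warehouse, delta_rows):
    """Append the delta; older snapshots are kept for historical analysis."""
    warehouse.extend(delta_rows)
    return warehouse

refresh(warehouse_facts, delta)
print(len(warehouse_facts))   # 4 rows: prior history plus the new snapshot
```

In an extract processing environment, by contrast, the old snapshot would simply be replaced; the history would be lost.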
  78. Post-Processing of Loaded Data You have seen how to extract data to an intermediate file store or staging area, where it is: Transformed into acceptable warehouse data Loaded to the warehouse server You have also seen what the ETL process does for: The first-time load, which requires all data to be loaded once Refreshes, which require only changed data to be loaded You now need to consider the different tasks that might take place once the data is loaded. Various terms are used for these tasks; in this course the term used is post-processing. The post-processing tasks are not definitive; you may or may not have to perform them, depending on the volumes of data moved, the complexity of transformations, and the loading mechanism.
  79. Unique Indexes If the index you are creating is a unique index (one that enforces unique values in the key columns, with no duplicates), it is usual to load the data with the database constraints disabled and then enable the constraints after the load process. You then build the index, which may find duplicate values and fail. Ensure that the load process catches these errors so that you can correct the data and rebuild the index.
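As a minimal sketch of the idea, the Python snippet below checks a loaded batch for duplicate key values so they can be corrected before a unique index is rebuilt. The rows and the customer_key column are hypothetical; a real implementation would query the loaded table rather than an in-memory list.

```python
from collections import Counter

# Rows loaded with constraints disabled; customer_key should be unique.
loaded_rows = [
    {"customer_key": 101, "name": "Acme Inc."},
    {"customer_key": 102, "name": "Globex"},
    {"customer_key": 101, "name": "Acme Incorporated"},  # duplicate key
]

def find_duplicate_keys(rows, key_column):
    """Return key values that appear more than once in the loaded batch."""
    counts = Counter(row[key_column] for row in rows)
    return [value for value, count in counts.items() if count > 1]

duplicates = find_duplicate_keys(loaded_rows, "customer_key")
if duplicates:
    # Correct or remove the offending rows, then rebuild the unique index.
    print("Duplicate keys found, fix before reindexing:", duplicates)
```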
  80. Creating Derived Keys A derived key (sometimes referred to as a generalized, artificial, synthetic, or warehouse key) may be used to guarantee that every row in the table is unique. Concatenate the Operational Key with a Number For example, if a customer record key value contains six digits, such as 109908, the derived key may be 10990801. The last n digits are a sequential number generated automatically. The advantage of this method is that it is relatively easy to set up and maintain the programs that manage number allocation. The disadvantage is that the key may become long and cumbersome, and there is no clean key value for retrieving a record unless you keep another copy of the key. For example, if the operational Customer_Id is 109908 but the warehouse key is now 10990801, then extracting information about that customer from the warehouse using 109908 is impossible unless the old value has been retained in another field, such as: Customer_key 10990801, Customer_id 109908, Customer_Name Acme Inc.
  81. Creating Derived Keys (continued) Assign a Number from a List You can also assign the key sequentially from a simple list of numbers. For example, if the operational Customer_Id is 109908, the warehouse key can be 100; the key corresponding to Customer_Id 109909 can be 101, and so on. The advantage of this method is that the keys do not occupy much space. The disadvantage is that the keys have no semantic or intuitive meaning; nevertheless, this is the industry-preferred approach. Whichever method you use, you must ensure that the metadata is updated to register the latest key allocations. The method you choose to create derived keys depends upon the extract methods, the tools available, and the hardware and network capability and availability.
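The following sketch illustrates both derived-key methods in Python. The function names are hypothetical, and the two-digit suffix width in the concatenation method is an assumption; either way, the allocated values would need to be recorded in the metadata.

```python
import itertools

def concatenated_key(operational_id: str, version: int) -> str:
    """Method 1: append a two-digit sequence to the operational key,
    e.g. 109908 -> 10990801 for the first version of that customer."""
    return f"{operational_id}{version:02d}"

# Method 2: assign keys from a simple sequence with no embedded meaning.
sequence = itertools.count(start=100)

def sequential_key() -> int:
    return next(sequence)

print(concatenated_key("109908", 1))   # 10990801
print(sequential_key())                # 100
print(sequential_key())                # 101
```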
  82. Metadata Users In the warehouse, metadata is employed directly or indirectly by all warehouse users for many different tasks. End Users The decision support analyst (or user) uses metadata directly. The user does not have the high degree of knowledge that the IT professional has, and metadata is the map to the warehouse information. One measure of a successful warehouse is the strength and ease of use of its end-user metadata. Developers and IT Professionals For the developer (or IT professional), metadata contains information on the location, structure, and meaning of data, information on mappings, and a guide to the algorithms used for summarization between detail and summary data.
  83. Metadata Documentation Approaches Regardless of the tools that you use to create a data warehouse, metadata must play a central role in the design, development, and ongoing evolution of the warehouse. Automated: Data modeling tools record metadata as you perform modeling activities with the tool. ETL tools can also generate metadata. These tools also use the metadata repository as a resource to generate build and load scripts for the warehouse. Each of the tools used in your warehouse environment might generate its own set of metadata, and the management and integration of these different metadata repositories is one of the biggest challenges for the warehouse administrator. Manual: The manual approach provides flexibility; however, it is labor-intensive, and the ongoing maintenance of metadata content is hard to sustain.
  84. Data Warehouse Dimensional Model Phases: Identify the ‘Business Process’ Determine the ‘Grain’ Identify the ‘Facts’ Identify the ‘Dimensions’
  85. Business Requirements Drive the Design Process The entire scope of the data warehouse initiative must be driven by business requirements. Business requirements determine: What data must be available in the warehouse How data is to be organized How often data is updated End-user application templates Maintenance and growth Primary Input The business requirements are the primary input to the design of the data warehouse. Information requirements as defined by the business people—the end users—will lay the foundation for the data warehouse content. Secondary Input Requirements gathered from user interviews can be clarified or augmented with existing source data information and further research regarding how data is currently used. Other sources may be: Legacy systems metadata, Source ERD from OLTP systems, Research
  86. Performing Strategic Analysis Performed at the enterprise level, strategic analysis identifies, prioritizes, and selects the major business processes (also called business events or subject areas) that are most important to the overall corporate strategy. Strategic analysis includes the following steps: Identify the business processes that are most important to the overall corporate strategy. Examples of business processes are orders, invoices, shipments, inventory, sales, account administration, and the general ledger. Understand the business processes by drilling down on the dimensions that characterize each business process. The creation of a business process matrix can aid in this effort. Prioritize and select the business process to implement in the warehouse, based on which one will provide the quickest and largest return on investment (ROI).
  87. Using a Business Process Matrix A useful tool to understand and quantify business processes is the business process matrix (also called the process/dimension matrix). This matrix establishes a blueprint for the data warehouse database design to ensure that the design is extensible over time. The business process matrix aids in the strategic analysis task in two ways: It helps identify the high-level analytical information that is required to satisfy the analytical needs of each business process, and serves as a method of cross-checking whether you have all of the required business dimensions for each business process. It helps identify common business dimensions shared by different business processes. Business dimensions that are shared by more than one business process should be modeled with particular rigor, so that the analytical requirements of all processes that depend on them are supported. This is true even if one or more of the potential business processes are not selected for the first increment of the warehouse. Model the shared business dimensions to support all processes, so that later increments of the warehouse will not require a redesign of these crucial dimensions. A sample business process matrix is developed and shown in the slide, with business processes across the top and dimensions down the leftmost column.
  88. Conformed Dimensions Dimensions are conformed when they are exactly the same, including the keys, or when one is a perfect subset of the other. Shared dimensions are in effect "conformed" dimensions that can be shared across the data marts in an organization. A single Date dimension or Product dimension is a typical example. The data warehouse bus architecture provides a standard set of conformed dimensions. Conformed dimensions are replicated either physically or logically throughout the enterprise; however, they should be built once, in the staging area.
  89. Determining Granularity When gathering more specific information about measures and analytic parameters (dimensions), it is also important to understand the level of detail that is required for analysis and business decisions. Granularity is defined as the level of summarization (or detail) that will be maintained by your warehouse. The greater the level of detail, the finer the level of granularity. Grain is defined as the lowest level of detail that is retained in the warehouse, such as the transaction level. Such data is highly detailed and can then be summarized to any level that is required by the users. During your interviews, you should discern the level of detail that users need for near-term future analysis. After that is determined, identify whether there is a lower level of grain available in the source data. If so, you should design for at least one grain finer, and perhaps even to the lowest level of grain. Remember that you can always aggregate upward, but you cannot decompose the aggregate lower than the data that is stored in the warehouse. The level of granularity for each dimension determines the grain for the atomic level of the warehouse, which in turn will be used for rollups.
  90. Granularity The single most important design consideration of a data warehouse is granularity. Granularity, or grain, determines the lowest level of detail or summarization contained in the warehouse. A low level of granularity means greater detail; a high level of granularity means less detail. Granularity affects the warehouse size and the types of queries that can be performed (the dimensionality of the warehouse). Conversely, the types of queries that are required determine the granularity of the data. The data can be stored at different levels of granularity: individual transactions, daily snapshots, monthly snapshots, quarterly snapshots, or yearly snapshots. Maintaining a low level of grain in the warehouse is expensive and requires more disk space and more processing in access operations. However, because the source data often already exists at the transaction level, a low level of grain places less demand on the ETL process than a higher level of summarization would. A high level of granularity requires less space and fewer resources for access, but may prevent users from accessing the level of detail that they require to answer their business questions. Consider starting your data warehouse with the lowest granularity available from the source systems. After the warehouse is running, users can determine what level of granularity is necessary. Keep in mind, however, that different users have different requirements.
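The asymmetry between rolling up and drilling back down can be shown with a short Python sketch. The transaction rows and column names are hypothetical; the point is that monthly totals can always be derived from transaction-level data, but not the reverse.

```python
from collections import defaultdict
from datetime import date

# Transaction-level (finest grain) fact rows.
transactions = [
    {"sale_date": date(2001, 1, 5),  "amount": 40.0},
    {"sale_date": date(2001, 1, 20), "amount": 60.0},
    {"sale_date": date(2001, 2, 3),  "amount": 25.0},
]

# Rolling up to a monthly grain is always possible...
monthly = defaultdict(float)
for row in transactions:
    monthly[(row["sale_date"].year, row["sale_date"].month)] += row["amount"]

print(dict(monthly))   # {(2001, 1): 100.0, (2001, 2): 25.0}
# ...but if only the monthly totals had been stored, the individual
# transactions could never be reconstructed from them.
```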
  91. Defining Time Granularity The grain you choose for the time dimension can have a significant impact on the size of your database. It is important to know how your data has been defined with regard to time in order to determine the grain accurately. This is particularly true of external data, which is commonly aggregated. Even when you think there is no gap in the granularity of your systems, you may find that basic definitions between systems differ. The primary consideration is to keep a low enough level of data in your warehouse to be able to aggregate and match values with other data that has a predetermined aggregate. The most common use of partitioning in a data warehouse is to range-partition the fact table of the star schema by time, which provides substantial benefits in querying, manageability, and load performance. Note: A good practice is to store data at one level of granularity lower than your business user has requested. For example, if the user requests monthly data, you may want to store weekly data. However, the impact of the increased storage can be prohibitive.
  92. Identifying Measures and Dimensions Measures A measure (or fact) contains a numeric value that measures an aspect of the business. Typical examples are gross sales dollars, total cost, profit, margin dollars, or quantity sold. A measure can be additive or partially additive across dimensions. Dimensions A dimension is an attribute by which measures can be characterized or analyzed. Dimensions bring meaning to raw data. Typical examples are customer name, date of order, or product brand. During the warehouse design, you must decide whether a piece of data is a measure or a dimension. You can use the following as a guide: If the data regularly changes value, it is a measure; for example, units sold or account balances. If the data is constant or it takes only a discrete number of values, it is a dimension. For example, the color of a product and the address of a customer are unlikely to change frequently. A need or capability to summarize often identifies a measure. These rules are not definitive but act as a guide where there is indecision.
  93. Data Warehouse Environment Data Structures Warehouse environment table structures can take a number of forms. The data modeling structures commonly encountered in a data warehouse environment are: Third normal form (3NF) Star schema Snowflake schema Hybrid schema Note: Today, most very large data warehouses mix 3NF and star schema. Normalized structures store the greatest amount of data in the least amount of space. Entity-relationship modeling (ERM) also seeks to eliminate data redundancy, which is immensely beneficial to transaction processing (OLTP) systems. Dimensional modeling (DM) is a design that presents the data in an intuitive manner and allows for high-performance access. For these two reasons, dimensional modeling, such as star and snowflake schemas, has become the standard design for data marts and data warehouses.
  94. Basic Form A star schema (dimensional model) is a typical warehouse model. It is portrayed as a star, with a central table containing fact data and multiple dimension tables radiating out from it, all connected by keys. Dimensional models are represented by two main types of data: Fact data Dimension data (this data is denormalized, unlike that in a typical relational model) Note: The main types of data are discussed in greater detail later, in the dimensional modeling lesson.
  95. Star Schema Model A star schema model can be depicted as a simple star; a central table contains fact data, and multiple tables radiate out from it, connected by database primary and foreign keys. Unlike other database structures, a star schema has denormalized dimensions. The star schema is emerging as the predominant model for data warehouses/marts. Star dimensional modeling is a logical design technique that seeks to present the data in a standard framework that is intuitive and provides high performance. Every dimensional model is composed of one table called the fact table, and a set of smaller tables called dimension tables. This characteristic (denormalized, star-like structure) is commonly known as a star model. Within this star model, redundant data is posted from one object to another for performance considerations. A fact table has a multipart primary key composed of two or more foreign keys and expresses a many-to-many relationship. Each dimension table has a single-part primary key that corresponds exactly to one of the components of the multipart key in the fact table.
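A minimal sketch of the star structure in Python follows: dimension tables keyed by single-part primary keys, a fact table whose rows carry the corresponding foreign keys plus measures, and a star-join that resolves each foreign key. All table and column names here are illustrative assumptions.

```python
# Dimension tables: single-part primary keys.
product_dim = {1: {"product_name": "Widget", "brand": "Acme"}}
time_dim    = {20010131: {"month": "Jan", "year": 2001}}

# Fact table: a multipart key made of foreign keys, plus numeric measures.
sales_fact = [
    {"product_key": 1, "time_key": 20010131, "units_sold": 10, "amount": 99.0},
]

# A star join: resolve each foreign key against its dimension table.
for fact in sales_fact:
    product = product_dim[fact["product_key"]]
    period  = time_dim[fact["time_key"]]
    print(product["brand"], period["year"], fact["units_sold"], fact["amount"])
```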
  96. Fact Table Characteristics Facts are the numerical measures of the business. The fact table is the largest table in the star schema and is composed of large volumes of data, usually making up 90% or more of the total database size. It can be viewed in two parts: the multipart primary key, and the business metrics, which are numeric and (usually) additive. Often a measure may be required in the warehouse, but it may not appear to be additive. These are known as semiadditive facts. Inventory and room temperature are two such numerical measurements. It does not make sense to add these measurements over time, but they can be aggregated by using an SQL function other than sum, for example average. Although a star schema typically contains one fact table, other DSS schemas can contain multiple fact tables.
  97. Dimension Table Characteristics Dimensions are the textual descriptions of the business. Dimension tables are typically smaller than fact tables, and their data changes much less frequently. Dimension tables give perspective on the whys and hows of the business and its transactions. Although dimensions generally contain relatively static data, customer dimensions tend to be updated more frequently. Dimensions Are Essential for Analysis The key to a powerful dimensional model lies in the richness of the dimension attributes, because they determine how facts can be analyzed. Dimensions can be considered the entry points into “fact space.” Always name attributes in the users’ vocabulary. That way, the dimension documents itself and its expressive power is apparent.
  98. Advantages of Using a Star Dimensional Schema A star dimensional schema: Provides rapid analysis across different dimensions, supporting drill-down, rotation, and analytical calculations on the multidimensional cube. Creates a database design that improves performance. Enables database optimizers to work with a simpler database design and so yield better execution plans. Provides an extensible design that supports changing business requirements. Broadens the choice of data access tools, because some products require a star schema design.
  99. Base and Derived Data The fields in your fact tables are not just source data columns. There may be many more columns that the business wants to analyze, such as year-to-date sales or the percentage difference in sales from a year ago. You must keep track of the derived facts as well as the base facts. Derived Data Derived data is data that is calculated or created from two or more sources of data. A derived value can be stored for efficient access, rather than being calculated at execution time. Derived data in the warehouse is very important because of its inherent support for queries: when you store derived data in the database, values are available immediately for analysis. In this example, you see the employees’ monthly salary and monthly commission figures. In an OLTP system, you would not store the monthly compensation, but you might do so in the warehouse. Derived values can also be calculated at an instant in time, as in an OLTP system, but derived values feature far more prominently in data warehouses.
  100. Translating Business Measures into Fact Tables The slide shows the translation of some of the business measures in a business model into an Order fact table. In addition, each measure in the fact table is identified as either a base or derived measure, for example, Profit = Order_amount – Order_cost. Derived values can be created during the extraction or transformation processes.
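A short Python sketch of the idea, using the Profit = Order_amount – Order_cost example from the slide; computing the derived measure during transformation means it is available immediately at query time. The dictionary column names are an assumption for illustration.

```python
def add_derived_measures(fact_row: dict) -> dict:
    """Compute derived measures during transformation so they are
    stored alongside the base measures in the fact table."""
    fact_row["profit"] = fact_row["order_amount"] - fact_row["order_cost"]
    return fact_row

print(add_derived_measures({"order_amount": 500.0, "order_cost": 350.0}))
# {'order_amount': 500.0, 'order_cost': 350.0, 'profit': 150.0}
```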
  101. Snowflake Schema Model According to Ralph Kimball “a dimension is said to be snowflaked when the low cardinality fields in the dimension have been removed to separate tables and linked back into the original table with artificial keys.” A snowflake model is closer to an entity relationship diagram than the classic star model because the dimension data is more normalized. Developing a snowflake model means building class hierarchies out of each dimension (normalizing the data). A snowflake model: Results in severe performance degradation because of its greater number of table joins Provides a structure that is easier to change as requirements change Is quicker at loading data into its smaller normalized tables, compared to loading into a star schema’s larger denormalized tables Allows using history tables for changing data, rather than level fields (indicators) Has a complex metadata structure that is harder for end user tools to support One of the major reasons why the star schema model has become more predominant than the snowflake model is its query performance advantage. In a warehouse environment, the snowflake’s quicker load performance is much less important than its slower query performance.
  102. Snowflake Model Is an extension of a star design Is a normalized star model Can cause potential performance degradation A dimension is said to be snowflaked when the low-cardinality fields in the dimension have been removed to separate tables and linked back to the original tables with artificial keys. The snowflake model is closer to an entity relationship diagram than the classic star model, because the dimension data is more normalized. Developing a snowflake model means building hierarchies from each dimension. The snowflake model is an extended, more normalized star model. The dimensions are normalized to form class hierarchies. For example, a Location dimension may be normalized to a hierarchy showing Countries, Counties, and Cities; or Region, District, and Office. The slide shows an example where the Order dimension is normalized to a hierarchy of Channel and Web; the two tables are then joined together. Note: We do not recommend snowflaking, because this model almost always interferes with the users’ comprehension. The exception is when a user access tool requires the use of a snowflake model.
  103. Snowflake Schema Model (continued) Besides the star and snowflake schemas, there are other models that can be considered. Constellation A constellation model (also called galaxy model) simply comprises a series of star models. Constellations are a useful design feature if you have a primary fact table, and summary tables of a different dimensionality. It can simplify design by allowing you to share dimensions among many fact tables. Third Normal Form Warehouse Some data warehouses consist of a set of relational tables that have been normalized to third normal form (3NF). Their data can be directly accessed by using SQL code. They may have more efficient data storage, at the price of slower query performance due to extensive table joins. Some large companies build a 3NF central data warehouse feeding dependent star data marts for specific lines of business.
  104. Constellation (Summary) Configuration The constellation configuration consists of a central star surrounded by other stars. The central star comprises base-level or atomic data. The surrounding stars can be stars that share dimensions or summary data. The surrounding stars can share dimension attributes with the atomic star. A galaxy is composed of those constellations that do not share dimensions and a universe is composed of galaxies.
  105. Fact Table Measures Additive: Additive data in a fact table can be added across all of the dimensions to provide an answer to your query. Additive facts are fundamental components of most queries. Typically, when you query the database you are requesting that data be summed to a particular level to meet the constraints of your query. Additive facts are numeric and can be logically used in calculations across all dimensions. Examples of additive data are units sold, customer account balance, and shipping charge. Semiadditive: Semiadditive data can be added along some, but not all, of the dimensions; a bank account balance is an example. The bank records balances at the end of each banking day for customers, by account, over time. This allows the bank to study deposits as well as individual customers. In some cases, the account balance measure is additive. If a customer holds both checking and savings accounts, you can add together the balances of each account at the end of the day and get a meaningful combined balance. You can also add balances together across accounts at the same branch for a picture of the total deposits at each location; however, you cannot add together account balances for a single customer over multiple days. Nonadditive: Nonadditive data cannot logically be added between records. Nonadditive data can be numeric and is usually combined in a computation with other facts (to make it additive) before being added across records. Margin percent is an example of a nonadditive value. Factless fact tables are also nonadditive.
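The semiadditive balance example can be sketched in a few lines of Python. The balance values below are hypothetical; the point is that the measure sums meaningfully across accounts on a single day, but must be averaged (or taken at period end) rather than summed across days.

```python
# Daily account balances: (account, day, balance).
balances = [
    ("checking", 1, 1000.0), ("savings", 1, 4000.0),
    ("checking", 2, 1200.0), ("savings", 2, 3800.0),
]

# Additive across accounts: total deposits held on day 1.
day1_total = sum(b for acct, day, b in balances if day == 1)     # 5000.0

# Not additive across time: summing one account over days is meaningless,
# but an average (or the period-end value) is a sensible aggregate.
checking = [b for acct, day, b in balances if acct == "checking"]
checking_avg = sum(checking) / len(checking)                     # 1100.0

print(day1_total, checking_avg)
```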
  106. More on Factless Fact Tables The factless fact table represents the many-to-many relationships between the dimensions, so that the characteristics of an event can be analyzed. Materialized views are often built on factless fact tables to create summaries. Examples Retail store — Promotions are typical within the retail environment. An upscale retail chain wants to compare its customers who do not respond to a direct mail promotion with those who make a purchase. A factless fact table supports the relationship between the customer, product, promotion, and time dimensions. Student attendance — Factless fact tables can be used to record student class attendance in a college or school system. There is no numeric fact associated with this; it is simply a matter of whether the students attended. Note: FK = foreign key, PK = primary key
  107. Factless Fact Tables A factless fact table is a fact table that does not contain numeric additive values, but is composed exclusively of keys. There are two types of factless fact tables: event-tracking and coverage. An event-tracking table records and tracks events that have occurred, such as college student class attendance, whereas a coverage factless table supports the dimensional model when the primary fact table is sparse, for example, a sales promotion factless table; in the latter case, the events did not occur. Although promotional fact data can be stored within the fact table itself, creating a coverage factless fact table is far more efficient, because representing the complex many-to-many relationship formed through the dimensions for a promotion inside the main fact table would require massive numbers of rows that carry no measure values.
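The student-attendance case can be sketched as follows: the table holds only dimension keys, and analysis is done by counting rows. The key values are hypothetical.

```python
# Event-tracking factless fact table: keys only, no numeric measures.
attendance = [
    {"student_key": 1, "course_key": 10, "date_key": 20010903},
    {"student_key": 2, "course_key": 10, "date_key": 20010903},
    {"student_key": 1, "course_key": 10, "date_key": 20010905},
]

# Analysis is done by counting rows, for example attendance per course.
course_10_count = sum(1 for row in attendance if row["course_key"] == 10)
print(course_10_count)   # 3 recorded attendance events
```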
  108. Bracketed Dimensions To enhance performance and analytical capabilities, create bracketed dimensions for categorical attributes. Creating groups of values for attributes with many unique values, such as income, reduces the cardinality (fewer bitmap index entries) and creates ranges that are more meaningful to the end user. For example, even though the actual income amount is available at the customer level, the business requirements may not need discrete values for analysis; income can then be grouped into $10,000 ranges. Similarly, for ages you can have a 21–30 bracket and a 31–40 bracket. To extract a person’s actual age, you can perform a calculation on the birth date. Attributes can also be grouped into brackets in combination, creating unique identifiers for combinations of attributes. For some queries, pre-aggregating the data can minimize the need for full table scans: the bracket definition is selected as the query constraint and the index for the table is scanned for records that satisfy the query. If the data were not pre-aggregated into brackets, the table would have to be scanned multiple times to select each range of values. Assumptions: There are far fewer brackets than actual occurrences of the values being bracketed; the ranges are agreed upon; and values that will constitute summary tables are preaggregated.
  109. Bracketing Dimensions Bracketed dimensions are typically used to support complex analytical models, because they: Contain information used to classify, categorize, or group multiple attributes Are frequently used in end-user queries where the actual value is too discrete to be meaningful Therefore, consider preaggregating to enhance performance and analytical power. If you have the data, you can use a query tool or data mining tool to determine and define the brackets based on actual values. If connected to the fact table, the Bracket_PK field must be part of the fact table composite key; otherwise, no modifications to the fact table are required. If the bracketed dimension is not connected to the fact table, it can be built at anytime, as required. In this example, the bracketed dimension is created from a combination of attributes. The bracketed table is related to the fact table, requiring that the Bracket_PK field be part of the composite key. The example also presupposes that these brackets have been agreed upon during requirement analysis. Examples of other applications for which you might use bracketing include: Marketing applications with customer salary ranges Risk assessment applications containing credit card limits Note: Bracketing data in this way makes it more difficult to manage, load, or access with drill capabilities.
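As a minimal sketch, the Python functions below map discrete values to the $10,000 income ranges and the 21–30 / 31–40 style age brackets mentioned above. The function names, the bracket width, and the label format are assumptions; the actual brackets would be agreed upon during requirement analysis.

```python
def income_bracket(income: float, width: int = 10_000) -> str:
    """Map a discrete income value to a $10,000-wide bracket label."""
    low = int(income // width) * width
    return f"{low}-{low + width - 1}"

def age_bracket(age: int) -> str:
    """Decade-style brackets such as 21-30 and 31-40."""
    low = ((age - 1) // 10) * 10 + 1
    return f"{low}-{low + 9}"

print(income_bracket(47_500))   # '40000-49999'
print(age_bracket(27))          # '21-30'
```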
  110. Example In the example on the slide, a store dimension has one hierarchy, the organization hierarchy.
  111. Multiple Hierarchies Multiple hierarchies can be stored in a single dimension, using the flat model approach. The definition of the relationship between the data in this model must be well maintained and stored in the metadata. All the levels of the hierarchy have been collapsed into one table. Example: Multiple hierarchies within the same dimension are so common that they may be the rule rather than the exception. In this example, the store dimension has two hierarchies: Organization Geography These relationships were most likely created to manage a particular business process. The organization hierarchy may support the evaluation of sales groups or personnel, whereas the geography hierarchy may be used to manage distributors or estimate tax exposure.
  112. Multiple Time Hierarchies Representation of time is critical in the data warehouse. You may decide to store multiple hierarchies in the data warehouse to satisfy the varied definitions of units of time. If you are using external data, you may find that you create a hierarchy or translation table simply to be able to integrate the data. Matching the granularity of time defined in external data to the time dimension in your own warehouse may be quite difficult. A simple time hierarchy corresponds to a calendar approach: days, months, quarters, years. A hierarchy based on weeks seems fairly simple as well: weeks, four-week period. What is the definition of a week? Does the week start on Sunday or Monday? Internally, you may define it one way; however, when you try to integrate external data that is defined in a different way, you may get unexpected or misleading results. Are there not 13 weeks in a quarter? Why can I not map 13-week periods to a quarter? Typically the start and stop date of a quarter corresponds to a calendar date—the first of the month, the last day of a month. Thirteen-week periods may start at any time but are usually consistent with the start day of the week definition. Example: The time dimension is described by multiple hierarchies and can be used to support both calendar and fiscal year.
  113. Drilling Up and Drilling Down Drilling refers to the investigation of data at greater or lesser detail from a starting point. Typically, in an analytical environment you start with less detail, at a higher level within a hierarchy, and investigate down through the hierarchy to greater detail. This process is drilling down (to more detailed data): retrieving data at a finer level of detail. Using the market hierarchy as an example, you might start your analysis at the group level and drill down to the region or store level of detail. Drilling up is the reverse of that process. Consider the market hierarchy example: if your starting point were an analysis of districts, drilling up would mean looking at a lesser level of detail, such as the region or group level. Drilling can occur in three ways: Through a single hierarchy From a single hierarchy to nonhierarchical attributes From a single hierarchy to another hierarchy The example of the market hierarchy displays one drill path.
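Drilling through a single hierarchy amounts to aggregating the same facts at different hierarchy levels. Below is a minimal Python sketch with a hypothetical group → region → store market hierarchy and a handful of made-up sales rows.

```python
from collections import defaultdict

# Sales tagged with the market hierarchy: group -> region -> store.
sales = [
    {"group": "East", "region": "NY", "store": "S1", "amount": 100.0},
    {"group": "East", "region": "NY", "store": "S2", "amount": 150.0},
    {"group": "East", "region": "NJ", "store": "S3", "amount": 80.0},
]

def rollup(rows, level):
    """Aggregate the amount measure at the requested hierarchy level."""
    totals = defaultdict(float)
    for row in rows:
        totals[row[level]] += row["amount"]
    return dict(totals)

print(rollup(sales, "group"))    # starting point: {'East': 330.0}
print(rollup(sales, "region"))   # drill down:     {'NY': 250.0, 'NJ': 80.0}
```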
  114. Drilling Across Beginning the analysis at the group level of the market hierarchy, you drill down to a region within the same hierarchy. Your analysis then leads you to find all stores (part of the market hierarchy) with greater than 20,000 square feet (a nonhierarchical attribute). Having identified those stores, you now want to identify the cities in which they are located. In this analysis you have gone from drilling within one hierarchy, to an independent attribute, and finally to another hierarchy. The risk in drilling across is that you are not necessarily going to enter the other hierarchy at the same level, so the totals may differ. Therefore, you would not try to balance the results of your final query, which looked at the city level within the geography hierarchy, with totals at different levels within the market hierarchy. Drilling across can be a very powerful approach to analysis. If dimensions have been conformed, that is, the grain and detail are identical, the reliability of drilling across is greatly enhanced.
  115. Using Time in the Data Warehouse Though it may seem obvious, real-life aggregations based on time can be quite complex. Which weeks roll up to which quarters? Is the first quarter the calendar months of January, February, and March, or the first 13 weeks of the year beginning on a Monday? Some causes of nonstandardization are: Some countries start the work week on Monday, others on Sunday. Weeks do not roll up cleanly to years, because a calendar year is one day longer than 52 weeks (two days longer in leap years). There are differences between calendar and fiscal periods. Consider a warehouse that includes data from multiple organizations, each with its own calendars. Holidays are not the same for all organizations and all locations. Representing time is critical in the data warehouse. You may decide to store multiple hierarchies in the data warehouse to satisfy the varied definitions of units of time. If you are using external data, you may find that you create a hierarchy or translation table simply to be able to integrate the data. Matching the granularity of time defined in external data to the time dimension in your own warehouse may be quite difficult.
  116. Date Dimension Automated:
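The slide’s “Automated” label suggests generating the date dimension programmatically rather than sourcing it from operational systems. Here is a minimal sketch, with assumed column names and a simple calendar-quarter rule; fiscal attributes, holidays, or week definitions would be added according to the business calendar.

```python
from datetime import date, timedelta

def build_date_dimension(start: date, end: date):
    """Generate one row per calendar day with common date attributes."""
    rows, current = [], start
    while current <= end:
        rows.append({
            "date_key": int(current.strftime("%Y%m%d")),
            "calendar_date": current,
            "day_of_week": current.strftime("%A"),
            "month": current.month,
            "quarter": (current.month - 1) // 3 + 1,
            "year": current.year,
        })
        current += timedelta(days=1)
    return rows

dim = build_date_dimension(date(2001, 1, 1), date(2001, 12, 31))
print(len(dim), dim[0]["date_key"], dim[0]["quarter"])   # 365 20010101 1
```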
  117. Applying the Changes to Data There are a number of methods for applying changes to existing data in dimension tables: Overwrite a record Add a new record Add a current field Maintain history records Versioning of records
  118. OLAP Models ROLAP — relational OLAP: queries are posed and run against two-dimensional relational tables without the assistance of cubes, providing greater flexibility for drilling down, drilling across, and pivoting results. Each row in a table holds data that pertains to a specific thing or a portion of that thing, and each column contains data regarding an attribute. MOLAP — multidimensional OLAP: a cube (the equivalent of a table in a relational database) stores a pre-calculated data set that contains all possible answers to a given range of questions. Each cube has several dimensions (equivalent to index fields in relational tables). The cube acts like an array in a conventional programming language: logically, the space for the entire cube is pre-allocated, and to find or insert data you use the dimension values to calculate the position. Summarized tables are quite common in MOLAP configurations. HOLAP — hybrid OLAP: combines the capabilities of MOLAP and ROLAP by utilizing both precalculated cubes and relational data sources. DOLAP — desktop OLAP: downloads a relatively small hypercube from a central point (usually a data mart or data warehouse) and performs multidimensional analysis while disconnected from the source. This functionality is particularly useful for mobile users who cannot always connect to the data warehouse. DOLAP is often contrasted with MOLAP (cube-based) and ROLAP (star schema based) business intelligence tools; a DOLAP product does everything on the desktop after extracting the data.
  119. Slowly Changing Dimensions (SCDs) Changes to the data in level or attribute fields are one of the most challenging design issues in multidimensional (star schema) data modeling. This issue is often referred to as the handling of slowly changing dimensions. There are three ways outlined by Ralph Kimball (and others) to handle this situation: Type 1: Overwrite the data in the dimension table. This is easy and fast to implement, but the loss of history is the drawback. Type 2: Add new records to the dimension table that contain the new data. This partitions history perfectly, but requires the maintenance of surrogate keys. Type 3: Add new fields to the dimension table to contain the values before and after the change.
  120. Overwriting a Record This method is easy to implement, but it is useful only if you are not interested in maintaining the history of data. If the data you are changing is critical to the context of information and analysis of the business, then overwriting a record is to be avoided at all costs. For example, by overwriting dimension data, you lose all track of history—you can never see that John Doe was single if the value “Single” is overwritten with the value “Married” from the operational system. The Customer_Id for John Doe remains constant throughout the life of the warehouse, because only one record for John Doe is stored.
  121. Adding a New Record Using this method, you add another dimension record for John Doe. One record shows that he was “single” until December 31, 2000, another that he was “married” from January 1, 2001. Using this method, history is accurately preserved, but the dimension tables get bigger. A generalized (or derived) key is created for the second John Doe record. The generalized key is a derived value that ensures that a record remains unique. However, you now have more than one key to manage. You also need to ensure that the record keeps track of when the change has occurred. The Customer_Id for John Doe does not remain constant throughout the life of the warehouse, because each record added for John Doe contains a unique key. The key value is usually a combination of the operational system identifier with characters or digits appended to it. Consider using real data keys. The example here shows a method that is commonly identified in warehouse reference material.
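The Type 2 approach can be sketched in Python as follows: the current row is closed with an effective-to date and a new row is added under a new generalized key built from the operational key plus a version suffix, following the 10990801 example. The function and column names, and the use of effective dates, are illustrative assumptions.

```python
from datetime import date

# Customer dimension rows keyed by a generalized (surrogate) key.
customer_dim = [
    {"customer_key": "10990801", "customer_id": "109908",
     "marital_status": "Single",
     "effective_from": date(1999, 1, 1), "effective_to": None},  # current row
]

def apply_type2_change(dim, customer_id, column, new_value, change_date):
    """Close the current row and add a new row carrying the changed value."""
    current = next(r for r in dim
                   if r["customer_id"] == customer_id and r["effective_to"] is None)
    current["effective_to"] = change_date
    version = sum(1 for r in dim if r["customer_id"] == customer_id) + 1
    new_row = dict(current,
                   customer_key=f"{customer_id}{version:02d}",
                   effective_from=change_date, effective_to=None)
    new_row[column] = new_value
    dim.append(new_row)

apply_type2_change(customer_dim, "109908", "marital_status",
                   "Married", date(2001, 1, 1))
print(len(customer_dim))  # 2 rows for the customer: history is preserved
```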
  122. Adding a Current Field In this method, you add a new field to the dimension table to hold the current value of the attribute. Using this method, you can keep some track of history: you know that John Doe was “single” before he was “married.” Each time John’s marital status changes, the two status attributes are updated and a new effective date is entered. However, what you cannot see with this method is what has happened between the two values you are storing for John Doe; intermediate values are lost. Consider using an Effective Date attribute to show when the status changed. Partitioning of data can then be performed by effective date. The method you choose is again determined by the business requirements. If you want to maintain history, this method is a logical choice that can be enhanced by using a generalized key.
  123. History Tables and One-to-Many Relationships Using this method, you keep one current record for the customer and many history records in the customer history table (a one-to-many relationship between the tables), thus maintaining history in a more normalized data model. In the CUSTOMER table, the customer operational unique identifier is retained in the CUSTOMER.Id column. In the HIST_CUST table, the operational key is maintained in the HIST_CUST.Id column and the generalized key in the HIST_CUST.G_Id column. You can use this to keep all the keys needed, along with multiple records per customer. The data in the tables might appear as shown on the slide.
  124. Versioning You can also maintain a version number for the customer in the Customer dimension: You must ensure that the measures in the fact table, such as sales figures, also contain the customer version number to avoid any double counting: For Comer Version 1, the sales total is $36,000. For Comer Version 2, the sales total is $87,000.
  125. Rapidly Changing Dimensions (RCDs) A dimension that changes too rapidly presents its own design problem, often referred to as the rapidly changing dimension. One powerful technique is to break the frequently changing attributes off into a separate dimension, referred to as a mini dimension. For example, suppose you have a large customer dimension with demographic attributes such as income level, age, marital status, education, job title, and number of children. There would be one row in the mini dimension table for each unique combination of the frequently changing attributes. Every time you build a fact table row, you include two foreign keys related to the customer: the customer key from the customer dimension and the demographics key from the mini dimension. The most recent value of the demographics key can also exist as a foreign key in the customer dimension. If you embed the most recent demographics key in the customer dimension, you must treat it as a Type 1 attribute; if you treat it as Type 2, you have reintroduced the RCD problem. Mini dimension terminology refers to the case where the mini dimension key is part of the fact table (the demographics key is part of the fact table composite key). Outrigger terminology refers to the case where the mini dimension key is part of the dimension table (the demographics key is a foreign key in the customer dimension table). You may choose to store the mini dimension key in both the dimension and the fact table.
  126. Junk Dimension A junk dimension is an abstract dimension that holds the decodes for a group of low-cardinality flags and indicators, thereby removing them from the fact table. Remove all the flags and indicators from the fact table and create a separate dimension table, referred to as the junk dimension. Example (order processing system):
Junk Key  Payment Type  Order Type  Order Mode
1         Cash          Normal      Web
2         Cash          Urgent      Web
3         Credit        Normal      Fax
4         Credit        Urgent      Fax
...
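One common way to build such a dimension is to enumerate the combinations of the low-cardinality indicators up front and assign each a key, as in the minimal Python sketch below; an alternative is to add rows only as combinations are actually encountered in the data. The attribute values follow the example above, but the generated key values will not necessarily match the numbering shown there.

```python
from itertools import product

payment_types = ["Cash", "Credit"]
order_types   = ["Normal", "Urgent"]
order_modes   = ["Web", "Fax"]

# Enumerate every combination of the indicators once, assigning each a
# junk key; the fact table then stores only the key.
junk_dimension = [
    {"junk_key": i, "payment_type": p, "order_type": t, "order_mode": m}
    for i, (p, t, m) in enumerate(
        product(payment_types, order_types, order_modes), start=1)
]

for row in junk_dimension:
    print(row)   # 8 rows covering all flag combinations
```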
  127. Secret of Success Your eventual goal may be the enterprisewide solution, but take small steps to achieve it. The enterprisewide warehouse is not a realistic objective for your first pass. Always use the proven low-risk incremental approach.