Building an Effective Data Warehouse Architecture
 

Why use a data warehouse? What is the best methodology to use when creating a data warehouse? Should I use a normalized or dimensional approach? What is the difference between the Kimball and Inmon methodologies? Does the new Tabular model in SQL Server 2012 change things? What is the difference between a data warehouse and a data mart? Is there hardware that is optimized for a data warehouse? What if I have a ton of data? During this session James will help you to answer these questions.


Comments

  • Excellent deck James! Hope we get to meet at PASS next week
  • Thanks for the feedback Emil! Some excellent points.
  • Hi James,

    Nice to see someone else's point of view. I agree with the majority of it, and below are some of my thoughts where I have a slightly different point of view.

    Kimball vs Inmon, in my opinion, is not that different. I believe the main difference is in how they present their approach. I've used Kimball for over 5 years, and not long ago I read Inmon's DW 2.0 architecture and actually found it very similar.

    Centralized or decentralized? Yes, Kimball can be decentralized and it can also be centralized, and the same is true of Inmon, as Inmon himself says to build the DW piece by piece. The main differences are that Inmon puts more focus on metadata at all levels and is more prescriptive than Kimball, while Kimball builds one DW via the Bus Matrix. I think the confusion is in the definition of a data mart: in the Kimball books I've read, a data mart is a subset of the Bus Matrix, which is different from an Inmon data mart. Inmon stores only granular data in the DW and does aggregations in the data marts, so he splits it into DW + data marts, whereas in Kimball everything is in the DW and a data mart is just a convenient way to split the DW into concrete subjects.

    So from my point of view, the Kimball model on slide 14 is actually slide 15.

    What I like about Inmon is that he recognized the importance of operational and exploration needs and recommends building them outside of the 'core' DW, which he describes in his book DW 2.0, and which is what I would see as 'Inmon' in slide 15.

    What I think is also worth mentioning is the problem of restructuring the DW, and this is where Data Vault can help: it aims to simplify this by adding an extra step (more effort), but the main benefit is that you don't design the DW upfront, so the impact of changing design decisions or of the evolution of the business is minimized.

    My BI architecture view changed after I learned more about Inmon's DW 2.0 and Data Vault, but I see various elements from all methodologies working together.

    What I still need to work out is how to make it work with MDM, as there are various methods which differ depending on the methodology used.

    I think the main challenge is not design/methodology but educating business users and managers and trying to agree on a strategy, as a project without structure (ITIL or similar) will very often struggle a lot.

    I'll post a link soon so you can critique my view of BI Architecture ;)

    Take care
    Emil
  • You’re a DBA, and your boss asks you to determine if a data warehouse would help the company. So many questions pop into your head: Why use a data warehouse? What’s the best methodology to use when creating a data warehouse? Should I use a normalized or dimensional approach? What’s the difference between the Kimball and Inmon methodologies? Does the new Tabular Model in SQL Server 2012 change things? What’s the difference between a data warehouse and a data mart? Is there any hardware I can purchase that is optimized for a data warehouse? What if I have a ton of data? Join this session for the answers to all these questions. You’ll leave with information that will amaze your boss and lead to a big raise… or at least lead you down the correct path to adding business value to your organization!
    Two common situations: the company does not have a DW and needs to understand the benefits and best approach to build one; or the company has what they call a DW, but it is really just a dumping ground for tables from various sources, with not much design put into it. It started because a business user wanted to create a report using data from multiple systems, and a quick and dirty ETL was created. It grew into a jumbled mess of SPs and SSIS. It needs to be replaced.
  • Fluff, but point is I bring real work experience to the session
  • Questions to ask the audience: How many use a data warehouse? How many use or know about an appliance or Fast Track DW? How many use cubes? Who are technical (developers/DBAs), managers, or BAs?
    Removed from this deck: End-User Microsoft BI Tools; SQL Server 2012 Tabular Model
  • Wayne Eckerson: Business value increases as you move along the stages. Most clients I have seen are between the child and teenager years, with a surprising number still stuck in the infant stage.

    Stage 1 – Prenatal; Structure: Management Reports; Scope: System; Executive Perception: Cost Center. This is simply canned static reports that come from the source systems (i.e. ERP, CRM, etc). These reports are usually printed or emailed to a bunch of employees on a regular basis. If the end-user requests custom reports, it is up to IT to create these reports within the source system, which usually has very limited capabilities for doing so. And IT is usually backed up in filling these requests, frustrating the users.

    Stage 2 – Infant; Structure: Spreadmarts; Scope: Individual; Executive Perception: Inform Executives. Those frustrated users from the previous stage (who are usually business analysts or power users) take matters into their own hands and circumvent IT by extracting data from the source systems and loading it into spreadsheets or desktop databases. These are like personal, isolated data marts that don't align with any other data marts. Because spreadsheets are easy and quick to create, they spread like weeds. I have seen some companies with thousands of these spreadsheets. They prevent companies from having accurate and consistent reports, but they are very difficult to eliminate, as you can imagine.

    Stage 3 – Child; Structure: Data Marts; Scope: Department; Executive Perception: Empower Knowledge Workers. This stage is when departments see the need to give all business users timely data, not just the business analysts and executives. A data mart is a shared database that usually includes one subject area (finance, sales, etc) that is tailored to meet the needs of one department (see Data Warehouse vs Data Mart). From the data mart an SSAS cube may be built, and the business user will be given the use of a reporting tool to go against the cube or data mart. Many times the data mart is in a star schema format. But data marts fall prey to the same problems that afflict spreadmarts: each data mart has its own definitions and rules and extracts data directly from source systems. These independent data marts are great at supporting local needs, but their data can't be aggregated to support cross-departmental analysis.

    Stage 4 – Teenager; Structure: Data Warehouses; Scope: Division; Executive Perception: Monitor Business Processes. Data warehouses are a way to integrate data marts without jeopardizing local autonomy. After a company builds a bunch of data marts, they recognize the need to standardize definitions, rules, and dimensions to prevent integration problems later on. The most common way to standardize data marts is to create a centralized data warehouse with dependent data marts built from the data warehouse. This is usually called a hub-and-spoke data warehouse. A data warehouse encourages deeper levels of analysis because users can now submit queries across multiple subject areas, such as finance and operations, and gain new insights not possible with single-subject data marts.

    Stage 5 – Adult; Structure: Enterprise DW; Scope: Enterprise; Executive Perception: Drive the Business. Just like the problem of having overlapping and inconsistent data caused by spreadmarts and multiple independent data marts, a large company can have a similar problem with multiple data warehouses. Many organizations today have multiple data warehouses acquired through internal development, mergers or acquisitions, creating barriers to the free flow of information within and between business groups and the processes they manage. The solution is to create an enterprise data warehouse (EDW) that will be the single version of the truth. This EDW serves as an integration machine that continuously consolidates all other analytic structures into itself.

    Stage 6 – Sage; Structure: BI Services; Scope: Inter-Enterprise; Executive Perception: Drive the Market. At this stage you are extending and integrating the value of an EDW by opening the EDW to customers and suppliers to drive new market opportunities. Customers and suppliers are provided simple yet powerful interactive reporting tools to compare and benchmark their activity and performance against other groups across a multitude of dimensions. At the same time, EDW development teams are turning analytical data and BI functionality into Web services that developers, both internal and external to the organization, can leverage with proper authorization. The advent of BI services turns the EDW and its applications into a market-wide utility that can be readily embedded into any application.

    I would probably add a stage 7 dealing with SQL Server Analysis Services (SSAS) data mining. SSAS features nine different data mining algorithms that look for specific types of patterns and trends in order to make predictions about your data.

    Whether you already exhibit the characteristics of a sage or you're still trying to hurdle the gulf between the infant and child stages, this maturity model can provide guidance and perspective as you continue your journey. The model can show you where you are, how far you've come and where you need to go. It provides guideposts to help keep you sane and calm amidst the chaos and strife we contend with each day.
  • One version of the truth story: different departments using different financial formulas that affect bonuses. This leads to reasons to use BI, and is used to convince your boss of the need for a DW. Note that you still want to do some reporting off of the source system (i.e. current inventory counts). It's important to know upfront if the data warehouse needs to be updated in real time or very frequently, as that is a major architectural decision. JD Edwards has table names like T117.
  • Many sources and many data marts (spaghetti code), different update frequencies, different variations of dimensions
  • A reference configuration can be built on your own, or Dell can put it together for you. Once you have convinced your boss you need a DW, what hardware should the DW run on? Instead of just going to the Dell site and configuring your own, use FTDW or appliances. FTDW was created via lessons learned with PDW.
  • Question: How many know what PDW is?
    HP: A one-rack system has 17 servers, 22 processors/132 cores, and 125TB, and can be scaled out to a four-rack system with 47 servers, 82 processors/492 cores, and 500TB.
    HP Business Decision Appliance (BDA): Made specifically for BI. HP and Microsoft have delivered the first ever self-service business intelligence appliance, optimized for SQL Server 2008 R2 and SharePoint Server 2010. Ideal for managed self-service BI with PowerPivot. Developed for the mid-market, enterprise departments and remote offices. The server has 2 CPUs (12 cores) and 96GB memory. Configuration of the appliance is integrated into SharePoint. The Windows Server OS, plus all of the required server components, such as SQL Server and SharePoint, are already loaded on the appliance. There's no need to perform any software installations.
    Removed from this deck: HP Business Data Warehouse Appliance (FT 3.0, 5TB); HP Business Decision Appliance (BI, SharePoint 2010, SQL Server 2008 R2, PowerPivot); HP Database Consolidation Appliance (virtual environment, Windows 2008 R2)
  • Who knows about these methodologies?
  • Direct from Kimball: We don't know why people call our approach bottom-up. We spend much time at the beginning choosing the appropriate data sources to answer key business questions, and then after building the BUS matrix showing all the possible business processes (data sources), we then implement those processes that address the most important needs of the business. This is more top-down than anything the CIF does, where they barely mention the need to interview the end users and stand back from the whole project. People like to put Kimball (and Inmon) under convenient labels, but many times these labels are nonsensical. To describe our approach as top-down, or supporting pure analytics, just isn't correct.
    Direct from Inmon: “We have stated - from the very beginning of data warehousing - that the way to build data warehouses is to build them iteratively. In fact we have gone so far to say that the first and foremost critical success factor in the building of a data warehouse is to NOT build the data warehouse using the Big Bang approach. The primary untruth they have told is that it takes a long time and lots of resources to build an Inmon style architecture.”
    References: BDW: Building the Data Warehouse, fourth edition, Inmon, 2005. TTA: A Tale of Two Architectures, Inmon, 2010. Imhoff: Mastering Data Warehouse Design, Imhoff, 2003. Survey: Which Data Warehouse Architecture Is Most Successful? (2006), Ariyachandra, https://cours.etsmtl.ca/mti820/public_docs/lectures/WhichDWArchitectureIsMostSuccessful.pdf
  • Normalize to eliminate redundant data and set up table relationships. Fact tables contain metrics, while dimension tables contain attributes of the metrics in the fact tables.
  • The main difference between the two approaches is that the normalized version is easier to build if the source system is already normalized; but the dimensional version is easier to use for the business users and will generally perform better for analytic queries.
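To make the dimensional side concrete, here is a minimal star-schema sketch, using Python's sqlite3 standard library as a stand-in for SQL Server; all table and column names are invented for the illustration:

```python
import sqlite3

# Build a tiny star schema in memory: one fact table with foreign keys
# to two dimension tables (all names are hypothetical).
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE DimCustomer (CustomerKey INTEGER PRIMARY KEY, Name TEXT, State TEXT);
CREATE TABLE DimDate     (DateKey INTEGER PRIMARY KEY, CalendarYear INTEGER);
CREATE TABLE FactSales   (CustomerKey INTEGER, DateKey INTEGER, SalesAmount REAL);

INSERT INTO DimCustomer VALUES (1, 'Acme', 'CA'), (2, 'Globex', 'TX');
INSERT INTO DimDate     VALUES (20140101, 2014), (20130101, 2013);
INSERT INTO FactSales   VALUES (1, 20140101, 100.0), (2, 20140101, 50.0), (1, 20130101, 25.0);
""")

# A typical analytic query: join the fact to its dimensions and aggregate.
rows = con.execute("""
    SELECT c.State, d.CalendarYear, SUM(f.SalesAmount)
    FROM FactSales f
    JOIN DimCustomer c ON f.CustomerKey = c.CustomerKey
    JOIN DimDate d     ON f.DateKey = d.DateKey
    GROUP BY c.State, d.CalendarYear
    ORDER BY c.State, d.CalendarYear
""").fetchall()
print(rows)  # [('CA', 2013, 25.0), ('CA', 2014, 100.0), ('TX', 2014, 50.0)]
```

The fact table holds the metrics; each dimension holds descriptive attributes, so one join per dimension answers the analytic question.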
  • A common misperception of Inmon's architecture is that a data warehouse must be built in its entirety first. He said this is not so; an Inmon data warehouse can be built over time (see the "Direct from Inmon" quote earlier in these notes). He likened it to the growth of a city: you start out with certain districts and services, and as the city grows the architecture of the city grows with it. You certainly don't go out and build a complete city overnight; likewise with an enterprise data warehouse. http://www.biprofessional.com/tag/inmon-2/
    Page 21 of "Mastering Data Warehouse Design" by Claudia Imhoff: "Nowhere do we recommend that you build an entire data warehouse containing all the strategic enterprise data you will ever need before building the first analytical capability (data mart). Each successive business program solved by another data mart implementation will add the growing set of data serving as the foundation in your data warehouse. Eventually, the amount of data that must be added to the data warehouse to support a new data mart will be negligible because most of it will already be present in the data warehouse."
    Hybrid: Data Vault (hub, link, satellite; always uses views; tracks history)
  • Columnstore removes the need for a cube.
    This non-persistent staging area caches operational data to be later populated to the warehouse bus. From The Data Warehouse Toolkit, page 9: a normalized database for data staging storage is acceptable. The one negative is that you now have the same data in two places: in the staging area and in the data mart. If you don't count cubes as "storing" data, then you are only storing the data once (or not at all if using a dimensionalized view). Kimball becomes very much like Inmon's CIF if you add MDM.
    From Ralph: Generally we don't emphasize the word "data mart" because that invokes Inmon's concept of a data mart: a custom-built, incompatible, aggregated subset of data requested by a specific department, and built from an otherwise inaccessible back room data warehouse based on highly normalized data. Instead we talk about "business process subject areas" (a mouthful) or just subject areas. Each of these subject areas is ultimately drawn from a separate legacy source and deployed through one or more similar dimensional models (star schemas). There is nothing in all of this that mandates the data be stored in a single database. No matter where the subject area schemas are stored, in order to combine data from separate subject areas, you must "drill across" using conformed dimensions. The best architecture for implementing drill across is to perform completely separate queries and then combine the results in the BI layer. If you do this, it doesn't matter where the separate subject area databases are located. We have written extensively about this method of achieving integration across separate data sets, and it is a signature achievement of the dimensional approach. One recent Kimball Design Tip that implements these ideas with an actual spreadsheet interface is http://www.kimballgroup.com/2013/06/02/design-tip-156-an-excel-macro-for-drilling-across/
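Ralph's "drill across" description can be sketched as two completely separate queries, one per subject area, combined in the BI layer on the conformed dimension attribute. A minimal illustration, with sqlite3 standing in for the separate subject-area databases and all names invented:

```python
import sqlite3

# Two subject areas (sales and shipments), each with its own fact table,
# sharing a conformed Product dimension (all names are hypothetical).
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE DimProduct   (ProductKey INTEGER PRIMARY KEY, Category TEXT);
CREATE TABLE FactSales    (ProductKey INTEGER, SalesAmount REAL);
CREATE TABLE FactShipment (ProductKey INTEGER, UnitsShipped INTEGER);

INSERT INTO DimProduct   VALUES (1, 'Widgets'), (2, 'Gadgets');
INSERT INTO FactSales    VALUES (1, 100.0), (2, 40.0), (1, 60.0);
INSERT INTO FactShipment VALUES (1, 5), (2, 2);
""")

def query(sql):
    """Run one subject-area query and return {conformed attribute: measure}."""
    return dict(con.execute(sql).fetchall())

# Step 1: completely separate queries, one per subject area, each grouped
# by the conformed dimension attribute (Category).
sales = query("""SELECT p.Category, SUM(f.SalesAmount) FROM FactSales f
                 JOIN DimProduct p ON f.ProductKey = p.ProductKey
                 GROUP BY p.Category""")
ships = query("""SELECT p.Category, SUM(f.UnitsShipped) FROM FactShipment f
                 JOIN DimProduct p ON f.ProductKey = p.ProductKey
                 GROUP BY p.Category""")

# Step 2: merge the result sets in the BI layer on the conformed attribute.
report = {cat: (sales.get(cat), ships.get(cat)) for cat in sorted(set(sales) | set(ships))}
print(report)  # {'Gadgets': (40.0, 2), 'Widgets': (160.0, 5)}
```

Because each query runs independently, the two fact tables could live in entirely different databases; only the conformed dimension values have to line up.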
  • The one negative with this approach is you could have the same data copied in three places: staging area, CIF, and data mart. If you don't count cubes as "storing" data, then you are storing the data twice (data warehouse and data marts). Hub-and-spoke architecture. DW 2.0 (unstructured data, ODS, CDC)
  • Hire a BI professional to help you if you lack expertise and bandwidth. The Kimball model is often used simply because the developers knew of nothing else. My opinion, not set in stone: based on the source size, use cubes. Don't lock into one way of doing everything; it depends on the sources.
  • The only difference between the Hybrid model and Inmon is that the data marts are star schema, not 3NF. For an extremely large number of sources, you can even add another layer to this: an EDW containing all star schemas in between the CIF and the DW Bus Architecture.
    You can have an ODS act as the staging area for the data warehouse, at least for the data that it maintains. The data in the ODS is more real-time than the data required by the data warehouse. Reconcile and cleanse that data, populating the ODS in near real time, providing value to the operational and tactical decision makers who need it, and also use it to populate the data warehouse, honoring the less demanding load cycles typical of the DW. You will still need the staging area for DW-required data not hosted in the ODS, but in following this recommendation you economize on your ETL flows and staging area volumes without sacrificing value or function.
    Benefits of views:
    • Can be modified by anyone, even outside of BIDS/SSDT
    • Can provide default values when needed
    • Simple computations can be carried out by views
    • Renaming fields leads to better understanding of the flow
    • Can present a star schema, even if the underlying structure is much more complex
    • Can be analyzed by third-party tools to get dependency tracking
    • Can be optimized without ever opening BIDS/SSDT
    Benefits of views in SSIS:
    • Simpler code inside SSIS packages
    • No need to open the package to understand what it is reading
    • Easily query the database for debugging purposes
    • Query optimizations can be carried out separately
    Benefits of views in SSAS:
    • Renaming database columns to SSAS attributes
    • Clearly exposing all the transformations to DBAs
    • Simplifying handling of fast variations
    • Full control over JOINs sent to SQL Server
    • Exposing a star schema, even if the underlying structure is not a simple star schema
    Hybrid: 3NF data warehouse feeds a dimensional presentation layer.
    The creation of an OLTP mirror has always been a successful strategy for several reasons:
    • We use the "real" OLTP for a small amount of time during mirroring; then we free it, so it can do any kind of database maintenance needed
    • The ability to modify the keys, relations and views of the mirror leads to simpler ETL processes that are cheaper both to build and to maintain over time
    • The ability to build particular indexes in the OLTP mirror can improve performance of other steps of the ETL processes, without worrying about index maintenance on the original OLTP
    • The mirror is normally much smaller than the complete OLTP system, as we do not have to load all the tables of the OLTP and, for each table, we can decide to mirror only a subset of the columns
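The "present a star schema via views" idea can be sketched as follows, with sqlite3 standing in for SQL Server and all names invented: the view renames cryptic source columns and hides the join, so downstream tools see a clean dimension shape.

```python
import sqlite3

# A normalized source pair (customer + region) hidden behind a view that
# renames cryptic columns and flattens into one dimension-shaped result.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE CUST   (C_ID INTEGER PRIMARY KEY, C_NM TEXT, RGN_ID INTEGER);
CREATE TABLE REGION (RGN_ID INTEGER PRIMARY KEY, RGN_NM TEXT);

INSERT INTO CUST   VALUES (1, 'Acme', 10), (2, 'Globex', 20);
INSERT INTO REGION VALUES (10, 'West'), (20, 'South');

-- The view is the interface layer: friendly names, join hidden inside.
CREATE VIEW vDimCustomer AS
SELECT c.C_ID   AS CustomerKey,
       c.C_NM   AS CustomerName,
       r.RGN_NM AS Region
FROM CUST c JOIN REGION r ON c.RGN_ID = r.RGN_ID;
""")

# Downstream consumers (reports, cubes) query only the view.
rows = con.execute(
    "SELECT CustomerKey, CustomerName, Region FROM vDimCustomer ORDER BY CustomerKey"
).fetchall()
print(rows)  # [(1, 'Acme', 'West'), (2, 'Globex', 'South')]
```

Because the view owns the renames and joins, the underlying tables can be restructured or optimized without touching anything downstream, which is exactly the decoupling the list above describes.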
  • When people say they use the Kimball model, most times they really mean they are using the Kimball methodology and/or dimensional modeling. The word "Kimball" is synonymous with dimensional modeling. Ralph didn't invent the original basic concepts of facts and dimensions; however, he established an extensive portfolio of dimensional techniques and vocabulary, including conformed dimensions, slowly changing dimensions, junk dimensions, mini-dimensions, bridge tables, periodic and accumulating snapshot fact tables, and the list goes on. Over nearly 20 years, Ralph and his Kimball Group colleagues have written hundreds of articles and Design Tips on dimensional modeling, as well as the seminal text, The Data Warehouse Toolkit, Second Edition (John Wiley, 2002). While Ralph led the charge, dimensional modeling is appropriate for organizations who embrace the Kimball architecture, as well as those who follow the Corporate Information Factory (CIF) hub-and-spoke architecture espoused by Bill Inmon and others.
  • Question: How many people know what surrogate keys are?
  • Question: How many people know what SSAS cubes are?
  • Dell Microsoft Analytics Platform System (v2, SQL 2012, 15TB-6PB); HP AppSystem for SQL 2012 Parallel Data Warehouse (v2, SQL 2012, 15TB-6PB); Quanta Microsoft Analytics Platform System (v2, SQL 2012, 15TB-6PB)
  • Other non-Microsoft: Qlikview, Tableau

Building an Effective Data Warehouse Architecture: Presentation Transcript

  • Building an Effective Data Warehouse Architecture James Serra, PDW Technology Solution Professional Microsoft May 7-9, 2014 | San Jose, CA
  • Please silence cell phones
  • About Me
    • Business Intelligence Consultant, in IT for 28 years
    • Microsoft, PDW Technology Solution Professional (TSP)
    • Owner of Serra Consulting Services, specializing in end-to-end Business Intelligence and Data Warehouse solutions using the Microsoft BI stack
    • Worked as desktop/web/database developer, DBA, BI and DW architect and developer, MDM architect, PDW developer
    • Been perm, contractor, consultant, business owner
    • Presenter at PASS Business Analytics Conference and PASS Summit
    • MCSE for SQL Server 2012: Data Platform and BI
    • SME for SQL Server 2012 certs
    • Contributing writer for SQL Server Pro magazine
    • Blog at JamesSerra.com
    • SQL Server MVP
    • Author of book “Reporting with Microsoft SQL Server 2012” 3
  • Agenda
    • What a Data Warehouse is not
    • What is a Data Warehouse and why use one?
    • Fast Track Data Warehouse (FTDW)
    • Appliances
    • Data Warehouse vs Data Mart
    • Kimball and Inmon Methodologies
    • Populating a Data Warehouse
    • ETL vs ELT
    • Surrogate Keys
    • SSAS Cubes
    • Modern Data Warehouse 4
  • What a Data Warehouse is not
    • A data warehouse is not a copy of a source database with the name prefixed with “DW”
    • It is not a copy of multiple tables (i.e. customer) from various source systems unioned together in a view
    • It is not a dumping ground for tables from various sources with not much design put into it 5
  • Data Warehouse Maturity Model 6 Courtesy of Wayne Eckerson
  • What is a Data Warehouse and why use one? All these reasons are for data warehouses only (not OLTP):
    • Reduce stress on production system
    • Optimized for read access, sequential disk scans
    • Integrate many sources of data
    • Keep historical records (no need to save hardcopy reports)
    • Restructure/rename tables and fields, model data
    • Protect against source system upgrades
    • Use Master Data Management, including hierarchies
    • No IT involvement needed for users to create reports
    • Improve data quality and plug holes in source systems
    • One version of the truth
    • Easy to create BI solutions on top of it (i.e. SSAS Cubes) 7
  • Why use a Data Warehouse? 8 Legacy applications + databases = chaos: Production Control, MRP, Inventory Control, Parts Management, Logistics, Shipping, Raw Goods, Order Control, Purchasing, Marketing, Finance, Sales, Accounting, Management Reporting, Engineering, Actuarial, Human Resources, Continuity, Consolidation, Control, Compliance, Collaboration. Enterprise data warehouse = order: single version of the truth. Every question = decision. Two purposes of a data warehouse: 1) save time building reports; 2) slice and dice in ways you could not do before
  • Hardware Solutions
    • Fast Track Data Warehouse - A reference configuration optimized for data warehousing. This saves an organization from having to commit resources to configure and build the server hardware. Fast Track Data Warehouse hardware is tested for data warehousing, which eliminates guesswork and is designed to save you months of configuration, setup, testing and tuning. You just need to install the OS and SQL Server
    • Appliances - Microsoft has made available SQL Server appliances (SMP and MPP) that allow customers to deploy data warehouse (DW), business intelligence (BI) and database consolidation solutions in a very short time, with all the components pre-configured and pre-optimized. These appliances include all the hardware, software and services for complete, ready-to-run, out-of-the-box, high performance, energy-efficient solutions 9
  • Fast Track Data Warehouse 10 FT Version 4.0 Benefits: Pre-Balanced Architectures, Choice of HW platforms, Lower TCO, High Scale, Reduced Risk, Taking the guesswork out of building a server
  • Appliances
    • Dell Quickstart Data Warehouse Appliance 1000 (FT 4.0, 5TB)
    • Dell Quickstart Data Warehouse Appliance 2000 (FT 4.0, 12TB)
    • IBM Fast Track Data Warehouse (FT 4.0, 3 versions: 24TB, 60TB, 112TB)
    • Microsoft SQL Server 2012 Fast Track Data Warehouse Reference Architecture for HP 11
  • Data Warehouse vs Data Mart
    • Data Warehouse: A single organizational repository of enterprise-wide data across many or all subject areas
      • Holds multiple subject areas
      • Holds very detailed information
      • Works to integrate all data sources
      • Feeds data marts
    • Data Mart: Subset of the data warehouse that is usually oriented to a specific subject (finance, marketing, sales)
      • The logical combination of all the data marts is a data warehouse
    In short, a data warehouse contains many subject areas, and a data mart contains just one of those subject areas 12
  • Kimball and Inmon Methodologies Two approaches for building data warehouses 13
  • Kimball and Inmon Myths
    • Myth: Kimball is a bottom-up approach without enterprise focus
      • Really top-down: BUS matrix (choose business processes/data sources), conformed dimensions, MDM
    • Myth: Inmon requires a ton of up-front design that takes a long time
      • Inmon says to build the DW iteratively, not with a big bang approach (p. 91 BDW, p. 21 Imhoff)
    • Myth: Star schema data marts are not allowed in Inmon's model
      • Inmon says they are good for direct end-user access of data (p. 365 BDW), good for data marts (p. 12 TTA)
    • Myth: Very few companies use the Inmon method
      • A survey had 39% Inmon vs 26% Kimball. Many have an EDW
    • Myth: The Kimball and Inmon architectures are incompatible
      • They can work together to provide a better solution 14
  • Kimball and Inmon Methodologies: Relational (Inmon) vs Dimensional (Kimball)
    • Relational Modeling:
      • Entity-Relationship (ER) model
      • Normalization rules
      • Many tables using joins
      • History tables, natural keys
      • Good for indirect end-user access of data
    • Dimensional Modeling:
      • Facts and dimensions, star schema
      • Fewer tables, but duplicate data (de-normalized)
      • Easier for users to understand (but strange for IT people used to relational)
      • Slowly changing dimensions, surrogate keys
      • Good for direct end-user access of data 15
  • Relational Model vs Dimensional Model 16 Relational Model Dimensional Model If you are a business user, which model is easier to use?
  • Kimball and Inmon Methodologies
    • Kimball:
      • Logical data warehouse (BUS), made up of subject areas (data marts)
      • Business driven, users have active participation
      • Decentralized data marts (not required to be a separate physical data store)
      • Independent dimensional data marts optimized for reporting/analytics
      • Integrated via Conformed Dimensions (provides consistency across data sources)
      • 2-tier (data mart, cube), less ETL, no data duplication
    • Inmon:
      • Enterprise data model (CIF) that is an enterprise data warehouse (EDW)
      • IT driven, users have passive participation
      • Centralized atomic normalized tables (off limits to end users)
      • Later create dependent data marts that are separate physical subsets of data and can be used for multiple purposes
      • Integration via enterprise data model
      • 3-tier (data warehouse, data mart, cube), duplication of data 17
  • Kimball Model 18 Why staging: limit source contention (ELT), recoverability, backup, auditing
  • Inmon Model 19
  • Reasons to add an Enterprise Data Warehouse
    • Single version of the truth
    • May make building dimensions easier using lightly denormalized tables in the EDW instead of going directly from the OLTP source
    • A normalized EDW results in enterprise-wide consistency, which makes it easier to spawn off the data marts at the expense of duplicated data
    • Less daily ETL refresh and reconciliation if you have many sources and many data marts in multiple databases
    • One place to control data (no duplication of effort and data)
    • Reason not to: if you have a few sources that need reporting quickly 20
  • Which model to use?
    • The models are not that different, having become similar over the years, and can complement each other
    • It boils down to: Inmon creates a normalized DW before creating a dimensional data mart, and Kimball skips the normalized DW
    • With tweaks to each model, they look very similar (adding a normalized EDW to Kimball, dimensionally structured data marts to Inmon)
    • Bottom line: understand both approaches and pick parts from both for your situation; no need to choose just one approach
    • BUT, no solution will be effective unless you possess soft skills (leadership, communication, planning, and interpersonal relationships) 21
• Hybrid Model
  Advice: use SQL Server views to interface between each level in the model.
  In the DW Bus Architecture, each data mart could be a schema (broken out by business-process subject area), all in one database. Some companies will have one database and then build separate data mart tables from it.
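The view-as-interface advice can be sketched as below. This is a hypothetical example (the `edw` and `mart_sales` schemas and the table/column names are assumptions): the data mart ETL reads from the view rather than the base EDW tables, so the EDW can be restructured without breaking downstream consumers.

```sql
-- Hypothetical interface view between the EDW level and a sales data mart;
-- light denormalization happens in the view, not in the mart's ETL
CREATE VIEW mart_sales.vw_DimCustomer
AS
SELECT c.CustomerID,
       c.CustomerName,
       g.Region          -- joined in from a normalized EDW table
FROM   edw.Customer  AS c
JOIN   edw.Geography AS g
       ON g.GeographyID = c.GeographyID;
```

If an EDW table is later split or renamed, only the view definition changes; every data mart that selects from `mart_sales.vw_DimCustomer` keeps working.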
• Kimball Methodology
  From: Kimball's The Microsoft Data Warehouse Toolkit
  Kimball defines a full development lifecycle, whereas Inmon covers only the data warehouse itself (not how it is used)
• Populating a Data Warehouse
   Determine the frequency of the data pull (daily, weekly, etc.)
   Full extraction – all data (usually dimension tables)
   Incremental extraction – only data changed since the last run (fact tables)
   How to determine which data has changed:
     Timestamp column (last updated)
     Change Data Capture (CDC)
     Partitioning by date
     Triggers on tables
     MERGE SQL statement
     Column DEFAULT value populated with the date
   Online extraction – data pulled from the source; first create a copy of the source via:
     Replication
     Database snapshot
     Availability Groups
   Offline extraction – data from a flat file
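The timestamp approach to incremental extraction can be sketched as follows. This is a hypothetical example, not from the deck: the `etl.ExtractLog` high-water-mark table, `SourceDB`, and the `LastUpdated` column are all assumed names.

```sql
-- Hypothetical incremental pull driven by a LastUpdated timestamp column
DECLARE @LastRun DATETIME2 =
    (SELECT MAX(ExtractedThrough)
     FROM   etl.ExtractLog
     WHERE  TableName = 'Orders');

-- Pull only rows changed since the previous run
SELECT OrderID, CustomerID, OrderDate, Amount, LastUpdated
FROM   SourceDB.dbo.Orders
WHERE  LastUpdated > @LastRun;

-- After a successful load, record the new high-water mark
INSERT INTO etl.ExtractLog (TableName, ExtractedThrough)
VALUES ('Orders', SYSDATETIME());
```

CDC achieves the same goal without relying on the source application to maintain a timestamp column, at the cost of enabling the feature on the source database.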
• ETL vs ELT
  • Extract, Transform, and Load (ETL):
    • Transforms data while hitting the source system
    • No staging tables
    • Processing done by ETL tools (SSIS)
  • Extract, Load, Transform (ELT):
    • Uses staging tables
    • Processing done by the target database engine (SSIS: Execute T-SQL Statement task instead of Data Flow transform tasks)
    • Use for big volumes of data
    • Use when the source and target databases are the same
    • Use with Parallel Data Warehouse (PDW)
  • ELT is often better since the database engine is more efficient than SSIS:
    • Best use of the database engine: transformations
    • Best use of SSIS: data pipeline and workflow management
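An ELT transform step might look like the sketch below, run from an SSIS Execute T-SQL Statement task rather than a Data Flow. It is a hypothetical example (the `stg` and `dw` schemas and all table names are assumptions): a set-based insert from staging into a fact table, with dimension lookups done as joins inside the engine.

```sql
-- Hypothetical ELT transform: staging rows become fact rows in one set-based
-- statement, letting the database engine do the heavy lifting
INSERT INTO dw.FactSales (DateKey, CustomerKey, SalesAmount)
SELECT d.DateKey,
       ISNULL(c.CustomerKey, -1),   -- -1 = unknown/unassigned dimension member
       s.Amount
FROM   stg.Orders          AS s
JOIN   dw.DimDate          AS d
       ON d.FullDate = CAST(s.OrderDate AS DATE)
LEFT JOIN dw.DimCustomer   AS c
       ON c.CustomerID = s.CustomerID;
```

The equivalent ETL design would do the lookups row by row in an SSIS Data Flow; for large volumes the set-based version usually wins.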
• Surrogate Keys
  Surrogate key – a unique identifier not derived from the source system
  • Embedded in fact tables as foreign keys to dimension tables
  • Allows integrating data from multiple source systems
  • Protects against key changes in the source system
  • Allows for slowly changing dimensions
  • Allows you to create rows in the dimension that don't exist in the source (-1 in the fact table for unassigned)
  • Improves join performance and reduces database size by using an integer type instead of text
  • Implemented via an identity column on dimension tables
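The identity-column implementation above can be sketched as follows. This is a hypothetical example (the `dw.DimCustomer` table and its columns are assumed names): the surrogate key is generated by the warehouse, the business key from the source is kept alongside it, and a reserved -1 row covers facts with no matching member.

```sql
-- Hypothetical dimension with an identity surrogate key;
-- CustomerID is the natural (business) key from the source system
CREATE TABLE dw.DimCustomer (
    CustomerKey   INT IDENTITY(1,1) NOT NULL PRIMARY KEY,  -- surrogate key
    CustomerID    INT            NOT NULL,                 -- source key
    CustomerName  NVARCHAR(100)  NOT NULL,
    EffectiveDate DATE           NOT NULL,                 -- supports SCD Type 2
    ExpiryDate    DATE           NULL
);

-- Reserve a row for facts that arrive with no matching dimension member
SET IDENTITY_INSERT dw.DimCustomer ON;
INSERT INTO dw.DimCustomer (CustomerKey, CustomerID, CustomerName, EffectiveDate)
VALUES (-1, -1, N'Unknown', '1900-01-01');
SET IDENTITY_INSERT dw.DimCustomer OFF;
```

Because `CustomerKey` is a 4-byte integer, fact-table foreign keys stay compact and joins stay fast even when the source keys are wide text values.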
• SSAS Cubes
  Reasons to report off cubes instead of the data warehouse:
   Aggregating (summarizing) the data for performance
   Multidimensional analysis – slice, dice, drill down, show details
   Can store hierarchies
   Built-in support for KPIs
   Security: you can use security settings to give end users access to only those parts (slices) of the cube relevant to them
   Built-in advanced time calculations, e.g. a 12-month rolling average
   Easily use Excel to view data via PivotTables
   Automatically handles slowly changing dimensions (SCD)
   Required for PerformancePoint, Power View, and SSAS data mining
• Data Warehouse Architecture
• Modern Data Warehouse
  Think about future needs:
  • Increasing data volumes
  • Real-time performance
  • New data sources and types
  • Cloud-born data
  Solution – Microsoft Analytics Platform System:
  • Scalable
  • MPP architecture
  • HDInsight
  • PolyBase
• Resources
   Data Warehouse Architecture – Kimball and Inmon methodologies: http://bit.ly/SrzNHy
   SQL Server 2012: Multidimensional vs tabular: http://bit.ly/SrzX1x
   Data Warehouse vs Data Mart: http://bit.ly/SrAi4p
   Fast Track Data Warehouse Reference Guide for SQL Server 2012: http://bit.ly/SrAwsj
   Complex reporting off a SSAS cube: http://bit.ly/SrAEYw
   Surrogate Keys: http://bit.ly/SrAIrp
   Normalizing Your Database: http://bit.ly/SrAHnc
   Difference between ETL and ELT: http://bit.ly/SrAKQa
   Microsoft's Data Warehouse offerings: http://bit.ly/xAZy9h
   Microsoft SQL Server Reference Architecture and Appliances: http://bit.ly/y7bXY5
   Methods for populating a data warehouse: http://bit.ly/SrARuZ
   Great white paper: Microsoft EDW Architecture, Guidance and Deployment Best Practices: http://bit.ly/SrAZug
   End-User Microsoft BI Tools – Clearing up the confusion: http://bit.ly/SrBMLT
   Microsoft Appliances: http://bit.ly/YQIXzM
   Why You Need a Data Warehouse: http://bit.ly/1fwEq0j
   Data Warehouse Maturity Model: http://bit.ly/xl4mGM
  • Q & A ? James Serra, Technology Solution Professional Email me at: JamesSerra3@gmail.com Follow me at: @JamesSerra Link to me at: www.linkedin.com/in/JamesSerra Visit my blog at: JamesSerra.com (where this slide deck will be posted)
• Session Evaluations
  Submit by 5pm Friday, May 9 to WIN prizes. Your feedback is important and valuable.
  Ways to access:
  • Go to passbac2014/evals
  • Download the PASS Event App from your app store and search: PASS BAC 2014
  • Follow the QR code link displayed on session signage throughout the conference venue and in the program guide
• Thank You for attending this session and the PASS Business Analytics Conference 2014
  May 7-9, 2014 | San Jose, CA