Fathers of Data Warehousing Concepts

William H. Inmon Biography
Bill Inmon is recognized as the "father of the data warehouse" and co-creator of the "Corporate Information Factory." He has 35 years of experience in database technology management and data warehouse design. He is known globally for his seminars on developing data warehouses and has been a keynote speaker for every major computing association and many industry conferences, seminars, and tradeshows. As an author, Bill has written about a variety of topics on the building, usage, and maintenance of the data warehouse and the Corporate Information Factory. He has written more than 650 articles, many of which have been published in major computer journals such as Datamation, ComputerWorld, and Byte Magazine. Bill has been a columnist with Data Management Review since its inception. He has published 45 books; one sold over half a million copies, and 21 have been book club selections with publishers such as Prentice-Hall, John Wiley, and QED. His books have been translated into Chinese, Dutch, French, German, Japanese, Korean, Portuguese, Russian, and Spanish.

Ralph Kimball Biography
Ralph Kimball is known worldwide as an innovator, writer, educator, speaker, and consultant in the field of data warehousing. He has remained steadfast in his long-term conviction that data warehouses must be designed to be understandable and fast. His books on dimensional design techniques have become the all-time best sellers in data warehousing. To date Ralph has written more than 100 articles and columns for Intelligent Enterprise and its predecessors, winning the Readers' Choice Award five years in a row. After receiving a Ph.D. in electrical engineering from Stanford in 1972 (specializing in man-machine systems), Ralph joined the Xerox Palo Alto Research Center (PARC). At PARC Ralph co-invented the Xerox Star Workstation, the first commercial product to use mice, icons, and windows. Ralph then became vice president of applications at Metaphor Computer Systems, a pioneering decision support software and services provider. As a hands-on manager, he developed the Capsule Facility in 1982. The Capsule was a graphical programming technique that connected icons together in a logical flow, allowing a very visual style of programming for non-programmers; it was used to build reporting and analysis applications at Metaphor. Ralph founded Red Brick Systems in 1986, serving as CEO until 1992. Red Brick Systems, now owned by IBM, was known for its lightning-fast relational database optimized for data warehousing. Ralph Kimball Associates was incorporated in 1992 to provide data warehouse consulting and education.

Ralph Kimball vs. Bill Inmon's Paradigm of Data Warehousing

In the data warehousing field, we often hear discussions about whether a person's or organization's philosophy falls into Bill Inmon's camp or into Ralph Kimball's camp. The difference between the two philosophies is summarized below.

Bill Inmon's paradigm
The data warehouse is one part of the overall business intelligence system. An enterprise has one data warehouse, and data marts source their information from the data warehouse. In the data warehouse, information is stored in third normal form.

Ralph Kimball's paradigm
The data warehouse is the conglomerate of all data marts within the enterprise. Information is always stored in the dimensional model.

There is no right or wrong between these two ideas, as they represent different data warehousing philosophies. In reality, the data warehouse in most enterprises is closer to Ralph Kimball's idea. This is because most data warehouses started out as a departmental effort, and hence originated as a data mart. Only when more data marts are built later do they evolve into a data warehouse.
Informatica Software Architecture Illustrated

Informatica's ETL product, known as Informatica PowerCenter, consists of three main components.

1. Informatica PowerCenter Client Tools
These are the development tools installed on the developer's machine. They enable a developer to:
- Define the transformation process, known as a mapping (Designer)
- Define run-time properties for a mapping, known as a session (Workflow Manager)
- Monitor the execution of sessions (Workflow Monitor)
- Manage the repository, useful for administrators (Repository Manager)
- Report on metadata (Metadata Reporter)

2. Informatica PowerCenter Repository
The repository is the heart of the Informatica tools. It is a data inventory where all the data related to mappings, sources, targets, and so on is kept; it is the place where all the metadata for your application is stored. All the client tools and the Informatica Server fetch data from the repository. An Informatica client and server without a repository is like a PC without memory or a hard disk: it can process data but has no data to process. The repository can be treated as the back end of Informatica.

3. Informatica PowerCenter Server
The server is where all executions take place. It makes physical connections to sources and targets, fetches data, applies the transformations defined in the mapping, and loads the data into the target system.

This architecture is illustrated in the diagram below, which lists the supported sources and targets.

Sources
- Standard: RDBMS, flat files, XML, ODBC
- Applications: SAP R/3, SAP BW, PeopleSoft, Siebel, JD Edwards, i2
- EAI: MQ Series, Tibco, JMS, Web Services
- Legacy: mainframes (DB2, VSAM, IMS, IDMS, Adabas), AS400 (DB2, flat file)
- Remote sources

Targets
- Standard: RDBMS, flat files, XML, ODBC
- Applications: SAP R/3, SAP BW, PeopleSoft, Siebel, JD Edwards, i2
- EAI: MQ Series, Tibco, JMS, Web Services
- Legacy: mainframes (DB2), AS400 (DB2)
- Remote targets

This is sufficient knowledge to get started with Informatica, so let's go straight to development in Informatica.
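To make the split between client-side mapping design, the repository, and server-side execution easier to picture, here is a toy Python sketch. It is not Informatica code or its API; the names mapping, session, and run_session are invented for the analogy. A mapping is just ordered transformation logic, a session binds it to a source and a target, and the server-like function executes it.

```python
# Toy analogy of the PowerCenter split between mapping, session, and server.
# This is NOT Informatica code; it only illustrates the division of responsibilities.

def uppercase_names(row):          # a "transformation": row in, row out
    row["name"] = row["name"].upper()
    return row

mapping = [uppercase_names]        # Designer: transformation logic only

session = {                        # Workflow Manager: run-time properties
    "mapping": mapping,
    "source": [{"id": 1, "name": "alice"}, {"id": 2, "name": "bob"}],
    "target": [],
}

def run_session(session):          # Server: connects, reads, transforms, loads
    for row in session["source"]:
        for transform in session["mapping"]:
            row = transform(row)
        session["target"].append(row)

run_session(session)
print(session["target"])           # [{'id': 1, 'name': 'ALICE'}, {'id': 2, 'name': 'BOB'}]
```

In PowerCenter the analogous pieces are the Designer mapping, the Workflow Manager session, and the PowerCenter Server that executes it, with all definitions stored in the repository.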
Informatica Product Line

Informatica is a powerful ETL tool from Informatica Corporation, a leading provider of enterprise data integration and ETL software. The important products offered by Informatica Corporation are listed below:
- Power Center
- Power Mart
- Power Exchange
- Power Center Connect
- Power Channel
- Metadata Exchange
- Power Analyzer
- Super Glue

Power Center & Power Mart: Power Mart is a departmental version of Informatica for building, deploying, and managing data warehouses and data marts. Power Center is used for corporate enterprise data warehouses, while Power Mart is used for departmental data warehouses such as data marts. Power Center supports global and networked repositories and can be connected to several sources; Power Mart supports a single repository and can be connected to fewer sources than Power Center. Power Mart can grow into an enterprise implementation, and its codeless environment makes developers productive.

Power Exchange: Informatica Power Exchange, as a standalone service or along with Power Center, helps organizations leverage data by avoiding manual coding of data extraction programs. Power Exchange supports batch, real-time, and changed data capture options for mainframe sources (DB2, VSAM, IMS, etc.), midrange sources (AS400 DB2, etc.), relational databases (Oracle, SQL Server, DB2, etc.), and flat files on UNIX, Linux, and Windows systems.

Power Center Connect: This is an add-on to Informatica Power Center. It helps to extract data and metadata from systems such as IBM's MQSeries, PeopleSoft, SAP, Siebel, and other third-party applications.

Power Channel: This helps to transfer large amounts of encrypted and compressed data over LAN and WAN, through firewalls, to transfer files over FTP, and so on.

Metadata Exchange: Metadata Exchange, used with Power Center, enables organizations to take advantage of the time and effort already invested in defining data structures within their IT environment. For example, an organization may be using data modeling tools such as ERwin, Embarcadero, Oracle Designer, or Sybase PowerDesigner for developing data models. Functional and technical teams typically spend significant time and effort creating the data model's data structures (tables, columns, data types, procedures, functions, triggers, etc.). Using Metadata Exchange, these data structures can be imported into Power Center to identify source and target mappings, which saves time and effort; there is no need for the Informatica developer to create these data structures again.

Power Analyzer: PowerAnalyzer provides organizations with reporting facilities. It makes accessing, analyzing, and sharing enterprise data simple and easily available to decision makers, enabling organizations to gain insight into business processes and develop business intelligence. With PowerAnalyzer, an organization can extract, filter, format, and analyze corporate information from data stored in a data warehouse, data mart, operational data store, or other data storage models. PowerAnalyzer works best with a dimensional data warehouse in a relational database, but it can also run reports on data in any relational table that does not conform to the dimensional model.

Super Glue: SuperGlue is used for loading metadata from several sources into a centralized place. Reports can be run against SuperGlue to analyze the metadata.

Informatica Transformations

A transformation is a repository object that generates, modifies, or passes data. The Designer provides a set of transformations that perform specific functions. For example, an Aggregator transformation performs calculations on groups of data. Transformations can be of two types: active and passive.

Active Transformation
An active transformation can change the number of rows that pass through it, change the transaction boundary, or change the row type. For example, Filter, Transaction Control, and Update Strategy are active transformations. Note that the Designer does not allow you to connect multiple active transformations, or an active and a passive transformation, to the same downstream transformation or transformation input group, because the Integration Service may not be able to concatenate the rows passed by active transformations. However, the Sequence Generator transformation (SGT) is an exception to this rule: an SGT does not receive data, it only generates unique numeric values, so the Integration Service does not encounter problems concatenating rows passed by an SGT and an active transformation.

Passive Transformation
A passive transformation does not change the number of rows that pass through it, maintains the transaction boundary, and maintains the row type. Note that the Designer allows you to connect multiple transformations to the same downstream transformation or transformation input group only if all transformations in the upstream branches are passive. The transformation that originates the branch can be active or passive.
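To make the distinction concrete, here is a minimal Python sketch (plain Python, not Informatica code; the column names are invented): a passive, Expression-style step emits exactly one output row per input row, while an active, Filter-style step can change the row count.

```python
# Illustrative only: contrasting passive (row-preserving) and active (row-count-changing)
# transformation behaviour with plain Python functions over a list of rows.

rows = [
    {"emp_id": 1, "salary": 4000},
    {"emp_id": 2, "salary": 9000},
    {"emp_id": 3, "salary": 7500},
]

def expression_like(row):
    """Passive: one row in, one row out; only derives a new column."""
    return {**row, "annual_salary": row["salary"] * 12}

def filter_like(rows, condition):
    """Active: the number of output rows can differ from the number of input rows."""
    return [row for row in rows if condition(row)]

passive_out = [expression_like(r) for r in rows]
active_out = filter_like(passive_out, lambda r: r["salary"] > 5000)

print(len(rows), len(passive_out))  # 3 3  -> row count preserved
print(len(active_out))              # 2    -> row count changed
```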
Transformations can also be connected or unconnected to the data flow.

Connected Transformation
A connected transformation is connected to other transformations or directly to the target table in the mapping.

Unconnected Transformation
An unconnected transformation is not connected to other transformations in the mapping. It is called within another transformation and returns a value to that transformation.

The following transformations are available in Informatica: Aggregator, Application Source Qualifier, Custom, Data Masking, Expression, External Procedure, Filter, HTTP, Input, Java, Joiner, Lookup, Normalizer, Output, Rank, Reusable, Router, Sequence Generator, Sorter, Source Qualifier, SQL, Stored Procedure, Transaction Control, Union, Unstructured Data, Update Strategy, XML Generator, XML Parser, XML Source Qualifier, and Advanced External Procedure transformations. In the following pages, we explain these transformations and their significance in the ETL process in detail.

Aggregator Transformation
The Aggregator transformation performs aggregate functions such as average, sum, and count on multiple rows or groups. The Integration Service performs these calculations as it reads, storing group and row data in an aggregate cache. It is an active and connected transformation.

Difference between the Aggregator and Expression transformations: the Expression transformation permits calculations on a row-by-row basis only, whereas the Aggregator lets you perform calculations on groups. A typical Aggregator might carry ports such as State, State_Count, Previous_State, and State_Counter.

Components: aggregate cache, aggregate expression, group by port, sorted input. Aggregate expressions are allowed only in Aggregator transformations; they can include conditional clauses and non-aggregate functions, and can also nest one aggregate function inside another. Aggregate functions: AVG, COUNT, FIRST, LAST, MAX, MEDIAN, MIN, PERCENTILE, STDDEV, SUM, VARIANCE.
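As a rough sketch of what an Aggregator does, grouping rows by one or more ports and evaluating aggregate expressions per group, here is a standard-library Python example; the state and amount columns are invented for illustration.

```python
# Illustrative group-by aggregation, similar in spirit to an Aggregator transformation
# with "state" as the group-by port and SUM/COUNT as aggregate expressions.
from collections import defaultdict

rows = [
    {"state": "NY", "amount": 120.0},
    {"state": "CA", "amount": 75.5},
    {"state": "NY", "amount": 40.0},
]

totals = defaultdict(lambda: {"state_count": 0, "amount_sum": 0.0})
for row in rows:
    group = totals[row["state"]]
    group["state_count"] += 1             # COUNT per group
    group["amount_sum"] += row["amount"]  # SUM per group

for state, agg in totals.items():
    print(state, agg["state_count"], agg["amount_sum"])
# NY 2 160.0
# CA 1 75.5
```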
Application Source Qualifier Transformation
Represents the rows that the Integration Service reads from an application source, such as an ERP source, when it runs a session. It is an active and connected transformation.

Custom Transformation
The Custom transformation works with procedures you create outside the Designer interface to extend PowerCenter functionality; it calls a procedure from a shared library or DLL. It is active or passive, and connected. You can use a Custom transformation to create transformations that require multiple input groups and multiple output groups. The Custom transformation allows you to develop the transformation logic in a procedure. Some of the PowerCenter transformations are built using the Custom transformation, and rules that apply to Custom transformations, such as blocking rules, also apply to transformations built using them. PowerCenter provides two sets of functions, called generated and API functions. The Integration Service uses generated functions to interface with the procedure. When you create a Custom transformation and generate the source code files, the Designer includes the generated functions in the files. Use the API functions in the procedure code to develop the transformation logic.

Difference between the Custom and External Procedure transformations: in a Custom transformation, input and output functions occur separately. The Integration Service passes the input data to the procedure using an input function, and the output function is a separate function that you must enter in the procedure code to pass output data to the Integration Service. In contrast, in the External Procedure transformation, a single external procedure function handles both input and output, and its parameters consist of all the ports of the transformation.

Data Masking Transformation
Passive and connected. It is used to change sensitive production data into realistic test data for non-production environments. It creates masked data for development, testing, training, and data mining. Data relationships and referential integrity are maintained in the masked data. For example, it returns a masked value with a realistic format for an SSN, credit card number, birth date, or phone number, but the value is not valid. Masking types: key masking, random masking, expression masking, and special mask formats; the default is no masking.

Expression Transformation
Passive and connected. Expression transformations are used to perform non-aggregate functions, i.e., to calculate values in a single row. Examples: calculating the discount for each product, concatenating first and last names, or converting a date to a string field. You can create an Expression transformation in the Transformation Developer or the Mapping Designer. Components: Transformation, Ports, Properties, Metadata Extensions.
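The row-by-row nature of the Expression transformation can be sketched in a few lines of Python, mirroring the discount and name-concatenation examples above; the column names are invented.

```python
# Illustrative expression-style (row-by-row) calculations: each input row
# produces exactly one output row with derived columns added.
rows = [
    {"first_name": "Rob", "last_name": "Doe", "price": 200.0, "discount_pct": 10},
    {"first_name": "Ann", "last_name": "Lee", "price": 50.0, "discount_pct": 5},
]

def expression(row):
    return {
        **row,
        "full_name": row["first_name"] + " " + row["last_name"],
        "discounted_price": row["price"] * (1 - row["discount_pct"] / 100.0),
    }

for out in map(expression, rows):
    print(out["full_name"], out["discounted_price"])
# Rob Doe 180.0
# Ann Lee 47.5
```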
External Procedure Transformation
Passive, and connected or unconnected. It works with procedures you create outside of the Designer interface to extend PowerCenter functionality. You can create complex functions within a DLL or in the COM layer of Windows and bind them to an External Procedure transformation. To get this kind of extensibility, use the Transformation Exchange (TX) dynamic invocation interface built into PowerCenter. You must be an experienced programmer to use TX and to use multi-threaded code in external procedures.

Filter Transformation
Active and connected. It passes the rows that meet the specified filter condition and removes the rows that do not. For example, it can find all the employees who are working in New York, or all the faculty members teaching chemistry in a state. The input ports for the filter must come from a single transformation; you cannot concatenate ports from more than one transformation into the Filter transformation. Components: Transformation, Ports, Properties, Metadata Extensions.

HTTP Transformation
Passive and connected. It allows you to connect to an HTTP server to use its services and applications. With an HTTP transformation, the Integration Service connects to the HTTP server and issues a request to retrieve data from it or post data to it, passing the result to the target or a downstream transformation in the mapping. Authentication types: Basic, Digest, and NTLM. Request methods: GET, POST, and SIMPLE POST.

Java Transformation
Active or passive, and connected. It provides a simple native programming interface to define transformation functionality with the Java programming language. You can use the Java transformation to quickly define simple or moderately complex transformation functionality without advanced knowledge of the Java programming language or an external Java development environment.

Joiner Transformation
Active and connected. It is used to join data from two related heterogeneous sources residing in different locations, or to join data from the same source. To join two sources, there must be at least one pair of matching columns between them, and you must designate one source as the master and the other as the detail. For example: joining a flat file and a relational source, joining two flat files, or joining a relational source and an XML source. The Joiner transformation supports the following types of joins:

Normal
A normal join discards all the rows from the master and detail sources that do not match, based on the join condition.

Master Outer
A master outer join keeps all the rows from the detail source and the matching rows from the master source; it discards the unmatched rows from the master source.

Detail Outer
A detail outer join keeps all rows from the master source and the matching rows from the detail source; it discards the unmatched rows from the detail source.

Full Outer
A full outer join keeps all rows from both the master and detail sources.

Limitations on the pipelines you connect to the Joiner transformation: you cannot use a Joiner transformation when either input pipeline contains an Update Strategy transformation, and you cannot use a Joiner transformation if you connect a Sequence Generator transformation directly before it.
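To make the four join types concrete, the following Python sketch (illustrative only; the dept_id, dept, and emp columns and the joiner helper are invented) caches the master rows, streams the detail rows, and varies which unmatched rows are kept.

```python
# Illustrative master/detail join over a single key column, varying which
# unmatched rows are kept, to mirror Normal / Master Outer / Detail Outer / Full Outer.
master = [{"dept_id": 10, "dept": "Sales"}, {"dept_id": 20, "dept": "HR"}]
detail = [{"dept_id": 10, "emp": "Rob"}, {"dept_id": 30, "emp": "Ann"}]

def joiner(master, detail, key, keep_unmatched_detail, keep_unmatched_master):
    master_by_key = {m[key]: m for m in master}   # master rows are cached
    matched_master_keys, out = set(), []
    for d in detail:                              # detail rows stream through
        m = master_by_key.get(d[key])
        if m is not None:
            matched_master_keys.add(d[key])
            out.append({**m, **d})
        elif keep_unmatched_detail:
            out.append({"dept": None, **d})
    if keep_unmatched_master:
        out += [{**m, "emp": None} for k, m in master_by_key.items()
                if k not in matched_master_keys]
    return out

print(joiner(master, detail, "dept_id", False, False))  # Normal: matches only
print(joiner(master, detail, "dept_id", True,  False))  # Master Outer: all detail rows
print(joiner(master, detail, "dept_id", False, True))   # Detail Outer: all master rows
print(joiner(master, detail, "dept_id", True,  True))   # Full Outer: everything
```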
Lookup Transformation
Passive, and connected or unconnected. It is used to look up data in a flat file, relational table, view, or synonym. It compares Lookup transformation ports (input ports) to the lookup source column values based on the lookup condition; the returned values can then be passed to other transformations. You can create a lookup definition from a source qualifier, and you can use multiple Lookup transformations in a mapping.

You can perform the following tasks with a Lookup transformation:
- Get a related value. Retrieve a value from the lookup table based on a value in the source. For example, the source has an employee ID; retrieve the employee name from the lookup table.
- Perform a calculation. Retrieve a value from a lookup table and use it in a calculation. For example, retrieve a sales tax percentage, calculate a tax, and return the tax to a target.
- Update slowly changing dimension tables. Determine whether rows already exist in a target.

Lookup components: lookup source, Ports, Properties, Condition.
Types of lookup: (1) relational or flat file lookup, (2) pipeline lookup, (3) cached or uncached lookup, (4) connected or unconnected lookup.
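The connected/unconnected distinction for lookups can be sketched as follows (plain Python, not Informatica syntax; the employee data and the helper lkp_employee_name are invented): a connected-style lookup enriches every row in the flow, while an unconnected-style lookup is called like a function from inside another transformation and returns a single value.

```python
# Illustrative lookup against a small in-memory "lookup source".
lookup_source = {101: "Rob Doe", 102: "Ann Lee"}   # employee_id -> employee_name

rows = [{"employee_id": 101, "sale": 250.0},
        {"employee_id": 999, "sale": 80.0}]

# Connected style: the lookup sits in the data flow and enriches every row.
connected_out = [{**row, "employee_name": lookup_source.get(row["employee_id"])}
                 for row in rows]

# Unconnected style: called like a function from inside another transformation's
# expression and returning exactly one value.
def lkp_employee_name(employee_id):
    return lookup_source.get(employee_id, "UNKNOWN")

expression_out = [{**row, "label": f'{lkp_employee_name(row["employee_id"])}: {row["sale"]}'}
                  for row in rows]

print(connected_out)
print(expression_out)
```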
Normalizer Transformation
Active and connected. The Normalizer transformation processes multiple-occurring columns, or multiple-occurring groups of columns, in each source row and returns a row for each instance of the multiple-occurring data. It is used mainly with COBOL sources, where data is often stored in denormalized format. You can create the following Normalizer transformations:
- VSAM Normalizer transformation: a non-reusable transformation that acts as the Source Qualifier transformation for a COBOL source. VSAM stands for Virtual Storage Access Method, a file access method for IBM mainframes.
- Pipeline Normalizer transformation: a transformation that processes multiple-occurring data from relational tables or flat files. This is the default when you create a Normalizer transformation.
Components: Transformation, Ports, Properties, Normalizer, Metadata Extensions.

Rank Transformation
Active and connected. It is used to select the top or bottom rank of data. You can use it to return the largest or smallest numeric value in a port or group, or to return the strings at the top or bottom of the session sort order. For example, it can select the top 10 regions where the sales volume was highest, or the 10 lowest-priced products. As an active transformation, it can change the number of rows passed through it: if you pass 100 rows to the Rank transformation but select only the top 10, only those 10 rows pass from the Rank transformation to the next transformation. You can connect ports from only one transformation to the Rank transformation. You can also create local variables and write non-aggregate expressions.

Router Transformation
Active and connected. It is similar to the Filter transformation because both allow you to apply a condition to test data. The difference is that the Filter transformation drops the data that does not meet the condition, whereas the Router has an option to capture the data that does not meet the condition and route it to a default output group. If you need to test the same input data against multiple conditions, use a Router transformation in a mapping instead of creating multiple Filter transformations to perform the same task; the Router transformation is more efficient.
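The Filter-versus-Router difference is easy to see in a short Python sketch (illustrative only; the region values and group names are invented): the filter silently drops non-matching rows, while the router evaluates each group condition and sends rows that match none of them to a default group.

```python
# Illustrative contrast: a filter drops non-matching rows, a router splits rows
# into named output groups plus a default group so nothing is silently lost.
rows = [{"id": 1, "region": "ASIA"}, {"id": 2, "region": "EMEA"}, {"id": 3, "region": "APAC"}]

filtered = [r for r in rows if r["region"] == "ASIA"]        # rows 2 and 3 are dropped

groups = {"asia": [], "emea": [], "DEFAULT": []}             # router output groups
conditions = {"asia": lambda r: r["region"] == "ASIA",
              "emea": lambda r: r["region"] == "EMEA"}
for r in rows:
    matched = False
    for name, cond in conditions.items():
        if cond(r):                                          # row goes to every matching group
            groups[name].append(r)
            matched = True
    if not matched:
        groups["DEFAULT"].append(r)                          # no condition matched

print(len(filtered))                           # 1
print({k: len(v) for k, v in groups.items()})  # {'asia': 1, 'emea': 1, 'DEFAULT': 1}
```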
Sequence Generator Transformation
Passive and connected. It is used to create unique primary key values, cycle through a sequential range of numbers, or replace missing primary keys. It has two output ports, NEXTVAL and CURRVAL; you cannot edit or delete these ports, and you cannot add ports to the transformation. The NEXTVAL port generates a sequence of numbers when you connect it to a transformation or target. CURRVAL is NEXTVAL plus the Increment By value (by default, NEXTVAL plus one). You can make a Sequence Generator reusable and use it in multiple mappings; you might reuse a Sequence Generator when you perform multiple loads to a single target.

For non-reusable Sequence Generator transformations, Number of Cached Values is set to zero by default, and the Integration Service does not cache values during the session. Setting Number of Cached Values greater than zero for a non-reusable Sequence Generator can increase the number of times the Integration Service accesses the repository during the session; it also creates gaps in the sequence, since unused cached values are discarded at the end of each session. For reusable Sequence Generator transformations, you can reduce Number of Cached Values to minimize discarded values, but it must be greater than one. When you reduce Number of Cached Values, you might increase the number of times the Integration Service accesses the repository to cache values during the session.

Sorter Transformation
Active and connected. It is used to sort data in either ascending or descending order according to a specified sort key. You can also configure the Sorter transformation for case-sensitive sorting and specify whether the output rows should be distinct. When you create a Sorter transformation in a mapping, you specify one or more ports as the sort key and configure each sort key port to sort in ascending or descending order.

Source Qualifier Transformation
Active and connected. When you add a relational or flat file source definition to a mapping, you need to connect it to a Source Qualifier transformation. The Source Qualifier is used to join data originating from the same source database, to filter rows when the Integration Service reads source data, to specify an outer join rather than the default inner join, and to specify sorted ports. It is also used to select only distinct values from the source and to create a custom query that issues a special SELECT statement for the Integration Service to read source data.

SQL Transformation
Active or passive, and connected. The SQL transformation processes SQL queries midstream in a pipeline. You can insert, delete, update, and retrieve rows from a database, and you can pass the database connection information to the SQL transformation as input data at run time. The transformation processes external SQL scripts or SQL queries that you create in an SQL editor, and it returns rows and database errors.

Stored Procedure Transformation
Passive, and connected or unconnected. It is useful for automating time-consuming tasks, and it is also used in error handling, to drop and recreate indexes, to determine free space in a database, to perform specialized calculations, and so on. The stored procedure must exist in the database before you create a Stored Procedure transformation, and it can exist in a source, target, or any database with a valid connection to the Informatica Server. A stored procedure is an executable script containing SQL statements, control statements, user-defined variables, and conditional statements.

Transaction Control Transformation
Active and connected. You can control the commit and rollback of transactions based on a set of rows that pass through a Transaction Control transformation. Transaction control can be defined within a mapping or within a session. Components: Transformation, Ports, Properties, Metadata Extensions.

Union Transformation
Active and connected. The Union transformation is a multiple input group transformation that you use to merge data from multiple pipelines or pipeline branches into one pipeline branch. It merges data from multiple sources similarly to the UNION ALL SQL statement, which combines the results of two or more SQL statements; like UNION ALL, the Union transformation does not remove duplicate rows. Rules:
1) You can create multiple input groups, but only one output group.
2) All input groups and the output group must have matching ports; the precision, datatype, and scale must be identical across all groups.
3) The Union transformation does not remove duplicate rows. To remove duplicate rows, you must add another transformation, such as a Router or Filter transformation.
4) You cannot use a Sequence Generator or Update Strategy transformation upstream from a Union transformation.
5) The Union transformation does not generate transactions.
Components: Transformation tab, Properties tab, Groups tab, Group Ports tab.
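A minimal Python sketch of the UNION ALL-style behaviour described above (illustrative only; the union_all helper and sample rows are invented): the inputs must expose matching columns, duplicates are preserved, and removing them is a separate downstream step.

```python
# Illustrative UNION ALL-style merge: both inputs must expose the same columns,
# the merge keeps duplicates, and de-duplication is a separate downstream step.
pipeline_a = [{"id": 1, "name": "Rob"}, {"id": 2, "name": "Ann"}]
pipeline_b = [{"id": 2, "name": "Ann"}, {"id": 3, "name": "Joe"}]

def union_all(*groups):
    columns = set(groups[0][0])                      # "matching ports" check
    assert all(set(row) == columns for g in groups for row in g)
    return [row for g in groups for row in g]        # duplicates are kept

merged = union_all(pipeline_a, pipeline_b)
print(len(merged))                                   # 4 rows, duplicate kept

distinct = list({tuple(sorted(r.items())): r for r in merged}.values())
print(len(distinct))                                 # 3 rows after a separate de-dup step
```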
Unstructured Data Transformation
Active or passive, and connected. The Unstructured Data transformation processes unstructured and semi-structured file formats, such as messaging formats, HTML pages, and PDF documents. It also transforms structured formats such as ACORD, HIPAA, HL7, EDI-X12, EDIFACT, AFP, and SWIFT. Components: Transformation, Properties, UDT Settings, UDT Ports, Relational Hierarchy.

Update Strategy Transformation
Active and connected. It is used to update data in a target table, either to maintain a history of the data or only the most recent changes. It flags rows for insert, update, delete, or reject within a mapping.

XML Generator Transformation
Active and connected. It lets you create XML inside a pipeline. The XML Generator transformation accepts data from multiple ports and writes XML through a single output port.

XML Parser Transformation
Active and connected. The XML Parser transformation lets you extract XML data from messaging systems, such as TIBCO or MQ Series, and from other sources, such as files or databases. Its functionality is similar to that of an XML source, except that it parses the XML in the pipeline.

XML Source Qualifier Transformation
Active and connected. The XML Source Qualifier is used only with an XML source definition. It represents the data elements that the Informatica Server reads when it executes a session with XML sources, and it has one input or output port for every column in the XML source.

External Procedure Transformation
Active, and connected or unconnected. Sometimes the standard transformations, such as the Expression transformation, may not provide the functionality you want. In such cases an External Procedure transformation is useful for developing complex functions within a dynamic link library (DLL) or UNIX shared library, instead of creating the necessary Expression transformations in a mapping.

Advanced External Procedure Transformation
Active and connected. It operates in conjunction with procedures created outside of the Designer interface to extend PowerCenter/PowerMart functionality. It is useful for creating external transformation applications, such as sorting and aggregation, which require all input rows to be processed before emitting any output rows.

Quick Reference Guide to Dimensional Modeling

Dimensional modeling is the design concept used by many data warehouse designers to build their data warehouses, and the dimensional model is the underlying data model used by many of the commercial OLAP products on the market today. Designing a data warehouse is very different from designing an online transaction processing (OLTP) system. In contrast to an OLTP system, whose purpose is to capture high rates of data changes and additions, the purpose of a data warehouse is to organize large amounts of stable data for ease of analysis and retrieval. Because of these differing purposes, there are many considerations in data warehouse design that differ from OLTP database design. In the dimensional model, all data is contained in two types of tables: the fact table and the dimension table.

Fact Table
Each data warehouse or data mart includes one or more fact tables. The fact table captures the data that measures the organization's business operations. A fact table might contain business sales events such as cash register transactions, or the contributions and expenditures of a nonprofit organization. Fact tables usually contain large numbers of rows, sometimes in the hundreds of millions of records when they contain one or more years of history for a large organization. A key characteristic of a fact table is that it contains numerical data (facts) that can be summarized to provide information about the history of the operation of the organization. Each fact table also includes a multipart index that contains, as foreign keys, the primary keys of related dimension tables, which hold the attributes of the fact records. Fact tables should not contain descriptive information or any data other than the numerical measurement fields and the index fields that relate the facts to corresponding entries in the dimension tables. An example of a fact table is a Sales_Fact table that might contain information such as sale_amount, unit_price, and discount.

Dimension Table
Dimension tables contain attributes that describe fact records in the fact table. Some of these attributes provide descriptive information; others are used to specify how fact table data should be summarized to provide useful information to the analyst. Dimension tables contain hierarchies of attributes that aid in summarization. For example, a dimension containing product information would often contain a hierarchy that separates products into categories such as food, drink, and non-consumable items, with each of these categories further subdivided a number of times until the individual product is reached at the lowest level.

Dimensional modeling produces dimension tables in which each table contains attributes that are independent of those in other dimensions. For example, a customer dimension table contains data about customers, a product dimension table contains information about products, and a store dimension table contains information about stores. Queries use attributes in dimensions to specify a view into the fact information. For example, a query might use the product, store, and time dimensions to ask the question "What was the cost of non-consumable goods sold in the northeast region in 1999?" Subsequent queries might drill down along one or more dimensions to examine more detailed data, such as "What was the cost of kitchen products in New York City in the third quarter of 1999?" In these examples, the dimension tables are used to specify how a measure (sale_amount) in the fact table is to be summarized.
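The following Python sketch (illustrative only; the surrogate keys, sample rows, and column names such as category, region, and year are invented) shows how dimension attributes constrain and summarize a fact measure, in the spirit of the first example query above.

```python
# Illustrative star-schema query: filter on dimension attributes, then sum the
# fact measure. Surrogate keys link fact rows to dimension rows.
product_dim = {1: {"category": "non-consumable"}, 2: {"category": "food"}}
store_dim   = {1: {"region": "northeast"}, 2: {"region": "west"}}
time_dim    = {1: {"year": 1999}, 2: {"year": 2000}}

sales_fact = [
    {"product_key": 1, "store_key": 1, "time_key": 1, "sale_amount": 120.0},
    {"product_key": 2, "store_key": 1, "time_key": 1, "sale_amount": 45.0},
    {"product_key": 1, "store_key": 2, "time_key": 1, "sale_amount": 80.0},
]

total = sum(
    f["sale_amount"]
    for f in sales_fact
    if product_dim[f["product_key"]]["category"] == "non-consumable"
    and store_dim[f["store_key"]]["region"] == "northeast"
    and time_dim[f["time_key"]]["year"] == 1999
)
print(total)  # 120.0 -> non-consumable goods sold in the northeast in 1999
```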
Consider an example of a Sales_Fact table where the attributes that describe the fact are Store, Product, Time, and, say, Sales Person. In this case we will have four dimension tables: Store_Dimension, Product_Dimension, Time_Dimension, and Sales_Person_Dimension.

Figure 1

You may notice that each of these dimensions contains a Key field. This is called a surrogate key. This key is a substitute for a natural key in the dimension (e.g., in Sales_Person_Dimension, the natural key is ID). In a data warehouse, a surrogate key is a generalization of the natural production key and is one of the basic elements of the data warehouse.

Since the fact table is described by the four dimension tables above, it contains the surrogate keys of all these dimensions. This is what the Sales_Fact table looks like:

Figure 2

Now, if you look carefully at the structure of the above tables and how they are linked, the schema looks like this:

Figure 3

You can easily tell that this looks like a star, hence it is known as a star schema.

Advantages of a Star Schema
- A star schema is very easy to understand, even for non-technical business managers.
- A star schema provides better performance and smaller query times.
- A star schema is easily extensible and will handle future changes easily.

Slowly Changing Dimensions

Handling changes to dimensional data over time is one of the most important aspects of designing a data warehouse. In dimensional modeling, there is very little chance that a dimension will remain static over time. For example, a customer address may change; a company may phase out old products and introduce new products. What if a customer's name changes, a sales person changes his region of sale, or a company assigns new sales territories? How do we record the history, or preserve the old version of it? This is where the concept of Slowly Changing Dimensions comes in. The term Slowly Changing Dimension refers to variation in dimensional attributes over time. The word "slowly" in this context might seem incorrect: a sales person may change his territory rapidly. But in general, compared to the measures in the fact table, changes in dimensions occur slowly.

Types of Slowly Changing Dimensions

In reference to Figure 3 above, let's say a sales person changes his region of sale. We may handle this change in several ways. These methods fall into categories based on the company's need to preserve an accurate history of dimensional changes. Ralph Kimball categorized dimensional changes into three types:
- Type One: changes that overwrite history
- Type Two: preserve history
- Type Three: preserve a version of history

Type One (Overwrite History)
A Type One change overwrites the existing dimensional attribute with new information. In the sales person region change example, the old region name is simply overwritten by the new region.
Say a sales person, Rob, has ASIA as his territory:

Sales_Person_Dimension
Sales_Person_Key | ID     | Name    | Region | ...
100              | 203234 | Rob Doe | ASIA   | ...

Now, if he starts looking after the NorthWest region, a Type One change leaves the dimension table looking like this:

Sales_Person_Dimension
Sales_Person_Key | ID     | Name    | Region    | ...
100              | 203234 | Rob Doe | NorthWest | ...

Advantages:
- This is the easiest way to handle the Slowly Changing Dimension problem, since there is no need to keep track of the old information.

Disadvantages:
- All history is lost. By applying this methodology, it is not possible to trace back in history. In this case, the company would not be able to know that Rob was previously assigned to the ASIA region.

Type Two (Preserve History)
A Type Two change writes a record with the new attribute information and preserves a record of the old dimensional data. Type Two changes let you preserve historical data. Implementing Type Two changes within a data warehouse might require significant analysis and development, but they partition history across time more accurately than the other types. However, because Type Two changes add records, they can significantly increase the database's size.

In our example, let's say we identify Region as a Type Two attribute. The change is handled as follows:

Sales_Person_Dimension
Sales_Person_Key | ID     | Name    | Region    | ...
100              | 203234 | Rob Doe | ASIA      | ...
153              | 203234 | Rob Doe | NorthWest | ...

Advantages:
- This allows us to accurately keep all historical information.

Disadvantages:
- This causes the size of the table to grow quickly. In cases where the number of rows for the table is very high to start with, storage and performance can become a concern.
- This necessarily complicates the ETL process.
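A minimal Python sketch of the Type Two mechanics just shown (illustrative only; the apply_type2_change helper is invented, and a production implementation would typically also carry effective dates or a current-row flag, which this sketch omits): the old row is kept and a new row with a new surrogate key is appended.

```python
# Illustrative Type Two change: keep the old row and append a new row with a
# new surrogate key whenever a tracked attribute changes.
sales_person_dim = [
    {"Sales_Person_Key": 100, "ID": 203234, "Name": "Rob Doe", "Region": "ASIA"},
]

def apply_type2_change(dim, natural_id, new_region):
    matching = [r for r in dim if r["ID"] == natural_id]
    current = max(matching, key=lambda r: r["Sales_Person_Key"])  # latest version
    if current["Region"] == new_region:
        return                                  # nothing changed, nothing to add
    new_row = dict(current)
    new_row["Sales_Person_Key"] = max(r["Sales_Person_Key"] for r in dim) + 1
    new_row["Region"] = new_region
    dim.append(new_row)                         # old row is preserved

apply_type2_change(sales_person_dim, 203234, "NorthWest")
for row in sales_person_dim:
    print(row)
# {'Sales_Person_Key': 100, ..., 'Region': 'ASIA'}
# {'Sales_Person_Key': 101, ..., 'Region': 'NorthWest'}
```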
Type Three (Preserve a Version of History)
You usually implement Type Three changes only if you have a limited need to preserve and accurately describe history, such as when someone gets married and you need to retain the previous name. Instead of creating a new dimensional record to hold the attribute change, a Type Three change places a value for the change in the original dimensional record. You can create multiple fields to hold distinct values for separate points in time. In the region change example, you could create OLD_REGION and NEW_REGION fields, plus a REGION_CHANGE_EFF_DATE field to record when the change occurred. This method preserves the change, but how would you handle a second change, or a third, and so on? The side effects of this method are increased table size and, more importantly, increased complexity of the queries that analyze historical values from these old fields. After more than a couple of iterations, queries become impossibly complex, and ultimately you are constrained by the maximum number of attributes allowed on a table.

This is what the table looks like after a Type Three change:

Sales_Person_Dimension
Sales_Person_Key | ID     | Name    | Old Region | New Region | ...
100              | 203234 | Rob Doe | ASIA       | NorthWest  | ...

Advantages:
- This does not increase the size of the table, since the existing record is updated with the new information.
- This allows us to keep some part of history.

Disadvantages:
- Type Three cannot keep all history where an attribute changes more than once. For example, if Rob later moves to a third region, the original ASIA value will be lost.

Because most business requirements include tracking changes over time, data warehouse architects commonly implement Type Two changes. A data warehouse might use Type Two changes for all attributes in all tables. As an alternative, you can implement a mix of Type One and Type Two changes at the attribute level, applying Type Two changes only to attributes whose historical values are important when you are slicing and dicing. For example, users might not need to know an individual's previous name if a name change occurs, so a Type One change would suffice; users might want the system to show only the person's current name. However, if the company reassigns sales territories, users might need to track who sold what, at what time, and in what territory, necessitating a Type Two change.

Although most data warehouses include Type Two changes, you need to seriously examine the business need to record historical data. Implementing Type Two changes might be necessary, but those changes will increase the database size, degrade performance, and lengthen development time. You need to carefully evaluate whether to use a Type Two implementation, a Type One implementation, or a hybrid implementation.
Informatica doc

  • 1. Fathers of Data Warehousing Concepts<br />William H. Inmon BiographyBill Inmon, is recognized as the quot; father of the data warehousequot; and co-creator of the quot; Corporate Information Factory.quot; He has 35 years of experience in database technology management and data warehouse design. He is known globally for his seminars on developing data warehouses and has been a keynote speaker for every major computing association and many industry conferences, seminars, and tradeshows.As an author, Bill has written about a variety of topics on the building, usage, and maintenance of the data warehouse and the Corporate Information Factory. He has written more than 650 articles, many of them have been published in major computer journals such as Datamation, ComputerWorld, and Byte Magazine. Bill is currently a columnist with Data Management Review, and has been since its inception. He has published 45 books; one sold over half a million copies, 21 have been book club selections with publishers such as Prentice-Hall, John Wiley, and QED. Translations of various books have been done in Chinese, Dutch, French, German, Japanese, Korean, Portuguese, Russian, and Spanish. Ralph Kimball BiographyRalph Kimball is known worldwide as an innovator, writer, educator, speaker and consultant in the field of data warehousing. He has remained steadfast in his long-term conviction that data warehouses must be designed to be understandable and fast. His books on dimensional design techniques have become the all time best sellers in data warehousing. To date Ralph has written more than 100 articles and columns for Intelligent Enterprise and its predecessors, winning the Readers Choice Award five years in a row.After receiving a Ph.D. in 1972 from Stanford in electrical engineering (specializing in man-machine systems), Ralph joined the Xerox Palo Alto Research Center (PARC). At PARC Ralph co-invented the Xerox Star Workstation, the first commercial product to use mice, icons and windows.Ralph then became vice president of applications at Metaphor Computer Systems, pioneering decision support software and services provider. As a hands-on manager, he developed the Capsule Facility in 1982. The Capsule was a graphical programming technique which connected icons together in a logical flow, allowing a very visual style of programming for non-programmers. The Capsule was used to build reporting and analysis applications at Metaphor.Ralph founded Red Brick Systems in 1986, serving as CEO until 1992. Red Brick Systems, now owned by IBM, was known for its lightning fast relational database optimized for data warehousing. Ralph Kimball Associates incorporated in 1992 to provide data warehouse consulting and education.  <br />Ralph Kimball Vs. Bill Inmon's Paradigm of Data Warehouse<br />In data warehousing field, we often hear about discussion on whether a person/organization’s philosophy falls into Bill Inmon's camp or into Ralph Kimball's camp. Below is the difference between two philosophies:<br />Bill Inmon's paradigm<br />Data warehouse is one part of the overall business intelligence system. An enterprise has one data warehouse, and data marts source their information from the data warehouse. In the data warehouse, information is stored in 3rd normal form. <br />Ralph Kimball's paradigm<br />Data warehouse is the conglomerate of all data marts within the enterprise. Information is always stored in the dimensional model. 
This is because most data warehouses start out as a departmental effort, and hence originate as data marts. Only when more data marts are built later do they evolve into a data warehouse.

Informatica Software Architecture Illustrated

The Informatica ETL product, known as Informatica PowerCenter, consists of three main components.

1. Informatica PowerCenter Client Tools: These are the development tools installed on the developer's machine. They enable a developer to:
- Define the transformation process, known as a mapping (Designer)
- Define run-time properties for a mapping, known as sessions (Workflow Manager)
- Monitor the execution of sessions (Workflow Monitor)
- Manage the repository, useful for administrators (Repository Manager)
- Report on metadata (Metadata Reporter)
(A conceptual sketch of a mapping and a session run follows the product overview below.)

2. Informatica PowerCenter Repository: The repository is the heart of the Informatica tools. It is a data inventory where all the information related to mappings, sources, targets and so on is kept; this is where all the metadata for your application is stored. All the client tools and the Informatica Server fetch data from the repository. An Informatica client and server without a repository is like a PC without memory or a hard disk: it can process data but has no data to process. The repository can be treated as the back end of Informatica.

3. Informatica PowerCenter Server: The server is where all execution takes place. The server makes physical connections to sources and targets, fetches data, applies the transformations defined in the mapping and loads the data into the target system.

This architecture connects a wide range of sources and targets:

Sources - Standard: RDBMS, Flat Files, XML, ODBC; Applications: SAP R/3, SAP BW, PeopleSoft, Siebel, JD Edwards, i2; EAI: MQ Series, Tibco, JMS, Web Services; Legacy: Mainframes (DB2, VSAM, IMS, IDMS, Adabas), AS400 (DB2, Flat File); Remote Sources
Targets - Standard: RDBMS, Flat Files, XML, ODBC; Applications: SAP R/3, SAP BW, PeopleSoft, Siebel, JD Edwards, i2; EAI: MQ Series, Tibco, JMS, Web Services; Legacy: Mainframes (DB2), AS400 (DB2); Remote Targets

This is sufficient knowledge to start with Informatica, so let's go straight to development in Informatica.

Informatica Product Line

Informatica is a powerful ETL tool from Informatica Corporation, a leading provider of enterprise data integration and ETL software.

The important products provided by Informatica Corporation are listed below:
- Power Center
- Power Mart
- Power Exchange
- Power Center Connect
- Power Channel
- Metadata Exchange
- Power Analyzer
- Super Glue

Power Center & Power Mart: Power Mart is a departmental version of Informatica for building, deploying and managing data warehouses and data marts. Power Center is used for the corporate enterprise data warehouse, while Power Mart is used for departmental data warehouses such as data marts. Power Center supports global and networked repositories and can be connected to several sources; Power Mart supports a single repository and can be connected to fewer sources than Power Center. Power Mart can be extended to grow into an enterprise implementation, and its codeless environment supports developer productivity.
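Tying the architecture description above together: a mapping is a reusable flow of transformations, and a session is one run of that flow against concrete sources and targets. The short Python sketch below is purely conceptual; the function and field names are invented and this is not the PowerCenter API. It only illustrates that separation of "what to do" from "one run of it".

    # A mapping: an ordered series of transformation steps from source to target.
    def sample_mapping(rows):
        filtered = (r for r in rows if r["amount"] > 0)            # Filter-style step
        derived  = ({**r, "net": r["amount"] - r["discount"]}      # Expression-style step
                    for r in filtered)
        return derived

    # A "session" run: read from a source, apply the mapping, write to a target
    # (plain Python lists stand in for both here).
    source_rows = [{"amount": 100.0, "discount": 5.0},
                   {"amount": -3.0,  "discount": 0.0}]
    target_rows = list(sample_mapping(source_rows))
    print(target_rows)   # [{'amount': 100.0, 'discount': 5.0, 'net': 95.0}]

In PowerCenter the same separation holds: the Designer defines the mapping, the Workflow Manager defines how and when it runs, and the server does the actual reading, transforming and loading.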
Power Exchange: Informatica Power Exchange, as a standalone service or along with Power Center, helps organizations leverage data by avoiding the manual coding of data extraction programs. Power Exchange supports batch, real-time and changed data capture options for mainframe sources (DB2, VSAM, IMS, etc.), midrange sources (AS400 DB2, etc.), relational databases (Oracle, SQL Server, DB2, etc.) and flat files on UNIX, Linux and Windows systems.

Power Center Connect: This is an add-on to Informatica Power Center. It helps extract data and metadata from ERP systems such as PeopleSoft, SAP and Siebel, from messaging systems such as IBM's MQSeries, and from other third-party applications.

Power Channel: This helps transfer large amounts of encrypted and compressed data over LAN and WAN, through firewalls, transfer files over FTP, and so on.

Metadata Exchange: Metadata Exchange, used with Power Center, enables organizations to take advantage of the time and effort already invested in defining data structures within their IT environment. For example, an organization may be using data modeling tools such as Erwin, Embarcadero, Oracle Designer or Sybase PowerDesigner for developing data models. The functional and technical teams will have spent considerable time and effort creating the data model's data structures (tables, columns, data types, procedures, functions, triggers, etc.). Using Metadata Exchange, these data structures can be imported into Power Center to identify source and target mappings, which leverages that time and effort; there is no need for an Informatica developer to create these data structures again.

Power Analyzer: Power Analyzer provides organizations with reporting facilities. PowerAnalyzer makes accessing, analyzing and sharing enterprise data simple and easily available to decision makers, and enables insight into business processes for developing business intelligence. With PowerAnalyzer, an organization can extract, filter, format and analyze corporate information from data stored in a data warehouse, data mart, operational data store or other data storage model. PowerAnalyzer works best with a dimensional data warehouse in a relational database, but it can also run reports on data in any relational table that does not conform to the dimensional model.

Super Glue: Super Glue is used for loading metadata into a centralized place from several sources. Reports can be run against Super Glue to analyze metadata.

Informatica Transformations

A transformation is a repository object that generates, modifies or passes data. The Designer provides a set of transformations that perform specific functions. For example, an Aggregator transformation performs calculations on groups of data.

Transformations can be of two types:

Active Transformation
An active transformation can change the number of rows that pass through the transformation, change the transaction boundary and change the row type. For example, Filter, Transaction Control and Update Strategy are active transformations.
The key point to note is that the Designer does not allow you to connect multiple active transformations, or an active and a passive transformation, to the same downstream transformation or transformation input group, because the Integration Service may not be able to concatenate the rows passed by active transformations. However, the Sequence Generator transformation (SGT) is an exception to this rule: an SGT does not receive data, it generates unique numeric values. As a result, the Integration Service does not encounter problems concatenating rows passed by an SGT and an active transformation.
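To make "can change the number of rows" concrete, here is a minimal Python sketch (invented names, not Informatica code): the filter-like step below may emit fewer rows than it receives, which is exactly what makes a transformation active.

    rows = [{"state": "NY", "amount": 10},
            {"state": "CA", "amount": -2},
            {"state": "TX", "amount": 7}]

    def filter_like(rows):
        # Active behaviour: the output row count may differ from the input row count.
        return [r for r in rows if r["amount"] > 0]

    print(len(rows), len(filter_like(rows)))   # 3 2  -> the row count changed

A passive step, by contrast, returns exactly one output row per input row, as described next.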
Passive Transformation
A passive transformation does not change the number of rows that pass through it, maintains the transaction boundary, and maintains the row type.
The key point to note is that the Designer allows you to connect multiple transformations to the same downstream transformation or transformation input group only if all transformations in the upstream branches are passive. The transformation that originates the branch can be active or passive.

Transformations can be Connected or Unconnected to the data flow.

Connected Transformation
A connected transformation is connected to other transformations or directly to the target table in the mapping.

Unconnected Transformation
An unconnected transformation is not connected to other transformations in the mapping. It is called within another transformation and returns a value to that transformation.

Following is the list of transformations available in Informatica: Aggregator, Application Source Qualifier, Custom, Data Masking, Expression, External Procedure, Filter, HTTP, Input, Java, Joiner, Lookup, Normalizer, Output, Rank, Reusable, Router, Sequence Generator, Sorter, Source Qualifier, SQL, Stored Procedure, Transaction Control, Union, Unstructured Data, Update Strategy, XML Generator, XML Parser, XML Source Qualifier, Advanced External Procedure and External transformations. In the following pages, we explain each of these transformations and its significance in the ETL process in detail.

Aggregator Transformation
The Aggregator transformation performs aggregate functions such as average, sum and count on multiple rows or groups. The Integration Service performs these calculations as it reads, storing group and row data in an aggregate cache. It is an Active & Connected transformation.
Difference between Aggregator and Expression transformations: an Expression transformation lets you perform calculations on a row-by-row basis only, whereas an Aggregator lets you perform calculations on groups. For example, an Aggregator transformation might have ports such as State, State_Count, Previous_State and State_Counter.
Components: Aggregate Cache, Aggregate Expression, Group by port, Sorted input.
Aggregate expressions are allowed only in Aggregator transformations. They can include conditional clauses and non-aggregate functions, and one aggregate function can be nested inside another aggregate function.
Aggregate functions: AVG, COUNT, FIRST, LAST, MAX, MEDIAN, MIN, PERCENTILE, STDDEV, SUM, VARIANCE.
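As a rough analogy for the Aggregator/Expression difference (plain Python with invented field names, not Informatica expression syntax): an expression-style calculation works row by row, while an aggregator-style calculation works on groups, much like a GROUP BY.

    from collections import defaultdict

    rows = [{"state": "NY", "amount": 10},
            {"state": "NY", "amount": 5},
            {"state": "CA", "amount": 7}]

    # Expression-style: a value computed independently for every row.
    with_tax = [{**r, "amount_with_tax": round(r["amount"] * 1.08, 2)} for r in rows]

    # Aggregator-style: values computed per group (here, SUM and COUNT by state).
    totals = defaultdict(lambda: {"sum": 0, "count": 0})
    for r in rows:
        totals[r["state"]]["sum"] += r["amount"]
        totals[r["state"]]["count"] += 1

    print(with_tax[0])    # row-level result
    print(dict(totals))   # {'NY': {'sum': 15, 'count': 2}, 'CA': {'sum': 7, 'count': 1}}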
Application Source Qualifier Transformation
Represents the rows that the Integration Service reads from an application, such as an ERP source, when it runs a session. It is an Active & Connected transformation.

Custom Transformation
It works with procedures you create outside the Designer interface to extend PowerCenter functionality, and calls a procedure from a shared library or DLL. It is of active/passive & connected type. You can use a Custom transformation to create transformations that require multiple input groups and multiple output groups.
A Custom transformation allows you to develop the transformation logic in a procedure. Some of the PowerCenter transformations are built using the Custom transformation, and rules that apply to Custom transformations, such as blocking rules, also apply to transformations built using them. PowerCenter provides two sets of functions, called generated and API functions. The Integration Service uses generated functions to interface with the procedure. When you create a Custom transformation and generate the source code files, the Designer includes the generated functions in the files; you use the API functions in the procedure code to develop the transformation logic.
Difference between Custom and External Procedure transformations: in a Custom transformation, input and output functions occur separately. The Integration Service passes the input data to the procedure using an input function, and the output function is a separate function that you must call in the procedure code to pass output data to the Integration Service. In contrast, in the External Procedure transformation a single external procedure function does both input and output, and its parameters consist of all the ports of the transformation.

Data Masking Transformation
Passive & Connected. It is used to change sensitive production data into realistic test data for non-production environments, creating masked data for development, testing, training and data mining. Data relationships and referential integrity are maintained in the masked data. For example, it returns a masked value that has a realistic format for an SSN, credit card number, birth date, phone number, etc., but is not a valid value. Masking types: Key Masking, Random Masking, Expression Masking, Special Mask Format. The default is no masking. (The key-masking idea is sketched after the Filter transformation below.)

Expression Transformation
Passive & Connected. Expression transformations are used to perform non-aggregate functions, i.e. to calculate values on a single row: for example, to calculate the discount for each product, to concatenate first and last names, or to convert a date to a string field. You can create an Expression transformation in the Transformation Developer or the Mapping Designer. Components: Transformation, Ports, Properties, Metadata Extensions.

External Procedure Transformation
Passive & Connected or Unconnected. It works with procedures you create outside of the Designer interface to extend PowerCenter functionality. You can create complex functions within a DLL or in the COM layer of Windows and bind them to an External Procedure transformation. To get this kind of extensibility, use the Transformation Exchange (TX) dynamic invocation interface built into PowerCenter. You must be an experienced programmer to use TX and to use multi-threaded code in external procedures.

Filter Transformation
Active & Connected. It allows through the rows that meet the specified filter condition and removes the rows that do not: for example, to find all the employees who work in New York, or to find all the faculty members teaching Chemistry in a state. The input ports for the filter must come from a single transformation; you cannot concatenate ports from more than one transformation into the Filter transformation. Components: Transformation, Ports, Properties, Metadata Extensions.
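The idea behind key masking, where the same input always produces the same realistic-looking substitute so that relationships stay intact, can be sketched as follows. This is only an illustration of the concept with an invented hashing scheme; it is not Informatica's masking algorithm and should not be relied on for real data protection.

    import hashlib

    def mask_ssn(ssn: str, seed: str = "demo-seed") -> str:
        """Deterministically replace an SSN with a same-format substitute value."""
        digest = hashlib.sha256((seed + ssn).encode()).hexdigest()
        digits = "".join(str(int(c, 16) % 10) for c in digest[:9])
        return f"{digits[0:3]}-{digits[3:5]}-{digits[5:9]}"

    # Same input -> same masked output, so joins on the masked column still work.
    print(mask_ssn("123-45-6789"))
    print(mask_ssn("123-45-6789"))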
HTTP Transformation
Passive & Connected. It allows you to connect to an HTTP server to use its services and applications. With an HTTP transformation, the Integration Service connects to the HTTP server and issues a request to retrieve data from, or post data to, the target or a downstream transformation in the mapping. Authentication types: Basic, Digest and NTLM. Examples: GET, POST and SIMPLE POST.

Java Transformation
Active or Passive & Connected. It provides a simple native programming interface to define transformation functionality with the Java programming language. You can use the Java transformation to quickly define simple or moderately complex transformation functionality without advanced knowledge of the Java programming language or an external Java development environment.

Joiner Transformation
Active & Connected. It is used to join data from two related heterogeneous sources residing in different locations, or to join data from the same source. To join two sources, there must be at least one pair of matching columns between the sources, and you must specify one source as the master and the other as the detail: for example, to join a flat file and a relational source, to join two flat files, or to join a relational source and an XML source. The Joiner transformation supports the following types of joins:
Normal: a normal join discards all the rows of data from the master and detail sources that do not match, based on the join condition.
Master Outer: a master outer join discards all the unmatched rows from the master source and keeps all the rows from the detail source plus the matching rows from the master source.
Detail Outer: a detail outer join keeps all rows of data from the master source and the matching rows from the detail source. It discards the unmatched rows from the detail source.
Full Outer: a full outer join keeps all rows of data from both the master and detail sources.
Limitations on the pipelines you connect to the Joiner transformation: you cannot use a Joiner transformation when either input pipeline contains an Update Strategy transformation, and you cannot use a Joiner transformation if you connect a Sequence Generator transformation directly before it.

Lookup Transformation
Passive & Connected or Unconnected. It is used to look up data in a flat file, relational table, view or synonym. It compares Lookup transformation ports (input ports) to the source column values based on the lookup condition; the returned values can then be passed to other transformations. You can create a lookup definition from a source qualifier, and you can use multiple Lookup transformations in a mapping.
You can perform the following tasks with a Lookup transformation:
- Get a related value: retrieve a value from the lookup table based on a value in the source. For example, the source has an employee ID; retrieve the employee name from the lookup table.
- Perform a calculation: retrieve a value from a lookup table and use it in a calculation. For example, retrieve a sales tax percentage, calculate a tax and return the tax to a target.
- Update slowly changing dimension tables: determine whether rows already exist in a target.
Lookup components: Lookup source, Ports, Properties, Condition. Types of lookup: 1) relational or flat file lookup, 2) pipeline lookup, 3) cached or uncached lookup, 4) connected or unconnected lookup.
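A lookup can be pictured as a keyed search against reference data. The sketch below (plain Python, invented names, not the Informatica transformation itself) shows the "get a related value" use case: a connected-style lookup enriches every row flowing through, while an unconnected-style lookup is simply called like a function from inside another calculation.

    # Reference data that a Lookup transformation would cache from a table or file.
    employee_lookup = {101: "Alice", 102: "Bob"}

    rows = [{"emp_id": 101, "sale": 250}, {"emp_id": 103, "sale": 90}]

    # Connected-style: every row in the flow is enriched with the looked-up value.
    enriched = [{**r, "emp_name": employee_lookup.get(r["emp_id"], "UNKNOWN")} for r in rows]

    # Unconnected-style: invoked on demand from another expression, returning one value.
    def lkp_employee_name(emp_id):
        return employee_lookup.get(emp_id, "UNKNOWN")

    note = f"Sold by {lkp_employee_name(101)}"

    print(enriched)
    print(note)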
Normalizer Transformation
Active & Connected. The Normalizer transformation processes multiple-occurring columns or multiple-occurring groups of columns in each source row and returns a row for each instance of the multiple-occurring data. It is used mainly with COBOL sources, where data is most often stored in denormalized format. You can create the following Normalizer transformations:
- VSAM Normalizer transformation: a non-reusable transformation that acts as the Source Qualifier transformation for a COBOL source. VSAM stands for Virtual Storage Access Method, a file access method for IBM mainframes.
- Pipeline Normalizer transformation: a transformation that processes multiple-occurring data from relational tables or flat files. This is the default when you create a Normalizer transformation.
Components: Transformation, Ports, Properties, Normalizer, Metadata Extensions.

Rank Transformation
Active & Connected. It is used to select the top or bottom rank of data. You can use it to return the largest or smallest numeric value in a port or group, or to return the strings at the top or bottom of a session sort order: for example, to select the top 10 regions where sales volume was highest, or to select the 10 lowest-priced products. As an active transformation, it might change the number of rows passed through it: if you pass 100 rows to the Rank transformation but choose to rank only the top 10, only those 10 rows pass from the Rank transformation to the next transformation. You can connect ports from only one transformation to the Rank transformation. You can also create local variables and write non-aggregate expressions.

Router Transformation
Active & Connected. It is similar to the Filter transformation in that both allow you to apply a condition to test data. The difference is that the Filter transformation drops the data that does not meet the condition, whereas the Router has an option to capture the data that does not meet the condition and route it to a default output group. If you need to test the same input data against multiple conditions, use a Router transformation in a mapping instead of creating multiple Filter transformations to perform the same task; the Router transformation is more efficient.

Sequence Generator Transformation
Passive & Connected. It is used to create unique primary key values, to cycle through a sequential range of numbers, or to replace missing primary keys. It has two output ports, NEXTVAL and CURRVAL; you cannot edit or delete these ports, and you cannot add ports to the transformation. The NEXTVAL port generates a sequence of numbers when connected to a transformation or target; CURRVAL is the NEXTVAL value plus the Increment By value (which defaults to 1). You can make a Sequence Generator reusable and use it in multiple mappings; you might reuse one when you perform multiple loads to a single target.
For non-reusable Sequence Generator transformations, Number of Cached Values is set to zero by default, and the Integration Service does not cache values during the session. Setting Number of Cached Values greater than zero for a non-reusable Sequence Generator can increase the number of times the Integration Service accesses the repository during the session; it also causes sections of skipped values, since unused cached values are discarded at the end of each session.
For reusable Sequence Generator transformations, you can reduce Number of Cached Values to minimize discarded values; however, it must be greater than one. When you reduce Number of Cached Values, you might increase the number of times the Integration Service accesses the repository to cache values during the session.
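The NEXTVAL/CURRVAL behaviour can be imitated with a tiny generator of surrogate key values. The class below is a conceptual stand-in with invented names, not the PowerCenter transformation: nextval() hands out the next number in the sequence, and currval() mirrors the "NEXTVAL plus the Increment By value" description above.

    class SequenceGeneratorSketch:
        """Conceptual stand-in for NEXTVAL/CURRVAL (not the real transformation)."""
        def __init__(self, start=1, increment=1):
            self._next = start
            self._increment = increment

        def nextval(self):
            value = self._next
            self._next += self._increment
            return value

        def currval(self):
            # Mirrors the description above: last NEXTVAL plus the Increment By value.
            return self._next

    seq = SequenceGeneratorSketch(start=100, increment=1)
    keys = [seq.nextval() for _ in range(3)]   # e.g. surrogate keys for three new rows
    print(keys, seq.currval())                 # [100, 101, 102] 103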
Sorter Transformation
Active & Connected. It is used to sort data in either ascending or descending order according to a specified sort key. You can also configure the Sorter transformation for case-sensitive sorting and specify whether the output rows should be distinct. When you create a Sorter transformation in a mapping, you specify one or more ports as the sort key and configure each sort key port to sort in ascending or descending order.

Source Qualifier Transformation
Active & Connected. When you add a relational or flat file source definition to a mapping, you need to connect it to a Source Qualifier transformation. The Source Qualifier is used to join data originating from the same source database, to filter rows when the Integration Service reads source data, to specify an outer join rather than the default inner join, and to specify sorted ports. It is also used to select only distinct values from the source and to create a custom query that issues a special SELECT statement for the Integration Service to read source data.

SQL Transformation
Active/Passive & Connected. The SQL transformation processes SQL queries midstream in a pipeline. You can insert, delete, update and retrieve rows from a database, and you can pass the database connection information to the SQL transformation as input data at run time. The transformation processes external SQL scripts or SQL queries that you create in an SQL editor, runs the query, and returns rows and database errors.

Stored Procedure Transformation
Passive & Connected or Unconnected. It is useful for automating time-consuming tasks, and it is also used for error handling, to drop and recreate indexes, to determine free space in a database, for specialized calculations, etc. The stored procedure must exist in the database before you create a Stored Procedure transformation, and it can exist in a source, target or any database with a valid connection to the Informatica Server. A stored procedure is an executable script with SQL statements, control statements, user-defined variables and conditional statements.

Transaction Control Transformation
Active & Connected. You can control the commit and rollback of transactions based on the set of rows that pass through a Transaction Control transformation. Transaction control can be defined within a mapping or within a session. Components: Transformation, Ports, Properties, Metadata Extensions.

Union Transformation
Active & Connected. The Union transformation is a multiple-input-group transformation that you use to merge data from multiple pipelines or pipeline branches into one pipeline branch. It merges data from multiple sources similarly to the UNION ALL SQL statement, combining the results of two or more SELECT statements, and like UNION ALL it does not remove duplicate rows. Rules:
1) You can create multiple input groups, but only one output group.
2) All input groups and the output group must have matching ports; the precision, datatype and scale must be identical across all groups.
3) The Union transformation does not remove duplicate rows. To remove duplicate rows, you must add another transformation, such as a Router or Filter transformation.
4) You cannot use a Sequence Generator or Update Strategy transformation upstream from a Union transformation.
5) The Union transformation does not generate transactions.
Components: Transformation tab, Properties tab, Groups tab, Group Ports tab.
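The UNION ALL behaviour described above, matching ports with duplicates kept, can be sketched as a simple concatenation of row sets whose columns line up. This is plain Python with invented data; it only mirrors the "merge without removing duplicates" rule and the need for a separate de-duplication step downstream.

    # Two pipeline branches with matching "ports" (same keys, same types).
    branch_a = [{"id": 1, "city": "Pune"}, {"id": 2, "city": "Delhi"}]
    branch_b = [{"id": 2, "city": "Delhi"}, {"id": 3, "city": "Mumbai"}]

    # Union-style merge: like UNION ALL, duplicates are NOT removed.
    merged = branch_a + branch_b
    print(len(merged))        # 4 rows; the duplicate {"id": 2, ...} row is kept

    # Removing duplicates needs a further step (the text above notes that another
    # transformation must be added downstream of the Union transformation).
    deduplicated = [dict(t) for t in {tuple(sorted(r.items())) for r in merged}]
    print(len(deduplicated))  # 3 rows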
Unstructured Data Transformation
Active/Passive & Connected. The Unstructured Data transformation processes unstructured and semi-structured file formats, such as messaging formats, HTML pages and PDF documents. It also transforms structured formats such as ACORD, HIPAA, HL7, EDI-X12, EDIFACT, AFP and SWIFT. Components: Transformation, Properties, UDT Settings, UDT Ports, Relational Hierarchy.

Update Strategy Transformation
Active & Connected. It is used to update data in a target table, either to maintain a history of the data or only the most recent changes. It flags rows for insert, update, delete or reject within a mapping.

XML Generator Transformation
Active & Connected. It lets you create XML inside a pipeline. The XML Generator transformation accepts data from multiple ports and writes XML through a single output port.

XML Parser Transformation
Active & Connected. The XML Parser transformation lets you extract XML data from messaging systems, such as TIBCO or MQ Series, and from other sources, such as files or databases. Its functionality is similar to that of the XML source, except that it parses the XML in the pipeline.

XML Source Qualifier Transformation
Active & Connected. The XML Source Qualifier is used only with an XML source definition. It represents the data elements that the Informatica Server reads when it executes a session with XML sources, and it has one input or output port for every column in the XML source.

External Procedure Transformation
Passive & Connected or Unconnected. Sometimes the standard transformations, such as the Expression transformation, may not provide the functionality you want. In such cases an external procedure is useful for developing complex functions within a dynamic link library (DLL) or UNIX shared library, instead of creating the necessary Expression transformations in a mapping.

Advanced External Procedure Transformation
Active & Connected. It operates in conjunction with procedures created outside of the Designer interface to extend PowerCenter/PowerMart functionality. It is useful for creating external transformation applications, such as sorting and aggregation, which require all input rows to be processed before emitting any output rows.

Quick Reference Guide to Dimensional Modeling

Dimensional modeling is the design concept used by many data warehouse designers to build their data warehouses, and the dimensional model is the underlying data model used by many of the commercial OLAP products available in the market today. Designing a data warehouse is very different from designing an online transaction processing (OLTP) system. In contrast to an OLTP system, whose purpose is to capture high rates of data changes and additions, the purpose of a data warehouse is to organize large amounts of stable data for ease of analysis and retrieval. Because of these differing purposes, there are many considerations in data warehouse design that differ from OLTP database design.
In a dimensional model, all data is contained in two types of tables, called fact tables and dimension tables.

Fact Table
Each data warehouse or data mart includes one or more fact tables. The fact table captures the data that measures the organization's business operations. A fact table might contain business sales events such as cash register transactions, or the contributions and expenditures of a nonprofit organization. Fact tables usually contain large numbers of rows, sometimes hundreds of millions of records when they hold one or more years of history for a large organization. A key characteristic of a fact table is that it contains numerical data (facts) that can be summarized to provide information about the history of the operation of the organization. Each fact table also includes a multipart index that contains, as foreign keys, the primary keys of related dimension tables, which hold the attributes of the fact records. Fact tables should not contain descriptive information or any data other than the numerical measurement fields and the index fields that relate the facts to corresponding entries in the dimension tables. An example of a fact table is a Sales_Fact table that might contain information such as sale_amount, unit_price and discount.

Dimension Table
Dimension tables contain attributes that describe fact records in the fact table. Some of these attributes provide descriptive information; others are used to specify how fact table data should be summarized to provide useful information to the analyst. Dimension tables contain hierarchies of attributes that aid in summarization. For example, a dimension containing product information would often contain a hierarchy that separates products into categories such as food, drink and non-consumable items, with each of these categories further subdivided a number of times until the individual product is reached at the lowest level.
Dimensional modeling produces dimension tables in which each table contains attributes that are independent of those in other dimensions. For example, a customer dimension table contains data about customers, a product dimension table contains information about products, and a store dimension table contains information about stores. Queries use attributes in dimensions to specify a view into the fact information. For example, a query might use the product, store and time dimensions to ask the question "What was the cost of non-consumable goods sold in the northeast region in 1999?" Subsequent queries might drill down along one or more dimensions to examine more detailed data, such as "What was the cost of kitchen products in New York City in the third quarter of 1999?" In these examples, the dimension tables are used to specify how a measure (sale_amount) in the fact table is to be summarized.

Consider an example of a Sales_Fact table whose describing attributes are Store, Product, Time and, say, Sales Person. In this case we will have four dimension tables: Store_Dimension, Product_Dimension, Time_Dimension and Sales_Person_Dimension.

Figure 1

You may notice that all of these dimensions contain a Key field. This is called a surrogate key. This key is a substitute for a natural key in the dimension (e.g., in Sales_Person_Dimension the natural key is ID). In a data warehouse, a surrogate key is a generalization of the natural production key and is one of the basic elements of the data warehouse.
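Using the table and column names from the example above (Sales_Fact, Sales_Person_Dimension, sale_amount), here is a small Python sketch of how a fact row carries only surrogate keys and measures, and how a question such as "total sales by region" is answered by joining the fact rows back to a dimension. The concrete rows, including the second sales person, are invented for illustration.

    # Dimension rows: surrogate key plus descriptive attributes.
    sales_person_dimension = {
        100: {"id": 203234, "name": "Rob Doe", "region": "ASIA"},
        101: {"id": 204111, "name": "Ann Lee", "region": "NorthWest"},
    }

    # Fact rows: surrogate keys of the dimensions plus numeric measures only.
    sales_fact = [
        {"sales_person_key": 100, "sale_amount": 500.0, "discount": 25.0},
        {"sales_person_key": 101, "sale_amount": 300.0, "discount": 0.0},
        {"sales_person_key": 100, "sale_amount": 200.0, "discount": 10.0},
    ]

    # "Total sale_amount by region": join each fact row to its dimension row, then sum.
    totals_by_region = {}
    for fact in sales_fact:
        region = sales_person_dimension[fact["sales_person_key"]]["region"]
        totals_by_region[region] = totals_by_region.get(region, 0.0) + fact["sale_amount"]

    print(totals_by_region)   # {'ASIA': 700.0, 'NorthWest': 300.0}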
As the fact table is described by the four dimension tables above, it will contain the surrogate keys of all these dimensions. This is how the Sales_Fact table will look:

Figure 2

If you now look carefully at the structure of the above tables and how they are linked, the schema will look like this:

Figure 3

You can easily tell that this looks like a star; hence it is known as a star schema.

Advantages of the Star Schema
- The star schema is very easy to understand, even for non-technical business managers.
- The star schema provides better performance and smaller query times.
- The star schema is easily extensible and will handle future changes easily.

Slowly Changing Dimensions
Handling changes to dimensional data across time is the most important aspect of designing a data warehouse. In dimensional modeling, there is very little chance that a dimension will remain static over time. For example, a customer address may change, or a company may phase out old products and introduce new ones. What if a customer's name changes, a sales person changes his region of sale, or a company assigns new sales territories? How do you record the history or preserve the old version of it? This is where the concept of slowly changing dimensions comes in. The term "slowly changing dimension" refers to variation in dimensional attributes over time. The word "slowly" might seem incorrect in this context, since a sales person may change territory rapidly; but in general, compared to the measures in a fact table, changes in dimensions occur slowly.

Types of Slowly Changing Dimensions
In reference to Figure 3 above, let's say a sales person changes his region of sale. We may handle this change in several ways, and the methods fall into categories based on the company's need to preserve an accurate history of dimensional changes. Ralph Kimball categorized dimensional changes into three types:
- Type One: changes that overwrite history
- Type Two: preserve history
- Type Three: preserve a version of history

Type One (Overwrite History)
A Type One change overwrites an existing dimensional attribute with new information. In the sales person region change example, the old region name is overwritten by the new region. Say a sales person, Rob Doe, has the territory ASIA:

Sales_Person_Dimension
Sales_Person_Key | ID     | Name    | Region | ...
100              | 203234 | Rob Doe | ASIA   | ...

Now, if he starts looking after the NorthWest region, under a Type One change the dimension table will look like:

Sales_Person_Dimension
Sales_Person_Key | ID     | Name    | Region    | ...
100              | 203234 | Rob Doe | NorthWest | ...

Advantages: this is the easiest way to handle the slowly changing dimension problem, since there is no need to keep track of the old information.
Disadvantages: all history is lost. By applying this method, it is not possible to trace back in history; in this case, for example, the company would not be able to know that Rob was previously assigned to the ASIA region.

Type Two (Preserve History)
A Type Two change writes a record with the new attribute information and preserves a record of the old dimensional data. Type Two changes let you preserve historical data, but implementing them within a data warehouse might require significant analysis and development.
Type Two changes partition history across time more accurately than the other types. However, because Type Two changes add records, they can significantly increase the database's size.
In our example, let's say we identify Region as a Type Two attribute. The change is then handled this way:

Sales_Person_Dimension
Sales_Person_Key | ID     | Name    | Region    | ...
100              | 203234 | Rob Doe | ASIA      | ...
153              | 203234 | Rob Doe | NorthWest | ...

Advantages: this allows us to keep all historical information accurately.
Disadvantages: it causes the size of the table to grow quickly; in cases where the number of rows is very high to start with, storage and performance can become a concern. It also necessarily complicates the ETL process.

Type Three (Preserve a Version of History)
You usually implement Type Three changes only if you have a limited need to preserve and accurately describe history, such as when someone gets married and you need to retain the previous name. Instead of creating a new dimensional record to hold the attribute change, a Type Three change places a value for the change in the original dimensional record. You can create multiple fields to hold distinct values for separate points in time. In the region change example, you could create OLD_REGION and NEW_REGION fields and a REGION_CHANGE_EFF_DATE field to record when the change occurs. This method preserves the change, but how would you handle a second region change, or a third, and so on? The side effects of this method are increased table size and, more importantly, increased complexity of the queries that analyze historical values from these old fields. After more than a couple of iterations, queries become impossibly complex, and ultimately you are constrained by the maximum number of attributes allowed on a table.
This is how the table will look after a Type Three change:

Sales_Person_Dimension
Sales_Person_Key | ID     | Name    | Old Region | New Region | ...
100              | 203234 | Rob Doe | ASIA       | NorthWest  | ...

Advantages: this does not increase the size of the table, since existing records are updated in place, and it allows us to keep some part of the history.
Disadvantages: Type Three cannot keep the full history where an attribute changes more than once. For example, if Rob later moves to a third region, the ASIA value will be lost.

Because most business requirements include tracking changes over time, data warehouse architects commonly implement Type Two changes. A data warehouse might use Type Two changes for all attributes in all tables. As an alternative, you can implement a mix of Type One and Type Two changes at the attribute level, applying Type Two changes only to attributes whose historical values are important when you are slicing and dicing. For example, users might not need to know an individual's previous name if a name change occurs, so a Type One change would suffice; users might want the system to show only the person's current name. However, if the company reassigns sales territories, users might need to track who sold what, at what time, and in what territory, necessitating a Type Two change.
Although most data warehouses include Type Two changes, you need to seriously examine the business need to record historical data. Implementing Type Two changes might be necessary, but those changes will increase the database size, degrade performance, and lengthen the development time.
You need to carefully evaluate using a Type Two implementation, a Type One implementation, or a hybrid implementation.
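To summarise the trade-off, here is a compact Python sketch of how Type One and Type Two updates differ for the Rob Doe region change used throughout this section. The key-assignment scheme and the "current" flag are simplifications chosen for illustration; in PowerCenter such logic would normally be built with the Lookup and Update Strategy transformations described earlier.

    import copy

    dimension = [
        {"sales_person_key": 100, "id": 203234, "name": "Rob Doe",
         "region": "ASIA", "current": True},
    ]

    def scd_type1(rows, natural_id, new_region):
        """Type One: overwrite in place; history is lost."""
        for r in rows:
            if r["id"] == natural_id:
                r["region"] = new_region
        return rows

    def scd_type2(rows, natural_id, new_region, next_key):
        """Type Two: expire the current row and insert a new version; history is kept."""
        new_rows, current = [], None
        for r in rows:
            if r["id"] == natural_id and r["current"]:
                current = r
                r = {**r, "current": False}          # expire the old version
            new_rows.append(r)
        if current is not None:
            new_rows.append({**current, "sales_person_key": next_key,
                             "region": new_region, "current": True})
        return new_rows

    print(scd_type1(copy.deepcopy(dimension), 203234, "NorthWest"))                 # 1 row, ASIA gone
    print(scd_type2(copy.deepcopy(dimension), 203234, "NorthWest", next_key=153))   # 2 rows, ASIA kept

A hybrid design applies the Type Two pattern only to the attributes whose history matters and the Type One pattern everywhere else, which is the attribute-level mix discussed above.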