This document compares Oracle Exadata and Netezza TwinFin appliances for data warehousing. It begins by noting that Netezza focuses exclusively on data warehousing needs while Oracle adapted its OLTP database for Exadata. The document then discusses key differences between online transaction processing (OLTP) and data warehousing workloads. Specifically, it notes that OLTP involves many short reads and writes of small records, while data warehousing focuses on large analytic queries across huge datasets. It also distinguishes between first-generation overnight data warehouses and newer second-generation warehouses that support constant updates and advanced analytics.
Less than one year since announcing the release of Netezza's TwinFin, over a hundred customers have adopted the appliance, including:
· Blue Cross Blue Shield of Massachusetts
· BlueKai
· Catalina Marketing
· Con-way Freight
· DataLogix
· Epsilon, an Alliance Data Company
· interCLICK
· IntercontinentalExchange
· Japan Medical Data Center
· Kelley Blue Book
· Marshfield Clinic and Marshfield Clinic Research Foundation
· MediaMath
· MetroPCS
· MicroAd
· MyLife.com
· NYSE Euronext
· Pacific Northwest National Laboratory
· Premier, Inc.
· Safeway
· The Nielsen Company
Table of Contents
1 Introduction
2 Online Transaction Processing (OLTP) and Data Warehousing
3 Query Performance
4 Simplicity of Operation
5 Value
6 Conclusion
1 Introduction
Netezza focuses on technology designed to query and analyze big data. The company's innovative data warehouse appliances are disrupting the market. Wishing to exploit data at lower costs of operation and ownership, many of our customers have moved their data warehouses from Oracle. Oracle has now brought Exadata to market: a machine which apparently does everything TwinFin does and also processes online transactions. This examination of Exadata and TwinFin as data warehouse platforms is written from an unashamedly Netezza viewpoint; however, to ensure credibility we have taken advice from Philip Howard, Research Director of Bloor Research, and Curt Monash, President of Monash Research.
Innovation requires us to think and do things differently, solving problems using new approaches. Netezza focuses exclusively on customers' needs and wants for data warehousing. TwinFin delivers excellent performance for our customers' warehouse queries. TwinFin also offers customers simplicity: anyone with a basic knowledge of SQL and Linux has the skills needed to perform the few administrative tasks required to maintain consistent service levels through dynamically changing workloads. This combination of performance and simplicity reduces our customers' costs of owning and running their data warehouses. More importantly, our customers create new business value by deploying analytic applications which they previously considered beyond their reach.
“Netezza was part of the inspiration for Exadata. Teradata was part of the inspiration for Exadata,” acknowledged Larry Ellison on January 27, 2010. “We’d like to thank them for forcing our hand and forcing us to go into the hardware business.”1 While delivered with Larry Ellison’s customary pizzazz, there is a serious point to his comment: only the best catch Oracle’s attention. Exadata represents a strategic direction for Oracle: adapting their OLTP database management system and partnering it with a massively parallel storage system from Sun. Oracle launched Exadata V2 with the promise of extreme performance for processing both online transactions and analytic queries. That Oracle excels at OLTP is a given. But data warehousing and analytics make very different demands of software and hardware than OLTP does. Exadata’s data warehousing credentials demand scrutiny, particularly with respect to simplicity and value.
This white paper opens by reviewing differences between processing online transactions and processing queries and analyses in a data warehouse. It then discusses Exadata and TwinFin from the perspectives of query performance, simplicity of operation and value.
All we ask of readers is that they do as our customers and partners have done: put aside notions of how a database management system should work, be open to new ways of thinking and be prepared to do less, not more, to achieve a better result.
One caveat: Netezza has no direct access to an Exadata machine. We are fortunate in the detailed feedback we receive from many organizations that have evaluated both technologies and selected TwinFin. Given Oracle’s size and their focus on Exadata, publicly available information on Exadata is surprisingly scarce. The use cases quoted by Oracle provide little input to the discussion, which in itself is of concern to several industry followers, e.g. Information Week.2 The information shared in this paper is made available in the spirit of openness. Any inaccuracies result from our mistakes, not an intent to mislead.
1 See http://oracle.com.edgesuite.net/ivt/4000/8104/9238/12652/lobby_external_flash_clean_480x360/default.htm
2 See http://www.informationweek.com/news/business_intelligence/warehouses/showArticle.jhtml?articleID=225702836&cid=RSSfeed_IWK_News
2 Online Transaction Processing (OLTP) and Data Warehousing
OLTP systems execute many short transactions. Each transaction’s scope is small, limited to one or a small number of records, and is so predictable that data is often cached. Although OLTP systems process large volumes of database queries, their focus is writing (UPDATE, INSERT and DELETE) to a current data set. These systems are typically specific to a business process or function, for example managing the current balance of a checking account. Their data is commonly structured in third normal form (3NF). Transaction types of OLTP systems are stable and their data requirements are well understood, so secondary data structures such as indices can usefully locate records on disk prior to their transfer to memory for processing.
In comparison, data warehouse systems are characterized by predominantly heavy database read (SELECT) operations against a current and historical data set. Whereas an OLTP operation accesses a small number of records, a data warehouse query might scan a table of a billion rows and join its records with those from multiple other tables. Furthermore, queries in a data warehouse are often so unpredictable in nature that it is difficult to exploit caching and indexing strategies. Choices for structuring data in the warehouse range from 3NF to dimensional models such as star and snowflake schemas. Data within each system feeding a typical warehouse is structured to reflect the needs of a specific business process. Before data is loaded to the warehouse it is cleansed, de-duplicated and integrated.
This paper divides data warehouses into first and second generations. While this classification may not stand the deepest scrutiny, it reflects how many of our customers talk about their evolutionary path to generating greater and greater value from their data.
First-generation data warehouses are typically loaded overnight. They provide information to their business via a stable body of slowly evolving SQL-based reports and dashboards. As these simple warehouses somewhat resemble OLTP systems – their workload and data requirements are understood and stable – organizations often adopt the same database management products they use for OLTP. With the product comes the practice: database administrators analyze each report’s data requirements and build indices to accelerate data retrieval. Creep of OLTP’s technology and techniques appears a success, until data volumes in the warehouse outstrip those commonly managed in transactional systems.
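In practice that tuning work looks something like the following sketch; the Oracle-style bitmap indexes are shown purely as an illustration of the per-report index habit, and every one of them must also be maintained during each load:

    -- Hypothetical report-driven tuning: one index per known access path.
    CREATE BITMAP INDEX sales_fact_region_bx  ON sales_fact (region_key);
    CREATE BITMAP INDEX sales_fact_product_bx ON sales_fact (product_key);
    CREATE INDEX        sales_fact_date_ix    ON sales_fact (date_key);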
In this century, corporations and public sector agencies accept growth rates for data of 30-50% per year as normal. Technologies and practices successful in the world of OLTP prove less and less applicable to data warehousing; the index as an aid to data retrieval is a case in point. As the database system processes jobs to load data, it is also busy updating its multiple indices. With large data volumes this becomes a very slow process, causing load jobs to overrun their allotted processing window. Despite working long hours, the technical team misses service levels negotiated with the business. Productivity suffers as business units wait for reports and data to become available.
Organizations are redefining how they need and want to exploit their data; this paper refers to this development as the second-generation data warehouse. These new warehouses, managing massive data sets with ease, serve as the corporate memory. When interrogated, they recall events recorded years previously; these distant memories increase the accuracy of predictive analytic applications. Constant trickle feeds are replacing overnight batch loads, reducing latency between the recording of an event and its analysis. Beyond the simple SQL used to populate reports and dashboards, the warehouse processes linear regressions, Naïve Bayes classifiers and other mathematical algorithms of advanced analytics. Noticing a sudden spike in sales of a high-margin product at just five stores drives a retailer to understand what happened and why; this knowledge informs strategies to promote similar sales activity at all 150 store locations. The computing system underpinning the warehouse must be capable of managing these sudden surges in demand without disrupting regular reports and dashboards. Business users demand the freedom to exploit their data at the time and in the manner of their choosing. Their appetite for immediacy leaves no place for technologies whose performance depends upon the tuning work of administrators.
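As one illustration of analytics run inside the warehouse, the regression aggregates of the SQL standard can fit a simple linear trend without moving data to an external tool; the table and columns below are hypothetical:

    -- Hypothetical in-warehouse linear regression: trend of daily unit
    -- sales over time, computed with standard SQL regression aggregates.
    SELECT REGR_SLOPE(units_sold, day_number)     AS trend_slope,
           REGR_INTERCEPT(units_sold, day_number) AS trend_intercept,
           REGR_R2(units_sold, day_number)        AS fit_quality
    FROM   daily_product_sales
    WHERE  product_id = 'X';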
3 Query Performance
Query Performance with Oracle Exadata
In acquiring Sun, Oracle has come to the conclusion Netezza reached a decade earlier: data warehouse systems achieve highest efficiency when all parts, software and hardware, are optimized to their goal. Exadata is created from two sub-systems connected by a fast network: a smart storage system communicating via InfiniBand with Oracle Database 11g Release 2 running Real Application Clusters (RAC). A single-rack system includes a storage tier of 14 storage servers, called Exadata cells, arranged in a massively parallel processing (MPP) grid, paired with the Oracle RAC database running as a shared-disk cluster of eight symmetric multi-processing nodes.
Oracle labels Exadata’s storage tier as smart because it processes SQL projection, restriction and join filtering,3 before putting the resulting data set on the network for downstream processing by Oracle RAC. This technique is called smart scan. However, smart scan is not comprehensive; the storage tier does not process all restrictions. Oracle’s online forum4 lists a number of operations, including scans of index-organized tables or clustered tables, as not benefitting from smart scan. Further to these, Christian Antognini, author of the book Troubleshooting Oracle Performance, writes in his blog that smart scan is not used with the TIMESTAMP datatype.5
Oracle recommends implementing fact tables in data warehouses as index-organized tables for efficient execution of star queries.6 Exadata’s storage tier will not process restrictions on index-organized tables, but must instead pass all of the records downstream to the Oracle database. Exadata’s approach of passing full records from storage to the database tier is highly effective for OLTP, as each transaction must retrieve only a small number of rows. However, a statistical analysis requiring a scan of a long (hundreds of millions or billions of rows), wide (hundreds of columns) fact table will generate a tidal wave of data to be inefficiently moved across the network. Exadata would achieve better performance and be more efficient if it processed all SQL predicates (WHERE clauses) in its MPP storage tier.
3 A Technical Overview of the Sun Oracle Exadata Storage Server and Database Machine – An Oracle white paper, October 2009.
4 http://forums.oracle.com/forums/thread.jspa?threadID=1036774&tstart=0 The full list is: scans of index-organized tables or clustered tables; index range scans; access to a compressed index; access to a reverse key index; Secure Enterprise Search
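To make the offload boundary concrete, consider the sketch below, with hypothetical table names. In the first statement the marked clauses are the kind of projection, restriction and join filtering smart scan can handle for an ordinary heap table; the second statement creates the same fact table index-organized, as Oracle recommends for star queries, and scans of it fall outside smart scan per the forum list above:

    -- Heap-organized fact table: these clauses are candidates for
    -- smart scan offload in the Exadata storage tier.
    SELECT f.order_id, f.sale_amount                    -- projection
    FROM   sales_fact f
    JOIN   store_dim  s ON f.store_key = s.store_key    -- join filtering
    WHERE  f.sale_date >= DATE '2010-01-01'             -- restriction
    AND    s.region = 'WEST';

    -- The same table declared index-organized: scans of it do not
    -- benefit from smart scan, so whole records travel to Oracle RAC.
    CREATE TABLE sales_fact_iot (
      order_id    NUMBER,
      sale_date   DATE,
      store_key   NUMBER,
      sale_amount NUMBER,
      CONSTRAINT sales_fact_iot_pk PRIMARY KEY (order_id, sale_date)
    ) ORGANIZATION INDEX;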
SMART SCAN LIMITATIONS
Smart scan is not comprehensive; the storage tier does not process all restrictions. Numerous operations, including scans of index-organized tables or clustered tables, do not benefit from smart scan; all records are passed downstream to the Oracle database. Smart scan is not used with the TIMESTAMP datatype. When transactions (insert, update, delete) operate against the data warehouse concurrently with query activity, smart scans are disabled: dirty buffers turn off smart scan.
MPP UNDERUSED
Exadata’s engineering does not fully exploit MPP architecture. Database management is not completely integrated into the storage tier, meaning too little is asked of the hardware in its MPP grid.
5 See Christian Antognini's blog at http://antognini.ch/2010/05/exadata-storage-server-and-the-query-optimizer-%E2%80%93-part-2/
6 http://www.oracle.com/technology/products/oracle9i/datasheets/iots/iot_ds.html
Exadata storage servers cannot communicate with one another; instead all communication is forced via the
InfiniBand network to Oracle RAC and then back across the network to the storage tier. This architecture is
beneficial to online transaction processing, where each transaction, with a scope of one or a few records,
can be satisfied by moving a small data set from storage to the database. Analytical queries, such as
“find all shopping baskets sold last month in Washington State, Oregon and California containing product
X with product Y and with a total value more than $35,” must retrieve much larger data sets, all of which
must be moved from storage to database. This inefficient movement of big data adversely affects query
performance.
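As an illustration only, the quoted basket analysis might be written along the following lines; the schema is hypothetical. The point is that the join, grouping and HAVING filters can be evaluated only after every candidate row has crossed the network to the database tier.

  -- Hypothetical rendering of the basket query described above.
  SELECT b.basket_id, SUM(l.line_value) AS basket_value
  FROM baskets b
  JOIN basket_lines l ON l.basket_id = b.basket_id
  WHERE b.sale_date >= DATE '2010-08-01'
    AND b.sale_date <  DATE '2010-09-01'
    AND b.state IN ('WA', 'OR', 'CA')
  GROUP BY b.basket_id
  HAVING SUM(l.line_value) > 35
     AND SUM(CASE WHEN l.product = 'X' THEN 1 ELSE 0 END) > 0
     AND SUM(CASE WHEN l.product = 'Y' THEN 1 ELSE 0 END) > 0;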
Exadata’s storage tier demonstrates other shortcomings. Exadata cells cannot process distinct aggregations,
which are common even in simple reports; they are unable to process complex joins or analytical functions
used in analytical applications. Unable to resolve these typical data warehousing queries in its storage
tier, Exadata must push very large data sets across its internal network to Oracle RAC. This architectural
flaw raises questions of Exadata’s suitability for second-generation data warehouses which must run
complex analytical queries.
Oracle positions its use of 40 Gb/sec switched InfiniBand as an advantage over TwinFin; in reality Exadata
needs this expensive network because of the system’s imbalance and inefficiency. Exadata storage
servers do too little work, so more data than necessary is put on the network to be moved downstream
for processing by Oracle RAC, which is asked to do too much work.
At its database tier Exadata runs Oracle 11g V2 with Real Application Clusters as a clustered, shared disk
architecture. Using this architecture for a data warehouse platform raises concern that contention for
the shared resource imposes limits on the amount of data the database can process and the number of
queries it can run concurrently. Time and customer experience will tell if this concern is justified.
Every disk in Exadata’s storage tier is shared by all nodes in the grid running Oracle RAC. This communal
storage creates the risk of a page being read by one node while it is being updated by another. To manage
this, Oracle forces coordination between nodes. Each node checks the disk activity of its peers to prevent
conflict. Oracle technicians refer to this activity as block pinging. Compute cycles consumed as each
node checks disk activity of its peers, or that are lost as one node idly waits for another to complete an
operation, are wasted. In an architecture specifically designed for data warehousing, these cycles would
be employed processing queries, mining data and running analyses.
Marrying its existing database technology with a new “smart” storage tier, Exadata removes the disk
throughput bottleneck Oracle suffers when partnered with conventional storage. Exadata presents
interesting opportunities for CIOs looking to consolidate multiple OLTP systems to a single platform.
For all but simple queries Exadata must move large sets of data from its storage tier to its database tier,
raising questions on its suitability as a platform for a modern data warehouse.
PERFORMANCE BOTTLENECK
Exadata’s storage tier does not process restrictions on indexed tables. All such records are loaded into
the database server for processing.
SHARED DISK ARCHITECTURE
Exadata’s database tier runs Oracle 11g V2 with RAC as a clustered, shared disk architecture – limiting
the amount of data the database processes and the number of queries it runs concurrently.
MANAGEMENT OVERHEAD
Administrators design and define data distribution via partitions, files, tablespaces, and block/extent sizes.
HIGH NETWORK TRAFFIC
Exadata storage servers communicate with each other via the InfiniBand network to Oracle RAC and
back across the network to the storage tier. Only uncompressed data is returned to the database servers,
increasing network traffic significantly.
ANALYTICS LIMITATIONS
Exadata cells do not process distinct aggregations (common in simple reports), complex joins or
analytical functions (used in analytical applications).
[Figure: Exadata architecture – clustered database servers linked by a high bandwidth interconnect to
massively parallel storage.]
Query Performance with Netezza TwinFin
TwinFin is designed from the ground up as a data warehousing platform. Netezza employs an Asymmetric
Massively Parallel Processing (AMPP) architecture. A Symmetrical Multiprocessing host7 fronts a grid of
Massively Parallel Processing nodes. TwinFin exploits this MPP grid to perform the heavy lifting of
warehousing and analyzing data.
A node in TwinFin’s grid is called an S-Blade (Snippet-Blade), an independent server containing multi-core
central processing units (CPUs). Each CPU is teamed with a multi-engine Field Programmable Gate
Array (FPGA) and gigabytes of random access memory. Because the CPUs have their own memory, they
remain focused exclusively on data analysis and are never distracted by tracking the activity of other
nodes, as occurs with block pinging in Oracle RAC.
An FPGA is a semiconductor chip equipped with a large number of internal gates programmable to
implement almost any logical function, and particularly effective at managing streaming processing
tasks. Outside of Netezza, FPGAs are used in such applications as digital signal processing, medical
imaging and speech recognition. Netezza’s engineers have built software machines within our appliances’
FPGAs to accelerate processing of data before it reaches the CPU. Within each Exadata rack Oracle
dedicates 14 eight-way storage servers to accomplish less than Netezza achieves with 48 FPGAs
embedded within our blade servers. Each FPGA – just a 1”x1” square of silicon – achieves its work with
enormous efficiency, drawing little power and generating little heat.
Netezza TwinFin’s AMPP Architecture
[Figure: disk enclosures feed S-Blades, each pairing an FPGA and a CPU with dedicated memory; the
S-Blades connect over the network fabric to the host, which serves advanced analytics, BI, ETL and
loader applications.]
7 TwinFin has two SMP hosts for redundancy but only one is active at any one time.
Inter-nodal communication across Netezza’s MPP grid occurs on a network fabric running a customized
IP-based protocol fully utilizing total cross-sectional bandwidth and eliminating congestion even under
sustained, bursty network traffic. The network is optimized to scale to more than a thousand nodes, while
allowing each node to initiate large data transfers to every other node simultaneously. These transfers
bring enormous efficiency to the processing tasks typical of data warehousing and advanced analytics.
Just as SQL statements benefit from processing within TwinFin’s MPP architecture, so too do
computationally complex algorithms at the heart of advanced analytics. Previous generations of
technology physically separate application processing from database processing, introducing
inefficiencies and constraints as large data sets are shuffled out of the warehouse to the analytic
processing platforms and back again. Netezza brings the heavy computation of advanced analytics
into its MPP grid, running the algorithms in each CPU physically close to the data, making data
movement redundant and boosting performance. The algorithms benefit from running on the many
nodes of Netezza’s MPP grid, freed from constraints imposed on less-scalable clustered systems.
Netezza didn’t take an old system with known shortcomings and balance it with a new smarter storage
tier; TwinFin is designed as an optimized platform for data warehousing. TwinFin delivers performance
generously, making life easy for programmers, administrators and users.
4 Simplicity of Operation
Simplicity of Operation with Oracle Exadata
Before the warehouse can run queries it must be loaded with data. Exadata’s storage tier is an MPP grid.
MPP systems achieve performance and scale when all nodes participate equally in the computational
task at hand. Data must be evenly distributed so that, to the extent possible, each node holds the same
amount of relevant data for each query. To evenly distribute data across Exadata’s grid of storage
servers requires administrators trained and experienced in designing, managing and maintaining complex
partitions, files, tablespaces, indices, tables and block/extent sizes. “Even better might be a system that
doesn’t lean heavily on complex partitioning to achieve good performance.” 8
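As a sketch of the design work this implies, the DDL below uses standard Oracle composite partitioning; the table, partition boundaries and subpartition count are invented for illustration.

  -- Administrators must choose, test and maintain schemes like this one.
  CREATE TABLE sales_fact (
    sale_id     NUMBER,
    store_id    NUMBER,
    sale_date   DATE,
    sale_amount NUMBER
  )
  PARTITION BY RANGE (sale_date)
  SUBPARTITION BY HASH (store_id) SUBPARTITIONS 16
  ( PARTITION p2010q1 VALUES LESS THAN (DATE '2010-04-01'),
    PARTITION p2010q2 VALUES LESS THAN (DATE '2010-07-01') );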
A customer of Netezza’s from the financial services industry used the Lean9 approach to analyze
resource expenditure required to manage their Oracle data warehouse. They learned in building and
maintaining indices, aggregates, materialized views and data marts that more than 90% of their IT
team’s work was either required waste or non-value added processing. The cost of this waste
translates to unnecessary hardware and software license costs, terabytes of wasted storage, elongated
development and data load cycles, long periods of data unavailability, stale data, poorly performing
loads and queries and excessive administrative costs.
8 Curt Monash at http://www.dbms2.com/2009/09/21/notes-on-the-oracle-database-11g-release-2-white-paper/
9 With roots in manufacturing, "Lean" is a practice using tools and techniques of Six Sigma to analyze wasteful expenditure of resources,
and target activities adding no value to the product or service for elimination.
Exadata does little to simplify managing an Oracle data warehouse. Administrators must manage
multiple server layers, each with operating system images, firmware, file systems and software to be
maintained. Oracle suggests that DBAs should expect to spend 26% less time managing 11g, the
database version in Exadata, than they spend on older 10g deployments. If this is confirmed in practice
and Exadata reduces by a quarter the time customers waste in valueless administration, Oracle has taken
a step in the right direction. Netezza’s appliances are designed not to waste any of the customers’ time.
“The DBA team only backs up the environment and manages the high level security model for the
appliance and that is it. They don't need to do anything else (for example, the concept of indexing is
foreign to them when dealing with Netezza).” 10
Not only do business users demand that their queries complete quickly, they also expect consistent
performance; a report that completed in five seconds yesterday and three minutes today will likely create
a ticket requiring response from IT helpdesk staff. Warehouses are inevitably subject to the demands of
varied, dynamic workloads. Data arriving from OLTP systems via batch jobs or trickle feeds is loaded;
administrative tasks such as backup, restore and grooming run in the background, out of view of the
business; and dashboards constantly update. At the same time, computationally intensive applications
– such as those predicting which claims or trades might be fraudulent or irregular – create sudden,
heavy load on the warehouse infrastructure. Delivering consistent performance to the business places
two requirements on the warehouse: consistent query performance, and effective workload management
which simplifies allocation of available computing power to all the jobs requiring service, usually based
on priorities agreed with the business.
Oracle’s philosophy of workload management is to offer administrators multiple tuning parameters.
Oracle’s parameters have a high degree of dependency on one another, and in Exadata some must
be set to the same value for every processor in their grid. This complexity forces administrators to
experimentally change parameter settings, tuning their way around unexpected demands on the
warehouse. Achieving and maintaining consistent performance for large communities of users, with
different application and data requirements, through rising and falling loads, is a complex task requiring
a high degree of Oracle experience and expertise of the warehouse administrators.
10 Customer using Oracle for OLTP and Netezza for data warehousing, quoted from the LinkedIn Exadata vs Netezza forum at
http://www.linkedin.com/groupAnswers?viewQuestionAndAnswers=&gid=2602952&discussionID=11385070&sik=1275353329699&trk=ug_qa_q&goback=.ana_2602952_1275353329699_3_1
Oracle RAC is a complex technology and its tuning parameters are arcane. In OLTP systems with a stable,
well-understood population of transactions the business can be shielded from this complexity. Database
administrators have ample opportunity during an application’s development phase to analyze each
operation’s data requirements and have the time to design, test and tune the database. Data warehouses
are different. An event in the outside world creates the need to analyze data in ways never before attempted.
The immediate need for information leaves no time for administrators to analyze each query and
optimize its data retrieval. A warehouse unable to process requests immediately, as they are formulated,
denies the business opportunities for action.
Simplicity of Operation with Netezza TwinFin
Netezza’s customers willingly confirm on public record that our appliances are simple to install and use.
“The way we did a proof of concept with them [Netezza] was, they shipped us a box, we put it into our
data center and plugged into our network," he said. "Within 24 hours, we were up and running. I'm not
exaggerating, it was that easy.”11
This commentary is from Joseph Essas, vice president of technology at eHarmony, Inc., a company
already using Oracle's database and RAC software.
11 www.computerworld.com/s/article/9126930/EHarmony_finds_data_warehouse_match_with_Netezza?source=rss_news
Reducing the time to get productive is a good start; Netezza’s philosophy is to bring simplicity to all
phases of data warehousing. The first task facing a customer is loading their data. TwinFin automates
data distribution. Experience from proof-of-concept projects is that customers load their data to
Netezza using automatic distribution, run their queries and compare results to their highly tuned Oracle
environments. For all but the simplest queries, automatic distribution is good enough for TwinFin to
outperform Oracle. Customers may later analyze all their queries to identify those that can be accelerated
by redistributing data on different keys. TwinFin makes this task simple.
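A minimal sketch of what this looks like in practice, assuming Netezza's DISTRIBUTE ON syntax and invented table names:

  -- With no DISTRIBUTE ON clause, TwinFin distributes the rows
  -- automatically; no design work is required to start querying.
  CREATE TABLE sales_fact (
    sale_id     BIGINT,
    store_id    INTEGER,
    sale_date   DATE,
    sale_amount NUMERIC(12,2)
  );

  -- If later analysis shows most joins keyed on store_id, the table
  -- can be redistributed on that key in a single statement.
  CREATE TABLE sales_fact_by_store AS
  SELECT * FROM sales_fact DISTRIBUTE ON (store_id);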
There’s something to be said for a simple approach
• NO cluster interconnect (GES & GCS) monitoring/tuning
• NO RAC-specific knowledge/tuning (DBAs with RAC experience are less of a commodity)
• NO dbspace/tablespace sizing and configuration
• NO redo/physical log sizing and configuration
• NO journaling/logical log sizing and configuration
• NO page/block sizing and configuration for tables
• NO extent sizing and configuration for tables
• NO temp space allocation and monitoring
• NO integration of OS kernel recommendations
• NO maintenance of OS recommended patch levels
• NO JAD sessions to configure host/network/storage
• NO query (e.g. first_rows) and optimizer (e.g. optimizer_index_cost_adj) hints
• NO statspack (statistics, cache hit, wait event monitoring)
• NO memory tuning (SGA, block buffers, etc.)
• NO index planning/creation/maintenance
• Simple partitioning strategies: HASH or ROUND ROBIN
All queries submitted to TwinFin are automatically processed in its massively parallel grid with no
involvement of database administrators. Queries and analyses enter TwinFin through the host machine
where the optimizer, the compiler and the scheduler decompose them into many different pieces or
snippets, and distribute these instructions to the MPP grid of processing nodes, or S-Blades, all of which
then process their workload simultaneously against their locally-managed slice of data.
A Snippet arriving at each of TwinFin’s S-Blades initiates reading of compressed data from disk into
memory. The FPGA then reads the data from memory buffers and, utilizing its Compress Engine,
decompresses it, instantly transforming each block from disk into the equivalent of 4-8 data blocks
within the FPGA. Netezza’s engineering accelerates the slowest component in any data warehouse –
the disk. Next, within the FPGA, data streams into the Project Engine, which filters out columns not
needed by the query, based on the SELECT clause of the SQL being processed. Only the columns the
query requires pass further downstream to the Restrict Engine, where rows not needed to process
the query are blocked from passing through its gates, based on restrictions specified in the WHERE clause.
The Visibility Engine maintains ACID (Atomicity, Consistency, Isolation and Durability) compliance at
streaming speeds. All this work, the constant pruning of unneeded columns and rows, is achieved in an
energy efficient FPGA measuring just one square inch. If TwinFin doesn’t need to move data, it doesn’t.
The FPGA’s pre-processing complete, it streams just the resulting trimmed down set of records back into
S-Blade memory where the CPU performs higher-level database operations such as sorts, joins and
aggregations, doing this in parallel with all other CPUs within the MPP grid. The CPU may also apply
complex algorithms embedded in the Snippet code for advanced analytics processing. The CPU finally
assembles all the intermediate results from the entire data stream and produces a result for the Snippet,
sent over the network fabric to other S-Blades or the host, as directed by the Snippet code. When data
required by a JOIN is not collocated on a node, TwinFin’s inter-nodal network fabric efficiently and simply
re-distributes it late in the processing cycle, after the database has completed restrictions and projections.
Some highly complex algorithms require communication among nodes to compute their answer. TwinFin
exploits a message passing interface to communicate interim results and to produce the final result.
And, as the original compressed data blocks are still in memory, they can be automatically reused in
later queries requiring similar data via TwinFin’s table cache – an automated mechanism requiring no
DBA training or involvement.
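One way to visualize the pipeline is to annotate a simple query with the engine that handles each clause; the schema is invented, and the mapping merely paraphrases the description above.

  SELECT store_id,                     -- Project Engine: only these columns survive
         SUM(sale_amount) AS revenue   -- CPU: sorts, joins and aggregations
  FROM sales_fact                      -- Compress Engine: decompresses blocks from disk
  WHERE sale_date >= '2010-01-01'      -- Restrict Engine: blocks non-matching rows
  GROUP BY store_id;                   -- CPU: grouping on the trimmed stream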
Because TwinFin applies full parallelism to all tasks, its workload management system plays a critical
role in controlling how much of the appliance’s computing resources are made available to each and
every job. In Netezza’s appliance architecture, one software component controls all system resources:
processors, disks, memory and network. This elegance is the foundation of TwinFin’s Workload Management
System. TwinFin’s Workload Management System makes it simple for administrators to allocate
computational resources to users and groups based on priorities agreed with the business and maintain
consistent response times for multiple communities.
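A minimal sketch of this style of administration, assuming resource sharing group DDL of the kind Netezza documented for TwinFin; the group names, percentages and exact clauses are illustrative, not verbatim.

  -- Allocate shares of the appliance's compute to business communities.
  CREATE GROUP finance_analysts WITH RESOURCE MINIMUM 40 RESOURCE MAXIMUM 100;
  CREATE GROUP nightly_etl WITH RESOURCE MINIMUM 20 RESOURCE MAXIMUM 60;

  -- Users inherit their group's share; no per-query tuning is involved.
  ALTER GROUP finance_analysts ADD USER jsmith;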
TwinFin eliminates the wasted work of database tuning. Equipped to make their own intelligent decisions,
Netezza’s appliances require no tuning and little system administration. The few administrative tasks
necessary to maintain consistent performance through dynamic, changing workloads are within easy
reach of anyone with experience of Linux and SQL. All that is required of the administrator is to allocate
TwinFin’s resources to groups within the user community and hand control to the Workload Management
System. Freed from constant cycles of database administration, technical staff engages with the business
to investigate new, value-creating ways of exploiting data. Just three months after moving to Netezza,
a customer relates that his team delivered more analytical applications than they could in the previous
three years with Oracle. Processing analytical applications close to where data is managed, exploiting
the same MPP platform as used for processing SQL, represents a real opportunity for organizations to
dramatically increase the value they derive from data.
5 Value
Value with Exadata
As the waste analysis conducted by the financial services customer of both Netezza and Oracle highlights,
using Oracle for data warehousing is labor intensive. Oracle suggests the latest version of their database
management system might reduce this waste by 26%.12 Netezza customers attest that these low-level,
technically demanding administration tasks are simply unnecessary; in this light it is indefensible that operating an
Oracle database demands administrators spend the majority of their time on care and feeding of the
underlying technology, while Netezza customers spend that time creating value by exploiting their data.
Exadata’s new storage tier adds another layer of complexity for administrators to tune and manage. Because
Exadata is very new, and so few data warehouses using the technology are in production, projections on
its cost of ownership are premature. However, customers should expect that achieving consistently high
performance from Exadata will incur substantial costs in database design and administration.
While adding a new storage tier removes the disk throughput bottleneck to Oracle’s database, Exadata’s
engineering is more an adaptation to massively parallel processing than a full exploitation of the architecture.
Oracle’s failure to integrate data management fully into Exadata’s storage tier means too little is asked of
hardware in its MPP grid. This inflates the cost of acquiring Exadata; customers pay for hardware that will
never be fully exploited by its software. These costs build over the lifetime of the warehouse. Customers
pay for under-utilized space in their data centers which would return greater value if used to house a more
efficient computer system.
While costs destroy value, a fundamental question is whether Exadata helps customers to create value. First-
generation data warehouses play an important role in keeping an organization informed of the recent past,
yet data unleashes greater potential through advanced analytics and other capabilities of second-generation
warehouses discussed earlier in the paper. Oracle RAC teamed with traditional storage hasn’t proved a
success in this role to date. Exadata’s storage tier is unable to process complex joins, distinct aggregations
and analytical functions. It is difficult to envisage how two technologies, individually ill-equipped to analyze
very large data sets deeply and with high performance, will achieve this feat when connected by a fast network
and housed in the same rack.
12 http://www.dbms2.com/2009/09/21/notes-on-the-oracle-database-11g-release-2-white-paper/
Evaluating the Systems
Netezza TwinFin 12
· True MPP
· Optimized for data warehousing & analytics
· Full processing S-Blades (1 CPU core + 1 FPGA core / 1 disk drive)
· SMP host node used primarily for user/applications interface
· Independent blade-to-blade redistribution
· FPGA performance assist on S-Blade – decompression, predicate filtering, row-level security enforcement
· >95% of work done on S-Blades
· Fully engaged MPP platform for analytics
· User-defined functions, aggregates and tables
· Language support: C/C++, Java, Python, R, Fortran
· Paradigm support: SQL, Matrix, Grid, Hadoop
· Built-in set of >50 key analytics (fully parallelized)
· Open source: support for GNU Scientific & CRAN libraries
· Integrated Development Env.: Eclipse & R GUI w/ wizards
· Linear performance and data size scalability
· Full-featured, enterprise-class workload management & other features
· No tuning, no indexing, no partitions
· Balanced system developed to deliver best price-performance
Oracle Exadata v2 (SAS)
· Hybrid – parallel storage nodes & SMP clustered head node
· Optimized for transaction processing (OLTP)
· Intelligent storage (1 CPU core / 1.5 disk drives)
· SMP cluster nodes running Oracle 11g RAC
· InfiniBand (Exadata nodes to SMP cluster)
· Head node engagement in all data redistributions
· Exadata nodes primarily used for decompression and predicate filtering
· Most DW & analytics work done in SMP head node
· Analytics processing limited to head node cluster only
· User-defined functions and aggregates
· Language support: C/C++, Java
· Paradigm support: SQL, Matrix (minor)
· Basic analytics functions
· Non-linear performance & data size scaling – performance and I/O bottleneck at/to head node cluster
· Heavily tuned performance dependency
· Performance dependent on administration of partitions and indices
Value with Netezza TwinFin
Netezza’s engineers integrate data management and analysis deep within massively parallel, shared-
nothing grids. One result of this innovation is simplicity for our customers, which translates
directly to dramatically lower costs of owning and operating data warehouses than is possible with
traditional database products, such as Oracle’s.
Demands on data warehouses have moved beyond processing simple SQL; to fully exploit data
requires the warehouse be capable of running predictive models, investigative graphs and other
analytic applications. To illustrate, a financial services company – knowing that the next most likely
purchase of a new mortgage customer (whose profile includes investment products, loan products
and a checking account but no insurance policy) is an investment product followed by another
mortgage – can create targeted marketing campaigns of value to the customer and with a high chance
of success.13 This analysis, beyond SQL’s capabilities, requires a technique called Dynamic Bayesian
Networks. However, the analysis uses the same data processed by SQL to create reports and
dashboards, suggesting an expanding role for the warehouse.
TwinFin is designed from the ground up for processing both SQL and the applications of advanced
analytics. Netezza frees customers from proprietary languages. Customers can port existing
applications to TwinFin or choose to develop new analytic applications in the language of their choice,
including C++, Java, Python, R and Fortran. Customers using C can take advantage of more than 1,000
analytic functions available as free software from the GNU Scientific Library.14 Customers using R can
also make use of the more than 2,000 packages publicly available in the Comprehensive R Archive
Network (CRAN).15 Additionally Netezza’s customers can choose to work with MapReduce / Hadoop
as, for example, a highly scalable ingestion mechanism to preprocess enormous data sets generated
by public facing web applications and web logs before they are loaded into TwinFin for analysis.
13 See Dynamic Bayesian Networks for acquisition pattern analysis: a financial-services cross-sell application by Anita Prinzie,
Marketing Group, Manchester Business School and Dirk Van den Poel, Department of Marketing, Ghent University
14 www.gnu.org/software/gsl/
15 http://cran.r-project.org/
6 Conclusion
Netezza has emerged as the principal alternative to Oracle for data warehousing. Moving data warehouses
and marts from Oracle to Netezza creates new opportunity, not risk. A majority of Netezza customers
have already walked this path, many of them by partnering with system integration companies with
strong track records for successful migrations.
Exadata is an evolution of Oracle’s OLTP platform. Oracle’s database management system is designed
for OLTP where data volumes are relatively modest compared to data warehouses. The database activity
of an OLTP system can be assessed before it is put into production; administrators have the time to
design, test and optimize each transaction’s data retrieval. Data warehouses must immediately process
whatever query the business needs to ask of their data; technologies requiring administrator mediation
are ill-suited to the task. Conscripting this technology into a role other than transaction processing
places enormous stress on people and processes harnessed to manage and operate a data warehouse.
Oracle advises customers that Exadata is architecturally similar to Netezza but better, claiming that TwinFin
doesn’t support every data type or SQL standard, and that it doesn’t support data mining or high
concurrency. Netezza’s customers disagree: “This is the first database product with a long term product
roadmap that aligns perfectly with our own roadmap. We call this our on-demand database,”16 said
Steve Hirsch, Chief Data Officer at NYSE Euronext.
16 www.netezza.com/customers/nyse-euronext-video.aspx
Given their different workload characteristics, few customers attempt to run OLTP and data warehouse
systems on the same infrastructure; to do so demands constant tuning and optimizing. Technicians are
placed in a difficult situation: either accept compromised performance for both OLTP and data
warehousing, or ceaselessly reconfigure the database in a vain attempt to satisfy conflicting demands of the
different workloads. Organizations will continue to run their OLTP and warehouse systems on different
platforms, each specifically configured to the needs of their workloads. Organizations planning to use
Oracle Exadata for OLTP17 can get the best of both worlds by pairing it with Netezza TwinFin for data
warehousing.
The only data warehouse that really matters is your data warehouse – your applications running on
your data in your data center. An on-site proof-of-concept (PoC) creates the opportunity for an IT
department to thoroughly investigate a technology, learning how they can use TwinFin to help their
business peers extract greater value from data. Making the most of this opportunity requires the PoC to
be managed with the same discipline afforded other projects. Curt Monash offers sage advice in his blog
“Best practices for analytic DBMS POCs,”18 including involving an independent consultant to steer
the project to a successful outcome. For organizations wanting to understand how their warehouse
performs on TwinFin, at no cost and with no risk, Netezza offers its Test Drive. To book one, go to
www.netezza.com/testdrive.
Netezza TwinFin: to use it is to enjoy it.
Learn More
Netezza is so sure you’ll like the Netezza TwinFin solution that we invite you to try it at your site with your
data – at no charge. Taking the Test Drive is easy.
Learn more here: www.netezza.com/testdrive
17 See Curt Monash’s DBMS2 Blog http://www.dbms2.com/2010/01/22/oracle-database-hardware-strategy/ for his discussion of the
role of Exadata-like technology as a platform for consolidating an enterprise’s many Oracle databases rather than running a few
heavy database management tasks.
18 www.dbms2.com/2010/06/14/best-practices-analytic-database-poc/#more-2297
Pass it Along
Please share this eBook with your friends and colleagues.
• Send it
• Post a link: www.netezza.com/eBooks/Exadata-TwinFin-Compared.pdf
Visit Us
At Oracle OpenWorld – Netezza Booth #3641 in Moscone West
Give Us Feedback
What did you think about the ideas and arguments in this eBook? Let us know what you liked, disliked or
might want to discuss further.
• Chat with the Netezza Community: www.enzeecommunity.com/groups/twinfin-talk
• Tweet about the paper: #twinfin
Contact Us
Visit our blog: www.enzeecommunity.com/blogs/nzblog
Visit the Netezza website: www.netezza.com
Visit the Netezza Community website: www.enzeecommunity.com
Send us your comments: www.netezza.com/company/contact_form.aspx
About the Author
Phil Francisco, Vice President, Product Management & Product Marketing, Netezza
Phil Francisco brings over 20 years of experience in technology development and global technology
marketing. As Vice President of Product Management and Product Marketing at Netezza, he fosters
new business and product strategies, directs the product portfolio and drives product marketing
programs. Prior to Netezza, Francisco was the Vice President of Marketing at PhotonEx, a leading
developer of 40 Gb/s optical transport systems for core telecommunications network providers.
Before PhotonEx Francisco served as Vice President of Product Marketing for Lucent Technologies'
Optical Networking Group, where he worked with some of the world's largest telecommunications
carriers in planning and implementing optical network solutions. Mr. Francisco holds a patent in
advanced optical network architectures. He received B.S. degrees in Electrical Engineering and
Computer Science, magna cum laude, from the Moore School of Electrical Engineering at
the University of Pennsylvania. He earned his Master's degree in Electrical Engineering from Stanford
University and completed the Advanced Management Program at the Fuqua School of Business at
Duke University. Read Phil's blog: www.enzeecommunity.com/blogs/nzblog.