Business Intelligence for Big Data
Document Transcript

  • Business Intelligence for Big Data. James Dixon, Chief Geek. August 2010. © 2010, Pentaho. All Rights Reserved. www.pentaho.com
  • Business Intelligence = reports, dashboards, analysis, visualization, alerts, auditing
  • Hadoop and BI
  • Example Hadoop Cases Today. Transactional: fraud detection; financial services/stock markets. Sub-transactional: weblogs; social/online media; telecoms events.
    Not many companies have transactional data that qualifies as Big Data; credit card companies and financial services companies are about it. With stock market data we are talking about every stock trade, plus the bid and ask prices between the transactions, for every stock on multiple markets over a significant time period. For many other companies the Big Data is sub-transactional: the events that lead up to transactions. Weblogs are semi- or badly structured. Consider the number of weblog entries created as you look for a book online, researching 5-10 books and reading reviews and comments. You might generate 1,000 entries and may or may not buy a book: potentially lots of entries for no transaction. We also want to enrich this data with metadata about the URLs and with information about the location of the user. In an online game or world, every interaction between participants and the system, and between participants and each other, is logged. An individual participant might generate more than 1 million events for their one monthly transaction. A single phone call or text message generates many events within a telecoms company.
  • Example Hadoop Cases Today. Non-transactional: web pages, blogs etc; documents; physical events; application events; machine events. In most cases structured or semi-structured.
    In addition to transactional and sub-transactional data there is also non-transactional data. Some of this data is people-generated and some of it is machine-generated. People generate lots of content that companies are interested in: web pages, blogs, and comments. Physical events include data such as weather data; if you take the combined output of the weather-sensing instruments deployed today, you get Big Data. Many software applications log events as they execute, as do machines such as production-line machinery.
    TRANSITION: In the majority of these cases the data is structured or semi-structured.
    LEAD-IN: What do these use cases have in common? How can we describe these Big Data scenarios?
  • Data Lake: single source; large volume; not distilled; can be treated.
    In most of these cases we are dealing with a single source of data, and we know we are dealing with a large volume of data. We are also dealing with data that is not aggregated or summarized; it's not 'distilled' in any way. It is a large body of data. The data can be raw, or it might be treated in some way, either within the lake or on its way into the lake. For example, weblog entries might be geocoded and enriched with metadata. So we are calling these things Data Lakes.
  • Data Lakes: 0-2 lakes per company; known and unknown questions; multiple user communities; $1-10k questions, not $1m ones; don't fit in a traditional RDBMS at a reasonable cost.
    There are some other interesting attributes of these data lakes. If there is a data lake in a company at all, there is usually only one. Some domains, such as financial services, might have two, but any more than that is very rare. In most cases some questions of the data are known ahead of time, but there are also questions that cannot be anticipated. We also frequently have different user communities that want access to the data. In the weblog example, the sales and marketing departments want to know about the behavior of visitors and the volume of traffic on the site, maybe by geography, while the IT department wants information about throughput and load on the server for capacity planning. In general, most of the questions about the data are not million-dollar questions; they are $1k to $10k questions. Because no one user or group has a million-dollar question, no one has a million-dollar budget to solve the problem. Additionally, this amount of data does not fit into a database, either because the database physically will not hold it or because the cost of doing so is economically out of reach.
  • Data Lake Requirements: store all the data; satisfy routine reporting and analysis; satisfy ad-hoc query / analysis / reporting; balance performance and cost.
    If we look at the requirements of these data lakes we also see common ground. We want to store all of the data because we don't know all the questions we will have of it; if we did know, we'd only have to keep a subset of the data. We still want to satisfy all of the traditional BI reporting and analysis needs. We need the ability to dip into the lake at any time to ask any question of the data. In some cases we want to extract a slice of data from the lake for detailed analysis. Say I'm in charge of pricing and promotions for a company, and this week I'm looking at a particular region or product: I want to select a subset of the data from the lake, summarized to some level, with the attributes I want to analyze; slice and dice it for a few hours or days; and then move on to my next region or product. In this case we are creating a short-lived data mart from the data lake. In other cases we know exactly the data we are looking for and don't need to explore it; we define the attributes of the data we want and get query results back. We also want to balance cost and performance. Big Data solutions are cheaper per terabyte than other solutions, but do not have the same level of performance. We want a system where we can selectively improve the performance of the data we care most about, while still having access to the entire data set any time we need it.
    LEAD-IN: Since we are introducing a new term, 'Data Lake', we need to explain how it differs from a traditional BI system.
  • Traditional BI (diagram: Data Source → Data Mart(s); everything else → Tape/Trash).
    In a traditional BI system, where we have not been able to store all of the raw data, we have solved the problem by being selective. First we selected the attributes of the data that we knew we had questions about. Then we cleansed the data, aggregated it to transaction level or higher, and packaged it up in a form that is easy to consume. Then we put it into an expensive system that we could not scale, whether technically or financially. The rest of the data was thrown away or archived on tape, which, for the purposes of analysis, is the same as throwing it away.
    TRANSITION: The problem is we don't know what is in the data that we are throwing away or archiving. We can only answer the questions that we could predict ahead of time.
  • What if... (diagram: Data Source → Data Lake(s) → Data Mart(s), Ad-Hoc, Data Warehouse; nothing → Tape/Trash).
    But what if, instead of sampling the data and throwing the rest away...
    TRANSITION: We pour all of the data into a Data Lake.
    TRANSITION: And then create whatever data marts we need from the Data Lake.
    TRANSITION: And also provide the ability to extract data from the Data Lake on an ad-hoc basis.
    TRANSITION: And also provide the ability to extract data from the Data Lake to feed into a data warehouse.
  • Big Data Architecture (diagram: Data Source → Data Lake(s) → Data Mart(s), Ad-Hoc, Data Warehouse).
    This, then, is our Big Data architecture. As well as pouring data from the source into the Data Lake, we can also take our archive tapes and pour them into the lake, giving us a huge amount of historical data. Does this meet our requirements?
    TRANSITION: We are storing all of the data, so we can answer both known and unknown questions.
    TRANSITION: We are satisfying our standard reporting and analysis requirements by putting the most commonly requested data into data marts.
    TRANSITION: We are satisfying ad-hoc needs by providing the ability to dip into the lake at any time to extract data. The extracted data might be used to populate a temporary data mart, as the input for a specialized visualization tool, or by an analytical application.
    TRANSITION: We are meeting the need to balance performance and cost by letting you choose how much data is staged in high-performance databases for fast access, and how much is available from the Data Lake only.
  • Does Big Data Replace Data Marts? Only if it is a database, and only if it has low latency. For Hadoop (to date): its databases are immature, and its databases are no-SQL.
  • Why Hadoop and BI? Distributed processing; distributed file system; commodity hardware; platform independent (in theory); scales out beyond the technology and/or economy of an RDBMS. In many cases it's the only viable solution.
    For the purposes of BI, the parallel processing and distributed storage of Hadoop, along with its scale-out architecture on commodity hardware, are attractive. Since Hadoop is written in Java it is theoretically platform-independent, although at this point, due to some dependencies, it is only recommended for Linux/Unix. And because these factors allow it to scale with better price/performance characteristics than databases...
    TRANSITION: ...in many cases it's the only viable solution.
    LEAD-IN: So are there any downsides to Hadoop for BI use cases?
  • Hadoop and BI? 90% of new Hadoop use cases are transformation of semi-/structured data* (*of those companies we've talked to...)
    It might be a self-selecting audience since we are a Business Intelligence company, but upwards of 90% of the companies we talk to are using, or plan to use, Hadoop to transform structured or semi-structured data, with the aim of then analyzing, investigating, and reporting on the data.
  • Hadoop and BI? "The working conditions within Hadoop are shocking" (an ETL developer).
    Unfortunately, for developers who are used to working with data transformation tools, the productivity within the Hadoop environment is not what they are used to.
  • Hadoop and BI? Instead of this...
    Instead of a graphical UI with palettes of data transformation operations to string together in a way that is easy to understand, easy to trace, and easy to explain...
  • Hadoop and BI? You have to do this:

    public void map(Text key, Text value, OutputCollector output, Reporter reporter)
    public void reduce(Text key, Iterator values, OutputCollector output, Reporter reporter)

    In Hadoop we have two Java functions, map and reduce, that need to be implemented. These functions are part of the MapReduce processing engine mentioned earlier. Mapping and reducing are important functions in a data transformation engine, but unfortunately there are many other operations that we need to perform on our data, and Hadoop does not include a comprehensive suite of data transformation operations. To understand how we ended up in this situation we need to take a brief look at the history of Hadoop.
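    To make the contrast concrete, here is a minimal word-count sketch of what filling in those two functions looks like against the Hadoop 0.20-era org.apache.hadoop.mapred API. The class names are illustrative, and in practice each class would live in its own file:

        import java.io.IOException;
        import java.util.Iterator;
        import org.apache.hadoop.io.IntWritable;
        import org.apache.hadoop.io.LongWritable;
        import org.apache.hadoop.io.Text;
        import org.apache.hadoop.mapred.MapReduceBase;
        import org.apache.hadoop.mapred.Mapper;
        import org.apache.hadoop.mapred.OutputCollector;
        import org.apache.hadoop.mapred.Reducer;
        import org.apache.hadoop.mapred.Reporter;

        // Mapper: emit (word, 1) for each whitespace-separated token in a line.
        class WordCountMap extends MapReduceBase
            implements Mapper<LongWritable, Text, Text, IntWritable> {
          private static final IntWritable ONE = new IntWritable(1);
          private final Text word = new Text();

          public void map(LongWritable key, Text value,
                          OutputCollector<Text, IntWritable> output,
                          Reporter reporter) throws IOException {
            for (String token : value.toString().split("\\s+")) {
              if (token.isEmpty()) continue;
              word.set(token);
              output.collect(word, ONE);
            }
          }
        }

        // Reducer: sum the counts emitted for each word.
        class WordCountReduce extends MapReduceBase
            implements Reducer<Text, IntWritable, Text, IntWritable> {
          public void reduce(Text key, Iterator<IntWritable> values,
                             OutputCollector<Text, IntWritable> output,
                             Reporter reporter) throws IOException {
            int sum = 0;
            while (values.hasNext()) {
              sum += values.next().get();
            }
            output.collect(key, new IntWritable(sum));
          }
        }

    Even for this trivial transformation the boilerplate dwarfs the logic, which is the point behind the quote on the previous slide.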
  • MapReduce Limitations. Doing everything with MapReduce is like doing everything with recursion: you can, but that doesn't mean it's the best solution.
  • MapReduce Limitations. Not a scalable name... What's next? MapReduceLookupJoinDenormalizeUpdateDedupeFilterCalcMergeAppend
  • Google's Use Case: needed to index the internet; huge set of unstructured data; predetermined input; predetermined output (the index); predetermined questions; single user community; needed parallel processing and storage. Their answer was MapReduce (MR).
    The trail starts with Google, which wanted to index the internet. This is clearly a big data set, and also an unstructured one. Before they set out, Google knew what their data set was. They knew how they wanted to process the data: to create an index. They knew the questions they wanted to ask of the data: given some key words, what are the most relevant web pages? They had a single user community: the set of people trying to search the internet. To solve this problem they needed a scalable architecture with distributed storage and parallel processing.
    TRANSITION: Their answer was to use MapReduce.
  • Yahoo's Use Case: needed to index the internet; huge set of unstructured data; predetermined input; predetermined output (the index); predetermined questions; single user community; needed parallel processing and storage. Their answer was Hadoop (with MapReduce).
    Next along the trail is Yahoo. Yahoo's requirements were very similar, in fact almost identical, to Google's: the exact same data set, the same input format, the same output, the same questions, from the same population, with the same scalability requirements.
    TRANSITION: Yahoo's answer was Hadoop, which includes a MapReduce engine.
    LEAD-IN: So how do these requirements compare with the current, BI-specific use cases?
  • Current Use Cases: ✗ not indexing the internet; ✗ huge set of semi/structured data; ✗ different input sources and formats; ✗ different outputs; ✗ different questions; ✗ multiple user communities; ✓ need parallel processing and storage.
    No one is indexing the internet; that is not a BI use case. In most cases we have structured or semi-structured data, not unstructured data. In each use case the data source is different, so the format of the data is different. In each case the output is not an index but a variety of data sets, data feeds, and reports. In each case the questions of the data are different, and the questions cannot all be predicted. In most cases we have multiple user communities with different needs and questions. And in each case the volume of the data is such that we need a scalable architecture with distributed storage and parallel processing. When we compare these scenarios with the purpose for which Hadoop was created, we see that...
    TRANSITION: ...there is not much overlap between the Big Data needs of BI and the original intent of Hadoop.
    LEAD-IN: The realization here is that...
  • Unfortunately, Hadoop wasn't designed for most BI requirements.
  • Hadoop's Strengths and Weaknesses: distributed processing; distributed file system; commodity hardware; platform independent (in theory); scales out beyond the technology and/or economy of an RDBMS. But... not designed for BI.
  • No-SQL and BI
  • BI Tools Need... Structured Query Language.
  • BI Tools Don't Need: CREATE / INSERT; UPDATE; DELETE (only Read is needed); no ACID transactions.
  • Mondrian (OLAP) Needs. Required: SELECT, FROM, WHERE, GROUP BY, ORDER BY. Nice to have: HAVING; ORDER BY ... NULLS COLLATE; COUNT(DISTINCT x, y); COUNT(DISTINCT x), COUNT(DISTINCT y); VALUES (1,'a'), (2,'b').
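    To make that list concrete, here is the shape of SQL an OLAP engine such as Mondrian typically generates; the fact and dimension table names are hypothetical, and the query deliberately uses only the clauses listed as required:

        SELECT time.the_year, store.store_state,
               SUM(sales.store_sales) AS total_sales
        FROM sales, time, store
        WHERE sales.time_id = time.time_id
          AND sales.store_id = store.store_id
          AND time.the_year = 2010
        GROUP BY time.the_year, store.store_state
        ORDER BY total_sales

    A no-SQL store that cannot answer aggregating, joining queries of roughly this shape cannot sit directly underneath a tool like Mondrian.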
  • Why not add to Hadoop the things it's missing...
  • ... until it can do what we need it to?
  • If only we had a Java, embeddable, data transformation engine...
  • Hadoop Architecture (diagram): Clients (Java / Python) → MapReduce layer (JobTracker and TaskTrackers) → Hadoop Common → filesystem layer (NameNode and DataNodes; HDFS, S3, ...).
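    As a rough sketch of how a client ties into those two layers, a 0.20-era job driver points its configuration at the NameNode (filesystem layer) and the JobTracker (MapReduce layer) and submits work. Host names and paths are placeholders, and WordCountMap / WordCountReduce are the illustrative classes sketched earlier:

        import org.apache.hadoop.fs.Path;
        import org.apache.hadoop.io.IntWritable;
        import org.apache.hadoop.io.Text;
        import org.apache.hadoop.mapred.FileInputFormat;
        import org.apache.hadoop.mapred.FileOutputFormat;
        import org.apache.hadoop.mapred.JobClient;
        import org.apache.hadoop.mapred.JobConf;

        public class WordCountDriver {
          public static void main(String[] args) throws Exception {
            JobConf conf = new JobConf(WordCountDriver.class);
            conf.setJobName("wordcount");
            // Placeholder addresses: the NameNode serves HDFS, the JobTracker
            // schedules map and reduce tasks onto the TaskTrackers.
            conf.set("fs.default.name", "hdfs://namenode.example.com:8020");
            conf.set("mapred.job.tracker", "jobtracker.example.com:8021");
            conf.setMapperClass(WordCountMap.class);
            conf.setReducerClass(WordCountReduce.class);
            conf.setOutputKeyClass(Text.class);
            conf.setOutputValueClass(IntWritable.class);
            FileInputFormat.setInputPaths(conf, new Path("/demo/input"));
            FileOutputFormat.setOutputPath(conf, new Path("/demo/output"));
            JobClient.runJob(conf);  // blocks until the job completes
          }
        }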
  • Pentaho Data Integration (diagram: Pentaho Data Integration used to design, deploy, and orchestrate between Hadoop and the data marts, data warehouse, and analytical applications).
    Fortunately, we have an embeddable data integration engine written in Java. We have taken our data integration engine, PDI, and integrated it with Hadoop in a number of different areas: the ability to move files between Hadoop and external locations; the ability to read and write HDFS files during data transformations; the ability to execute data transformations within the MapReduce engine; the ability to extract information from Hadoop and load it into external databases and applications; and the ability to orchestrate all of this, so you can integrate Hadoop into the rest of your data architecture with scheduling, monitoring, logging, etc.
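    Because the engine is embeddable, running a transformation designed in the graphical tool takes only a few lines of Java. A minimal sketch against the PDI (Kettle) 4.x API; the .ktr file path is a placeholder for a transformation you have saved:

        import org.pentaho.di.core.KettleEnvironment;
        import org.pentaho.di.trans.Trans;
        import org.pentaho.di.trans.TransMeta;

        public class RunTransformation {
          public static void main(String[] args) throws Exception {
            KettleEnvironment.init();  // initialize the Kettle runtime and plugins
            // Load a transformation designed in the graphical editor (placeholder path).
            TransMeta transMeta = new TransMeta("/path/to/enrich-weblogs.ktr");
            Trans trans = new Trans(transMeta);
            trans.execute(null);          // no command-line arguments
            trans.waitUntilFinished();    // block until all steps complete
            if (trans.getErrors() > 0) {
              throw new RuntimeException("Transformation finished with errors");
            }
          }
        }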
  • Visualize / Optimize / Load pyramid (diagram, top to bottom): Reporting / Dashboards / Analysis on the Web Tier; DM & DW in an RDBMS; Hive and Files / HDFS in Hadoop; Applications & Systems at the base.
    Put into diagram form, so we can indicate the different layers in the architecture and also show the scale of the data, we get this Big Data pyramid. At the bottom of the pyramid we have Hadoop, containing our complete set of data. Higher up we have our data mart layer; this layer holds less data, but has better performance. At the top we have application-level data caches. Looking down from the top, from the perspective of our users, they can see the whole pyramid: they have access to the whole structure, and the only thing that varies is the query time, depending on what data they want. Here we see that the RDBMS layer lets us optimize access to the data. We can decide how much data we want to stage in this layer: if we add more storage here, we increase the performance of a larger subset of the data lake, but it costs more money.
  • (Diagram: the same pyramid, with PDI moving data between each layer and Metadata spanning them: Reporting / Dashboards / Analysis on the Web Tier; DM & DW in the RDBMS; Hive and Files / HDFS in Hadoop; Applications & Systems at the base.)
    We are able to provide this data architecture because we have metadata about every layer in it. We used Pentaho Data Integration to move data into Hadoop and to process data within Hadoop, so as a result we have metadata about the data within Hadoop. We also use PDI to create the data marts and extracts from Hadoop, so we have metadata about those as well.
  • (Diagram: Reporting / Dashboards / Analysis on the Web Tier; RDBMS; the Data Lake within Hadoop; Applications & Systems at the base.)
    If we compare this diagram to our other Big Data diagram, we see how it fits together.
    TRANSITION: Our Data Lake sits within Hadoop.
    TRANSITION: Our neatly packaged data mart and DW extracts feed into the database layer; data from here can reach users very quickly.
    TRANSITION: Our ad-hoc queries and ad-hoc data marts come directly from the Data Lake.
  • (Diagram: the Visualize / Optimize / Load pyramid again.)
    This, then, is our big data architecture. It's a hybrid architecture that lets you blend Hadoop with the other elements of your data architecture, and with whatever amount of database storage you think necessary. The blend of Hadoop and other technologies is flexible and easy to tweak over time.
  • (Diagram: Reporting / Dashboards / Analysis on the Web Tier; DM in an RDBMS; Hive and HDFS in Hadoop.)
    In this demo we will show how easy it is to execute a series of Hadoop and non-Hadoop tasks. We are going to:
    1. Get a weblog file from an FTP server.
    2. Make sure the source file does not already exist within the Hadoop file system.
    3. Copy the weblog file into Hadoop.
    4. Read the weblog and process it: add metadata about the URLs, add geocoding, and enrich the operating system and browser attributes.
    5. Write the results of the data transformation to a new, improved data file.
    6. Load the data into Hive.
    7. Read an aggregated data set from Hadoop.
    8. Write it into a database.
    9. Slice and dice the data with the database.
    10. Execute an ad-hoc query into Hadoop.
    (Steps 2 and 3 are sketched in code after this list.)
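    For a sense of what steps 2 and 3 amount to against the HDFS client API, here is a minimal sketch; the NameNode address and the file paths are placeholders:

        import org.apache.hadoop.conf.Configuration;
        import org.apache.hadoop.fs.FileSystem;
        import org.apache.hadoop.fs.Path;

        public class CopyWeblogIntoHdfs {
          public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            conf.set("fs.default.name", "hdfs://namenode.example.com:8020");
            FileSystem fs = FileSystem.get(conf);

            Path target = new Path("/weblogs/raw/access.log");
            // Step 2: remove any stale copy of the source file.
            if (fs.exists(target)) {
              fs.delete(target, false);  // false = non-recursive delete
            }
            // Step 3: copy the file fetched from FTP into HDFS.
            fs.copyFromLocalFile(new Path("/tmp/access.log"), target);
          }
        }

    In the demo itself these steps are PDI job entries rather than hand-written code, which is the productivity argument of the earlier slides.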
  • Demo
  • FAQ. 1. Will Pentaho contribute to Apache's Hadoop projects? Yes. 2. Will Pentaho distribute Hadoop as part of their product? Unlikely. 3. What version of Hadoop will be supported? Initially 0.20.2. 4. Will Pentaho's APIs allow existing open source APIs to be used in parallel? Yes.
    1. Any changes Pentaho makes to the Apache code will be contributed to Apache. 2. Pentaho does not plan to provide its own distribution of Hadoop, or to provide anyone else's distribution as part of our products. If we need to provide binary patches while we wait for our contributions to be accepted by the Hive developers, we will do so, but this will be a temporary situation only. 3. We are also looking into support for version 0.20.0. 4. We are not modifying or disabling any Hadoop APIs, so any existing MapReduce tasks will work as they did before.
  • FAQ. 5. Will Pentaho provide support or services to help set up Hadoop? Yes, no, maybe. 6. What are the requirements to be in the Pentaho Hadoop beta program? Be serious, have started already, etc.
    5. Hadoop is a data source for Pentaho, just as any filesystem, FTP server, web service, or database is. We don't directly provide support for these third-party services, but we recognize that companies want support and services for Hadoop, so we will work with partners to provide them. 6. For the ongoing beta program we are looking for Hadoop sites that have data, have Hadoop installed, and have requirements.
  • Can I Use 'Big Data' as a Data Warehouse? Yes, probably.
  • Should I Use 'Big Data' as a Data Warehouse? No, probably not.
  • What is a Data Warehouse? Data mart: data structured for query and reporting. Data warehouse: what you get if you create data marts for every system, then combine them together.
  • Data Warehouse: multiple sources; cleansed and processed; organized; summarized.
    By definition a data warehouse has content from many different sources: every operational system within your organization. This data has been cleansed, processed, structured, and aggregated to the transaction level.
    TRANSITION: If we compare the data warehouse to the Data Lake, the differences between them become obvious.
  • Big Data Architecture (diagram: Data Source → Data Lake(s) → Data Mart(s), Ad-Hoc, Data Warehouse).
    So our recommendation is the Data Lake architecture, where data marts and a data warehouse are fed from a data lake.
  • But what if I really, really want to...
  • Data Water-Garden: lake(s); pools and ponds; organized; cleansed; linkages.
    Instead of a single Data Lake, create a series of data pools, each populated from a different data source. The data in the pools should be cleansed and structured. Create links between the pools using attributes that exist in both.
  • Water-Garden Architecture (diagram: Data Sources → Water-Garden → Data Mart(s), Ad-Hoc).
    Then optimize your system by creating data marts for different domains or user populations.
  • More information: www.pentaho.com/hadoop. Contact: hadoop@pentaho.com. © 2010, Pentaho. All Rights Reserved. www.pentaho.com