Chicago Data Summit: Extending the Enterprise Data Warehouse with Hadoop

Speaker Notes
  • Most people think of orbitz.com, but Orbitz Worldwide is really a global portfolio of leading online consumer travel brands, including Orbitz, CheapTickets, The Away Network, ebookers, and HotelClub. Orbitz also provides business-to-business services: Orbitz Worldwide Distribution provides hotel booking capabilities to a number of leading carriers such as Amtrak, Delta, LAN, KLM, and Air France, and Orbitz for Business provides corporate travel services to a number of Fortune 100 clients. Orbitz was founded in 1999, and the orbitz.com site launched in 2001.
  • Some of Hadoop’s benefits you hear so many times they almost become clichés, but based on our experience at Orbitz they’ve proven to be true, so they bear repeating.
  • On Orbitz alone we do millions of searches and transactions daily, and all of this activity leads to extremely large volumes of data: hundreds of GB per day. Not all of this data has value; much of it is logged for historical reasons and is no longer useful, but much of it is valuable. In addition, there’s more data that we’re not currently capturing that we know has value.
  • This chart isn’t exactly an apples-to-apples comparison, but it provides some idea of the difference in cost per TB for the data warehouse vs. Hadoop. Hadoop doesn’t provide the same functionality as a data warehouse, but it does allow us to store and process data that wasn’t practical to keep before, for both economic and technical reasons.
  • Putting data into a database or data warehouse requires having knowledge of, or making assumptions about, how the data will be used. Either way, you’re putting constraints around how the data is accessed and processed. With Hadoop, each application can process the raw data in whatever way it requires.
  • Our data warehouse contains a full archive of all transactions – every booking, refund, cancellation etc. Much valuable non-transactional data was just thrown away because it was uneconomical to store and didn’t necessarily have clear value.
  • Hadoop was deployed in late 2009/early 2010 to begin collecting this non-transactional data, and Orbitz has been using CDH for that entire period with great success. Much of this non-transactional data is contained in web analytics logs.
  • Having access to this data allows us to perform processing and analyses not previously possible.
  • Hadoop was first used to facilitate the machine learning team’s work. This team needed access to large amounts of data on user interaction in order to do things like optimize hotel ranking and show consumers hotels more closely matching their preferences.
  • Hadoop is used to crunch data for input to a system to recommend products to users.
  • Although we use third-party sites to monitor site performance, Hadoop allows the front-end team to produce detailed reports on page download performance, providing valuable trending data not available from other sources.
  • Hadoop collects and processes data used as input to analyses that optimize cache performance.
  • Data is used for analysis of user segments, which can drive personalization. This chart shows that Safari users click on hotels with higher mean and median prices than other users do.
  • MAD is an acronym for magnetic, agile, and deep. Magnetic: attracting all the data sources in an organization. Agile: able to quickly integrate new data sources. Deep: able to perform sophisticated analyses.
  • This would facilitate access to all of our data through standard BI tools. Plus, most of our BI developers, not to mention users, work with SQL, ETL tools, etc.; they are not Java developers and won’t be writing MapReduce jobs. We haven’t yet achieved this data warehouse nirvana.
  • QlikView is used extensively for reporting at Orbitz. Although QlikView is working on enhancements to facilitate integration with tools such as Hadoop, there’s no direct integration today. This is understandable, since QlikView uses an in-memory model, which presents a challenge when dealing with Hadoop-sized data. We can, however, use Hadoop to summarize data for export to QlikView.
  • This provides an example of a typical processing flow for the large volumes of non-transactional data we’re collecting. This processing allows us to convert large volumes of unstructured data into structured data that can be queried, extracted, etc. for further processing (a sketch of this parsing step appears after these notes).
  • This type of processing also allows us to summarize large volumes of data into a data set that can be exported to the data warehouse, allowing us to query and report on that data using all of our standard BI tools (see the aggregation sketch after these notes).
  • This is still being implemented, but it’s a good example of how Hadoop allows us to offload time- and resource-intensive processing from the data warehouse.
  • Processing of click data gathered by web servers; this click data contains marketing info. The data cleansing step is done inside the data warehouse using a stored procedure, and further downstream processing is done to generate the final data sets for reporting. Although this generates the required user reports, it consumes considerable time and resources on the data warehouse that could otherwise be used for reports, queries, etc.
  • The ETL step is eliminated; instead, raw logs are uploaded to HDFS, which is a much faster process. Moving the data cleansing to MapReduce moves the “heavy lifting” of processing these relatively large data sets to Hadoop, taking advantage of Hadoop’s efficiencies and greatly speeding up the processing (a sketch of such a cleansing job appears after these notes).
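
The notes above reference MapReduce jobs, but the deck itself contains no code, so here is a minimal sketch of the web analytics parsing step described earlier. It is written against the standard org.apache.hadoop.mapreduce API; the log layout, field positions, and class name are hypothetical stand-ins, since the talk doesn’t specify Orbitz’s actual formats.

```java
import java.io.IOException;

import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

// Parses raw web analytics log lines into tab-delimited structured records.
// The assumed input layout (space-delimited: timestamp, session ID, page,
// hotel ID) is a placeholder -- substitute the fields of your actual logs.
public class LogParseMapper
    extends Mapper<LongWritable, Text, NullWritable, Text> {

  private final Text record = new Text();

  @Override
  protected void map(LongWritable key, Text value, Context context)
      throws IOException, InterruptedException {
    String[] fields = value.toString().split(" ");
    if (fields.length < 4) {
      return; // skip lines that don't match the expected layout
    }
    // Emit a structured, tab-delimited record that downstream jobs
    // (or an export to the data warehouse) can consume.
    record.set(fields[0] + "\t" + fields[1] + "\t" + fields[2] + "\t" + fields[3]);
    context.write(NullWritable.get(), record);
  }
}
```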
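
The summarization mentioned in the QlikView and data warehouse notes could be an equally small job. The sketch below counts records per day from the structured output of the parsing step; the resulting data set is tiny, so it can be exported to the data warehouse or loaded into QlikView’s in-memory model. The date-based key and column layout are assumptions carried over from the parsing sketch.

```java
import java.io.IOException;

import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;

// Counts records per day from the structured, tab-delimited records
// produced by the parsing step. The small per-day totals can then be
// exported to the data warehouse or loaded into QlikView.
public class DailyCountMapper
    extends Mapper<LongWritable, Text, Text, LongWritable> {

  private static final LongWritable ONE = new LongWritable(1);
  private final Text day = new Text();

  @Override
  protected void map(LongWritable key, Text value, Context context)
      throws IOException, InterruptedException {
    String[] fields = value.toString().split("\t");
    // Assumes field 0 is an ISO-style timestamp, e.g. 2011-04-26T09:15:00.
    if (fields[0].length() < 10) {
      return; // skip records without a usable timestamp
    }
    day.set(fields[0].substring(0, 10)); // yyyy-MM-dd
    context.write(day, ONE);
  }
}

class DailyCountReducer
    extends Reducer<Text, LongWritable, Text, LongWritable> {

  @Override
  protected void reduce(Text key, Iterable<LongWritable> values, Context context)
      throws IOException, InterruptedException {
    long sum = 0;
    for (LongWritable v : values) {
      sum += v.get();
    }
    context.write(key, new LongWritable(sum)); // one output row per day
  }
}
```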
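
Finally, for the click data use case, the cleansing step moved out of the stored procedure could start as a map-only job like the sketch below: it drops malformed records and keeps only the columns reporting needs, which is consistent with the cleansed output being roughly 20% of the original size. The validation rule and field positions are placeholders, not the production logic.

```java
import java.io.IOException;

import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

// Map-only cleansing job for click logs: discard malformed records and
// project only the columns needed for the marketing reports. The field
// positions and the minimum-field check are illustrative placeholders.
public class ClickCleanseMapper
    extends Mapper<LongWritable, Text, NullWritable, Text> {

  private final Text cleansed = new Text();

  @Override
  protected void map(LongWritable key, Text value, Context context)
      throws IOException, InterruptedException {
    String[] f = value.toString().split("\t");
    // Drop records that are too short or missing a marketing code.
    if (f.length < 6 || f[4].isEmpty()) {
      return;
    }
    // Keep only timestamp, marketing code, and landing page.
    cleansed.set(f[0] + "\t" + f[4] + "\t" + f[5]);
    context.write(NullWritable.get(), cleansed);
  }
}
```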

Presentation Transcript

  • Extending the Enterprise Data Warehouse with Hadoop | Robert Lancaster and Jonathan Seidman | Chicago Data Summit | April 26, 2011
  • Who We Are
    • Robert Lancaster
      • Solutions Architect, Hotel Supply Team
      • [email_address]
      • @rob1lancaster
    • Jonathan Seidman
      • Lead Engineer, Business Intelligence/Big Data Team
      • Co-founder/organizer of Chicago Hadoop User Group (http://www.meetup.com/Chicago-area-Hadoop-User-Group-CHUG)
      • [email_address]
      • @jseidman
  • Launched: 2001, Chicago, IL
  • Why are we using Hadoop? Stop me if you’ve heard this before…
    • On Orbitz alone we do millions of searches and transactions daily, which leads to hundreds of gigabytes of log data every day.
  • Hadoop provides us with efficient, economical, scalable, and reliable storage and processing of these large amounts of data. [Chart: cost per TB, data warehouse vs. Hadoop]
  • And… Hadoop places no constraints on how data is processed.
  • Before Hadoop
  • With Hadoop
  • Access to this non-transactional data enables a number of applications…
  • Optimizing Hotel Search
  • Recommendations
  • Page Performance Tracking
  • Cache Analysis: a small number of queries (3%) make up more than a third of search volume.
  • User Segmentation
  • All of this is great, but…
    • Most of these efforts are driven by development teams.
    • The challenge now is to unlock the value in this data by making it more available to the rest of the organization.
  • “Given the ubiquity of data in modern organizations, a data warehouse can keep pace today only by being “magnetic”: attracting all the data sources that crop up within an organization regardless of data quality niceties.”* (*MAD Skills: New Analysis Practices for Big Data)
  • In a better world…
  • Integrating Hadoop with the Enterprise Data Warehouse | Robert Lancaster and Jonathan Seidman | Chicago Data Summit | April 26, 2011
  • The goal is a unified view of the data, allowing us to use the power of our existing tools for reporting and analysis.
  • BI vendors are working on integration with Hadoop…
  • And one more reporting tool…
  • Example Processing Pipeline for Web Analytics Data
  • Aggregating data for import into Data Warehouse
  • Example Use Case: Beta Data Processing
  • Example Use Case – Beta Data Processing
  • Example Use Case – Beta Data Processing Output
  • Example Use Case: RCDC Processing
  • Example Use Case – RCDC Processing
  • Example Use Case: Click Data Processing
  • Click Data Processing – Current DW Processing: Web Servers → Web Server Logs → ETL (3 hours) → DW → Data Cleansing (stored procedure, 2 hours) → DW (~20% of original data size)
  • Click Data Processing – New Hadoop Processing: Web Servers → Web Server Logs → HDFS → Data Cleansing (MapReduce) → DW
  • Conclusions
    • Market is still immature, but Hadoop has already become a valuable business intelligence tool, and will become an increasingly important part of a BI infrastructure.
    • Hadoop won’t replace your EDW, but any organization with a large EDW should at least be exploring Hadoop as a complement to their BI infrastructure.
    • Use Hadoop to offload the time and resource intensive processing of large data sets so you can free up your data warehouse to serve user needs.
    • The challenge now is making Hadoop more accessible to non-developers. Vendors are addressing this, so expect rapid advancements in Hadoop accessibility.
  • Oh, and also…
    • Orbitz is looking for a Lead Engineer for the BI/Big Data team.
    • Go to http://careers.orbitz.com/ and search for IRC19035.
  • References
    • MAD Skills: New Analysis Practices for Big Data, Jeffrey Cohen, Brian Dolan, Mark Dunlap, Joseph Hellerstein, and Caleb Welton, 2009