Chicago Data Summit: Extending the Enterprise Data Warehouse with Hadoop

Slide notes:
  • Most people think of orbitz.com, but Orbitz Worldwide is really a global portfolio of leading online travel consumer brands including Orbitz, Cheaptickets, The Away Network, ebookers and HotelClub. Orbitz also provides business-to-business services: Orbitz Worldwide Distribution provides hotel booking capabilities to a number of leading carriers such as Amtrak, Delta, LAN, KLM and Air France, and Orbitz for Business provides corporate travel services to a number of Fortune 100 clients. Orbitz started in 1999; the orbitz.com site launched in 2001.
  • Some benefits of Hadoop come up so often they almost become cliché, but based on our experience at Orbitz they've proven to be true, so they bear repeating.
  • On Orbitz alone we do millions of searches and transactions daily, and all of this activity leads to extremely large volumes of data – hundreds of GB per day. Not all of this data has value – much of it is logged for historical reasons and is no longer useful – but much of it is valuable. In addition, there's more data that we're not currently capturing that we know has value.
  • This chart isn't exactly an apples-to-apples comparison, but it provides some idea of the difference in cost per TB for the DW vs. Hadoop. Hadoop doesn't provide the same functionality as a data warehouse, but it does allow us to store and process data that wasn't practical before for economic and technical reasons.
  • Putting data into a DB or DWH requires having knowledge or making assumptions about how the data will be used. Either way you’re putting constraints around how the data is accessed and processed. With Hadoop each application can process the raw data in whatever way is required.
  • Our data warehouse contains a full archive of all transactions – every booking, refund, cancellation etc. Much valuable non-transactional data was just thrown away because it was uneconomical to store and didn’t necessarily have clear value.
  • Hadoop was deployed late 2009/early 2010 to begin collecting this non-transactional data. Orbitz has been using CDH for that entire period with great success. Much of this non-transactional data is contained in web analytics logs.
  • Having access to this data allows us to perform processing and analyses not previously possible.
  • Hadoop was first used to facilitate the machine learning team's work. This team needed access to large amounts of data on user interaction in order to do things like optimize hotel ranking and show consumers hotels more closely matching their preferences.
  • Hadoop is used to crunch data for input to a system to recommend products to users.
  • Although we use third-party sites to monitor site performance, Hadoop allows the front end team to provide detailed reports on page download performance, providing valuable trending data not available from other sources.
  • Hadoop collects and processes data for input to analyses to optimize cache performance.
  • Data is used for analysis of user segments, which can drive personalization. This chart shows that Safari users click on hotels with higher mean and median prices than other users do.
  • MAD: an acronym for magnetic, agile, and deep. Agile: the ability to quickly integrate new data sources. Deep: the ability to perform sophisticated analyses.
  • This would facilitate access to all of our data through standard BI tools. Also, most of our BI developers, not to mention users, work with SQL, ETL tools, etc.; they are not Java developers and won't be writing MapReduce jobs. We haven't yet achieved this data warehouse nirvana.
  • QlikView is used extensively for reporting at Orbitz. Although QlikView is working on enhancements to facilitate integration with tools such as Hadoop, there's no direct integration. This is understandable, since QlikView uses an in-memory model, which presents a challenge when dealing with Hadoop-sized data. We can, however, use Hadoop to summarize data for export to QlikView.
  • This provides an example of a typical processing flow for the large volumes of non-transactional data we're collecting. This processing allows us to convert large volumes of unstructured data into structured data that can be queried, extracted, etc. for further processing.
  • This type of processing also allows us to summarize large volumes of data into a data set that can be exported to the data warehouse, allowing us to query and report on that data using all of our standard BI tools.
  • Still being implemented, but a good example of how Hadoop allows us to offload time- and resource-intensive processing from the data warehouse.
  • Processing of click data gathered by web servers. This click data contains marketing info. The data cleansing step is done inside the data warehouse using a stored procedure, and further downstream processing generates the final data sets for reporting. Although this processing generates the required user reports, it consumes considerable time and resources on the data warehouse, resources that could otherwise be used for reports, queries, etc.
  • The ETL step is eliminated; instead, raw logs are uploaded to HDFS, which is a much faster process. Moving the data cleansing to MapReduce moves the "heavy lifting" of processing these relatively large data sets to Hadoop, takes advantage of Hadoop's efficiencies, and greatly speeds up the processing. (A minimal sketch of such a cleansing job follows these notes.)
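The deck doesn't include the cleansing code itself, so here is a minimal, hypothetical sketch of what that MapReduce cleansing step could look like. The record layout (tab-delimited click records), field positions, and validation rules are invented for illustration; only the overall shape (a map-only job that drops bad records and normalizes fields before export to the DW) reflects the flow described above.

    import java.io.IOException;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.NullWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.Mapper;
    import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
    import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

    // Hypothetical cleansing job: drops malformed click records and
    // normalizes a marketing-code field, replacing the DW stored procedure.
    public class ClickDataCleansing {

      public static class CleanseMapper
          extends Mapper<LongWritable, Text, Text, NullWritable> {

        @Override
        protected void map(LongWritable offset, Text line, Context context)
            throws IOException, InterruptedException {
          // Assumed record layout: timestamp, sessionId, pageId, marketingCode, ...
          String[] fields = line.toString().split("\t", -1);

          // Reject incomplete records here instead of in the warehouse.
          if (fields.length < 4 || fields[0].isEmpty() || fields[1].isEmpty()) {
            context.getCounter("cleansing", "dropped").increment(1);
            return;
          }

          // Normalize the (assumed) marketing code in place.
          fields[3] = fields[3].trim().toUpperCase();

          StringBuilder record = new StringBuilder(fields[0]);
          for (int i = 1; i < fields.length; i++) {
            record.append('\t').append(fields[i]);
          }
          context.write(new Text(record.toString()), NullWritable.get());
        }
      }

      public static void main(String[] args) throws Exception {
        Job job = new Job(new Configuration(), "click-data-cleansing");
        job.setJarByClass(ClickDataCleansing.class);
        job.setMapperClass(CleanseMapper.class);
        job.setNumReduceTasks(0); // map-only: cleansed records go straight to HDFS
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(NullWritable.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
      }
    }

With zero reducers, the cleansed records land directly in HDFS, ready for export to the data warehouse.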
Transcript of "Chicago Data Summit: Extending the Enterprise Data Warehouse with Hadoop"

1. Extending the Enterprise Data Warehouse with Hadoop. Robert Lancaster and Jonathan Seidman. Chicago Data Summit, April 26, 2011.

2. Who We Are
  • Robert Lancaster
    • Solutions Architect, Hotel Supply Team
    • [email_address]
    • @rob1lancaster
  • Jonathan Seidman
    • Lead Engineer, Business Intelligence/Big Data Team
    • Co-founder/organizer of Chicago Hadoop User Group (http://www.meetup.com/Chicago-area-Hadoop-User-Group-CHUG)
    • [email_address]
    • @jseidman

3. Launched: 2001, Chicago, IL

4. Why are we using Hadoop? Stop me if you've heard this before…

5. On Orbitz alone we do millions of searches and transactions daily, which leads to hundreds of gigabytes of log data every day.
6. Hadoop provides us with efficient, economical, scalable, and reliable storage and processing of these large amounts of data. [Chart: $ per TB, data warehouse vs. Hadoop]
7. And… Hadoop places no constraints on how data is processed.

8. Before Hadoop

9. With Hadoop

10. Access to this non-transactional data enables a number of applications…

11. Optimizing Hotel Search

12. Recommendations

13. Page Performance Tracking

14. Cache Analysis. A small number of queries (3%) make up more than a third of search volume.

15. User Segmentation

16. All of this is great, but…
  • Most of these efforts are driven by development teams.
  • The challenge now is to unlock the value in this data by making it more available to the rest of the organization.

17. "Given the ubiquity of data in modern organizations, a data warehouse can keep pace today only by being 'magnetic': attracting all the data sources that crop up within an organization regardless of data quality niceties."* (*MAD Skills: New Analysis Practices for Big Data)
18. In a better world…

19. Integrating Hadoop with the Enterprise Data Warehouse. Robert Lancaster and Jonathan Seidman. Chicago Data Summit, April 26, 2011.

20. The goal is a unified view of the data, allowing us to use the power of our existing tools for reporting and analysis.

21. BI vendors are working on integration with Hadoop…

22. And one more reporting tool…

23. Example Processing Pipeline for Web Analytics Data
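(The pipeline diagram from this slide isn't preserved in the transcript. As a rough illustration of its first stage, which the notes describe as converting unstructured log lines into structured data, a mapper along these lines could emit one structured (date:hotelId, 1) record per hotel search event. The log format, regex, and field names are assumptions for the sketch, not the actual Orbitz format.)

    import java.io.IOException;
    import java.util.regex.Matcher;
    import java.util.regex.Pattern;

    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Mapper;

    // Illustrative first pipeline stage: parse unstructured web analytics
    // log lines into structured (date:hotelId, 1) records.
    public class LogParseMapper
        extends Mapper<LongWritable, Text, Text, IntWritable> {

      // Assumed line shape: "2011-04-26T10:15:00 ... hotelId=12345 ..."
      private static final Pattern HOTEL_SEARCH =
          Pattern.compile("^(\\d{4}-\\d{2}-\\d{2}).*hotelId=(\\d+)");

      private static final IntWritable ONE = new IntWritable(1);
      private final Text outKey = new Text();

      @Override
      protected void map(LongWritable offset, Text line, Context context)
          throws IOException, InterruptedException {
        Matcher m = HOTEL_SEARCH.matcher(line.toString());
        if (!m.find()) {
          return; // not a hotel search event; skip it
        }
        outKey.set(m.group(1) + ":" + m.group(2)); // e.g. "2011-04-26:12345"
        context.write(outKey, ONE);
      }
    }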
24. Aggregating data for import into Data Warehouse
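(Continuing the hypothetical sketch above, and assuming both classes share a package, a companion reducer and driver could roll those records up into one search count per hotel per day, written as tab-delimited text small enough for a standard load into the data warehouse.)

    import java.io.IOException;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.Reducer;
    import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
    import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

    // Aggregation stage: sum the (date:hotelId, 1) records emitted by
    // LogParseMapper into one count per hotel per day.
    public class DailySearchCounts {

      public static class SumReducer
          extends Reducer<Text, IntWritable, Text, IntWritable> {

        @Override
        protected void reduce(Text key, Iterable<IntWritable> counts,
            Context context) throws IOException, InterruptedException {
          int total = 0;
          for (IntWritable c : counts) {
            total += c.get();
          }
          // Output line: "date:hotelId<TAB>count", ready for DW import.
          context.write(key, new IntWritable(total));
        }
      }

      public static void main(String[] args) throws Exception {
        Job job = new Job(new Configuration(), "daily-search-counts");
        job.setJarByClass(DailySearchCounts.class);
        job.setMapperClass(LogParseMapper.class);
        job.setCombinerClass(SumReducer.class); // safe: summing is associative
        job.setReducerClass(SumReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
      }
    }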
    25. 25. page Example Use Case: Beta Data Processing
    26. 26. Example Use Case – Beta Data Processing page
    27. 27. Example Use Case – Beta Data Processing Output page
    28. 28. page Example Use Case: RCDC Processing
    29. 29. Example Use Case – RCDC Processing page
    30. 30. page Example Use Case: Click Data Processing
31. Click Data Processing – Current DW Processing. [Flow diagram: Web Servers → Web Server Logs → ETL → DW → Data Cleansing (stored procedure) → DW; annotated "3 hours", "2 hours", "~20% original data size"]

32. Click Data Processing – New Hadoop Processing. [Flow diagram: Web Servers → Web Server Logs → HDFS → Data Cleansing (MapReduce) → DW]
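(As a small, hypothetical illustration of the "raw logs straight into HDFS" step that replaces the ETL stage; the paths are made up.)

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    // Copy a day's raw web server log from local disk into an HDFS
    // landing directory, skipping the old ETL step entirely.
    public class LogUploader {
      public static void main(String[] args) throws Exception {
        FileSystem fs = FileSystem.get(new Configuration());
        fs.copyFromLocalFile(
            new Path("/var/log/webserver/clicks.log"),  // local source
            new Path("/data/clicks/raw/2011-04-26/"));  // HDFS destination
      }
    }

The cleansing job sketched after the notes above would then read from that landing directory and write cleansed records for export to the DW.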
33. Conclusions
  • The market is still immature, but Hadoop has already become a valuable business intelligence tool, and it will become an increasingly important part of BI infrastructure.
  • Hadoop won't replace your EDW, but any organization with a large EDW should at least be exploring Hadoop as a complement to its BI infrastructure.
  • Use Hadoop to offload the time- and resource-intensive processing of large data sets so you can free up your data warehouse to serve user needs.
  • The challenge now is making Hadoop more accessible to non-developers. Vendors are addressing this, so expect rapid advancements in Hadoop accessibility.
34. Oh, and also…
  • Orbitz is looking for a Lead Engineer for the BI/Big Data team.
  • Go to http://careers.orbitz.com/ and search for IRC19035.

35. References
  • MAD Skills: New Analysis Practices for Big Data. Jeffrey Cohen, Brian Dolan, Mark Dunlap, Joseph Hellerstein, and Caleb Welton, 2009.
