Gartner Peer Forum, September 2011: Orbitz

Speaker Notes
  • Welcome, everyone. I will be presenting on how we are shaping web analytics and big data to optimize data-driven decisions at Orbitz Worldwide. I will also talk about the process model we use to effectively apply the brains and manpower across the organization toward a common goal. Between the two of us, Jonathan and I promise to give you some thought-provoking details about analytics and big data. :-)
  • Most people think of orbitz.com, but Orbitz Worldwide is really a global portfolio of leading online travel consumer brands, including Orbitz, CheapTickets, The Away Network, ebookers, and HotelClub. Orbitz also provides business-to-business services: Orbitz Worldwide Distribution provides hotel booking capabilities to a number of leading carriers such as Amtrak, Delta, LAN, KLM, and Air France, and Orbitz for Business provides corporate travel services to a number of Fortune 100 clients. Orbitz started in 1999; the orbitz.com site launched in 2001.
  • A couple of years ago when I mentioned Hadoop I’d often get blank stares, even from developers. I think most folks now are at least aware of what Hadoop is.
  • This chart isn’t exactly an apples-to-apples comparison, but it gives some idea of the difference in cost per TB for the data warehouse vs. Hadoop. Hadoop doesn’t provide the same functionality as a data warehouse, but it does allow us to store and process data that wasn’t practical before for economic and technical reasons. Putting data into a database or data warehouse requires having knowledge of, or making assumptions about, how the data will be used; either way, you’re putting constraints around how the data is accessed and processed. With Hadoop, each application can process the raw data in whatever way is required. If you decide you need to analyze different attributes, you just run a new query.
  • The initial motivation was to solve a particular business problem. Orbitz wanted to be able to use intelligent algorithms to optimize various site functions, for example optimizing hotel search by showing consumers hotels that more closely match their preferences, leading to more bookings.
  • Improving hotel search requires access to such data as which hotels users saw in search results, which hotels they clicked on, and which hotels were actually booked. Much of this data was available in web analytics logs.
  • Management was supportive of anything that facilitated the machine learning team’s efforts. But when we presented a hardware spec for servers with local, non-RAID storage, etc., systems engineering offered us blades with attached storage.
  • Hadoop is used to crunch data for input to a system that recommends products to users. Although we use third-party sites to monitor site performance, Hadoop allows the front-end team to provide detailed reports on page download performance, providing valuable trending data not available from other sources. Data is used for analysis of user segments, which can drive personalization; this chart shows that Safari users click on hotels with higher mean and median prices than other users do. This is just a handful of examples of how Hadoop is driving business value.
  • Recently received an email from a user seeking access to Hive. Sent him a detailed email with info on accessing Hive, etc. Received an email back basically saying “you lost me at ssh”.
  • Prior to 2011, Hadoop responsibilities were split across technology teams. Moving them under a single team centralized responsibility and resources for Hadoop.
  • Processing of click data gathered by web servers: this click data contains marketing info. The data cleansing step is done inside the data warehouse using a stored procedure, and further downstream processing generates the final data sets for reporting. Although this produces the required user reports, it consumes considerable time and resources on the data warehouse that could otherwise be used for reports, queries, etc.
  • The ETL step is eliminated; instead, raw logs are uploaded to HDFS, which is a much faster process. Moving the data cleansing to MapReduce shifts the heavy lifting of processing these relatively large data sets to Hadoop, taking advantage of Hadoop’s efficiencies and greatly speeding up the processing.
  • The bad news is that we need to significantly increase the number of servers in our cluster; the good news is that this is because teams are using Hadoop and new projects are coming online.
  • I met someone at the train station who asked me what I do. I said I work in the web analytics field, helping shape the strategy and vision at Orbitz Worldwide and enabling our business teams to get insights into the performance of our site and act upon them. He said, “Ah, you do reporting.” :-) That got me thinking about why web analytics is so hard for people to get, and I started evangelizing it both within and outside Orbitz. I manage the web analytics team at Orbitz Worldwide, and I also try to help out non-profit organizations when I’m not busy with my wife and two sons.
  • So what is web analytics? Read the definition: it tells you exactly why someone came to your site and what kind of impact they had on your bottom-line revenue. You need to immerse yourself in data to understand the story it’s telling. Focus on the customer: the customer is king, and you need to listen to and act upon their feedback. And test, test, test: if you want to prove or disprove a HiPPO’s opinion, you need to run tests on your site. By the way, “HiPPO” is a common term in the industry; it stands for the Highest Paid Person’s Opinion. :-)
  • So with so many brands and so much data, we had quite a few challenges. For starters, we couldn’t easily do multi-dimensional analysis with the tools. With data spread across multiple tools, it was hard to see the whole picture, and of course the tools cost money. It was also harder for people to understand where to look for data. With analytics, you need direction rather than precision to take action and get insights.
  • On the big data front, we didn’t have a good infrastructure where we could house all this data in a cost-effective way, and data extraction was not an easy task. We also had to focus on the key differences between when you need testing and when you need reporting. Earlier I mentioned that you need to do rigorous outcome analysis; with all the challenges we faced, that was not an easy task.
  • So how do we fit the puzzle together? By learning the behavior of the customer and focusing on key attributes. Know the travel details: how many travelers, what kind of travelers, any preferred carriers or hotels? Understand the shopping patterns: does the customer shop only on weekends, or only on Thursdays? Focus on visit patterns: how many times do they come to the site before they buy anything? Learn the page navigation: do they view a hundred pages every visit, or do they know exactly what to look at? Master the demand source: anyone who has worked on the marketing side knows that attribution is a holy war. Deciding which demand source gets the credit for a conversion is something people will argue about to the death, just like the IDE war between Vim, Emacs, IntelliJ, and Eclipse. :-)
  • We realized that with all the challenges we had, we needed to innovate and experiment with new ways to enable successful web analytics at OWW. We generate hundreds of GB of log data per day: how can we effectively store this massive data set, and how can we mine it and make sense of it? Our existing data warehouse was not intended to store, much less process, such large data sets, and we also needed to make sure we didn’t spend huge money storing them. Big data infrastructure built on Hadoop has been a huge success at Orbitz and at other organizations. So what does this buy us? We can now store data for a long period of time without worrying too much about space. Analysts and developers have access to this data set, and developers can run ad hoc queries to support our business needs, while the core web analytics team focuses on company standards and metrics.
  • Here is an example of how we process our site analytics data today. We FTP the log files into our Hadoop infrastructure daily; the files are LZO-compressed for better storage utilization. Developers then write MapReduce jobs against these raw log files to output data into Hive tables (Hive is the data warehouse equivalent for Hadoop). Most of the MapReduce jobs are written in Java or in scripting languages such as Python, Ruby, and Bash. Business teams, however, have the skill set to run queries against Hive tables.
  • Since the big data market is not that mature, there are no good ways to build visualization on top of Hive. Due to this, and for other reasons, we need to bring a subset of this data into our warehouse; so, in essence, the data in Hive makes its way into the warehouse. Companies such as Karmasphere and Datameer are in the initial stages of bridging the gap between business needs and Hadoop access, but it’s too early to say whether this will become the norm.
  • We focused on some key areas of our business, such as demand source and campaigns, as our pilot, and worked with our business partners to enable analytics on big data. We have developers writing MapReduce jobs that run every day and populate Hive tables. We generate more than 25 million records a month for the pilot use case alone, which showcases the sheer magnitude and power of analytics within the big data framework.
  • If you have read Avinash Kaushik’s book or follow his blog, Occam’s Razor, then you know the two things he always contrasts: “data puke” and gold (insights). Here we have a depiction of all kinds of insights provided in a dashboard format to our business users. These insights were only made possible by the data we housed in and extracted from Hadoop. Obviously, I can’t share what these graphs mean without giving away more detail.
  • So how do you organizationally structure yourself and big data so that you can be effective, both in terms of resource utilization and in setting the platform up for success? This is what we call centralized decentralization. With this approach, the core web analytics team controls and supports the individual teams when it comes to data extraction and modeling. This prevents one team from becoming the bottleneck for data extraction and analytics. If you have ever worked on the data warehouse side of the world, you know the challenges and delays in getting the data.
  • With the core process of centralized decentralization and being agile, how do you succeed? You can’t manage what you can’t measure, but once you measure, make sure you fail fast. Every team needs to be thinking about analytics with every feature they work on. Dimensional modeling is great, but as someone wise said, “All models are wrong, but some are useful.” :-) My point here is that data without analysis is like a Ferrari without gas. If you make it a point to extract smaller chunks of data and tie the effort to your business objectives, you are sure to succeed.
  • Here are some key learnings from our experience and some thoughts for you to consider. If you have the technology strength, go for it, but be aware that this requires heavy investment in time and resources. And as I have mentioned many times, data without analysis is worthless.
  • Thanks again for listening to our story; we are available for any further questions you may have. Also, if you know anyone who is interested in working at Orbitz, please check out the careers site.

Presentation Transcript

  • Architecting for Big Data: Integrating Hadoop into an Enterprise Data Infrastructure. Raghu Kashyap and Jonathan Seidman. Gartner Peer Forum, September 14, 2011
  • Who We Are
    • Raghu Kashyap
      • Director, Web Analytics
      • [email_address]
      • @ragskashyap
      • http://kashyaps.com
    • Jonathan Seidman
      • Lead Engineer, Business Intelligence/Big Data Team
      • Co-founder/organizer of Chicago Hadoop User Group http://www.meetup.com/Chicago-area-Hadoop-User-Group-CHUG/ and Chicago Big Data http://www.meetup.com/Chicago-Big-Data/
      • [email_address]
      • @jseidman
  • Launched in 2001, Chicago, IL. Over 160 million bookings
  • What is Hadoop?
    • Open source software that supports the storage and analysis of extremely large volumes of data – typically terabytes to petabytes.
    • Two primary components:
      • Hadoop Distributed File System (HDFS) provides economical, reliable, fault tolerant and scalable storage of very large datasets across machines in a cluster.
      • MapReduce is a programming model for efficient distributed processing. Designed to reliably perform computations on large volumes of data in parallel.
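
To make the MapReduce model concrete, here is a minimal sketch of a job written against the org.apache.hadoop.mapreduce API of the CDH3 era: it counts clicks per hotel from tab-delimited log lines. The log layout, field position, and class names are hypothetical illustrations, not the actual Orbitz jobs.

```java
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class HotelClickCount {

  // Map: one log line in, (hotelId, 1) out.
  public static class ClickMapper
      extends Mapper<LongWritable, Text, Text, IntWritable> {
    private static final IntWritable ONE = new IntWritable(1);
    private final Text hotelId = new Text();

    @Override
    protected void map(LongWritable offset, Text line, Context ctx)
        throws IOException, InterruptedException {
      // Hypothetical layout: tab-separated fields, hotel id in column 3.
      String[] fields = line.toString().split("\t");
      if (fields.length > 2 && !fields[2].isEmpty()) {
        hotelId.set(fields[2]);
        ctx.write(hotelId, ONE);
      }
    }
  }

  // Reduce: sum the 1s emitted for each hotel id.
  public static class SumReducer
      extends Reducer<Text, IntWritable, Text, IntWritable> {
    @Override
    protected void reduce(Text hotelId, Iterable<IntWritable> counts, Context ctx)
        throws IOException, InterruptedException {
      int sum = 0;
      for (IntWritable c : counts) {
        sum += c.get();
      }
      ctx.write(hotelId, new IntWritable(sum));
    }
  }

  public static void main(String[] args) throws Exception {
    Job job = new Job(new Configuration(), "hotel click count");
    job.setJarByClass(HotelClickCount.class);
    job.setMapperClass(ClickMapper.class);
    job.setCombinerClass(SumReducer.class); // pre-aggregate on the map side
    job.setReducerClass(SumReducer.class);
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(IntWritable.class);
    FileInputFormat.addInputPath(job, new Path(args[0]));
    FileOutputFormat.setOutputPath(job, new Path(args[1]));
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}
```

HDFS splits the input across the cluster, a mapper runs per block, and the framework handles shuffling, sorting, and retries; the same jar runs unchanged whether the input is one test file or a full day of logs.
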
  • Why Hadoop?
    • Hadoop allows us to store and process data that was previously impractical because of cost, technical issues, etc., and places no constraints on how that data is processed.
    [Chart: $ per TB, data warehouse vs. Hadoop]
  • Why We Started Using Hadoop: Optimizing hotel search…
  • Why We Started Using Hadoop
    • In 2009, the Machine Learning team was formed to improve site performance. For example, improving hotel search results.
    • This required access to large volumes of behavioral data for analysis.
  • The Problem…
    • The only archive of the required data went back about two weeks.
    [Diagram: transactional data (e.g. bookings) → data warehouse; non-transactional data (e.g. searches) → no long-term store]
  • Hadoop Was Selected as a Solution… [Diagram: transactional data (e.g. bookings) → data warehouse; non-transactional data (e.g. searches) → Hadoop]
  • Unfortunately…
    • We faced organizational resistance to deploying Hadoop.
      • Not from management, but from other technical teams.
    • Required persistence to convince them that we needed to introduce a new hardware spec to support Hadoop.
  • Current Big Data Infrastructure [Diagram: Hadoop (HDFS + MapReduce) runs MapReduce jobs (Java, Python, R/RHIPE) and analytic tools (Hive, Pig); aggregated data flows to the data warehouse (Greenplum) via psql, gpload, and Sqoop, and to external analytical jobs (Java, R, etc.)]
  • Hadoop Architecture Details
    • Production cluster
      • About 200TB of raw storage
      • 336 (physical) cores
      • 672GB RAM
      • 4 client nodes (Hive, ad-hoc jobs, scheduled jobs, etc.)
    • Development cluster for user testing
    • Test cluster for testing upgrades, new software, etc.
    • Cloudera CDH3
  • Deploying Hadoop Enabled Multiple Applications…
  • But Brought New Challenges…
    • Most of these efforts are driven by development teams.
    • The challenge now is unlocking the value of this data for non-technical users.
  • In Early 2011…
    • Big Data team is formed under Business Intelligence team at Orbitz Worldwide.
    • Reflects the importance of big data to the future of the company.
    • Allows the Big Data team to work more closely with the data warehouse and BI teams.
    • We’re also evaluating tools to facilitate analysis of Hadoop data by the wider organization.
  • Karmasphere Analyst [product screenshots]
  • Datameer Analytics Solution [product screenshots]
  • Not to Mention Other BI Vendors…
  • One More Use Case – Click Data Processing
    • Still under development, but a good example of how Hadoop can be used to complement an existing data warehouse.
  • Click Data Processing – Current Data Warehouse Processing [Diagram: web servers → web server logs → ETL into the DW → data cleansing (stored procedure) inside the DW; the two steps take roughly 3 hours and 2 hours, and the cleansed data is ~20% of the original size]
  • Click Data Processing – Proposed Hadoop Processing [Diagram: web servers → web server logs → HDFS → data cleansing (MapReduce) → DW]
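
The slides don't specify the cleansing job's structure, so the following is only a plausible sketch: a map-only job (zero reducers) that drops malformed records and projects down to the fields needed downstream, which is how the cleansed output ends up a small fraction of the raw input. The field positions, validity check, and output projection are hypothetical assumptions, not the production job.

```java
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class ClickLogCleanser {

  public static class CleanseMapper
      extends Mapper<LongWritable, Text, NullWritable, Text> {
    private final Text cleansed = new Text();

    @Override
    protected void map(LongWritable offset, Text rawLine, Context ctx)
        throws IOException, InterruptedException {
      // Hypothetical layout: tab-separated raw click record.
      String[] f = rawLine.toString().split("\t");
      // Drop malformed records and records missing a session id (field 0).
      if (f.length < 5 || f[0].isEmpty()) {
        return;
      }
      // Project down to the fields the reports actually need.
      cleansed.set(f[0] + "\t" + f[1] + "\t" + f[4]);
      ctx.write(NullWritable.get(), cleansed);
    }
  }

  public static void main(String[] args) throws Exception {
    Job job = new Job(new Configuration(), "click log cleansing");
    job.setJarByClass(ClickLogCleanser.class);
    job.setMapperClass(CleanseMapper.class);
    job.setNumReduceTasks(0); // map-only: no shuffle or reduce phase at all
    job.setOutputKeyClass(NullWritable.class);
    job.setOutputValueClass(Text.class);
    FileInputFormat.addInputPath(job, new Path(args[0]));
    FileOutputFormat.setOutputPath(job, new Path(args[1]));
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}
```

Because the reducer count is zero there is no shuffle at all, so throughput is bounded only by how fast the mappers can stream through the raw logs in parallel; that is one reason this approach can be so much faster than the stored-procedure path.
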
  • Lessons Learned
    • Expect organizational resistance from unanticipated directions.
    • Advice for finding big data developers:
      • Don’t bother.
      • Instead, train smart and motivated internal resources or new hires.
    • But get help if you need it.
      • There are a number of experienced providers who can help you get started.
  • Lessons Learned
    • Hadoop market is still immature, but growing quickly. Better tools are on the way.
      • Look beyond the usual (enterprise) suspects. Many of the most interesting companies in the big data space are small startups.
    • Use the appropriate tool based on requirements. Treat Hadoop as a complement, not replacement, to traditional data stores.
  • Lessons Learned
    • Work closely with your existing data management teams.
      • Your idea of what constitutes “big data” might quickly diverge from theirs.
    • The flip-side to this is that Hadoop can be an excellent tool to off-load resource-consuming jobs from your data warehouse.
  • In the Near Future…
    • Production cluster capacity increase:
      • ~500TB of raw storage.
    • Further integration with the data warehouse.
    • Deployment of analysis and reporting tools on top of Hadoop.
    • Web Analytics and Big Data
  • What is Web Analytics?
    • Understand the impact and economic value of the website
    • Rigorous outcome analysis
    • Passion for customer centricity by embracing voice-of-customer initiatives
    • Fail faster by leveraging the power of experimentation (MVT)
  • Challenges
    • Site Analytics
      • Lack of multi-dimensional capabilities
      • Hard to find the right insight
      • Heavy investment in the tools
      • Precision vs Direction
  • Challenges (continued)
    • Big Data
      • No data unification or uniform platform across organizations and business units
      • No easy data extraction capabilities
    • Business
      • Distinction between reporting and testing (MVT)
      • Minimal measurement of outcomes
  • Data Categories
    • Traffic acquisition
    • Marketing optimization
    • User engagement
    • Ad optimization
    • User behaviour
  • Web Analytics & Big Data
    • OWW generates a couple million air and hotel searches every day.
    • Massive amounts of data. Hundreds of GB of log data per day.
    • Expensive and difficult to store and process this data using existing data infrastructure.
  • Processing of Web Analytics Data
  • Aggregating data into Data Warehouse
  • Data Analysis Jobs
    • Traffic Source and Campaign activities
    • Daily jobs, Weekly analysis
    • MapReduce job
      • ~20 minutes for one day's raw logs
      • ~3 minutes to load into Hive tables
      • Generates more than 25 million records per month
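
Once the daily jobs have populated the Hive tables, anyone comfortable with SQL can query them without writing MapReduce. Below is a minimal sketch over Hive's JDBC interface, assuming the HiveServer1-era driver that shipped with CDH3; the host, table, and column names are hypothetical.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class CampaignClickReport {
  public static void main(String[] args) throws Exception {
    // HiveServer1-era JDBC driver and URL (Hive 0.x / CDH3).
    Class.forName("org.apache.hadoop.hive.jdbc.HiveDriver");
    Connection conn = DriverManager.getConnection(
        "jdbc:hive://hive-client:10000/default", "", "");
    Statement stmt = conn.createStatement();

    // Hypothetical table: clicks(dt STRING, demand_source STRING, ...)
    ResultSet rs = stmt.executeQuery(
        "SELECT demand_source, COUNT(*) AS clicks "
            + "FROM clicks WHERE dt = '2011-09-01' "
            + "GROUP BY demand_source");
    while (rs.next()) {
      System.out.println(rs.getString(1) + "\t" + rs.getLong(2));
    }
    conn.close();
  }
}
```

Hive compiles the query into MapReduce jobs behind the scenes, so a one-line GROUP BY stands in for the kind of hand-written job sketched earlier.
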
  • Business Insights [dashboard screenshots]
  • Centralized Decentralization: Web Analytics team + SEO team + Hotel Optimization team
  • Model for success
    • Measure the performance of your feature and fail fast
    • Experimentation and testing should be ingrained into every key feature.
    • Break data extraction down into smaller chunks
  • Should everyone do this?
    • Do you have the technology strength to invest in and use big data?
    • Analytics using Big Data comes with a price (resource, time)
    • Big Data mining != analysis
    • Key Data warehouse challenges still exist (time, data validity)
  • Questions?
    • http://careers.orbitz.com