Welcome everyone. I will be presenting on how we are shaping web analytics and big data to optimize data-driven decisions at Orbitz Worldwide. I will also be talking about the process model we use to effectively focus the brains and manpower across the organization on a common goal. Between me and Jonathan, we promise to give you some thought-provoking details about analytics and big data. :-)
Most people think of orbitz.com, but Orbitz Worldwide is really a global portfolio of leading online travel consumer brands including Orbitz, CheapTickets, The Away Network, ebookers, and HotelClub. Orbitz also provides business-to-business services: Orbitz Worldwide Distribution provides hotel booking capabilities to a number of leading carriers such as Amtrak, Delta, LAN, KLM, and Air France, and Orbitz for Business provides corporate travel services to a number of Fortune 100 clients. The company was started in 1999, and the Orbitz site launched in 2001.
A couple of years ago when I mentioned Hadoop I’d often get blank stares, even from developers. I think most folks now are at least aware of what Hadoop is.
This chart isn’t exactly an apples-to-apples comparison, but it gives some idea of the difference in cost per TB for the data warehouse vs. Hadoop. Hadoop doesn’t provide the same functionality as a data warehouse, but it does allow us to store and process data that wasn’t practical before for economic and technical reasons. Putting data into a database or data warehouse requires having knowledge of, or making assumptions about, how the data will be used. Either way you’re putting constraints around how the data is accessed and processed. With Hadoop, each application can process the raw data in whatever way is required. If you decide you need to analyze different attributes, you just run a new query.
The initial motivation was to solve a particular business problem. Orbitz wanted to be able to use intelligent algorithms to optimize various site functions, for example optimizing hotel search by showing consumers hotels that more closely match their preferences, leading to more bookings.
Improving hotel search requires access to such data as which hotels users saw in search results, which hotels they clicked on, and which hotels were actually booked. Much of this data was available in web analytics logs.
Management was supportive of anything that facilitated the machine learning team’s efforts. But when we presented a hardware spec for servers with local non-RAIDed storage, etc., systems engineering offered us blades with attached storage instead.
Hadoop is used to crunch data for input to a system that recommends products to users. Although we use third-party sites to monitor site performance, Hadoop allows the front-end team to provide detailed reports on page download performance, providing valuable trending data not available from other sources. The data is also used for analysis of user segments, which can drive personalization. This chart shows that Safari users click on hotels with higher mean and median prices than other users do. These are just a handful of examples of how Hadoop is driving business value.
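To make the segment analysis concrete, here is a minimal Python sketch of how mean and median clicked-hotel prices per browser segment could be computed. The record layout and the sample values are hypothetical, not our actual data; in practice the rows would come from a query over the click logs.

```python
from collections import defaultdict
from statistics import mean, median

# Hypothetical click records: (browser, clicked_hotel_price).
clicks = [
    ("safari", 210.0), ("safari", 185.0), ("safari", 240.0),
    ("firefox", 120.0), ("firefox", 95.0), ("firefox", 160.0),
]

def price_stats_by_segment(rows):
    """Group clicked-hotel prices by browser and return (mean, median) per segment."""
    prices = defaultdict(list)
    for browser, price in rows:
        prices[browser].append(price)
    return {b: (mean(p), median(p)) for b, p in prices.items()}

stats = price_stats_by_segment(clicks)
print(stats["safari"])  # mean and median price for the Safari segment
```

The same grouping pattern scales up naturally: in a MapReduce setting the browser becomes the key and the reducer computes the statistics per key.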
Recently received an email from a user seeking access to Hive. Sent him a detailed email with info on accessing Hive, etc. Received an email back basically saying “you lost me at ssh”.
Prior to 2011, Hadoop responsibilities were split across technology teams. Moving under a single team centralized responsibility and resources for Hadoop.
Processing of click data gathered by web servers. This click data contains marketing info. The data cleansing step is done inside the data warehouse using a stored procedure, and further downstream processing generates the final data sets for reporting. Although this generates the required user reports, it consumes considerable time and resources on the data warehouse, resources that could otherwise be used for reports, queries, etc.
The ETL step is eliminated; instead, raw logs will be uploaded to HDFS, which is a much faster process. Moving the data cleansing to MapReduce moves the “heavy lifting” of processing these relatively large data sets to Hadoop, taking advantage of Hadoop’s efficiencies and greatly speeding up the processing.
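As a minimal sketch of what the MapReduce cleansing step might look like, here is a Hadoop Streaming mapper in Python. The field layout, delimiter, and validity rules are assumptions for illustration, not our actual job; Streaming feeds raw lines on stdin and collects the emitted records from stdout.

```python
import sys

def clean(line):
    """Map one raw click-log line to a cleansed tab-separated record,
    or return None to drop it. Hypothetical field layout:
    timestamp|session_id|url|marketing_code"""
    fields = line.rstrip("\n").split("|")
    if len(fields) != 4:
        return None                      # malformed record: drop it
    ts, session_id, url, mkt = fields
    if not session_id or not ts.isdigit():
        return None                      # missing session or bad timestamp
    return "\t".join([ts, session_id, url.strip(), mkt.upper()])

if __name__ == "__main__":
    # Hadoop Streaming invokes this script once per map task.
    for raw in sys.stdin:
        out = clean(raw)
        if out is not None:
            print(out)
```

Because bad records are simply dropped in the map phase, the cleansed output lands in HDFS already reduced in size before anything is loaded into the warehouse.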
The bad news is that we need to significantly increase the number of servers in our cluster; the good news is that this is because teams are using Hadoop and new projects are coming online.
I met someone at the train station who asked me what I do. I said I work in the web analytics field, helping shape the strategy and vision at Orbitz Worldwide and enabling our business teams to get insights on the performance of our site and act upon them. He said, “Ah, you do reporting.” :-) That got me thinking about why web analytics is hard for people to get, and I started evangelizing it both within and outside Orbitz. I manage the web analytics team at Orbitz Worldwide, and I also try to help out non-profit organizations when I’m not busy with my wife and two sons.
So what is web analytics? (Read the definition.) It tells you exactly why someone came to your site and what kind of impact they had on your bottom-line revenue. (Read the definition.) You need to immerse yourself in data to understand the story it’s telling. (Read the definition.) Focus on the customer. The customer is king; you need to listen to and act upon their feedback. (Read the definition.) Test, test, and test. If you want to prove or disprove a HiPPO’s opinion, you need to run tests on your site. By the way, HiPPO is a common term in the industry: it stands for the Highest Paid Person’s Opinion. :-)
So with so many brands and so much data, we had quite a few challenges. For starters, we couldn’t easily do multi-dimensional analysis with the tools. With data spread across multiple tools, it was hard to see the whole picture. Obviously, tools cost money. It was also harder for people to understand where to look for data. With analytics, you need direction rather than precision to take action and get insights.
On the big data front, we didn’t have a good infrastructure where we could house all this data in a cost-effective way. Data extraction was not an easy task. We also had to focus on the key differences between when you need testing versus when you need reporting. Earlier I mentioned that you need to do rigorous outcome analysis; with all the challenges we faced, that was not easy.
So how do we fit the puzzle together? By learning the behavior of the customer and focusing on key attributes. Know the travel details: how many travelers, what kind of travelers, any preferred carriers or hotels? Understand the shopping patterns: does the customer shop only on weekends, or only on Thursdays? Focus on visit patterns: how many times does he come to the site before he buys anything? Learn the page navigation: does he view a hundred pages every visit, or does he know exactly what to look at? Master the demand source: anyone who has worked on the marketing side knows that attribution is a holy war. Deciding which demand source gets credit for a conversion is something people will argue to the death, just like the IDE war between Vim, Emacs, IntelliJ, and Eclipse. :-)
We realized that with all the challenges we had, we needed to innovate and experiment with new ways to enable successful web analytics at OWW. We generate hundreds of GB of log data per day. How can we effectively store this massive data set, and how can we mine it and make sense of it? Our existing data warehouse was not intended to support such large data sets, much less process them, and we also needed to make sure we didn’t spend huge money storing them. A big data infrastructure built on Hadoop has been a huge success at Orbitz and at other organizations. So what does this buy us? We can now store data for a long period of time without worrying too much about space. Analysts and developers have access to this data set, and developers can run ad hoc queries to support business needs while the core web analytics team focuses on company standards and metrics.
Here is an example of how we process our site analytics data today. We FTP the log files into our Hadoop infrastructure daily. The files are LZO-compressed for better storage utilization. Developers then write MapReduce jobs against these raw log files to output data into Hive tables. (Hive is the data warehouse equivalent for Hadoop.) Most of the MapReduce jobs are written in Java or in scripting languages such as Python, Ruby, and Bash. Business teams, meanwhile, have the skill set to run queries against the Hive tables.
Since the big data market is not yet mature, there are no good ways to build visualization on top of Hive. Because of this, and for other reasons, we need to bring a subset of this data into our warehouse. So, in essence, some of the data in Hive will make it into the warehouse. There are companies such as Karmasphere and Datameer that are in the initial stages of bridging the gap between business needs and Hadoop access, but it’s too early to say whether this will become the norm.
We focused on some key areas of our business, such as demand source and campaigns, as our pilot, and worked with our business partners to enable analytics on big data. We have developers writing MapReduce jobs which run every day and populate Hive tables. We generate more than 25 million records a month for the pilot use case we worked on, which showcases the sheer magnitude and power of analytics within the big data framework.
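To illustrate how a daily job’s output might be registered in Hive, here is a small Python helper that builds (but deliberately does not execute) a `hive -e` load command for one day’s MapReduce output. The HDFS path, table name, and partition column are hypothetical, not our actual schema:

```python
from datetime import date

def build_hive_load(day, hdfs_dir="/data/output/campaigns",
                    table="campaign_activity"):
    """Build the `hive -e` command that would load one day's MapReduce
    output into a date-partitioned Hive table (names are hypothetical)."""
    ds = day.isoformat()
    hql = (f"LOAD DATA INPATH '{hdfs_dir}/{ds}' "
           f"INTO TABLE {table} PARTITION (ds='{ds}')")
    return ["hive", "-e", hql]

cmd = build_hive_load(date(2011, 9, 14))
print(cmd[2])
```

A daily scheduler would run the MapReduce job first and then pass this command to `subprocess`; partitioning by date keeps each day’s load cheap and makes old days easy to query or drop.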
So if you have read Avinash Kaushik’s book and follow his blog “Occam’s Razor,” then you know he always mentions two terms: “data puke” and gold (insights). Here we have a nice depiction of all kinds of insights provided in a dashboard format to our business users. These insights were only made possible by the data that we housed in and extracted from Hadoop. Obviously I can’t share what these graphs mean without giving away more details.
So how do you organizationally structure yourself and big data so that you can be effective, both in terms of resource utilization and in setting the platform up for success? This is what we call centralized decentralization. With this approach the core web analytics team controls and supports the individual teams when it comes to data extraction and modeling. This prevents one team from becoming the bottleneck for data extraction and analytics. If you have ever worked on the data warehouse side of the world, you know the challenges and delays in getting data.
With the core process of centralized decentralization and being agile, how do you succeed? You can’t manage what you can’t measure; but once you measure, make sure you fail fast. Every team needs to be thinking about analytics with every feature they work on. Dimensional modeling is great, but as someone wise said, “All models are wrong, but some are useful.” :-) My point here is that data without analysis is like a Ferrari without gas. If you make it a point to extract smaller chunks of data and tie this effort to your business objectives, you are sure to succeed.
Here are some key learnings from our experience, and some thoughts for you to consider. If you have the technology strength, go for it; this needs heavy investment in both time and resources. Like I have mentioned many times, data without analysis is worthless.
Thanks again for listening to our story; we are available for any further questions you may have. Also, if you know anyone who is interested in working at Orbitz, please check out the career site.
Transcript of "Gartner peer forum sept 2011 orbitz"
Architecting for Big Data Integrating Hadoop into an Enterprise Data Infrastructure Raghu Kashyap and Jonathan Seidman Gartner Peer Forum September 14 | 2011
Who We Are <ul><li>Raghu Kashyap </li></ul><ul><ul><li>Director, Web Analytics </li></ul></ul><ul><ul><li>[email_address] </li></ul></ul><ul><ul><li>@ragskashyap </li></ul></ul><ul><ul><li>http://kashyaps.com </li></ul></ul><ul><li>Jonathan Seidman </li></ul><ul><ul><li>Lead Engineer, Business Intelligence/Big Data Team </li></ul></ul><ul><ul><li>Co-founder/organizer of Chicago Hadoop User Group http://www.meetup.com/Chicago-area-Hadoop-User-Group-CHUG/ and Chicago Big Data http://www.meetup.com/Chicago-Big-Data/ </li></ul></ul><ul><ul><li>[email_address] </li></ul></ul><ul><ul><li>@jseidman </li></ul></ul>page
Launched in 2001, Chicago, IL. Over 160 million bookings.
What is Hadoop? <ul><li>Open source software that supports the storage and analysis of extremely large volumes of data – typically terabytes to petabytes. </li></ul><ul><li>Two primary components: </li></ul><ul><ul><li>Hadoop Distributed File System (HDFS) provides economical, reliable, fault tolerant and scalable storage of very large datasets across machines in a cluster. </li></ul></ul><ul><ul><li>MapReduce is a programming model for efficient distributed processing. Designed to reliably perform computations on large volumes of data in parallel. </li></ul></ul>
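The two phases described above can be illustrated with a tiny in-process simulation of the MapReduce model in Python. A real job would distribute the map and reduce tasks across the cluster, with the framework handling the shuffle and sort between them, but the shape of the computation is the same; the word-count example here is the classic illustration, not one of our production jobs.

```python
from itertools import groupby
from operator import itemgetter

def mapper(record):
    """Map phase: emit (word, 1) for each word in a record."""
    for word in record.split():
        yield (word.lower(), 1)

def reducer(key, values):
    """Reduce phase: sum the counts for one key."""
    return (key, sum(values))

def run_job(records):
    # Map, then shuffle/sort by key, then reduce each key group,
    # mimicking what the Hadoop framework does between phases.
    mapped = [kv for r in records for kv in mapper(r)]
    mapped.sort(key=itemgetter(0))
    return dict(reducer(k, (v for _, v in g))
                for k, g in groupby(mapped, key=itemgetter(0)))

counts = run_job(["chicago hotel", "hotel search", "Hotel"])
print(counts)
```

Because the mapper and reducer only see one record or one key group at a time, the same code parallelizes cleanly across however many machines the cluster has.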
Why Hadoop? <ul><li>Hadoop allows us to store and process data that was previously impractical because of cost, technical issues, etc., and places no constraints on how that data is processed. </li></ul>[chart: $ per TB, data warehouse vs. Hadoop]
Why We Started Using Hadoop Optimizing hotel search…
Why We Started Using Hadoop <ul><li>In 2009, the Machine Learning team was formed to improve site performance. For example, improving hotel search results. </li></ul><ul><li>This required access to large volumes of behavioral data for analysis. </li></ul>
The Problem… <ul><li>The only archive of the required data went back about two weeks. </li></ul>[diagram: Transactional Data (e.g. bookings) → Data Warehouse; Non-transactional Data (e.g. searches) → no long-term archive]
Hadoop Was Selected as a Solution… [diagram: Transactional Data (e.g. bookings) → Data Warehouse; Non-Transactional Data (e.g. searches) → Hadoop]
Unfortunately… <ul><li>We faced organizational resistance to deploying Hadoop. </li></ul><ul><ul><li>Not from management, but from other technical teams. </li></ul></ul><ul><li>Required persistence to convince them that we needed to introduce a new hardware spec to support Hadoop. </li></ul>
Current Big Data Infrastructure [diagram: MapReduce jobs (Java, Python, R/RHIPE) and analytic tools (Hive, Pig) run on Hadoop (MapReduce + HDFS); aggregated data flows between Hadoop and the Data Warehouse (Greenplum) via psql, gpload, and Sqoop; external analytical jobs (Java, R, etc.) also consume aggregated data]
Hadoop Architecture Details <ul><li>Production cluster </li></ul><ul><ul><li>About 200TB of raw storage </li></ul></ul><ul><ul><li>336 (physical) cores </li></ul></ul><ul><ul><li>672GB RAM </li></ul></ul><ul><ul><li>4 client nodes (Hive, ad-hoc jobs, scheduled jobs, etc.) </li></ul></ul><ul><li>Development cluster for user testing </li></ul><ul><li>Test cluster for testing upgrades, new software, etc. </li></ul><ul><li>Cloudera CDH3 </li></ul>
But Brought New Challenges… <ul><li>Most of these efforts are driven by development teams. </li></ul><ul><li>The challenge now is unlocking the value of this data for non-technical users. </li></ul>
In Early 2011… <ul><li>Big Data team is formed under Business Intelligence team at Orbitz Worldwide. </li></ul><ul><li>Reflects the importance of big data to the future of the company. </li></ul><ul><li>Allows the Big Data team to work more closely with the data warehouse and BI teams. </li></ul><ul><li>We’re also evaluating tools to facilitate analysis of Hadoop data by the wider organization. </li></ul>
One More Use Case – Click Data Processing <ul><li>Still under development, but a good example of how Hadoop can be used to complement an existing data warehouse. </li></ul>
Click Data Processing – Current Data Warehouse Processing [diagram: Web Servers → Web Server Logs → ETL → DW, then Data Cleansing (stored procedure) inside the DW; the two processing steps take 3 hours and 2 hours; cleansed output is ~20% of the original data size]
Click Data Processing – Proposed Hadoop Processing [diagram: Web Servers → Web Server Logs → HDFS → Data Cleansing (MapReduce) → DW]
Lessons Learned <ul><li>Expect organizational resistance from unanticipated directions. </li></ul><ul><li>Advice for finding big data developers: </li></ul><ul><ul><li>Don’t bother. </li></ul></ul><ul><ul><li>Instead, train smart and motivated internal resources or new hires. </li></ul></ul><ul><li>But get help if you need it. </li></ul><ul><ul><li>There are a number of experienced providers who can help you get started. </li></ul></ul>
Lessons Learned <ul><li>Hadoop market is still immature, but growing quickly. Better tools are on the way. </li></ul><ul><ul><li>Look beyond the usual (enterprise) suspects. Many of the most interesting companies in the big data space are small startups. </li></ul></ul><ul><li>Use the appropriate tool based on requirements. Treat Hadoop as a complement, not replacement, to traditional data stores. </li></ul>
Lessons Learned <ul><li>Work closely with your existing data management teams. </li></ul><ul><ul><li>Your idea of what constitutes “big data” might quickly diverge from theirs. </li></ul></ul><ul><li>The flip-side to this is that Hadoop can be an excellent tool to off-load resource-consuming jobs from your data warehouse. </li></ul>
In the Near Future… <ul><li>Production cluster capacity increase: </li></ul><ul><ul><li>~500TB of raw storage. </li></ul></ul><ul><li>Further integration with the data warehouse. </li></ul><ul><li>Deployment of analysis and reporting tools on top of Hadoop. </li></ul>
<ul><li>Web Analytics and Big Data </li></ul>
What is Web Analytics? <ul><li>Understand the impact and economic value of the website </li></ul><ul><li>Rigorous outcome analysis </li></ul><ul><li>Passion for customer centricity by embracing voice-of-customer initiatives </li></ul><ul><li>Fail faster by leveraging the power of experimentation (MVT) </li></ul>
Challenges <ul><li>Site Analytics </li></ul><ul><ul><li>Lack of multi-dimensional capabilities </li></ul></ul><ul><ul><li>Hard to find the right insight </li></ul></ul><ul><ul><li>Heavy investment in tools </li></ul></ul><ul><ul><li>Precision vs. direction </li></ul></ul>
Continued… <ul><li>Big Data </li></ul><ul><ul><li>No data unification or uniform platform across organizations and business units </li></ul></ul><ul><ul><li>No easy data extraction capabilities </li></ul></ul><ul><ul><ul><li>Business </li></ul></ul></ul><ul><ul><li>Distinction between reporting and testing (MVT) </li></ul></ul><ul><ul><li>Minimal measurement of outcomes </li></ul></ul>
Web Analytics & Big Data <ul><li>OWW generates a couple million air and hotel searches every day. </li></ul><ul><li>Massive amounts of data. Hundreds of GB of log data per day. </li></ul><ul><li>Expensive and difficult to store and process this data using existing data infrastructure. </li></ul>
Data Analysis Jobs <ul><li>Traffic Source and Campaign activities </li></ul><ul><li>Daily jobs, Weekly analysis </li></ul><ul><li>Map reduce job </li></ul><ul><ul><li>~ 20 minutes for one day raw logs </li></ul></ul><ul><ul><li>~ 3 minutes to load to hive tables </li></ul></ul><ul><ul><li>Generates more than 25 million records for a month </li></ul></ul>
Centralized Decentralization Web Analytics team + SEO team + Hotel optimization team
Model for success <ul><li>Measure the performance of your feature and fail fast </li></ul><ul><li>Experimentation and testing should be ingrained into every key feature. </li></ul><ul><li>Break down into smaller chunks of data extraction </li></ul>
Should everyone do this? <ul><li>Do you have the Technology strength to invest and use Big Data? </li></ul><ul><li>Analytics using Big Data comes with a price (resource, time) </li></ul><ul><li>Big Data mining != analysis </li></ul><ul><li>Key Data warehouse challenges still exist (time, data validity) </li></ul>