This tutorial presents techniques for large-scale analysis of click-stream and transaction log data using Apache Hadoop at eBay. It describes best practices used in the presented case studies, including data cleaning, sampling, de-duplication, normalization, and bias removal. Three case studies are covered: query suggestion algorithms, which aim to broaden or narrow user searches and optimize the buying experience; time series mining, which detects seasonal trends and buzzing queries; and shipping weight suggestions derived from past shipping label data. Throughout, Hadoop is leveraged to enable efficient, scalable analysis over petabytes of data spanning long time frames.
3. Key Ideas
• Big Data Sets
• Big Data Properties
• Challenges in working with big data
• Practical Solutions
• Leveraging Hadoop
• Case Studies
4. Types of Data Used in this Tutorial
• Click-stream logs
– Petabyte scale
• Transactional Data
– Terabyte scale
– More than ½ B items for sale
5. BEST PRACTICES USED IN PRESENTED CASE STUDIES
• Data Cleaning
– Taking care of bad data
– Importance of domain knowledge
• Data Sampling
– Reservoir sampling
• De-duplication
• Normalization
• Handling Idiosyncrasies of long-tail data
• Understanding Tractability of Algorithms
• Efficiency at scale
• Bucketing data in the right way
• Bias Removal
– System bias
– Platform bias
– User bias
• Handling curse of dimensionality
10. Query Suggestions at eBay
• Enable users to broaden or narrow searches.
• Lead users to related products or brands.
• Optimize the buying experience.
11. Query Suggestion Algorithms
• Various algorithms in literature
– Agglomerative clustering
– Query Similarity Measures (Linguistic, Latent)
– Query Flow Graphs
• Our approach is primarily based on user trails.
12. Challenges
• Large-scale data
– 100M+ users.
– 30TB+ click-stream logs.
– 1B+ user sessions.
– Several billion searches.
• Noisy Data
– Robots
– API Calls
– Crawlers, spiders
– Tools and scripts
– User Bias
[Figure: query suggestions for the query ‘calculator’.]
15. Hadoop Cluster at eBay (One of several)
• Nodes
– CentOS 4, 64-bit
– Intel dual hex-core Xeon, 2.4 GHz
– 72 GB RAM
– 12 × 2 TB HDD (24 TB)
– SSD for OS
• Network
– Top-of-rack (TOR) switches: 1 Gbps
– Core switch uplinks: 40 Gbps
• Cluster
– 532 – 1008 nodes
– 4000+ cores, 24000 vCPUs
– 5 – 18 PB of storage
16. Mobius – Computation Platform
[Architecture diagram, bottom to top]
• eBay infrastructure & data source layer: eBay data (logs, tables), Hadoop cluster
• Mobius layer: low-level dataset access API, generic Java dataset API, query language
• Application layer: Mobius Studio (Eclipse plugin), click-stream visualizer, metrics dashboard, research projects
Sundaresan et al. Scalable Stream Processing & Map Reduce. Hadoop World 2009.
17. Data Cleaning
• Data is cleaned during the processing phase.
• User Bias Removal
– Filter information from robots, API calls, spiders and crawlers.
– De-duplicate signals from the same user.
• Platform Bias Removal
– Treat signals from different platforms like mobile phones, game consoles, computers differently.
• System Bias Analysis
– Treat searches typed in by users differently from searches issued through user clicks on features.
18. Recommendation Computation – Phase 1
• Data Cleaning.
• Query Pair and Behavioral Frequency extraction.
• Query normalization.
• User de-duplication.
• Computation of behavioral features.
Map/Reduce structure:
– Input: User Click-stream data
– Key: user, originating query
– Value: Recommendation query and behavioral frequencies
– Output: Query pair and behavioral features per user
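The deck gives only the shape of the Phase 1 job, so here is a minimal Hadoop sketch of it — not eBay's Mobius code. The tab-separated log layout (user, originating query, follow-up query) and the class names are assumptions for illustration.

```java
import java.io.IOException;
import java.util.HashMap;
import java.util.Map;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;

public class QueryPairPhase1 {

  /** Emits key = "user \t originating query", value = follow-up query. */
  public static class PairMapper extends Mapper<LongWritable, Text, Text, Text> {
    @Override
    protected void map(LongWritable offset, Text line, Context ctx)
        throws IOException, InterruptedException {
      // Hypothetical log format: user \t originatingQuery \t followUpQuery
      String[] f = line.toString().split("\t");
      if (f.length < 3) return;                      // drop malformed rows
      ctx.write(new Text(f[0] + "\t" + f[1]), new Text(f[2]));
    }
  }

  /** Counts how often each follow-up query occurs for one (user, query) pair. */
  public static class PairReducer extends Reducer<Text, Text, Text, Text> {
    @Override
    protected void reduce(Text userQuery, Iterable<Text> followUps, Context ctx)
        throws IOException, InterruptedException {
      Map<String, Integer> freq = new HashMap<>();
      for (Text t : followUps) freq.merge(t.toString(), 1, Integer::sum);
      for (Map.Entry<String, Integer> e : freq.entrySet()) {
        // Output: query pair plus its per-user behavioral frequency
        ctx.write(userQuery, new Text(e.getKey() + "\t" + e.getValue()));
      }
    }
  }
}
```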
19. Recommendation Computation – Phase 2
• Identity Mapper.
• Aggregate over users.
• Compute textual features for each query pair.
Map/Reduce structure:
– Input: Query pairs, behavioral features per user
– Key: query, recommendation
– Value: feature values
– Output: Query pair, behavioral features, textual features
• Query pairs with non-trivial textual similarity tend to have non-zero behavioral frequencies.
• Textual similarities are computed for only 200M query pairs instead of several trillion.
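The deck does not name the textual measures used; as a hedged stand-in, here is a token-level Jaccard similarity one could compute for each surviving query pair:

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

/** One plausible textual feature for a query pair: token-level Jaccard
 *  similarity. Not necessarily the measure used in the actual pipeline. */
public final class TextualSimilarity {
  public static double jaccard(String q1, String q2) {
    Set<String> a = new HashSet<>(Arrays.asList(q1.toLowerCase().split("\\s+")));
    Set<String> b = new HashSet<>(Arrays.asList(q2.toLowerCase().split("\\s+")));
    Set<String> inter = new HashSet<>(a);
    inter.retainAll(b);                 // shared tokens
    Set<String> union = new HashSet<>(a);
    union.addAll(b);                    // all tokens
    return union.isEmpty() ? 0.0 : (double) inter.size() / union.size();
  }

  public static void main(String[] args) {
    System.out.println(jaccard("ti calculator", "ti 84 calculator")); // 0.666...
  }
}
```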
21. Remarks
• Log Mining algorithms are parallelizable.
• Easy to scale such algorithms using Hadoop.
• Hadoop empowers us to look at data-sets spanning larger time-frames.
• Hadoop enables us to iterate faster and hence run more user-facing experiments.
23. Why study temporal dynamics?
• Stock Markets
• Bio-Medical Signals
• Traffic, Weather and Network Systems
• Web Search & Ranking
• Recommender Systems
• eCommerce…
24. Challenges
• Large Scale data
– 100M+ users
– Petabytes of click-stream logs
– Billions of user sessions
– Billions of unique queries
• Noisy Data
– Robots
– API Calls
– Crawlers, Spiders
– Tools, Scripts
– Data Biases
• Data spread across long time frames
– Differences in collection methodologies
• Complexity of certain algorithms
25. Mobius – Generic Java Dataset API
• Java-based, high-level data processing framework built on top of Apache Hadoop.
• Tuple oriented.
• Supports job chaining.
• Supports high-level operators such as join (inner or outer) or grouping.
• Supports filtering.
• Used internally at eBay for various data science applications.
• https://github.com/gysingh/openmobius
26. Hadoop – Handling External Code
• Pre-compiled Java code can easily be used with Apache Hadoop.
• User code needs to be assembled into one or more jar files.
• Jars can be shipped to the task nodes on the Hadoop cluster with the -libjars option (takes a comma-separated list of local jar names).
• Hadoop adds the contents of the jar file(s) to the classpath on the task nodes.
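A minimal sketch of a driver that honors -libjars, assuming the standard ToolRunner pattern; the class, jar, and path names are placeholders:

```java
// Launched, for example, as:
//   hadoop jar myjob.jar com.example.QueryPairDriver \
//       -libjars mobius.jar,commons-lang3.jar input/ output/
import org.apache.hadoop.conf.Configured;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.util.Tool;
import org.apache.hadoop.util.ToolRunner;

public class QueryPairDriver extends Configured implements Tool {
  @Override
  public int run(String[] args) throws Exception {
    // getConf() already contains the jars registered via -libjars,
    // because ToolRunner parsed the generic options for us.
    Job job = Job.getInstance(getConf(), "query-pair-phase1");
    job.setJarByClass(QueryPairDriver.class);
    // Mapper/reducer classes omitted here; see the Phase 1 sketch above.
    FileInputFormat.addInputPath(job, new Path(args[0]));
    FileOutputFormat.setOutputPath(job, new Path(args[1]));
    return job.waitForCompletion(true) ? 0 : 1;
  }

  public static void main(String[] args) throws Exception {
    System.exit(ToolRunner.run(new QueryPairDriver(), args));
  }
}
```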
31. Mining Temporal Data – Does History Repeat Itself?
• Seasonality and Trend Prediction
[Figures: air conditioner searches become popular as summer approaches; why are searches related to Monopoly pieces popular every October?]
32. Mining Temporal Data – Temporal Similarity
[Figure: similar patterns for queries related to Hanukkah.]
33. Preparing Data – Getting Queries from User Sessions
Typical eBay flow: Search → View → Purchase
• Search: specify a query, with optional constraints
• View: click on an item shown on the search results page
• Purchase: buy a fixed-price item or place the winning bid on an auction item
Consider only queries typed in by humans. Ignore page views from robots and views from paid advertisements, campaigns, or natural search links.
34. Cleaning Data
• Apply the default robot detection and removal algorithm
– Based on IP, number of actions per day, agent information.
• Find the right flows from the sessions.
– Filter out noisy search events.
– Remove anomalies due to outlier users.
– Limit the impact a single user can have on aggregated data (de-duplication).
35. Finding the Right Flow in the Session
• Session 1: Search → Exit. Flows without any interesting activity, like clicks, may not be considered.
• Session 2: Ads/paid search → View → Purchase. Searches coming from advertisements may not be considered.
• Session 3: Search → View → Purchase. These kinds of sessions are considered and their information is aggregated.
36. Data Preparation – MapReduce Flow
Preprocessing stage:
– Map: read raw events.
– Reduce: group events into sessions, group sessions by GUID, apply the bot-filtering algorithm.
– Save the result so it can be reused by other apps.
Collecting stage:
– Map: find the right flow; emit query as key and de-duplicated query volume as value.
– Reduce: calculate the sum per key.
– Query volume is output daily as dailyQueryData.
37. Time Series Generation
• Data Cleaning.
• Query normalization.
• Time Series formation for all unique queries.
• Time Series indicating total daily activity volume.
Map/Reduce structure:
– Input: dailyQueryData for multi-year time-frames
– Key: query
– Value: date: query volume
– Output: Vectors of Query Volume Time Series
[Figure: example time series; data not to scale and only shown as an example.]
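As a hedged illustration of forming such vectors, here is a small sketch that turns (date, volume) pairs, such as those in dailyQueryData, into a dense daily vector; the date range and values are made up:

```java
import java.time.LocalDate;
import java.time.temporal.ChronoUnit;
import java.util.Map;
import java.util.TreeMap;

/** Sketch of forming a dense daily time series vector for one query from
 *  (date, volume) pairs; days with no activity become zeros. */
public class TimeSeriesVector {
  public static long[] toVector(Map<LocalDate, Long> daily,
                                LocalDate start, LocalDate end) {
    int days = (int) ChronoUnit.DAYS.between(start, end) + 1;
    long[] v = new long[days];                       // zero-filled by default
    daily.forEach((date, volume) -> {
      int i = (int) ChronoUnit.DAYS.between(start, date);
      if (i >= 0 && i < days) v[i] = volume;         // ignore out-of-range days
    });
    return v;
  }

  public static void main(String[] args) {
    Map<LocalDate, Long> daily = new TreeMap<>();
    daily.put(LocalDate.of(2012, 6, 1), 120L);
    daily.put(LocalDate.of(2012, 6, 3), 340L);
    long[] v = toVector(daily, LocalDate.of(2012, 6, 1), LocalDate.of(2012, 6, 5));
    System.out.println(java.util.Arrays.toString(v)); // [120, 0, 340, 0, 0]
  }
}
```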
38. Buzz Detection – 2-State Automaton Model
• Arrival of queries as a stream.
• A “low rate” state (q0) and a “high rate” state (q1), with rates α1 > α0.
• The automaton changes state with probability p ∈ (0, 1) between query arrivals.
• Let Q = (q_i1, q_i2, …, q_in) be a state sequence. Each state sequence Q induces a density function f_Q over sequences of gaps, which has the form
  f_Q(x1, x2, …, xn) = ∏_{t=1..n} f_{it}(xt), where f_0(x) = α0 e^(−α0 x) and f_1(x) = α1 e^(−α1 x).
N. Parikh, N. Sundaresan. Scalable and Near Real-time Burst Detection from eCommerce Queries. KDD 2008.
39. Buzz Detection – Modeling Queries as a Stream
[Figure: frequency of a query over time, and the gaps between arrival times for queries.]
40. Buzz Detection – 2-State Automaton Model (cont.)
• If the number of state transitions in sequence Q is denoted b, the prior probability of Q is given as
  P(Q) = (∏_{it ≠ it+1} p) · (∏_{it = it+1} (1 − p)) = p^b (1 − p)^(n−b) = (p / (1 − p))^b (1 − p)^n.
• Using Bayes' theorem, the cost equation is
  C(Q | X) = b · ln((1 − p) / p) − ∑_{t=1..n} ln f_{it}(xt).
• The sequence that minimizes the cost depends on
– the ease of jumps between the 2 states, and
– how well the sequence conforms to the rate of query arrivals.
• Configurable parameters for the model are α0, α1 and the cost p.
– α0, α1 are calculated from the data in the MR job.
– A heuristically determined value of p = 0.38 is used.
42. Time Series Normalization and Buzz Detection
• Normalize Time Series.
• Transform Time Series to the two-state model.
• Calculate parameters α0, α1 for every query and apply dynamic programming for the 2-state calculation (see the sketch below).
• Calculate probability of being a periodic event query, e.g. superbowl.
• Group queries buzzing at similar time intervals.
Map/Reduce structure:
– Input: 4–7 years of Query Time Series Vectors
– Mapper output key: query; value: normalized time series, two-state model, probability of being a seasonal event query
– Reducer output key: time-frame; value: queries that buzz during that time frame
– Output: queries buzzing during each time period
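As a hedged illustration of the dynamic-programming step, here is a minimal Viterbi-style sketch that minimizes the cost C(Q | X) from slide 40 over the two states; the gap values and parameters in main are made up, and this is not the production MR code:

```java
import java.util.Arrays;

/** Minimal sketch of 2-state burst detection via dynamic programming,
 *  minimizing C(Q|X) = b*ln((1-p)/p) - sum_t ln f_{it}(xt). */
public class BurstViterbi {

  /** Returns the min-cost state sequence (0 = low rate, 1 = high rate). */
  static int[] detect(double[] gaps, double a0, double a1, double p) {
    int n = gaps.length;
    double trans = Math.log((1 - p) / p);      // cost of one state transition
    double[][] cost = new double[n][2];
    int[][] back = new int[n][2];
    for (int t = 0; t < n; t++) {
      // -ln f_s(x) for the exponential densities f_s(x) = a_s * exp(-a_s * x)
      double e0 = -(Math.log(a0) - a0 * gaps[t]);
      double e1 = -(Math.log(a1) - a1 * gaps[t]);
      for (int s = 0; s < 2; s++) {
        double emit = (s == 0) ? e0 : e1;
        if (t == 0) { cost[t][s] = emit; continue; }
        double stay = cost[t - 1][s];
        double jump = cost[t - 1][1 - s] + trans;
        back[t][s] = (stay <= jump) ? s : 1 - s;
        cost[t][s] = emit + Math.min(stay, jump);
      }
    }
    // Backtrack from the cheaper final state.
    int[] states = new int[n];
    states[n - 1] = (cost[n - 1][0] <= cost[n - 1][1]) ? 0 : 1;
    for (int t = n - 1; t > 0; t--) states[t - 1] = back[t][states[t]];
    return states;
  }

  public static void main(String[] args) {
    // Long gaps (low rate) followed by short gaps: a burst in the middle.
    double[] gaps = {5, 6, 5, 0.5, 0.4, 0.6, 5, 7};
    System.out.println(Arrays.toString(detect(gaps, 0.2, 2.0, 0.38)));
  }
}
```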
44. Binary data structure generation from MR job
• Created a new FileOutputFormat.
• Write time series data to two files:
– a binary file with fixed-size records indicating time series volume,
– a text file mapping each unique query string to the binary file and offset.
• The index created by the reducers is loaded directly by custom servers written in C++.
• Used for an internal Query Trends application.
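The custom FileOutputFormat itself is not shown in the deck; this sketch only illustrates the two-file layout it produces (fixed-size binary records plus a text offset index). The file names and record layout are assumptions:

```java
import java.io.DataOutputStream;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.PrintWriter;
import java.util.LinkedHashMap;
import java.util.Map;

/** Sketch of the two-file index layout: fixed-size binary records of daily
 *  volumes, plus a text file mapping each query to its byte offset. */
public class TimeSeriesIndexWriter {
  public static void main(String[] args) throws IOException {
    Map<String, int[]> series = new LinkedHashMap<>();
    series.put("air conditioner", new int[]{10, 12, 95, 90, 11});
    series.put("monopoly pieces", new int[]{3, 4, 80, 5, 2});

    int days = 5;                               // fixed record size: days * 4 bytes
    try (DataOutputStream bin =
             new DataOutputStream(new FileOutputStream("volumes.bin"));
         PrintWriter idx = new PrintWriter("volumes.idx")) {
      long offset = 0;
      for (Map.Entry<String, int[]> e : series.entrySet()) {
        idx.println(e.getKey() + "\t" + offset);  // query -> byte offset
        for (int v : e.getValue()) bin.writeInt(v);
        offset += days * 4L;                      // advance by one record
      }
    }
    // A server can now seek to `offset` in volumes.bin and read `days` ints.
  }
}
```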
48. Temporal Similarity
• 1+ billion queries.
• Naïve algorithm has quadratic complexity.
• Pearson's correlation as the similarity measure (see the sketch below).
• Candidate set reduction:
– Correlations are useful only for event-based or seasonal queries.
– Correlations are useful in applications only for head and torso queries.
– These filters reduce the candidate space from billions to a few million.
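A straightforward reference implementation of Pearson's correlation between two equal-length time series (not the MR job itself); the example vectors are made up:

```java
/** Pearson's correlation between two equal-length time series. */
public final class Pearson {
  public static double correlation(double[] x, double[] y) {
    int n = x.length;
    double sx = 0, sy = 0;
    for (int i = 0; i < n; i++) { sx += x[i]; sy += y[i]; }
    double mx = sx / n, my = sy / n;
    double cov = 0, vx = 0, vy = 0;
    for (int i = 0; i < n; i++) {
      cov += (x[i] - mx) * (y[i] - my);
      vx  += (x[i] - mx) * (x[i] - mx);
      vy  += (y[i] - my) * (y[i] - my);
    }
    return cov / Math.sqrt(vx * vy);   // in [-1, 1]; NaN if a series is constant
  }

  public static void main(String[] args) {
    double[] menorah  = {1, 2, 9, 8, 1};
    double[] hanukkah = {2, 3, 10, 9, 2};
    System.out.println(correlation(menorah, hanukkah));  // close to 1.0
  }
}
```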
51. Remarks
• Log Mining and Time Series mining algorithms are parallelizable.
• Easy to scale such algorithms using Hadoop.
• Hadoop empowers us to look at data-sets spanning years and years.
• Hadoop enables us to iterate faster and hence run more user-facing experiments.
53. Outline
• Introduction to selling on eBay
• Shipping suggestion opportunity
• Data to the rescue
• Shipping suggestions: Base approach
• Inhomogeneous category problem
• Improved data mining to the rescue
• Shipping suggestions: Current approach
54. Listing an item for sale on eBay
• Specify the listing title
• Accept / override the suggested listing category
• Upload one or more pictures
• Specify item condition (e.g., New, Used)
• Type in the item description
• Set a start price or fixed price, and the listing duration
• Specify shipping (service, cost, who pays: buyer / seller)
• Specify accepted payment methods
55. Shipping on eBay
• eBay would like to help sellers choose a shipping method
• Many different and unique items are offered on eBay
• Weight and dimensions are usually unknown
• Asking sellers to type in weight and dimensions creates friction
• Would like an automatic approach
56. Data to the rescue
• Sellers on eBay often buy their postage labels through eBay's label printing platform
• Many different shipping services are offered through eBay label printing (from US Postal Service, FedEx)
• Shipping labels usually include weight and dimensions to determine pricing
• While items are often unique, all items are assigned to categories during listing
57. Data to the rescue (cont.)
• Approach: aggregate past shipping label data by category
• Run statistics on the weight and dimension data for each category
• Derive a usable, data-driven estimate of weight and dimensions
• Choose a suitable service and carrier, and make a suggestion
58. Label data at eBay
• eBay has, at any given time, more than 350 million listings worldwide
• Many millions of shipping labels for the US are printed through eBay every year
• Thousands of categories
59. Processing of label data with Hadoop
• Use Mappers to extract the desired fields (weight, dimensions)
• Use Mappers for filtering (e.g., exclude USPS flat-rate)
• Mapper output: key = category, value = weight and dimensions
• Use Reducers to perform the statistical evaluation
• Reducer output: key = category, value = suggested weight and dimensions
• Pick a suitable carrier and service for each category
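A hedged sketch of the reduce-side statistical evaluation; the deck does not specify the statistic, so a median is used here as one robust choice, and the writable types are assumptions:

```java
import java.io.IOException;
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import org.apache.hadoop.io.DoubleWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;

/** For one category, collect label weights and emit a robust estimate.
 *  Values are assumed to be pre-filtered by the mappers. */
public class CategoryWeightReducer
    extends Reducer<Text, DoubleWritable, Text, DoubleWritable> {
  @Override
  protected void reduce(Text category, Iterable<DoubleWritable> weights,
                        Context ctx) throws IOException, InterruptedException {
    List<Double> w = new ArrayList<>();
    for (DoubleWritable d : weights) w.add(d.get());
    if (w.isEmpty()) return;
    Collections.sort(w);
    double median = w.get(w.size() / 2);   // suggested weight for the category
    ctx.write(category, new DoubleWritable(median));
  }
}
```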
61. Improved Approach
• Differentiate items within a category into light and heavy
• Light vs. heavy:
– “trumpet” category: mouthpiece vs. trumpet with case
– “dinnerware” category: single plate vs. dinnerware set
– “computer accessories” category: mouse vs. keyboard
• Besides the listing category, use the listing title
• Different words are important for different categories
62. Improved Approach: What precisely is “heavy”?
• Each category has its own separation into light and heavy
• Some categories are uniform and have no such separation
• Attempt to cluster items by weight in each category into precisely two clusters
• Split the category if both the light and the heavy clusters have sufficient items
63. Improved Approach: Bag of title words
• Each category has its own collection of title words indicating light and heavy items
• Preselect words important for each category
• Fit a statistical model on the title words that, for each listing, produces a probability that the item is heavy (or light)
64. Improved Approach with Hadoop
• Use Mappers to extract the desired fields (weight, dimensions, title)
• Use Mappers for filtering (e.g., exclude USPS flat-rate)
• Mapper output: key = category, value = weight, dimensions, and title
• Use Reducers to perform machine learning
– Clustering to determine the light / heavy cut-off
– Title word selection
– Title word model fitting
65. Sampling
• Categories have very different numbers of listings
– Searching on 2013/09/23 on ebay.com yields:
– 2,576,202 results for “dvd”
– 487 results for “Climbing Holds”
• The above results are “active items”; when using historical data, some categories' data will be too large to fit into a single reducer
• The reducer does not know ahead of time how large the category is (records are streamed by Hadoop)
• Use reservoir sampling in case a leaf category is too large to fit into a single reducer (hundreds of thousands of records) — see the sketch below
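As referenced above, a minimal reservoir sample (Vitter's Algorithm R) — the standard way to draw a fixed-size uniform sample from a stream of unknown length, as a reducer would need here:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ThreadLocalRandom;

/** Reservoir sampling for a stream whose length is unknown up front. */
public class Reservoir<T> {
  private final List<T> sample = new ArrayList<>();
  private final int capacity;
  private long seen = 0;

  public Reservoir(int capacity) { this.capacity = capacity; }

  /** Offer the next streamed record; after n offers, each record is in
   *  the sample with probability capacity / n. */
  public void offer(T record) {
    seen++;
    if (sample.size() < capacity) {
      sample.add(record);
    } else {
      long j = ThreadLocalRandom.current().nextLong(seen); // uniform in [0, seen)
      if (j < capacity) sample.set((int) j, record);
    }
  }

  public List<T> sample() { return sample; }
}
```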
66. Modeling Details
• K-means for clustering of weights, K = 2 (see the sketch below)
• Discard the clustering if almost all records are in the larger cluster or too few records are in the smaller cluster
• For each category, fit a binary Maximum Entropy model (aka Logistic Regression) on item titles predicting light vs. heavy, using standard public-domain Java software
• Perform cross-validation
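As referenced above, a small sketch of the light/heavy split: 1-D k-means with K = 2 over item weights. The initialization at the extremes and the convergence test are simplifications, not necessarily what the production system used:

```java
import java.util.Arrays;

/** 1-D k-means with K = 2: returns the light and heavy cluster centers. */
public class TwoMeans {
  public static double[] cluster(double[] weights) {
    double[] w = weights.clone();
    Arrays.sort(w);
    double lo = w[0], hi = w[w.length - 1];         // init centers at extremes
    for (int iter = 0; iter < 100; iter++) {
      double sumLo = 0, sumHi = 0; int nLo = 0, nHi = 0;
      for (double x : w) {                          // assignment step
        if (Math.abs(x - lo) <= Math.abs(x - hi)) { sumLo += x; nLo++; }
        else { sumHi += x; nHi++; }
      }
      double newLo = nLo > 0 ? sumLo / nLo : lo;    // update step
      double newHi = nHi > 0 ? sumHi / nHi : hi;
      if (newLo == lo && newHi == hi) break;        // converged
      lo = newLo; hi = newHi;
    }
    return new double[]{lo, hi};                    // light / heavy centers
  }

  public static void main(String[] args) {
    // e.g. "trumpet" category: mouthpieces vs. trumpets with case (lbs)
    double[] weights = {0.4, 0.5, 0.6, 7.5, 8.0, 9.0, 8.5};
    System.out.println(Arrays.toString(cluster(weights)));
  }
}
```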
67. Improved Approach with Hadoop (cont.)
• The Reducer also performs data-driven validation and testing of the goodness of the model fits
• Reducer output: key = category, value = model words, model word parameters, suggested weight / dimensions for light and heavy, and model performance statistics
68. Final System
• Thousands of categories with title models to provide suggestions for light and heavy items
• For thousands more rarely used categories, the baseline suggestions are available
• All transparent to the seller; no additional input required
• Sellers can override if they want
• The abandonment rate of the listing flow at the shipping stage is significantly improved
71. References
• Hasan et al. Query Suggestion for E-commerce Sites. WSDM 2011.
• Parikh et al. Inferring Semantic Query Relations from Collective User Behavior. CIKM 2008.
• Sundaresan et al. Scalable Stream Processing and Map Reduce. Hadoop World 2009.
• Anil Madan. Hadoop at eBay. http://www.slideshare.net/madananil/hadoop-at-ebay.
• Parikh et al. Scalable and Near Real-time Burst Detection from eCommerce Queries. KDD 2008.
• N. Sundaresan. Popup Commerce: Towards Building Transient and Thematic Stores. X.Innovate 2011.
• Pantel et al. Web-Scale Distributional Similarity and Entity Set Expansion. EMNLP 2009.
• Gyanit Singh, Nish Parikh, Neel Sundaresan. Query Suggestion at Scale with Hadoop. Hadoop Summit 2011.
• Nish Parikh. Mining Large-scale Temporal Dynamics with Hadoop. Hadoop Summit 2012.
• Uwe Mayer. Parallel and Distributed Computing, Data Mining and Machine Learning: eBay Shipping Recommendations over Hadoop. Hadoop Innovation Summit 2013.
• Nish Parikh, Gyanit Singh. Large-scale User-interaction Log Analysis. ACM Data Mining SIG Bay Area Summit 2010.
• Halevy et al. The Unreasonable Effectiveness of Data. IEEE Intelligent Systems, 2009.
• Banko and Brill. Scaling to Very Very Large Corpora for Natural Language Disambiguation. ACL 2001.
• Pilászy and Tikk. Recommending New Movies: Even a Few Ratings Are More Valuable than Metadata. RecSys 2009.
• Rajaraman. More Data Usually Beats Better Algorithms. DataWocky, 2008.
72. Acknowledgments
• Neel Sundaresan
• Evan Chiu
• Mohammad Al Hasan
• Karin Mauge
• Jack Shen
• Rifat Joyee
• Zhou Yang
• Hui Hong
• Long Hoang
• Narayanan Seshadri