Social Recommender Systems Tutorial - WWW 2011
Speaker notes (attached to individual slides):

  • The sites in the top-sites list are ordered by their one-month Alexa traffic rank, computed from a combination of average daily visitors and pageviews over the past month; the site with the highest combination is ranked #1.
  • Facebook figures updated March 2011; Twitter, June 2010; YouTube, May 2010.
  • Table entries can also be filled by implicit feedback.
  • CF can recommend "outside the box": similar users can surface surprising items, whereas content-based approaches are limited to items that fit the user profile.
  • What a Difference a Group Makes: Web-Based Recommendations for Interrelated Users (Mason & Smith)
  • Explaining collaborative filtering
  • Tagsplanations: Explaining Recommendations Using Tags (Vig et al., IUI '09)
  • User Evaluation Framework of Recommender Systems (Chen et al., SRS 2010)

Presentation Transcript

  • Social Recommender Systems. Ido Guy, David Carmel, IBM Research-Haifa, Israel. WWW 2011, March 28th-April 1st, Hyderabad, India
  • Outline
    • Introduction
    • Fundamental Recommendation Approaches
    • Entity Recommendation: Content, Tags, People, Communities
    • Recommendation for Groups
    • The Cold Start Problem
    • Trust
    • Social Recommender Systems in the Enterprise
    • Temporal Aspects in Social Recommendation
    • Social Recommendation over Activity Streams
    • Evaluation Methods
    • Summary of Open Challenges
  • Introduction IBM Research WWW 2011, March 28th-April 1st, Hyderabad, India
  • Web 2.0 and Social Media
    • Web 2.0 [O'Reilly '05]
      • It’s all about people
        • Joining online communities
        • Connecting to each other on social networks
        • Creating content, as in wikis and blogs
        • Annotating content with tags, comments, ratings
    • Social Media
      • Refers to Web 2.0 sites that allow users to share and interact
      • Characterized by
        • User-centered design
        • User-generated content (e.g., tags)
        • Social networks and online communities
  • Top 10 Websites (Alexa.com, 3/2011)
    • Google
    • Facebook
    • YouTube
    • Yahoo!
    • Windows Live
    • Blogger.com
    • Baidu
    • Wikipedia
    • Twitter
    • QQ.com
  • Social Overload
    • Facebook – largest social network site
      • 600,000,000 users, half log in every day
      • 35,000,000,000 online “friendships”
      • 900,000,000 objects people interact with
      • 30,000,000,000 shared content items / month
    • YouTube – largest video sharing site
      • 2,000,000,000 views per day
      • 1,000,000 video hours uploaded per month
    • Twitter – largest microblogging site
      • 200,000,000 users per month
      • 65,000,000 tweets per day (~750 per second)
      • 8,000,000 followers of most popular user
  • Social Overload
    • Information overload – blogs, microblogs, forums, wikis, news, bookmarked web pages, photos, videos, …
    • Interaction overload – friends, followers, followees, commenters, co-members, voters, “likers”, taggers, …
  • Social Recommender Systems
    • Recommender Systems that target the social media domain
    • Aim at coping with the challenge of social overload by presenting the most attractive and relevant content
    • Also aim at increasing adoption and engagement
    • Often apply personalization techniques
  • Recommender Systems and Social Media
    • Recommender Systems are an augmentation of the social process, in which we rely on advice or suggestions from other people [Resnick97]
    • Any CF system has social characteristics. However, here we focus on the social media domain
    • Social Media and Recommender Systems can mutually benefit each other:
    • Social media introduces new types of data and metadata that can be leveraged by RS (tags, comments, votes, explicit social relationships)
    • RS can significantly impact the success of social media, ensuring each user is presented with the most relevant items that suit her personal needs
  • Real-World Examples
    • Want to know what movie you should see this weekend? Ask Facebook. What should you eat for dinner tonight? Yelp knows! Should you wear the pink sweater or the blue one? You don't have to decide – you can just ask Twitter!
    • “The Culture of Recommendation: Has It Gone Too Far?” Susan Murphy at SuzeMuse Blog, Feb 22nd, 2011
  • Fundamental Recommendation Approaches IBM Research WWW 2011, March 28th-April 1st, Hyderabad, India
  • Fundamental Recommendation Approaches
    • Collaborative filtering based recommendation
      • Aggregate ratings of objects from users and generate recommendations based on inter-user similarity
    • Demographic recommendation
      • Categorize users based on personal attributes (age, gender, income, ...) and make recommendations based on demographic classes
    • Content-based recommendation
      • A user profile is constructed based on the features of the items the user has rated/consumed. This profile is used to identify new interesting items for the user (that match his profile)
    • Knowledge-based recommendation
      • Compute the utility of each item with respect to the user's needs
    • Hybrid methods
      • Combine several approaches together
  • Recommendation techniques [Burke, 2002]
    • CF. Background: user-item matrix. Input: ratings from u on items. Process: identify similar users, extrapolate from their ratings
    • Demographic. Background: demographic information about users. Input: demographic information about u. Process: identify similar users, extrapolate from their ratings
    • Content-based. Background: features of items. Input: ratings from u on items. Process: generate a classifier based on u's ratings, use it to classify new items
    • Knowledge-based. Background: features of items. Input: user needs. Process: infer a match between items and u's needs
  • Collaborative Filtering
    • In the real world we seek advice from people we trust (friends, colleagues, experts)
    • CF automates the process of “word-of-mouth”
      • Weight all users with respect to similarity with the active user.
      • Select a subset of the users ( neighbors ) to use as recommenders
      • Predict the rating of the active user for specific items
        • based on its neighbors’ ratings
      • Recommend items with maximum prediction
  • User based CF algorithm
    • The User × Item matrix:
              Shrek    Snow-white    Superman
      Alice:  Like     Like          Dislike
      Bob:    ?        Dislike       Like
      Chris:  Like     Like          Dislike
      Jon:    Like     Like          ?
    • Shall we recommend Superman to Jon? Jon's taste is similar to both Chris's and Alice's tastes → do not recommend Superman to Jon
  • User-based CF algorithm (cont.)
    • Voting: $v_{ij}$ is the vote of user $i$ for item $j$
    • $I_i$ is the set of items on which user $i$ voted
    • The mean vote for user $i$: $\bar{v}_i = \frac{1}{|I_i|} \sum_{j \in I_i} v_{ij}$
    • $w(a,i)$ is the similarity between the active user $u_a$ and user $u_i$; $\kappa$ is a normalization factor
    • Predictive vote of the active user $a$ for item $j$: $p_{aj} = \bar{v}_a + \kappa \sum_{i=1}^{n} w(a,i)\,(v_{ij} - \bar{v}_i)$
  • Similarity between users (a sketch follows below)
    • Cosine-based: $w(a,i) = \sum_j \frac{v_{aj}}{\sqrt{\sum_{k \in I_a} v_{ak}^2}} \cdot \frac{v_{ij}}{\sqrt{\sum_{k \in I_i} v_{ik}^2}}$
    • Pearson-based: $w(a,i) = \frac{\sum_j (v_{aj} - \bar{v}_a)(v_{ij} - \bar{v}_i)}{\sqrt{\sum_j (v_{aj} - \bar{v}_a)^2 \sum_j (v_{ij} - \bar{v}_i)^2}}$ (sums over items rated by both users)
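A minimal sketch of the user-based prediction above, assuming a small in-memory ratings dict (the toy data and variable names are illustrative, not from the tutorial):

```python
import math

# ratings[user][item] = vote v_ij (toy data; 5 = like, 1 = dislike)
ratings = {
    "Alice": {"Shrek": 5, "Snow-white": 4, "Superman": 1},
    "Chris": {"Shrek": 5, "Snow-white": 4, "Superman": 2},
    "Jon":   {"Shrek": 5, "Snow-white": 4},
}

def mean_vote(u):
    return sum(ratings[u].values()) / len(ratings[u])

def pearson(a, i):
    # Pearson similarity over the items both users voted on
    common = set(ratings[a]) & set(ratings[i])
    if not common:
        return 0.0
    ma, mi = mean_vote(a), mean_vote(i)
    num = sum((ratings[a][j] - ma) * (ratings[i][j] - mi) for j in common)
    den = math.sqrt(sum((ratings[a][j] - ma) ** 2 for j in common) *
                    sum((ratings[i][j] - mi) ** 2 for j in common))
    return num / den if den else 0.0

def predict(a, item):
    # p_aj = mean(a) + kappa * sum_i w(a,i) * (v_ij - mean(i)),
    # with kappa normalizing by the sum of |w(a,i)|
    neighbors = [u for u in ratings if u != a and item in ratings[u]]
    weights = {u: pearson(a, u) for u in neighbors}
    norm = sum(abs(w) for w in weights.values())
    if norm == 0:
        return mean_vote(a)
    return mean_vote(a) + sum(
        w * (ratings[u][item] - mean_vote(u)) for u, w in weights.items()) / norm

print(predict("Jon", "Superman"))  # ~2.5: well below Jon's mean, since his neighbors dislike Superman
```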
  • CF - Practical challenges
    • Ratings data is often sparse, and pairs of users with few co-ratings are prone to skewed correlations
    • Fails to incorporate agreement about an item in the population as a whole
      • Agreement about a universally loved item is much less important than agreement for a controversial item
        • Some algorithms account for global item agreement by including weights inversely proportional to an item’s popularity
    • Calculating a user’s perfect neighborhood is expensive – requiring comparison against all other users
      • Sampling: a subset of users is selected prior to prediction computation
      • Clustering: can be used to quickly locate a user's neighbors
  • Item-Based Nearest Neighbor Algorithms
    • The transpose of the user-based algorithms
      • generate predictions based on similarities between items
      • The prediction for an item is based on the user’s ratings for similar items
    • Prediction: traverse the items rated by the target user and average their votes, weighted by their similarity to the predicted item $j$: $p_{aj} = \frac{\sum_{k \in I_a} w(k,j)\, v_{ak}}{\sum_{k \in I_a} |w(k,j)|}$
    • $w(k,j)$ is a measure of item similarity, usually the cosine measure
    • Average-correcting is not needed because the component ratings are all from the same target user
              Shrek    Snow-white    Superman
      Alice:  Like     Like          Dislike
      Bob:    ?        Dislike       Like
      Chris:  Like     Like          Dislike
      Jon:    Like     Like          ?
    Bob dislikes Snow-white (which is similar to Shrek) → do not recommend Shrek to Bob
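A matching sketch of the item-based variant over the toy matrix above; adjusted cosine (cosine over mean-centered votes) is used here as one common choice for w(k,j):

```python
import math

# same toy votes as the matrix above (Like = 5, Dislike = 1, ? = absent)
ratings = {
    "Alice": {"Shrek": 5, "Snow-white": 5, "Superman": 1},
    "Bob":   {"Snow-white": 1, "Superman": 5},
    "Chris": {"Shrek": 5, "Snow-white": 5, "Superman": 1},
    "Jon":   {"Shrek": 5, "Snow-white": 5},
}

def dev(u, item):
    # vote centered on the rater's mean, so like/dislike patterns drive similarity
    mean = sum(ratings[u].values()) / len(ratings[u])
    return ratings[u][item] - mean

def item_sim(j, k):
    users = [u for u in ratings if j in ratings[u] and k in ratings[u]]
    num = sum(dev(u, j) * dev(u, k) for u in users)
    den = math.sqrt(sum(dev(u, j) ** 2 for u in users)) * \
          math.sqrt(sum(dev(u, k) ** 2 for u in users))
    return num / den if den else 0.0

def predict(a, j):
    # weighted average of a's own votes; no mean-centering needed,
    # since all component ratings come from the same target user a
    rated = [k for k in ratings[a] if k != j]
    den = sum(abs(item_sim(k, j)) for k in rated)
    return sum(item_sim(k, j) * ratings[a][k] for k in rated) / den if den else 0.0

# Bob dislikes Snow-white, which patterns with Shrek, so Shrek scores very low
# (negative similarities can push the score below the 1-5 scale: do not recommend)
print(predict("Bob", "Shrek"))
```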
  • Dimensionality Reduction Algorithms
    • Reduce domain complexity by mapping the item space to a smaller number of underlying “dimensions”
      • represent the latent topics or tastes present in those items
      • improve accuracy in predicting ratings for the most part
      • reduce run-time performance needs and lead to larger numbers of co-rated dimensions
    • Popular techniques: Singular value decomposition and Principal Component Analysis
      • Require an extremely expensive offline computation step to generate the latent dimensional space
  • Singular Value Decomposition (SVD)
    • Handy mathematical technique that has application to many problems
    • Given any $m \times n$ matrix $R$, find matrices $U$, $\Sigma$, and $V$ such that
      • $R = U \Sigma V^T$
      • $U$ is $m \times r$ and orthonormal
      • $\Sigma$ is $r \times r$ and diagonal
      • $V$ is $n \times r$ and orthonormal
    • Discard the smallest singular values in $\Sigma$ to get $R_k$ with $k \ll r$
      • $R_k$ is the best rank-$k$ approximation of $R$, based on the $k$ most important latent features
      • Recommendations are then generated based on $R_k$
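A short numpy sketch of the rank-k approximation described above (toy matrix; treating missing votes as zeros is a simplification of this sketch, since real systems impute or factor only observed entries):

```python
import numpy as np

R = np.array([[5., 4., 1.],      # toy user x item rating matrix
              [0., 1., 5.],      # 0 stands in for a missing vote here
              [5., 4., 2.],
              [5., 4., 0.]])

U, s, Vt = np.linalg.svd(R, full_matrices=False)   # R = U * diag(s) * Vt

k = 2                                              # keep the k largest singular values
R_k = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]        # best rank-k approximation of R

# R_k[i, j] then serves as the predicted score of item j for user i
print(np.round(R_k, 2))
```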
  • Hybrid recommendation methods
    • Any Recommendation approach has pros and cons
      • e.g. CF & CB both suffer from the cold start problem
        • but CF can recommend “outside the box” compared to Content-based approaches
    • Hybrid recommender system combines two or more techniques to gain better performance with fewer drawbacks
    • Hybrid methods:
      • Weighted: scores of several recommenders are combined together
      • Switching: switch between recommenders according to the current situation
      • Mixed: present recommendations that are coming from several recommenders
      • Cascade: One recommender refines the recommendations given by another
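A minimal sketch of the first (weighted) strategy, assuming each component recommender returns a dict of item scores on a comparable scale (the names and the 0.7 weight are illustrative):

```python
def weighted_hybrid(cf_scores, cb_scores, alpha=0.7):
    # linear blend of two recommenders' scores; items missing from one side score 0
    items = set(cf_scores) | set(cb_scores)
    return {i: alpha * cf_scores.get(i, 0.0) + (1 - alpha) * cb_scores.get(i, 0.0)
            for i in items}

blended = weighted_hybrid({"x": 0.9, "y": 0.2}, {"y": 0.8, "z": 0.5})
top = sorted(blended, key=blended.get, reverse=True)   # ranked recommendation list
```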
  • References
    • Recommender Systems: An Introduction. Jannach et al., 2011.
    • Recommender Systems Handbook. Ricci et al., 2010.
    • Hybrid Recommender Systems: Survey and Experiments. Burke. User Modeling and User-Adapted Interaction, 2002.
    • Toward the Next Generation of Recommender Systems: A Survey of the State-of-the-Art and Possible Extensions. Adomavicius et al. IEEE Transactions on Knowledge and Data Engineering, 2005.
  • Content Recommendation IBM Research WWW 2011, March 28th-April 1st, Hyderabad, India
  • Video Recommendations
    • The YouTube Video Recommendation System [Davidson et al., RecSys ’10]
    • Goals
      • recent and fresh
      • diverse
      • relevant to the user’s recent actions
      • users to understand why a video was recommended to them
    • Based on user’s personal activities (watched, favorited, liked)
    • Expanding through the co-visitation graph of videos
    • Ranking based on a variety of signals for relevance and diversity
  • Video Recommendations
    • Calculating related videos (CB):
      • Association rule mining (co-visitation count)
      • Additional issues: presentation bias, noisy watch data
      • More data sources: sequence and timestamp of video watches, video metadata
    • Expansion through related video graph
    • Ranking
      • Video quality
      • User specificity (personalization)
    • Topic diversification by removing very similar videos or ones yielded by the same seed video
    • Evaluation – A/B testing
      • CTR, long CTR, session length, time to long watch
      • 60% of all homepage video clicks are recommendations
  • News Recommendations
    • Personalized news recommendation based on click behavior [Liu et al., IUI 2010]
    • News Recommendation on the Google News website
    • Combined CB-CF approach: Rec(article) = IF(article) x CF(article)
    • IF is based on the topic of the article and two main factors:
      • User’s own past clicks (reflecting the user’s genuine news interests)
      • General news trends based on click behavior from the general public
      • Combination of long-term and short-term behavior
    • CF is based on clustering dynamic datasets
      • MinHash – fuzzy clustering of users based on the proportional overlap between the sets of items they clicked (see the sketch below)
      • Shown to scale and retain quality
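A sketch of the MinHash idea behind that clustering: the probability that two users get the same minimum hash over their clicked-item sets equals the Jaccard overlap of those sets, so concatenated MinHash values act as fuzzy cluster keys (the hash choice and parameters are assumptions of this sketch):

```python
import hashlib

def minhash(items, seed):
    # minimum hash value over a set; Pr[minhash(A) == minhash(B)] = Jaccard(A, B)
    return min(int(hashlib.md5(f"{seed}:{it}".encode()).hexdigest(), 16)
               for it in items)

def cluster_key(clicked_items, num_hashes=3):
    # concatenating several MinHash values makes matching stricter;
    # users that land in the same bucket form one fuzzy cluster
    return tuple(minhash(clicked_items, seed) for seed in range(num_hashes))

a = cluster_key({"story1", "story2", "story3"})
b = cluster_key({"story1", "story2", "story4"})
print(a == b)   # same bucket only if the users' click sets overlap heavily
```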
  • News Recommendations
    • Evaluation based on a live trial
    • Hybrid method shown to perform best
      • 31% CF over popularity; 38% hybrid over popularity
    • Noticeable effect on return rate in the longer run (after a week of getting used to the feature)
    • No effect on overall stories read on the News homepage (maybe people only have a fixed amount of time)
  • News Recommendations
    • Social network and social information filtering on Digg [Lerman, ICWSM ’07]
    • Digg – social news aggregator, allowing users to submit links to, vote, and discuss news stories
    • Recommendations by Digg Friends
    • Live trial by tracking Digg users over time
      • Users tend to like stories submitted by friends
      • Users tend to like stories their friends read and liked
    • Need to control for user-diversity to avoid “tyranny of the minority”, where lion’s share of front page stories comes from most active users
    • In practice, also use the concept of “diggers like me”
  • Enhancing CF with Friends
    • The user’s network of friends and people of interest becomes more accessible in the Web 2.0 era (Facebook, LinkedIn, Twitter,...)
    • Such social relationships can be very effective for recommendation compared to traditional CF
      • Recommendation from people the user knows and thus can judge
      • Spares the need for explicit feedback such as ratings
      • Effective for new users
    • Various works have shown the effectiveness of friend-based recommendation over CF, e.g.:
      • Movie and book recommendation - Comparing Recommendations Made by Online Systems and Friends [Sinha & Swearingen, 2001]
      • Friends as trusted recommenders for movies [Golbeck, 2006]
      • Club recommendation within a German SNS - Collaborative Filtering vs. Social Filtering [Groh & Ehmig, Group 2007]
  • Blog Recommendations
    • Document Representation and query expansion models for blog recommendation [Arguello et al., ICWSM ’08]
    • Personalized recommendation of blogs in response to a query
    • Blog is a collection of documents (blog entries)
    • The query represents an ongoing (multifaceted) interest in a topic
    • Two document models
      • Large document model – based on the blog as a whole, a virtual concatenation of its respective entries
      • Smoothed small document model – each entry is a document, aggregation at the ranking level
    • Query expansion using Wikipedia
    • Evaluation using the TREC Blog06 collection
      • The two document models perform equally well; hybridization further improves
      • Query expansion shown to improve recommendation results
    • Recommendation vs. Search
  • Mixed Social Media Item Recommendations
    • Social network-based recommendations of blogs, bookmarks, and communities
    • Key distinction:
      • Familiarity: co-authorship, org chart, direct connection or tagging, etc.
      • Similarity – co-usage of tags, co-bookmarking, co-membership, co-commenting
    • Explanations – showing the “implicit recommender” and her relationship to the user and item
    Personalized Recommendation of Social Software Items based on Social Relations [Guy et al., RecSys ’09]
  • Mixed Social Media Item Recommendations
    • Evaluation combines a user survey and a live system
    • Recommendations from familiar people are significantly more accurate than recommendations from similar people
      • 57% to 43% interest ratio
    • Similar people yield more diverse, less expected items
    • Explanations have an instant effect increasing interest in recommended items
  • Mixed Social Media Item Recommendations
    • Social media recommendation based on people and tags [Guy et al., SIGIR ’10]
    • Underlying social aggregation system – SaND
    • 5 item types: blogs, bookmarks, communities, wikis, files
    • First comprehensive study to compare people-based and tag-based recommenders
    • Outgoing and incoming tags
  • Mixed Social Media Item Recommendations
    • Direct tag evaluation
      • 65 participants rated 16 tags each
      • Hybrid tags most accurate
      • Incoming slightly outperform outgoing
      • Indirect are least effective
    • Large-scale user study
      • 412 participants rated 16 items each
      • All personalization methods outperform popularity
      • Tag-based significantly outperforms people-based in terms of accuracy
      • Yet has less diversity, more expected results, and less effective explanations
      • Hybrid combines the good of both worlds
      • Reaches 70:30 interest ratio for first 16 items
  • Tag-based Movie Recommendations
    • Tagommenders: connecting users to items through tags [Sen et al., WWW ’09]
    • Inspecting various ways to recommend items based on tags
      • Movie ratings
      • Movie clicks
      • Tag applications
      • Tag Searches
    • Evaluation based on MovieLens (featuring tags)
      • Algorithms based on item ratings performed best, followed by those based on implicit item signals, followed by those based on tag signals
      • A linear combination of all algorithms performed best
      • Tag-based algorithms outperformed state-of-the-art CF
        • Leading to RS that leverage the item features users find most useful
  • Summary of Key Points
    • Articulated social networks play an important role in CF for social media
      • Enhance regular CF in various manners
    • “Tagommenders” are highly effective for recommendation
      • Outperform regular CF
    • As in traditional RS, hybrid approaches (e.g., tags + social networks, short + long term interests) typically further improve
    • Many users => strong evaluation on live systems
    • Accuracy vs. Serendipity tradeoff
      • Accuracy alone is not enough – serendipity and diversity also play a key role [Mcnee et al., CHI ’06]
  • References
    • Arguello, J., Elsas, J., Callan, J., & Carbonell, J. Document representation and query expansion models for blog recommendation. Proc. ICWSM ’08.
    • Davidson, J., Liebald, B., Liu, J., et al. The YouTube video recommendation system. Proc. RecSys ’10, 293-296.
    • Groh, G., & Ehmig, C. Recommendations in taste related domains: collaborative filtering vs. social filtering. Proc. GROUP ’07, 127-136.
    • Golbeck, J. Generating predictive movie recommendations from trust in social networks. Proc. 4th Int. Conf. on Trust Management. Pisa, Italy.
    • Guy, I., Zwerdling, N., Carmel, D., et al. Personalized recommendation of social software items based on social relations. Proc. RecSys ’09, 53-60.
    • Guy, I., Zwerdling, N., Ronen, I., et al. Social media recommendation based on people and tags. Proc. SIGIR ’10, 194-201.
  • References
    • Lerman, K. Social networks and social information filtering on Digg. Proc. ICWSM ’07.
    • Liu, J., Dolan, P., & Pedersen, E.R. Personalized news recommendation based on click behavior. Proc. IUI ’10, 31-40.
    • McNee, S.M., Riedl, J., & Konstan, J.A. 2006. Being accurate is not enough: how accuracy metrics have hurt recommender systems. Proc. CHI ’06, 1097-1101.
    • Sen, S., Vig, J., & Riedl, J. Tagommenders: connecting users to items through tags. Proc. WWW ’09, 671-680.
    • Sinha, R. & Swearingen, K. Comparing recommendations made by online systems and friends. 2001 DELOS-NSF Workshop on Personalization and Recommender Systems in Digital Libraries.
  • Tag Recommendation IBM Research WWW 2011, March 28th-April 1st, Hyderabad, India
  • Tag Recommendation
    • Adding terms (tags) to objects by the public provides additional contextual and semantic information to various resources:
      • Web pages (e.g. Delicious)
      • Academic publications (e.g. CiteULike)
      • Multimedia objects (e.g. Flickr, Last.Fm, YouTube)
    • External tags are useful for many applications
      • search/browse, classification, tag-cloud representation, query expansion
    • Tag recommendation: recommend appropriate tags to be applied by the user to the specific item being annotated
      • Assists the user in the tagging phase
      • Reduces undesired noise in the aggregated folksonomy
  • Tag Recommendation Approaches
    • Popular:
      • Recommend the most popular tags to the user
        • Popular tags already assigned for the target item (Golder 2005)
        • Frequent tags previously used by the user
        • Tags that co-occur with already-assigned tags (Sigurbjörnsson 2008)
    • Collaborative Filtering:
      • Recommend tags associated with “similar” items
      • Recommend tags given by “similar” users
    • Hybrid:
      • Recommend tags given by similar users to similar items (Symeonidis08, Rendle10, Carmel 10)
  • Flickr’s tag recommender (Sigurbjörnsson, WWW 2008)
  • Content-based tag recommendation
    • Recommend keywords/phrases from the item’s associated text (content, anchor-text, meta data, etc.)
      • e.g., terms with the highest tf-idf score
    • Analyze mutual relationship between content and tags
      • Recommend tags that have the highest co-occurrence with important keywords
      • Language modeling approach (Givon 2010):
        • Estimate the joint tag and keyword probability distribution.
        • This provides an estimation that a given item will be annotated with certain tags, given a background collection of annotated items
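A minimal sketch of the first content-based option above (suggest the item's highest tf-idf terms as tag candidates); the corpus statistics passed in are assumptions of this sketch:

```python
import math
from collections import Counter

def tfidf_tag_candidates(doc_tokens, doc_freq, n_docs, top_k=5):
    # score each term in the item's text by tf * idf and propose the top ones
    tf = Counter(doc_tokens)
    scores = {t: tf[t] * math.log(n_docs / (1 + doc_freq.get(t, 0)))
              for t in tf}
    return sorted(scores, key=scores.get, reverse=True)[:top_k]

tokens = "social recommender systems recommend items using social data".split()
print(tfidf_tag_candidates(tokens, doc_freq={"using": 900, "data": 700}, n_docs=1000))
```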
  • Graph based approaches
    • The FolkRank algorithm (Hotho 2006):
      • a resource which is tagged with important tags by important users becomes important
        • The same holds, symmetrically, for tags and users
    • We have a graph of connected vertices (resources, users, tags) which are mutually reinforcing each other by spreading their weights
    • Graph nodes are scored by random-walk techniques: $\vec{w} \leftarrow d A \vec{w} + (1-d)\,\vec{p}$, where
      • $\vec{w}$ is a weight vector over the nodes
      • $A$ is a row-stochastic matrix of the graph
      • $\vec{p}$ is a preference vector over the nodes
    • For tag recommendation, return the top-ranked tags, setting $p$ to bias the desired pair of user and resource (a sketch follows below)
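A sketch of that biased random walk, with the update written as w ← d·A·w + (1−d)·p (the matrix orientation is an assumption here: A is taken so that A @ w spreads weight along edges; FolkRank proper additionally ranks by the difference between this biased walk and an unbiased one):

```python
import numpy as np

def biased_walk(A, p, d=0.7, iters=50):
    # A: stochastic matrix over the user/tag/resource graph (assumed orientation)
    # p: preference vector biased toward the query user and resource
    w = np.full(len(p), 1.0 / len(p))
    for _ in range(iters):
        w = d * (A @ w) + (1 - d) * p
    return w   # recommend the top-ranked tag nodes
```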
  • Tag Recommendations in Folksonomies, Jaeschke07
    • For each user we pick one of his posts randomly
    • The task of the different recommenders is to predict the user tags of this post, based on the rest of the Folksonomy
    • We measure how many of those tags are covered by the top-k recommended tags
  • References
    • The Structure of Collaborative Tagging Systems. Golder et al. Journal of Information Science, 2005.
    • Flickr tag recommendation based on collective knowledge. Sigurbjörnsson et al. WWW 2008.
    • Tag recommendations based on tensor dimensionality reduction. Symeonidis et al. RecSys 2008.
    • Pairwise interaction tensor factorization for personalized tag recommendation. Rendle et al. WSDM 2010.
    • Social bookmark weighting for search and recommendation. Carmel et al. VLDB Journal, 2010.
    • Large Scale Book Annotation with Social Tags. Givon et al. ICWSM 2009.
    • FolkRank: A Ranking Algorithm for Folksonomies. Hotho et al. FGIR 2006.
    • Tag Recommendations in Folksonomies. Jaeschke et al. PKDD 2007.
  • People Recommendation IBM Research WWW 2011, March 28th-April 1st, Hyderabad, India
  • Social Matching
    • Social matching: A framework and research agenda [Terveen & McDonald, 2005]
    • Social matching systems = recommender systems that recommend people to each other
      • Must reveal some amount of personal information
      • Privacy, trust, reputation, interpersonal attraction have greater importance
      • Interaction overload vs. information overload
  • Recommending People to Connect with
    • Do You Know? Recommending People to Invite into Your Social Network [Guy et al., IUI ’09]
    • Recommendation in the enterprise based on the following signals:
      • Org chart relationships
      • Paper and patent co-authorship
      • Project co-membership
      • Blog commenting
      • People tagging
      • Mutual connections
      • Connection in another SNS
      • Wiki co-editing
      • File sharing
    • Rich and detailed “evidence”
  • Recommending People to Connect with
    • Evaluation based on the Fringe enterprise SNS
    • Dramatic increase in the number of invitations sent and users sending invitations
      • “I must say I am a lazy social networker, but Fringe was the first application motivating me to go ahead and send out some invitations to others to connect”
    • Evidence increases users’ trust in the system and makes them feel more comfortable
      • “If I see more direct connections I’m more likely to add them […] I know they are not recommended by accident”
    • Substantial increase in friends per user
    • Sharp decay in usage over time
      • Excitement drops, connections exhausted
  • Recommending People to Connect with
    • Make new friends, but keep the old: recommending people on social networking sites [Chen et al., CHI ’09]
    • Content Matching (CM)
      • Profile entries, status messages, photo text, shared lists, job title, location, description, tags
      • $v_u(w_i) = TF_u(w_i) \cdot IDF_u(w_i)$ (a sketch of this scoring follows after this list)
      • Cosine similarity of both users’ word vectors
      • Latent semantic analysis did not perform better
        • And does not yield intuitive explanations
    • Content-plus-Link (CplusL)
      • Hybrid CM + social link
      • Social link: a sequence of 3 or 4 users
        • a connects to b, a comments on b, b connects to a
    • Friend-of-Friend (FoF)
      • Based on number of mutual friends
      • One or more recommendations for 57.2% of the users
    • Aggregated Relationships (SONAR)
      • Similar to the “Do you know?” algorithm
      • One or more recommendations for 87.7% of the users
    4 Algorithms Compared
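A sketch of the CM scoring above: build each user's tf-idf word vector per the formula v_u(w_i) = TF_u(w_i) · IDF_u(w_i) and rank candidates by cosine similarity (the data structures are assumptions of this sketch):

```python
import math

def tfidf_vector(word_counts, doc_freq, n_users):
    # v_u(w) = TF_u(w) * IDF(w), per the CM formula above
    return {w: c * math.log(n_users / (1 + doc_freq.get(w, 0)))
            for w, c in word_counts.items()}

def cosine(v1, v2):
    # similarity of two users' word vectors; candidates are ranked by this score
    num = sum(v1[w] * v2[w] for w in v1 if w in v2)
    den = math.sqrt(sum(x * x for x in v1.values())) * \
          math.sqrt(sum(x * x for x in v2.values()))
    return num / den if den else 0.0
```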
  • Recommending People to Connect with
    • Evaluation based on the SocialBlue enterprise SNS (“Beehive”)
    • Survey with 258 participants
      • CM and CplusL yield mostly unknown people, while FoF and SONAR yield mostly known
      • Content-similarity vs. relationship-based algorithms
      • The latter are more accurate overall
      • The former are better at discovering new friends
    • Controlled field study with 3,000 users
      • SONAR yields most effective results
      • Combine relationships (at first) and content similarity (when the network grows)?
  • Recommending People to Connect with
    • The network effects of recommending social connections [Daly et al., RecSys ’10]
    • FoF is highly biased towards well-connected users, leading to high rec. frequency of the same users
    • CM is most diverse and often recommends users with few connections only
    • CM and SONAR affect betweenness centrality most significantly
    • CM is most biased for same country but least biased for same division
    • SONAR substantially increases cross-country and intra-division connections
    • Highlight network effects when recommending people?
    (Figures: degree distribution and betweenness delta per algorithm)
  • Recommending People to Connect with
    • FriendSensing: recommending friends using mobile phones [Quercia et al., RecSys ’09]
    • “Sensing” friends based on Bluetooth information from mobile devices
    • Friends are recommended based on collocation:
      • Creating a weighted graph based on frequency and duration
      • Applying link prediction on the graph (shortest path, PageRank, k-Markov chain, HITS)
    • Simulation-based evaluation
  • Recommending People to Follow
    • Recommending twitter users to follow using content and collaborative filtering approaches [Hannon et al., RecSys ’10]
    • CB, CF, and Hybrid approaches
    • User profiles based on
      • Own tweets
      • Followers’ tweets
      • Followees’ tweets
      • Followers
      • Followees
    • Using Lucene to index users by their profile, after applying TF-IDF to boost distinctive terms/users within the profile
    • http://twittomender.ucd.ie/
  • Recommending People to Follow
    • Offline Evaluation, 20K users
      • 19,000 training set Twitter users
      • 1,000 test users
      • Create index per profile and predict followees
      • Measure by precision and position
      • Results show trade-off between the two
      • Slight advantage to followers and tweets of followers
      • Hybrid improves results (precision close to 0.3)
    • Live User Trial, 34 participants
      • Hybrid approach combining all types
      • 30 recommended Twitter users
      • Indicate whom s/he is likely to follow
        • No actual effect
      • On average, 6.9 out of 30
  • Recommending People to Follow
    • Who should I follow? Recommending people in directed social networks [Brzozowski & Romero, 2010]
    • Experiments with the WaterCooler enterprise SNS
    • 110 users followed 774 new people during a 24-day trial period
    • Strongest pattern A<-X->B
      • Sharing an audience with someone is a surprisingly compelling reason to follow them
    • Similarity (and most read) are not so strong indicators
    • Most replied is a good indicator
  • Expertise Location
    • Expertise location systems aid in finding a person with regard to a specific topic
    • For answering a question, getting advice, having a conversation, promoting an idea, etc.
    • Sometimes referred to as “expert recommendation” systems
    • E.g., Expertise recommender: a flexible recommendation system and architecture [McDonald & Ackerman, CSCW ’00]
    • Different from people recommendation
      • Initiated by the user
      • In context: the user types a query
      • Typically answers an ad-hoc user need
      • Recommendation vs. Search
  • References
    • Brzozowski, M.J. & Romero, D.M. Who should I follow? Recommending people in directed social networks.
    • Chen, J., Geyer, W., Dugan, C., Muller, M., & Guy, I. 2009. Make new friends, but keep the old: recommending people on social networking sites. Proc. CHI ’09, 201-210.
    • Daly, E.M., Geyer, W., & Millen, D.R. The network effects of recommending social connections. Proc. RecSys ’10, 301-304.
    • Guy, I., Ronen, I., & Wilcox, E. Do you know? Recommending people to invite into your social network. Proc. IUI ’09, 77-86.
    • Hannon, J., Bennett, M., & Smyth, B. Recommending twitter users to follow using content and collaborative filtering approaches. Proc. RecSys ’10, 199-206.
    • McDonald, D.W. & Ackerman, M.S. 2000. Expertise recommender: a flexible recommendation system and architecture. Proc. CSCW ’00, 231-240.
    • Quercia, D., & Capra, L. 2009. FriendSensing: recommending friends using mobile phones. Proc. RecSys ’09, 273-276.
    • Terveen, L. & McDonald, D.W. Social matching: A framework and research agenda. ACM TOCHI 12, 3 (2005), 401-434.
  • Community Recommendations IBM Research WWW 2011, March 28th-April 1st, Hyderabad, India
  • Recommending Similar Communities
    • Evaluating similarity measures: a large-scale study in the Orkut social network [Spertus et al., KDD ’05]
    • Orkut – SNS by Google, used to be largest in Brazil and India
      • 20K communities with over 20 members
      • 180K distinct members
      • Over 2M memberships
    • 6 community similarity measures, based on community membership
      • How appropriate is r as a recommendation for b
      • The six measures: L1-norm, L2-norm, Log-Odds, Salton (IDF), Pointwise Mutual Information (positive correlations), and Pointwise Mutual Information (positive and negative correlations); two of them are sketched below (Orkut figures are as of the time of the experiments)
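Two of the six measures, sketched over raw membership sets; B and R stand for the member sets of the base and related communities, and the formulas follow the usual set-overlap definitions of L1/L2 normalization:

```python
import math

def l1_sim(B, R):
    # L1-norm: common members, normalized by the product of community sizes
    return len(B & R) / (len(B) * len(R))

def l2_sim(B, R):
    # L2-norm: cosine over binary membership vectors;
    # this is the measure that came out best in the study below
    return len(B & R) / math.sqrt(len(B) * len(R))
```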
  • Recommending Similar Communities
    • Live trial on the Orkut SNS
    • Click-through to measure user acceptance
    • L2 shown to be the best similarity measure
    • Followed by MI1, MI2, IDF, L1, and Log-Odds
    • Conversion rate - % of non-members who clicked-through and then joined
      • 46% for base members, 17% for base non-members
    • Measure user similarity through common communities – will L2 win too?
  • Personalized Community Recommendation
    • Collaborative filtering for Orkut communities: discovery of user latent behavior [Chen et al., WWW ’09]
    • Personalized community recommendation using CF of two types
      • Association rule mining (ARM) – association between communities shared between many users: users who join X typically join Y
      • Latent Dirichlet Allocation (LDA) – user-community co-occurrences using latent aspects (topics): x is related to y through a semantic feature, e.g., “baseball”
        • Users=docs, communities=words, membership=co-occurrence
        • Per-topic distribution of users and communities
  • Personalized Community Recommendation
    • Orkut membership data: 492K users, 118K communities
    • Top-k recommendation: withhold one community the user has joined, mix it with k−1 random communities, and obtain its rank (k=1001)
    • ARM is better when recommending lists of up to 3 communities
    • LDA is consistently better when recommending lists of 4 or more
    • In general, LDA ranks communities better than ARM
    • LDA is parallelized to improve efficiency
  • Personalized Community Recommendation
    • Flickr group recommendation based on tensor decomposition [Zheng et al., SIGIR ’10]
    • Latent association between users and groups through tags
    • Model through a three-mode tensor
    • Experiments using 197x5328x4064 tensor
      • Statistically significant superiority over user-based and non-negative matrix factorization approaches
  • Personalized Community Recommendation
    • Group Proximity Measure for Recommending Groups in Online Social Networks (Saha & Getoor, SNA-KDD ’08)
    • Group proximity based on escape probability
      • The probability that a random walk starting in G1 visits G2 before any other node of G1
      • Extend with core vs. outlier nodes
    • Method 1: recommend the closest groups to those the user is a member of
    • Method 2: boost groups that are closer to many of the user’s groups
    • GM and GL – two SVM classifiers trained over group membership data
    • Accuracy = predicted membership in top 5
  • Personalized Community Recommendation
    • Group Recommendation System for Facebook [Baatarjav et al., OTM ’08]
    • Matching user profiles with group identities
    • Combining hierarchical clustering and decision tree
    • Facebook data for the University of North Texas (1,580 users), focus on 17 groups
    • 15 profile features: age, gender, timezone, relationship status, political view, interests, movies, affiliations, …
    • Characterize groups by the majority of their members
    • Removing noise by removing members far from the group’s center
    • Reported average accuracy 73%
    • From LinkedIn Blog (“groups you may like”):
      • Building a virtual profile per group by selecting the most representative features of group members using Information Theory techniques like Mutual Information and KL Divergence.
      • Mapping user’s attributes to group’s virtual profile
      • Adding more recommendations based on CF
  • References
    • Baatarjav, E.A., Phithakkitnukoon S., & Dantu R. Group recommendation system for Facebook. Proc. OTM ’08 , 211-219.
    • Chen W.Y., Chu J.C., Luan J., Bai H., Wang, Y., & Chang E.Y. Collaborative filtering for Orkut communities: discovery of user latent behavior. Proc. WWW '09 , 681-690.
    • Official LinkedIn Blog – The engineering behind LinkedIn products “you may like”: http://blog.linkedin.com/2011/03/02/linkedin-products-you-may-like/
    • Saha & Getoor. Group Proximity Measure for Recommending Groups in Online Social Networks. Proc. SNA-KDD ’08 .
    • Spertus, E., Sahami, M., & Buyukkokten O. Evaluating similarity measures: a large-scale study in the Orkut social network. Proc. KDD '05 , 678-684.
    • Zheng N., Li Q., Liao S., & Zhang L. Flickr group recommendation based on tensor decomposition. Proc. SIGIR '10 , 737-738.
  • Recommendations for Groups IBM Research WWW 2011, March 28th-April 1st, Hyderabad, India
  • Recommendation for a group rather than individuals
    • Friends collaborating with a recommender system to design the perfect vacation
    • Family selecting a movie or TV show to watch together
    • Group of colleagues choosing a restaurant for an evening out
      • Or looking for a recipe for a joint meal
  • Issues in recommendation for groups, by phase:
    • Members specify their preferences. Difference from individuals: members can examine each other. General issue: benefits/drawbacks for the group/system
    • System generates recommendations. Difference from individuals: aggregation of preferences/results must be applied. General issue: aggregation methods
    • Members decide which recommendation (if any) to accept. Difference from individuals: negotiation may be required. General issue: how to support the process of arriving at a final decision
  • Explicit specification of preferences: MusicFX [McCarthy, CSCW ’98]
    • Each member rates every genre on a 5-point scale from Hate (1) through Don’t mind to Love (5):
      • Alternative rock: 1 2 3 4 5
      • Hot country: 1 2 3 4 5
      • 50’s oldies: 1 2 3 4 5
      • … 90 more genres
  • Collaborative specification of preferences in the Travel Decision Forum [Jameson 04]
  • Collaborative specification - Advantages
    • Persuade other members to specify a similar preference, perhaps by giving them information that they previously lacked
    • Explain and justify a member's preference
        • (e.g., “I can't go hiking because of an injury”)
    • Taking into account attitudes and anticipated behavior of other members
    • Encouraging assimilation to facilitate the reaching of agreement
  • Aggregating Preferences
    • Least misery:
      • Recommendations are based on the lowest predicted rating
      • Rate item $i$ by $R_i = \min\{r_{i1}, \ldots, r_{ik}\}$ (for a group of users $\{1..k\}$)
    • Fairness:
      • No-one wants a perfectly fair solution that makes everyone equally miserable
        • $R_i = \mathrm{avg}\{r_{i1}, \ldots, r_{ik}\} - w \cdot \mathrm{std}(r_{i1}, \ldots, r_{ik})$ (sketched below)
    • Fusion:
      • Aggregate item ranking created for the individuals (e.g., Borda count)
    • Group modeling:
      • Compute an aggregate preference model that represents the preferences of the group as a whole
        • e.g. the system computes a model of the group by forming a linear combination of the individual models
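The two scoring rules above, written out as one-liners; member_ratings is assumed to be a list of the group members' predicted ratings for one item, and the fairness weight w is a free parameter:

```python
import statistics

def least_misery(member_ratings):
    # R_i = min{r_i1 .. r_ik}: the least happy member effectively vetoes
    return min(member_ratings)

def fairness(member_ratings, w=0.5):
    # R_i = avg{r_i1 .. r_ik} - w * std(r_i1 .. r_ik)
    return statistics.mean(member_ratings) - w * statistics.pstdev(member_ratings)
```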
  • Aggregation Methods [Berkovsky, RecSys ’10]
  • Group recommendations [Baltrunas et al., RecSys 2010]
    • Individual recommendation:
      • a ranked list of all items not present in the training set of any group member
    • Group recommendation:
      • takes as input the individual ranked recommendation lists for each user in the group
      • returns a ranked list of recommendations for the whole group
  • References
    • MusicFX: An Arbiter of Group Preferences for Computer Supported Collaborative Workouts. McCarthy et al. CSCW ’98.
    • Two Methods for Enhancing Mutual Awareness in a Group Recommender System. Jameson et al. AVI 2004.
    • Recommendation to Groups. Jameson & Smyth, 2007.
    • Group recommendations with rank aggregation and collaborative filtering. Baltrunas et al. RecSys 2010.
    • Group-Based Recipe Recommendations: Analysis of Data Aggregation Strategies. Berkovsky et al. RecSys 2010.
  • The Cold Start Problem IBM Research WWW 2011, March 28th-April 1st, Hyderabad, India
  • The Cold Start Problem
    • The Cold Start problem concerns the issue where the RS cannot draw inferences for users or items for which it has not yet gathered sufficient information
    • New items
      • e.g., a newly created document w/o tags or bookmarks
      • e.g., a newly created community w/o members
    • New users
      • e.g., a user that has just signed up to a new site
      • e.g., a new member or employee
    • Typically addressed by applying a hybrid approach
  • The Cold Start Problem of New Items
    • Traditional CF systems are based on item ratings
      • Until rated by a substantial number of users, the system will not be able to recommend the item
    • a.k.a. the “early rater” problem – the first person to rate an item gets little benefit
    • Same for implicit feedback over items – clicks, searches, comments, tags
    • Even more acute for activity streams, where items quickly come and go
    • Typically addressed by integrating CB similarity measurements
      • Recommendation based on the data of older similar items
    (Diagram: CB – user A likes item X, which is similar to item Y, so recommend Y to A; CF+CB – user B is similar to user A, who likes item X, which is similar to item Y, so recommend Y to B)
  • The Cold Start Problem of New Items
    • Methods and metrics for cold-start recommendations [Schein et al., SIGIR ’02]
    • Extend hybrid RS to average content data in a model-based fashion
  • The Cold Start Problem for New Users
    • Sometimes also referred to as the “New User Problem”
    • User needs to rate sufficient items for a CB recommender to really understand the user’s preferences
    • Mitigated by CF – similar users who rated more items can yield more recommendations
    • Traditional CF still faces an issue if the user did not provide any explicit feedback (or very small amount of feedback)
    • Typically resolved through building a user profile by integrating other user activity (implicit feedback)
      • Browsing history, click-through data, searches
    • Social media introduces new ways to learn about the user from external sources
      • Friends (“social filtering”), tags, communities, …
      • More public information which is less sensitive to privacy issues
  • The Cold Start Problem for New Users
    • Increasing engagement through early recommender intervention [Freyne et al., RecSys ’09]
    • New users of an enterprise SNS
    • Use aggregation approach – extract social network data from external sources (project and patent DB, org chart, wikis and blogs)
    • A brand new employee – still has org chart and basic profile data (location, division, project, etc.)
  • The Cold Start Problem for New Users
    • Getting to know you: learning new user preferences in recommender systems [Rashid et al., IUI ’02]
    • Experimentation with MovieLens (offline and online)
    • Focus on minimizing user effort, while maximizing accuracy
    • 6 techniques a CF system can use to learn about new users
      • Random
      • Popular (#ratings), adds less information
      • Entropy (diverse ratings), adds a lot of information
      • Pop*Ent – the product of popularity and entropy
      • Item-Item personalized – once the user has rated one movie (before that – random from the top 200 by Pop*Ent)
  • Cold Start for Tag-based Recommenders
    • Tag-based recommenders are sensitive to both new item and new user cold start problems
      • New items are “tagless”
      • New users are not associated with tags
    • Possible Solutions:
      • Hybridize with a CB recommender
      • Use CB automatic tag extraction
      • Apply CF for tags to enrich tags
        • Will not work for brand new users or items
  • References
    • Freyne J., Jacovi M., Guy I., and Geyer, W. Increasing engagement through early recommender intervention. Proc. RecSys '09 , 85-92.
    • Schein A.I., Popescul A., Ungar L.H, & Pennock D.M. Methods and metrics for cold-start recommendations. Proc. SIGIR ’02 , 253-260.
    • Rashid A.M., Albert I., Cosley D., Lam S.K., McNee S.M., & Konstan, J.A. Getting to know you: learning new user preferences in recommender systems. Proc. IUI ’02 , 127-134.
  • Trust and Distrust IBM Research WWW 2011, March 28th-April 1st, Hyderabad, India
  • Social relations based recommendation
    • Traditional recommender systems ignore the social connections between users
    • To improve recommendation accuracy, users’ social networks should be taken into consideration
      • People who are socially connected are more likely to share the same or similar interests
      • Users can be easily influenced by the friends they trust, and prefer their friends’ recommendations.
  • Trust Enhanced Recommendation
    • The question of whom to trust and what information to trust has become both more important and more difficult to answer
    • Social trust relationships, derived from social networks, are uniquely suited to speak to the quality of online information
    • Merging social-network-based trust with recommender systems can improve the accuracy of recommendations and the user's experience.
  • Problem definition: combine the social trust graph with the user-item rating matrix
  • TidalTrust: accumulate recommendations from trusted people only (Golbeck06)
  • Integrating Trust with CF
    • Items recommended by trusted neighbors will be boosted
    • Note that we still can get recommendations from similar non trusted neighbors ( w(a,u) << t(a,v) if v is trusted and u is not)
    • Note:
      • Trust is highly correlated with similarity
        • people usually have more trust in similar people
      • However, trust captures nuances that similarity cannot assess (Golbeck)
  • Inferring trust. The goal: select two individuals, the source and the sink, and recommend to the source how much to trust the sink. (Diagram: paths from source to sink.)
  • Trust computations
    • If the source does not know the sink, the source asks all of its friends how much to trust the sink, and computes a trust value by a weighted average
    • Neighbors repeat the process if they do not have a direct rating for the sink
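A simplified sketch of that recursive weighted average (TidalTrust proper also bounds path length and prunes by a trust threshold; this version only avoids cycles along each path):

```python
def infer_trust(source, sink, trust, visited=frozenset()):
    # trust[a][b] = direct trust rating a -> b, e.g. in [0, 1]
    if sink in trust.get(source, {}):
        return trust[source][sink]
    visited = visited | {source}
    num = den = 0.0
    for friend, t in trust.get(source, {}).items():
        if friend in visited:
            continue
        inferred = infer_trust(friend, sink, trust, visited)
        if inferred is not None:
            num += t * inferred   # weight each friend's answer by trust in that friend
            den += t
    return num / den if den else None
```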
  • Reputation
    • We can measure the reputation (“global trust”) of a user using network analysis
      • e.g. PageRank
    • These reputation values can be used to bias recommendations from highly trusted (reliable) people
  • What about Distrust? (Victor 2009)
    • Use the distrust set to filter out `unwanted' individuals from collaborative recommendation processes
      • Ignore users from $R_{Dist}$ in the CF process
    • Distrust as an indicator to reverse deviations
      • consider distrust scores as negative weights
        • Reduce the recommendation score of items recommended by distrusted neighbors
    • Using distrust for recommendation is still an open challenge
  • Trust in recommendation (via explanations)
    • Good explanations could help inspire user trust and loyalty, increase satisfaction, make it quicker and easier for users to find what they want, and persuade them to try or purchase a recommended item
    • MoviExplain: A Recommender System with Explanations (Symeonidis09)
  • Explanation Aims:
    • Transparency
      • Explain how the system works
    • Scrutability
      • Allow users to tell the system it is wrong
    • Trust
      • Increase users’ confidence in the system
    • Effectiveness
      • Help users make good decisions
    • Persuasiveness
      • Convince users to try or buy
    • Efficiency
      • Help users make decisions faster
    • Satisfaction
      • Increase the ease of usability or enjoyment
  • Explanation types
    • Nearest neighbor explanation
      • Customers who bought item X also bought items Y,Z
      • Item Y is recommended because you rated related item X
    • Content based explanation
      • This story deals with topics X,Y which belong to your topic of interest
    • Social based explanation
      • People leverage their social network to reach information and make use of trust relationships to filter information.
        • Your friend X wrote that blog
        • 50% of your friends liked this item (while only 5% disliked it)
  • TagSplanations (Vig09)
    • Tagsplanations have two key components:
      • tag relevance, the degree to which a tag describes an item
      • tag preference, the user's sentiment toward a tag
    (Examples shown: Rushmore, Rear Window)
  • Social-based Explanation (Guy09)
  • References
    • Trust- and Distrust-Based Recommendations for Controversial Reviews. Victor et al. IEEE Intelligent Systems.
    • Inferring binary trust relationships in Web-based social networks. Golbeck et al. TOIT 2006.
    • MoviExplain: a recommender system with explanations. Symeonidis et al. RecSys 2009.
    • Tagsplanations: explaining recommendations using tags. Vig et al. IUI 2009.
    • Personalized recommendation of social software items based on social relations. Guy et al. RecSys 2009.
  • Social Recommender Systems in the Enterprise IBM Research WWW 2011, March 28th-April 1st, Hyderabad, India
  • Social Media in the Enterprise
    • Following their success on the web, social media applications have emerged in large enterprises
      • Corporate blogs and wikis (Blog Central)
      • Social bookmarking (Dogear)
      • Social network sites (Fringe, Beehive/SocialBlue, Town Square)
      • People tagging
      • Social file sharing
    • Several differences observed from use outside the firewall
      • e.g., people seek more strongly to connect with strangers
    • Information overload analogously created within the enterprise
    • Enterprise social media products in market: IBM Connections, Jive, Yammer
  • IBM Lotus Connections – Enterprise Social Software
  • Social Recommenders in the Enterprise
    • Same user identity in all systems
      • Facilitates aggregation
      • Higher accountability
    • Lower scales, but also lower density
    • Cold start for new employees
      • Mitigated with directory data (location, division, role, etc.)
  • Recommending Content to Create
    • Recommending topics for self-descriptions in online user profiles [Geyer et al., RecSys ’08]
    • ‘About You’ Q&A pairs in the SocialBlue SNS
    • Content algorithm – user profile based on all SocialBlue content
    • Network algorithm – if one or more friends have created this ‘About You’
    • Network algorithm performed best
  • Increasing Engagement in Enterprise Social Media
    • Increasing Engagement through Early Recommender Intervention [Freyne et al, RecSys ’09]
    • Challenge of adoption and engagement of new social software users
    • Research over SocialBlue enterprise SNS
    • Combined people and content-creation recommendations have significant effect in increasing overall contributions and page views
      • Short and long term effects
    • People recommendations (sorted by overall activity) have a strong effect in increasing retention rate
  • Stranger Recommendation
    • Do you want to know? Recommending strangers in the enterprise [Guy et al., CSCW ’11]
    • Recommendation of people who are unknown yet interesting in the organization
    • May be useful to
      • Get help or advice
      • Reach new opportunities
      • Discover new routes for career development
      • Learn about new assets that can be leveraged
      • Connect with SMEs and influencers
      • Cultivate organizational social capital
      • Grow own reputation and influence within the organization
    • Complements recommendation of people to connect with, as those are quickly exhausted over time
  • Stranger Recommendation
    • Method - subtracting the familiarity network from the similarity network
    • Similarity : common things and places: tags, communities, wikis
    • Score based on Jaccard’s index
    • Presentation with evidence
    • Two-thirds of the recommendations are strangers
    • Significantly more interesting than a random person
    • Out of 9 recommendations, 67% got at least one stranger rated 3 or above
    • Exploratory recommendation
      • Low accuracy, high serendipity
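A sketch of the subtraction step above: score everyone by Jaccard overlap of the similarity sets (tags, communities, wikis), then drop anyone in the familiarity network; the data structures are assumptions of this sketch:

```python
def jaccard(s1, s2):
    return len(s1 & s2) / len(s1 | s2) if s1 | s2 else 0.0

def stranger_recommendations(me, similarity_sets, familiar, top_k=9):
    # similarity_sets[user] = set of tags/communities/wikis the user touches;
    # familiar = the user's familiarity network, subtracted from the candidates
    scores = {u: jaccard(similarity_sets[me], s)
              for u, s in similarity_sets.items()
              if u != me and u not in familiar}
    return sorted(scores, key=scores.get, reverse=True)[:top_k]
```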
  • References
    • Freyne J., Jacovi, M., Guy I., & Geyer W. 2009. Increasing engagement through early recommender intervention. Proc RecSys ’09, 85-92.
    • Geyer, W., Dugan, C., Millen, D., Muller, M., & Freyne, J. Recommending topics for self-descriptions in online user profiles. Proc. RecSys ’08 , 59-66.
    • Blog Muse
    • Guy I., Ur, S., Ronen, I., Perer, A., & Jacovi, M. Do you want to know? recommending strangers in the enterprise. Proc. CSCW ’11 .
  • Temporal Aspects in Social Recommendation IBM Research WWW 2011, March 28th-April 1st, Hyderabad, India
  • The Time Factor in RS
    • Most RS works ignore temporal factors
    • In practice, time yields many changes:
      • Changes in user tastes and needs
        • e.g., a scientist moving to a new domain
      • Appearance of new items and users, disappearance of others
        • e.g., a new action movie about aliens
      • Changes in items and their features
        • e.g., a restaurant changing its menu
    • The pace of changes is greater in the social media world, where the masses are involved in constantly creating new content, communities, comments, tags, etc.
    • Real-time web mediums like Twitter even further magnify the importance of time
  • Temporal CF
    • Collaborative filtering with temporal dynamics [Koren, KDD’09]
    • Tracking customer preferences for products over time
    • Both users and products go through a distinct series of changes in their characteristics
    • A mere decay model loses too much signal
    • Separate transient factors from lasting ones
      • Matrix factorization – model both user and product change along time, to distill longer term trends
      • Item-item neighborhood – learning how influence between two items rated by a user decays over time, to reveal the fundamental item-item relations
    • Leads to improved results on the Netflix data for both models
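For contrast, the "mere decay model" the paper argues against is easy to state: weight each rating by an exponentially decaying function of its age (the half-life here is an arbitrary choice of this sketch):

```python
import time

def decayed_weight(rating_timestamp, now=None, half_life_days=90.0):
    # naive exponential time decay; [Koren, KDD'09] argues this discards
    # too much long-term signal compared to modeling change explicitly
    now = time.time() if now is None else now
    age_days = (now - rating_timestamp) / 86400.0
    return 0.5 ** (age_days / half_life_days)
```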
  • Temporal Diversity
    • Temporal diversity in recommender systems [Lathia et al., SIGIR ’10]
    • Over time, the same items might be recommended again and again
      • Users may lose interest in interacting with the RS
    • Temporal diversity is therefore important
      • Netflix data changes over time: #movies grows, #users grows, #ratings grows
    • User survey – simulated 5 “weeks”
      • S1 – top-10 most popular movies (no diversity)
      • S2 – ~7 popular movies replaced each week
      • S3 – random out of the Netflix dataset
  • Temporal Diversity
    • Comparing diversity of 3 CF algorithms over time:
      • Baseline – item’s mean rating
      • Item-item based kNN
      • Matrix factorization based on SVD
    • Growing Netflix dataset, 249 system updates (simulated 7-day intervals)
  • Temporal User Profile
    • Adaptive web search based on user profile constructed without any effort from users [Sugiyama et al., WWW’04]
      • Personalized search based on browsing history
      • Given a query, search results are adapted based on the user’s information needs
      • User profile evolves over time, combining
        • Persistent user behavior, computed with a fixed time-decay window
        • Ephemeral aspects, captured from the current session (day) – see the sketch after this list
    • Personalizing search via automated analysis of interests and activities [Teevan et al., SIGIR ’05]
      • Wide range of implicit user activities over short and long time periods using a relevance feedback framework
        • Search queries, visited pages – “implicit” short-term relevance feedback
        • Documents and email – longer-term interests
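A rough sketch of the persistent/ephemeral split described above; the half-life decay and weights are illustrative assumptions, not the papers' tuned parameters:

```python
from collections import Counter

# Two-part temporal user profile: long-term interests decayed by age (days),
# plus current-session terms at full weight.

def build_profile(history, session_terms, now, half_life=30.0):
    profile = Counter()
    for terms, ts in history:                # persistent, decayed part
        decay = 0.5 ** ((now - ts) / half_life)
        for t in terms:
            profile[t] += decay
    for t in session_terms:                  # ephemeral part (today's session)
        profile[t] += 1.0
    return profile

history = [({"recsys", "cf"}, 0), ({"twitter"}, 55)]   # (terms, day observed)
print(build_profile(history, {"twitter", "streams"}, now=60))
```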
  • Time-based Evaluation in SRS
    • Google Reader news recommendation
    • Network effects of people recommendation
    • Increasing engagement of new users
  • Future Directions
    • Enhance user modeling approaches to consider time in a smarter way
      • e.g., someone you “friended” 5 years ago – a very good friend or a one-time encounter?
        • Measure interaction over time
      • e.g., tag usage as a signal to changes of interest over time
        • Distinguish transient vs. lasting interests
    • Learn from user feedback to compensate for decay in interest
      • Exploitation vs. exploration
    • Different recommendations for new users, old users, heavy users, etc.
    • Impose diversity and avoid recommending the same items again and again
    • More live user studies over time
  • References
    • Koren, Y. Collaborative filtering with temporal dynamics. Proc. KDD ’09, 447-456.
    • Lathia, N., Hailes, S., Capra, L., & Amatriain, X. Temporal diversity in recommender systems. Proc. SIGIR ’10, 210-217.
    • Sugiyama, K., Hatano, K., & Yoshikawa, M. Adaptive web search based on user profile constructed without any effort from users. Proc. WWW ’04, 675-684.
    • Teevan, J., Dumais, S. T., & Horvitz, E. Personalizing search via automated analysis of interests and activities. Proc. SIGIR ’05, 449-456.
  • Social Recommendation over Activity Streams
  • Activity streams and the Real-time Web
    • The emergence of Twitter and Facebook marked a new phase in the evolution of the Web, characterized by streams of updates and news (“new items”)
    • Millions of users sharing activities with friends and followers creating a “firehose” of data in real-time
    • Twitter consists of 140-character status messages (microblogs) and allows asymmetric following
    • The Facebook News Feed shows various activities from Facebook friends: status updates, friend additions, group joins, page likes, photo sharing and tagging, …
    • FriendFeed is an example of a social aggregator, including activities from 3rd-party sites: blogs, SNSs, social bookmarking, …
    • Activity stream format: http://activitystrea.ms
    • Twitter – internal, homogeneous (status updates), asymmetric network (follow)
    • Facebook – internal, heterogeneous, symmetric network (friends)
    • FriendFeed – external (aggregated), heterogeneous, asymmetric network (follow)
  • Utilizing the Stream for Recommendation
    • Stream data is intensive, fresh, and represents the “crowds”
    • Yet, it is also noisy and sparse in content and meta-data
    • Using the stream smartly can enhance current SRS
      • Especially for modeling recent interests and trends
    • Stream data requires special techniques
      • Frequency is high, and so is noise
      • Recency plays a special role
      • Content is sparse, with no tags (apart from #hashtags)
  • Enhancing News Recommendation
    • Using twitter to recommend real-time topical news [Phelan et al., RecSys ’09]
    • CB approach based on co-occurrence of popular terms within the user’s RSS and Twitter items
    • 3 Recommendation strategies
      • Most recent public tweets
      • Tweets from the user’s followees
      • Non-tweet, based on top 100 RSS terms
    • First indication that a personalized Twitter-based profile can substantially enhance recommendations – a term co-occurrence sketch follows
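A toy sketch of the co-occurrence idea (naive whitespace tokenization, hypothetical data): candidate RSS articles are scored by how strongly their terms overlap with the user's recent tweet terms:

```python
from collections import Counter

# Score each candidate article by summed frequency of its terms in the
# user's tweets, then rank. Real systems would use proper term weighting.

def score_article(article_text, tweet_terms):
    return sum(tweet_terms.get(w, 0) for w in set(article_text.lower().split()))

tweet_terms = Counter("recsys tutorial at www hyderabad recsys papers".split())

articles = {
    "a1": "New recsys papers announced at WWW",
    "a2": "Local weather report for the weekend",
}
ranked = sorted(articles, key=lambda a: score_article(articles[a], tweet_terms),
                reverse=True)
print(ranked)   # 'a1' first: it shares 'recsys', 'papers', 'www', 'at'
```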
  • Enhancing Movie Recommendations
    • On the real-time web as a source of recommendation knowledge [Garcia Esparza et al., RecSys ’10]
    • Blippr – a Twitter-like short textual movie review service, also supporting tags
    • Users are represented based on their blips (and tags)
    • Initial results show superiority over CF
    • Adding tags did not improve the performance in this case
  • Personalized Filtering of the Stream
    • Current stream filtering is done based on the network of “followees” or friends, which is insufficient
    • On the one hand, users often receive hundreds of news items per day
      • Far more than users have time to process
      • Need to filter the stream to those items that are indeed of interest
    • On the other hand, there may be other useful news outside the circle of friends or followees
    • Challenges:
      • Frequency – a huge flood of news items
      • Recency plays a big role – news items are interesting only when very fresh
      • News items are sparse in content and unstructured (sparse in metadata)
      • Cold start problem of new items
  • Recommending Twitter URLs
    • Short and tweet: experiments on recommending content from information streams [Chen et al., CHI ‘10]
    • Directly recommend content through URLs
    • Candidate selection
      • Followees and followees-of-followees
      • Popular tweets
    • Topic Relevance
      • Cosine similarity between user and URL
      • Both based on TF-IDF
      • User profile based on self tweets and followees’ tweets
      • URL based on tweets (independent of page content)
    • Social process
      • Based on “votes” by followees of followees (fof)
      • A vote increases as an fof is followed by more of the user’s followees
      • A vote increases as an fof tweets less frequently (see the combined-score sketch below)
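A minimal sketch of how the two signals might combine; the TF-IDF vectors and the social vote below are hypothetical placeholders rather than the paper's exact formulation:

```python
import math

# Final URL score = topical cosine(user profile, URL tweets) * social vote.

def cosine(u, v):
    dot = sum(u.get(t, 0.0) * w for t, w in v.items())
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

user_profile = {"recsys": 0.9, "www": 0.5}      # TF-IDF over self/followee tweets
url_vector = {"recsys": 0.7, "tutorial": 0.4}   # TF-IDF over tweets with the URL
social_vote = 2.5                               # accumulated fof votes

print(cosine(user_profile, url_vector) * social_vote)   # combined score
```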
  • Recommending Twitter URLs
    • Comparing 12 algorithms: (2 candidate) * (3 topic relevance) * (2 social process)
    • Field study, 44 subjects
    • At least 20 followees and 50 tweets
    • Rating the top-5 URLs from each algorithm
      • Interesting or not interesting
    • Each URL is shown with up to 3 tweets
      • From the user’s fof, if available
    • Social process beats topic relevance
    • FoF beats popularity
    • Self beats followee (due to taking top 5?)
  • Topic-based Twitter Filtering
    • Eddi: interactive topic-based browsing of social status streams [Bernstein et al., UIST ’10]
  • Topic-based Twitter Filtering
    • TweetTopic – topic assignment algorithm
    • “macbook died, but the Genius guys gave me a new 1!” → not just ‘macbook’ but also ‘apple’
    • Search engines as an external knowledge source
    • Transform a tweet to a search query
      • Part-of-speech analysis – identify noun phrases
      • Iterative backoff to get at least 10 results – remove the word with the fewest web occurrences (see the sketch below)
    • Retrieve search results through a search engine (e.g., Y!BOSS)
    • Identify popular terms in the results using TF-IDF (5/10 pages popularity threshold)
    • IR-style evaluation over 100 tweets
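A sketch of the backoff loop; `web_hits` is a stub standing in for a real search engine call (the paper used an external engine), and the counts are fabricated for illustration:

```python
# Iterative backoff: while the query returns too few results, drop the
# term with the fewest web occurrences.

def web_hits(query_terms):
    """Stub: pretend rare tokens like '1!' yield almost no results."""
    fake_counts = {"macbook": 500, "genius": 200, "guys": 900, "1!": 2}
    return min(fake_counts.get(t, 100) for t in query_terms) if query_terms else 0

def backoff_query(terms, min_results=10):
    terms = list(terms)
    while terms and web_hits(terms) < min_results:
        terms.remove(min(terms, key=lambda t: web_hits([t])))
    return terms

print(backoff_query(["macbook", "genius", "guys", "1!"]))
# ['macbook', 'genius', 'guys'] - the rare token is dropped first
```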
  • References
    • Bernstein, M. S., Suh, B., Hong, L., Chen, J., Kairam, S., & Chi, E. H. Eddi: interactive topic-based browsing of social status streams. Proc. UIST ’10, 303-312.
    • Chen, J., Nairn, R., Nelson, L., Bernstein, M., & Chi, E. Short and tweet: experiments on recommending content from information streams. Proc. CHI ’10, 1185-1194.
    • Garcia Esparza, S., O'Mahony, M. P., & Smyth, B. On the real-time web as a source of recommendation knowledge. Proc. RecSys ’10, 305-308.
    • Phelan, O., McCarthy, K., & Smyth, B. Using twitter to recommend real-time topical news. Proc. RecSys ’09, 385-388.
  • Evaluation Methods
  • Evaluation goals
    • An application designer who wishes to add a recommender system to her application must decide which algorithm best fits her goals
    • Most evaluation methods rank systems based on
      • Prediction power – the ability to accurately predict the user’s choices
      • Classification accuracy – the ability to differentiate good items from bad ones
      • Novelty and exploration – the ability to discover new items and explore diverse items
      • Other features
        • Preserving the user privacy
        • Fast response
        • Ease of interaction with the recommendation engine
  • Offline Evaluation
    • Based on a pre-collected data set of users choosing or rating items
      • Usually done by recording historical user data and hiding some of the interactions, in order to compare predicted ratings against the user’s actual ratings
    • No interaction with real users, thus allowing a wide range of candidate algorithms to be compared at low cost
    • Mostly useful for evaluating the prediction power of the system and for system tuning
    • For example: the Netflix challenge
  • The Netflix Challenge (Bennett & Lanning ’07)
    • Open competition for the best collaborative filtering algorithm to predict user ratings for films
      • based on previous movie ratings (1-5 scale)
    • Netflix provided a training data set of
      • 100,480,507 ratings
      • by 480,189 users
      • for 17,770 movies
    • Contestants were judged according to their predictions for 3 million withheld ratings
    • A $1 million prize was awarded to the team that achieved a 10 percent improvement over the accuracy of the Netflix movie recommendation system
  • Evaluating Prediction Accuracy – predicting user opinions/ratings over items
    • Root Mean Squared Error (RMSE) – the most popular metric for the accuracy of predicted ratings; over a test set $T$ of (user, item) pairs with predictions $p_{u,i}$ and actual ratings $r_{u,i}$:
      $\text{RMSE} = \sqrt{\frac{1}{|T|}\sum_{(u,i)\in T}(p_{u,i}-r_{u,i})^2}$
    • A popular alternative – Mean Absolute Error (MAE):
      $\text{MAE} = \frac{1}{|T|}\sum_{(u,i)\in T}\left|p_{u,i}-r_{u,i}\right|$
    • Both measure the average error of the system’s predictions (worked example below)
    • RMSE disproportionately penalizes large errors, compared to MAE
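A worked example of both metrics over hypothetical (predicted, actual) pairs:

```python
import math

pairs = [(3.5, 4), (2.0, 2), (4.8, 5), (1.0, 3)]     # (prediction, rating)

rmse = math.sqrt(sum((p - r) ** 2 for p, r in pairs) / len(pairs))
mae = sum(abs(p - r) for p, r in pairs) / len(pairs)

print(f"RMSE={rmse:.3f}  MAE={mae:.3f}")   # RMSE=1.036, MAE=0.675
```

The single large error (1.0 vs. 3) dominates RMSE but not MAE, illustrating the penalty difference.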
  • Measuring Ranking Accuracy
    • Evaluating how many recommended items were purchased by the user
      • Recall – how many of the acquired items were recommended
      • Precision – how many of the recommended items were acquired
        • Prec@N – how many of the top-N recommended items were acquired
      • A trade-off is expected
        • Long recommendation lists typically improve recall while reducing precision
    • For a ranked list of recommended items, use NDCG (Normalized Discounted Cumulative Gain) – see the sketch below:
      $\text{DCG} = \sum_{i=1}^{N}\frac{g(u,i)}{\log_2(i+1)}, \qquad \text{NDCG} = \frac{\text{DCG}}{\text{DCG}^*}$
        • g(u,i) – the utility of the item at rank i to user u (e.g., the rating)
        • DCG* – the DCG of the optimal ranking
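A minimal sketch computing all three measures for one user's ranked list, assuming binary utility (g = 1 if the item was acquired, 0 otherwise):

```python
import math

def metrics_at_n(ranked, acquired, n):
    hits = sum(1 for item in ranked[:n] if item in acquired)
    precision, recall = hits / n, hits / len(acquired)
    dcg = sum(1.0 / math.log2(rank + 2)                 # ranks are 0-based
              for rank, item in enumerate(ranked[:n]) if item in acquired)
    ideal = sum(1.0 / math.log2(rank + 2)               # DCG*: optimal order
                for rank in range(min(n, len(acquired))))
    return precision, recall, dcg / ideal if ideal else 0.0

ranked = ["i1", "i2", "i3", "i4", "i5"]
acquired = {"i2", "i5", "i9"}
print(metrics_at_n(ranked, acquired, n=5))   # (0.4, 0.667, ~0.478)
```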
  • Evaluating top-N recommendation (Cremonesi et al., RecSys ’10)
    • For each item i rated by user u :
      • (i) Randomly select 1000 additional items unrated by user u.
        • We may assume that most of them will not be of interest to user u.
      • (ii) Predict the ratings for the test item i and for the additional 1000 items.
      • (iii) Form a ranked list by ordering all the 1001 items according to their predicted ratings.
        • The best result corresponds to the case where item i precedes all the random items
      • (iv) Form a top-N recommendation list by picking the N top ranked items from the list.
        • If rank(i) ≤ N we have a hit (i.e., the test item i is recommended to the user).
        • Otherwise we have a miss
    • Recall/Precision are defined by averaging over the set T of all rated test items (protocol sketch below)
      • recall@N = #hits / |T|
      • precision@N = #hits / (|T| * N)
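A sketch of the protocol with a toy predictor; `predict` stands in for any trained rating model, and the item universe is synthetic:

```python
import random

def recall_precision_at_n(test_items, all_items, rated, predict, u,
                          n=10, n_random=1000):
    hits = 0
    for i in test_items:
        decoys = random.sample([j for j in all_items
                                if j != i and j not in rated], n_random)
        ranked = sorted(decoys + [i], key=lambda j: predict(u, j), reverse=True)
        if i in ranked[:n]:                      # test item made the top-N
            hits += 1
    return hits / len(test_items), hits / (len(test_items) * n)

all_items = list(range(2000))
rated = set(range(50))                           # items the user already rated
test_items = [0, 1, 2]                           # held-out rated items
predict = lambda u, j: 1.0 / (1 + j)             # toy model favoring small ids
print(recall_precision_at_n(test_items, all_items, rated, predict, u="alice"))
```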
  • Off-line evaluation of a tag recommendation system using social bookmarks (Carmel et al., CIKM ’09)
    • Given a dataset of bookmarks {(u,d,t)}
    • For each (u,d) pair:
      • Hide all bookmarks related to this pair from the bookmark data (all (u,d,t’) triplets)
      • Recommend tags for this pair
      • Score the recommendation lists by Mean Average Precision (MAP), given the actual tags used by u to bookmark d – see the sketch below
    (Slide illustration: example (d,u) pairs with recommended tag lists t1…t10, the user’s actual tags highlighted)
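A sketch of average precision for one (u, d) pair; MAP is the mean of this value over all pairs:

```python
def average_precision(recommended, actual):
    hits, score = 0, 0.0
    for rank, tag in enumerate(recommended, start=1):
        if tag in actual:
            hits += 1
            score += hits / rank          # precision at each hit position
    return score / len(actual) if actual else 0.0

recommended = ["java", "search", "ibm", "lucene"]   # hypothetical tag list
actual = {"search", "lucene"}                       # tags u used on d
print(average_precision(recommended, actual))       # (1/2 + 2/4) / 2 = 0.5
```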
  • User Studies
    • A user study is conducted by recruiting a set of users, and asking them to perform several interaction tasks with the recommendation system.
      • While the subjects perform the tasks, we observe and record their behavior
        • the portion of the task completed
        • the accuracy of the task results
        • the time taken to perform the task
        • general impression
        • etc.
  • Questions to measure subjective constructs
  • User studies – Pros and Cons
    • Enable online test of the user behavior when interacting with the recommendation system
    • Are very expensive to conduct
      • typically restricted to a small set of subjects and a relatively small set of tasks, and cannot test all possible scenarios
      • The test subjects must represent the population of users of the real system
        • as closely as possible
  • Online Evaluation
    • Evaluate the system by real users that perform real tasks
      • Provides the strongest evidence for the true value of the system to its users
      • The real effect of the recommendation system depends on a variety of user-dependent factors that change dynamically
        • The user’s current intent
        • The user’s current context
    • User feedback is collected by observing reactions to the system’s recommendations
      • Systems are evaluated according to the acquired vs. non-acquired ratio
    • Such a live user experiment may be controlled
      • Randomly assign users to different conditions
        • e.g., test a new version of the system on a subset of users
      • A/B testing: split users into test groups and measure the effectiveness of different conditions/algorithms on each group (bucketing sketch below)
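A common way to implement the split is deterministic hashing of user ids, so each user consistently lands in the same condition across sessions; the experiment name and arms here are placeholders:

```python
import hashlib

def assign_bucket(user_id, experiment="rec_algo_test",
                  arms=("control", "treatment")):
    # Hash (experiment, user) so assignment is stable across sessions
    h = hashlib.md5(f"{experiment}:{user_id}".encode()).hexdigest()
    return arms[int(h, 16) % len(arms)]

for uid in ["u1", "u2", "u3"]:
    print(uid, assign_bucket(uid))
```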
    • Online evaluation studies are done on a regular basis by commercial recommender systems
  • References
    • Bennett, J., & Lanning, S. The Netflix Prize. KDD Cup 2007.
    • Herlocker, J., Konstan, J., Terveen, L., & Riedl, J. Evaluating collaborative filtering recommender systems. TOIS 2004.
    • Carmel, D., et al. Personalized social search based on the user's social network. Proc. CIKM ’09.
    • Chen et al. User Evaluation Framework of Recommender Systems. SRS 2010.
    • Cremonesi, P., Koren, Y., & Turrin, R. Performance of recommender algorithms on top-n recommendation tasks. Proc. RecSys ’10.
  • Open Issues and Research Challenges
  • Open Issues and Research Challenges
    • Privacy
      • Especially with explanations
    • Aggregation outside the firewall
      • Dealing with the cold start of new users
    • Beyond accuracy
      • Diversity
      • Serendipity, “surprise effect”
    • Crowdsourcing evaluation approach for SRS
    • Inspection of recommendation over time
      • Learning from user feedback
  • Questions?
    • Ido Guy [email_address]
    • David Carmel [email_address]