

- Twitter分散クロールの野望 by Yoji Shidara 4957 views
- Future Directions of Fairness-Aware... by Toshihiro Kamishima 2126 views
- Correcting Popularity Bias by Enhan... by Toshihiro Kamishima 1528 views
- WSDM2016勉強会 資料 by Toshihiro Kamishima 821 views
- OpenOpt の線形計画で圧縮センシング by Toshihiro Kamishima 18044 views
- 20141220 tokyowebmining state_space... by Kenny ISHIMURA 4187 views


Published at: Workshop on Human Decision Making in Recommender Systems, in conjunction with RecSys 2012

Article @ Official Site: http://ceur-ws.org/Vol-893/

Article @ Personal Site: http://www.kamishima.net/archive/2012-ws-recsys-print.pdf

Handnote : http://www.kamishima.net/archive/2012-ws-recsys-HN.pdf

Program codes : http://www.kamishima.net/inrs

Workshop Homepage: http://recex.ist.tugraz.at/RecSysWorkshop2012

Abstract:

This paper proposes an algorithm that makes recommendations so as to enhance the neutrality toward a viewpoint specified by the user. Such an algorithm is useful for avoiding decisions based on biased information, a problem known as the filter bubble: the biasing of social decisions by personalization technology. Because a recommendation that is neutral with respect to every kind of information is no longer a recommendation, we assume that the user specifies the viewpoint toward which neutrality should be enforced. Given such a target viewpoint, we implement an information neutral recommendation algorithm by introducing a penalty term that enforces statistical independence between the target viewpoint and the preference score. We empirically show that our algorithm enhances the independence toward the specified viewpoint, and then demonstrate how the sets of recommended items change.

Published in:
Data & Analytics


- 1. Enhancement of the Neutrality in Recommendation. Toshihiro Kamishima*, Shotaro Akaho*, Hideki Asoh*, and Jun Sakuma†. *National Institute of Advanced Industrial Science and Technology (AIST), Japan; †University of Tsukuba, Japan, and Japan Science and Technology Agency. Workshop on Human Decision Making in Recommender Systems, in conjunction with RecSys 2012, Dublin, Ireland, Sep. 9, 2012
- 2. Overview. Decision making and neutrality in recommendation: decisions based on biased information bring undesirable results, so providing neutral information is important in recommendation. Information Neutral Recommender System: absolutely neutral recommendation is intrinsically infeasible, because a recommendation is always biased in the sense that it is arranged for a specific user; instead, the system makes recommendations so as to enhance the neutrality from a viewpoint specified by the user.
- 3. Outline. Introduction; Importance of the Neutrality and the Filter Bubble (influence of biased recommendation, the filter bubble problem, discussion in the RecSys 2011 panel); Neutrality in Recommendation (ugly duckling theorem, information neutral recommendation); Information Neutral Recommender System (latent factor model, viewpoint variable, neutrality function); Experiments; Conclusion
- 4. Importance of the Neutrality and the Filter Bubble
- 5. Biased Recommendation: excluding a good candidate from a set of options, or rating relatively inferior options higher, leads to inappropriate decisions. The filter bubble problem: Pariser posed a concern that personalization technologies narrow and bias the topics of information provided to people. http://www.thefilterbubble.com/
- 6. Filter Bubble [TED Talk by Eli Pariser]. Friend recommendation list in Facebook: conservative people were eliminated from Pariser's recommendation list. A summary of Pariser's claim: users lose opportunities to obtain information about a wide variety of topics, and each user obtains too-personalized information, which makes it difficult to build consensus in our society.
- 7. RecSys 2011 Panel on the Filter Bubble [RecSys 2011 Panel on the Filter Bubble]. Are there "filter bubbles?" To what degree is personalized filtering a problem? What should we as a community do to address the filter bubble issue? http://acmrecsys.wordpress.com/2011/10/25/panel-on-the-filter-bubble/ Intrinsic trade-off: providing a diversity of topics vs. focusing on users' interests; to select something is not to select other things.
- 8. RecSys 2011 Panel on the Filter Bubble [RecSys 2011 Panel on the Filter Bubble]. Intrinsic trade-off: providing a diversity of topics vs. focusing on users' interests; to select something is not to select other things. Personalized filtering is nevertheless a necessity: it is a very effective tool for finding interesting things in the flood of information.
- 9. RecSys 2011 Panel on the Filter Bubble [RecSys 2011 Panel on the Filter Bubble]. Recipes for alleviating the undesirable influence of personalized filtering: capture users' long-term interests; consider preference over an item portfolio, not individual items; follow the changes in users' preference patterns. Our approach: give users control over a perspective, to see the world through other eyes.
- 10. Neutrality in Recommendation
- 11. Ugly Duckling Theorem [Watanabe 69], a fundamental property of classification: if the similarity between a pair of ducklings is measured by the number of potential binary classification rules that classify both of them into the same positive class, then the similarity between an ugly duckling and a normal duckling equals the similarity between any pair of normal ducklings. That is, an ugly duckling and a normal duckling are indistinguishable. Extremely unintuitive! Why? We classify them by methods like SVMs every day...
- 12. Ugly Duckling Theorem [Watanabe 69]. Why is it so unintuitive? Because only the number of classification rules is counted, while the properties of the rules are completely ignored: all features are treated equally (e.g., the weight of a body-color feature equals that of a body-length feature), and the complexity of the rules (e.g., the number of features included in a rule) is ignored. When classifying, one must emphasize some features of the objects and ignore the others.
- 13. Information Neutral Recommendation. By the ugly duckling theorem, some aspects must be stressed when classifying objects, so it is infeasible to make a recommendation that is neutral from every viewpoint. Information neutral recommendation instead enhances the neutrality from a viewpoint specified by the user; other viewpoints are not considered. Ex.: a recommender system enhances the neutrality in terms of whether a friend is conservative or progressive, but it is allowed to make biased recommendations in terms of other viewpoints, for example, the birthplace or age of friends.
- 14. Information Neutral Recommender System
- 15. Information Neutral Recommender System. v: viewpoint variable, a binary variable representing a viewpoint specified by a user. The system stays neutral from the specified viewpoint by maximizing the statistical independence between a preference score and the viewpoint variable, while keeping high prediction accuracy by minimizing an empirical error plus an L2 regularization term. It is an information-neutral version of a latent factor model.
- 16. Latent Factor Model [Koren 08], a basic matrix decomposition model. Task: predict the preference score of an item y rated by a user x: s^(x, y) = μ + b_x + c_y + p_x q_y, where μ is a global bias, b_x a user-dependent bias, c_y an item-dependent bias, and p_x q_y the cross effect of users and items. For a given training data set, model parameters are learned by minimizing the squared loss function with an L2 regularizer.
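The base latent factor model's prediction rule can be sketched as follows; all names (mu, b, c, P, Q, the sizes, and the random initialization) are illustrative assumptions, not the paper's actual code.

```python
import numpy as np

rng = np.random.default_rng(0)
n_users, n_items, K = 5, 4, 1        # K: number of latent factors

mu = 3.5                              # global bias
b = rng.normal(0, 0.1, n_users)       # user-dependent biases b_x
c = rng.normal(0, 0.1, n_items)       # item-dependent biases c_y
P = rng.normal(0, 0.1, (n_users, K))  # user latent factors p_x
Q = rng.normal(0, 0.1, (n_items, K))  # item latent factors q_y

def predict(x, y):
    """s^(x, y) = mu + b_x + c_y + p_x . q_y"""
    return mu + b[x] + c[y] + P[x] @ Q[y]
```

In training, these parameters would be fitted by minimizing the squared loss with an L2 regularizer, as the slide states.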
- 17. Information Neutral Latent Factor Model: two modifications of a latent factor model. (1) Adjust scores according to the state of a viewpoint by incorporating dependency on the viewpoint variable: s^(x, y, v) = μ^(v) + b_x^(v) + c_y^(v) + p_x^(v) q_y^(v). Multiple latent factor models are built separately, one for each value of the viewpoint variable; when predicting scores, a model is selected according to the value of the viewpoint variable. (2) Enhance the neutrality of a score from the viewpoint by adding a neutrality function as a constraint term.
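The viewpoint-dependent prediction can be sketched as one parameter set per value of the binary viewpoint variable v; all names and the random initialization here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
n_users, n_items, K = 5, 4, 1

# One separate latent factor model for each viewpoint value v in {0, 1}.
models = {
    v: dict(mu=3.5,
            b=rng.normal(0, 0.1, n_users),        # user biases b_x^(v)
            c=rng.normal(0, 0.1, n_items),        # item biases c_y^(v)
            P=rng.normal(0, 0.1, (n_users, K)),   # user factors p_x^(v)
            Q=rng.normal(0, 0.1, (n_items, K)))   # item factors q_y^(v)
    for v in (0, 1)
}

def predict(x, y, v):
    """Select the model matching viewpoint value v, then score as in the base model."""
    m = models[v]
    return m["mu"] + m["b"][x] + m["c"][y] + m["P"][x] @ m["Q"][y]
```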
- 18. Information Neutral Latent Factor Model: enhancing the neutrality of a score from a viewpoint. A neutrality function, neutral(s^, v), quantifies the degree of neutrality: the larger its output, the higher the degree of neutrality of a prediction score from the viewpoint variable. The objective function is Σ_D (s_i − s^(x_i, y_i, v_i))² − η neutral(s^, v) + (λ/2)‖Θ‖²: a squared loss function, a neutrality function weighted by the neutrality parameter η (which balances the neutrality against the accuracy), and an L2 regularizer. Parameters are learned by minimizing this objective function.
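The objective on this slide can be sketched as below; the function and argument names are illustrative, and the neutrality term is passed in precomputed rather than derived from a concrete neutrality function.

```python
import numpy as np

def objective(s_true, s_pred, neutrality, params, eta, lam):
    """sum_D (s_i - s^_i)^2 - eta * neutral + (lam / 2) * ||Theta||^2"""
    sq_loss = np.sum((np.asarray(s_true, float) - np.asarray(s_pred, float)) ** 2)
    l2 = 0.5 * lam * sum(np.sum(np.asarray(p) ** 2) for p in params)
    return sq_loss - eta * neutrality + l2
```

Because larger neutrality is better, the neutrality term enters with a minus sign in a minimization objective.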
- 19. Mutual Information as a Neutrality Function. Neutrality means that scores are not influenced by the viewpoint variable, so the neutrality function is the negative mutual information: we treat the neutrality as statistical independence, quantified by the mutual information between a predicted score and the viewpoint variable. Computing the mutual information requires Pr[s^|v]; this distribution is modeled by a histogram model. Because we failed to derive an analytical form of the gradients of the objective function, the objective is minimized by a Powell method, which needs no gradients.
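A histogram-based neutrality function along these lines can be sketched as follows: estimate the mutual information between binned predicted scores and the binary viewpoint variable from a joint histogram, and return its negative (larger output = more neutral). The bin count and binning scheme are assumptions of this sketch.

```python
import numpy as np

def neutrality(scores, v, n_bins=5):
    """Return -I(s^; v), estimated from a joint histogram of scores and v."""
    scores, v = np.asarray(scores, float), np.asarray(v)
    # Interior bin edges over the score range; digitize maps scores to 0..n_bins-1.
    edges = np.linspace(scores.min(), scores.max(), n_bins + 1)[1:-1]
    joint = np.zeros((n_bins, 2))
    for b, val in zip(np.digitize(scores, edges), v):
        joint[b, val] += 1
    joint /= joint.sum()                      # empirical joint distribution
    ps, pv = joint.sum(axis=1), joint.sum(axis=0)  # marginals
    mi = sum(joint[i, j] * np.log(joint[i, j] / (ps[i] * pv[j]))
             for i in range(n_bins) for j in range(2) if joint[i, j] > 0)
    return -mi  # negative mutual information: larger means more neutral
```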
- 20. Experiments 20
- 21. Experimental Conditions. General conditions: 9,409 user-item pairs are sampled from the Movielens 100k data set (a Powell optimizer is computationally inefficient and cannot be applied to a large data set); the number of latent factors K = 1; regularization parameter λ = 0.01; evaluation measures are calculated by five-fold cross validation. Evaluation measures: MAE (mean absolute error) for prediction accuracy; NMI (normalized mutual information) for the neutrality of a preference score from the specified viewpoint (the mutual information between the predicted scores and the values of the viewpoint variable, normalized into the range [0, 1]).
- 22. Viewpoint Variables. The values of viewpoint variables are determined depending on a user and/or an item. "Year" viewpoint: whether a movie's release year is newer than 1990 or not; older movies tend to be rated higher, perhaps because only masterpieces have survived [Koren 2009]. "Gender" viewpoint: whether a user is male or female; movie ratings would depend on the user's gender.
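The two viewpoint variables could be encoded as below; the 0/1 assignment and the "F"/"M" field values are assumptions of this sketch, not necessarily the dataset's exact coding.

```python
def year_viewpoint(release_year):
    """'Year' viewpoint: 1 if the movie's release year is newer than 1990, else 0."""
    return 1 if release_year > 1990 else 0

def gender_viewpoint(gender):
    """'Gender' viewpoint: 1 if the user is female ('F'), else 0."""
    return 1 if gender == "F" else 0
```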
- 23. Experimental Results. [Plots: prediction accuracy (MAE) and degree of neutrality (NMI) against the neutrality parameter η, from 0 to 100, for the Year and Gender viewpoints; a larger η enhances the neutrality more.] As the neutrality parameter η increases, prediction accuracies worsen slightly, but the neutralities improve drastically: the INRS successfully improved the neutrality without seriously sacrificing the prediction accuracy.
- 24. Conclusion. Our contributions: we formulated the neutrality in recommendation based on the ugly duckling theorem; we developed a recommender system that can enhance the neutrality in recommendation; and our experimental results show that the neutrality is successfully enhanced without seriously sacrificing the prediction accuracy. Future work: our current formulation scales poorly, so we seek a formulation of the objective function whose gradients can be derived analytically; neutrality functions other than mutual information, such as the kurtosis used in ICA; and an information-neutral version of generative recommendation models, such as a pLSA / LDA model.
- 25. Program codes and data sets: http://www.kamishima.net/inrs Acknowledgements: we would like to thank the GroupLens research lab for providing the data set. This work is supported by MEXT/JSPS KAKENHI Grant Numbers 16700157, 21500154, 22500142, 23240043, and 24500194, and JST PRESTO 09152492.
