Inspectability and Control in Social Recommenders
 

  • 831 views

My slides for the presentation of the full


Inspectability and Control in Social Recommenders: Presentation Transcript

    • INSPECTABILITY AND CONTROL IN SOCIAL RECOMMENDERS
    • ALEX BOSTANDJIEV, ALFRED KOBSA, JOHN O’DONOVAN, BART KNIJNENBURG
    • WHY SHOULD WE USE SOCIAL RECOMMENDERS?
    • WHY DO DJ’S USE VINYL?
    • BETTER INSPECTABILITY
    • INSPECTABILITY IN NORMAL RECOMMENDERS
    • BETTER INSPECTABILITY IN SOCIAL RECOMMENDERS
    • THINGS I LIKE → ? (MAGIC) → RECOMMENDATIONS VS. THINGS I LIKE → FRIENDS → RECOMMENDATIONS
    • MORE CONTROL
    • CONTROL IN NORMAL RECOMMENDERS
    • MORE CONTROL IN SOCIAL RECOMMENDERS
    • RECOMMENDATIONS VS. + RECOMMENDATIONS
    • INTUITIVE INTERFACE
    • [Screenshot slide: critiquing-based recommender interfaces, including the Dynamic Critiquing interface with system-suggested compound critiques (McCarthy et al. 2005), the preference-based organization (Pref-ORG) interface, and user-initiated MAUT-based compound critiquing (Chen and Pu 2007a, 2007b, 2010; Zhang and Pu 2006). Annotations: MORE CONTROL & INSPECTABILITY? ... MORE COMPLEXITY!]
    • THE POWER OF VISUALIZATION: SIMPLE CONTROL, SIMPLE INSPECTABILITY
    • HYPOTHESIS: BETTER INSPECTABILITY AND MORE CONTROL INCREASE SATISFACTION
    • ONLINE USER EXPERIMENT
    • SYSTEM: Modified TasteWeights system - Facebook friends as recommenders - Music recommendations (based on “likes”) - Split up control + inspectability
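    A system like this scores candidate items from user-weighted friend connections. The sketch below illustrates friend-weighted scoring in general; the function and variable names (`recommend`, `friend_weights`) are invented for illustration and this is not the actual TasteWeights algorithm.

    ```python
    # Illustrative friend-weighted scoring (not the actual TasteWeights
    # implementation): an item's score is the sum, over friends who "like"
    # it, of that friend's user-assigned weight.
    from collections import defaultdict

    def recommend(friend_weights, friend_likes, own_likes, top_n=10):
        """friend_weights: {friend: weight}; friend_likes: {friend: set of items}."""
        scores = defaultdict(float)
        for friend, items in friend_likes.items():
            w = friend_weights.get(friend, 1.0)
            for item in items:
                if item not in own_likes:  # only recommend items the user doesn't already like
                    scores[item] += w
        # Highest-scoring items first, capped at top_n (the study capped lists at 10)
        return sorted(scores, key=scores.get, reverse=True)[:top_n]

    likes = {"ann": {"Radiohead", "Muse"}, "bob": {"Muse", "ABBA"}}
    print(recommend({"ann": 2.0, "bob": 0.5}, likes, {"Radiohead"}))
    # → ['Muse', 'ABBA']
    ```

    Raising a friend's weight (the "weigh friends" control condition) directly boosts every item that friend likes, which is what makes the graph view inspectable: each recommendation can be traced back to the weighted friends that produced it.
    
    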
    • PARTICIPANTS: 267 participants - Mechanical Turk + Craigslist - At least 5 music “likes” and overlap with at least 5 friends (at least 10 recommendations; lists limited to 10 to avoid cognitive overload) - Demographics similar to Facebook user population
    • PROCEDURE STEP 1: Log in to Facebook - System collects your music “likes” - System collects your friends’ music likes
    • PROCEDURE STEP 2: Control - 3 conditions, between subjects: NOTHING VS. WEIGH ITEMS VS. WEIGH FRIENDS
    • PROCEDURE STEP 3: Inspection - 2 conditions, between subjects: LIST ONLY VS. FULL GRAPH
    • 6 CONDITIONS (3 control conditions × 2 inspection conditions)
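    The six cells come from fully crossing the two between-subjects factors. A minimal sketch of the 3×2 layout follows; the round-robin assignment scheme is purely illustrative, not the randomization used in the study.

    ```python
    # Sketch of the 3x2 between-subjects design: control (none / weigh items /
    # weigh friends) crossed with inspectability (list only / full graph).
    from itertools import product

    CONTROL = ["no control", "weigh items", "weigh friends"]
    INSPECT = ["list only", "full graph"]
    CONDITIONS = list(product(CONTROL, INSPECT))  # 6 (control, inspect) pairs

    def assign(participant_id):
        # Illustrative deterministic round-robin assignment by participant id.
        return CONDITIONS[participant_id % len(CONDITIONS)]

    print(len(CONDITIONS))      # → 6
    print(assign(0))            # → ('no control', 'list only')
    ```
    
    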
    • PROCEDURE STEP 4: Evaluation - For each recommendation: Do you know this band/artist? How do you rate this band/artist? (link to LastFM page for reference)
    • PROCEDURE STEP 5: Questionnaires - understandability - perceived control - perceived recommendation quality - system satisfaction - music expertise - trusting propensity - familiarity with recommender systems
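    Multi-item scales like the ones on the next slides mix positively worded and reverse-coded (starred) items. A minimal sketch of scale scoring, assuming a 7-point Likert scale and simple averaging; the study itself fits these items in a structural model rather than averaging raw scores.

    ```python
    # Sketch of questionnaire scale scoring with reverse-coded items
    # (illustrative; starred items are assumed to be reverse-coded).
    def scale_score(responses, reverse, low=1, high=7):
        """responses: Likert answers; reverse: parallel list of bools
        marking reverse-coded items. Returns the mean after recoding."""
        coded = [(low + high - r) if rev else r
                 for r, rev in zip(responses, reverse)]
        return sum(coded) / len(coded)

    # E.g. an understandability-style scale: two positive items and one
    # reverse-coded item ("I am unsure how ... were generated*").
    print(scale_score([6, 7, 2], [False, False, True]))
    ```

    Here the answer 2 on the reverse-coded item recodes to 6 (= 1 + 7 − 2), so low disagreement with a negative statement counts as high understandability.
    
    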
    • RESULTS
    • SUBJECTIVE: UNDERSTANDABILITY, 3 items (* = reverse-coded): - The recommendation process is clear to me - I understand how TasteWeights came up with the recommendations - I am unsure how the recommendations were generated*
    • [Chart: subjective understandability by inspectability (full graph vs. list only) and control condition]
    • SUBJECTIVE: PERCEIVED CONTROL, 4 items [chart: full graph vs. list only]: - I had limited control over the way TasteWeights made recommendations* - TasteWeights restricted me in my choice of music* - Compared to how I normally get recommendations, TasteWeights was very limited* - I would like to have more control over the recommendations*
    • SUBJECTIVE: PERCEIVED RECOMMENDATION QUALITY, 6 items [chart: full graph vs. list only]: - I liked the artists/bands recommended by the TasteWeights system - The recommended artists/bands fitted my preference - The recommended artists/bands were well chosen - The recommended artists/bands were relevant - TasteWeights recommended too many bad artists/bands* - I didn’t like any of the recommended artists/bands*
    • SUBJECTIVE: SYSTEM SATISFACTION, 7 items [chart: full graph vs. list only]: - I would recommend TasteWeights to others - TasteWeights is useless* - TasteWeights makes me more aware of my choice options - I can make better music choices with TasteWeights - I can find better music using TasteWeights - Using TasteWeights is a pleasant experience - TasteWeights has no real benefit for me*
    • BEHAVIOR [chart: full graph vs. list only]: Time (min:sec) taken in the inspection phase (step 3) - Including LastFM visits - Not including the control phase (step 2) - Not including the evaluation phase (step 4)
    • BEHAVIOR [chart: full graph vs. list only]: Number of artists the participant claims she already knows. Why higher in the full graph condition? - Link to friends reminds the user how she knows the artist - Social conformance
    • BEHAVIOR [chart: full graph vs. list only]: Average rating of the 10 recommendations - Lower when rating items than when rating friends - Slightly higher in full graph condition
    • STRUCTURAL MODEL: Objective System Aspects (OSA) → Subjective System Aspects (SSA) → User Experience (EXP)
    • [Path model diagram, built up over three slides: the Control manipulation (item/friend weighting vs. no control) and the Inspectability manipulation (full graph vs. list only) (OSA) increase Understandability (R² = .153) and Perceived control (R² = .311) (SSA); these in turn increase Perceived recommendation quality and Satisfaction with the system (EXP; R² values .696 and .512). Later builds add Interaction measures (inspection time in minutes, number of known recommendations, average rating; R² values .092, .508, .044) and Personal Characteristics (familiarity with recommenders, music expertise, trusting propensity). The figure shows signed path coefficients with standard errors and significance levels, e.g. understandability → perceived control 0.955 (0.148)***, perceived quality → satisfaction 0.770 (0.094)***.]
    • CONCLUSION
    • CONCLUSION: Social recommenders - Give users inspectability and control - Can be done with a simple user interface! Inspectability: - Graph increases understandability and perceived control - Improves recognition of known recommendations Control: - Item control: higher novelty (fewer known recs) - Friend control: higher accuracy
    • FUTURE WORK: Inspectability and control work - Separately - What about together?
    • SOCIAL RECOMMENDERS LET YOU BE A RECOMMENDATION DJ
    • THANK YOU! WWW.USABART.NL BART.K@UCI.EDU @USABART
    • [Backup slide: full graph and list only interface screenshots]