Rush: Repeated Recommendations on Mobile Devices - IUI'10

Conference presentation in Intelligent User Interfaces - IUI 2010:
Rush is an interaction technique for mobile touchscreen devices. It combines recommendation with interaction and allows users to create personalized collections from item sets (e.g., music playlists).


  1. Rush: Repeated Recommendations on Mobile Devices. Dominikus Baur, Sebastian Boring, Andreas Butz. Media Informatics Group, University of Munich, Germany
  2. Recommendation + Interaction
  3. • Has only been applied to text entry • No study of interface parameters such as orientation or direction
  4. Navigation
  5. Recommendations
  6. Selection
  7. Two studies: I. Interface Parameters, II. Suitability & Recommendation Strategies
  8. A, B, C, D.... I.
  9. A, B, C, D.... ? I.
  10. I.
  11. II.
  12. ? I.
  13. [iPhone shown in horizontal and vertical device orientation] I.
  14. [rush interface in horizontal and vertical orientation] I.
  15. I.
  16. I.
  17. I.
  18. • Main goal: Find the optimal parameters for orientation and direction. For more details, see the paper. I.
  19. • 12 participants (2 left-handed) • 16 timed runs (+ 16 trial runs) I.
  20. [Chart: Task Time (0-40s) by Device Orientation, Horizontal vs. Vertical] I.
  21. • (H2) Dominant hand > non-dominant hand • Only right-handers: no significant difference! (26.39s vs. 26.34s) I.
  22. II.
  23. • 3900-song set • sufficiently popular (played at least 500,000 times on last.fm) • at least 10 similar songs for each item (data from last.fm) II.
  24. • Task: Create a playlist with ten songs based on the same seed song for a social event. II.
  25. II.
  26. [Similarity list for the seed song "In the Aeroplane over the Sea" (Neutral Milk Hotel): Two-headed Boy (Neutral Milk Hotel) 62.25, Communist Daughter (Neutral Milk Hotel) 62.11, The Crane Wife 3 (The Decemberists) 3.85, O Valencia! (The Decemberists) 3.68, Casimir Pulaski Day (Sufjan Stevens) 3.59, Yankee Bayonet (I Will Be Home Then) (The Decemberists) 3.53, Chicago (Sufjan Stevens) 3.36, Shine a Light (Wolf Parade) 3.28, Neighborhood #1 (Tunnels) (Arcade Fire) 3.24, Say Yes (Elliott Smith) 3.14, Kissing the Lipless (The Shins) 3.10, Fake Empire (The National) 3.10, White Winter Hymnal (Fleet Foxes) 2.92] II.
  27. [Same list; the Top-5 strategy picks the five most similar songs] Top-5 II.
  28. [Same list; the Random strategy picks five random songs from the list] Random II.
  29. [Same list; the Hybrid strategy picks the top song, two from the middle, and the two bottom songs] Hybrid II.
  30. • 12 participants (2 left-handed, 4 from the previous study) • 3 rush conditions (Top-5, Random, Hybrid), 3 automatic (Top-5, Random, Hybrid), 1 manual II.
  31. • Main goals: See if rush works in comparison to fully manual and automatic approaches. Compare the three simple recommendation approaches. For more details, see the paper. II.
  32. [Chart: Task Time (0-400s) by Tool (Automatic, Rush, Manual) and strategy (Top-5, Random, Hybrid)] II.
  33. • Quality of playlists is hard to evaluate (very personal) • Online study with 10 participants II.
  34. Tool ranking (Study): I. Manual (6+); II. Rush (Hybrid) (5+, 1-); III. Rush (Random) (4+, 2-); IV. Automatic (Random) (3+, 3-); V. Rush (Top-5) (2+, 4-); VI. Automatic (Top-5) (1+, 5-); VII. Automatic (Hybrid) (6-) II.
  35. Tool ranking (Online): I. Rush (Hybrid) (4w, 2u); I. Rush (Random) (4w, 2u); I. Automatic (Hybrid) (4w, 2u); IV. Automatic (Random) (3+, 3-); V. Manual (2+, 4-); VI. Rush (Top-5) (1+, 5-); VII. Automatic (Top-5) (6-) II.
  36. Tool rankings side by side, Study vs. Online (with rank change): 1. Manual | Rush (Hybrid) (+1); 2. Rush (Hybrid) | Rush (Random) (+2); 3. Rush (Random) | Automatic (Hybrid) (+6); 4. Automatic (Random) | Automatic (Random) (+0); 5. Rush (Top-5) | Manual (-4); 6. Automatic (Top-5) | Rush (Top-5) (-1); 7. Automatic (Hybrid) | Automatic (Top-5) (-1) II.
  37. • Results: Rush is faster than a manual approach. Hybrid and Random are comparable. Top-5 is too restrictive. II.
  38. • Interaction technique for selecting multiple items on mobile touchscreen devices • Two user studies: interface parameters, comparison to other approaches
  39. • Future work • Use more sophisticated recommendation strategies • Examine other ways for navigation (flicking, etc.)
  40. Thank you! Image credits: Flickr.com: Daveybot, surroundsound5000, aphasiafilms, widdowquinn, Qfamily, neyugnd, lism, eflon, Andy Ciordia, @jackeliiine, jcolman, anothersamchan, xmacex, tomer.gabel, James Cridland, jm3, hortulus, inconstanti, natpie, Anirudh Koul, Kaptain Kobold; Wikimedia Commons: ProfDEH. dominikus.baur@ifi.lmu.de

Editor's Notes

  • Hello!
  • creating personalized collections of items is hard
    therefore: most recommender systems work with only single items, and most research in this direction focuses on single items

    Hansen et al. identified in 2007 some of the problems in recommending multiple items...
  • 1. Individual item values: each single item has to work for the user
  • 2. Co-occurrence interaction effects: items have to work together
    and
  • 3. order interaction effects: items have to work in the right order
    (and you have to know what this order is)

    so, recommending collections is hard. still, there are already products that are doing it...
  • Apple Genius is one example that creates music playlists based on a seed song and songs “that go great together”. while convenient for users, there are also downsides to this fully automatic approach... following the three dimensions of the design space: ...
  • 1. fitness of individual item: system cannot know the user’s current attitude towards an item. maybe she’s just not in the mood for it.
  • 2. co-occurrence interaction effects: automatic software mostly creates collections that indeed work together, but often because it’s more of the same. and...
  • 3. order interaction effects: finding the right sequence of items is still one of the hardest problems; it is mostly ignored by recommender systems, so the order has to be created manually afterwards. finding the right set of items has to be enough; playlists that slowly grow in tempo or scope are not created.
  • still, having recommendations is great to restrict the search space because manually searching through thousands of items is awkward. therefore, why not combine interaction with recommendation to arrive at a suitable result?
  • one example for such an approach is Dasher: Dasher’s a system for text entry that works on a variety of devices (there’s even a brain-controlled version). Dasher works like this: after selecting a letter, the system presents the next letters according to their probability. So, after selecting ‘T’, the system enlarges ‘h’, because I probably want to write ‘The’. There’s an underlying language model that allows that and thus increases the speed of the system (the authors claim that Dasher allows up to XXX wpm). Still, there are some downsides of Dasher...
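    A toy sketch of this idea (not the real Dasher implementation; the bigram model and screen numbers below are made up) that gives each candidate next letter screen space proportional to its probability:

```python
def dasher_layout(prefix, bigram, screen_height=480.0, min_px=8.0):
    """Allocate vertical screen space to candidate next letters in
    proportion to their probability under a toy bigram model."""
    last = prefix[-1] if prefix else " "
    probs = bigram.get(last, {})
    total = sum(probs.values()) or 1.0
    boxes, y = [], 0.0
    for letter, p in sorted(probs.items(), key=lambda kv: -kv[1]):
        height = max(min_px, screen_height * p / total)
        boxes.append((letter, round(y), round(height)))
        y += height
    return boxes

# After typing "T", 'h' gets the largest target because "Th" is likely:
toy_bigram = {"T": {"h": 0.55, "o": 0.20, "a": 0.15, "e": 0.10}}
print(dasher_layout("T", toy_bigram))
# [('h', 0, 264), ('o', 264, 96), ('a', 360, 72), ('e', 432, 48)]
```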
  • For one, Dasher has so far only been applied to text entry, not to other selection tasks. And second, there hasn’t been a study of its interface parameters: Dasher’s layout is always the same.
  • Rush is an interaction technique that works for general selection from item sets on mobile touchscreen devices. It is suitable for single touch interaction and thus works with only one hand.
  • The general idea: All items are placed on a virtual canvas. The mobile device works as a view onto this canvas. After selecting a seed item, it is placed on the canvas. From there, we have three basic interactions:
  • 1. Navigation: The user navigates the canvas by putting a finger on the screen. The canvas then pans in the opposite direction.
  • 2. Getting recommendations: On touching an item, recommendations for it are shown.
  • 3. And finally, selection: by drawing a stroke through an item, it is selected or deselected. With these three basic interactions the user is able to create a collection of arbitrary length.
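    The three interactions need very little state. A minimal sketch, assuming a recommend callback and made-up layout offsets (class and method names are illustrative, not the actual implementation):

```python
class RushCanvas:
    """Sketch of rush's three basic interactions on a virtual canvas."""

    def __init__(self, seed_item, recommend):
        self.recommend = recommend             # item -> similar items, best first
        self.items = {seed_item: (0.0, 0.0)}   # items placed on the virtual canvas
        self.viewport = (0.0, 0.0)             # the device is a view onto the canvas
        self.collection = [seed_item]

    def on_drag(self, dx, dy):
        # 1. Navigation: the canvas pans opposite to the finger's movement.
        vx, vy = self.viewport
        self.viewport = (vx - dx, vy - dy)

    def on_touch_item(self, item):
        # 2. Recommendations: touching an item shows recommendations for it
        #    (rush displays five; the layout offsets here are arbitrary).
        x, y = self.items[item]
        for i, rec in enumerate(self.recommend(item)[:5]):
            self.items.setdefault(rec, (x + 80.0, y + 40.0 * (i - 2)))

    def on_stroke_through(self, item):
        # 3. Selection: a stroke through an item selects or deselects it.
        if item in self.collection:
            self.collection.remove(item)
        else:
            self.collection.append(item)
```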
  • And here’s the system in action on the iPhone simulator (think of the mouse cursor as the finger):
  • Having the initial idea for rush, we wanted to make sure that what we built first of all had the right form. That is, we wanted to examine what interface parameters work best. Second, we of course wanted to learn if this approach makes sense and what the best strategies for recommendation would be.
  • To make rush a more solid interaction technique we had two main questions that we answered through studies: 1. for letters it makes sense to display new items on the right-hand side of the screen (like in Dasher), due to the reading direction of (western) users...
  • With other recommendation topics like music, travel or food ingredients it’s no longer clear which direction is best.
  • Additionally, on a mobile touchscreen device there’s the problem of occlusion and how to hold the device.
  • Our second question was: How well does rush work compared to automatic recommender systems and a fully manual approach, and how do users like it? And also: what recommendation strategies are suitable?
  • For study 1 we examined several variables:
  • Orientation of the device
  • Orientation of the interface
  • Direction of the interface (depending on the orientation of the interface)
  • and finally: hand used by the user, to determine whether it makes sense to flip the interface for left- or right-handed users
  • For the task: we implemented a digit-based version of rush. participants had to select the right ten items. we counted errors and the required time.
  • We had the main goal of finding the optimal values for device and interface orientation and interface direction.
  • so, for the raw numbers: we had 12 participants and 16 timed runs (plus 16 trial runs)
  • here are the results: as you can see, the horizontal device and interface settings were significantly slower than the vertical ones. We had no main effect of the interface direction, however.
  • Another interesting finding regarded the hands used: since we had only two left-handed people, we used only the right-handers’ data and found that there was no significant difference between the dominant and non-dominant hand.
  • We arrived at the final design with a vertical device and interface, plus a bottom-to-top direction (not significantly, but a little faster than the other way round).
  • To find out whether rush actually works for users we compared it to the results of a fully automatic recommender system and the manual version.
  • again, some numbers: we created a song set of 3900 songs, made sure that they were popular (and thus known) enough and created similarity links between songs
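    A sketch of how such a set could be assembled, using the criteria from slide 23 (at least 500,000 last.fm plays, at least 10 similar songs); the data structures and the pruning loop are assumptions, not the authors' actual pipeline:

```python
def build_song_set(songs, plays, similar, min_plays=500_000, min_similar=10):
    """songs: iterable of song ids; plays: song -> last.fm play count;
    similar: song -> list of similar songs (also from last.fm)."""
    # Start with all sufficiently popular songs.
    selected = {s for s in songs if plays.get(s, 0) >= min_plays}
    # Dropping unpopular songs shrinks other songs' similarity lists,
    # so prune repeatedly until every song keeps >= min_similar neighbours.
    changed = True
    while changed:
        changed = False
        for s in list(selected):
            if sum(t in selected for t in similar.get(s, ())) < min_similar:
                selected.remove(s)
                changed = True
    return selected
```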
  • Participants were asked to choose a seed song that was the same for all conditions. They were then asked to create a playlist for a social event, to cause them to reflect on what makes a good playlist. We had two variables:
  • 1. The tool: Either automatic, rush, or manual.
  • 2. The recommendation approach: As rush can only show five items, it’s important to choose the right ones. Based on the list of similar songs, we therefore compared three different (admittedly very primitive) approaches, sketched in code after this list:
  • First: simply the top 5 most similar songs
  • Second: 5 random songs from the list of similar songs
  • Finally: Hybrid - the top song, two from the middle of the list and the two bottom songs.
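    In code, the three strategies are trivial. A sketch, assuming similar is a list of songs sorted by descending similarity with at least ten entries (which the song set's construction guarantees):

```python
import random

def top5(similar):
    # Top-5: simply the five most similar songs.
    return similar[:5]

def random5(similar):
    # Random: five random songs from the similarity list.
    return random.sample(similar, 5)

def hybrid(similar):
    # Hybrid: the top song, two from the middle, and the two bottom songs.
    mid = len(similar) // 2
    return [similar[0]] + similar[mid - 1 : mid + 1] + similar[-2:]
```

    For the slide-26 list, this version of hybrid would return Two-headed Boy, Yankee Bayonet, Chicago, Fake Empire, and White Winter Hymnal; the exact middle picks depend on how “middle” is defined.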
  • so, again some numbers: we had 12 participants and 7 conditions: 3 different versions of rush, 3 different automatic versions (where the computer picked a random song from the 5 available ones) and a fully manual one (users could freely browse all songs and similarity lists and listen to samples)
  • We had the main goals of seeing whether rush works in comparison to fully manual and automatic approaches, and of comparing the three recommendation strategies.
  • as expected, automatic was fastest (with 0s), rush was slower (avg: 142.73s), manual was slowest (avg: 388.58s)
  • So, for evaluating the quality of playlists: hard problem! We wanted to cause our participants to reflect on what makes a good (objective) playlist by giving them the task of creating playlists for a social event. Still, we expected a bias towards playlists that took longer to create (esp. the manual one), as participants had time to form an emotional bond with them. Therefore, we performed an additional online study where people were asked to rank the seven playlists created by the participants.
  • And here are the results: We used Condorcet Ranked Pairs to evaluate the rankings by the participants. Manual clearly won, followed by two rushes and the automatic tools.
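    For reference, a compact sketch of Condorcet Ranked Pairs (Tideman’s method) as it could be applied to rankings like ours; the ballot format and tie handling are assumptions:

```python
from itertools import combinations

def ranked_pairs(ballots, candidates):
    """ballots: lists ordering candidates, best first. Returns a full ranking."""
    # 1. Pairwise tally: wins[a][b] = number of ballots ranking a above b.
    wins = {a: {b: 0 for b in candidates if b != a} for a in candidates}
    for ballot in ballots:
        pos = {c: i for i, c in enumerate(ballot)}
        for a, b in combinations(candidates, 2):
            winner, loser = (a, b) if pos[a] < pos[b] else (b, a)
            wins[winner][loser] += 1
    # 2. Majorities, strongest first.
    majorities = sorted(
        ((wins[a][b], a, b) for a in candidates for b in wins[a]
         if wins[a][b] > wins[b][a]),
        reverse=True)
    # 3. Lock in pairs unless they would create a cycle.
    locked = {c: set() for c in candidates}   # locked[a]: candidates a beats

    def reaches(src, dst, seen):
        if src == dst:
            return True
        seen.add(src)
        return any(reaches(n, dst, seen) for n in locked[src] if n not in seen)

    for _, a, b in majorities:
        if not reaches(b, a, set()):          # locking a->b must not close a cycle
            locked[a].add(b)
    # 4. Read off the ranking: repeatedly take an unbeaten "source".
    order, remaining = [], set(candidates)
    while remaining:
        source = next(c for c in remaining
                      if not any(c in locked[o] for o in remaining))
        order.append(source)
        remaining.remove(source)
    return order
```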
  • Here the results for the online participants where two rush versions ranked first place.
  • ...manual playlists, however, ranked much lower, which might be due to the (study) participants’ bias. Still, we had only 10 results so all of this should be taken with a grain of salt.
  • so our results from this study are in short: rush is faster than manual but slower than automatic and works reasonably well (but can probably not reach manual quality). also: evaluating playlists is hard! regarding the recommendation strategies: there’s unfortunately not much of a difference between Hybrid and Random, but Top-5 was very restrictive and annoying for our participants.
  • to conclude
  • we presented rush, an interaction technique for selecting multiple items on mobile touchscreen devices. we performed two user studies to determine the optimal orientation and see if rush works better than fully manual or automatic approaches.
  • For future work: It might be worthwhile to see how quality and user acceptance improve once other personalized recommendations are integrated. We also want to examine other ways for navigation.
  • Thanks!
