Hi, my name is Joris Schelfaut. I’m an Applied Informatics student, and my thesis subject is “Visualization of music suggestions”.
In this presentation we will look at recommender systems: systems that compute personalized item suggestions based on the user’s interaction with the system, for example by tracking listening history in the case of music recommendation, or by collecting ratings given to particular items in a catalog. Examples of such systems are Last.fm (music), IMDb (movies), Netflix (movies), Amazon (books), et cetera.
Suppose we are collecting all kinds of geometric objects, and we have a database full of cubes, cones, spheres, et cetera. A number of users have rated these objects. Next to that, we have one or more algorithms to mine suggestions from the data.
Algorithms to compute these suggestions include, for example, content-based approaches that use similarity between items. In our database, rectangles are fairly similar to cubes. So if someone rates a rectangle as “amazing”, the system can then suggest a cube to that particular user.
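The content-based idea can be sketched in a few lines of Python. The feature vectors below (number of faces, curvature, regularity) and their values are purely illustrative assumptions, not part of the thesis:

```python
import math

# Hypothetical feature vectors for the geometric objects
# (faces, curvature, regularity) -- illustrative values only.
ITEM_FEATURES = {
    "cube":      [6, 0.0, 1.0],
    "rectangle": [6, 0.0, 0.8],
    "sphere":    [1, 1.0, 1.0],
    "cone":      [2, 0.5, 0.6],
}

def cosine_similarity(a, b):
    """Cosine of the angle between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def recommend_similar(liked_item, k=1):
    """Return the k items most similar to an item the user rated highly."""
    scores = {
        other: cosine_similarity(ITEM_FEATURES[liked_item], feats)
        for other, feats in ITEM_FEATURES.items()
        if other != liked_item
    }
    return sorted(scores, key=scores.get, reverse=True)[:k]

print(recommend_similar("rectangle"))  # the cube ranks first
```

With these made-up features, rating a rectangle highly indeed surfaces the cube as the nearest item.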
Another approach would be to use similarity between user profiles to classify users into some kind of cluster. For example if two people love cones and pyramids, we’ll assume these users are very similar, i.e., they’re neighbors. If one of these users happens to love cylinders as well, the system might think “aha, the other person might like this as well, let’s recommend this item to that user”. This approach is called collaborative filtering.
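The collaborative filtering step can be sketched as follows. The rating matrix, the overlap-based similarity measure, and the “like” threshold of 4 are all simplifying assumptions made for illustration; real systems use more robust measures such as Pearson correlation:

```python
# Illustrative rating matrix (user -> item -> rating); values are made up.
RATINGS = {
    "alice": {"cone": 5, "pyramid": 5, "cylinder": 4},
    "bob":   {"cone": 5, "pyramid": 4},
    "carol": {"sphere": 5, "cube": 4},
}

def user_similarity(u, v):
    """Fraction of the users' combined items that both rated highly (>= 4)."""
    shared = set(RATINGS[u]) & set(RATINGS[v])
    agree = sum(1 for i in shared if RATINGS[u][i] >= 4 and RATINGS[v][i] >= 4)
    return agree / len(set(RATINGS[u]) | set(RATINGS[v]))

def recommend_cf(user):
    """Recommend items the nearest neighbor liked that the user hasn't rated."""
    neighbor = max(
        (v for v in RATINGS if v != user),
        key=lambda v: user_similarity(user, v),
    )
    return [i for i, r in RATINGS[neighbor].items()
            if r >= 4 and i not in RATINGS[user]]

print(recommend_cf("bob"))  # alice is bob's neighbor, so the cylinder is suggested
```

This mirrors the cone-and-pyramid example: bob and alice agree on cones and pyramids, so alice’s cylinder is recommended to bob.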
One problem associated with recommender systems is that the suggestions they compute are often presented without giving the user any idea of how the suggestion was computed. For example, our friend Billy the sphere collector receives a recommendation for a parallelepiped and thinks “Are you kidding me?”, while there might be a perfectly reasonable explanation of why this would be an interesting recommendation. Billy loses trust in the recommender system and goes looking elsewhere for cool volumes.
To solve this problem of decreased levels of trust and acceptance of recommendations, we could try to explain the reasoning behind the suggestions. What we aim for is some level of insight into the recommendation rationale. This is not a trivial task: the system may be too complex to explain efficiently, or revealing too much information about the algorithm may not be what developers want, as significant research efforts were spent to create it.
One way of providing insight is by creating some kind of explanation system. For example the recommendation process can be visualized, which also brings us to the second part of the title of this thesis: “visualization of music suggestions”. A number of such systems exist (see slides).
These systems can be evaluated based on a number of “aims” (see table):
• Transparency: Explain how the system works.
• Scrutability: Allow users to tell the system it is wrong.
• Trust: Increase users’ confidence in the system.
• Effectiveness: Help users make good decisions.
• Persuasiveness: Convince users to try or buy.
• Efficiency: Help users make decisions faster.
• Satisfaction: Increase the ease of usability or enjoyment.
The objective: make a visualization that can explain music suggestions, that is interactive, and that enables the user to steer the process (if possible). Evaluation is based on the previously described aims.
Now that we have an understanding of the big picture of the problem and context, we will give an overview of the remainder of the presentation:
• First we will take a closer look at the target audience
• Next we will describe the design of the visualization
• Then we will present how this was implemented
• Next we will give an overview of the most important test results
• Finally a conclusion is presented.
The target audience is largely based on the so-called savants and enthusiasts (see table on slide). (Note that the last category is the predominant one in the population. I have to say I found that result a bit remarkable, given studies showing a close relationship between music and emotion.)
In the visualization, the items (or artists in this case) of the user are laid out in a circle.
For each pair of items that the user owns, an edge is drawn between the nodes, creating a fully connected sub-graph. By highlighting a user, the corresponding items and edges will be highlighted as well, and vice versa.
In an effort to reduce visual clutter and make the visualization more visually pleasing, edge bundling is applied so that related edges are drawn towards each other.
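The circular layout step above can be sketched with basic trigonometry. This is a minimal illustration, not the thesis implementation; the edge-bundling step itself requires a full algorithm (e.g. hierarchical edge bundling) and is omitted here:

```python
import math

def circular_layout(items, radius=1.0):
    """Place each item node evenly spaced on a circle of the given radius."""
    n = len(items)
    positions = {}
    for k, item in enumerate(items):
        angle = 2 * math.pi * k / n   # angle in radians for node k
        positions[item] = (radius * math.cos(angle),
                           radius * math.sin(angle))
    return positions

# Hypothetical artist names, for illustration only.
artists = ["Artist A", "Artist B", "Artist C", "Artist D"]
pos = circular_layout(artists)
print(pos["Artist A"])  # first node sits at angle 0, i.e. (radius, 0)
```

Edges between every pair of a user’s items can then be drawn between these coordinates, producing the fully connected sub-graph described above.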
To test the visualization, we tried to explain Last.fm’s collaborative filtering recommender.
The visualization was injected into the recommendations page using a Chrome browser extension.
The development took place in a number of iterations to gradually improve the application. In the first iteration we used a paper prototype; here we mainly wanted to find out whether the visualization had any potential.
Selecting a user
Visualization of Music Suggestions
A visual explanation system for collaborative filtering
Prof. Dr. Ir. E. Duval, Prof. Dr. K. Verbert, Dr. J. Klerkx
• Compute personalized item suggestions based
on the user’s interaction with the system
– Listening history
– Item ratings
– Item purchases
• Last.fm, Netflix, IMDb, Facebook, Amazon, …
• Database (items / users)
Recommender system > CBF
Recommender system > CF
Explanation system > Examples
Explanation system > Evaluation
• Make a visualization
...that can explain music suggestions
• Steer the process (if possible)
• Evaluation based on previously described aims
• Non-professional users (learnability)
Conclusion > Objectives
• Varying levels of perceived usefulness
• SUS score of 80.5 for iteration 4
• Learnability can improve
• Design can be effective for explaining
• Starting point for further exploration
Conclusion > Future work
• Use symmetry in the data to show users instead of artists as nodes
• Additional interactions (e.g. on edges)
• Clutter reduction through opacity
• Temporarily hide users
– Improve data load times through caching
– Further improve labels and visual clues
– Benchmarks, expert-based, heuristic