We present a prototype recommendation system for mobile applications that exploits a rather general description of the user’s context. One of the main features of the proposed solution is the proactive and completely automated procedure of querying the apps marketplace, able to retrieve a set of apps and to rank them on the basis of the current situation of the user. We also present a first experimental evaluation that confirms the effectiveness of the general design and implementation choices and sheds some light on the peculiar features and critical issues of recommendation systems for mobile applications.
1. A Context-Aware Retrieval System for Mobile Applications
Stefano Mizzaro, Marco Pavan, Ivan Scagnetto, Ivano Zanello
Dept. of Mathematics and Computer Science - University of Udine
via delle Scienze, 206
Udine, Italy
{mizzaro, marco.pavan, ivan.scagnetto}@uniud.it, ivano.zanello@gmail.com
4th Workshop on Context-awareness in Retrieval and Recommendation
in conjunction with ECIR 2014, Amsterdam
2014 Marco Pavan, Università degli studi di Udine
2. Agenda
• Introduction: mobile systems and context-awareness
• Our approach, and connections with the Context-Aware
Browser (CAB)
• The proposed system: AppCAB (i.e. Apps for CAB)
• Experimental evaluation
• Results and future works
• Questions
3. Mobile systems and Context-awareness
• Mobile device sales exceeded computer sales for the first time in 2012
• Many people have moved several activities from their
computer to their smartphone or tablet
• Smaller screens and (virtual) keyboards force users to spend more effort to seek and get what they need
• Users are sometimes forced to use the device in
particular situations or in stressful moments
4. Mobile systems and Context-awareness
• With huge mobile marketplaces, users are overwhelmed by the large number of applications, combined with a usage situation that often implies distraction and time pressure
• By analyzing the user’s current situation, information extracted from the context can be exploited to find the right applications to recommend at that specific moment
• The context plays the role of a “filter” and helps improve the information retrieval process
5. Our approach
• Contextual information extraction
We used contexts generated by the Context-Aware Browser (CAB)
• Mobile applications’ metadata extraction
We developed a crawler for Apple AppStore in order to get Title,
Description, Category and Average rating for each application
• Recommender system design
It needs to be as precise as possible, since users interact with ergonomically limited devices; a long list of suggestions is useless on a screen of a few inches
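The metadata-extraction step above can be sketched as follows. This is a minimal, illustrative reconstruction, not the authors’ actual crawler (whose details are not in the slides): it maps one raw marketplace record to the four fields the system uses, assuming field names in the style of Apple’s public iTunes Search API (`trackName`, `description`, `primaryGenreName`, `averageUserRating`).

```python
# Sketch of metadata extraction: pull Title, Description, Category and
# Average rating out of one App Store record. Field names follow Apple's
# public iTunes Search API; the authors' crawler is not described in
# detail, so this mapping is an illustrative assumption.

def extract_app_metadata(record):
    """Map one raw marketplace record to the four fields the index uses."""
    return {
        "title": record.get("trackName", ""),
        "description": record.get("description", ""),
        "category": record.get("primaryGenreName", ""),
        "avg_rating": record.get("averageUserRating", 0.0),
    }

# Hypothetical record, in the shape returned by the iTunes Search API
sample = {
    "trackName": "City Bus Timetable",
    "description": "Real-time bus schedules for your city.",
    "primaryGenreName": "Travel",
    "averageUserRating": 4.5,
}
app = extract_app_metadata(sample)
```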
6. Our approach
• Experimental evaluation
Test-collection-based evaluation following the principles of TREC, in particular the Contextual Suggestion track
Documents = apps metadata (from the Apple App Store)
Information needs descriptions (“topics”) = context descriptors (12 contexts generated by the CAB system)
7. BaseLine System (BLS)
• We developed BLS using basic information retrieval techniques in order to
have a term of comparison for our proposals and measure future improvements
test how effective common IR techniques are for recommender systems for mobile applications
• BLS = indexer + query processor
To build the index of all words found in each app metadata
And to retrieve the right set of apps starting from a context description
8. BLS - indexer
• TF.IDF of each word extracted from all apps’ Title
and Description
During this process we also keep track, in an array of counters, of which category is associated with the app for each occurrence of the word
[Diagram: each indexed word a1 … an carries an array of category counters C1,i C2,i … Cn,i]
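The indexer described above can be sketched as follows: a TF.IDF weight per (word, app) pair, plus a per-word array of category counters. The category list, whitespace tokenization, and IDF smoothing are illustrative assumptions; the slides do not specify them.

```python
import math
from collections import defaultdict

# Sketch of the BLS indexer: TF.IDF of each word from Title and
# Description, plus one array of category counters per word, incremented
# at every occurrence with the category of the containing app.

CATEGORIES = ["Travel", "Games", "Productivity"]  # hypothetical category list

def build_index(apps):
    """apps: list of dicts with "id", "title", "description", "category"."""
    postings = defaultdict(dict)                       # word -> {app_id: tf}
    cat_counters = defaultdict(lambda: [0] * len(CATEGORIES))
    for app in apps:
        words = (app["title"] + " " + app["description"]).lower().split()
        for w in words:
            postings[w][app["id"]] = postings[w].get(app["id"], 0) + 1
            cat_counters[w][CATEGORIES.index(app["category"])] += 1
    n_docs = len(apps)
    # convert raw term frequencies into TF.IDF weights
    index = {w: {app_id: tf * math.log(n_docs / len(tfs))
                 for app_id, tf in tfs.items()}
             for w, tfs in postings.items()}
    return index, cat_counters
```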
9. BLS - indexer
• Category for each context
By summing up the arrays of counters of all the words in the user context, we select the category with the highest score
[Diagram: the category-counter arrays of all context words cx1 … cxn are summed component-wise; the context category is the category Ci with the highest total]
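The context-category computation can be sketched as a component-wise sum over the per-word counter arrays built at indexing time, followed by an argmax; this small function is an illustration, not the authors’ code.

```python
# Sketch of the context-category step: sum, component-wise, the category
# counter arrays of every word in the context and return the category
# with the highest total.

def context_category(context_words, cat_counters, categories):
    totals = [0] * len(categories)
    for w in context_words:
        counters = cat_counters.get(w, [0] * len(categories))
        for i, c in enumerate(counters):
            totals[i] += c
    return categories[totals.index(max(totals))]
```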
10. BLS - query processor
• List of apps ordered by their relevance value for
each context term
For each context word it looks up in the index the applications containing it
• Set of lists obtained by repeating the process for
each context term
• Total list with distinct entries
If an app is present more than once, we sum all its relevance values
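The query-processing steps above can be sketched as follows: one lookup per context word, then a merge into a single list with distinct entries, summing the relevance values of apps that appear in more than one per-word list.

```python
# Sketch of the BLS query processor: per-word index lookups merged into
# one deduplicated list, with scores of repeated apps summed, ordered by
# decreasing relevance.

def process_query(context_words, index):
    merged = {}
    for w in context_words:
        for app_id, score in index.get(w, {}).items():
            merged[app_id] = merged.get(app_id, 0.0) + score
    return sorted(merged.items(), key=lambda kv: kv[1], reverse=True)
```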
11. BLS - final steps
• We keep just the first 1000 applications from the total list
The final purpose of the system is to suggest a small set of the most relevant applications to users
• We apply 2 bonuses as score modifiers (boosts)
App average rating boost: we increase or decrease the relevance score based on the user evaluation of the app as follows

Evaluation:      1     2     3     4     5
Score modifier: -50%  -25%  +0%  +25%  +50%

Category boost: if the retrieved app is associated with the same category as the context, its relevance score is doubled
• Final list with a set of 10 applications
As a final step we reorder the apps list with the new scores and cut it off to keep just 10 applications to recommend to users
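The two BLS boosts can be sketched as follows. The rating boost is the table above expressed as a linear formula (25% per step away from the neutral rating 3); the category boost doubles the score on a match, as stated on the slide.

```python
# Sketch of the BLS score modifiers: rating boost per the table
# (1 -> -50% ... 5 -> +50%) and category boost (score doubled when the
# app's category equals the context category).

def rating_boost(avg_rating):
    return (avg_rating - 3) * 0.25          # 1 -> -0.50 ... 5 -> +0.50

def boosted_score(score, avg_rating, app_category, context_category):
    score *= 1 + rating_boost(avg_rating)
    if app_category == context_category:
        score *= 2                          # category boost: double value
    return score
```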
12. AppCAB
• AppCAB is our proposal, presented in 2 versions: “base” and “pro”
To show some differences by applying different bonus scores
• AppCAB = indexer + query processor
The indexing process for both versions uses the Lucene search engine library
As for BLS, during this process we keep track of categories, but for the context category computation we keep the entire ranked list instead of just the first one
• We apply 3 bonuses as score modifiers, at indexing time
A score reduction for apps in the “Game” category: 70% of the original score
App average rating boost: we increase or decrease the relevance score based on the user evaluation of the app as follows

Evaluation:      1     1.5    2     2.5    3     3.5    4     4.5    5
Score modifier: -50%  -37%  -25%  -12%   +0%  +12%  +25%  +37%  +50%

App popularity boost: (min(0.5, numberOfReviews/100000))*100
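The three indexing-time modifiers can be sketched as follows. The rating boost uses the same linear 25%-per-star formula as BLS (the slide’s table rounds the half-star values, e.g. -37.5% is shown as -37%); the “Game” category label is taken verbatim from the slide, and the popularity formula is the one given there.

```python
# Sketch of the three AppCAB indexing-time score modifiers:
# 1) "Game" category penalty: the app keeps 70% of its score;
# 2) average rating boost: 1 -> -50% ... 5 -> +50%, linear in the rating;
# 3) popularity boost: min(0.5, numberOfReviews/100000), i.e. capped at +50%.

def appcab_index_score(score, category, avg_rating, number_of_reviews):
    if category == "Game":
        score *= 0.70
    score *= 1 + (avg_rating - 3) * 0.25
    popularity = min(0.5, number_of_reviews / 100000)
    score *= 1 + popularity
    return score
```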
13. AppCAB
• AppCAB “base” does not apply any boosts at query time after the indexing process
To test how it works with just the boosts included at indexing time
Without taking the application category into account
• AppCAB “pro” = “base” + 2 bonuses as score modifiers at query time
Category boost: if the retrieved app is associated with one of the first three categories of the context, its relevance score is boosted as follows

Category rank:   1      2       3
Score modifier: +25%  +12.5%  +8%

Title boost: if a query term is present in the app title we add +10% to the original score
• Final list with a set of 10 applications
We remove duplicate applications (many apps have free/lite and pro versions)
We reorder the apps list with the new scores and cut it off to keep just 10 applications to recommend to users
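The two “pro” query-time modifiers can be sketched as follows. The slide does not say whether the title boost is applied once or per matching term; this sketch assumes once for any match.

```python
# Sketch of the AppCAB "pro" query-time modifiers: a category boost
# depending on the rank of the app's category within the context's
# ranked category list, and a +10% title boost (applied once here;
# per-term application is also a possible reading of the slide).

CATEGORY_RANK_BOOST = {1: 0.25, 2: 0.125, 3: 0.08}

def appcab_pro_score(score, app_category, context_categories, query_terms, title):
    # context_categories: the context's categories ordered by score
    if app_category in context_categories[:3]:
        rank = context_categories.index(app_category) + 1
        score *= 1 + CATEGORY_RANK_BOOST[rank]
    title_words = title.lower().split()
    if any(t.lower() in title_words for t in query_terms):
        score *= 1.10                       # title boost: +10%
    return score
```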
14. Experimental evaluation
• Benchmark TREC-like
Collection of “documents” = apps metadata (nearly the whole set of apps in the Italian marketplace of the Apple App Store)
Statements of information needs (“topics”) = 12 textual context descriptors generated by the CAB system
A set of relevance judgments = made by 16 people using a single-item Likert scale;
the values were numbers between 0 and 5 with the following meaning:
- 5 = highest value
- 1 = lowest value
- 0 = they were not able to evaluate the app due to external factors, such as the related webpage not being reachable
We ran all three algorithms on the 12 contexts in order to get three sets of 10 applications for each context
15. Experimental evaluation
• Sample of relevance assessors: 16 people distributed as follows:
• Metric: Normalized Discounted Cumulative Gain (NDCG)
Since we need a limited set of recommendations (10 applications), and since we collected relevance judgments on a five-level scale, we measure system quality using NDCG@5 and NDCG@10, considering 5 and 10 retrieved apps respectively
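NDCG@k can be sketched as below. The slides do not state which gain/discount variant was used; the common graded formulation (2^rel - 1) / log2(i + 1) is assumed here.

```python
import math

# Sketch of NDCG@k with graded (0-5) relevance: DCG of the returned
# ranking divided by the DCG of the ideal (descending) ordering. The
# (2^rel - 1) / log2(rank + 1) formulation is an assumption; the exact
# variant used in the evaluation is not stated on the slides.

def dcg(relevances, k):
    return sum((2 ** rel - 1) / math.log2(i + 2)
               for i, rel in enumerate(relevances[:k]))

def ndcg(relevances, k):
    ideal = dcg(sorted(relevances, reverse=True), k)
    return dcg(relevances, k) / ideal if ideal > 0 else 0.0
```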
[Pie charts: assessors by sex (men / women), age (under 20, 21-30, 31-40, over 40), familiarity with mobile devices (basic, intermediate, advanced, development), and mobile platform owned (iOS, Android, Windows, other)]
16. Results - retrieval effectiveness
• 0-values: during the test some assessors chose to assign a 0-value as app evaluation due to the non-availability of the related web page
A 0-value is not the lowest relevance value but a score reporting the impossibility of assessing the application due to external factors
17. Results - NDCG@5
Amount of 0-values:
• 14.5% for BLS
• 16.9% for AppCAB “base”
• 11.7% for AppCAB “pro”
NDCG@5 score with 0-values, over all contexts:
• 0.63 for AppCAB “pro”
• 0.60 for AppCAB “base”
• 0.49 for BLS
NDCG@5 score without 0-values, over all contexts:
• 0.65 for AppCAB “pro”
• 0.62 for AppCAB “base”
• 0.52 for BLS
18. Results - NDCG@5
NDCG@5 score with 0-values
NDCG@5 score without 0-values
• The AppCAB solutions have higher effectiveness in most cases
• In particular, the “pro” version further improves the results
19. Results - NDCG@5
NDCG@5 score with 0-values
NDCG@5 score without 0-values
• The exceptions of contexts 8 and 9 show cases where the AppCAB systems failed
• The non-reachability of the related webpage for some applications does not strongly affect the score, even for contexts 8 and 9; therefore it is not the reason for the failure in those cases
• The low performance is due to a heterogeneous set of keywords, suggesting distinct and unrelated topics
20. Results - NDCG@10
Amount of 0-values:
• 14.5% for BLS
• 16.9% for AppCAB “base”
• 11.7% for AppCAB “pro”
NDCG@10 score with 0-values, over all contexts:
• 0.69 for AppCAB “pro”
• 0.61 for AppCAB “base”
• 0.54 for BLS
NDCG@10 score without 0-values, over all contexts:
• 0.71 for AppCAB “pro”
• 0.64 for AppCAB “base”
• 0.56 for BLS
21. Results - NDCG@10
NDCG@10 score without 0-values
NDCG@10 score with 0-values
• AppCAB effectiveness increases
• The differences between the systems remain in the same proportion
• AppCAB “pro” confirms the best performance
22. Results - NDCG@10
NDCG@10 score without 0-values
NDCG@10 score with 0-values
• We still have some exceptions, such as contexts 6, 8 and 9
• and in particular context 1, where 0-values considerably affect the results
• Despite the exceptions in certain contexts, the overall score shows how AppCAB improves retrieval effectiveness by providing a better set of applications to relevance assessors
• In particular, the “pro” version further improves the results in general
23. Results - Rating distribution
• The AppCAB systems received far fewer 1-ratings than the BLS solution
• there is also an improvement, although less pronounced, for the other rating values
• The higher mean and median values emphasise AppCAB “pro” effectiveness
24. Results - statistical significance
• Statistical tests to determine whether there are significant differences between the means of the relevance judgments
Shapiro-Wilk normality test: the results show that none of the distributions is normal
Levene test, to verify the homogeneity of variances: the results confirm that we could not accept the hypothesis of homogeneity of variances
Friedman test, to verify whether the datasets have significant differences: the resulting parameter value indicates a statistically significant difference between the means
Post-hoc test, to identify which specific groups differed: the results show that the AppCAB “pro” solution differs significantly from both the others, while AppCAB “base” did not give a noticeable improvement compared to BLS
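The test pipeline above can be sketched with SciPy on hypothetical score samples (the real judgment data is not in the slides): Shapiro-Wilk per system, Levene across systems, then the Friedman test on the paired per-context scores. The post-hoc comparison is omitted, since it requires an additional package (e.g. a Nemenyi test).

```python
from scipy import stats

# Hypothetical per-context relevance scores for the three systems;
# purely illustrative, NOT the data from the actual evaluation.
bls  = [2, 3, 1, 2, 3, 2, 1, 3, 2, 2, 3, 1]
base = [3, 3, 2, 2, 4, 3, 2, 3, 3, 2, 3, 2]
pro  = [4, 3, 3, 3, 4, 4, 2, 3, 4, 3, 4, 3]

w, p_shapiro = stats.shapiro(bls)                # normality of one group
lev, p_levene = stats.levene(bls, base, pro)     # homogeneity of variances
chi2, p_friedman = stats.friedmanchisquare(bls, base, pro)  # paired test
```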
25. Conclusions and future works
• AppCAB “pro” was evaluated better than the others in 42% of cases, as measured with NDCG@5, and in 50% of cases with NDCG@10
• There have also been improvements from the technical point of view
By using the Lucene library, the indexing process has been optimized both in terms of building and read operations
[Pie charts: share of cases in which each system (AppCAB “pro”, AppCAB “base”, BLS) was evaluated best, for NDCG@5 and NDCG@10]
26. Conclusions and future works
• Future work:
a first obvious, and needed, improvement would be to rely on state-of-the-art IR models that are more effective than TF.IDF, such as BM25
AppCAB “pro” can be tested with English contexts and apps to refine it and make it more versatile
- the index is ready to work with the English language, in order to provide applications for international marketplaces
the query building process might be improved by means of query enrichment techniques (e.g. by adding a new set of words) in order to filter apps more accurately
improving the retrieval process over time, by taking user choices into account
- by creating a history of what users install and run, in order to apply new boosts
- by exploiting user participation in social networks, in order to get feedback about shared preferences and habits
27. Thank you for your attention
• Questions…