# Combining Top-N Recommenders with Metasearch Algorithms [SIGIR '17 SP Poster]

Poster for the SIGIR 2017 short paper:

Daniel Valcarce, Javier Parapar, Álvaro Barreiro: Combining Top-N Recommenders with Metasearch Algorithms. SIGIR 2017: 805-808

http://doi.acm.org/10.1145/3077136.3080647

Daniel Valcarce, Javier Parapar and Álvaro Barreiro
Information Retrieval Lab, Computer Science Department, University of A Coruña, Spain

**Overview**

There is a great diversity of good recommendation algorithms, so which technique should you choose? A combination of them! We study metasearch techniques for top-N recommendation (metasearch = fusion of the outputs of different search engines). These techniques require no training data and no parameter tuning. We analysed two families of techniques: voting-based and score-based approaches.

**Score-based approaches**

Algorithms for score combination, where n_i is the number of systems that return a score for item i:

- CombSum: score(i) = ∑_k score_k(i)
- CombMNZ: score(i) = n_i · ∑_k score_k(i)
- CombANZ: score(i) = n_i⁻¹ · ∑_k score_k(i)

Score normalisation schemes:

| Scheme   | Constraints       |
|----------|-------------------|
| Standard | min = 0, max = 1  |
| Sum      | min = 0, sum = 1  |
| ZMUV     | mean = 0, var = 1 |
| ZMUV+1   | mean = 1, var = 1 |
| ZMUV+2   | mean = 2, var = 1 |

**Voting-based approaches**

- Borda: Borda count assigns a decreasing score to the candidates according to their position in the voting ballots.
- Condorcet: a Condorcet winner is the candidate who would win (or at least tie) against every other candidate in a one-to-one election.
- Copeland: Copeland's rule resolves Condorcet ties by sorting the tied elements by #victories − #defeats.

*[Figures: nDCG@10 as a function of the number of combined systems (1–10) for CombSum, CombMNZ, CombANZ, Copeland, Condorcet and Borda count, on the MovieLens 100k and R3-Yahoo! Music datasets.]*

**Results**

Superscripts indicate statistical significance.

| Algorithm        | MovieLens 100k   | R3-Yahoo! Music   |
|------------------|------------------|-------------------|
| HT               | 0.0123           | 0.0164            |
| SVD++            | 0.1182^a         | 0.0142            |
| UIR-Item         | 0.2180^ab        | 0.0174^b          |
| CHI2-NMLE        | 0.3659^abc       | 0.0270^abci       |
| RM2-L-PD         | 0.3784^abcd      | 0.0272^abci       |
| BPRMF            | 0.3869^abcde     | 0.0278^abci       |
| NNCosNgbr        | 0.3889^abcde     | 0.0274^abci       |
| LM-DP-WSR        | 0.4017^abcdefg   | 0.0277^abci       |
| PureSVD          | 0.4152^abcdefgh  | 0.0233^abc        |
| SLIM             | 0.4221^abcdefghi | 0.0301^abcdefi    |
| Best aggregation | 0.4436^abcdefghi | 0.0310^abcdefghij |

**Discussion**

- CombSum is the best combination method.
- If we lack scores, Copeland's rule is nearly as effective as CombSum.
- The Standard normalisation works best on the MovieLens dataset and the ZMUV+1 normalisation on the Yahoo! collection.
- CombSum and Copeland's rule tend to choose the same recommenders in their optimal combination.
- The best combination is always capable of outperforming the best single recommender.
- Some combinations that do not include the best single recommender still outperform that best single recommender.

**Conclusions and Future Work**

The combination of methods outperforms state-of-the-art recommenders. The studied metasearch techniques are also very simple and efficient. We plan to study which recommendation algorithms we should merge, instead of testing all the possible combinations. → We believe diversity and novelty may be useful for this task.

Code: https://github.com/dvalcarce/metarecsys

SIGIR 2017, 40th International ACM SIGIR Conference on Research and Development in Information Retrieval, August 7-11, 2017, Tokyo, Japan. Thanks to the ACM SIGIR Student Travel Grant!
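The score-based rules above can be sketched in a few lines of Python. This is a minimal illustration, not the paper's implementation: it assumes each recommender's output is a plain `{item: score}` dict, applies the Standard (min–max) normalisation before fusing, and uses function names of our own choosing.

```python
def standard_norm(scores):
    """Standard normalisation: rescale so min = 0 and max = 1."""
    lo, hi = min(scores.values()), max(scores.values())
    span = (hi - lo) or 1.0  # guard against constant scores
    return {item: (s - lo) / span for item, s in scores.items()}

def combine(rankings, method="CombSum"):
    """Fuse several {item: score} dicts with CombSum, CombMNZ or CombANZ."""
    normed = [standard_norm(r) for r in rankings]
    totals, counts = {}, {}
    for r in normed:
        for item, s in r.items():
            totals[item] = totals.get(item, 0.0) + s
            counts[item] = counts.get(item, 0) + 1  # n_i: systems scoring item i
    if method == "CombSum":
        return totals
    if method == "CombMNZ":
        return {i: counts[i] * t for i, t in totals.items()}
    if method == "CombANZ":
        return {i: t / counts[i] for i, t in totals.items()}
    raise ValueError(f"unknown method: {method}")

# Example: two recommenders with overlapping item sets.
r1 = {"a": 3.0, "b": 1.0, "c": 2.0}
r2 = {"a": 0.9, "c": 0.1}
fused = combine([r1, r2], "CombSum")  # a: 2.0, b: 0.0, c: 0.5
```

Note how CombMNZ rewards items scored by many systems while CombANZ averages instead, which matches the formulas in the poster.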
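The voting-based rules can be sketched similarly. Again this is only an illustration under our own assumptions: each recommender contributes a ranked list of item ids, and Copeland's rule is shown as a full pairwise ranking rather than only as a tie-breaker for Condorcet ties as on the poster.

```python
from itertools import combinations

def borda(rankings):
    """Borda count: the item at position p in a list of length m gets m - p points."""
    points = {}
    for ranking in rankings:
        m = len(ranking)
        for p, item in enumerate(ranking):
            points[item] = points.get(item, 0) + (m - p)
    return sorted(points, key=points.get, reverse=True)

def copeland(rankings):
    """Copeland's rule: order items by pairwise #victories - #defeats,
    where x beats y if more rankings place x above y."""
    items = sorted({i for r in rankings for i in r})
    score = {i: 0 for i in items}
    for x, y in combinations(items, 2):
        shared = [r for r in rankings if x in r and y in r]
        x_wins = sum(r.index(x) < r.index(y) for r in shared)
        y_wins = len(shared) - x_wins
        if x_wins > y_wins:
            score[x] += 1
            score[y] -= 1
        elif y_wins > x_wins:
            score[y] += 1
            score[x] -= 1
    return sorted(items, key=score.get, reverse=True)

# Example: three recommenders ranking the same three items.
rankings = [["a", "b", "c"], ["b", "a", "c"], ["a", "c", "b"]]
order = borda(rankings)  # -> ["a", "b", "c"]
```

Unlike the score-based rules, these need only the rank positions, which is why Copeland's rule is the fallback suggested in the discussion when scores are unavailable.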