Ján Suchal - Rank all the things!

Google search results and Netflix recommendations have more in common than you might think at first glance. Let’s take a deep dive into recent research and see what makes search results or recommendations good and why. We’ll talk about how users interact with search or recommendations and what key metrics we can measure and improve. Make no mistake, this concerns you even if you never planned to build a search or recommendation engine.

Published in: Technology, Business

  1. Rank all the things! @jsuchal @SynopsiTV
  2. Blogs, newsletters Courses, training How do you learn things? Conferences Work
  3. Research papers?
  4. WHY NOT?
  5. “It’s not useful for the real world.” WHY NOT? “I wouldn’t understand any of that.”
  6. About me PhD dropout FIIT STU Bratislava foaf.sk, otvorenezmluvy.sk, govdata.sk sme.sk news recommender developer @ SynopsiTV
  7. My workflow
  8. My workflow MAGIC! MAGIC! MAGIC!
  9. Search vs. recommender engine Search engine Recommendation engine input: query output: list of results input: movie output: list of similar movies (see the ranking sketch after the transcript)
  10. Academic Mode
  11. Accurately interpreting clickthrough data as implicit feedback Significant on two-tailed tests at a 95% confidence level !!! Thorsten Joachims, Laura Granka, Bing Pan, Helene Hembrooke, and Geri Gay. Accurately interpreting clickthrough data as implicit feedback. In Proceedings of the 28th annual international ACM SIGIR conference on Research and development in Information retrieval, SIGIR ’05, pages 154–161, New York, NY, USA, 2005. ACM.
  12. Accurately interpreting clickthrough data as implicit feedback Thorsten Joachims, Laura Granka, Bing Pan, Helene Hembrooke, and Geri Gay. Accurately interpreting clickthrough data as implicit feedback. In Proceedings of the 28th annual international ACM SIGIR conference on Research and development in Information retrieval, SIGIR ’05, pages 154–161, New York, NY, USA, 2005. ACM.
  13. Accurately interpreting clickthrough data as implicit feedback
  14. Evaluation Metrics ● Mean Average Precision @ N ○ probability of target result being in top N items ● Mean Reciprocal Rank ○ 1 / rank of target result ● Normalized Discounted Cumulative Gain ● Expected Reciprocal Rank (see the metrics sketch after the transcript)
  15. Optimizing search engines using clickthrough data Thorsten Joachims. Optimizing search engines using clickthrough data. In Proceedings of the eighth ACM SIGKDD international conference on Knowledge discovery and data mining, KDD ’02, pages 133–142, New York, NY, USA, 2002. ACM. (see the preference-pair sketch after the transcript)
  16. Optimizing search engines using clickthrough data
  17. Query chains: learning to rank from implicit feedback Filip Radlinski and Thorsten Joachims. Query chains: learning to rank from implicit feedback. In KDD ’05: Proceedings of the eleventh ACM SIGKDD international conference on Knowledge discovery in data mining, pages 239–248, New York, NY, USA, 2005. ACM.
  18. On Caption Bias in Interleaving Experiments Katja Hofmann, Fritz Behr, and Filip Radlinski. On Caption Bias in Interleaving Experiments. In Proceedings of the ACM Conference on Information and Knowledge Management (CIKM), 2012. (see the interleaving sketch after the transcript)
  19. On Caption Bias in Interleaving Experiments
  20. Fighting Search Engine Amnesia: Reranking Repeated Results In this paper, we observed that the same results are often shown to users multiple times during search sessions. We showed that there are a number of effects at play, which can be leveraged to improve information retrieval performance. In particular, previously skipped results are much less likely to be clicked, and previously clicked results may or may not be re-clicked depending on other factors of the session. Milad Shokouhi, Ryen W. White, Paul Bennett, and Filip Radlinski. Fighting search engine amnesia: reranking repeated results. In Proceedings of the 36th international ACM SIGIR conference on Research and development in information retrieval, SIGIR ’13, pages 273–282, New York, NY, USA, 2013. ACM. (see the session-aware reranking sketch after the transcript)
  21. Challenges
  22. Diversification
  23. Group recommendations
  24. Context-aware recommendations Location Time of day Mood Season Device
  25. Serious recommenders and search? Get in touch! @synopsitv @jsuchal
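
Ranking sketch (slide 9). The slide contrasts a search engine (input: query, output: list of results) with a recommendation engine (input: movie, output: list of similar movies). A minimal Python sketch, not from the talk, showing the shape the two problems share: score candidates against some context and return the top of the ranking. The scoring functions named in the comments are hypothetical placeholders.

from typing import Callable, Iterable, List


def rank(context, candidates: Iterable, score: Callable, k: int = 10) -> List:
    """Score every candidate against the context and return the k best."""
    return sorted(candidates, key=lambda c: score(context, c), reverse=True)[:k]

# Search engine: context is a text query, candidates are documents.
#   results = rank(query, documents, text_relevance_score)
# Recommender: context is a movie, candidates are the rest of the catalogue.
#   similar = rank(movie, catalogue, item_similarity_score)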
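
Metrics sketch (slide 14). Illustrative single-query implementations of the metrics listed on the slide; the "Mean" variants are simply these values averaged over many queries, and Expected Reciprocal Rank is omitted for brevity. These are textbook formulas, not code from the talk: `relevant` is the set of items the user actually wanted, `gains` maps items to graded relevance.

import math
from typing import Sequence, Set


def reciprocal_rank(ranking: Sequence, relevant: Set) -> float:
    """1 / rank of the first relevant result (0.0 if none is present)."""
    for i, item in enumerate(ranking, start=1):
        if item in relevant:
            return 1.0 / i
    return 0.0


def average_precision_at_n(ranking: Sequence, relevant: Set, n: int) -> float:
    """Average of precision@i over the positions of relevant results in the top n."""
    hits, precisions = 0, []
    for i, item in enumerate(ranking[:n], start=1):
        if item in relevant:
            hits += 1
            precisions.append(hits / i)
    return sum(precisions) / min(len(relevant), n) if relevant else 0.0


def ndcg_at_n(ranking: Sequence, gains: dict, n: int) -> float:
    """Normalized Discounted Cumulative Gain: graded gains, log-discounted by rank."""
    def dcg(items):
        return sum(gains.get(item, 0) / math.log2(i + 1)
                   for i, item in enumerate(items, start=1))
    ideal = sorted(gains, key=gains.get, reverse=True)[:n]
    return dcg(ranking[:n]) / dcg(ideal) if ideal else 0.0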
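
Preference-pair sketch (slide 15). The Joachims paper derives training data for a ranking function from click logs. A rough sketch of that idea, under the common assumption that a clicked result is preferred over the unclicked results ranked above it; the resulting pairs could then feed a pairwise learner such as a ranking SVM. Identifier names are illustrative, not from the paper.

from typing import List, Set, Tuple


def preference_pairs(ranking: List[str], clicked: Set[str]) -> List[Tuple[str, str]]:
    """Return (preferred, less_preferred) pairs implied by the clicks."""
    pairs = []
    for pos, doc in enumerate(ranking):
        if doc in clicked:
            # Unclicked documents ranked above a clicked one were examined
            # and skipped, so the clicked document is (probably) preferred.
            pairs.extend((doc, above) for above in ranking[:pos]
                         if above not in clicked)
    return pairs

# Example: the user clicked results 1 and 3 of a five-item ranking.
# preference_pairs(["d1", "d2", "d3", "d4", "d5"], {"d1", "d3"})
# -> [("d3", "d2")]   (d2 was seen and skipped, d3 was clicked)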
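
Interleaving sketch (slide 18). Interleaving experiments compare two rankers by blending their result lists into one and crediting clicks to whichever ranker contributed the clicked result; the cited paper studies how result captions bias those clicks. Below is a simplified team-draft interleaving variant, written for illustration and not taken from the paper.

import random
from typing import List, Tuple


def team_draft_interleave(ranking_a: List[str], ranking_b: List[str],
                          k: int = 10) -> List[Tuple[str, str]]:
    """Blend two rankings; each entry is (document, team that contributed it)."""
    lists = {"A": ranking_a, "B": ranking_b}
    pos = {"A": 0, "B": 0}      # next unread position in each ranking
    picks = {"A": 0, "B": 0}    # how many documents each team has placed
    result, used = [], set()
    while len(result) < k:
        teams = [t for t in ("A", "B") if pos[t] < len(lists[t])]
        if not teams:
            break
        # The team with fewer picks goes next; a coin flip breaks ties.
        team = min(teams, key=lambda t: (picks[t], random.random()))
        while pos[team] < len(lists[team]) and lists[team][pos[team]] in used:
            pos[team] += 1
        if pos[team] == len(lists[team]):
            continue  # this team has nothing new left to contribute
        doc = lists[team][pos[team]]
        result.append((doc, team))
        used.add(doc)
        picks[team] += 1
        pos[team] += 1
    return result

# Clicks on the interleaved list are credited to the team that supplied the
# clicked document; the ranker whose team collects more clicks wins the query.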
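
Session-aware reranking sketch (slide 20). A toy illustration of the observation quoted on the slide, that results a user already skipped earlier in the session are much less likely to be clicked again: a naive reranker that simply demotes such results. It is not the model from the paper, and the penalty factor is an arbitrary placeholder.

from typing import List, Set, Tuple


def rerank_repeated(scored: List[Tuple[str, float]],
                    skipped_before: Set[str],
                    penalty: float = 0.5) -> List[str]:
    """Demote documents the user skipped earlier in this session."""
    adjusted = [(score * penalty if doc in skipped_before else score, doc)
                for doc, score in scored]
    adjusted.sort(reverse=True)
    return [doc for _, doc in adjusted]

# Example: ("d2", 0.9) would normally win, but because the user skipped d2
# on a previous query it drops below ("d1", 0.6):
# rerank_repeated([("d1", 0.6), ("d2", 0.9)], {"d2"}) -> ["d1", "d2"]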
