Beyond search 
queries 
Jano Suchal 
searchd.co
Search 
as seen by developers 
{
  "query": {
    "query_string": {
      "query": "elasticsearch book"
    }
  }
}
return response.hits.hits
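For reference, a minimal runnable version of this developer view, sketched with the official Python client (elasticsearch-py 8.x); the host, index name and field values are assumptions, not from the slides:

from elasticsearch import Elasticsearch

# Hypothetical cluster and index; adjust to your setup.
es = Elasticsearch("http://localhost:9200")

response = es.search(
    index="books",
    query={"query_string": {"query": "elasticsearch book"}},
)

# The developer's view often ends here: hand back the raw hits.
hits = response["hits"]["hits"]
for hit in hits:
    print(hit["_score"], hit["_source"])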
Search 
as experienced by users 
query: elasticsarch
Typo in query. No results.
query: elasticsearch
Too many hits. Not relevant.
query: elasticsearch book
Click!
Success! Or is it?
Measuring 
search quality
Cpt. Obvious: 
“Hits, clicks and order 
do matter.”
Accurately interpreting clickthrough 
data as implicit feedback 
Thorsten Joachims, Laura Granka, Bing Pan, Helene Hembrooke, and Geri 
Gay. Accurately interpreting clickthrough data as implicit feedback. In 
Proceedings of the 28th annual international ACM SIGIR conference on 
Research and development in Information retrieval, SIGIR ’05, pages 154–161, 
New York, NY, USA, 2005. ACM.
Accurately interpreting clickthrough 
data as implicit feedback
Search quality metrics 
● Mean Average Precision @ N 
○ precision averaged over the relevant results in the top N, then over queries 
● Mean Reciprocal Rank 
○ 1 / rank of the target result, averaged over queries 
● Normalized Discounted Cumulative Gain 
● Expected Reciprocal Rank
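As a rough sketch (not from the slides), the per-query versions of these metrics can be computed from relevance judgements like this; the mean over all queries then gives MAP, MRR, and so on:

import math

def precision_at_n(rels, n):
    # Fraction of the top-N results judged relevant.
    return sum(rels[:n]) / n

def reciprocal_rank(rels):
    # 1 / rank of the first relevant result, 0 if there is none.
    for rank, rel in enumerate(rels, start=1):
        if rel:
            return 1.0 / rank
    return 0.0

def dcg(rels):
    return sum(rel / math.log2(rank + 1) for rank, rel in enumerate(rels, start=1))

def ndcg(rels):
    # DCG normalized by the DCG of the ideally ordered result list.
    ideal = dcg(sorted(rels, reverse=True))
    return dcg(rels) / ideal if ideal else 0.0

# Example: the target result sits at rank 3 of a five-item result list.
judged = [0, 0, 1, 0, 0]
print(precision_at_n(judged, 3), reciprocal_rank(judged), ndcg(judged))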
Search KPIs 
● CTR trend 
● # of queries w/o results or clicks 
● # of searches per session 
● Search engine latency
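These KPIs fall out of a plain search log. A sketch, assuming a hypothetical log format with session id, result count, clicks and latency per query:

log = [
    {"session": "s1", "query": "elasticsarch",       "n_results": 0,   "clicks": 0, "latency_ms": 12},
    {"session": "s1", "query": "elasticsearch",      "n_results": 950, "clicks": 0, "latency_ms": 35},
    {"session": "s1", "query": "elasticsearch book", "n_results": 14,  "clicks": 1, "latency_ms": 28},
]

total = len(log)
ctr = sum(r["clicks"] > 0 for r in log) / total
dead_ends = sum(r["n_results"] == 0 or r["clicks"] == 0 for r in log) / total
searches_per_session = total / len({r["session"] for r in log})
avg_latency_ms = sum(r["latency_ms"] for r in log) / total

print(f"CTR {ctr:.0%}, dead-end queries {dead_ends:.0%}, "
      f"{searches_per_session:.1f} searches/session, {avg_latency_ms:.0f} ms avg latency")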
Search quality 
optimization
Optimizing search engines using 
clickthrough data 
Thorsten Joachims. Optimizing search engines using clickthrough data. In 
Proceedings of the eighth ACM SIGKDD international conference on 
Knowledge discovery and data mining, KDD ’02, pages 133–142, New York, 
NY, USA, 2002. ACM.
Optimizing search engines using 
clickthrough data
Query chains: learning to rank from 
implicit feedback 
Filip Radlinski and Thorsten 
Joachims. Query chains: learning 
to rank from implicit feedback. In 
KDD ’05: Proceedings of the eleventh 
ACM SIGKDD international 
conference on Knowledge discovery 
in data mining, pages 239–248, 
New York, NY, USA, 2005. ACM.
Fighting Search Engine Amnesia: 
Reranking Repeated Results 
In this paper, we observed that the same results are often shown to 
users multiple times during search sessions. We showed that there are 
a number of effects at play, which can be leveraged to improve information 
retrieval performance. In particular, previously skipped results are much 
less likely to be clicked, and previously clicked results may or may not 
be re-clicked depending on other factors of the session. 
Milad Shokouhi, Ryen W. White, Paul Bennett, and Filip Radlinski. Fighting 
search engine amnesia: reranking repeated results. In Proceedings of the 
36th international ACM SIGIR conference on Research and development in 
information retrieval, SIGIR ’13, pages 273–282, New York, NY, USA, 2013. 
ACM.
searchd.co 
Search Analytics
searchd.co dashboard
A/B testing
A/B testing lists
A/B testing 
A B
A/B testing with interleaving 
A B
Interleaving & scoring 
● Balanced 
● Team Draft 
● Probabilistic 
● Binary preference 
● Linear rank difference 
● Inverse rank difference
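For illustration, a small sketch of Team Draft interleaving with binary-preference scoring; the document ids and the scoring helper are assumptions for the example, not the searchd.co implementation:

import random

def team_draft_interleave(a, b):
    # Mix two rankings; remember which ranker ("team") contributed each result.
    interleaved, teams = [], {}
    ia = ib = count_a = count_b = 0
    while ia < len(a) or ib < len(b):
        # The team with fewer contributions picks next; ties are broken by a coin flip.
        pick_a = count_a < count_b or (count_a == count_b and random.random() < 0.5)
        if pick_a and ia < len(a):
            doc, ia, team = a[ia], ia + 1, "A"
        elif ib < len(b):
            doc, ib, team = b[ib], ib + 1, "B"
        else:
            doc, ia, team = a[ia], ia + 1, "A"
        if doc not in teams:            # skip results already in the mixed list
            teams[doc] = team
            interleaved.append(doc)
            count_a += team == "A"
            count_b += team == "B"
    return interleaved, teams

def binary_preference(teams, clicked):
    # Credit each click to the team that contributed the clicked result.
    a = sum(teams.get(d) == "A" for d in clicked)
    b = sum(teams.get(d) == "B" for d in clicked)
    return "A wins" if a > b else "B wins" if b > a else "tie"

ranking_a = ["d1", "d2", "d3", "d4"]
ranking_b = ["d3", "d1", "d5", "d6"]
mixed, teams = team_draft_interleave(ranking_a, ranking_b)
print(mixed)
print(binary_preference(teams, clicked=["d5"]))   # d5 can only come from ranker B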
A/B testing with interleaving
A/B testing with interleaving
A/B testing with interleaving 
● Pros 
○ Lower risk of losing conversions 
● Cons 
○ Harder to interpret 
○ Harder to implement
searchd.co 
Search Analytics 
● Identify and fix key search problems 
● KPIs for site search 
● Actionable tips for search tuning 
● Safe A/B testing 
● Easy setup 
● In Beta, sending out invites
A bad search experience is a lost 
opportunity. Let's fix it. 
searchd.co 
Search Analytics 
www.searchd.co 
info@searchd.co
