Search quality evaluation is an evergreen topic that every search engineer ordinarily struggles with. Improving the correctness and effectiveness of a search system requires a set of tools that help measure the direction in which the system is going.
The slides focus on how a search quality evaluation tool can be seen from a practical developer's perspective, how it can be used to produce a deliverable artifact, and how it can be integrated within a continuous integration infrastructure.
What the Rated Ranking Evaluator is and how to use it (for both Software Engineers and IT Managers). Talk given during the Chorus Workshops at Plainschwarz Salon.
In the last few years, Artificial Intelligence applications have become more and more sophisticated and often operate as algorithmic “black boxes” for decision-making. As a result, some questions naturally arise when working with these models: why should we trust a certain decision taken by these algorithms? Why and how was this prediction made? Which variables most influenced the prediction? The most crucial challenge with complex machine learning models is therefore their interpretability and explainability. This talk aims to give an overview of the most popular explainability techniques and their application in Learning to Rank. In particular, we will examine in depth a powerful library called SHAP, with both theoretical and practical insights; we will cover its tools for explaining model behaviour, especially how each feature impacts the model's output, and we will explain how to interpret the results in a Learning to Rank scenario.
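As a hedged illustration of the kind of analysis discussed (not code from the talk), the sketch below applies SHAP's TreeExplainer to a toy LambdaMART-style ranker; the use of LightGBM, the feature names and the random data are assumptions made purely for the example.

# Hedged sketch: explaining a tree-based Learning to Rank model with SHAP.
# LightGBM, the feature names and the random data are placeholders.
import lightgbm as lgb
import numpy as np
import pandas as pd
import shap

# Toy feature matrix: one row per (query, document) pair.
rng = np.random.default_rng(42)
X = pd.DataFrame({
    "bm25_title": rng.random(100),
    "bm25_body": rng.random(100),
    "popularity": rng.random(100),
})
y = rng.integers(0, 4, size=100)    # graded relevance labels 0..3
groups = [10] * 10                  # 10 queries with 10 documents each

model = lgb.LGBMRanker(objective="lambdarank", n_estimators=50)
model.fit(X, y, group=groups)

# TreeExplainer attributes each predicted ranking score to the input features,
# showing which features pushed a document up or down.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)

shap.summary_plot(shap_values, X)                     # global feature impact
print(dict(zip(X.columns, shap_values[0].round(3))))  # one document's breakdown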
A Learning to Rank Project on a Daily Song Ranking Problem - Sease
Ranking data, i.e., ordered lists of items, naturally appear in a wide variety of situations; understanding how to adapt a specific dataset and design the best approach to solve a ranking problem in a real-world scenario is therefore crucial. This talk aims to illustrate how to set up and build a Learning to Rank (LTR) project, starting from the available data, in our case a Spotify dataset (available on Kaggle) on the Worldwide Daily Song Ranking, and ending with the implementation of a ranking model. A step-by-step (phased) approach to this task using open source libraries will be presented. We will examine in depth the most important part of the pipeline, the data preprocessing, and in particular how to model and manipulate the features in order to create the proper input dataset, tailored to the machine learning algorithm's requirements.
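As a hedged sketch of the preprocessing step described above (the talk's own pipeline may differ), the snippet below shapes the daily-chart CSV into query groups, graded labels and a few engineered features; the column names (Position, Streams, Artist, Region, Date) are assumptions about the Kaggle file and the thresholds are invented.

# Hedged sketch: shaping the Kaggle "Spotify Worldwide Daily Song Ranking" CSV
# into (query group, label, features) rows for an LTR library.
# Column names below are assumptions about the dataset layout.
import numpy as np
import pandas as pd

df = pd.read_csv("spotify_daily_ranking.csv")

# A "query" here is one chart, i.e. a (Region, Date) pair.
df["query_id"] = df["Region"].astype(str) + "_" + df["Date"].astype(str)

# Graded relevance label: better chart positions get higher grades.
def position_to_label(position: int) -> int:
    if position <= 10:
        return 3
    if position <= 50:
        return 2
    if position <= 100:
        return 1
    return 0

df["label"] = df["Position"].apply(position_to_label)

# Feature manipulation: numeric features are rescaled, a high-cardinality
# categorical such as Artist is frequency-encoded instead of one-hot encoded.
df["log_streams"] = np.log1p(df["Streams"])
df["artist_freq"] = df["Artist"].map(df["Artist"].value_counts(normalize=True))

features = ["log_streams", "artist_freq"]
dataset = df[["query_id", "label"] + features].sort_values("query_id")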
Search Quality Evaluation to Help Reproducibility: an Open Source Approach - Alessandro Benedetti
Every information retrieval practitioner ordinarily struggles with the task of evaluating how well a search engine is performing and of reproducing the performance achieved at a specific point in time.
Improving the correctness and effectiveness of a search system requires a set of tools that help measure the direction in which the system is going.
Additionally, it is extremely important to track the evolution of the search system over time and to be able to reproduce and measure the same performance (through metrics of interest such as precision@k, recall, NDCG@k, ...).
The talk describes the Rated Ranking Evaluator from a researcher's and software engineer's perspective.
RRE is an open source search quality evaluation tool that can be used to produce a set of reports about the quality of a system, iteration after iteration, and that can be integrated within a continuous integration infrastructure to monitor quality metrics after each release.
The focus of the talk is to raise public awareness of search quality evaluation and reproducibility, describing how RRE can help the industry.
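The metrics mentioned above (precision@k, recall, NDCG@k) are standard offline measures; as a small hedged illustration, independent of RRE's actual implementation, they can be computed from a ranked result list and a set of graded judgments as follows.

# Minimal sketch of two of the offline metrics mentioned above (not RRE's implementation).
import math

def precision_at_k(ranked_ids, relevant_ids, k):
    """Fraction of the top-k results that are judged relevant."""
    return sum(1 for doc in ranked_ids[:k] if doc in relevant_ids) / k

def ndcg_at_k(ranked_ids, grades, k):
    """grades maps doc id -> graded relevance judgment (0 = not relevant)."""
    def dcg(ids):
        return sum(grades.get(doc, 0) / math.log2(i + 2) for i, doc in enumerate(ids[:k]))
    ideal = sorted(grades, key=grades.get, reverse=True)
    ideal_dcg = dcg(ideal)
    return dcg(ranked_ids) / ideal_dcg if ideal_dcg > 0 else 0.0

ranked = ["d3", "d1", "d7", "d2", "d9"]          # system output for one query
judgments = {"d1": 3, "d2": 2, "d5": 1}          # rated documents for that query
print(precision_at_k(ranked, set(judgments), 5)) # 0.4
print(ndcg_at_k(ranked, judgments, 5))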
How to Build your Training Set for a Learning To Rank Project - Haystack - Sease
Presented by Alessandro Benedetti of Sease. Learning to Rank (LTR) is the application of machine learning techniques (typically supervised) to the formulation of ranking models for information retrieval systems.
With LTR becoming more and more popular, organizations struggle with the problem of how to collect and structure the relevance signals necessary to train their ranking models.
This talk is a technical guide to exploring and mastering various techniques to generate your training set(s) correctly and efficiently.
Expect to learn how to:
- model and collect the necessary feedback from the users (implicit or explicit)
- calculate for each training sample a relevance label that is meaningful and not ambiguous (Click Through Rate, Sales Rate, ...)
- transform the raw data collected into an effective training set, in the numerical vector format most LTR training libraries expect (a sketch follows below)
Join us as we explore real-world scenarios and dos and don'ts from the e-commerce industry.
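As a hedged sketch of the last point above (the exact pipeline presented in the talk may differ), the snippet below writes aggregated (query, document) samples in the LibSVM/RankLib-style numeric format that most LTR training libraries accept; the sample rows and feature values are invented for illustration.

# Hedged sketch: writing "<label> qid:<id> <index>:<value> ..." lines, the numeric
# vector format understood by most LTR libraries (RankLib, XGBoost, LightGBM, ...).
# The already-aggregated samples below are invented for illustration.
samples = [
    # (query id, relevance label, feature vector)
    ("q1", 3, [12.4, 0.81, 1.0]),
    ("q1", 0, [3.2, 0.10, 0.0]),
    ("q2", 2, [7.7, 0.55, 1.0]),
]

with open("training_set.txt", "w") as out:
    for query_id, label, features in samples:
        qid = query_id.lstrip("q")                           # libraries expect a numeric qid
        feats = " ".join(f"{i + 1}:{value}" for i, value in enumerate(features))
        out.write(f"{label} qid:{qid} {feats}\n")

# Produces lines such as:
# 3 qid:1 1:12.4 2:0.81 3:1.0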
Interactive Questions and Answers - London Information Retrieval Meetup - Sease
Answers to some questions about Natural Language Search, Language Modelling (Google BERT, OpenAI GPT-3), Neural Search, and Learning to Rank from our London Information Retrieval Meetup (December).
Evaluating Your Learning to Rank Model: Dos and Don’ts in Offline/Online Eval... - Sease
For more details:
https://sease.io/2020/04/the-importance-of-online-testing-in-learning-to-rank-part-1.html
https://sease.io/2020/05/online-testing-for-learning-to-rank-interleaving.html
Learning to rank (LTR from now on) is the application of machine learning techniques, typically supervised, to the formulation of ranking models for information retrieval systems.
With LTR becoming more and more popular (Apache Solr has supported it since Jan 2017, and Elasticsearch has an open source plugin released in 2018), organizations struggle with the problem of how to evaluate the quality of the models they train.
This talk explores all the major points of both Offline and Online evaluation.
Setting up correct infrastructures and processes for a fair and effective evaluation of the trained models is vital for measuring the improvements/regressions of an LTR system.
The talk is intended for:
– Product Owners, Search Managers, Business Owners
– Software Engineers, Data Scientists, and Machine Learning Enthusiasts
Expect to learn:
– the importance of Offline testing from a business perspective
– how Offline testing can be done with Open Source libraries
– how to build a realistic test set from the original input data set, avoiding common mistakes in the process
– the importance of Online testing from a business perspective
– A/B testing and Interleaving approaches: details and Pros/Cons (a minimal interleaving sketch follows this abstract)
– common mistakes and how they can distort the obtained results
Join us as we explore real-world scenarios and dos and don’ts from the e-commerce industry!
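As a hedged, simplified sketch (not taken from the talk material), team-draft interleaving, one of the interleaving approaches mentioned above, can be prototyped roughly as follows; tie-breaking, randomization details and click modelling are simplified.

# Simplified team-draft interleaving sketch: merge rankings A and B into one list,
# remember which "team" contributed each result, and credit clicks to the teams.
import random

def team_draft_interleave(ranking_a, ranking_b):
    interleaved, team_of = [], {}
    a, b = list(ranking_a), list(ranking_b)
    while a or b:
        # Coin toss decides which team drafts first in each round.
        for team, ranking in sorted((("A", a), ("B", b)), key=lambda _: random.random()):
            while ranking and ranking[0] in team_of:
                ranking.pop(0)                 # skip documents already placed
            if ranking:
                doc = ranking.pop(0)
                interleaved.append(doc)
                team_of[doc] = team
    return interleaved, team_of

def credit_clicks(clicked_docs, team_of):
    wins = {"A": 0, "B": 0}
    for doc in clicked_docs:
        if doc in team_of:
            wins[team_of[doc]] += 1
    return wins                                # more credited clicks -> preferred ranker

merged, teams = team_draft_interleave(["d1", "d2", "d3"], ["d3", "d4", "d1"])
print(merged)
print(credit_clicks(["d3", "d4"], teams))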
Rated Ranking Evaluator Enterprise: the next generation of free Search Qualit... - Sease
RRE is an open-source search quality evaluation tool that can be used to produce a set of reports about the quality of a system, iteration after iteration, and that can be integrated within a continuous integration infrastructure to monitor quality metrics after each release.
Many aspects remained problematic, though:
– how to directly evaluate a middle-layer search API that communicates with Apache Solr or Elasticsearch?
– how to easily generate explicit and implicit ratings without spending hours on tedious JSON files?
– how to better explore the evaluation results, with nice widgets and interesting insights?
Rated Ranking Evaluator Enterprise solves these problems and much more.
Join us as we introduce the next generation of open-source search quality evaluation tools, exploring the internals and real-world scenarios!
How to Build your Training Set for a Learning To Rank Project - Sease
Learning to rank (LTR from now on) is the application of machine learning techniques, typically supervised, to the formulation of ranking models for information retrieval systems.
With LTR becoming more and more popular (Apache Solr has supported it since Jan 2017), organisations struggle with the problem of how to collect and structure the relevance signals necessary to train their ranking models.
This talk is a technical guide to explore and master various techniques to generate your training set(s) correctly and efficiently.
Expect to learn how to:
– model and collect the necessary feedback from the users (implicit or explicit)
– calculate for each training sample a relevance label that is meaningful and not ambiguous (Click Through Rate, Sales Rate, ...; see the sketch below)
– transform the raw data collected into an effective training set (in the numerical vector format most LTR training libraries expect)
Join us as we explore real world scenarios and dos and don’ts from the e-commerce industry.
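As a hedged example of the relevance-label bullet above (the talk's own recipe may differ), per-query/document click-through rate can be bucketed into graded labels, with a minimum-impressions guard against noisy estimates; the counts and thresholds below are invented.

# Hedged sketch: turning raw interaction counts into graded relevance labels.
# (query, doc) -> (impressions, clicks); thresholds and counts are invented.
interactions = {
    ("shoes", "doc1"): (500, 90),
    ("shoes", "doc2"): (480, 12),
    ("shoes", "doc3"): (4, 3),       # too few impressions to trust
}

MIN_IMPRESSIONS = 20

def ctr_to_label(impressions, clicks):
    if impressions < MIN_IMPRESSIONS:
        return None                   # drop ambiguous samples instead of guessing
    ctr = clicks / impressions
    if ctr >= 0.15:
        return 3                      # highly relevant
    if ctr >= 0.05:
        return 2
    if ctr > 0.0:
        return 1
    return 0

labels = {key: ctr_to_label(imp, clk) for key, (imp, clk) in interactions.items()}
print(labels)   # {('shoes', 'doc1'): 3, ('shoes', 'doc2'): 1, ('shoes', 'doc3'): None}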
Haystack London - Search Quality Evaluation, Tools and Techniques - Andrea Gazzarini
Every search engineer ordinarily struggles with the task of evaluating how well a search engine is performing. Improving the correctness and effectiveness of a search system requires a set of tools that help measure the direction in which the system is going. The talk describes the Rated Ranking Evaluator from a developer's perspective. RRE is an open source search quality evaluation tool that can be used to produce a set of deliverable reports and that can be integrated within a continuous integration infrastructure.
From Academic Papers To Production: A Learning To Rank Story - Alessandro Benedetti
This talk is about the journey of bringing Learning To Rank (LTR from now on) to the e-commerce domain in a real-world scenario, including all the pitfalls and disillusions involved.
LTR is a fantastic approach to solving complex ranking problems, but industry domains are far from the ideal world where those technologies were designed and experimented with: open source implementations do not work perfectly out of the box and require advanced tuning, and industry training data is dirty, noisy and incomplete.
This talk will guide you through the different phases and technologies involved in an LTR project with a pragmatic approach.
Feature Engineering, Domain Modelling, Training Set Building, Model Training, Search Integration and Online Evaluation: each of these presents different challenges in the real world and must be carefully approached.
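To make the Model Training phase concrete, here is a hedged sketch using LightGBM's lambdarank objective; the library choice, the random feature matrix and the parameters are our own illustration, not necessarily what the project described here used.

# Hedged sketch of the "Model Training" phase with LightGBM's lambdarank objective.
# X, y and groups are assumed to come out of the training-set building phase.
import numpy as np
import lightgbm as lgb

rng = np.random.default_rng(0)
X = rng.random((300, 5))                        # 300 (query, doc) samples, 5 features
y = rng.integers(0, 4, size=300)                # graded labels 0..3
groups = [30] * 10                              # 10 queries with 30 documents each

# Hold out the last two queries for validation.
train_X, valid_X = X[:240], X[240:]
train_y, valid_y = y[:240], y[240:]
train_groups, valid_groups = groups[:8], groups[8:]

ranker = lgb.LGBMRanker(objective="lambdarank", n_estimators=200, learning_rate=0.05)
ranker.fit(
    train_X, train_y,
    group=train_groups,
    eval_set=[(valid_X, valid_y)],
    eval_group=[valid_groups],
    eval_at=[10],                               # report NDCG@10 on the held-out queries
    callbacks=[lgb.early_stopping(stopping_rounds=20)],
)

# Scores are per document; the actual ranking is obtained by sorting within each query.
scores = ranker.predict(valid_X)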
The More Like This search functionality is a key feature of Apache Lucene that allows finding documents similar to an input one (free text or an existing document). Although widely used, it is rarely explored in depth, so this presentation will start by introducing how MLT works internally. The focus of the talk is to improve the general understanding of MLT and the ways you can benefit from it. Building on the introduction, the focus will be on the BM25 text similarity function and how it has been (tentatively) included in MLT through an extensive refactoring and testing process, to improve the identification of the most interesting terms from the input that can drive the similarity search. The presentation will include real-world usage examples, proposed patches, pending contributions and future developments such as improved query building through positional phrase queries.
Rated Ranking Evaluator: An Open Source Approach for Search Quality Evaluation - Alessandro Benedetti
Every team working on Information Retrieval software struggles with the task of evaluating how well their system performs in terms of search quality (at a specific point in time and historically).
Evaluating search quality is important both to understand and size the improvement or regression of your search application across the development cycles, and to communicate such progress to relevant stakeholders.
To satisfy these requirements, a helpful tool must be:
- flexible and highly configurable for a technical user
- immediate, visual and concise for optimal business utilization
In the industry, and especially in the open source community, the landscape is quite fragmented: such requirements are often met using ad-hoc partial solutions that each time require a considerable amount of development and customization effort.
To provide a standard, unified and approachable technology, we developed the Rated Ranking Evaluator (RRE), an open source tool for evaluating and measuring the search quality of a given search infrastructure. RRE is modular, compatible with multiple search technologies and easy to extend. It is composed of a core library and a set of modules and plugins that give it the flexibility to be integrated in automated evaluation processes and in continuous integration flows.
This talk will introduce RRE, describe its latest developments and demonstrate how it can be integrated in a project to measure and assess the search quality of your search application.
The focus of the presentation will be on a live demo showing an example project with a set of initial relevancy issues that we will solve iteration after iteration, using RRE's output feedback to gradually drive the improvement process until we reach an optimal balance between quality evaluation measures.
Whether your core domain involves real-world entities (such as hotels, restaurants, cars, ...) or text documents, searching for items similar to a given input is a very common use case for most systems that involve information retrieval. This presentation will start by describing how widespread this problem is across a variety of different scenarios and how you can use the More Like This feature of the Apache Lucene library to solve it. Building on the introduction, the focus will be on how the More Like This module works internally, all the components involved end to end, the BM25 text similarity metric and how it has been included through an extensive refactoring and testing process. The presentation will include real-world usage examples and future developments such as improved query building through positional phrase queries and term relevancy scoring pluggability.
Let's Build an Inverted Index: Introduction to Apache Lucene/Solr - Sease
The University Seminar series aims to provide a basic understanding of Open Source Information Retrieval and its application in the real world through the Apache Lucene/Solr technologies.
Entity Search on Virtual Documents Created with Graph Embeddings - Sease
Entity Search is a search paradigm that aims to retrieve entities and all the information related to them. In the last few years the importance of this topic has grown greater and greater, due to the fact that nowadays 40% of the queries made by users mention specific entities.
This talk gives a first overview of the state-of-the-art methods used for entity retrieval and then describes the new approach Anna has implemented and proposed in her master's thesis. The novelty introduced with this work exploits two machine learning techniques: neural networks and clustering.
This presentation will start by introducing how Apache Lucene can be used to classify documents using data structures that already exist in your index, instead of having to generate and supply external training sets. Building on the introduction, the focus will be on the extensions of the Lucene Classification module that arrived in Lucene 6.0 and on the module's incorporation into Solr 6.1. These extensions allow you to classify at a document level with individual field weighting, numeric field support, lat/lon fields, etc. The Solr ClassificationUpdateProcessor will be explored: how it works and how to use it, including basic and advanced features like multi-class support and classification context filtering. The presentation will include practical examples and real-world use cases.
Feature Extraction for Large-Scale Text Collections - Sease
Feature engineering is a fundamental but poorly documented component in LTR search applications.
As a result, there are still few open access software packages that allow researchers and practitioners to easily simulate a feature extraction pipeline and conduct experiments in a lab setting.
This talk introduces Fxt, an open-source framework to perform efficient and scalable feature extraction. Fxt may be integrated into complex, high-performance software applications to help solve a wide variety of text-based machine learning problems.
The talk details how we built and documented a reproducible feature extraction pipeline with LTR experiments using the ClueWeb09B collection.
This LTR dataset is publicly available.
We’ll also discuss some of the benefits (feature extraction efficiency, model interpretation) of having open access tooling in this area for researchers and practitioners alike.
Semantic & Multilingual Strategies in Lucene/Solr - Trey Grainger
When searching on text, choosing the right CharFilters, Tokenizer, stemmers, and other TokenFilters for each supported language is critical. Additional tools of the trade include language detection through UpdateRequestProcessors, parts of speech analysis, entity extraction, stopword and synonym lists, relevancy differentiation for exact vs. stemmed vs. conceptual matches, and identification of statistically interesting phrases per language. For multilingual search, you also need to choose between several strategies such as: searching across multiple fields, using a separate collection per language combination, or combining multiple languages in a single field (custom code is required for this and will be open sourced). These all have their own strengths and weaknesses depending upon your use case. This talk will provide a tutorial (with code examples) on how to pull off each of these strategies as well as compare and contrast the different kinds of stemmers, review the precision/recall impact of stemming vs. lemmatization, and describe some techniques for extracting meaningful relationships between terms to power a semantic search experience per-language. Come learn how to build an excellent semantic and multilingual search system using the best tools and techniques Lucene/Solr has to offer!
Building Search & Recommendation Engines - Trey Grainger
In this talk, you'll learn how to build your own search and recommendation engine based on the open source Apache Lucene/Solr project. We'll dive into some of the data science behind how search engines work, covering multi-lingual text analysis, natural language processing, relevancy ranking algorithms, knowledge graphs, reflected intelligence, collaborative filtering, and other machine learning techniques used to drive relevant results for free-text queries. We'll also demonstrate how to build a recommendation engine leveraging the same platform and techniques that power search for most of the world's top companies. You'll walk away from this presentation with the toolbox you need to go and implement your very own search-based product using your own data.
Webinar: Simpler Semantic Search with Solr - Lucidworks
Hear from Lucidworks Senior Solutions Consultant Ted Sullivan about how you can leverage Apache Solr and Lucidworks Fusion to improve semantic awareness of your search applications.
Search Quality Evaluation: a Developer Perspective - Sease
Search quality evaluation is an evergreen topic that every search engineer ordinarily struggles with. Improving the correctness and effectiveness of a search system requires a set of tools that help measure the direction in which the system is going.
The slides focus on how a search quality evaluation tool can be seen from a practical developer's perspective, how it can be used to produce a deliverable artifact, and how it can be integrated within a continuous integration infrastructure.
Every search engineer ordinarily struggles with the task of evaluating how well a search engine is performing. Improving the correctness and effectiveness of a search system requires a set of tools that help measure the direction in which the system is going. The talk describes the Rated Ranking Evaluator from a developer's perspective. RRE is an open source search quality evaluation tool that can be used to produce a set of deliverable reports and that can be integrated within a continuous integration infrastructure.
Every team working on information retrieval software struggles with the task of evaluating how well their system performs in terms of search quality (currently and historically). Evaluating search quality is important both to understand and size the improvement or regression of your search application across the development cycles, and to communicate such progress to relevant stakeholders. In the industry, and especially in the open source community, the landscape is quite fragmented: such requirements are often met using ad-hoc partial solutions that each time require a considerable amount of development and customization effort. To provide a standard, unified and approachable technology, we developed the Rated Ranking Evaluator (RRE), an open source tool for evaluating and measuring the search quality of a given search infrastructure. RRE is modular, compatible with multiple search technologies and easy to extend.
Rated Ranking Evaluator: An Open Source Approach for Search Quality Evaluation - Alessandro Benedetti
Every team working on information retrieval software struggles with the task of evaluating how well their system performs in terms of search quality (currently and historically). Evaluating search quality is important both to understand and size the improvement or regression of your search application across the development cycles, and to communicate such progress to relevant stakeholders. To satisfy these requirements, a helpful tool must be:
- flexible and highly configurable for a technical user
- immediate, visual and concise for optimal business utilization
In the industry, and especially in the open source community, the landscape is quite fragmented: such requirements are often met using ad-hoc partial solutions that each time require a considerable amount of development and customization effort. To provide a standard, unified and approachable technology, we developed the Rated Ranking Evaluator (RRE), an open source tool for evaluating and measuring the search quality of a given search infrastructure. RRE is modular, compatible with multiple search technologies and easy to extend. It is composed of a core library and a set of modules and plugins that give it the flexibility to be integrated in automated evaluation processes and in continuous integration flows. This talk will introduce RRE, describe its functionalities and demonstrate how it can be integrated in a project and how it can help measure and assess the search quality of your search application. The focus of the presentation will be on a live demo showing an example project with a set of initial relevancy issues that we will solve iteration after iteration, using RRE's output feedback to gradually drive the improvement process until we reach an optimal balance between quality evaluation measures.
Search Quality Evaluation to Help Reproducibility: An Open-source Approach - Alessandro Benedetti
Every information retrieval practitioner ordinarily struggles with the task of evaluating how well a search engine is performing and of reproducing the performance achieved at a specific point in time.
Improving the correctness and effectiveness of a search system requires a set of tools that help measure the direction in which the system is going.
Additionally, it is extremely important to track the evolution of the search system over time and to be able to reproduce and measure the same performance (through metrics of interest such as precision@k, recall, NDCG@k, ...).
The talk describes the Rated Ranking Evaluator from a researcher's and software engineer's perspective.
RRE is an open source search quality evaluation tool that can be used to produce a set of reports about the quality of a system, iteration after iteration, and that can be integrated within a continuous integration infrastructure to monitor quality metrics after each release.
The focus of the talk is to raise public awareness of search quality evaluation and reproducibility, describing how RRE can help the industry.
Rated Ranking Evaluator: an Open Source Approach for Search Quality Evaluation - Sease
To provide a standard, unified and approachable technology, we developed the Rated Ranking Evaluator (RRE), an open source tool for evaluating and measuring the search quality of a given search infrastructure. RRE is modular, compatible with multiple search technologies and easy to extend. It is composed of a core library and a set of modules and plugins that give it the flexibility to be integrated in automated evaluation processes and in continuous integration flows.
This talk will introduce RRE, describe its latest developments and demonstrate how it can be integrated in a project to measure and assess the search quality of your search application.
Haystack 2019 - Rated Ranking Evaluator: an Open Source Approach for Search Q... - OpenSource Connections
Every team working on Information Retrieval software struggles with the task of evaluating how well their system performs in terms of search quality (at a specific point in time and historically).
Evaluating search quality is important both to understand and size the improvement or regression of your search application across the development cycles, and to communicate such progress to relevant stakeholders.
To satisfy these requirements, a helpful tool must be:
- flexible and highly configurable for a technical user
- immediate, visual and concise for optimal business utilization
In the industry, and especially in the open source community, the landscape is quite fragmented: such requirements are often met using ad-hoc partial solutions that each time require a considerable amount of development and customization effort.
To provide a standard, unified and approachable technology, we developed the Rated Ranking Evaluator (RRE), an open source tool for evaluating and measuring the search quality of a given search infrastructure. RRE is modular, compatible with multiple search technologies and easy to extend. It is composed of a core library and a set of modules and plugins that give it the flexibility to be integrated in automated evaluation processes and in continuous integration flows.
This talk will introduce RRE, describe its latest developments and demonstrate how it can be integrated in a project to measure and assess the search quality of your search application.
The focus of the presentation will be on a live demo showing an example project with a set of initial relevancy issues that we will solve iteration after iteration, using RRE's output feedback to gradually drive the improvement process until we reach an optimal balance between quality evaluation measures.
Measure performance of the application using open source performance testing... - BugRaptors
BugRaptors knows that the performance of any product is a key component of that project. To measure the performance of a project, we have an expert team of performance testers who carry out testing using various performance testing tools, such as JMeter, LoadUI and Selenium WebDriver, and deliver a project with good performance according to the requirements and the end user's perspective.
Building multi-billion (dollars, users, documents) search engines on open ... - Andrei Lopatenko
How to use open source technologies to build search engines for billions of users, billions in revenue, and billions of documents.
Keynote talk at The 16th International Conference on Open Source Systems.
LSP (Logic Score Preference) - Rajan Dhabalia, San Francisco State University - dhabalia
Software quality analysis is a measure of the properties of a piece of software or of its specifications. Direct measurement of software quality is quite difficult due to the lack of quality-factor measurements. To resolve this measurement problem, there is a model which measures the quality of the software in terms of attributes, specifications and characteristics. This model is known as LSP (Logic Score Preference). When a client gives the specifications of the software to the developer, the client expects good quality software from the developers. Hence, to decide the quality of software, we can use the LSP model.
This model validates the following software quality attributes:
(1) Functionality: Suitability, Accuracy, Security, Interoperability, Compliance
(2) Usability: Understandability, Learnability, Operability
(3) Performance: Processing time, Throughput, Resource consumption
(4) Maintainability
(5) Portability
(6) Reusability
In LSP, the features are decomposed into the aggregation blocks above, and this decomposition continues within each block until all the lowest-level features are directly measurable, forming a tree of decomposed features. For each feature an elementary criterion is defined. LSP calculates an elementary preference for each criterion and then aggregates all of them to calculate a final global preference. This global preference represents the quality of the software. We can calculate the global preference for different systems and thus analyze and compare the systems' quality.
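As a rough, hedged illustration of that aggregation step (not taken from the text above), a global preference can be computed as a weighted power mean of elementary preferences; the criteria, weights and exponent below are invented, and real LSP aggregation structures are more elaborate.

# Hedged sketch: aggregating elementary preferences into a global preference.
# LSP typically uses weighted power means whose exponent r controls how
# "and-like" or "or-like" the aggregation is; all values here are illustrative.
def elementary_preference(value, worst, best):
    """Map a measured attribute value onto a 0..1 preference scale."""
    if best == worst:
        return 1.0
    score = (value - worst) / (best - worst)
    return max(0.0, min(1.0, score))

def weighted_power_mean(preferences, weights, r=0.5):
    """Aggregate elementary preferences; r < 1 leans towards simultaneity."""
    total = sum(w * (p ** r) for p, w in zip(preferences, weights))
    return total ** (1.0 / r)

# Two invented elementary criteria: processing time (lower is better) and throughput.
prefs = [
    elementary_preference(120, worst=500, best=50),    # processing time in ms
    elementary_preference(800, worst=100, best=1000),  # throughput in req/s
]
weights = [0.6, 0.4]
print(weighted_power_mean(prefs, weights))   # global preference in 0..1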
Performance Evaluation of Open Source Data Mining Tools - ijsrd.com
This is an attempt at evaluating open source data mining tools. Initially the paper deliberates on what can and cannot be the focus of inquiry for the evaluation. It then outlines the framework under which the evaluation is to be done and defines the performance criteria to be measured. The tool selection strategy for the study is framed using various online resources, and tools are selected based on it. A table lists the different sets of criteria and the findings for each tool against them. After capturing the findings of the study in tabular fashion, a framework implementation strategy is laid out, detailing the relative scaling for the evaluation. Based on the scores, a concluding remark with some suggestions summarizes the findings of the study. Lastly, some assumptions/limitations are discussed.
Automating Speed: A Proven Approach to Preventing Performance Regressions in ... - HostedbyConfluent
"Regular performance testing is one of the pillars of Kafka Streams’ reliability and efficiency. Beyond ensuring dependable releases, regular performance testing supports engineers in new feature development with the ability to easily test the performance impact of their features, compare different approaches, etc.
In this session, Alex and John share their experience from developing, using, and maintaining a performance testing framework for Kafka Streams that has prevented multiple performance regressions over the last 5 years. They cover guiding principles and architecture, how to ensure statistical significance and stability of results, and how to automate regression detection for actionable notifications.
This talk sheds light on how Apache Kafka is able to foster a vibrant open-source community while maintaining a high performance bar across many years and releases. It also empowers performance-minded engineers to avoid common pitfalls and bring high-quality performance testing to their own systems."
Tutorial given at the European Conference on Machine Learning (ECML PKDD 2015). It covers OpenML, how to use it in your research, interfaces in Java, R and Python, and its use through machine learning tools such as WEKA and MOA. It also covers topics in open science and reproducible research.
Explore our comprehensive data analysis project presentation on predicting product ad campaign performance. Learn how data-driven insights can optimize your marketing strategies and enhance campaign effectiveness. Perfect for professionals and students looking to understand the power of data analysis in advertising. For more details visit: https://bostoninstituteofanalytics.org/data-science-and-artificial-intelligence/
Techniques to optimize the PageRank algorithm usually fall into two categories: one tries to reduce the work per iteration, and the other tries to reduce the number of iterations. These goals are often at odds with one another. Skipping computation on vertices which have already converged has the potential to save iteration time. Skipping in-identical vertices, i.e. vertices with the same in-links, helps reduce duplicate computations and thus can reduce iteration time. Road networks often have chains which can be short-circuited before PageRank computation to improve performance; the final ranks of chain nodes can then be easily calculated. This can reduce both the iteration time and the number of iterations. If a graph has no dangling nodes, the PageRank of each strongly connected component can be computed in topological order, which can reduce the iteration time and the number of iterations, and also enable multi-iteration concurrency in the PageRank computation. The combination of all of the above methods is the STICD algorithm [sticd]. For dynamic graphs, unchanged components whose ranks are unaffected can be skipped altogether.
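As a hedged sketch of just one of the optimizations mentioned (skipping computation on already-converged vertices), the snippet below runs power-iteration PageRank over a small adjacency list and stops updating vertices whose rank change falls below a tolerance; chains, in-identical vertices and SCC ordering are left out.

# Hedged sketch: power-iteration PageRank that skips vertices once their rank
# has converged (only one of the optimizations described above; the full STICD
# algorithm combines several more).
def pagerank(graph, damping=0.85, tol=1e-6, max_iter=100):
    """graph: dict mapping vertex -> list of out-neighbours (no dangling nodes)."""
    n = len(graph)
    ranks = {v: 1.0 / n for v in graph}
    converged = set()

    # Reverse adjacency: who links to me?
    in_edges = {v: [] for v in graph}
    for u, outs in graph.items():
        for v in outs:
            in_edges[v].append(u)

    for _ in range(max_iter):
        new_ranks = dict(ranks)
        for v in graph:
            if v in converged:
                continue                       # skip work for converged vertices
            incoming = sum(ranks[u] / len(graph[u]) for u in in_edges[v])
            new_ranks[v] = (1.0 - damping) / n + damping * incoming
            if abs(new_ranks[v] - ranks[v]) < tol:
                converged.add(v)
        ranks = new_ranks
        if len(converged) == n:
            break
    return ranks

print(pagerank({"a": ["b"], "b": ["c"], "c": ["a", "b"]}))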
5. Apache Lucene/Solr London
Search Quality Evaluation / Context Overview
Search engineering is the production of quality search systems.
Search quality (and, in general, software quality) is a huge topic which can be described using internal and external factors.
In the end, only external factors matter: those that can be perceived by users and customers. But the key to getting optimal levels of those external factors is the internal ones.
One of the main differences between search and software quality (especially from a correctness perspective) is in the ok / ko judgement, which is more “deterministic” in the case of software development.
Context Overview: Search Quality
External factors: Correctness, Robustness, Extendibility, Reusability, Efficiency, Timeliness, …
Internal factors: Modularity, Readability, Maintainability, Testability, Understandability, Reusability, …
Search quality evaluation is primarily focused on correctness.
6. Apache Lucene/Solr London
Search Quality Evaluation / Correctness
Correctness is the ability of a system to perform its exact task, as defined by its specification.
The search domain is critical from this perspective because correctness depends on arbitrary user judgements.
For each internal (gray) and external (red) iteration we need to find a way to measure correctness.
Evaluation measures for an information retrieval system are used to assert how well the search results satisfy the user's query intent.
Swimlane A, a new system: “Here are the requirements” → “Ok” → v0.1 … v0.9 → “V1.0 has been released” → “Cool!”
Swimlane B, an existing system: a month later… “We found a bug”, “We have a change request”, “We need to improve our search system, users are complaining about junk in search results” → “Ok” → v1.1, v1.2, v1.3 … v2.0
How can we know where our system is going between versions, in terms of correctness and relevancy?
7. Apache Lucene/Solr London
Search Quality Evaluation / Measures
Evaluation measures for an information retrieval system try to formalise how well a search system satisfies its users' information needs.
Measures are generally split into two categories: online and offline measures. In this context we will focus on offline measures.
We will talk about something that can help a search engineer during their ordinary day (i.e. in those phases previously called “internal iterations”).
We will also see how the same tool can be used more broadly, for example contributing to the continuous integration pipeline or even delivering value to functional stakeholders (i.e. external iterations).
Evaluation Measures
Offline measures: Precision, Recall, Average Precision, Mean Reciprocal Rank, NDCG, F-Measure, …
Online measures: Click-through rate, Zero result rate, Session abandonment rate, Session success rate, …
In this talk we are mainly focused on offline measures.
8. Agenda
Apache Lucene/Solr
London
➢ Search Quality Evaluation
✓ Rated Ranking Evaluator (RRE)
‣ What is it?
‣ How does it work?
‣ Domain Model
‣ Apache Maven binding
‣ RRE Server
➢ Future Works
➢ Q&A
9. Apache Lucene/Solr London
RRE / What is it?
• A set of search quality evaluation tools
• A search quality evaluation framework
• Multi (search) platform
• Written in Java
• It can also be used in non-Java projects
• Licensed under Apache 2.0
• Open to contributions
• Extremely dynamic!
RRE: What is it?
https://github.com/SeaseLtd/rated-ranking-evaluator
10. Apache Lucene/Solr London
RRE / At a glance
2 months
2 people
10 modules
48,950 lines of code
11. Apache Lucene/Solr London
RRE / Ecosystem
The picture illustrates the main modules composing the RRE ecosystem.
All modules with a dashed border are planned for a future release.
RRE CLI has a double border because, although the rre-cli module hasn't been developed yet, you can run RRE from the command line using the RRE Maven archetype, which is part of the current release.
As you can see, the system development takes into account two target search platforms: Apache Solr and Elasticsearch.
The Search Platform API module provides a search platform abstraction for plugging in additional search systems.
RRE Ecosystem (main modules): Core, Search Platform API, platform plugins, Reporting Plugin, Archetypes, RRE Server, RRE CLI, RequestHandler.
12. Apache Lucene/Solr London
RRE / Domain Model
The RRE Domain Model is organised into a composite, tree-like structure where the relationships between entities are always one-to-many.
The top-level entity is a placeholder representing an evaluation execution.
Versioned metrics are computed at query level and then reported, using an aggregation function, at the upper levels.
The benefit of having a composite structure is clear: we can see a metric value at different levels (e.g. a single query, all queries belonging to a query group, all queries belonging to a topic, or at corpus level).
RRE Domain Model
Evaluation (top-level domain entity)
  1..* Corpus (test dataset / collection)
    1..* Topic (information need)
      1..* Query Group (query variants)
        1..* Query (queries)
For each version (v1.0, v1.1, v1.2, … v1.n) the metrics (P@10, NDCG, AP, F-MEASURE, …) are computed and aggregated at every level.
13. Apache Lucene/Solr London
RRE / Domain Model, an example
The domain model provides all the abstractions needed for articulating complex judgement sets.
While on one hand it is able to capture a complicated and composite ratings model, on the other hand there are cases where such complexity is not needed.
Variants (i.e. queries which belong to the same group) could be automatically generated starting from one query.
This query could be manually entered, retrieved from the query logs or generated in some other way.
Domain Model, an example:
Evaluation: “Ranking Evaluation Report - created on …”
Corpus: bfa_2018_15_5.json
Topic: Coloured Mini Bar Fridges
Query Group: Black Fridges
Queries: “Black mini fridges”, “black mini fridge”, “black minibar fridge”, “mini bar fridge black”
14. Apache Lucene/Solr London
RRE / Process overview
Within a runtime container, the RRE Core starts the search platform; for each ratings set and for each dataset it creates & configures the index and indexes the data; then, for each topic, for each query group, for each query and for each version, it executes the query and computes the metrics; finally it stops the search platform and outputs the evaluation data, which the runtime container then uses.
15. Apache Lucene/Solr London
RRE / Output
The RRE Core itself is a library, so it outputs its result as a plain Java object that must be used programmatically.
However, when it is wrapped within a runtime container, like the RRE Maven Plugin, the Evaluation object is marshalled in JSON format.
Being interoperable, the JSON format can be used by some other component for producing a different kind of output.
An example of such usage is the RRE Apache Maven Reporting Plugin, which can:
• output a spreadsheet
• send the evaluation data to a running RRE Server
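Purely as an illustration of the composite structure described above (this is a hand-written sketch, not the exact schema emitted by the RRE Maven Plugin), the marshalled evaluation data nests corpora, topics, query groups and queries, with per-version metric values attached at each level:

{
  "corpora": [{
    "name": "bfa_2018_15_5.json",
    "topics": [{
      "name": "Coloured Mini Bar Fridges",
      "query-groups": [{
        "name": "Black Fridges",
        "queries": [{
          "text": "black mini fridge",
          "metrics": { "P@10": { "v1.0": 0.6, "v1.1": 0.4, "v1.2": 0.8 } }
        }]
      }]
    }]
  }]
}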
16. Apache Lucene/Solr London
RRE / Available Metrics
These are the RRE built-in metrics which can be used out of the box.
Most of them are computed at query level and then aggregated at the upper levels.
However, compound metrics (e.g. MAP or GMAP) are not explicitly declared or defined, because their computation doesn't happen at query level: the aggregation executed at the upper levels automatically produces these metrics.
For example, the Average Precision computed for Q1, Q2, Q3, … Qn becomes the Mean Average Precision at Query Group or Topic level.
Available Metrics
Precision
Recall
Precision at 1 (P@1)
Precision at 2 (P@2)
Precision at 3 (P@3)
Precision at 10 (P@10)
Average Precision (AP)
Reciprocal Rank
Mean Reciprocal Rank
Mean Average Precision (MAP)
Normalised Discounted Cumulative Gain
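To make the aggregation concrete, the Mean Average Precision reported at a given level is simply the arithmetic mean of the per-query Average Precision values beneath it (the standard definition):

\mathrm{MAP} = \frac{1}{|Q|} \sum_{q \in Q} \mathrm{AP}(q)

where Q is the set of queries belonging to that query group, topic or corpus.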
17. Apache Lucene/Solr London
RRE / What we need to provide
CORPUS
▪ A dataset / collection which consists of representative data in your domain.
▪ In general, we could say it should have a reasonable size.
▪ For those functional scenarios where we are managing different entity kinds, RRE allows you to provide more than one dataset.
▪ Although formally a dataset is provided in JSON files, the actual content depends on the target search platform.
RATINGS
▪ A structured set of judgements (i.e. relevant documents for a given query).
▪ It's not a plain list, because it is structured on top of the composite RRE domain model.
▪ Here we can define all the things that compose the RRE domain model: topics, query groups, and queries.
▪ At query group level we can list all documents which are relevant to all queries belonging to that group.
▪ For each relevant document we can express a gain, which indicates how relevant the document is.
▪ In the current implementation we are using a three-level judgement scale, but this will most probably be generalised in future versions:
▪ 1 => marginally relevant
▪ 2 => relevant
▪ 3 => very relevant
CONFIGURATION SETS
▪ Configuration instances at a given time. This concept is often captured by introducing “versions”.
▪ For each version of our system, we assume there's a different configuration set.
▪ The actual content of each configuration set depends on the target platform.
▪ For example, if we are using Solr, each version would contain one or more core definitions.
18. Apache Lucene/Solr London
RRE / What we need to provide: corpora
An evaluation execution can involve more than one dataset targeting a given search platform.
Within RRE, corpus, dataset and collection are synonyms.
Each corpus must be located under the corpora configuration folder. It is then referenced in one or more ratings files.
The internal format depends on the target search platform.
Solr datasets are provided using a plain JSON format (no JSON Update Commands!).
Elasticsearch instead uses the pseudo-JSON bulk format.
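As a minimal sketch (the documents and field names below are invented for illustration, they are not part of the talk), a Solr corpus file is just a plain JSON array of documents ready to be indexed:

[
  { "id": "1", "name": "Black mini fridge 40L", "colour": "black" },
  { "id": "2", "name": "Coloured mini bar fridge", "colour": "red" }
]

For Elasticsearch, the same documents would instead be expressed in the bulk format, where each document line is preceded by an action line.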
19. Apache Lucene/Solr London
RRE / What we need to provide: configuration sets
RRE encourages a configuration immutability approach.
Even for internal iterations, each time we make a relevant change to the current configuration, it's better to clone it and move forward with a new version.
In this way we'll end up having the historical progression of our system, and RRE will be able to make comparisons.
The actual content of each configuration set depends on the target search platform.
20. Apache Lucene/Solr London
RRE / What we need to provide: ratings
Ratings files (i.e. judgement sets) must be located under the configured “ratings” folder.
There must be at least one ratings file (otherwise no evaluation happens).
Here you can define all the building blocks of the RRE domain model: reference dataset, topics, query groups, queries and judgements.
Judgements, the most fundamental part of this input, consist of a list of all relevant documents for the owning query group, with a corresponding “gain” which is the actual relevancy judgement.
If a document is in this list, it means it is relevant for the current query.
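A minimal sketch of a ratings file, reusing the mini bar fridge example from the domain model slides. The attribute names are written from memory and may differ from the exact schema expected by the current RRE release, so treat them as illustrative only:

{
  "index": "fridges",
  "corpora_file": "bfa_2018_15_5.json",
  "topics": [{
    "description": "Coloured Mini Bar Fridges",
    "query_groups": [{
      "name": "Black Fridges",
      "queries": [
        { "template": "only_q.json", "placeholders": { "$query": "black mini fridge" } }
      ],
      "relevant_documents": {
        "doc_1": { "gain": 3 },
        "doc_7": { "gain": 2 }
      }
    }]
  }]
}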
21. Apache Lucene/Solr London
RRE / Where we need to provide
At configuration level, RRE needs to know where the following folders are located:
• configuration sets
• corpora
• ratings
• query templates
Although the RRE core itself requires this information, default values are provided when it is wrapped within a container.
For example, the Maven Plugin assumes the ${project.dir}/src/etc folder as the parent of all the folders above.
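Assuming the Maven defaults just mentioned, the resulting project layout could look roughly like the tree below (the folder names are illustrative; the archetype-generated skeleton shows the exact ones expected by your RRE version):

src/etc/
  configuration_sets/
    v1.0/
    v1.1/
  corpora/
    bfa_2018_15_5.json
  ratings/
    ratings.json
  templates/
    only_q.json
    filter_by_language.json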
22. Apache Lucene/Solr London
RRE / Query templates
For each query (or for each query group) it's possible to define a query template, which is a kind of query shape containing one or more placeholders.
Then, in the ratings file, you can reference one of those defined templates and provide a value for each placeholder.
Templates have been introduced in order to:
• allow common query management across search platforms
• define complex queries
• define runtime parameters that cannot be statically determined (e.g. filters)
Example templates: only_q.json, filter_by_language.json
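A sketch of what those two templates might contain for a Solr target. The placeholder names and the exact JSON shape are assumptions made for illustration, not taken from the talk:

only_q.json
{
  "q": "$query"
}

filter_by_language.json
{
  "q": "$query",
  "fq": "language:$lang"
}

In the ratings file, each query then references a template and supplies concrete values for $query (and, for the second template, $lang).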
23. Apache Lucene/Solr London
RRE / Maven (Solr | Elasticsearch) Plugin
The RRE Apache Maven Plugin is a runtime container which is able to execute the evaluation process within a (Maven) build cycle.
As you can see from the picture on the left, all the things described in the previous slides can be configured.
You don't need to provide all the listed parameters, this is just a demonstration example: they have good defaults that work in many cases.
The plugin can be attached to any phase, although usually it is executed in the test, integration-test or install phase.
In this way, if you have some custom code (e.g. an UpdateRequestProcessor), it will be correctly put in the runtime classpath.
24. Apache Lucene/Solr London
RRE / Maven Reporting Plugin
The RRE Maven Plugin produces its output in JSON format, which is interoperable but not very human-readable.
The RRE Maven Reporting Plugin can be used for transforming such output in different ways.
The plugin is configured within the pom.xml following the standard procedures.
The output formats can be configured. Allowed values, at the time of writing, are:
• rre-server: sends the evaluation data to a running RRE Server instance
• spreadsheet: produces an XLS file
25. Apache Lucene/Solr London
RRE / Maven (Solr | Elasticsearch) Archetype
Very useful if:
• you're starting from scratch
• you don't use Java as your main programming language
The RRE Maven archetype generates a Maven project skeleton with all the required folders and configuration. In each folder there's a README and sample content.
The skeleton can be used as a basis for your Java project.
It can also be used as it is, just for running the quality evaluation (in this case your main project could be somewhere else).
> mvn archetype:generate …
[INFO] Scanning for projects...
[INFO]
[INFO] ------------------------------------------------------------------------
[INFO] Building Maven Stub Project (No POM) 1
[INFO] ------------------------------------------------------------------------
[INFO] ………
[INFO] BUILD SUCCESS
26. Apache Lucene/Solr London
RRE / Output, the “spreadsheet” format
After the build cycle, an rre-report.xls file is generated under the target/rre folder.
27. Apache Lucene/Solr London
RRE / Output, RRE Server (1/2)
The RRE console is a simple Spring Boot application which starts a web server.
It provides real-time information about the evaluation results.
Each time a build happens, the RRE Reporting Plugin sends the evaluation result to a RESTful endpoint provided by the RRE Server.
The web console is an AngularJS app which gets refreshed with that incoming data.
It is useful during the development / tuning phase iterations, as you don't have to open and re-open an Excel file.
28. Apache Lucene/Solr London
RRE / Output, RRE Server (2/2)
The evaluation data, at query / version level, collects the top n search results.
In the web console, under each query, there's a little arrow which allows you to show / hide the section containing those results.
In this way you can immediately grasp the meaning of each metric and its values across different versions.
In the example above, you can immediately see why there's a loss of precision (first metric) between v1.0 and v1.1, which got fixed in v1.2.
30. Apache Lucene/Solr London
Future Works / Building the Input
The main input for RRE is the ratings file, in JSON format.
Writing a comprehensive JSON file to detail the ratings sets for your search ecosystem can be expensive!
Judgement Collector UI (explicit feedback):
1. Explicit feedback from user judgements
2. An intuitive UI allows judges to run queries, see documents and rate them
3. The relevance label is explicitly assigned by domain experts
Users Interactions Logger (implicit feedback):
1. Implicit feedback from user interactions (clicks, sales, …)
2. Log to disk / an internal Solr instance for analytics
3. Estimate the <q,d> relevance label based on Click-Through Rate, Sales Rate
Both feed the ratings set that RRE uses to compute the quality metrics.
31. Apache Lucene/Solr London
Future Works / Jenkins Plugin
The RRE Maven Plugin already produces the evaluation data in a machine-readable format (JSON) which can be consumed by another component.
The Maven RRE Report Plugin and the RRE Server are just two examples of such consumers.
RRE can already be integrated into a Jenkins CI build cycle.
By means of a dedicated Jenkins plugin, the evaluation data will be graphically displayed in the Jenkins dashboard.
32. Apache Lucene/Solr London
Future Works / Solr Rank Eval API
The RRE core will be used for implementing a RequestHandler which will expose a Ranking Evaluation endpoint.
That would result in the same functionality introduced in Elasticsearch 6.2 [1], with some differences:
• rich tree data model
• metrics framework
In this context it doesn't make much sense to provide comparisons between versions.
As part of the same module we will have a SearchComponent, for evaluating a single query interaction.
[1] https://www.elastic.co/guide/en/elasticsearch/reference/6.2/search-rank-eval.html
Rank Eval API: /rank_eval?q=something&evaluate=true (RRE RequestHandler + RRE SearchComponent)