Rated Ranking Evaluator Enterprise is an enterprise version of the open source Rated Ranking Evaluator search quality evaluation tool. It features query discovery to automatically extract queries from a search API, rating generation using both explicit ratings and implicit feedback, and an interactive UI for exploring and comparing evaluation results. The UI provides overview, exploration, and comparison views of evaluation data to meet the needs of business stakeholders and software engineers. Future work aims to improve the tool's capabilities around configuration, multimedia support, insights generation, and click modeling.
Search Quality Evaluation: a Developer Perspective (Sease)
Search quality evaluation is an evergreen topic that every search engineer ordinarily struggles with. Improving the correctness and effectiveness of a search system requires a set of tools that help measure the direction in which the system is going.
The slides focus on how a search quality evaluation tool can be seen from a practical developer perspective, how it can be used to produce a deliverable artifact, and how it can be integrated into a continuous integration infrastructure.
Every team working on information retrieval software struggles with the task of evaluating how well their system performs in terms of search quality (currently and historically). Evaluating search quality is important both to understand and quantify the improvement or regression of your search application across development cycles, and to communicate such progress to relevant stakeholders. In the industry, and especially in the open source community, the landscape is quite fragmented: these requirements are often met with ad-hoc partial solutions that each require a considerable amount of development and customization effort. To provide a standard, unified and approachable technology, we developed the Rated Ranking Evaluator (RRE), an open source tool for evaluating and measuring the search quality of a given search infrastructure. RRE is modular, compatible with multiple search technologies and easy to extend.
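To make the abstract's evaluation idea concrete, here is a minimal sketch of the kind of rated metrics a tool like RRE computes for each query. The judgment dictionary and result list are illustrative examples, not RRE's actual ratings schema.

```python
import math

judgments = {"doc1": 3, "doc2": 0, "doc3": 2, "doc4": 1}  # doc id -> relevance gain
results = ["doc3", "doc2", "doc1"]                        # ranked ids from the engine

def precision_at_k(results, judgments, k):
    """Fraction of the top-k results with a positive relevance gain."""
    return sum(1 for d in results[:k] if judgments.get(d, 0) > 0) / k

def ndcg_at_k(results, judgments, k):
    """Discounted cumulative gain of the top-k, normalised by the ideal ranking."""
    dcg = sum(judgments.get(d, 0) / math.log2(i + 2) for i, d in enumerate(results[:k]))
    ideal = sorted(judgments.values(), reverse=True)[:k]
    idcg = sum(g / math.log2(i + 2) for i, g in enumerate(ideal))
    return dcg / idcg if idcg else 0.0

print(precision_at_k(results, judgments, 3))  # 0.67: two of the top three are relevant
print(ndcg_at_k(results, judgments, 3))       # ~0.73 against the ideal ordering
```

A tool like RRE runs this kind of computation for every rated query, across configuration versions, so regressions show up as metric drops rather than anecdotes.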
Haystack 2019 - Query relaxation - a rewriting technique between search and r... (OpenSource Connections)
In search quality optimisation, various techniques are used to improve recall, especially in order to avoid empty search result sets. In most of these solutions, such as spelling correction and query expansion, the search query is modified while the original query intent is normally preserved.
In my talk, I shall describe my experiments with different approaches to query relaxation. Query relaxation is a query rewriting technique which removes one or more terms from multi-term queries that would otherwise lead to zero results. In many cases the removal of a query term entails a change of the query intent, making it difficult to judge the quality of the rewritten query and hence to decide which query term should be removed.
I argue that query relaxation might be best understood if it is seen as a technique on the border between search and recommendations. My focus is on a solution in the context of e-commerce search which is based on using Word2Vec embeddings and which finally made it into production.
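As an illustration of the embedding idea, here is a hedged sketch of one possible relaxation heuristic, not necessarily the approach described in the talk: given pre-trained term vectors, drop the term least similar to the rest of the query before retrying a zero-result search. The vector file name is a placeholder.

```python
from gensim.models import KeyedVectors

# hypothetical Word2Vec vectors trained on query logs or product text
vectors = KeyedVectors.load("query_term_vectors.kv")

def term_to_drop(query_terms):
    """Return the in-vocabulary term with the lowest average similarity to the others."""
    in_vocab = [t for t in query_terms if t in vectors]
    if len(in_vocab) < 2:
        return None
    def cohesion(term):
        others = [t for t in in_vocab if t != term]
        return sum(vectors.similarity(term, o) for o in others) / len(others)
    return min(in_vocab, key=cohesion)

# on zero results, retry the query without its least cohesive term
print(term_to_drop(["wireless", "noise", "cancelling", "banana"]))
```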
What the Rated Ranking Evaluator is and how to use it (for both Software Engineers and IT Managers). Talk given during the Chorus Workshops at Plainschwarz Salon.
How to Build your Training Set for a Learning To Rank Project (Sease)
Learning to rank (LTR from now on) is the application of machine learning techniques, typically supervised, to the formulation of ranking models for information retrieval systems.
With LTR becoming more and more popular (Apache Solr has supported it since January 2017), organisations struggle with how to collect and structure the relevance signals necessary to train their ranking models.
This talk is a technical guide to explore and master various techniques to generate your training set(s) correctly and efficiently.
Expect to learn how to:
– model and collect the necessary feedback from the users (implicit or explicit)
– calculate for each training sample a relevance label which is meaningful and not ambiguous (Click Through Rate, Sales Rate …)
– transform the raw data collected into an effective training set (in the numerical vector format most LTR training libraries expect)
Join us as we explore real world scenarios and dos and don’ts from the e-commerce industry.
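As a rough illustration of the pipeline outlined above, the sketch below turns a hypothetical click log into CTR-based graded labels and writes them in the SVMrank-style text format many LTR libraries expect. The feature values are invented for the example.

```python
# raw implicit feedback: (query_id, doc_id, impressions, clicks)
log = [(1, "d1", 100, 12), (1, "d2", 100, 3), (2, "d1", 40, 20)]

# hypothetical per-document features, e.g. BM25 score, price, recency
features = {"d1": [1.2, 9.99, 0.3], "d2": [0.7, 4.50, 0.9]}

ctr = {(q, d): clicks / imps for q, d, imps, clicks in log}

with open("train.txt", "w") as out:
    for (q, d), rate in sorted(ctr.items()):
        label = min(4, int(rate * 10))  # crude CTR -> 0..4 graded relevance label
        feats = " ".join(f"{i + 1}:{v}" for i, v in enumerate(features[d]))
        out.write(f"{label} qid:{q} {feats}\n")
```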
Building a semantic search system - one that can correctly parse and interpret end-user intent and return the ideal results for users’ queries - is not an easy task. It requires semantically parsing the terms, phrases, and structure within queries, disambiguating polysemous terms, correcting misspellings, expanding to conceptually synonymous or related concepts, and rewriting queries in a way that maps the correct interpretation of each end user’s query into the ideal representation of features and weights that will return the best results for that user. Not only that, but the above must often be done within the confines of a very specific domain - ripe with its own jargon and linguistic and conceptual nuances.
This talk will walk through the anatomy of a semantic search system and how each of the pieces described above fit together to deliver a final solution. We'll leverage several recently-released capabilities in Apache Solr (the Semantic Knowledge Graph, Solr Text Tagger, Statistical Phrase Identifier) and Lucidworks Fusion (query log mining, misspelling job, word2vec job, query pipelines, relevancy experiment backtesting) to show you an end-to-end working Semantic Search system that can automatically learn the nuances of any domain and deliver a substantially more relevant search experience.
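For a concrete taste of one of those building blocks, here is a hedged sketch of calling Solr's Text Tagger handler (the TaggerRequestHandler shipped since Solr 7.4) to spot known entities in a raw query. The collection name and field list are assumptions for illustration.

```python
import requests

SOLR_TAG_URL = "http://localhost:8983/solr/entities/tag"  # hypothetical collection

def tag_query(text):
    """POST raw text to the tagger; the response lists offsets of matched entities."""
    resp = requests.post(
        SOLR_TAG_URL,
        params={"overlaps": "NO_SUB", "tagsLimit": 100,
                "fl": "id,name", "matchText": "true"},
        data=text.encode("utf-8"),
        headers={"Content-Type": "text/plain; charset=utf-8"},
    )
    resp.raise_for_status()
    return resp.json()

body = tag_query("cheap hotels near san francisco airport")
for tag in body.get("tags", []):
    print(tag)  # start/end offsets plus ids of the matching entity documents
```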
Talent Search and Recommendation Systems at LinkedIn: Practical Challenges an... (Qi Guo)
*** Please check out our LinkedIn Engineering blog post: https://engineering.linkedin.com/blog/2019/04/ai-behind-linkedin-recruiter-search-and-recommendation-systems ***
The LinkedIn Talent Solutions business contributes around 65% of LinkedIn's annual revenue and provides tools for job providers to reach out to potential candidates and for job seekers to find suitable career opportunities. LinkedIn's job ecosystem has been designed as a platform to connect job providers and job seekers, and to serve as a marketplace for efficient matching between potential candidates and job openings. A key mechanism to help achieve these goals is the LinkedIn Recruiter product, which enables recruiters to search for relevant candidates and obtain candidate recommendations for their job postings.
We highlight a few unique information retrieval, system, and modeling challenges associated with talent search and recommendation systems.
In this talk, we will present how we formulated and addressed the problems, the overall system design and architecture, the challenges encountered in practice, and the lessons learned from the production deployment of these systems at LinkedIn. By presenting our experiences of applying techniques at the intersection of recommender systems, information retrieval, machine learning, and statistical modeling in a large-scale industrial setting and highlighting the open problems, we hope to stimulate further research and collaborations within the SIGIR community.
Learning to Rank Datasets for Search with Oscar Castaneda (Databricks)
Learning to rank methods automatically learn from user interaction instead of relying on labeled data prepared manually. Learning to rank, also referred to as machine-learned ranking, is the application of machine learning to building ranking models for information retrieval. Learning to rank has been successfully applied in building intelligent search engines, but has yet to show up in dataset search.
Dataset search is ripe for innovation with learning to rank, specifically by automating the process of index construction. Oscar will recap previous presentations on dataset search and introduce learning to rank as a way to automate the relevance scoring of dataset search results. He will also give a demo of a dataset search engine that makes use of an automatically constructed index using learning to rank on Elasticsearch and Spark.
Oscar will explain the motivation and use case of learning to rank in dataset search focusing on why it is interesting to rank datasets through machine-learned relevance scoring and how to improve indexing efficiency by tapping into user interaction data from clicks. Dataset Search and Learning to Rank are IR and ML topics that should be of interest to Spark Summit attendees who are looking for use cases and new opportunities to organize and rank Datasets in Data Lakes to make them searchable and relevant to users.
In preparation for this talk, it is recommended that attendees watch the previous two talks on dataset search from prior Spark Summit events, as they build up to the present talk:
[1] https://spark-summit.org/east-2017/events/building-a-dataset-search-engine-with-spark-and-elasticsearch/
[2] https://spark-summit.org/eu-2016/events/spark-cluster-with-elasticsearch-inside/
Dense Retrieval with Apache Solr Neural Search (Sease)
Neural Search is an industry derivation of the academic field of Neural Information Retrieval. More and more frequently, we hear about how Artificial Intelligence (AI) permeates every aspect of our lives, and this also includes software engineering and Information Retrieval.
In particular, the advent of Deep Learning introduced the use of deep neural networks to solve complex problems that could not be solved simply by an algorithm. Deep Learning can be used to produce a vector representation of both the query and the documents in a corpus of information. Search, in general, comprises four primary steps:
- generate a representation of the query that describes the information need
- generate a representation of the document that captures the information contained in it
- match the query and the document representations from the corpus of information
- assign a score to each matched document in order to establish a meaningful document ranking by relevance in the results.
With the Neural Search module, Apache Solr is introducing support for neural network based techniques that can improve these four aspects of search.
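For a concrete flavour of the query side, here is a hedged sketch of a {!knn} search against a DenseVectorField in Solr 9+. The collection and field names are assumptions, and a real query vector would come from the same encoder used at indexing time.

```python
import requests

query_vector = [0.12, -0.03, 0.88, 0.41]  # toy 4-dimensional example

resp = requests.post(
    "http://localhost:8983/solr/docs/select",  # hypothetical collection
    data={
        # knn query parser: retrieve the topK nearest neighbours of the vector
        "q": "{!knn f=body_vector topK=10}" + str(query_vector),
        "fl": "id,title,score",
    },
)
resp.raise_for_status()
for doc in resp.json()["response"]["docs"]:
    print(doc["id"], doc.get("title"), doc["score"])
```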
Learning to Rank (LTR) presentation at the RELX Search Summit 2018. Contains information about the history of LTR, a taxonomy of LTR algorithms, popular algorithms, and case studies of applying LTR to the TMDB dataset using Solr, Elasticsearch, and without index support.
Anatomy of an eCommerce Search Engine by Mayur Datar (Naresh Jain)
In this talk, the chief data scientist of Flipkart will uncover the various challenges in running an e-commerce search platform, such as scale, recency, update rates, and business shaping. He will also explain the overall system architecture of the search platform and get into the details of some of the sub-systems, including the query understanding and rewriting sub-system.
Opinion-based Article Ranking for Information Retrieval Systems: Factoids and... (Koray Tugberk GUBUR)
How Search Engines Leverage Opinion-based Articles for Ranking?
Search engines use opinions and factoids to understand the consensus. News search engines use different reports and opinions in their search results to satisfy the urgent information needs of newsreaders, and they differentiate disinformation from information to protect those readers. Google, Microsoft Bing, Yandex, and DuckDuckGo have different algorithms and prioritizations for classifying news sources and for prioritizing news and newsworthy topics.
Corroboration of the Web Answers from the Open Web is a research paper by Amélie Marian and Minji Wu explaining how a search engine can rank information according to its accuracy.
Google started to explain that Expertise-Authoritativeness-Trustworthiness (E-A-T) is the most important group of signals for making sure a result won't shame the search engine. Embarrassment factors for search engines involve wrong information in a news title or news story, or a wrong featured snippet. A search engine might be shamed by a bad result ranking on the SERP.
Related concepts include dense retrieval, context scoring, named entity recognition, semantic role labeling, truth ranges, fix points, confidence scores, query processing, and parsing.
Context understanding requires processing the text and tokenizing the words while recognizing word sense. Processing the text of news articles takes time, and most of the time news search engines do not have enough time for it. Thus, PageRank provides a sustainable signal for ranking news sources over time.
PageRank is a quick signal for search engines of the authenticity of a news source. Highly cited sources are ranked higher, and for longer, in top stories. Usually, Google protects high-PageRank sources by trusting the judgment of those websites. But fact-finding algorithms mostly do not use PageRank, unless they cannot decide by looking at other factors or do not have enough resources to process the text across hundreds of sources.
News ranking algorithms differentiate opinions, reports, and breaking news from each other. News-related entities, their co-occurrence, and their contextual relations change over time. Google inventors suggest differentiating these entities from each other for proper news categorization.
News categorization is important for matching users' topics of interest in queryless news feeds such as Google Discover, a feed that serves news stories according to users' interest areas.
An opinion piece might be misleading, and some news titles might be too harsh or strict. Search engines use these headlines to differentiate non-trustworthy news sources from trustworthy ones. And the opinions of journalists, or their differing interpretations of events, might change a document's rankings according to the fact-finding algorithms.
This presentation will start by introducing how Apache Lucene can be used to classify documents using data structures that already exist in your index, instead of having to generate and supply external training sets. The focus will be on the extensions of the Lucene Classification module that arrived in Lucene 6.0 and on the module's incorporation into Solr 6.1. These extensions allow you to classify at the document level with individual field weighting, numeric field support, lat/lon fields, etc. The Solr ClassificationUpdateProcessor will be explored, including how to use its basic and advanced features such as multi-class support and classification context filtering. The presentation will include practical examples and real-world use cases.
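To illustrate the underlying idea, classification driven by the most similar documents already indexed, here is a standalone k-nearest-neighbours sketch over TF-IDF vectors. It reproduces the concept only and does not use Lucene's actual classification API.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.neighbors import KNeighborsClassifier

# the "index": already-categorised documents
train_docs = ["cheap flights to rome", "hotel booking paris",
              "python list comprehension", "java stream api tutorial"]
train_labels = ["travel", "travel", "programming", "programming"]

vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(train_docs)

# the k most similar indexed documents vote on the class of a new document
clf = KNeighborsClassifier(n_neighbors=3, metric="cosine")
clf.fit(X, train_labels)

print(clf.predict(vectorizer.transform(["best flight deals to paris"])))  # ['travel']
```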
Google Lighthouse is super valuable, but it only checks one page at a time.
Hamlet will show you how to get it to check all pages of a site, and how to run automated Lighthouse checks on demand, at scheduled intervals, and from automated tests.
He'll also cover how to set performance budgets, how to get alerts when budgets are exceeded, and how to aggregate page reports using BigQuery and Google Data Studio.
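A hedged sketch of the batching idea: loop the Lighthouse CLI over a URL list and collect the performance scores. It assumes the lighthouse npm CLI is installed; budget alerts and BigQuery aggregation would sit on top of output like this.

```python
import json
import subprocess

urls = ["https://example.com/", "https://example.com/pricing"]

for url in urls:
    out = f"report-{url.rstrip('/').rsplit('/', 1)[-1]}.json"
    subprocess.run(
        ["lighthouse", url, "--output=json", f"--output-path={out}",
         "--chrome-flags=--headless"],
        check=True,
    )
    with open(out) as f:
        report = json.load(f)
    # category scores are 0..1 in the Lighthouse JSON report
    print(url, report["categories"]["performance"]["score"])
```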
A Learning to Rank Project on a Daily Song Ranking Problem (Sease)
Ranking data, i.e., ordered lists of items, naturally appears in a wide variety of situations; understanding how to adapt a specific dataset and design the best approach to solve a ranking problem in a real-world scenario is thus crucial. This talk aims to illustrate how to set up and build a Learning to Rank (LTR) project starting from the available data, in our case a Spotify dataset (available on Kaggle) on the Worldwide Daily Song Ranking, and ending with the implementation of a ranking model. A step-by-step (phased) approach to this task using open source libraries will be presented. We will examine in depth the most important part of the pipeline, the data preprocessing, and in particular how to model and manipulate the features in order to create the proper input dataset, tailored to the machine learning algorithm's requirements.
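As a taste of that preprocessing step, here is an illustrative pandas sketch over the Kaggle Worldwide Daily Song Ranking CSV, assuming its published columns (Position, Track Name, Artist, Streams, Date, Region): each (Date, Region) chart becomes one ranking "query", and chart position is inverted into a relevance-style label.

```python
import numpy as np
import pandas as pd

df = pd.read_csv("data.csv")  # the Kaggle CSV

# one ranking "query" per (Date, Region); Position 1 is best, so invert it
# into a label where higher means more relevant
df["label"] = df.groupby(["Date", "Region"])["Position"].transform(
    lambda p: p.max() - p
)

# simple per-row features a ranker could consume
df["log_streams"] = np.log1p(df["Streams"])
df["artist_tracks_in_chart"] = df.groupby(
    ["Date", "Region", "Artist"])["Track Name"].transform("count")

print(df[["label", "log_streams", "artist_tracks_in_chart"]].head())
```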
Search Query Processing: The Secret Life of Queries, Parsing, Rewriting & SEO (Koray Tugberk GUBUR)
Query processing is the process of query term weight calculation, query augmentation, query context definition, and more. Query understanding and query clustering are related information retrieval tasks for search engines. To deliver better search engine optimization efforts and project results, organic search performance optimizers need to apply query processing methodologies. Digital marketing and SEO are connected to each other. Understanding a query includes query parsing, query rewriting, question generation, and answer pairing. Multi-stage query processing, candidate answer passages, and answer term weighting are some of the concepts the Google search engine uses to parse queries.
The Secret Life of Queries, Parsing, Rewriting & SEO was presented at the Brighton SEO event in April 2022. The speech focused on explaining theoretical SEO and practical SEO examples together.
Query processing methodologies go beyond synonym matching or synonym finding. They involve multiple aspects of words and their meanings: the theme of words, the centrality of words, attention windows, context windows, word co-occurrence matrices, GloVe, Word2Vec, word embeddings, character embeddings, and more.
Themes of words involve word probability, as in the Continuous Bag of Words model.
The search engine optimization community focuses on keyword research by matching queries. Query processing involves query word order changes, query word type changes, query word combination changes, query phrase synonym usage, query question generation, and query clustering. Query processing and document processing are correlated: query processing is about understanding a query, while document processing is about processing a web document. Both processes feed ranking algorithms. Providing a better ranking algorithm requires better query understanding, and providing better rankings as SEOs requires better search engine understanding. Thus, understanding the methods of query processing is necessary.
Search query processing is the implementation of query processing for search engines. A search query is the phrase that search engine users type when searching. Search intent understanding and search intent grouping are two different things, but query templates, question templates, and document templates work together. Search queries reflect organic search behavior. A web search engine answers millions of queries every day. Search query processing is a fundamental task for search engine optimization and search engine result page optimization.
The "Semantic Search Engine: Query Processing" slides from Koray Tuğberk GÜBÜR supported the presentation of "Search Query Processing: The Secret Life of Queries, Parsing, Rewriting & SEO". The presentation was created by Rebecca Berbel.
Many thanks to the Google engineers who created the semantic search engine patents, including Larry Page.
Better Search Through Query Understanding
Presented as a Data Talk at Intuit on April 22, 2014
Search is a fundamental problem of our time — we use search engines daily to satisfy a variety of personal and professional information needs. But search engine development still feels stuck in an information retrieval paradigm that focuses on result ranking. In this talk, I’ll advocate an emphasis on query understanding. I’ll talk about how we implement query understanding at LinkedIn, and I’ll present examples from the broader web. Hopefully you’ll come out with a different perspective on search and share my appreciation for how we can improve search through query understanding.
About the Speaker
Daniel Tunkelang leads LinkedIn's efforts around query understanding. Before that, he led LinkedIn's product data science team. He previously led a local search quality team at Google and was a founding employee of Endeca (acquired by Oracle in 2011). He has written a textbook on faceted search, and is a recognized advocate of human-computer interaction and information retrieval (HCIR). He has a PhD in Computer Science from CMU, as well as BS and MS degrees from MIT.
Thought Vectors and Knowledge Graphs in AI-powered Search (Trey Grainger)
While traditional keyword search is still useful, pure text-based keyword matching is quickly becoming obsolete; today, it is a necessary but not sufficient tool for delivering relevant results and intelligent search experiences.
In this talk, we'll cover some of the emerging trends in AI-powered search, including the use of thought vectors (multi-level vector embeddings) and semantic knowledge graphs to contextually interpret and conceptualize queries. We'll walk through some live query interpretation demos to demonstrate the power that can be delivered through these semantic search techniques leveraging auto-generated knowledge graphs learned from your content and user interactions.
Tutorial on metrics of user engagement -- Applications to Search & E-commerce (Mounia Lalmas-Roelleke)
User engagement plays a central role in companies operating online services, such as search engines, news portals, e-commerce sites, and social networks. A main challenge is to leverage collected knowledge about the daily online behavior of millions of users to understand what engages them short-term and, more importantly, long-term. The most common way engagement is measured is through various online metrics, acting as proxy measures of user engagement. This tutorial reviews these metrics and proposes a taxonomy of them. As case studies, it focuses on two types of services: search and e-commerce. The tutorial also discusses how to develop better machine learning models to optimize online metrics, and how to design experiments to test these models.
This tutorial was given by Mounia Lalmas from Spotify and Liangjie Hong from Etsy Inc.
This tutorial was presented at WSDM 2018 (the 11th ACM International Conference on Web Search and Data Mining). It was the first delivery of this tutorial, so feedback and comments are welcome. We intend to continue working on this material.
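As a toy illustration of online metrics acting as engagement proxies, the sketch below computes click-through rate and a crude dwell-time measure from a hypothetical interaction log.

```python
import pandas as pd

log = pd.DataFrame({
    "page": ["serp", "serp", "home", "serp", "home"],
    "clicked": [1, 0, 1, 1, 0],
    "dwell_seconds": [42.0, 3.1, 130.5, 12.8, 2.2],
})

metrics = log.groupby("page").agg(
    ctr=("clicked", "mean"),             # short-term engagement proxy
    avg_dwell=("dwell_seconds", "mean"), # crude depth-of-engagement proxy
)
print(metrics)
```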
Learning to Rank Presentation (v2) at LexisNexis Search Guild (Sujit Pal)
An introduction to Learning to Rank, with case studies using RankLib with and without plugins provided by Solr and Elasticsearch. RankLib is a library of learning to rank algorithms, which includes some popular LTR algorithms such as LambdaMART, RankBoost, RankNet, etc.
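For flavour, here is a hedged sketch of driving RankLib from Python. The jar path and file names are placeholders; -ranker 6 selects LambdaMART and -metric2t sets the metric optimised during training, per RankLib's CLI documentation.

```python
import subprocess

subprocess.run(
    ["java", "-jar", "RankLib.jar",
     "-train", "train.txt",        # SVMrank-format judgments
     "-ranker", "6",               # 6 = LambdaMART
     "-metric2t", "NDCG@10",       # metric to optimise on the training data
     "-save", "lambdamart.model"],
    check=True,
)
```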
What really are recommendation engines nowadays?
This presentation introduces the foundations of recommendation algorithms and covers common approaches as well as some of the most advanced techniques; a minimal matrix-factorization sketch follows the table of contents below. Although more focused on efficiency than on theoretical properties, basics of matrix algebra and optimization-based machine learning are used throughout the presentation.
Table of Contents:
1. Collaborative Filtering
1.1 User-User
1.2 Item-Item
1.3 User-Item
* Matrix Factorization
* Stochastic Gradient Descent (SGD)
* Truncated Singular Value Decomposition (SVD)
* Alternating Least Square (ALS)
* Deep Learning
2. Content Extraction
* Item-Item Similarities
* Deep Content Extraction: NLP, CNN, LSTM
3. Hybrid Models
4. In Production
4.1 Problematics
4.2 Solutions
4.3 Tools
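As referenced above, here is a minimal numpy sketch of the matrix factorization approach from section 1.3, trained with stochastic gradient descent so that the dot product of user and item factors approximates the observed ratings.

```python
import numpy as np

rng = np.random.default_rng(0)
ratings = [(0, 0, 5.0), (0, 1, 3.0), (1, 0, 4.0), (2, 1, 1.0)]  # (user, item, rating)
n_users, n_items, k = 3, 2, 4

U = rng.normal(scale=0.1, size=(n_users, k))  # user latent factors
V = rng.normal(scale=0.1, size=(n_items, k))  # item latent factors
lr, reg = 0.05, 0.02                          # learning rate, L2 regularisation

for epoch in range(200):
    for u, i, r in ratings:
        err = r - U[u] @ V[i]                 # prediction error for this rating
        U[u] += lr * (err * V[i] - reg * U[u])
        V[i] += lr * (err * U[u] - reg * V[i])

print(U[0] @ V[0])  # should approach the observed rating of 5.0
```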
Haystack 2019 - Rated Ranking Evaluator: an Open Source Approach for Search Q... (OpenSource Connections)
Every team working on Information Retrieval software struggles with the task of evaluating how well their system performs in terms of search quality (at a specific point in time and historically).
Evaluating search quality is important both to understand and quantify the improvement or regression of your search application across development cycles, and to communicate such progress to relevant stakeholders.
To satisfy these requirements, a helpful tool must be:
- flexible and highly configurable for a technical user
- immediate, visual and concise for optimal business utilization
In the industry, and especially in the open source community, the landscape is quite fragmented: such requirements are often met with ad-hoc partial solutions that each require a considerable amount of development and customization effort.
To provide a standard, unified and approachable technology, we developed the Rated Ranking Evaluator (RRE), an open source tool for evaluating and measuring the search quality of a given search infrastructure. RRE is modular, compatible with multiple search technologies and easy to extend. It is composed of a core library and a set of modules and plugins that give it the flexibility to be integrated into automated evaluation processes and continuous integration flows.
This talk will introduce RRE, describe its latest developments, and demonstrate how it can be integrated in a project to measure and assess the search quality of your search application.
The focus of the presentation will be a live demo showing an example project with a set of initial relevancy issues that we solve iteration after iteration, using RRE's output feedback to gradually drive the improvement process until we reach an optimal balance between quality evaluation measures.
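Since the abstract stresses continuous integration, here is a hedged sketch of the gating idea: after an evaluation run, fail the build if a headline metric regressed. The report path and JSON structure below are assumptions for illustration, not RRE's documented output format.

```python
import json
import sys

THRESHOLD = 0.80  # minimum acceptable NDCG@10, chosen per project

with open("target/rre/evaluation.json") as f:  # hypothetical report location
    report = json.load(f)

ndcg = report["metrics"]["NDCG@10"]            # hypothetical report structure
if ndcg < THRESHOLD:
    sys.exit(f"Search quality gate failed: NDCG@10 = {ndcg:.3f} < {THRESHOLD}")
print(f"Search quality gate passed: NDCG@10 = {ndcg:.3f}")
```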
Rated Ranking Evaluator: An Open Source Approach for Search Quality Evaluation (Alessandro Benedetti)
Every team working on Information Retrieval software struggles with the task of evaluating how well their system performs in terms of search quality (at a specific point in time and historically).
Evaluating search quality is important both to understand and quantify the improvement or regression of your search application across development cycles, and to communicate such progress to relevant stakeholders.
To satisfy these requirements, a helpful tool must be:
- flexible and highly configurable for a technical user
- immediate, visual and concise for optimal business utilization
In the industry, and especially in the open source community, the landscape is quite fragmented: such requirements are often met with ad-hoc partial solutions that each require a considerable amount of development and customization effort.
To provide a standard, unified and approachable technology, we developed the Rated Ranking Evaluator (RRE), an open source tool for evaluating and measuring the search quality of a given search infrastructure. RRE is modular, compatible with multiple search technologies and easy to extend. It is composed of a core library and a set of modules and plugins that give it the flexibility to be integrated into automated evaluation processes and continuous integration flows.
This talk will introduce RRE, describe its latest developments, and demonstrate how it can be integrated in a project to measure and assess the search quality of your search application.
The focus of the presentation will be a live demo showing an example project with a set of initial relevancy issues that we solve iteration after iteration, using RRE's output feedback to gradually drive the improvement process until we reach an optimal balance between quality evaluation measures.
Rated Ranking Evaluator: an Open Source Approach for Search Quality Evaluation (Sease)
To provide a standard, unified and approachable technology, we developed the Rated Ranking Evaluator (RRE), an open source tool for evaluating and measuring the search quality of a given search infrastructure. RRE is modular, compatible with multiple search technologies and easy to extend. It is composed of a core library and a set of modules and plugins that give it the flexibility to be integrated into automated evaluation processes and continuous integration flows.
This talk will introduce RRE, describe its latest developments, and demonstrate how it can be integrated in a project to measure and assess the search quality of your search application.
Haystack London - Search Quality Evaluation, Tools and Techniques (Andrea Gazzarini)
Every search engineer ordinarily struggles with the task of evaluating how well a search engine is performing. Improving the correctness and effectiveness of a search system requires a set of tools that help measure the direction in which the system is going. The talk describes the Rated Ranking Evaluator from a developer perspective. RRE is an open source search quality evaluation tool that can be used to produce a set of deliverable reports and can be integrated within a continuous integration infrastructure.
Search Quality Evaluation to Help Reproducibility: An Open-source Approach (Alessandro Benedetti)
Every information retrieval practitioner ordinarily struggles with the task of evaluating how well a search engine is performing and with reproducing the performance achieved at a specific point in time.
Improving the correctness and effectiveness of a search system requires a set of tools that help measure the direction in which the system is going.
Additionally, it is extremely important to track the evolution of the search system over time and to be able to reproduce and measure the same performance (through metrics of interest such as precision@k, recall, NDCG@k...).
The talk will describe the Rated Ranking Evaluator from a researcher and software engineer perspective.
RRE is an open source search quality evaluation tool that can be used to produce a set of reports about the quality of a system, iteration after iteration, and that can be integrated within a continuous integration infrastructure to monitor quality metrics after each release.
The focus of the talk will be to raise public awareness of search quality evaluation and reproducibility, describing how RRE can help the industry.
Rated Ranking Evaluator: An Open Source Approach for Search Quality EvaluationAlessandro Benedetti
Every team working on information retrieval software struggles with the task of evaluating how well their system performs in terms of search quality (currently and historically). Evaluating search quality is important both to understand and quantify the improvement or regression of your search application across the development cycles, and to communicate such progress to relevant stakeholders. To satisfy these requirements a helpful tool must be:
- flexible and highly configurable for a technical user
- immediate, visual and concise for an optimal business utilization
In the industry, and especially in the open source community, the landscape is quite fragmented: such requirements are often met using ad-hoc partial solutions that each time require a considerable amount of development and customization effort. To provide a standard, unified and approachable technology, we developed the Rated Ranking Evaluator (RRE), an open source tool for evaluating and measuring the search quality of a given search infrastructure. RRE is modular, compatible with multiple search technologies and easy to extend. It is composed of a core library and a set of modules and plugins that give it the flexibility to be integrated in automated evaluation processes and continuous integration flows. This talk will introduce RRE, describe its functionalities, and demonstrate how it can be integrated in a project and how it can help to measure and assess the search quality of your search application. The focus of the presentation will be on a live demo showing an example project with a set of initial relevancy issues that we will solve iteration after iteration, using RRE output feedback to gradually drive the improvement process until we reach an optimal balance among the quality evaluation measures.
Search Quality Evaluation: a Developer PerspectiveAndrea Gazzarini
Search quality evaluation is an evergreen topic every search engineer ordinarily struggles with. Improving the correctness and effectiveness of a search system requires a set of tools which help measure the direction the system is going in.
The slides focus on how a search quality evaluation tool can be seen from a practical developer perspective, how it can be used to produce a deliverable artifact and how it can be integrated within a continuous integration infrastructure.
LSP ( Logic Score Preference ) _ Rajan_Dhabalia_San Francisco State Universitydhabalia
Software quality analysis is a measure of the properties of a piece of software or its specifications. The direct measurement of software quality is quite difficult due to the lack of directly measurable quality factors. To resolve this measurement problem, there is a model which measures the quality of the software in terms of its attributes, specifications and characteristics. This model is known as LSP (Logic Score Preference). When a client gives the specifications of the software to the developer, the client expects good quality software from the developers. Hence, to decide the quality of software we can use this LSP model.
This model validates the following software quality attributes.
(1) Functionality
  Suitability
  Accuracy
  Security
  Interoperability
  Compliance
(2) Usability
  Understandability
  Learnability
  Operability
(3) Performance
  Processing time
  Throughput
  Resource consumption
(4) Maintainability
(5) Portability
(6) Reusability
In LSP, the features are decomposed into the aggregation blocks above, and this decomposition continues within each block until all the lowest-level features are directly measurable, forming a tree of decomposed features. For each feature, an elementary criterion is defined. LSP calculates an elementary preference for each criterion and then aggregates all of them to compute the final global preference. This global preference expresses the quality of the software: we can calculate the global preference for different systems and then analyse and compare the systems’ quality.
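To make the aggregation step concrete, here is a minimal sketch (our own illustration, not part of the original abstract; all preference values, weights and exponents are made up) of the weighted power mean that LSP-style aggregators are typically built from:

```python
# Minimal sketch of LSP-style aggregation: elementary preferences e_i in (0, 1]
# are combined with weights w_i using a weighted power mean; the exponent r
# tunes the "andness" (r -> -inf behaves like AND, r = 1 is the arithmetic
# mean, r -> +inf behaves like OR).

def power_mean(prefs, weights, r):
    assert abs(sum(weights) - 1.0) < 1e-9
    if r == 0:  # the geometric mean is the r -> 0 limit
        result = 1.0
        for e, w in zip(prefs, weights):
            result *= e ** w
        return result
    return sum(w * e ** r for e, w in zip(prefs, weights)) ** (1.0 / r)

# Hypothetical elementary preferences for the "Usability" block:
usability = power_mean([0.8, 0.6, 0.9], [0.4, 0.3, 0.3], r=-0.72)  # mild andness
# Block preferences are aggregated again to obtain the global preference:
global_pref = power_mean([usability, 0.7, 0.85], [0.4, 0.35, 0.25], r=1)
print(round(global_pref, 3))
```

Comparing the resulting global preferences for two candidate systems is then a plain numeric comparison.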
How To Implement Your Online Search Quality Evaluation With KibanaSease
Online testing represents a fundamental method to assess the performance of a ranking model in practical applications, providing the information needed to improve and better understand its behavior. Despite the advantages, the currently available evaluation tools have certain limitations. For this reason, we will present an alternative and customized approach to evaluate ranking models using Kibana. The talk will begin with an overview of online testing, including its benefits and drawbacks. Then, we will provide an in-depth exploration of our Kibana implementation, detailing the reasons behind our approach. Attendees will learn about the various tools provided by Kibana, and with practical examples, we will show how to create visualizations and dashboards, complete with queries and code, to compare different rankers. Attending this presentation will provide participants with valuable knowledge on how to leverage Kibana for the purpose of evaluating ranking models on custom metrics and on specific contexts such as the most popular and “populous” queries.
SharePoint 2013 Search Topology and OptimizationMike Maadarani
In this presentation, I explain the details of all search components, how to properly configure the search topology, and the options to extend the search farm in a hybrid “cloud/on-premises” scenario. It covers what you need to consider when designing your search to handle your organization's needs. We will dive into scripting a high-availability search topology, keeping it healthy, and managing your day-to-day search operations.
Learn how to optimize your search for best performance and search relevancy, to support reliable search applications.
Together, we will review where Search lives in the farm and the crawl components of search needed to implement a scalable farm.
When SDMX meets AI-Leveraging Open Source LLMs To Make Official Statistics Mo...Sease
This intervention draws on experimentation ongoing in the context of the OECD-led Statistical Information System Collaboration Community (SIS-CC) to enable AI applications with SDMX. One important use case is to use AI for better accessibility and discoverability of the data: whilst UX techniques, lexical search improvements, and data harmonisation can take statistical organisations to a good level of accessibility, a structural (or “cognitive”) gap remains between the data user needs and the data producer constraints. That is where AI – and most importantly, NLP and LLM techniques – could potentially make a difference. The “StatsBot” could be this natural-language, conversational engine that facilitates access to and usage of the data, leveraging the semantics of any SDMX source.
The objective of the presentation is to propose a technical approach and a way forward to achieve this goal and create the StatsBot as a universal, open asset usable by all statistical organisations. In a first step, the concept tested is to use Large Language Models with the Apache Solr index of SDMX objects so as to transform natural language queries into SDMX queries. In a second step, results could be framed as a natural language statement complementing the top-k search results. For the purpose of initial PoCs – aimed at demonstrating functional features and feasibility – a commercial LLM (such as OpenAI GPT-4) will be used; at a later stage substitution with an open source LLM will be analysed. The presentation will include the results of the first experimental work and lessons learnt, and will scope future work that should lead to defining the path for a production-grade, fully open source, and universal StatsBot.
Introducing Multi Valued Vectors Fields in Apache LuceneSease
Since the introduction of native vector-based search in Apache Lucene, many features have been developed, but support for multiple vectors in a dedicated KNN vector field remained unexplored. The possibility of indexing (and searching) multiple values per field unlocks working with long textual documents, splitting them into paragraphs and encoding each paragraph as a separate vector: a scenario that is often encountered by many businesses. This talk explores the challenges, the technical design and the implementation activities that happened during the work for this contribution to the Apache Lucene project. The audience is expected to get an understanding of how multi-valued fields can work in a vector-based search use case and how this feature has been implemented.
Stat-weight Improving the Estimator of Interleaved Methods Outcomes with Stat...Sease
Interleaving is an online evaluation approach for information retrieval systems that compares the effectiveness of ranking functions in interpreting the users’ implicit feedback. Previous work such as Hofmann et al. (2011) has evaluated the most promising interleaved methods at the time, on uniform distributions of queries. In the real world, usually, there is an unbalanced distribution of repeated queries that follows a long-tailed users’ search demand curve. This paper first aims to reproduce the Team Draft Interleaving accuracy evaluation on uniform query distributions and then focuses on assessing how this method generalises to long-tailed real-world scenarios. The replicability work raised interesting considerations on how the winning ranking function for each query should impact the overall winner for the entire evaluation. Based on what was observed, we propose that not all the queries should contribute to the final decision in equal proportion. As a result of these insights, we designed two variations of the ∆AB score winner estimator that assign to each query a credit based on statistical hypothesis testing. To reproduce, replicate and extend the original work, we have developed from scratch a system that simulates a search engine and users’ interactions from datasets from the industry. Our experiments confirm our intuition and show that our methods are promising in terms of accuracy, sensitivity, and robustness to noise.
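As background for the method being reproduced, here is a simplified sketch of Team Draft Interleaving (our own illustration, not the paper's code; the per-round coin flip is the textbook variant):

```python
import random

def team_draft_interleave(ranking_a, ranking_b, k=10):
    """Simplified Team Draft Interleaving: in every round a coin flip decides
    which team picks first; each team then contributes its highest-ranked
    document not already present in the interleaved list."""
    interleaved, team_of = [], {}
    while len(interleaved) < k:
        picked_this_round = False
        first = "A" if random.random() < 0.5 else "B"
        for team in (first, "B" if first == "A" else "A"):
            ranking = ranking_a if team == "A" else ranking_b
            # Highest-ranked document this team can still contribute.
            doc = next((d for d in ranking if d not in team_of), None)
            if doc is not None:
                interleaved.append(doc)
                team_of[doc] = team
                picked_this_round = True
            if len(interleaved) >= k:
                break
        if not picked_this_round:
            break  # both rankings are exhausted
    return interleaved, team_of

# Clicks on the interleaved list are credited to the contributing team; across
# many impressions, the ranking function with more credited clicks wins.
print(team_draft_interleave(["d1", "d2", "d3"], ["d2", "d4", "d5"], k=4))
```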
How does ChatGPT work: an Information Retrieval perspectiveSease
In this talk, we will explore the underlying mechanisms of ChatGPT, a large-scale language model developed by OpenAI, from the perspective of Information Retrieval (IR). We will delve into the process of training the model using massive amounts of data, the techniques used to optimize the model’s performance, and how the IR concepts such as tokenization, vectorization, and ranking are used in generating responses. We will also discuss how ChatGPT handles contextual understanding and how it leverages the power of transfer learning to generate high-quality and relevant responses. Software engineers will gain insights into how a modern conversational AI system like ChatGPT works, providing a better understanding of its strengths and limitations, and how to best integrate it into their software applications.
This abstract has been fully written by ChatGPT with the simple prompt in input <Write an abstract for a talk called “How does ChatGPT work? An Information Retrieval perspective”, the audience is software engineers>.
How To Implement Your Online Search Quality Evaluation With KibanaSease
Online testing remains the optimal way to prove how your ranking model performs in your real-world scenario. It can lead to many advantages such as having a direct interpretation of the results and confirming the estimation of offline tests. It gives a better understanding of the ranking model behaviour and builds a solid foundation to learn from to improve it.
Nowadays, the available evaluation tools have some limitations and in this talk, we will describe an alternative and customised approach for evaluating ranking models through the use of Kibana.
First of all, we give an overview of online testing, highlighting the pros and cons and describing the state-of-the-art.
We then dive into Kibana’s implementation and the reasons behind it. We will explore the tools Kibana provides, with their constraints for real-world applications, and show, through practical examples, how to create dashboards (with queries and code) to compare different models.
Learning To Rank has been the first integration of machine learning techniques with Apache Solr allowing you to improve the ranking of your search results using training data.
One limitation is that documents have to contain the keywords that the user typed in the search box in order to be retrieved (and then reranked). For example, the query “jaguar” won’t retrieve documents containing only the terms “panthera onca”. This is called the vocabulary mismatch problem.
Neural search is an Artificial Intelligence technique that allows a search engine to reach those documents that are semantically similar to the user’s information need without necessarily containing the query terms; it learns the similarity of terms and sentences in your collection through deep neural networks and numerical vector representations (so no manual synonyms are needed!).
This talk explores the first Apache Solr official contribution about this topic, available from Apache Solr 9.0.
We start with an overview of neural search (Don’t worry - we keep it simple!): we describe vector representations for queries and documents, and how Approximate K-Nearest Neighbor (KNN) vector search works. We show how neural search can be used along with deep learning techniques (e.g., BERT) or directly on vector data, and how we implemented this feature in Apache Solr, giving usage examples!
Join us as we explore this new exciting Apache Solr feature and learn how you can leverage it to improve your search experience!
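As a concrete taste of the feature, here is a minimal sketch (the collection name, field name and toy 4-dimensional vector are hypothetical; the field is assumed to be a DenseVectorField of matching dimension) of calling the {!knn} query parser, available from Apache Solr 9.0, from Python:

```python
import requests

# Toy query embedding; in practice this comes from an encoder such as BERT.
query_embedding = [0.12, -0.03, 0.54, 0.91]

# The {!knn} query parser retrieves the topK documents whose vectors are
# closest to the given embedding. "products" and "vector" are assumptions.
resp = requests.get(
    "http://localhost:8983/solr/products/select",
    params={
        "q": "{!knn f=vector topK=10}" + str(query_embedding),
        "fl": "id,score",
    },
)
print(resp.json()["response"]["docs"])
```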
SHARE Virtual Discovery Environment (Share-VDE) is a library-driven initiative that brings together the bibliographic catalogues and authority files of a community of libraries in a shared discovery environment based on linked data.
One of the main challenges is the massive amount of data the system is supposed to manage in terms of Search, Manipulation, and Presentation.
Neural Search Comes to Apache Solr_ Approximate Nearest Neighbor, BERT and Mo...Sease
The first integrations of machine learning techniques with search made it possible to improve the ranking of your search results (Learning To Rank), but one limitation has always been that documents had to contain the keywords that the user typed in the search box in order to be retrieved. For example, the query “tiger” won’t retrieve documents containing only the terms “panthera tigris”. This is called the vocabulary mismatch problem, and over the years it has been mitigated through query and document expansion approaches.
Neural search is an Artificial Intelligence technique that allows a search engine to reach those documents that are semantically similar to the user’s query without necessarily containing those terms; it avoids the need for long lists of synonyms by automatically learning the similarity of terms and sentences in your collection through deep neural networks and numerical vector representations.
Word2Vec model to generate synonyms on the fly in Apache Lucene.pdfSease
If you want to expand your queries or documents with synonyms in Apache Lucene, you need a predefined file containing the list of terms that share the same semantics. It’s not always easy to find a list of basic synonyms for a language and, even if you find one, it doesn’t necessarily match your contextual domain.
The term “daemon” in the domain of operating system articles is not a synonym of “devil” but it’s closer to the term “process”.
Word2Vec is a two-layer neural network that takes as input a text and outputs a vector representation for each word in the dictionary. Two words with similar meanings are identified with two vectors close to each other.
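To illustrate the idea (toy corpus and parameters; this is not the implementation behind the slides), a Word2Vec model trained with gensim on your own domain text surfaces domain-specific synonym candidates as nearest neighbours:

```python
from gensim.models import Word2Vec

# Assumed tokenised corpus from your own domain (here: operating systems).
sentences = [
    ["the", "daemon", "runs", "as", "a", "background", "process"],
    ["kill", "the", "process", "started", "by", "the", "daemon"],
    # ... many more tokenised sentences from your collection
]
model = Word2Vec(sentences, vector_size=50, window=3, min_count=1, epochs=50)

# Terms whose vectors are closest to "daemon" are synonym candidates; on a
# real operating-system corpus we would expect "process", not "devil".
print(model.wv.most_similar("daemon", topn=3))
```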
How to cache your searches_ an open source implementation.pptxSease
Caches are used in IT systems to store data in dedicated structures for fast access so that future requests can be served faster. They are an effective tool to store the query results and speed up future query executions in information retrieval systems.
An open-source system like Apache Solr uses three different caches: queryResultCache, filterCache, and documentCache.
In this talk, we will focus on queryResultCache and filterCache and we will see, through practical examples, how they are used to handle different types of queries.
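As a hedged illustration of the difference (the collection and field names are hypothetical), moving a recurring constraint into an fq clause lets Solr answer it from the filterCache, while the full (q + fq) result page can be served from the queryResultCache:

```python
import requests

common = {"fl": "id", "rows": 10}

# First execution: Solr computes and caches both the filter and the page.
requests.get("http://localhost:8983/solr/products/select",
             params={"q": "laptop", "fq": "category:electronics", **common})

# Different q, same fq: the category filter is reused from the filterCache
# instead of being recomputed against the whole index.
requests.get("http://localhost:8983/solr/products/select",
             params={"q": "phone", "fq": "category:electronics", **common})
```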
Advanced Document Similarity with Apache LuceneSease
Whether your core domain involves real-world entities (such as hotels, restaurants, cars...) or text documents, searching for similar entities given one in input is a very common use case for most systems that involve information retrieval. This presentation starts by describing how widespread this problem is across a variety of different scenarios and how you can use the More Like This feature in the Apache Lucene library to solve it. Building on the introduction, the focus will be on how the More Like This module works internally, all the components involved end to end, the BM25 text similarity metric, and how it has been included through a conspicuous refactor and testing process. The presentation includes real-world usage examples and future developments such as improved query building through positional phrase queries and pluggable term relevancy scoring.
Music Information Retrieval is about retrieving information from music entities.
The slides introduce the basic concepts of the music language, passing through different kinds of music representations, and end by describing some low-level features that are used when dealing with music entities.
In the last few years, Artificial Intelligence applications have become more and more sophisticated and often operate like algorithmic “black boxes” for decision-making. Due to this fact, some questions naturally arise when working with these models: why should we trust a certain decision taken by these algorithms? Why and how was this prediction made? Which variables mostly influenced the prediction? The most crucial challenge with complex machine learning models is therefore their interpretability and explainability. This talk aims to illustrate an overview of the most popular explainability techniques and their application in Learning to Rank. In particular, we will examine in depth a powerful library called SHAP with both theoretical and practical insights; we will talk about its amazing tools to give an explanation of the model behaviour, especially how each feature impacts the model’s output, and we will explain to you how to interpret the results in a Learning to Rank scenario.
Interactive Questions and Answers - London Information Retrieval MeetupSease
Answers to some questions about Natural Language Search, Language Modelling (Google Bert, OpenAI GPT-3), Neural Search and Learning to Rank made during our London Information Retrieval Meetup (December).
Feature Extraction for Large-Scale Text CollectionsSease
Feature engineering is a fundamental but poorly documented component in LTR search applications.
As a result, there are still few open access software packages that allow researchers and practitioners to easily simulate a feature extraction pipeline and conduct experiments in a lab setting.
This talk introduces Fxt, an open-source framework to perform efficient and scalable feature extraction. Fxt may be integrated into complex, high-performance software applications to help solve a wide variety of text-based machine learning problems.
The talk details how we built and documented a reproducible feature extraction pipeline with LTR experiments using the ClueWeb09B collection.
This LTR dataset is publicly available.
We’ll also discuss some of the benefits (feature extraction efficiency, model interpretation) of having open access tooling in this area for researchers and practitioners alike.
Enterprise Search – How Relevant Is Relevance?Sease
Enterprise search is the outlier in search applications. It has to work effectively with very large collections of un-curated content, often in multiple languages, to meet the requirements of employees who need to make business-critical decisions.
In this talk, I will outline the challenges of searching enterprise content. Recent research is revealing a unique pattern of search behaviour in which relevance is both very important and yet also irrelevant, and where recall is just as important as precision. This behaviour has implications for the use of standard metrics for search performance (especially in the case of federated search across multiple applications) and for the adoption of AI/ML techniques.
Evaluating Your Learning to Rank Model: Dos and Don’ts in Offline/Online Eval...Sease
For more details:
https://sease.io/2020/04/the-importance-of-online-testing-in-learning-to-rank-part-1.html
https://sease.io/2020/05/online-testing-for-learning-to-rank-interleaving.html
Learning to rank (LTR from now on) is the application of machine learning techniques, typically supervised, in the formulation of ranking models for information retrieval systems.
With LTR becoming more and more popular (Apache Solr supports it from Jan 2017 and Elasticsearch has an Open Source plugin released in 2018), organizations struggle with the problem of how to evaluate the quality of the models they train.
This talk explores all the major points in both Offline and Online evaluation.
Setting up correct infrastructures and processes for a fair and effective evaluation of the trained models is vital for measuring the improvements/regressions of a LTR system.
The talk is intended for:
– Product Owners, Search Managers, Business Owners
– Software Engineers, Data Scientists, and Machine Learning Enthusiast
Expect to learn:
the importance of Offline testing from a business perspective
how Offline testing can be done with Open Source libraries
how to build a realistic test set from the original data set in input avoiding common mistakes in the process
the importance of Online testing from a business perspective
A/B testing and Interleaving approaches: details and pros/cons
common mistakes and how they can distort the obtained results
Join us as we explore real-world scenarios and dos and don’ts from the e-commerce industry!
How to Build your Training Set for a Learning To Rank Project - HaystackSease
Presented by Alessandro Benedetti of Sease, Learning to Rank (LTR) is the application of machine learning techniques (typically supervised), in the formulation of ranking models for information retrieval systems.
With LTR becoming more and more popular, organizations struggle with the problem of how to collect and structure relevance signals necessary to train their ranking models.
This talk is a technical guide to explore and master various techniques to generate your training set(s) correctly and efficiently.
Expect to learn how to :
- model and collect the necessary feedback from the users (implicit or explicit)
- calculate for each training sample a relevance label that is meaningful and not ambiguous (Click Through Rate, Sales Rate ...); a minimal labelling sketch follows after this list
- transform the raw data collected in an effective training set (in the numerical vector format most of the LTR training libraries expect)
Join us as we explore real-world scenarios and dos and don'ts from the e-commerce industry.
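As promised in the list above, here is a minimal labelling sketch (our own; the log fields and the grading scheme are made up) that buckets Click Through Rate into graded relevance labels:

```python
from collections import defaultdict

impressions = defaultdict(int)
clicks = defaultdict(int)
raw_log = [
    # (query, document_id, clicked) -- hypothetical click-log rows
    ("bass guitar", "doc1", True),
    ("bass guitar", "doc1", False),
    ("bass guitar", "doc2", False),
]
for query, doc, clicked in raw_log:
    impressions[(query, doc)] += 1
    clicks[(query, doc)] += int(clicked)

def ctr_label(query, doc, levels=4):
    """Bucket CTR into discrete relevance grades 0..levels-1."""
    ctr = clicks[(query, doc)] / impressions[(query, doc)]
    return min(int(ctr * levels), levels - 1)

print(ctr_label("bass guitar", "doc1"))  # grade 2, derived from CTR = 0.5
```

In practice the raw counts would also be smoothed and position-debiased before being used as labels.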
GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using Deplo...James Anderson
Effective Application Security in Software Delivery lifecycle using Deployment Firewall and DBOM
The modern software delivery process (or the CI/CD process) includes many tools, distributed teams, open-source code, and cloud platforms. Constant focus on speed to release software to market, along with the traditional slow and manual security checks has caused gaps in continuous security as an important piece in the software supply chain. Today organizations feel more susceptible to external and internal cyber threats due to the vast attack surface in their applications supply chain and the lack of end-to-end governance and risk management.
The software team must secure its software delivery process to avoid vulnerability and security breaches. This needs to be achieved with existing tool chains and without extensive rework of the delivery processes. This talk will present strategies and techniques for providing visibility into the true risk of the existing vulnerabilities, preventing the introduction of security issues in the software, resolving vulnerabilities in production environments quickly, and capturing the deployment bill of materials (DBOM).
Speakers:
Bob Boule
Robert Boule is a technology enthusiast with PASSION for technology and making things work along with a knack for helping others understand how things work. He comes with around 20 years of solution engineering experience in application security, software continuous delivery, and SaaS platforms. He is known for his dynamic presentations in CI/CD and application security integrated in software delivery lifecycle.
Gopinath Rebala
Gopinath Rebala is the CTO of OpsMx, where he has overall responsibility for the machine learning and data processing architectures for Secure Software Delivery. Gopi also has a strong connection with our customers, leading design and architecture for strategic implementations. Gopi is a frequent speaker and well-known leader in continuous delivery and integrating security into software delivery.
Accelerate your Kubernetes clusters with Varnish CachingThijs Feryn
A presentation about the usage and availability of Varnish on Kubernetes. This talk explores the capabilities of Varnish caching and shows how to use the Varnish Helm chart to deploy it to Kubernetes.
This presentation was delivered at K8SUG Singapore. See https://feryn.eu/presentations/accelerate-your-kubernetes-clusters-with-varnish-caching-k8sug-singapore-28-2024 for more details.
Essentials of Automations: Optimizing FME Workflows with ParametersSafe Software
Are you looking to streamline your workflows and boost your projects’ efficiency? Do you find yourself searching for ways to add flexibility and control over your FME workflows? If so, you’re in the right place.
Join us for an insightful dive into the world of FME parameters, a critical element in optimizing workflow efficiency. This webinar marks the beginning of our three-part “Essentials of Automation” series. This first webinar is designed to equip you with the knowledge and skills to utilize parameters effectively: enhancing the flexibility, maintainability, and user control of your FME projects.
Here’s what you’ll gain:
- Essentials of FME Parameters: Understand the pivotal role of parameters, including Reader/Writer, Transformer, User, and FME Flow categories. Discover how they are the key to unlocking automation and optimization within your workflows.
- Practical Applications in FME Form: Delve into key user parameter types including choice, connections, and file URLs. Allow users to control how a workflow runs, making your workflows more reusable. Learn to import values and deliver the best user experience for your workflows while enhancing accuracy.
- Optimization Strategies in FME Flow: Explore the creation and strategic deployment of parameters in FME Flow, including the use of deployment and geometry parameters, to maximize workflow efficiency.
- Pro Tips for Success: Gain insights on parameterizing connections and leveraging new features like Conditional Visibility for clarity and simplicity.
We’ll wrap up with a glimpse into future webinars, followed by a Q&A session to address your specific questions surrounding this topic.
Don’t miss this opportunity to elevate your FME expertise and drive your projects to new heights of efficiency.
Epistemic Interaction - tuning interfaces to provide information for AI supportAlan Dix
Paper presented at SYNERGY workshop at AVI 2024, Genoa, Italy. 3rd June 2024
https://alandix.com/academic/papers/synergy2024-epistemic/
As machine learning integrates deeper into human-computer interactions, the concept of epistemic interaction emerges, aiming to refine these interactions to enhance system adaptability. This approach encourages minor, intentional adjustments in user behaviour to enrich the data available for system learning. This paper introduces epistemic interaction within the context of human-system communication, illustrating how deliberate interaction design can improve system understanding and adaptation. Through concrete examples, we demonstrate the potential of epistemic interaction to significantly advance human-computer interaction by leveraging intuitive human communication strategies to inform system design and functionality, offering a novel pathway for enriching user-system engagements.
LF Energy Webinar: Electrical Grid Modelling and Simulation Through PowSyBl -...DanBrown980551
Do you want to learn how to model and simulate an electrical network from scratch in under an hour?
Then welcome to this PowSyBl workshop, hosted by Rte, the French Transmission System Operator (TSO)!
During the webinar, you will discover the PowSyBl ecosystem as well as handle and study an electrical network through an interactive Python notebook.
PowSyBl is an open source project hosted by LF Energy, which offers a comprehensive set of features for electrical grid modelling and simulation. Among other advanced features, PowSyBl provides:
- A fully editable and extendable library for grid component modelling;
- Visualization tools to display your network;
- Grid simulation tools, such as power flows, security analyses (with or without remedial actions) and sensitivity analyses;
The framework is mostly written in Java, with a Python binding so that Python developers can access PowSyBl functionalities as well.
What you will learn during the webinar:
- For beginners: discover PowSyBl's functionalities through a quick general presentation and the notebook, without needing any expert coding skills;
- For advanced developers: master the skills to efficiently apply PowSyBl functionalities to your real-world scenarios.
DevOps and Testing slides at DASA ConnectKari Kakkonen
Slides from me and Rik Marselis at the DASA Connect conference on 30.5.2024. We discuss what testing is, then what agile testing is, and finally what testing in DevOps is. We closed with a lovely workshop where the participants explored different ways to think about quality and testing in different parts of the DevOps infinity loop.
Dev Dives: Train smarter, not harder – active learning and UiPath LLMs for do...UiPathCommunity
💥 Speed, accuracy, and scaling – discover the superpowers of GenAI in action with UiPath Document Understanding and Communications Mining™:
See how to accelerate model training and optimize model performance with active learning
Learn about the latest enhancements to out-of-the-box document processing – with little to no training required
Get an exclusive demo of the new family of UiPath LLMs – GenAI models specialized for processing different types of documents and messages
This is a hands-on session specifically designed for automation developers and AI enthusiasts seeking to enhance their knowledge in leveraging the latest intelligent document processing capabilities offered by UiPath.
Speakers:
👨🏫 Andras Palfi, Senior Product Manager, UiPath
👩🏫 Lenka Dulovicova, Product Program Manager, UiPath
Kubernetes & AI - Beauty and the Beast !?! @KCD Istanbul 2024Tobias Schneck
As AI technology pushes into IT, I found myself wondering, as an “infrastructure container kubernetes guy”, how this fancy AI technology gets managed from an infrastructure-operations point of view. Is it possible to apply our lovely cloud native principles as well? What benefits could both technologies bring to each other?
Let me take these questions and offer you a short journey through existing deployment models and use cases for AI software. Using practical examples, we discuss what cloud/on-premise strategy we may need to apply it to our own infrastructure and make it work from an enterprise perspective. I want to give an overview of infrastructure requirements and technologies, and of what could be beneficial for or limiting your AI use cases in an enterprise environment. An interactive demo will give you some insights into the approaches I already got working for real.
Neuro-symbolic is not enough, we need neuro-*semantic*Frank van Harmelen
Neuro-symbolic (NeSy) AI is on the rise. However, simply machine learning on just any symbolic structure is not sufficient to really harvest the gains of NeSy. These will only be gained when the symbolic structures have an actual semantics. I give an operational definition of semantics as “predictable inference”.
All of this illustrated with link prediction over knowledge graphs, but the argument is general.
Rated Ranking Evaluator Enterprise: the next generation of free Search Quality Evaluation Tools
1. Rated Ranking Evaluator Enterprise: the Next Generation of Free Search Quality Evaluation Tools
Alessandro Benedetti, Director
Andrea Gazzarini, Co-Founder
15th September 2021
2. Who We Are: Alessandro Benedetti
‣ Born in Tarquinia (ancient Etruscan city)
‣ R&D Software Engineer
‣ Director
‣ Master in Computer Science
‣ Apache Lucene/Solr PMC member/committer
‣ Elasticsearch expert
‣ Passionate about semantic, NLP and machine learning technologies
‣ Beach volleyball player and snowboarder
3. Who We Are: Andrea Gazzarini
‣ Born in Viterbo
‣ Hermit Software Engineer
‣ Master in Economy
‣ Passionate about programming
‣ RRE Creator
‣ Apache Lucene/Solr Expert
‣ Elasticsearch Expert
‣ Apache Qpid Committer
‣ Father, Husband
‣ Bass player, aspiring (still frustrated at the moment) Chapman Stick player
4. Search Services
‣ Headquartered in London / distributed
‣ Open Source Enthusiasts
‣ Apache Lucene/Solr experts
‣ Elasticsearch experts
‣ Community Contributors
‣ Active Researchers
‣ Hot trends: Learning To Rank, Document Similarity, Search Quality Evaluation, Relevancy Tuning
www.sease.io
8. Rated Ranking Evaluator: RRE Open Source. What is it?
‣ Open Source library for Search Quality Evaluation
‣ Ratings are expected in input (JSON supported)
‣ Many offline metrics available out-of-the-box (Precision@k, NDCG@k, F-Measure…)
‣ Apache Solr and Elasticsearch support
‣ Development-centric approach
‣ Evaluation on the fly and results in various formats
‣ Community building up!
‣ RRE-User mailing list: https://groups.google.com/g/rre-user
‣ https://github.com/SeaseLtd/rated-ranking-evaluator
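For illustration, two of the metrics listed above can be computed from a ranked result list and a set of rated documents as follows (our own minimal implementation, not RRE's code; document ids and gains are made up):

```python
import math

# Graded judgments for one query: document id -> rating (higher = better).
judgments = {"d1": 3, "d2": 2, "d5": 1}
ranked = ["d1", "d3", "d2", "d4", "d5"]  # ids returned by the engine

def precision_at_k(ranked, judgments, k):
    # Fraction of the top-k results that have a positive rating.
    return sum(1 for d in ranked[:k] if judgments.get(d, 0) > 0) / k

def ndcg_at_k(ranked, judgments, k):
    # Discounted cumulative gain, normalised by the ideal ordering.
    dcg = sum(judgments.get(d, 0) / math.log2(i + 2)
              for i, d in enumerate(ranked[:k]))
    ideal = sorted(judgments.values(), reverse=True)[:k]
    idcg = sum(g / math.log2(i + 2) for i, g in enumerate(ideal))
    return dcg / idcg if idcg else 0.0

print(precision_at_k(ranked, judgments, 5))  # 0.6
print(round(ndcg_at_k(ranked, judgments, 5), 3))
```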
9. Rated Ranking Evaluator: The Genesis
2018: Search Consultancy Project. A customer explicitly asked for a rudimentary search quality evaluation tool while we were working on their search infrastructure.
Jun 2018: “Search Quality Evaluation: A Developer Perspective” talk; RRE 0.9.
Oct 2018: “Search Quality Evaluation: Tools and Techniques” talk; RRE 1.0.
to be continued... (mumble mumble)
10. Rated Ranking Evaluator: The Idea
RRE has been thought of as a development tool that executes search quality evaluations as part of a project build process.
It’s like a read–eval–print loop (REPL) built on top of an Information Retrieval subsystem that encourages an incremental/iterative approach.
The underlying idea
[Slide diagram, two scenarios. New system: “Here are the requirements” / “Ok” / versions v0.1 … v0.9 / “V1.0 has been released” / “Cool!”. Existing system, a month later: “We have a change request”, “We found a bug”, “We need to improve our search system, users are complaining about junk in search results” / “Ok” / versions v1.1, v1.2, v1.3 … v2.0. The question: in terms of retrieval effectiveness, how can we know the system performance across various versions?]
11. Rated Ranking Evaluator: Domain Model
The RRE Domain Model is organized into a composite / tree-like structure where the relationships between entities are always 1 to many. The top level entity is a placeholder representing an evaluation execution. Versioned metrics are computed at query level and then reported, using an aggregation function, at upper levels. The benefit of having a composite structure is clear: we can see a metric value at different levels (e.g. a query, all queries belonging to a query group, all queries belonging to a topic, or at corpus level).
Domain Model
[Diagram: Evaluation → Corpus (1..*) → Topic (1..*) → Query Group (1..*) → Query (1..*); for each version (v1.0, v1.1, v1.2 … v1.n) the metrics P@10, NDCG, AP, F-MEASURE, … are computed. Legend: Evaluation = top level domain entity; Corpus = test dataset/collection; Topic = information need; Query Group = query variants; Query = queries.]
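A minimal sketch of this composite idea (our own illustration, not RRE's actual classes): each node aggregates its children's metric values, here with a plain average as the aggregation function:

```python
from statistics import mean

class Node:
    def __init__(self, name, children=None, metrics=None):
        self.name = name
        self.children = children or []
        self.metrics = metrics or {}  # e.g. {"P@10": 0.8} on query leaves

    def metric(self, name):
        # Leaves hold computed values; inner nodes aggregate their children.
        if not self.children:
            return self.metrics[name]
        return mean(child.metric(name) for child in self.children)

q1 = Node("query 1", metrics={"P@10": 0.8})
q2 = Node("query 2", metrics={"P@10": 0.6})
group = Node("query group", [q1, q2])
topic = Node("topic", [group])
corpus = Node("corpus", [topic])
print(corpus.metric("P@10"))  # 0.7 at every ancestor level
```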
12. Rated Ranking Evaluator: How it works
[Diagram: INPUT LAYER (Data, Configuration, Ratings) → EVALUATION LAYER (the evaluation uses a Search Platform and produces the Evaluation Data) → OUTPUT LAYER (JSON output, used for generating the RRE Console and other formats).]
13. Explainability: why is it important in Information Retrieval?
[Slide loop: “Dev, tune & build” → “Check evaluation results”. Caption: “We are thinking about how to fill a third monitor”.]
15. RRE Enterprise: The Genesis
2018: Search Consultancy Project. A customer explicitly asked for a rudimentary search quality evaluation tool while we were working on their search infrastructure.
2019: “Rated Ranking Evaluator: An Open Source Approach”. The first sketches depicting an idea for an enterprise-level version of RRE; development starts a few months later.
to be continued...
16. Rated Ranking Evaluator: The Genesis
2018: Search Consultancy Project. A customer explicitly asked for a rudimentary search quality evaluation tool while we were working on their search infrastructure.
2019, 2020, 2021: JBCP, ID Discovery, Query Discovery.
to be continued...
18. RRE Open Source Recap: How it works (1/2)
[Diagram: INPUT LAYER (Data, Configuration, Ratings) → EVALUATION LAYER (the evaluation targets Apache Solr OR Elasticsearch and produces the Evaluation Data) → OUTPUT LAYER (JSON, used for generating the RRE Console and other formats).]
19. RRE Open Source Recap: How it works (2/2)
[Diagram identical to the previous slide in this transcript.]
20. RRE Enterprise: Query Discovery?
Problem: I have an intermediate Search API that builds complex Apache Solr/Elasticsearch queries.
21. RRE Open Source: No Query Discovery
[Diagram: (1) the evaluation targets Apache Solr OR Elasticsearch and (2) produces the Evaluation Data.]
22. RRE Enterprise: Query Discovery & Evaluation
[Diagram: (1) Query Discovery correlates on the {SEARCH API}, (2) the evaluation targets Apache Solr OR Elasticsearch and (3) produces the Evaluation Data.]
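Purely as an illustration of what query discovery has to achieve (this is not the RRE Enterprise implementation; the log formats and the request-id correlation are assumptions), correlating Search API requests with the engine queries they generate could look like this:

```python
import json

# Hypothetical logs: the Search API propagates a request id to the engine,
# so API-level requests can be joined with the engine queries they produced.
api_log = [
    {"request_id": "r1", "endpoint": "/search?q=fender+bass"},
    {"request_id": "r2", "endpoint": "/search?q=jazz+guitar"},
]
engine_log = [
    {"request_id": "r1", "engine_query": {"query": {"match": {"title": "fender bass"}}}},
    {"request_id": "r2", "engine_query": {"query": {"match": {"title": "jazz guitar"}}}},
]

engine_by_id = {e["request_id"]: e["engine_query"] for e in engine_log}
discovered = [
    {"api_request": a["endpoint"], "engine_query": engine_by_id[a["request_id"]]}
    for a in api_log if a["request_id"] in engine_by_id
]
print(json.dumps(discovered, indent=2))
```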
23. Input Rating: RRE Open Source vs RREE
RRE Open Source:
Topic → Information need
Query Group → Query variant
Query → (Search Engine) Query
+ Rated Documents (e.g. gain 3)
RRE Enterprise:
Topic → Information need
Query Group → Query variant
API Request → (Search API) Request
Query → (Search Engine) Query
+ Rated Documents (e.g. gain 3)
25. RRE Enterprise: Rating Generation
A fundamental requirement of offline search quality evaluation is to gather <query, document, rating> triples that represent the relevance (rating) of a document given a user information need (query). Before assessing the retrieval effectiveness of a system it is necessary to associate a relevance rating to each <query, document> pair involved in our evaluation.
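For concreteness, a toy example of such triples (the document ids and the 0–3 grading scale are illustrative):

```python
# <query, document, rating> triples: higher ratings mean higher relevance
# of the document for the query.
judgments = [
    ("fender jazz bass", "doc-101", 3),  # perfectly relevant
    ("fender jazz bass", "doc-205", 1),  # marginally relevant
    ("fender jazz bass", "doc-999", 0),  # not relevant
]
```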
26. RRE Enterprise: Explicit Ratings (1/2)
Explicit Ratings
• Explicitly provided by domain experts
• High accuracy
• High effort / time / resources
• RRE Open Source accepts only explicit ratings
RRE Rating Structure: Topic → Information need; Query Group → Query variant; Query → Query; + Rated Documents (e.g. gain 3).
Question: how do we minimize the effort required to provide explicit ratings?
27. RRE Enterprise: Explicit Ratings (2/2)
Judgment Collector
• Chrome plugin which applies an evaluation layer on top of an arbitrary website
• Ratings are generated directly on the customer website
• Lowest learning curve for users
• Generated ratings are sent to RRE Enterprise through a specific endpoint
• An “ID Discovery” component translates the received data (rated web items) into RRE ratings (rated Solr/Elasticsearch documents)
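Purely illustrative (this is not the actual RRE Enterprise component; the URL-slug lookup is an assumption), an ID Discovery step could resolve rated web items to engine document ids like this:

```python
def discover_id(web_item_url, search_engine_lookup):
    """search_engine_lookup is assumed to query the engine by a unique
    attribute embedded in the URL (here: the trailing path segment)."""
    slug = web_item_url.rstrip("/").rsplit("/", 1)[-1]
    return search_engine_lookup(slug)

# Hypothetical rated web items collected by the Chrome plugin: (url, gain).
rated_web_items = [("https://shop.example.com/p/fender-jazz-bass", 3)]
lookup = lambda slug: {"fender-jazz-bass": "doc-101"}.get(slug)

rre_ratings = [(discover_id(url, lookup), gain) for url, gain in rated_web_items]
print(rre_ratings)  # [('doc-101', 3)]
```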
47. RRE Enterprise: Future Work
‣ Release with a free usage plan
‣ Configuration support
‣ Support for multimedia document properties
‣ Intelligent insights on weakly performing queries, groups and topics
‣ Improvements on click modelling for implicit relevance estimation