For the longest time, term-based vector representations based on whole-document statistics, such as TF-IDF, have been the staple of efficient and effective information retrieval. The popularity of Deep Learning over the past decade has resulted in the development of many interesting embedding schemes. Like term-based vector representations, these embeddings depend on structure implicit in language and user behavior. Unlike them, they leverage the distributional hypothesis, which states that the meaning of a word is determined by the context in which it appears. These embeddings have been found to better encode the semantics of the word, compared to term-based representations. Despite this, it has only recently become practical to use embeddings in Information Retrieval at scale.
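To make the contrast concrete, here is a toy illustration of the whole-document statistics mentioned above: a minimal TF-IDF sketch over a hypothetical three-document corpus, not any engine's exact weighting scheme.

```python
import math
from collections import Counter

def tfidf_vectors(docs):
    """Compute toy TF-IDF vectors: tf * log(N / df) per term."""
    n = len(docs)
    tokenized = [doc.lower().split() for doc in docs]
    df = Counter()  # document frequency: how many docs contain each term
    for toks in tokenized:
        for term in set(toks):
            df[term] += 1
    vectors = []
    for toks in tokenized:
        tf = Counter(toks)  # term frequency within this document
        vectors.append({t: tf[t] * math.log(n / df[t]) for t in tf})
    return vectors

docs = ["deep learning for retrieval",
        "term based retrieval with tfidf",
        "deep embeddings encode semantics"]
vecs = tfidf_vectors(docs)
# "retrieval" appears in 2 of 3 documents, so it is weighted lower
# than "tfidf", which appears in only 1
```

Note that such weights are purely statistical: two documents about the same topic in different vocabulary get no credit, which is exactly the gap the embedding schemes below aim to close.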
In this presentation, we will describe how we applied two new embedding schemes to Scopus, Elsevier's broad-coverage database of scientific, technical, and medical literature. Both schemes are based on the distributional hypothesis but come from very different backgrounds. The first is a graph embedding called node2vec, which encodes papers using the citation relationships between them as specified by their authors. The second leverages Transformers, a recent innovation in Deep Learning that yields language models trained on large bodies of text. These two embeddings exploit the signal implicit in their data sources and produce semantically rich user-based and content-based vector representations, respectively. We will evaluate these embedding schemes and describe how we used the Vespa search engine to search these embeddings for similar documents within the Scopus dataset. Finally, we will describe how RELX staff can access these embeddings for their own data science needs, independent of the search application.
Tableau Conference 2018: Binging on Data - Enabling Analytics at Netflix (Blake Irvine)
In this conference session we share how we are using Tableau “out of the box” and also describe how it fits into our overall data environment. In addition, we’ll describe how we expect to use the Data Catalog and Object Model, our explorations of large-scale data stores, and challenges we are working on including governance and data lineage. Video of session can be viewed here: https://youtu.be/Nr24tw3dmZQ
User Behavior Analytics at Netflix, presented at Predictive Analytics World in 2017. Slides include the data processing architecture, the analytic component that identifies abnormal patterns, a rules engine and the overall modular framework that fits all these pieces together to provide an end-to-end solution.
Approximate nearest neighbor methods and vector models – NYC ML meetup (Erik Bernhardsson)
Nearest neighbors refers to something that is conceptually very simple. For a set of points in some space (possibly many dimensions), we want to find the closest k neighbors quickly.
This presentation covers Annoy, a library I built that helps you do (approximate) nearest neighbor queries in high-dimensional spaces. We go through vector models, how to measure similarity, and why nearest neighbor queries are useful.
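For intuition, the exact query that Annoy approximates can be written as a brute-force scan. This sketch is the naive O(n)-per-query baseline, not Annoy's tree-based algorithm:

```python
import math

def knn(points, query, k):
    """Exact k nearest neighbors by Euclidean distance.

    Correct but O(n) per query; ANN libraries like Annoy trade a
    little accuracy for much faster answers at scale."""
    return sorted(range(len(points)),
                  key=lambda i: math.dist(points[i], query))[:k]

points = [(0, 0), (1, 1), (5, 5), (0.5, 0.4)]
print(knn(points, (0, 0), 2))  # → [0, 3]
```

The whole point of approximate methods is that this linear scan becomes infeasible at millions of points in hundreds of dimensions.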
What is Rated Ranking Evaluator and how to use it (for both Software Engineer and IT Manager). Talk made during Chorus Workshops at Plainschwarz Salon.
Dmitry Kan, Principal AI Scientist at Silo AI and host of the Vector Podcast [1], will give an overview of the landscape of vector search databases and their role in NLP, along with the latest news and his view on the future of vector search. Further, he will share how he and his team participated in the Billion-Scale Approximate Nearest Neighbor Challenge and improved recall by 12% over a baseline FAISS.
Presented at https://www.meetup.com/open-nlp-meetup/events/282678520/
YouTube: https://www.youtube.com/watch?v=RM0uuMiqO8s&t=179s
Follow Vector Podcast to stay up to date on this topic: https://www.youtube.com/@VectorPodcast
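Improvements like the 12% figure above are typically reported as recall@k against exact ground-truth neighbors. A minimal sketch of the metric, with hypothetical neighbor lists:

```python
def recall_at_k(approx_ids, true_ids, k):
    """Fraction of the true top-k neighbors that the ANN index returned."""
    return len(set(approx_ids[:k]) & set(true_ids[:k])) / k

true = [3, 7, 1, 9, 4]     # exact neighbors from brute force (hypothetical)
approx = [3, 1, 9, 2, 4]   # neighbors returned by an ANN index (hypothetical)
print(recall_at_k(approx, true, 5))  # → 0.8
```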
Recommendation Systems - Why, How and Real Life Applications (Liron Zighelnic)
These slides were created for a presentation at the MIT (Massachusetts Institute of Technology) Data Analytics Club.
Recommendations have become very popular in almost every area of our lives, from movies to news to dating. Many systems try to give us personal recommendations.
In this presentation we will examine:
- Why are recommendations important?
- What are the main methods and algorithms being used?
- Real-life applications, and who uses them? (the question should be: who doesn't?)
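One of the main methods referred to above is collaborative filtering. Here is a toy user-based version with made-up ratings; it is illustrative only, not the algorithm of any product mentioned in this talk:

```python
from math import sqrt

ratings = {  # hypothetical user -> item -> rating data
    "ann": {"m1": 5, "m2": 1, "m3": 4},
    "bob": {"m1": 4, "m2": 2, "m3": 5},
    "eve": {"m1": 1, "m2": 5},
}

def cosine(u, v):
    """Cosine similarity between two sparse rating dicts."""
    shared = set(u) & set(v)
    num = sum(u[i] * v[i] for i in shared)
    den = (sqrt(sum(x * x for x in u.values())) *
           sqrt(sum(x * x for x in v.values())))
    return num / den if den else 0.0

def recommend(user):
    """Score items the user hasn't seen by ratings of similar users."""
    scores = {}
    for other, their in ratings.items():
        if other == user:
            continue
        sim = cosine(ratings[user], their)
        for item, r in their.items():
            if item not in ratings[user]:
                scores[item] = scores.get(item, 0.0) + sim * r
    return max(scores, key=scores.get) if scores else None

print(recommend("eve"))  # → m3
```

Real systems layer content features, implicit feedback, and learned models on top of this basic neighborhood idea.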
About CurtainApp:
CurtainApp is an intelligent mobile app that learns your taste and gives you personal fashion recommendations, making shopping fun and efficient
Visit: www.curtainapp.com
Join us on Facebook: facebook.com/CurtainApp
Follow us on Twitter: twitter.com/thecurtainapp
#MIT #mobileapp #recommendation #fashion #recommendersystems #paradoxofchoice #Google #Netflix #OkCupid #Pandora #Curtain
Real-time Analytics with Trino and Apache Pinot (Xiang Fu)
Trino Summit 2021:
Overview of the Trino Pinot Connector, which bridges the flexibility of Trino's full SQL support with the power of Apache Pinot's real-time analytics, giving you the best of both worlds.
An introduction to computer vision with Hugging Face (Julien SIMON)
In this code-level talk, Julien will show you how to quickly build and deploy computer vision applications based on Transformer models. Along the way, you'll learn about the portfolio of open source and commercial Hugging Face solutions, and how they can help you deliver high-quality solutions faster than ever before.
Rated Ranking Evaluator Enterprise: the next generation of free Search Qualit... (Sease)
RRE is an open-source search quality evaluation tool that can be used to produce a set of reports about the quality of a system, iteration after iteration, and that can be integrated within a continuous integration infrastructure to monitor quality metrics after each release.
Many aspects remained problematic though:
– how to directly evaluate a middle-layer search API that communicates with Apache Solr or Elasticsearch?
– how to easily generate explicit and implicit ratings without spending hours on tedious JSON files?
– how to better explore the evaluation results, with nice widgets and interesting insights?
Rated Ranking Evaluator Enterprise solves these problems and much more.
Join us as we introduce the next generation of open-source search quality evaluation tools, exploring the internals and real-world scenarios!
Haystack 2019 - Query relaxation - a rewriting technique between search and r... (OpenSource Connections)
In search quality optimisation, various techniques are used to improve recall, especially in order to avoid empty search result sets. In most of the solutions, such as spelling correction and query expansion, the search query is modified while the original query intent is normally preserved.
In my talk, I shall describe my experiments with different approaches to query relaxation. Query relaxation is a query rewriting technique which removes one or more terms from multi-term queries that would otherwise lead to zero results. In many cases the removal of a query term entails a change of the query intent, making it difficult to judge the quality of the rewritten query and hence to decide which query term should be removed.
I argue that query relaxation might be best understood if it is seen as a technique on the border between search and recommendations. My focus is on a solution in the context of e-commerce search which is based on using Word2Vec embeddings and which finally made it into production.
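The idea can be sketched in a few lines: drop the query term least related to the rest, using word vectors as the relatedness signal. The vectors below are made up for illustration; the talk's production system uses real Word2Vec embeddings and is considerably more nuanced.

```python
# Hypothetical 3-d "word vectors" standing in for real Word2Vec embeddings
toy_vectors = {
    "red":     [0.9, 0.1, 0.0],
    "running": [0.1, 0.9, 0.1],
    "shoes":   [0.2, 0.8, 0.1],
    "xqz123":  [0.0, 0.0, 1.0],  # odd token that drives results to zero
}

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def relax(query_terms):
    """Remove the term with the lowest average similarity to the others."""
    def avg_sim(t):
        others = [o for o in query_terms if o != t]
        return sum(dot(toy_vectors[t], toy_vectors[o]) for o in others) / len(others)
    drop = min(query_terms, key=avg_sim)
    return [t for t in query_terms if t != drop]

print(relax(["red", "running", "shoes", "xqz123"]))
# → ['red', 'running', 'shoes']
```

As the talk argues, the relaxed query is closer to a recommendation than to the original intent, which is why judging its quality is hard.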
Learning to Rank (LTR) presentation at RELX Search Summit 2018. Contains information about history of LTR, taxonomy of LTR algorithms, popular algorithms, and case studies of applying LTR using the TMDB dataset using Solr, Elasticsearch and without index support.
Battle of the Stream Processing Titans – Flink versus RisingWave (Yingjun Wu)
The world of real-time data processing is constantly evolving, with new technologies and platforms emerging to meet the ever-increasing demands of modern data-driven businesses. Apache Flink and RisingWave are two powerful stream processing solutions that have gained significant traction in recent years. But which platform is right for your organization? Karin Wolok and Yingjun Wu go head-to-head to compare and contrast the strengths and limitations of Flink and RisingWave. They’ll also share real-world use cases, best practices for optimizing performance and efficiency, and key considerations for selecting the right solution for your specific business needs.
Deep Natural Language Processing for Search and Recommender Systems (Huiji Gao)
Tutorial for KDD 2019:
Search and recommender systems process rich natural language text data such as user queries and documents. Achieving high-quality search and recommendation results requires processing and understanding such information effectively and efficiently, where natural language processing (NLP) technologies are widely deployed. In recent years, the rapid development of deep learning models has been proven successful for improving various NLP tasks, indicating their great potential of promoting search and recommender systems.
In this tutorial, we summarize the current efforts of deep learning for NLP in search/recommender systems. We first give an overview of search/recommender systems with NLP, then introduce basic concepts of deep learning for NLP, covering state-of-the-art technologies in both language understanding and language generation. After that, we share our hands-on experience with LinkedIn applications. In the end, we highlight several important future trends.
Talk with Yves Raimond at the GPU Tech Conference on March 28, 2018 in San Jose, CA.
Abstract:
In this talk, we will survey how Deep Learning methods can be applied to personalization and recommendations. We will cover why standard Deep Learning approaches don't perform better than typical collaborative filtering techniques. Then we will go over recently published research at the intersection of Deep Learning and recommender systems, looking at how they integrate new types of data, explore new models, or change the recommendation problem statement. We will also highlight some of the ways that neural networks are used at Netflix and how we can use GPUs to train recommender systems. Finally, we will highlight promising new directions in this space.
Personalization at Netflix - Making Stories Travel (Sudeep Das, Ph.D.)
I give a high level overview of how personalization at Netflix helps our members find titles that spark joy, as well as help stories travel across the world.
Dense Retrieval with Apache Solr Neural Search (Sease)
Neural Search is an industry derivation from the academic field of Neural Information Retrieval. More and more frequently, we hear about how Artificial Intelligence (AI) permeates every aspect of our lives, and this also includes software engineering and Information Retrieval.
In particular, the advent of Deep Learning introduced the use of deep neural networks to solve complex problems that could not be solved simply by an algorithm. Deep Learning can be used to produce a vector representation of both the query and the documents in a corpus of information. Search, in general, comprises four primary steps:
- generate a representation of the query that describes the information need
- generate a representation of the document that captures the information contained in it
- match the query and the document representations from the corpus of information
- assign a score to each matched document in order to establish a meaningful document ranking by relevance in the results.
With the Neural Search module, Apache Solr is introducing support for neural network based techniques that can improve these four aspects of search.
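The four steps above can be sketched end to end with a stand-in encoder. The bag-of-words encoder below is a deliberately crude placeholder for a trained neural model, and the cosine scan is a placeholder for Solr's indexed vector search; only the shape of the pipeline is the point:

```python
import math

VOCAB = ["neural", "search", "solr", "ranking", "cooking"]

def encode(text):
    """Stand-in for a neural encoder: bag-of-words over a tiny vocabulary."""
    toks = text.lower().split()
    return [toks.count(w) for w in VOCAB]

def cosine(a, b):
    num = sum(x * y for x, y in zip(a, b))
    den = (math.sqrt(sum(x * x for x in a)) *
           math.sqrt(sum(y * y for y in b)))
    return num / den if den else 0.0

docs = ["neural search in solr", "solr ranking basics", "cooking recipes"]
doc_vecs = [encode(d) for d in docs]        # step 2: represent the documents
q_vec = encode("neural search")             # step 1: represent the query
scored = [(cosine(q_vec, v), d)             # step 3: match query to documents
          for v, d in zip(doc_vecs, docs)]
ranked = sorted(scored, reverse=True)       # step 4: rank by relevance score
print(ranked[0][1])  # → neural search in solr
```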
The talk explores the following topics:
- What is search relevance and why is it important?
- Relevance scoring in Elasticsearch
- Manipulating relevance with Query DSL structure
- Pros and cons in using Machine Learning for improving search relevance
- Using Learning to Rank (aka Machine Learning for better relevance) in Elasticsearch
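Relevance scoring in Elasticsearch is based on BM25. A simplified sketch of the per-term score, illustrative rather than Lucene's exact implementation (which adds caching and bounds):

```python
import math

def bm25_term(tf, df, n_docs, doc_len, avg_len, k1=1.2, b=0.75):
    """Simplified BM25 term score: rarity (idf) times saturated tf."""
    idf = math.log(1 + (n_docs - df + 0.5) / (df + 0.5))
    norm = tf * (k1 + 1) / (tf + k1 * (1 - b + b * doc_len / avg_len))
    return idf * norm

# Same term frequency, but the rarer term contributes far more
common = bm25_term(tf=2, df=90, n_docs=100, doc_len=100, avg_len=100)
rare = bm25_term(tf=2, df=2, n_docs=100, doc_len=100, avg_len=100)
print(rare > common)  # → True
```

Query DSL constructs such as boosts and `function_score` then manipulate scores like these, which is where both the power and the pitfalls discussed in the talk come from.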
Vector databases are a new vertical of databases used to index and measure the similarity between different pieces of data. While they work well with structured data, when utilized for Vector Similarity Search (VSS) they really shine when comparing similarity in unstructured data, such as vector embeddings of images, audio, or long pieces of text.
The presentation covers the core Lucene/Solr machinery used in numeric range queries. There are several examples, and the algorithm discovered by Uwe is briefly explained.
Find it! Nail it! Boosting e-commerce search conversions with machine learnin... (Rakuten Group, Inc.)
Over the past decade, e-commerce has leapt from lighthouse customers to mainstream consumers, offering online inventories with millions of products readily available to shoppers. To help buyers easily find products and fulfill their goals, it is necessary to provide effective search methods that retrieve highly-relevant items. However, manual and/or rule-based approaches to search optimization are not scalable. In this talk, we illustrate machine learning methods that have been successfully applied at web-scale to optimize search relevance for e-commerce. Additionally, we present techniques to extract semantic information from queries and precisely match product attributes to improve search relevance against structured products.
Learned Embeddings for Search and Discovery at Instacart (Sharath Rao)
Learned word embeddings such as Word2vec/GloVe were initially found to be effective for a broad range of tasks in Natural Language Processing (NLP). More recently, though, they are being used successfully in areas well beyond text, such as graphs and event streams. In this talk, Sharath will speak about how we use learned embeddings at Instacart for search ranking, personalization and product recommendations.
Presented at: SF Data Mining Meetup https://www.meetup.com/Data-Mining/events/237164197/
The release of TensorFlow 2.0 comes with a significant number of improvements over its 1.x version, all with a focus on ease of use and a better user experience. We will give an overview of what TensorFlow 2.0 is and discuss how to get started building models from scratch using TensorFlow 2.0's high-level API, Keras. We will walk through a step-by-step example in Python of how to build an image classifier. We will then showcase how to leverage transfer learning to make building a model even easier! With transfer learning, we can leverage models pretrained on datasets such as ImageNet to drastically speed up the training time of our model. TensorFlow 2.0 makes this incredibly simple to do.
Overview of the TREC 2019 Deep Learning Track (Nick Craswell)
Overview talk presented at TREC 2019, describing our benchmarking efforts for neural and non-neural information retrieval models in a large data regime.
Benchmarking search relevance in industry vs academia (Nick Craswell)
Update of my WSDM 2017 practice-and-experience talk (also on SlideShare) about lessons from industry on the use of offline metrics in information retrieval. Since a key need is more training and test sets, this talk describes our more recent data releases.
In this presentation, we discuss the key steps for cleaning and managing data in SPSS. We will review removal of participants, imputation, creating composite scores, and checking for outliers.
Presentation made during the Intelligent User-Adapted Interfaces: Design and Multi-Modal Evaluation (IUadaptME) workshop, conducted as part of UMAP 2018.
Temporal and semantic analysis of richly typed social networks from user-gene... (Zide Meng)
We propose an approach to detect topics, overlapping communities of interest, expertise, trends and activities in user-generated content sites and in particular in question-answering forums such as StackOverflow. We first describe QASM (Question & Answer Social Media), a system based on social network analysis to manage the two main resources in question-answering sites: users and content. We also introduce the QASM vocabulary used to formalize both the level of interest and the expertise of users on topics. We then propose an efficient approach to detect communities of interest. It relies on another method to enrich questions with a more general tag when needed. We compared three detection methods on a dataset extracted from the popular Q&A site StackOverflow. Our method based on topic modeling and user membership assignment is shown to be much simpler and faster while preserving the quality of detection. We then propose an additional method to automatically generate a label for a detected topic by analyzing the meaning and links of its bag of words. We conduct a user study to compare different algorithms to choose a label. Finally we extend our probabilistic graphical model to jointly model topics, expertise, activities and trends. We performed experiments with real-world data to confirm the effectiveness of our joint model, studying user behaviors and topic dynamics.
http://www-sop.inria.fr/members/Zide.Meng/
SQLBits Module 2 RStats Introduction to R and Statistics (Jen Stirrup)
SQLBits Module 2 RStats Introduction to R and Statistics. This is a 90 minute segment of a full preconference workshop, focusing on data analytics with R.
Validate data
Questionnaire checking
Edit acceptable questionnaires
Code the questionnaires
Keypunch the data
Clean the data set
Statistically adjust the data
Store the data set for analysis
Analyse data
Supporting Concept Search using a Clinical Healthcare Knowledge Graph (Sujit Pal)
We describe our dictionary-based Named Entity Recognizer and Semantic Matcher, which enable us to leverage our Knowledge Graph to provide Concept Search. We also describe our Named Entity Linking based Concept Recommender to support manual curation of our Knowledge Graph.
YouTube URL for talk: https://youtu.be/5UWrS_j8dDg
Google AI Hackathon: LLM based Evaluator for RAG (Sujit Pal)
Slides accompanying the project submission video for the Google AI Hackathon. Describes an LCEL- and DSPy-based evaluation framework inspired by the RAGAS project.
Accompanying video URL: https://youtu.be/yOIU65chc98
Building Learning to Rank (LTR) search reranking models using Large Language ... (Sujit Pal)
Search engineers have many tools to address relevance. Older tools are typically unsupervised (statistical, rule based) and require large investments in manual tuning effort. Newer ones involve training or fine-tuning machine learning models and vector search, which require large investments in labeling documents with their relevance to queries.
Learning to Rank (LTR) models are in the latter category. However, their popularity has traditionally been limited to domains where user data can be harnessed to generate labels that are cheap and plentiful, such as e-commerce sites. In domains where this is not true, labeling often involves human experts, and results in labels that are neither cheap nor plentiful. This effectively becomes a roadblock to adoption of LTR models in these domains, in spite of their effectiveness in general.
Generative Large Language Models (LLMs) with parameters in the 70B+ range have been found to perform well at tasks that require mimicking human preferences. Labeling query-document pairs with relevance judgements for training LTR models is one such task. Using LLMs for this task opens up the possibility of obtaining a potentially unlimited number of query judgment labels, and makes LTR models a viable approach to improving the site’s search relevancy.
In this presentation, we describe work that was done to train and evaluate four LTR based re-rankers against lexical, vector, and heuristic search baselines. The models were a mix of pointwise, pairwise and listwise, and required different strategies to generate labels for them. All four models outperformed the lexical baseline, and one of the four models outperformed the vector search baseline as well. None of the models beat the heuristics baseline, although two came close – however, it is important to note that the heuristics were built up over months of trial and error and required familiarity of the search domain, whereas the LTR models were built in days and required much less familiarity.
The ability to handle long question style queries is often de rigueur for modern search engines. Search giants such as Bing and Google are addressing this by building Large Language Models (LLMs) into their search pipelines. Unfortunately, this approach requires large investments in infrastructure and involves high operational costs. It can also lead to loss of confidence when the LLM hallucinates non-factual answers.
A best practice for designing search pipelines is to make the search layer as cheap and fast as possible, and move heavyweight operations into the indexing layer. With that in mind, we present an approach that combines the use of LLMs during indexing to generate questions from passages, and matching them to incoming questions during search, using either text based or vector based matching. We believe this approach can provide good quality question answering capabilities for search applications and address the cost and confidence issues mentioned above.
Vector search goes far beyond just text, and, in this interactive workshop, you will learn how to use it for multimodal search through an in-depth look at CLIP, a vision and language model, developed by OpenAI. Sujit Pal, technology research director at Elsevier, and Raphael Pisoni, senior computer vision engineer at Partium.io, will walk you through two applications of image search and then have a panel discussion with our staff developer advocate, James, on how to use CLIP for image and text search.
Learning a Joint Embedding Representation for Image Search using Self-supervi...Sujit Pal
Image search interfaces either prompt the searcher to provide a search image (image-to-image search) or a text description of the image (text-to-image search). Image to Image search is generally implemented as a nearest neighbor search in a dense image embedding space, where the embedding is derived from Neural Networks pre-trained on a large image corpus such as ImageNet. Text to image search can be implemented via traditional (TF/IDF or BM25 based) text search against image captions or image tags.
In this presentation, we describe how we fine-tuned the OpenAI CLIP model (available from Hugging Face) to learn a joint image/text embedding representation from naturally occurring image-caption pairs in literature, using contrastive learning. We then show this model in action against a dataset of medical image-caption pairs, using the Vespa search engine to support text based (BM25), vector based (ANN) and hybrid text-to-image and image-to-image search.
The power of community: training a Transformer Language Model on a shoestringSujit Pal
I recently participated in a community event to train an ALBERT language model for the Bengali language. The event was organized by Neuropark, Hugging Face, and Yandex Research. The training was done collaboratively in a distributed manner using free GPU resources provided by Colab and Kaggle. Volunteers were recruited on Twitter and project coordination happened on Discord. At its peak, there were approximately 50 volunteers from all over the world simultaneously engaged in training the model. The distributed training was done on the Hivemind platform from Yandex Research, and the software to train the model in a data-parallel manner was developed by Hugging Face. In this talk I provide my perspective of the project as a somewhat curious participant. I will describe the Hivemind platform, the training regimen, and the evaluation of the language model on downstream tasks. I will also cover some challenges we encountered that were peculiar to the Bengali language (and Indic languages in general).
Accelerating NLP with Dask and Saturn CloudSujit Pal
Slides for talk delivered at NY NLP Meetup. Abstract -- Python has a great ecosystem of tools for natural language processing (NLP) pipelines, but challenges arise when data sizes and computational complexity grows. Best case, a pipeline is left to run overnight or even over several days. Worst case, certain analyses or computations are just not possible. Dask is a Python-native parallel processing tool that enables Python users to easily scale their code across a cluster of machines. This talk presents an example of an NLP entity extraction pipeline using SciSpacy with Dask for parallelization. This pipeline extracts named entities from the CORD-19 dataset, using trained models from the SciSpaCy project, and makes them available for downstream tasks in the form of structured Parquet files. The pipeline was built and executed on Saturn Cloud, a platform that makes it easy to launch and manage Dask clusters. The talk will present an introduction to Dask and explain how users can easily accelerate Python and NLP code across clusters of machines.
Accelerating NLP with Dask on Saturn Cloud: A case study with CORD-19Sujit Pal
Python has a great ecosystem of tools for natural language processing (NLP) pipelines, but challenges arise when data sizes and computational complexity grows. Best case, a pipeline is left to run overnight or even over several days. Worst case, certain analyses or computations are just not possible. Dask is a Python-native parallel processing tool that enables Python users to easily scale their code across a cluster of machines.
This talk presents an example of an NLP entity extraction pipeline using SciSpacy with Dask for parallelization, which was built and executed on Saturn Cloud. Saturn Cloud is an end-to-end data science and machine learning platform that provides an easy interface for Python environments and Dask clusters, removing many barriers to accessing parallel computing. This pipeline extracts named entities from the CORD-19 dataset, using trained models from the SciSpaCy project, and makes them available for downstream tasks in the form of structured Parquet files. We will provide an introduction to Dask and Saturn Cloud, then walk through the NLP code.
Leslie Smith's Papers discussion for DL Journal ClubSujit Pal
This deck discusses two papers by Dr Leslie Smith. The first paper discusses empirical findings around learning rate (LR) and other regularization parameters for neural networks, and leads to the idea of Cyclic Learning Rates (CLR). The second paper discusses CLR in depth, as well as how to estimate its parameters. The slides also covers LR Finder, a tool first introduced in the Fast.AI library to find optimal parameters for CLR, including how to run it and interpret its outputs.
Transformer Mods for Document Length InputsSujit Pal
The Transformer architecture is responsible for many state of the art results in Natural Language Processing. A central feature behind its superior performance over Recurrent Neural Networks is its multi-headed self-attention mechanism. However, the superior performance comes at a cost, an O(n2) time and memory complexity, where n is the size of the input sequence. Because of this, it is computationally infeasible to feed large documents to the standard transformer. To overcome this limitation, a number of approaches have been proposed, which involve modifying the self-attention mechanism in interesting ways.
In this presentation, I will describe the transformer architecture, and specifically the self-attention mechanism, and then describe some of the approaches proposed to address the O(n2) complexity. Some of these approaches have also been implemented in the HuggingFace transformers library, and I will demonstrate some code for doing document level operations using one of these approaches.
Question Answering as Search - the Anserini Pipeline and Other StoriesSujit Pal
In the last couple of years, we have seen enormous breakthroughs in automated Open Domain Restricted Context Question Answering, also known as Reading Comprehension, where the task is to find an answer to a question from a single document or paragraph. A potentially more useful task is to find an answer for a question from a corpus representing an entire body of knowledge, also known as Open Domain Open Context Question Answering.
To do this, we adapted the BERTSerini architecture (Yang, et al., 2019), using it to answer questions about clinical content from our corpus of 5000+ medical textbooks. The BERTSerini pipeline consists of two components -- a BERT model fine-tuned for Question Answering, and an Anserini (Yang, Fang, and Lin, 2017) IR pipeline for Passage Retrieval. Anserini, in turn, consists of pluggable components for different kinds of query expansion and result reranking. Given a question, Anserini retrieves candidate passages, which the BERT model uses to retrieve the answer from. The best answer is determined using a combination of passage retrieval and answer scores.
Evaluating this system using a locally developed dataset of medical passages, questions, and answers, we adapted the BERT Question Answering component to our content using a combination of fine-tuning with third party SQuAD data, and pre-training the model using our medical content. However, when we replaced the canned passages with passages retrieved using the Anserini pipeline, performance dropped significantly, indicating that the relevance of the retrieved passages was a limiting factor.
The presentation will describe the actions taken to improve the relevance of passages returned by the Anserini pipeline.
Building Named Entity Recognition Models Efficiently using NERDSSujit Pal
Named Entity Recognition (NER) is foundational for many downstream NLP tasks such as Information Retrieval, Relation Extraction, Question Answering, and Knowledge Base Construction. While many high-quality pre-trained NER models exist, they usually cover a small subset of popular entities such as people, organizations, and locations. But what if we need to recognize domain specific entities such as proteins, chemical names, diseases, etc? The Open Source Named Entity Recognition for Data Scientists (NERDS) toolkit, from the Elsevier Data Science team, was built to address this need.
NERDS aims to speed up development and evaluation of NER models by providing a set of NER algorithms that are callable through the familiar scikit-learn style API. The uniform interface allows reuse of code for data ingestion and evaluation, resulting in cleaner and more maintainable NER pipelines. In addition, customizing NERDS by adding new and more advanced NER models is also very easy, just a matter of implementing a standard NER Model class.
Our presentation will describe the main features of NERDS, then walk through a demonstration of developing and evaluating NER models that recognize biomedical entities. We will then describe a Neural Network based NER algorithm (a Bi-LSTM seq2seq model written in Pytorch) that we will then integrate into the NERDS NER pipeline.
We believe NERDS addresses a real need for building domain specific NER models quickly and efficiently. NER is an active field of research, and the hope is that this presentation will spark interest and contributions of new NER algorithms and Data Adapters from the community that can in turn help to move the field forward.
Graph Techniques for Natural Language ProcessingSujit Pal
Natural Language embodies the human ability to make “infinite use of finite means” (Humboldt, 1836; Chomsky, 1965). A relatively small number of words can be combined using a grammar in myriad different ways to convey all kinds of information. Languages model inter-relationships between their words, just like graphs model inter-relationships between their vertices. It is not surprising then, that graphs are a natural tool to study Natural Language and glean useful information from it, automatically, and at scale. This presentation will focus on NLP techniques to convert raw text to graphs, and present Graph Theory based solutions to some common NLP problems. Solutions presented will use Apache Spark or Neo4j depending on problem size and scale. Examples of Graph Theory solutions presented include PageRank for Document Summarization, Link Prediction from raw text for Knowledge Graph enhancement, Label Propagation for entity classification, and Random Walk techniques to find similar documents.
Learning to Rank Presentation (v2) at LexisNexis Search GuildSujit Pal
An introduction to Learning to Rank, with case studies using RankLib with and without plugins provided by Solr and Elasticsearch. RankLib is a library of learning to rank algorithms, which includes some popular LTR algorithms such as LambdaMART, RankBoost, RankNet, etc.
Search summit-2018-content-engineering-slidesSujit Pal
Slides accompanying content engineering tutorial presented at RELX Search Summit 2018. Contains techniques for keyword extraction using various statistical, rule based and machine learning methods, keyword de-duplication using SimHash and Dedupe, and dimensionality reduction techniques such as Topic Modeling, NMF, Word vectors, etc.
SoDA v2 - Named Entity Recognition from streaming textSujit Pal
Covers the services supported by SoDA v2. Includes some background on Named Entity Recognition and Resolution, popular approaches to Named Entity Recognition, hybrid approaches, scaling SoDA using Spark and Spark streaming, deployment strategies, etc.
Evolving a Medical Image Similarity SearchSujit Pal
Slides for talk at Haystack Conference 2018. Covers evolution of an Image Similarity Search Proof of Concept built to identify similar medical images. Discusses various image vectorizing techniques that were considered in order to convert images into searchable entities, an evaluation strategy to rank these techniques, as well as various indexing strategies to allow searching for similar images at scale.
Embed, Encode, Attend, Predict – applying the 4 step NLP recipe for text clas...Sujit Pal
Slides for talk at PyData Seattle 2017 about Matthew Honnibal's 4-step recipe for Deep Learning NLP pipelines. Description of the stages in pipeline as well as 3 examples of document classification, document similarity and sentence similarity. Examples include Keras custom layers for different types of attention.
"Impact of front-end architecture on development cost", Viktor TurskyiFwdays
I have heard many times that architecture is not important for the front-end. Also, many times I have seen how developers implement features on the front-end just following the standard rules for a framework and think that this is enough to successfully launch the project, and then the project fails. How to prevent this and what approach to choose? I have launched dozens of complex projects and during the talk we will analyze which approaches have worked for me and which have not.
Key Trends Shaping the Future of Infrastructure.pdfCheryl Hung
Keynote at DIGIT West Expo, Glasgow on 29 May 2024.
Cheryl Hung, ochery.com
Sr Director, Infrastructure Ecosystem, Arm.
The key trends across hardware, cloud and open-source; exploring how these areas are likely to mature and develop over the short and long-term, and then considering how organisations can position themselves to adapt and thrive.
Search and Society: Reimagining Information Access for Radical FuturesBhaskar Mitra
The field of Information retrieval (IR) is currently undergoing a transformative shift, at least partly due to the emerging applications of generative AI to information access. In this talk, we will deliberate on the sociotechnical implications of generative AI for information access. We will argue that there is both a critical necessity and an exciting opportunity for the IR community to re-center our research agendas on societal needs while dismantling the artificial separation between the work on fairness, accountability, transparency, and ethics in IR and the rest of IR research. Instead of adopting a reactionary strategy of trying to mitigate potential social harms from emerging technologies, the community should aim to proactively set the research agenda for the kinds of systems we should build inspired by diverse explicitly stated sociotechnical imaginaries. The sociotechnical imaginaries that underpin the design and development of information access technologies needs to be explicitly articulated, and we need to develop theories of change in context of these diverse perspectives. Our guiding future imaginaries must be informed by other academic fields, such as democratic theory and critical theory, and should be co-developed with social science scholars, legal scholars, civil rights and social justice activists, and artists, among others.
Builder.ai Founder Sachin Dev Duggal's Strategic Approach to Create an Innova...Ramesh Iyer
In today's fast-changing business world, Companies that adapt and embrace new ideas often need help to keep up with the competition. However, fostering a culture of innovation takes much work. It takes vision, leadership and willingness to take risks in the right proportion. Sachin Dev Duggal, co-founder of Builder.ai, has perfected the art of this balance, creating a company culture where creativity and growth are nurtured at each stage.
Essentials of Automations: Optimizing FME Workflows with ParametersSafe Software
Are you looking to streamline your workflows and boost your projects’ efficiency? Do you find yourself searching for ways to add flexibility and control over your FME workflows? If so, you’re in the right place.
Join us for an insightful dive into the world of FME parameters, a critical element in optimizing workflow efficiency. This webinar marks the beginning of our three-part “Essentials of Automation” series. This first webinar is designed to equip you with the knowledge and skills to utilize parameters effectively: enhancing the flexibility, maintainability, and user control of your FME projects.
Here’s what you’ll gain:
- Essentials of FME Parameters: Understand the pivotal role of parameters, including Reader/Writer, Transformer, User, and FME Flow categories. Discover how they are the key to unlocking automation and optimization within your workflows.
- Practical Applications in FME Form: Delve into key user parameter types including choice, connections, and file URLs. Allow users to control how a workflow runs, making your workflows more reusable. Learn to import values and deliver the best user experience for your workflows while enhancing accuracy.
- Optimization Strategies in FME Flow: Explore the creation and strategic deployment of parameters in FME Flow, including the use of deployment and geometry parameters, to maximize workflow efficiency.
- Pro Tips for Success: Gain insights on parameterizing connections and leveraging new features like Conditional Visibility for clarity and simplicity.
We’ll wrap up with a glimpse into future webinars, followed by a Q&A session to address your specific questions surrounding this topic.
Don’t miss this opportunity to elevate your FME expertise and drive your projects to new heights of efficiency.
Transcript: Selling digital books in 2024: Insights from industry leaders - T...BookNet Canada
The publishing industry has been selling digital audiobooks and ebooks for over a decade and has found its groove. What’s changed? What has stayed the same? Where do we go from here? Join a group of leading sales peers from across the industry for a conversation about the lessons learned since the popularization of digital books, best practices, digital book supply chain management, and more.
Link to video recording: https://bnctechforum.ca/sessions/selling-digital-books-in-2024-insights-from-industry-leaders/
Presented by BookNet Canada on May 28, 2024, with support from the Department of Canadian Heritage.
DevOps and Testing slides at DASA ConnectKari Kakkonen
My and Rik Marselis slides at 30.5.2024 DASA Connect conference. We discuss about what is testing, then what is agile testing and finally what is Testing in DevOps. Finally we had lovely workshop with the participants trying to find out different ways to think about quality and testing in different parts of the DevOps infinity loop.
UiPath Test Automation using UiPath Test Suite series, part 3DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 3. In this session, we will cover desktop automation along with UI automation.
Topics covered:
UI automation Introduction,
UI automation Sample
Desktop automation flow
Pradeep Chinnala, Senior Consultant Automation Developer @WonderBotz and UiPath MVP
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
Accelerate your Kubernetes clusters with Varnish CachingThijs Feryn
A presentation about the usage and availability of Varnish on Kubernetes. This talk explores the capabilities of Varnish caching and shows how to use the Varnish Helm chart to deploy it to Kubernetes.
This presentation was delivered at K8SUG Singapore. See https://feryn.eu/presentations/accelerate-your-kubernetes-clusters-with-varnish-caching-k8sug-singapore-28-2024 for more details.
7. Intuition
Query: Donald Trump
• Term based: Donald Trump; Donald Trump, Jr.
• Vector based: Melania Trump; Ivanka Trump; Jared Kushner; Barack Obama; George W Bush; Hillary Clinton; Joe Biden
• Graph based: Rudy Giuliani; Bill Barr; Paul Manafort; Michael Flynn; Michael Cohen; Jeffrey Epstein; Prince Andrew; Robert Mueller III; Christopher Steele
8. Term based search
• Query and documents represented as high-dimensional sparse vectors of term weights.
• Inverted indexes work well with sparse vectors.
• Term based search captures term similarity / overlap.
• Unsupervised operation.
• Scales to large document sets.
• BM25 is the more popular ranking function nowadays.
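The term-weight scoring above can be sketched with a toy Okapi BM25 implementation. This is a minimal illustration, not the production Solr scorer; the parameter values k1=1.2 and b=0.75 are common defaults, assumed here.

```python
import math
from collections import Counter

def bm25_scores(query_terms, docs, k1=1.2, b=0.75):
    """Score each tokenized document against the query with Okapi BM25."""
    N = len(docs)
    avgdl = sum(len(d) for d in docs) / N
    # document frequency: in how many documents does each term occur?
    df = Counter(t for d in docs for t in set(d))
    scores = []
    for doc in docs:
        tf = Counter(doc)
        score = 0.0
        for t in query_terms:
            if t not in tf:
                continue
            idf = math.log(1 + (N - df[t] + 0.5) / (df[t] + 0.5))
            norm = tf[t] + k1 * (1 - b + b * len(doc) / avgdl)
            score += idf * tf[t] * (k1 + 1) / norm
        scores.append(score)
    return scores

docs = [["coronavirus", "origin", "study"],
        ["coronavirus", "vaccine", "trial"],
        ["stock", "market", "news"]]
print(bm25_scores(["coronavirus", "origin"], docs))
```

Note how the document with both query terms outscores the one with a single term, and the document with no overlap scores zero — term based search only sees term overlap.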
9. Vector based search
• Based on the Distributional Hypothesis.
• Leverages “word” and other embeddings.
• Can be based on document content or document graph structure.
• Captures semantic similarity.
• Vectors are low dimensional and dense.
• Approximate Nearest Neighbor (ANN) methods work best.
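At scale this lookup is served by ANN indexes (e.g. HNSW in Vespa, or libraries such as Annoy and FAISS). A brute-force exact version over toy vectors shows the operation those indexes approximate; this is an illustrative sketch, not the Scopus pipeline.

```python
import numpy as np

def nearest(query_vec, doc_vecs, k=3):
    """Exact cosine nearest neighbours over dense vectors; ANN methods
    approximate this same ranking at large scale."""
    q = query_vec / np.linalg.norm(query_vec)
    d = doc_vecs / np.linalg.norm(doc_vecs, axis=1, keepdims=True)
    sims = d @ q                       # cosine similarity per document
    top = np.argsort(-sims)[:k]        # indices of the k most similar docs
    return list(top), sims[top]

rng = np.random.default_rng(0)
doc_vecs = rng.normal(size=(100, 8))             # 100 toy 8-dim embeddings
query = doc_vecs[42] + 0.01 * rng.normal(size=8)  # query near document 42
ids, sims = nearest(query, doc_vecs)
print(ids[0])   # document 42 should rank first
```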
10. Graph based search and reranking
• Leverages relationships between documents.
• Citation graph, co-authorship network, term/concept co-occurrence networks, etc.
• We use the Scopus citation graph to calculate:
  • Citation count
  • PageRank
  • Localized citation count (based on the result set)
  • Combinations based on relative ranks or normalized scores
• Re-rank using the computed graph metrics.
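A sketch of the reranking idea: blend the text relevance score with a localized citation count, i.e. citations received from other documents within the result set. The blend weight and linear combination here are illustrative assumptions, not the exact scheme used in the experiments.

```python
def rerank(results, citations, alpha=0.2):
    """Re-rank (doc_id, relevance) pairs by relevance plus a localized
    citation count; alpha is an assumed blend weight."""
    result_ids = {doc_id for doc_id, _ in results}
    # count only citations whose source is also in the result set
    local_cites = {doc_id: sum(1 for src, dst in citations
                               if dst == doc_id and src in result_ids)
                   for doc_id, _ in results}
    return sorted(results,
                  key=lambda r: r[1] + alpha * local_cites[r[0]],
                  reverse=True)

results = [("A", 2.0), ("B", 1.8), ("C", 1.5)]    # (doc_id, relevance)
citations = [("A", "C"), ("B", "C"), ("D", "C")]  # (citing, cited)
print(rerank(results, citations))
```

Here C is cited by two documents inside the result set (the citation from D is ignored), which lifts it above B after blending.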
11. Graph based search examples
• Global graph metrics (e.g. PageRank) indicate importance.
• Graph neighborhood features (e.g. node2vec) indicate topological similarity.
12. Graph + Vector Hybrid search
• SPECTER: document-level learning using citation-informed Transformers.
• Minimizes a triplet loss between papers.
• Related/unrelated papers determined by the citation graph.
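The SPECTER objective can be sketched as a standard triplet loss over paper embeddings, with positives and negatives drawn from the citation graph: pull a paper toward one it cites, push it away from one it does not. The margin value and toy vectors below are illustrative.

```python
import numpy as np

def triplet_loss(query, positive, negative, margin=1.0):
    """Triplet loss: penalize when the positive is not at least
    `margin` closer to the query than the negative."""
    d_pos = np.linalg.norm(query - positive)   # distance to cited paper
    d_neg = np.linalg.norm(query - negative)   # distance to uncited paper
    return max(0.0, d_pos - d_neg + margin)

q = np.array([1.0, 0.0])
pos = np.array([0.9, 0.1])    # cited paper, embedded nearby
neg = np.array([-1.0, 0.0])   # unrelated paper, far away
print(triplet_loss(q, pos, neg))   # already satisfied: loss is 0
```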
13. Evaluation Metric: NDCG
• Search result quality measured with Normalized Discounted Cumulative Gain (NDCG).
• Measured for k = 1, 3, 5, 10, 20, 50.
• rel(i) is the relevance score, usually relevance(query, document(i)).
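A minimal NDCG@k implementation using the linear-gain DCG variant (the exponential-gain variant replaces rel with 2^rel − 1); this is a sketch, not the evaluation script actually used.

```python
import math

def ndcg_at_k(relevances, k):
    """NDCG@k: DCG of the ranking as returned, normalized by the DCG
    of the ideal (relevance-sorted) ranking."""
    def dcg(rels):
        return sum(rel / math.log2(i + 2) for i, rel in enumerate(rels[:k]))
    ideal = dcg(sorted(relevances, reverse=True))
    return dcg(relevances) / ideal if ideal > 0 else 0.0

# judged relevance of the top results, in ranked order
print(ndcg_at_k([2, 0, 1], k=3))   # imperfect ordering: below 1.0
```

A perfectly ordered result list scores exactly 1.0; swapping relevant and non-relevant documents lowers the score, with early positions discounted least.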
15. Reformulating the Problem
• Objective: quantitatively compare various search approaches on Scopus.
• Needed labeled data, i.e. judgement lists, which weren't available.
• TREC-COVID has (incomplete) judgement lists (35 queries so far).
• TREC-COVID uses CORD-19 data (from 01-May-2020), some of which is available in Scopus.
  • Some degree of duplication within the corpus causes minor discrepancies.
• Using the subset of Scopus papers from the May 1 CORD-19 dataset.
• Using the subset of TREC-COVID judgement lists covering these papers.
• Promising candidate solutions applied back to Scopus.
16. Setup
• SOLR index created from the CORD-19 corpus (Scopus subset only), with original plus stemmed fields.
• Baseline created using an eDismax query applied to the original and stemmed fields.
• NDCG measured with judgements filtered to Scopus.
• Various reranking schemes applied to the eDismax results.
• Alternative query methods (SOLR MLT, vector based queries) tried.
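The baseline can be sketched as a Solr eDismax request; the collection name, field names, and boosts below are illustrative assumptions, not the actual index schema.

```python
from urllib.parse import urlencode

def edismax_query(query, collection="cord19"):
    """Build a Solr eDismax select URL; qf spreads the query over
    original and stemmed fields (names/boosts are hypothetical)."""
    params = {
        "defType": "edismax",                                  # eDismax parser
        "q": query,
        "qf": "title^2 title_stem^2 abstract abstract_stem",   # original + stemmed
        "fl": "id,title,score",
        "rows": 50,
    }
    return f"/solr/{collection}/select?{urlencode(params)}"

url = edismax_query("coronavirus origin")
print(url)
```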
18. Conditions
• Unless otherwise stated:
  • Queries taken from CORD-19 (not the question or narrative descriptions).
  • NDCG based on Scopus-matched records only.
  • Reported NDCG is the average across all 35 queries.
19. Basic Search
• eDismax (original text and stemmed).
• MLT (using the top eDismax result).
• Using title, abstract and body fields (where available).
• Experiment removing "coronavirus", or using only "coronavirus", for each query.
Full NDCG is based on the full set of matching documents
NDCG @1 @3 @5 @10 @20 @50 Full
eDismax (orig) 0.41428 0.33743 0.33996 0.32035 0.29261 0.26126 0.54092
eDisMax (stem) 0.44285 0.37126 0.35589 0.32939 0.29697 0.26580 0.54744
MLT (stem) 0.41428 0.29457 0.26813 0.23619 0.19322 0.15559 0.38331
Just "coronavirus" 0 0 0.00604 0.00701 0.00608 0.01003 0.29161
Without "coronavirus" 0.27142 0.22841 0.23478 0.22058 0.21453 0.19836 0.46307
24. (Searching) Ranking
• Sorting of the full corpus, no cut-off applied.
• Vector based reranking including:
  • BERT embeddings
  • node2vec embeddings
  • SPECTER embeddings
• Compare BERT embedding vector distance of the query to the title and abstract:
  • Cosine distance and Euclidean distance
  • Query vs. Title, or Title + Abstract (max or mean pooling)
• Also tried the best eDismax result document per query.
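The pooling and distance choices can be sketched as follows, with small matrices standing in for real per-token BERT output (cosine distance shown; Euclidean is analogous). The toy 3-dimensional vectors are purely illustrative.

```python
import numpy as np

def pool(token_vecs, how="mean"):
    """Collapse per-token embedding vectors into one document vector."""
    return token_vecs.mean(axis=0) if how == "mean" else token_vecs.max(axis=0)

def cosine_dist(a, b):
    return 1 - (a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))

def rerank_by_embedding(query_vec, docs, how="mean"):
    """docs: {doc_id: token-embedding matrix}; returns ids, closest first."""
    dists = {doc_id: cosine_dist(query_vec, pool(vecs, how))
             for doc_id, vecs in docs.items()}
    return sorted(dists, key=dists.get)

query_vec = np.array([1.0, 0.0, 0.0])
docs = {
    "on_topic":  np.array([[0.9, 0.1, 0.0], [1.0, 0.0, 0.1]]),
    "off_topic": np.array([[0.0, 1.0, 0.0], [0.0, 0.9, 0.2]]),
}
print(rerank_by_embedding(query_vec, docs))
```

In practice the query vector would come from embedding the query (or question/narrative) text, and the document matrices from running the title + abstract through BERT.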
25. Extra Embedding
• Also looked at the question and narrative compared to the query, e.g.:
  • Query: coronavirus origin
  • Question: what is the origin of COVID-19
  • Narrative: seeking a range of information about the SARS-CoV-2 virus's origin, including its evolution, animal source, and first transmission into humans
• Questions and narratives work better, likely due to longer text with words (including synonyms and concepts) in context (more natural).
26. Ranking by BERT
NDCG @1 @3 @5 @10 @20 @50 Full
Question 0.31428 0.24006 0.19769 0.16458 0.13224 0.10945 0.36517
Narrative 0.17142 0.15291 0.13685 0.11555 0.0982 0.08420 0.33321
Query 0.04285 0.0386 0.03562 0.03288 0.03255 0.02948 0.29186
Best eDismax doc 0.44285 0.24325 0.19999 0.13860 0.10391 0.07905 0.31552
eDisMax (stem) 0.44285 0.37126 0.35589 0.32939 0.29697 0.26580 0.54744
• Generally:
  • Mean pooling beats max pooling.
  • Question beats Narrative beats Query.
  • Best match on Title + Abstract.
  • Cosine vs. Euclidean: slight variations across the board.
• Results above use mean pooling and cosine distance on Title + Abstract.
• All lower than the baseline.
28. Ranking by Node2Vec
• The node2vec node embedding is used as the query.
• Results: cosine distance, with the single top result from stemmed eDismax as the query.
• Some query top results were not in the edge list and therefore yield zero NDCG.
• Core network: all edges connect two CORD-19 documents.
• Extended network: all edges touch at least one CORD-19 document.
NDCG @1 @3 @5 @10 @20 @50 Full
Core 0.44285 0.21205 0.15721 0.10224 0.06997 0.04739 0.33174
Extended 0.44285 0.21628 0.16048 0.10527 0.06909 0.04360 0.32761
eDisMax (stem) 0.44285 0.37126 0.35589 0.32939 0.29697 0.26580 0.54744
30. Ranking by SPECTER
• SPECTER document embedding used as the query:
  • embeddings as pre-calculated by the CORD-19 project
  • taken from the best document returned by the eDismax queries
NDCG @1 @3 @5 @10 @20 @50 Full
Stemmed 0.44285 0.28204 0.25158 0.19540 0.15484 0.12281 0.41963
eDisMax (stem) 0.44285 0.37126 0.35589 0.32939 0.29697 0.26580 0.54744
32. Training Embedding
• Short experiment to look at training a query–document embedding for improved ranking.
• We did not try this with question or narrative, limiting to the query.
• Poor results, assumed to be caused by:
  • Lack of variability in the queries preventing generalisation.
  • Nearly all queries contain "coronavirus".
33. Final Results
• Taking the best results from each set of experiments.
• None of the reranking strategies, including the embedding based ones (content, graph, or hybrid), beat the stemmed eDismax baseline.
NDCG @1 @3 @5 @10 @20 @50 Full
Rel + 0.1*LCB 0.44285 0.37039 0.35858 0.3305 0.29205 0.26444 0.54096
BERT reranking 0.44285 0.24325 0.19999 0.13860 0.10391 0.07905 0.31552
Node2Vec core 0.44285 0.21205 0.15721 0.10224 0.06997 0.04739 0.33174
SPECTER (stem) 0.44285 0.28204 0.25158 0.19540 0.15484 0.12281 0.41963
eDisMax (stem) 0.44285 0.37126 0.35589 0.32939 0.29697 0.26580 0.54744
36. Summary
• CORD-19 corpus with incomplete judgement data (they are continuing to add to it based on results from submitted systems).
• eDismax appears to do OK...
  • Recall suffers due to term mismatching, beyond basic synonyms.
  • Query intent is represented by a single limiting query clause.
  • The question and narrative descriptors provide much more natural text for embeddings to work from.
• Graph metrics for importance may have limited application depending on the user task.
• Incomplete judgement data makes NDCG questionable...
  • insufficient information on sampling to apply infNDCG.
• Open question over whether embeddings capture a general sense of semantic equivalence vs. concept identity (synonyms).
37. Future Work
• Experiments based on Scopus and our own judgement data.
• Application of graph metrics, including more than just the basic citation graph.
• Investigation of fine-tuned embeddings combining text and graph.
• Apply ML based reranking.
• Investigate the balance between concept, semantic, freshness and importance signals.