The document discusses Boolean retrieval models and indexing in information retrieval systems. It describes how a Boolean retrieval model views documents as sets of words and uses Boolean operators like AND, OR, and NOT to join query terms. An inverted index data structure is built on the text to speed up searches by storing, for each term, the documents that contain it. The document provides examples of how Boolean queries are processed by intersecting postings lists from the inverted index to retrieve matching documents in linear time. It also discusses some limitations of the Boolean model and advantages of ranking search results.
1. Boolean retrieval & basics of indexing
Modern Information Retrieval
University of Qom
Z. Imanimehr
Spring 2023
2. Boolean retrieval model
Query: Boolean expressions
Boolean queries use AND, OR and NOT to join query terms
Views each doc as a set of words
A term-incidence matrix is sufficient: it shows the presence or absence of each term in each doc
Perhaps the simplest model to build an IR system on
3. Boolean queries: Exact match
In the pure Boolean model, retrieved docs are not ranked
The result is a set of docs
Matching is precise/exact: a doc either satisfies the condition or it does not
Primary commercial retrieval tool for 3 decades (until the 1990s)
Many search systems you still use are Boolean:
Email, library catalog, Mac OS X Spotlight
Sec. 1.3
4. The classic search model
Task → Info need → Query → SEARCH ENGINE (over a Corpus) → Results,
with a Query Refinement loop feeding back into the query
Example: Task: get rid of mice in a politically correct way
Info need: info about removing mice without killing them
Query: mouse trap
A misconception can arise between task and info need;
a misformulation between info need and query
5. Example: Plays of Shakespeare
Which plays of Shakespeare contain the words Brutus AND Caesar but NOT Calpurnia?
Naive approach: scan all of Shakespeare's plays for Brutus and Caesar, then strip out those containing Calpurnia
This cannot be the answer for large corpora (computationally expensive)
Efficiency is an important issue, along with effectiveness
Index: a data structure built on the text to speed up searches
Sec. 1.1
6. Example: Plays of Shakespeare
Term-document incidence matrix (1 if the play contains the word, 0 otherwise)
            Antony&Cleopatra  Julius Caesar  The Tempest  Hamlet  Othello  Macbeth
Antony             1               1             0          0        0        1
Brutus             1               1             0          1        0        0
Caesar             1               1             0          1        1        1
Calpurnia          0               1             0          0        0        0
Cleopatra          1               0             0          0        0        0
mercy              1               0             1          1        1        1
worser             1               0             1          1        1        0
Sec. 1.1
7. Incidence vectors
So we have a 0/1 vector for each term
Query: Brutus AND Caesar but NOT Calpurnia
To answer the query: take the vectors for Brutus, Caesar, and Calpurnia (complemented) and bitwise AND them:
110100 AND 110111 AND 101111 = 100100
Sec. 1.1
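The bitwise-AND recipe above can be sketched in a few lines. This is a minimal illustration: the term names and 6-bit vectors come from the incidence matrix on the previous slide, while the `answer` helper and bit layout (leftmost column = most significant bit) are assumptions for the sketch.

```python
# Incidence vectors as Python ints; leftmost matrix column is the MSB.
TERMS = {
    "Brutus":    0b110100,
    "Caesar":    0b110111,
    "Calpurnia": 0b010000,
}
N_DOCS = 6  # Antony&Cleopatra, Julius Caesar, Tempest, Hamlet, Othello, Macbeth

def answer(query_and=(), query_not=()):
    """AND together the vectors of query_and; complement those in query_not."""
    all_docs = (1 << N_DOCS) - 1
    result = all_docs                      # start with every doc
    for t in query_and:
        result &= TERMS[t]
    for t in query_not:
        result &= ~TERMS[t] & all_docs     # bitwise complement, masked to N_DOCS
    return result

# Brutus AND Caesar AND NOT Calpurnia
print(bin(answer(("Brutus", "Caesar"), ("Calpurnia",))))  # 0b100100
```

The two set bits pick out Antony and Cleopatra and Hamlet, matching the slide.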
8. Answers to query: Brutus AND Caesar but NOT Calpurnia
Antony and Cleopatra, Act III, Scene ii
Agrippa [Aside to DOMITIUS ENOBARBUS]: Why, Enobarbus,
When Antony found Julius Caesar dead,
He cried almost to roaring; and he wept
When at Philippi he found Brutus slain.
Hamlet, Act III, Scene ii
Lord Polonius: I did enact Julius Caesar: I was killed i' the
Capitol; Brutus killed me.
Sec. 1.1
9. Bigger collections
Number of docs: N = 10^6
Average length of a doc ≈ 1000 words
Number of distinct terms: M = 500,000
Average length of a word ≈ 6 bytes (including spaces/punctuation)
Total: about 6 GB of data
Sec. 1.1
10. Sparsity of the term-document incidence matrix
A 500K x 1M matrix has half a trillion 0's and 1's
But it has no more than one billion 1's
(why? at most 10^6 docs x 1000 words = 10^9 term occurrences)
The matrix is extremely sparse: at least 99.8% of the cells are zero
What's a better representation? Record only the 1 positions
Sec. 1.1
11. Inverted index
For each term t, store a list of all docs that contain t
Identify each doc by a docID, a document serial number
Can we use fixed-size arrays for this?
Brutus    → 1 2 4 11 31 45 173 174
Caesar    → 1 2 4 5 6 16 57 132
Calpurnia → 2 31 54 101
What happens if the word Caesar is added to doc 14?
Sec. 1.2
12. Inverted index
We need variable-size postings lists
On disk, a continuous run of postings is normal and best
In memory, can use linked lists or variable length arrays
Some tradeoffs in size/ease of insertion
Dictionary → Postings (each entry in a postings list is a posting; postings are sorted by docID)
Brutus    → 1 2 4 11 31 45 173 174
Caesar    → 1 2 4 5 6 16 57 132
Calpurnia → 2 31 54 101
Sec. 1.2
13. Inverted index construction
Docs to be indexed: Friends, Romans, countrymen.
Tokenizer → token stream: Friends Romans Countrymen
Linguistic modules → modified tokens: friend roman countryman
(We will see more on these later)
Indexer → inverted index:
friend     → 2 4
roman      → 1 2
countryman → 13 16
Sec. 1.2
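The tokenizer and linguistic-module stages above can be sketched as follows. This is a toy illustration: the regex tokenizer and one-letter suffix stripper are hypothetical stand-ins for real tokenization and stemming, so unlike a real stemmer this sketch does not reduce "countrymen" to "countryman".

```python
import re

def tokenize(text):
    # Split on anything that is not a letter (toy tokenizer).
    return re.findall(r"[A-Za-z]+", text)

def normalize(token):
    # Lowercase, then strip a trailing "s" (crude stand-in for stemming).
    token = token.lower()
    if token.endswith("s") and len(token) > 3:
        token = token[:-1]
    return token

tokens = [normalize(t) for t in tokenize("Friends, Romans, countrymen.")]
print(tokens)  # ['friend', 'roman', 'countrymen']
```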
14. Indexer steps: Token sequence
Sequence of (modified token, document ID) pairs
Doc 1: I did enact Julius Caesar I was killed i' the Capitol; Brutus killed me.
Doc 2: So let it be with Caesar. The noble Brutus hath told you Caesar was ambitious
Sec. 1.2
16. Indexer steps: Dictionary & Postings
Multiple term entries in a single doc are merged
Split into Dictionary and Postings
Document frequency information is added
(Why frequency? Will discuss later)
Sec. 1.2
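The indexer steps above can be sketched end to end: collect (term, docID) pairs, sort them, merge duplicate entries, and record the document frequency next to each postings list. The `build_index` helper and the whitespace/punctuation handling are illustrative assumptions, not the slides' actual implementation; the two docs are the Julius Caesar snippets from slide 14.

```python
from collections import defaultdict

docs = {
    1: "I did enact Julius Caesar I was killed i' the Capitol; Brutus killed me.",
    2: "So let it be with Caesar. The noble Brutus hath told you Caesar was ambitious",
}

def build_index(docs):
    pairs = []                                   # sequence of (term, docID) pairs
    for doc_id, text in docs.items():
        for token in text.lower().split():
            pairs.append((token.strip(".;,'"), doc_id))
    pairs.sort()                                 # sort by term, then docID
    index = defaultdict(list)
    for term, doc_id in pairs:
        if not index[term] or index[term][-1] != doc_id:
            index[term].append(doc_id)           # merge duplicates within a doc
    # dictionary maps term -> (document frequency, postings list)
    return {term: (len(plist), plist) for term, plist in index.items()}

index = build_index(docs)
print(index["brutus"])  # (2, [1, 2])
print(index["caesar"])  # (2, [1, 2])
```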
17. Where do we pay in storage?
Dictionary: terms and counts
Postings: lists of docIDs, plus the pointers from dictionary entries to their postings
Sec. 1.2
19. Query processing: AND
Consider processing the query: Brutus AND Caesar
Locate Brutus in the dictionary; retrieve its postings
Locate Caesar in the dictionary; retrieve its postings
"Merge" (intersect) the two postings lists:
Brutus → 2 4 8 16 32 64 128
Caesar → 1 2 3 5 8 13 21 34
Sec. 1.3
20. The merge
Walk through the two postings lists simultaneously, in time linear in the total number of postings entries
If list lengths are x and y, the merge takes O(x+y) operations
Crucial: postings sorted by docID
Brutus → 2 4 8 41 48 64 128
Caesar → 1 2 3 8 11 17 21 31
Result → 2 8
Sec. 1.3
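The merge above is a two-pointer walk: each comparison advances at least one pointer, which is why the cost is O(x+y). A minimal sketch (the `intersect` name is illustrative; the postings are the Brutus/Caesar lists from this slide):

```python
def intersect(p1, p2):
    """Intersect two docID-sorted postings lists in O(len(p1)+len(p2))."""
    answer, i, j = [], 0, 0
    while i < len(p1) and j < len(p2):
        if p1[i] == p2[j]:
            answer.append(p1[i])   # docID in both lists
            i += 1
            j += 1
        elif p1[i] < p2[j]:
            i += 1                 # advance the pointer at the smaller docID
        else:
            j += 1
    return answer

brutus = [2, 4, 8, 41, 48, 64, 128]
caesar = [1, 2, 3, 8, 11, 17, 21, 31]
print(intersect(brutus, caesar))  # [2, 8]
```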
22. Boolean queries: More general merges
Exercise: adapt the merge for the queries:
Brutus AND NOT Caesar
Brutus OR NOT Caesar
Can we still run through the merge in time O(x + y)?
Sec. 1.3
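One possible adaptation for the AND NOT case (a sketch, not the book's reference code): keep a docID from the first list only when it is absent from the second, still in a single O(x+y) pass. Note that Brutus OR NOT Caesar is harder: NOT Caesar on its own denotes nearly the whole collection, so it cannot be answered by merging two short postings lists.

```python
def and_not(p1, p2):
    """Docs in postings list p1 but not in p2; both sorted by docID."""
    answer, i, j = [], 0, 0
    while i < len(p1):
        if j == len(p2) or p1[i] < p2[j]:
            answer.append(p1[i])   # nothing in p2 can match this docID
            i += 1
        elif p1[i] == p2[j]:
            i += 1                 # excluded by p2
            j += 1
        else:
            j += 1
    return answer

# Brutus AND NOT Caesar, with the slide-20 postings:
print(and_not([2, 4, 8, 41, 48, 64, 128], [1, 2, 3, 8, 11, 17, 21, 31]))
# [4, 41, 48, 64, 128]
```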
23. Merging
What about an arbitrary Boolean formula?
(Brutus OR Caesar) AND NOT (Antony OR Cleopatra)
Can we merge in "linear" time for general Boolean queries? Linear in what?
Can we do better?
Sec. 1.3
24. Query optimization
What is the best order for query processing?
Consider a query that is an AND of n terms
For each of the n terms, get its postings, then AND them together
Query: Brutus AND Calpurnia AND Caesar
Brutus    → 1 2 3 5 8 16 21 34
Caesar    → 2 4 8 16 32 64 128
Calpurnia → 13 16
Sec. 1.3
25. Query optimization example
Process terms in order of increasing document frequency:
start with the smallest set, then keep cutting further
(This is why we kept document frequency in the dictionary)
Execute the query as (Calpurnia AND Brutus) AND Caesar
Brutus    → 1 2 3 5 8 16 21 34
Caesar    → 2 4 8 16 32 64 128
Calpurnia → 13 16
Sec. 1.3
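The heuristic above can be sketched by sorting the terms by postings-list length (a proxy for document frequency) and intersecting rarest-first. The `and_query` helper is illustrative; the postings come from this slide, and the intersect routine is the two-pointer merge from slide 20.

```python
def intersect(p1, p2):
    """Two-pointer intersection of docID-sorted postings lists."""
    answer, i, j = [], 0, 0
    while i < len(p1) and j < len(p2):
        if p1[i] == p2[j]:
            answer.append(p1[i]); i += 1; j += 1
        elif p1[i] < p2[j]:
            i += 1
        else:
            j += 1
    return answer

postings = {
    "Brutus":    [1, 2, 3, 5, 8, 16, 21, 34],
    "Caesar":    [2, 4, 8, 16, 32, 64, 128],
    "Calpurnia": [13, 16],
}

def and_query(terms, postings):
    # Process in order of increasing document frequency (shortest list first),
    # so intermediate results shrink as early as possible.
    terms = sorted(terms, key=lambda t: len(postings[t]))
    result = postings[terms[0]]
    for t in terms[1:]:
        result = intersect(result, postings[t])
    return result

print(and_query(["Brutus", "Calpurnia", "Caesar"], postings))  # [16]
```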
26. More general optimization
Example: (madding OR crowd) AND (ignoble OR strife)
Get doc frequencies for all terms
Estimate the size of each OR by the sum of its doc frequencies (a conservative upper bound)
Process in increasing order of OR sizes
Sec. 1.3
27. Exercise
Recommend a query processing order for:
(tangerine OR trees) AND (marmalade OR skies) AND (kaleidoscope OR eyes)
Which two terms should we process first?
Term          Freq
eyes          213312
kaleidoscope   87009
marmalade     107913
skies         271658
tangerine      46653
trees         316812
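One way to work the exercise numerically, applying the estimate from slide 26: sum the document frequencies inside each OR and process the smallest estimate first. The `freq` values come from the table above; the `clauses` list and sorting code are a sketch.

```python
freq = {"eyes": 213312, "kaleidoscope": 87009, "marmalade": 107913,
        "skies": 271658, "tangerine": 46653, "trees": 316812}

clauses = [("tangerine", "trees"), ("marmalade", "skies"), ("kaleidoscope", "eyes")]

# Conservative size estimate for each OR: sum of its doc frequencies.
order = sorted(clauses, key=lambda c: freq[c[0]] + freq[c[1]])
for a, b in order:
    print(a, "OR", b, "-> estimated size", freq[a] + freq[b])
```

The smallest estimate is kaleidoscope OR eyes (87009 + 213312 = 300321), so those two terms should be processed first.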
28. Summary of Boolean IR: Advantages of exact match
Can be implemented very efficiently
Predictable, easy to explain
Precise semantics
Structured queries for pinpointing precise docs
Neat formalism
Works well when you know exactly (or roughly) what the collection contains and what you're looking for
29. Summary of Boolean IR: Disadvantages of the Boolean model
Query formulation (as a Boolean expression) is difficult for most users
Most users write overly simplistic Boolean queries
AND and OR sit at opposite extremes of the precision/recall tradeoff:
usually either too few or too many docs are returned for a query
Retrieval is based on a binary decision criterion:
no ranking of the docs is provided
Difficulty increases with collection size
30. Ranking results in advanced IR models
Boolean queries give only inclusion or exclusion of docs;
the result of a query in the Boolean model is a set
Modern information retrieval systems are no longer based on the Boolean model
Often we want to rank/group results:
we need to measure the proximity of each doc to the query
Index term weighting can provide a substantial improvement
32. Phrase queries
Example: "stanford university"
"I went to university at Stanford" is not a match
Easily understood by users
One of the few "advanced search" ideas that works
At least 10% of web queries are phrase queries
Many more are implicit phrase queries,
such as person names entered without double quotes
It is not sufficient to store only the doc IDs in the postings lists
Sec. 2.4
33. Approaches for phrase queries
Biword indexes (two-word phrases)
Positional indexes
Full inverted index
34. Biword indexes
Index every consecutive pair of terms in the text as a phrase
E.g., the doc "Friends, Romans, Countrymen" would generate these biwords:
"friends romans", "romans countrymen"
Each of these biwords is now a dictionary term
Two-word phrase query processing is now immediate
Sec. 2.4.1
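Biword generation is a one-line pairing of each token with its successor. A minimal sketch (the `biwords` helper and its crude punctuation stripping are illustrative assumptions):

```python
def biwords(text):
    """Every consecutive pair of normalized tokens, as 'a b' strings."""
    tokens = [t.strip(",.").lower() for t in text.split()]
    return [f"{a} {b}" for a, b in zip(tokens, tokens[1:])]

print(biwords("Friends, Romans, Countrymen"))
# ['friends romans', 'romans countrymen']
```

Each returned string would then be treated as an ordinary dictionary term with its own postings list.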
35. Biword indexes: Longer phrase queries
Longer phrases are processed as a conjunction of biwords
Query: "stanford university palo alto"
can be broken into the Boolean query on biwords:
"stanford university" AND "university palo" AND "palo alto"
Can have false positives!
Without examining the docs, we cannot verify that the docs matching
the Boolean query actually contain the phrase
Sec. 2.4.1
36. Issues for biword indexes
False positives (for phrases with more than two words)
Index blowup due to a bigger dictionary:
infeasible for anything longer than biwords, and big even for biwords
Biword indexes are not the standard solution (for all biwords)
but can be part of a compound strategy
Sec. 2.4.1
37. Positional index
In the postings, store for each term the position(s) at which its tokens appear:
<term, doc freq.;
doc1: position1, position2 ... ;
doc2: position1, position2 ... ; ...>
Example:
<be: 993427;
1: 7, 18, 33, 72, 86, 231;
2: 3, 149;
4: 17, 191, 291, 430, 434;
5: 363, 367, ...>
Which of docs 1, 2, 4, 5 could contain "to be or not to be"?
Sec. 2.4.2
38. Positional index
For phrase queries, we use a merge algorithm recursively at the doc level
We need to deal with more than just equality of docIDs:
Phrase query: find places where all the words appear in sequence
Proximity query: find places where all the words are close enough
Sec. 2.4.2
39. Processing a phrase query: Example
Query: "to be or not to be"
Extract inverted index entries for: to, be, or, not
Merge: find positions i with "to" at i and i+4, "be" at i+1 and i+5, "or" at i+2, "not" at i+3
to:  <2: 1, 17, 74, 222, 551>; <4: 8, 16, 190, 429, 433, 512>; <7: 13, 23, 191>; ...
be:  <1: 17, 19>; <4: 17, 191, 291, 430, 434>; <5: 14, 19, 101>; ...
or:  <3: 5, 15, 19>; <4: 5, 100, 251, 431, 438>; <7: 17, 52, 121>; ...
not: <4: 71, 432>; <6: 20, 85>; ...
Sec. 2.4.2
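The positional merge above can be sketched as: intersect the docID sets, then for each surviving doc look for a starting position i such that the k-th query term occurs at i + k. The `phrase_query` helper and the nested-dict index layout are illustrative assumptions; the positions are the to/be/or/not lists from this slide.

```python
# term -> {docID: sorted list of positions}
positional = {
    "to":  {2: [1, 17, 74, 222, 551], 4: [8, 16, 190, 429, 433, 512], 7: [13, 23, 191]},
    "be":  {1: [17, 19], 4: [17, 191, 291, 430, 434], 5: [14, 19, 101]},
    "or":  {3: [5, 15, 19], 4: [5, 100, 251, 431, 438], 7: [17, 52, 121]},
    "not": {4: [71, 432], 6: [20, 85]},
}

def phrase_query(terms, index):
    """Return (docID, start_position) pairs where terms appear consecutively."""
    # Docs containing every query term (doc-level intersection).
    common = set.intersection(*(set(index[t]) for t in terms))
    hits = []
    for doc in sorted(common):
        for i in index[terms[0]][doc]:
            # Position-level check: k-th term must sit at offset i + k.
            if all(i + k in index[t][doc] for k, t in enumerate(terms[1:], 1)):
                hits.append((doc, i))
    return hits

print(phrase_query(["to", "be", "or", "not", "to", "be"], positional))
# [(4, 429)]
```

Only doc 4 contains all four terms, and within it only the run starting at position 429 lines up, matching the slide-37 question.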
40. Positional index: Proximity queries
k-word proximity searches:
find places where the words occur within k words of each other
Positional indexes can be used for such queries (biword indexes cannot)
Exercise: adapt the linear merge of postings to handle proximity queries.
Can you make it work for any value of k?
Sec. 2.4.2
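A possible answer sketch for the exercise, for the two-term case: two terms are within k words when some pair of their positions differs by at most k, which a linear two-pointer walk over the sorted position lists can detect. The `within_k` helper is hypothetical; the position lists in the example are the "be" and "or" entries for doc 4 from slide 39.

```python
def within_k(pos1, pos2, k):
    """True if some position in pos1 is within k of some position in pos2.

    Both lists are sorted; each step advances the pointer at the smaller
    position, so the walk is O(len(pos1) + len(pos2)).
    """
    i, j = 0, 0
    while i < len(pos1) and j < len(pos2):
        if abs(pos1[i] - pos2[j]) <= k:
            return True
        if pos1[i] < pos2[j]:
            i += 1
        else:
            j += 1
    return False

print(within_k([17, 191, 291, 430, 434], [5, 100, 251, 431, 438], k=1))  # True
```

With k = 1 this reports True because positions 430 and 431 are adjacent; a phrase query is the special case where the offsets must match the query order exactly.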
42. Positional index: size
You can compress position values/offsets
Nevertheless, a positional index expands postings storage substantially
Positional indexes are now the standard,
because of the power and usefulness of phrase and proximity queries,
used explicitly or implicitly in ranking retrieval systems
Sec. 2.4.2
43. Positional index: size
Need an entry for each occurrence, not just once per doc
Index size therefore depends on average doc size (why?)
Average web page has < 1000 terms
SEC filings, books, even some epic poems: easily 100,000 terms
Consider a term with frequency 0.1%:
Doc size (# of terms)   Expected postings   Expected entries in positional postings
1000                    1                   1
100,000                 1                   100
Sec. 2.4.2
44. Positional index: size (rules of thumb)
A positional index is usually 2–4 times as large as a non-positional index
Positional index size is 35–50% of the volume of the original text
Caveat: all of this holds for "English-like" languages
Sec. 2.4.2
45. Phrase queries: Combination schemes
Combine the two approaches
For queries like "Michael Jordan", it is inefficient to merge positional postings lists
Good queries to include in the phrase (biword) index:
common queries, based on recent querying behavior
phrases whose individual words are common but whose phrase is not,
e.g., "The Who"
Sec. 2.4.3
46. Phrase queries: Combination schemes
Williams et al. (2004) evaluate a more sophisticated mixed indexing scheme:
needs on average 1/4 of the time of using just a positional index
needs 26% more space than a positional index alone