Boolean retrieval & basics of indexing
Modern Information Retrieval
University of Qom
Z. Imanimehr
Spring 2023
Boolean retrieval model
• Query: Boolean expressions
• Boolean queries use AND, OR, and NOT to join query terms
• Views each doc as a set of words
• A term-document incidence matrix is sufficient: it shows the presence or absence of each term in each doc
• Perhaps the simplest model to build an IR system on
Boolean queries: Exact match (Sec. 1.3)
• In the pure Boolean model, retrieved docs are not ranked
• The result is a set of docs
• It is precise, exact matching: a doc either matches the condition or it does not
• Primary commercial retrieval tool for three decades, until the 1990s
• Many search systems you still use are Boolean:
  • Email, library catalogs, Mac OS X Spotlight
The classic search model
• Task → Info need → Query → SEARCH ENGINE (over a corpus) → Results, with query refinement feeding back into the query
• Example: Task: get rid of mice in a politically correct way → Info need: info about removing mice without killing them → Query: mouse trap
• Misconception? (the info need may not capture the task) Misformulation? (the query may not capture the info need)
Example: Plays of Shakespeare (Sec. 1.1)
• Which plays of Shakespeare contain the words Brutus AND Caesar but NOT Calpurnia?
• Naïve solution: scan all of Shakespeare's plays for Brutus and Caesar, then strip out those containing Calpurnia?
• This cannot be the answer for large corpora: it is computationally expensive
• Efficiency is also an important issue, along with effectiveness
• Index: a data structure built on the text to speed up searches
Example: Plays of Shakespeare (Sec. 1.1)
Term-document incidence matrix (1 if the play contains the word, 0 otherwise):

            Antony&Cleopatra  Julius Caesar  The Tempest  Hamlet  Othello  Macbeth
Antony             1                1             0          0       0        1
Brutus             1                1             0          1       0        0
Caesar             1                1             0          1       1        1
Calpurnia          0                1             0          0       0        0
Cleopatra          1                0             0          0       0        0
mercy              1                0             1          1       1        1
worser             1                0             1          1       1        0
Incidence vectors (Sec. 1.1)
• So we have a 0/1 vector for each term (a row of the matrix above)
• Query: Brutus AND Caesar but NOT Calpurnia
• To answer the query: take the vectors for Brutus, Caesar, and Calpurnia (complemented) → bitwise AND
• 110100 AND 110111 AND 101111 = 100100
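A minimal sketch of this query as bitwise operations (Python; the vectors are hand-built from the matrix above, one bit per play):

    # Bit order: Antony&Cleopatra, Julius Caesar, The Tempest, Hamlet, Othello, Macbeth
    brutus    = 0b110100
    caesar    = 0b110111
    calpurnia = 0b010000

    # Brutus AND Caesar AND NOT Calpurnia: complement within 6 bits, then AND
    answer = brutus & caesar & (~calpurnia & 0b111111)
    print(f"{answer:06b}")  # 100100 -> Antony&Cleopatra and Hamlet match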
Answers to query (Sec. 1.1)
Query: Brutus AND Caesar but NOT Calpurnia
• Antony and Cleopatra, Act III, Scene ii
  Agrippa [Aside to DOMITIUS ENOBARBUS]: Why, Enobarbus,
  When Antony found Julius Caesar dead,
  He cried almost to roaring; and he wept
  When at Philippi he found Brutus slain.
• Hamlet, Act III, Scene ii
  Lord Polonius: I did enact Julius Caesar I was killed i' the Capitol; Brutus killed me.
Bigger collections (Sec. 1.1)
• Number of docs: N = 10^6
• Average length of a doc ≈ 1000 words
• Number of distinct terms: M = 500,000
• Average length of a word ≈ 6 bytes (including spaces/punctuation)
• So about 6 GB of data: 10^6 docs × 1000 words × 6 bytes
Sparsity of the term-document incidence matrix (Sec. 1.1)
• A 500K × 1M matrix has half a trillion 0's and 1's
• But it has no more than one billion 1's. Why? 10^6 docs × 1000 tokens each gives at most 10^9 distinct (term, doc) pairs
• The matrix is extremely sparse: at least 99.8% of the cells are zero
• What's a better representation?
• We record only the 1 positions
Inverted index (Sec. 1.2)
• For each term t, store a list of all docs that contain t
• Identify each doc by a docID, a document serial number
• Can we use fixed-size arrays for this?

    Brutus    → 1 2 4 11 31 45 173 174
    Caesar    → 1 2 4 5 6 16 57 132
    Calpurnia → 2 31 54 101

• What happens if the word Caesar is added to doc 14?
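A minimal in-memory sketch (Python, using the toy postings above) of why fixed-size arrays are awkward here:

    import bisect

    # Inverted index: term -> sorted list of docIDs
    index = {
        "brutus":    [1, 2, 4, 11, 31, 45, 173, 174],
        "caesar":    [1, 2, 4, 5, 6, 16, 57, 132],
        "calpurnia": [2, 31, 54, 101],
    }

    # Adding "caesar" to doc 14 means inserting into sorted position --
    # trivial for a dynamic list, a reallocation problem for fixed-size arrays.
    bisect.insort(index["caesar"], 14)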
Inverted index (Sec. 1.2)
• We need variable-size postings lists
  • On disk, a contiguous run of postings is normal and best
  • In memory, can use linked lists or variable-length arrays
  • Some tradeoffs in size/ease of insertion
• The dictionary maps each term to its postings list; each element of a list is a posting, and each list is sorted by docID (postings as above)
Inverted index construction (Sec. 1.2)
Docs to be indexed ("Friends, Romans, countrymen.")
  → Tokenizer → token stream: Friends Romans Countrymen
  → Linguistic modules → modified tokens: friend roman countryman
  → Indexer → inverted index: friend → 2, 4; roman → 1, 2; countryman → 13, 16
We will see more on tokenization and linguistic modules later.
Indexer steps: Token sequence (Sec. 1.2)
• Produce a sequence of (modified token, docID) pairs.
Doc 1: I did enact Julius Caesar I was killed i' the Capitol; Brutus killed me.
Doc 2: So let it be with Caesar. The noble Brutus hath told you Caesar was ambitious
Indexer steps: Sort (Sec. 1.2)
• Sort the pairs by term, and then by docID
• This is the core indexing step
Indexer steps: Dictionary & Postings (Sec. 1.2)
• Multiple entries of a term in a single doc are merged
• Split the result into a dictionary and postings
• Document frequency information is added (why frequency? we will discuss later)
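A compact sketch of these three indexer steps (Python; whitespace tokenization and lowercasing stand in for real linguistic modules):

    from collections import defaultdict

    docs = {
        1: "I did enact Julius Caesar I was killed i' the Capitol; Brutus killed me.",
        2: "So let it be with Caesar. The noble Brutus hath told you Caesar was ambitious",
    }

    # Step 1: sequence of (modified token, docID) pairs
    pairs = []
    for doc_id, text in docs.items():
        for token in text.lower().replace(";", " ").replace(".", " ").split():
            pairs.append((token, doc_id))

    # Step 2: sort by term, then by docID -- the core indexing step
    pairs.sort()

    # Step 3: merge duplicate (term, docID) pairs into postings lists;
    # document frequency is simply the length of each postings list
    postings = defaultdict(list)
    for term, doc_id in pairs:
        if not postings[term] or postings[term][-1] != doc_id:
            postings[term].append(doc_id)

    for term in sorted(postings):
        print(term, len(postings[term]), postings[term])  # term, df, postings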
Where do we pay in storage? (Sec. 1.2)
• The dictionary: terms and counts
• Pointers from dictionary entries to postings lists
• The postings: lists of docIDs
A naïve dictionary (Sec. 3.1)
• An array of structs, one per term:
  char[20] (the term) | int (document frequency) | Postings * (pointer to postings list)
Query processing: AND (Sec. 1.3)
• Consider processing the query: Brutus AND Caesar
• Locate Brutus in the dictionary; retrieve its postings
• Locate Caesar in the dictionary; retrieve its postings
• "Merge" (intersect) the two postings lists:

    Brutus → 2 4 8 16 32 64 128
    Caesar → 1 2 3 5 8 13 21 34
The merge (Sec. 1.3)
• Walk through the two postings lists simultaneously, in time linear in the total number of postings entries
• If the list lengths are x and y, the merge takes O(x + y) operations
• Crucial: postings are sorted by docID

    Brutus → 2 4 8 41 48 64 128
    Caesar → 1 2 3 8 11 17 21 31
    Result → 2 8
Intersecting two postings lists (a "merge" algorithm)
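The algorithm itself did not survive the export; here is a sketch of the standard two-pointer intersection described above (Python):

    def intersect(p1, p2):
        """Intersect two postings lists sorted by docID in O(x + y) time."""
        answer = []
        i = j = 0
        while i < len(p1) and j < len(p2):
            if p1[i] == p2[j]:
                answer.append(p1[i])   # docID in both lists
                i += 1
                j += 1
            elif p1[i] < p2[j]:
                i += 1                 # advance the pointer behind
            else:
                j += 1
        return answer

    print(intersect([2, 4, 8, 41, 48, 64, 128],
                    [1, 2, 3, 8, 11, 17, 21, 31]))  # [2, 8]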
Boolean queries: More general merges (Sec. 1.3)
• Exercise: adapt the merge for the queries:
  Brutus AND NOT Caesar
  Brutus OR NOT Caesar
• Can we still run through the merge in time O(x + y)?
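One possible adaptation for the AND NOT case (a sketch; the OR NOT case is genuinely harder, since NOT Caesar by itself matches almost the whole collection):

    def intersect_and_not(p1, p2):
        """docIDs in p1 but not in p2; still O(x + y), lists sorted by docID."""
        answer = []
        i = j = 0
        while i < len(p1):
            if j == len(p2) or p1[i] < p2[j]:
                answer.append(p1[i])   # p1[i] cannot appear in p2
                i += 1
            elif p1[i] == p2[j]:
                i += 1                 # excluded: present in p2
                j += 1
            else:
                j += 1                 # advance p2 until it catches up
        return answer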
Merging (Sec. 1.3)
What about an arbitrary Boolean formula?
  (Brutus OR Caesar) AND NOT (Antony OR Cleopatra)
• Can we merge in "linear" time for general Boolean queries?
• Linear in what?
• Can we do better?
Query optimization (Sec. 1.3)
• What is the best order for query processing?
• Consider a query that is an AND of n terms: for each of the n terms, get its postings, then AND them together
• Query: Brutus AND Calpurnia AND Caesar

    Brutus    → 2 4 8 16 32 64 128
    Caesar    → 1 2 3 5 8 16 21 34
    Calpurnia → 13 16
Query optimization example (Sec. 1.3)
• Process terms in order of increasing document frequency:
  • start with the smallest set, then keep cutting it further
• This is why we kept document frequency in the dictionary
• Execute the query as (Calpurnia AND Brutus) AND Caesar (postings as above)
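A sketch of frequency-ordered AND processing (Python, reusing the intersect sketch above; document frequency approximated by postings-list length):

    def intersect_many(postings_lists):
        """AND together n postings lists, rarest term first."""
        lists = sorted(postings_lists, key=len)   # increasing doc frequency
        result = lists[0]
        for plist in lists[1:]:
            if not result:        # early exit: intersection already empty
                break
            result = intersect(result, plist)
        return result

    brutus    = [2, 4, 8, 16, 32, 64, 128]
    caesar    = [1, 2, 3, 5, 8, 16, 21, 34]
    calpurnia = [13, 16]
    print(intersect_many([brutus, caesar, calpurnia]))  # [16]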
More general optimization (Sec. 1.3)
• Example: (madding OR crowd) AND (ignoble OR strife)
• Get doc frequencies for all terms
• Estimate the size of each OR by the sum of its terms' doc frequencies (a conservative upper bound)
• Process in increasing order of estimated OR sizes
Exercise
• Recommend a query processing order for
  (tangerine OR trees) AND (marmalade OR skies) AND (kaleidoscope OR eyes)
• Which two terms should we process first? (See the worked estimate below.)

    Term          Freq
    eyes          213,312
    kaleidoscope   87,009
    marmalade     107,913
    skies         271,658
    tangerine      46,653
    trees         316,812
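Using the heuristic from the previous slide (estimated OR size = sum of doc frequencies), one possible answer: tangerine OR trees ≈ 46,653 + 316,812 = 363,465; marmalade OR skies ≈ 107,913 + 271,658 = 379,571; kaleidoscope OR eyes ≈ 87,009 + 213,312 = 300,321. So process (kaleidoscope OR eyes) first, then (tangerine OR trees), then (marmalade OR skies); kaleidoscope and eyes are the first two terms to process.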
Summary of Boolean IR: Advantages of exact match
• Can be implemented very efficiently
• Predictable, easy to explain: precise semantics
• Structured queries for pinpointing precise docs: a neat formalism
• Works well when you know exactly (or roughly) what the collection contains and what you're looking for
Summary of Boolean IR: Disadvantages of the Boolean model
• Query formulation (as a Boolean expression) is difficult for most users
• Most users write overly simplistic Boolean queries
• AND and OR sit at opposite extremes of the precision/recall tradeoff:
  • usually either too few or too many docs in response to a user query
• Retrieval is based on a binary decision criterion: no ranking of the docs is provided
• The difficulty increases with collection size
Ranking results in advanced IR models
• Boolean queries give only inclusion or exclusion of docs; the result is an unordered set
• Modern information retrieval systems are no longer based on the Boolean model
• Often we want to rank/group results
• We need to measure the proximity of each doc to the query
• Index term weighting can provide a substantial improvement
Phrase and proximity queries: positional indexes
Phrase queries (Sec. 2.4)
• Example: "stanford university"
  • "I went to university at Stanford" is not a match
• Easily understood by users
• One of the few "advanced search" ideas that works
• At least 10% of web queries are phrase queries
• Many more queries are implicit phrase queries, such as person names entered without double quotes
• It is not sufficient to store only the docIDs in the postings lists
Approaches for phrase queries
• Indexing biwords (two-word phrases)
• Positional indexes
• Full inverted index
Biword indexes (Sec. 2.4.1)
• Index every consecutive pair of terms in the text as a phrase
• E.g., the doc "Friends, Romans, Countrymen" would generate the biwords:
  "friends romans", "romans countrymen"
• Each of these biwords is now a dictionary term
• Two-word phrase query processing is now immediate
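A toy sketch of biword generation (Python; the same simplified tokenization assumptions as before):

    def biwords(text):
        """Index every consecutive pair of terms as a single phrase term."""
        tokens = text.lower().replace(",", " ").split()
        return [f"{a} {b}" for a, b in zip(tokens, tokens[1:])]

    print(biwords("Friends, Romans, Countrymen"))
    # ['friends romans', 'romans countrymen']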
Biword indexes: Longer phrase queries (Sec. 2.4.1)
• Longer phrases are processed as a conjunction of biwords
• Query: "stanford university palo alto" can be broken into the Boolean query on biwords:
  "stanford university" AND "university palo" AND "palo alto"
• Can have false positives!
  • Without examining the docs, we cannot verify that the docs matching the Boolean query actually contain the phrase
Issues for biword indexes (Sec. 2.4.1)
• False positives (for phrases with more than two words)
• Index blowup due to a bigger dictionary
  • Infeasible for more than biwords, big even for biwords
• Biword indexes are not the standard solution (for all biwords) but can be part of a compound strategy
Positional index (Sec. 2.4.2)
• In the postings, store for each term the position(s) at which its tokens appear:
  <term, doc freq.;
   doc1: position1, position2 ... ;
   doc2: position1, position2 ... ; ...>
• Example:
  <be: 993427;
   1: 7, 18, 33, 72, 86, 231;
   2: 3, 149;
   4: 17, 191, 291, 430, 434;
   5: 363, 367, ...>
• Which of docs 1, 2, 4, 5 could contain "to be or not to be"?
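The same structure as a hypothetical Python literal (document frequency, then docID → sorted position list):

    # <term, df; doc1: pos1, pos2, ...; doc2: ...>
    positional_index = {
        "be": (993427, {          # document frequency, then docID -> positions
            1: [7, 18, 33, 72, 86, 231],
            2: [3, 149],
            4: [17, 191, 291, 430, 434],
            5: [363, 367],        # truncated in the original example
        }),
    }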
Positional index (Sec. 2.4.2)
• For phrase queries, we use a merge algorithm recursively: first over docIDs, then over positions within each matching doc
• We need to deal with more than just equality of docIDs:
  • Phrase query: find places where all the words appear in sequence
  • Proximity query: find places where all the words appear close enough together
Processing a phrase query: Example (Sec. 2.4.2)
• Query: "to be or not to be"
• Extract the inverted index entries for: to, be, or, not
• Merge: find all positions i where "to" appears at i and i+4, "be" at i+1 and i+5, "or" at i+2, and "not" at i+3
• to:  <2: 1, 17, 74, 222, 551>; <4: 8, 16, 190, 429, 433, 512>; <7: 13, 23, 191>; ...
• be:  <1: 17, 19>; <4: 17, 191, 291, 430, 434>; <5: 14, 19, 101>; ...
• or:  <3: 5, 15, 19>; <4: 5, 100, 251, 431, 438>; <7: 17, 52, 121>; ...
• not: <4: 71, 432>; <6: 20, 85>; ...
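A sketch of position-level matching for this query (Python; phrase_match is a hypothetical helper over the toy entries above, and a real system would merge the position lists incrementally rather than test every candidate offset):

    def phrase_match(index, phrase, doc_id):
        """Return start positions in doc_id where the phrase occurs."""
        words = phrase.split()
        starts = index[words[0]].get(doc_id, [])
        hits = []
        for i in starts:
            # the k-th word of the phrase must appear at position i + k
            if all((i + k) in index[w].get(doc_id, [])
                   for k, w in enumerate(words)):
                hits.append(i)
        return hits

    toy = {
        "to":  {2: [1, 17, 74, 222, 551], 4: [8, 16, 190, 429, 433, 512], 7: [13, 23, 191]},
        "be":  {1: [17, 19], 4: [17, 191, 291, 430, 434], 5: [14, 19, 101]},
        "or":  {3: [5, 15, 19], 4: [5, 100, 251, 431, 438], 7: [17, 52, 121]},
        "not": {4: [71, 432], 6: [20, 85]},
    }
    print(phrase_match(toy, "to be or not to be", 4))  # [429]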
Positional index: Proximity queries (Sec. 2.4.2)
• k-word proximity searches: find places where the words occur within k words of each other
• Positional indexes can be used for such queries, as opposed to biword indexes
• Exercise: adapt the linear merge of postings to handle proximity queries. Can you make it work for any value of k?
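One possible position-level building block for the exercise (a simplified sketch in the spirit of IIR's POSITIONALINTERSECT: a windowed two-pointer scan over two sorted position lists from the same doc, reporting pairs at distance ≤ k):

    def within_k(pos1, pos2, k):
        """Pairs of positions from two sorted lists at distance <= k."""
        matches = []
        j = 0
        for p in pos1:
            # skip positions of the second word more than k before p
            while j < len(pos2) and pos2[j] < p - k:
                j += 1
            m = j
            while m < len(pos2) and pos2[m] <= p + k:
                matches.append((p, pos2[m]))
                m += 1
        return matches

    print(within_k([3, 10], [1, 5, 9, 20], 2))  # [(3, 1), (3, 5), (10, 9)]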
Positional index: size (Sec. 2.4.2)
• You can compress position values/offsets
• Nevertheless, a positional index expands postings storage substantially
• Positional indexes are nonetheless now the standard, because of the power and usefulness of phrase and proximity queries, which are used explicitly or implicitly in ranking retrieval systems
Positional index: size (Sec. 2.4.2)
• Need an entry for each occurrence, not just one per doc
• Index size therefore depends on average doc size
  • The average web page has <1000 terms
  • SEC filings, books, even some epic poems easily reach 100,000 terms
• Consider a term with frequency 0.1%:

    Doc size (# of terms)   Expected postings   Expected entries in positional postings
    1000                    1                   1
    100,000                 1                   100

• Why? In a 1000-term doc the term is expected to occur once (one posting, one position entry); in a 100,000-term doc it is expected 100 times (still one posting, but 100 position entries).
Positional index: size (rules of thumb) (Sec. 2.4.2)
• A positional index is usually 2–4 times as large as a non-positional index
• Positional index size is 35–50% of the volume of the original text
• Caveat: all of this holds for "English-like" languages
Phrase queries: Combination schemes (Sec. 2.4.3)
• Combine the two approaches: biword and positional indexes
• For queries like "Michael Jordan", it is inefficient to merge positional postings lists
• Good phrases to include in the phrase (biword) index:
  • common queries, based on recent querying behavior
  • phrases whose individual words are common but whose phrase is not, e.g., "The Who"
Phrase queries: Combination schemes
• Williams et al. (2004) evaluate a more sophisticated mixed indexing scheme:
  • it needs on average ¼ of the time of using a positional index alone
  • it needs 26% more space than a positional index alone