Introduction to Information Retrieval
http://informationretrieval.org
IIR 1: Boolean Retrieval
Hinrich Schütze
Center for Information and Language Processing, University of Munich
2014-04-09
1 / 60
Take-away
Boolean Retrieval: Design and data structures of a simple
information retrieval system
What topics will be covered in this class?
Outline
1 Introduction
2 Inverted index
3 Processing Boolean queries
4 Query optimization
5 Course overview
Definition of information retrieval
Information retrieval (IR) is finding material (usually documents) of
an unstructured nature (usually text) that satisfies an information
need from within large collections (usually stored on computers).
Boolean retrieval
The Boolean model is arguably the simplest model to base an
information retrieval system on.
Queries are Boolean expressions, e.g., Caesar and Brutus
The search engine returns all documents that satisfy the
Boolean expression.
Does Google use the Boolean model?
Does Google use the Boolean model?
On Google, the default interpretation of a query [w1 w2
. . . wn] is w1 AND w2 AND . . . AND wn
Cases where you get hits that do not contain one of the wi :
anchor text
page contains variant of wi (morphology, spelling correction,
synonym)
long queries (n large)
Boolean expression generates very few hits
Simple Boolean vs. Ranking of result set
Simple Boolean retrieval returns matching documents in no
particular order.
Google (and most well-designed Boolean engines) rank the
result set – they rank good hits (according to some estimator
of relevance) higher than bad hits.
Unstructured data in 1650: Shakespeare
Unstructured data in 1650
Which plays of Shakespeare contain the words Brutus and
Caesar, but not Calpurnia?
One could grep all of Shakespeare’s plays for Brutus and
Caesar, then strip out lines containing Calpurnia.
Why is grep not the solution?
Slow (for large collections)
grep is line-oriented, IR is document-oriented
“not Calpurnia” is non-trivial
Other operations (e.g., find the word Romans near
countryman) not feasible
Term-document incidence matrix

            Anthony    Julius   The      Hamlet  Othello  Macbeth  . . .
            and        Caesar   Tempest
            Cleopatra
Anthony        1          1        0        0       0        1
Brutus         1          1        0        1       0        0
Caesar         1          1        0        1       1        1
Calpurnia      0          1        0        0       0        0
Cleopatra      1          0        0        0       0        0
mercy          1          0        1        1       1        1
worser         1          0        1        1       1        0
. . .

Entry is 1 if the term occurs. Example: Calpurnia occurs in Julius
Caesar. Entry is 0 if the term doesn't occur. Example: Calpurnia
doesn't occur in The Tempest.
Incidence vectors
So we have a 0/1 vector for each term.
To answer the query Brutus and Caesar and not
Calpurnia:
Take the vectors for Brutus, Caesar, and Calpurnia
Complement the vector of Calpurnia
Do a (bitwise) and on the three vectors
110100 and 110111 and 101111 = 100100
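The bitwise evaluation above can be run directly in Python by packing each term's incidence row into an integer (a minimal sketch; the bit patterns are the rows from the slide, most significant bit = Anthony and Cleopatra):

```python
# Incidence rows from the slide, as 6-bit integers (one bit per play)
brutus    = 0b110100
caesar    = 0b110111
calpurnia = 0b010000

n_docs = 6
mask = (1 << n_docs) - 1              # 0b111111: keep NOT within the 6 documents

# Brutus AND Caesar AND NOT Calpurnia
result = brutus & caesar & (~calpurnia & mask)
print(format(result, "06b"))          # -> 100100
```

The two 1 bits pick out Anthony and Cleopatra and Hamlet.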
0/1 vectors and result of bitwise operations

            Anthony    Julius   The      Hamlet  Othello  Macbeth  . . .
            and        Caesar   Tempest
            Cleopatra
Anthony        1          1        0        0       0        1
Brutus         1          1        0        1       0        0
Caesar         1          1        0        1       1        1
Calpurnia      0          1        0        0       0        0
Cleopatra      1          0        0        0       0        0
mercy          1          0        1        1       1        1
worser         1          0        1        1       1        0
. . .
result:        1          0        0        1       0        0
Answers to query
Anthony and Cleopatra, Act III, Scene ii
Agrippa [Aside to Domitius Enobarbus]: Why, Enobarbus,
When Antony found Julius Caesar dead,
He cried almost to roaring; and he wept
When at Philippi he found Brutus slain.
Hamlet, Act III, Scene ii
Lord Polonius: I did enact Julius Caesar: I was killed i’ the
Capitol; Brutus killed me.
Bigger collections

Consider N = 10^6 documents, each with about 1000 tokens
⇒ total of 10^9 tokens
On average 6 bytes per token, including spaces and punctuation
⇒ size of document collection is about 6 · 10^9 bytes = 6 GB
Assume there are M = 500,000 distinct terms in the collection
(Notice that we are making a term/token distinction.)
Can’t build the incidence matrix

The matrix would have M × N = 500,000 × 10^6 = half a trillion 0s and 1s.
But the matrix has no more than one billion 1s.
Matrix is extremely sparse.
What is a better representation?
We only record the 1s.
Inverted Index
For each term t, we store a list of all documents that contain t.
Brutus −→ 1 2 4 11 31 45 173 174
Caesar −→ 1 2 4 5 6 16 57 132 . . .
Calpurnia −→ 2 31 54 101
. . .
(left column: the dictionary of terms; right: the postings)
Inverted index construction
1 Collect the documents to be indexed:
Friends, Romans, countrymen. So let it be with Caesar . . .
2 Tokenize the text, turning each document into a list of tokens:
Friends Romans countrymen So . . .
3 Do linguistic preprocessing, producing a list of normalized
tokens, which are the indexing terms: friend roman
countryman so . . .
4 Index the documents that each term occurs in by creating an
inverted index, consisting of a dictionary and postings.
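The four steps can be sketched in a few lines of Python; the normalization here is just lowercasing (a stand-in for the real linguistic preprocessing), and the two documents are the example used on the following slides:

```python
import re
from collections import defaultdict

docs = {
    1: "I did enact Julius Caesar: I was killed i' the Capitol; Brutus killed me.",
    2: "So let it be with Caesar. The noble Brutus hath told you Caesar was ambitious:",
}

def tokenize(text):
    # Steps 2-3: split off punctuation and lowercase (crude normalization)
    return [t.lower() for t in re.findall(r"[a-zA-Z']+", text)]

# Step 4: term -> set of docIDs
index = defaultdict(set)
for doc_id, text in docs.items():
    for term in tokenize(text):
        index[term].add(doc_id)

postings = {term: sorted(ids) for term, ids in index.items()}
print(postings["caesar"], postings["capitol"])   # -> [1, 2] [1]
```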
Tokenization and preprocessing
Doc 1. I did enact Julius Caesar: I
was killed i’ the Capitol; Brutus killed
me.
Doc 2. So let it be with Caesar. The
noble Brutus hath told you Caesar
was ambitious:
=⇒
Doc 1. i did enact julius caesar i was
killed i’ the capitol brutus killed me
Doc 2. so let it be with caesar the
noble brutus hath told you caesar was
ambitious
Generate postings
Doc 1. i did enact julius caesar i was
killed i’ the capitol brutus killed me
Doc 2. so let it be with caesar the
noble brutus hath told you caesar was
ambitious
=⇒
term docID
i 1
did 1
enact 1
julius 1
caesar 1
i 1
was 1
killed 1
i’ 1
the 1
capitol 1
brutus 1
killed 1
me 1
so 2
let 2
it 2
be 2
with 2
caesar 2
the 2
noble 2
brutus 2
hath 2
told 2
you 2
caesar 2
was 2
ambitious 2
Sort postings
term docID
i 1
did 1
enact 1
julius 1
caesar 1
i 1
was 1
killed 1
i’ 1
the 1
capitol 1
brutus 1
killed 1
me 1
so 2
let 2
it 2
be 2
with 2
caesar 2
the 2
noble 2
brutus 2
hath 2
told 2
you 2
caesar 2
was 2
ambitious 2
=⇒
term docID
ambitious 2
be 2
brutus 1
brutus 2
capitol 1
caesar 1
caesar 2
caesar 2
did 1
enact 1
hath 2
i 1
i 1
i’ 1
it 2
julius 1
killed 1
killed 1
let 2
me 1
noble 2
so 2
the 1
the 2
told 2
you 2
was 1
was 2
with 2
Create postings lists, determine document frequency
term docID
ambitious 2
be 2
brutus 1
brutus 2
capitol 1
caesar 1
caesar 2
caesar 2
did 1
enact 1
hath 1
i 1
i 1
i’ 1
it 2
julius 1
killed 1
killed 1
let 2
me 1
noble 2
so 2
the 1
the 2
told 2
you 2
was 1
was 2
with 2
=⇒
term doc. freq. → postings lists
ambitious 1 → 2
be 1 → 2
brutus 2 → 1 → 2
capitol 1 → 1
caesar 2 → 1 → 2
did 1 → 1
enact 1 → 1
hath 1 → 2
i 1 → 1
i’ 1 → 1
it 1 → 2
julius 1 → 1
killed 1 → 1
let 1 → 2
me 1 → 1
noble 1 → 2
so 1 → 2
the 2 → 1 → 2
told 1 → 2
you 1 → 2
was 2 → 1 → 2
with 1 → 2
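The collect-sort-group pipeline of the last three slides can be sketched as follows (an excerpt of the (term, docID) pairs; grouping uses itertools.groupby, which requires the sorted input produced in the previous step):

```python
from itertools import groupby

# (term, docID) pairs as generated from the two example documents (excerpt)
pairs = [("i", 1), ("did", 1), ("enact", 1), ("caesar", 1), ("brutus", 1),
         ("was", 1), ("so", 2), ("caesar", 2), ("brutus", 2), ("caesar", 2),
         ("was", 2), ("ambitious", 2)]

pairs.sort()                                   # sort by term, then docID

postings = {}
for term, group in groupby(pairs, key=lambda p: p[0]):
    doc_ids = sorted({doc_id for _, doc_id in group})
    postings[term] = (len(doc_ids), doc_ids)   # (document frequency, postings list)

print(postings["caesar"])                      # -> (2, [1, 2])
```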
Split the result into dictionary and postings file
Brutus −→ 1 2 4 11 31 45 173 174
Caesar −→ 1 2 4 5 6 16 57 132 . . .
Calpurnia −→ 2 31 54 101
. . .
(left column: the dictionary; right: the postings file)
Later in this course
Index construction: how can we create inverted indexes for
large collections?
How much space do we need for dictionary and index?
Index compression: how can we efficiently store and process
indexes for large collections?
Ranked retrieval: what does the inverted index look like when
we want the “best” answer?
Simple conjunctive query (two terms)
Consider the query: Brutus AND Calpurnia
To find all matching documents using inverted index:
1 Locate Brutus in the dictionary
2 Retrieve its postings list from the postings file
3 Locate Calpurnia in the dictionary
4 Retrieve its postings list from the postings file
5 Intersect the two postings lists
6 Return intersection to user
Intersecting two postings lists
Brutus −→ 1 → 2 → 4 → 11 → 31 → 45 → 173 → 174
Calpurnia −→ 2 → 31 → 54 → 101
Intersection =⇒ 2 → 31
This is linear in the length of the postings lists.
Note: This only works if postings lists are sorted.
Intersecting two postings lists
Intersect(p1, p2)
 1 answer ← ⟨ ⟩
 2 while p1 ≠ nil and p2 ≠ nil
 3 do if docID(p1) = docID(p2)
 4    then Add(answer, docID(p1))
 5         p1 ← next(p1)
 6         p2 ← next(p2)
 7    else if docID(p1) < docID(p2)
 8         then p1 ← next(p1)
 9         else p2 ← next(p2)
10 return answer
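The pseudocode above maps almost line for line onto Python, with the linked-list walk replaced by two array indices:

```python
def intersect(p1, p2):
    """Merge-intersect two sorted postings lists in O(len(p1) + len(p2))."""
    answer = []
    i = j = 0
    while i < len(p1) and j < len(p2):
        if p1[i] == p2[j]:
            answer.append(p1[i])
            i += 1
            j += 1
        elif p1[i] < p2[j]:
            i += 1
        else:
            j += 1
    return answer

brutus = [1, 2, 4, 11, 31, 45, 173, 174]
calpurnia = [2, 31, 54, 101]
print(intersect(brutus, calpurnia))   # -> [2, 31]
```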
Query processing: Exercise
france −→ 1 → 2 → 3 → 4 → 5 → 7 → 8 → 9 → 11 → 12 → 13 → 14 → 15
paris −→ 2 → 6 → 10 → 12 → 14
lear −→ 12 → 15
Compute hit list for ((paris AND NOT france) OR lear)
Boolean retrieval model: Assessment
The Boolean retrieval model can answer any query that is a
Boolean expression.
Boolean queries are queries that use and, or and not to join
query terms.
Views each document as a set of terms.
Is precise: Document matches condition or not.
Primary commercial retrieval tool for 3 decades
Many professional searchers (e.g., lawyers) still like Boolean
queries.
You know exactly what you are getting.
Many search systems you use are also Boolean: Spotlight, email, intranet etc.
Commercially successful Boolean retrieval: Westlaw
Largest commercial legal search service in terms of the
number of paying subscribers
Over half a million subscribers performing millions of searches
a day over tens of terabytes of text data
The service was started in 1975.
In 2005, Boolean search (called “Terms and Connectors” by
Westlaw) was still the default, and used by a large percentage
of users . . .
. . . although ranked retrieval has been available since 1992.
Westlaw: Example queries

Information need: Information on the legal theories involved in preventing the disclosure of trade secrets by employees formerly employed by a competing company
Query: “trade secret” /s disclos! /s prevent /s employe!

Information need: Requirements for disabled people to be able to access a workplace
Query: disab! /p access! /s work-site work-place (employment /3 place)

Information need: Cases about a host’s responsibility for drunk guests
Query: host! /p (responsib! liab!) /p (intoxicat! drunk!) /p guest
Westlaw: Comments
Proximity operators: /3 = within 3 words, /s = within a
sentence, /p = within a paragraph
Space is disjunction, not conjunction! (This was the default in
search pre-Google.)
Long, precise queries: incrementally developed, not like web
search
Why professional searchers often like Boolean search:
precision, transparency, control
When are Boolean queries the best way of searching? Depends
on: information need, searcher, document collection, . . .
Query optimization
Consider a query that is an and of n terms, n > 2
For each of the terms, get its postings list, then and them
together
Example query: Brutus AND Calpurnia AND Caesar
What is the best order for processing this query?
Query optimization
Example query: Brutus AND Calpurnia AND Caesar
Simple and effective optimization: Process in order of
increasing frequency
Start with the shortest postings list, then keep cutting further
In this example, first Caesar, then Calpurnia, then
Brutus
Brutus −→ 1 → 2 → 4 → 11 → 31 → 45 → 173 → 174
Calpurnia −→ 2 → 31 → 54 → 101
Caesar −→ 5 → 31
Optimized intersection algorithm for conjunctive queries
Intersect(⟨t1, . . . , tn⟩)
1 terms ← SortByIncreasingFrequency(⟨t1, . . . , tn⟩)
2 result ← postings(first(terms))
3 terms ← rest(terms)
4 while terms ≠ nil and result ≠ nil
5 do result ← Intersect(result, postings(first(terms)))
6    terms ← rest(terms)
7 return result
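A Python sketch of the optimized n-way intersection, reusing a two-pointer merge for the pairwise step; the postings lists are the toy data from the previous slide:

```python
def intersect(p1, p2):
    # two-pointer merge intersection of sorted postings lists
    answer, i, j = [], 0, 0
    while i < len(p1) and j < len(p2):
        if p1[i] == p2[j]:
            answer.append(p1[i]); i += 1; j += 1
        elif p1[i] < p2[j]:
            i += 1
        else:
            j += 1
    return answer

def intersect_many(postings_lists):
    """AND together n postings lists, shortest first, stopping once empty."""
    ordered = sorted(postings_lists, key=len)   # increasing frequency
    result = ordered[0]
    for plist in ordered[1:]:
        if not result:
            break                               # the result can only shrink
        result = intersect(result, plist)
    return result

brutus    = [1, 2, 4, 11, 31, 45, 173, 174]
calpurnia = [2, 31, 54, 101]
caesar    = [5, 31]
print(intersect_many([brutus, calpurnia, caesar]))   # -> [31]
```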
More general optimization
Example query: (madding or crowd) and (ignoble or
strife)
Get frequencies for all terms
Estimate the size of each or by the sum of its frequencies
(conservative)
Process in increasing order of or sizes
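A sketch of this ordering heuristic, with hypothetical document frequencies for the four terms (in a real system these would come from the dictionary):

```python
# Hypothetical document frequencies
df = {"madding": 10, "crowd": 60, "ignoble": 3, "strife": 22}

# (madding OR crowd) AND (ignoble OR strife), as a list of disjunct term pairs
disjuncts = [("madding", "crowd"), ("ignoble", "strife")]

def or_size(terms):
    # conservative (upper-bound) estimate: sum of the term frequencies
    return sum(df[t] for t in terms)

ordered = sorted(disjuncts, key=or_size)   # process smaller disjuncts first
print(ordered)   # -> [('ignoble', 'strife'), ('madding', 'crowd')]
```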
Course overview
We are done with Chapter 1 of IIR (IIR 01).
Plan for the rest of the semester: 18–20 of the 21 chapters of
IIR
In what follows: teasers for most chapters – to give you a
sense of what will be covered.
IIR 02: The term vocabulary and postings lists
Phrase queries: “Stanford University”
Proximity queries: Gates near Microsoft
We need an index that captures position information for
phrase queries and proximity queries.
IIR 03: Dictionaries and tolerant retrieval
[Figure: a k-gram index for tolerant retrieval — e.g. the bigram “rd” points to aboard, ardent, boardroom, border; “or” to border, lord, morbid, sordid; “bo” to aboard, about, boardroom, border.]
IIR 04: Index construction
[Figure: MapReduce index construction — a master assigns document splits to parsers (map phase) and term partitions a-f, g-p, q-z to inverters (reduce phase); parsers write partitioned postings to segment files, which the inverters merge into the final postings.]
IIR 05: Index compression
[Figure: log10 cf plotted against log10 rank, an approximately straight line — Zipf’s law.]
IIR 06: Scoring, term weighting and the vector space model
Ranking search results
Boolean queries only give inclusion or exclusion of documents.
For ranked retrieval, we measure the proximity between the query and
each document.
One formalism for doing this: the vector space model
Key challenge in ranked retrieval: evidence accumulation for a term in
a document
1 vs. 0 occurrences of a query term in the document
3 vs. 2 occurrences of a query term in the document
Usually: more is better
But by how much?
Need a scoring function that translates frequency into score or weight
IIR 07: Scoring in a complete search system
[Figure: a complete search system — documents pass through parsing and linguistics into indexers, which build a tiered inverted positional index, k-gram indexes, metadata in zone and field indexes, and a document cache; a user query goes through a free text query parser and spell correction into scoring and ranking, which uses inexact top-K retrieval, scoring parameters from an MLR training set, and produces the results page.]
IIR 08: Evaluation and dynamic summaries
IIR 09: Relevance feedback & query expansion
IIR 12: Language models

w      P(w|q1)     w      P(w|q1)
STOP   0.2         toad   0.01
the    0.2         said   0.03
a      0.1         likes  0.02
frog   0.01        that   0.04
. . .

This is a one-state probabilistic finite-state automaton – a unigram
language model – and the table gives the state emission distribution
for its one state q1. STOP is not a word, but a special symbol
indicating that the automaton stops.

Example string: frog said that toad likes frog STOP
P(string) = 0.01 · 0.03 · 0.04 · 0.01 · 0.02 · 0.01 · 0.2
= 0.0000000000048
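The probability of the example string can be checked in a few lines (emission probabilities as given on the slide; STOP treated as an ordinary symbol):

```python
# Emission distribution of the single state q1 (partial, from the slide)
p = {"STOP": 0.2, "the": 0.2, "a": 0.1, "frog": 0.01,
     "toad": 0.01, "said": 0.03, "likes": 0.02, "that": 0.04}

def unigram_prob(tokens):
    prob = 1.0
    for w in tokens:
        prob *= p[w]           # independence assumption of the unigram model
    return prob

s = "frog said that toad likes frog STOP".split()
print(unigram_prob(s))         # -> about 4.8e-12
```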
IIR 13: Text classification & Naive Bayes
Text classification = assigning documents automatically to
predefined classes
Examples:
Language (English vs. French)
Adult content
Region
IIR 14: Vector classification
[Figure: labeled training points (X) in a two-dimensional vector space and a test point (∗) to be classified.]
IIR 15: Support vector machines
IIR 16: Flat clustering
IIR 17: Hierarchical clustering
http://news.google.com
IIR 18: Latent Semantic Indexing
IIR 19: The web and its challenges
Unusual and diverse documents
Unusual and diverse users and information needs
Beyond terms and text: exploit link analysis, user data
How do web search engines work?
How can we make them better?
IIR 21: Link analysis / PageRank
Take-away
Boolean Retrieval: Design and data structures of a simple
information retrieval system
What topics will be covered in this class?
Resources
Chapter 1 of IIR
http://cislmu.org
course schedule
information retrieval links
Shakespeare search engine

More Related Content

Similar to flat studies into.pdf

Hash - A probabilistic approach for big data
Hash - A probabilistic approach for big dataHash - A probabilistic approach for big data
Hash - A probabilistic approach for big data
Luca Mastrostefano
 
Essay About Website. Online assignment writing service.
Essay About Website. Online assignment writing service.Essay About Website. Online assignment writing service.
Essay About Website. Online assignment writing service.
Jenny Mancini
 
Letter Paper Envelope Airmail Writing PNG, Clipart, Board Game
Letter Paper Envelope Airmail Writing PNG, Clipart, Board GameLetter Paper Envelope Airmail Writing PNG, Clipart, Board Game
Letter Paper Envelope Airmail Writing PNG, Clipart, Board Game
Emily Smith
 
TRank ISWC2013
TRank ISWC2013TRank ISWC2013
TRank ISWC2013
eXascale Infolab
 
Quran Essay In Malayalam
Quran Essay In MalayalamQuran Essay In Malayalam
Quran Essay In Malayalam
Janna Smith
 
Ir 02
Ir   02Ir   02
Your Favourite Singer Essay
Your Favourite Singer EssayYour Favourite Singer Essay
Your Favourite Singer Essay
Jennifer Reese
 
Essay Descriptive. How to Write a Descriptive Essay: Tips to Consider
Essay Descriptive. How to Write a Descriptive Essay: Tips to ConsiderEssay Descriptive. How to Write a Descriptive Essay: Tips to Consider
Essay Descriptive. How to Write a Descriptive Essay: Tips to Consider
Maggie Cooper
 
01 Machine Learning Introduction
01 Machine Learning Introduction 01 Machine Learning Introduction
01 Machine Learning Introduction
Andres Mendez-Vazquez
 
Robertclass200801
Robertclass200801Robertclass200801
Robertclass200801
SCPilsk
 
Robertclass2008 04 15
Robertclass2008 04 15Robertclass2008 04 15
Robertclass2008 04 15
SCPilsk
 
Writing Lessons Lesson1-Paragraph
Writing Lessons Lesson1-ParagraphWriting Lessons Lesson1-Paragraph
Writing Lessons Lesson1-Paragraph
Renee Franco
 
IE: Named Entity Recognition (NER)
IE: Named Entity Recognition (NER)IE: Named Entity Recognition (NER)
IE: Named Entity Recognition (NER)
Marina Santini
 
Class 2
Class 2Class 2
University Of Michigan Supplemental Essays 2013
University Of Michigan Supplemental Essays 2013University Of Michigan Supplemental Essays 2013
University Of Michigan Supplemental Essays 2013
Taina Myers
 
Stock Market Game Essay
Stock Market Game EssayStock Market Game Essay
Stock Market Game Essay
Shamika Mendoza
 
Edward Schaefer
Edward SchaeferEdward Schaefer
Gmo Essay Example
Gmo Essay ExampleGmo Essay Example
Gmo Essay Example
Jessica Gutierrez
 
Can computers think
Can computers thinkCan computers think
Can computers think
GTClub
 
005 Essay Example Write My For Me Cheap Non Plagiarized Lines Paper ...
005 Essay Example Write My For Me Cheap Non Plagiarized Lines Paper ...005 Essay Example Write My For Me Cheap Non Plagiarized Lines Paper ...
005 Essay Example Write My For Me Cheap Non Plagiarized Lines Paper ...
April Lacey
 

Similar to flat studies into.pdf (20)

Hash - A probabilistic approach for big data
Hash - A probabilistic approach for big dataHash - A probabilistic approach for big data
Hash - A probabilistic approach for big data
 
Essay About Website. Online assignment writing service.
Essay About Website. Online assignment writing service.Essay About Website. Online assignment writing service.
Essay About Website. Online assignment writing service.
 
Letter Paper Envelope Airmail Writing PNG, Clipart, Board Game
Letter Paper Envelope Airmail Writing PNG, Clipart, Board GameLetter Paper Envelope Airmail Writing PNG, Clipart, Board Game
Letter Paper Envelope Airmail Writing PNG, Clipart, Board Game
 
TRank ISWC2013
TRank ISWC2013TRank ISWC2013
TRank ISWC2013
 
Quran Essay In Malayalam
Quran Essay In MalayalamQuran Essay In Malayalam
Quran Essay In Malayalam
 
Ir 02
Ir   02Ir   02
Ir 02
 
Your Favourite Singer Essay
Your Favourite Singer EssayYour Favourite Singer Essay
Your Favourite Singer Essay
 
Essay Descriptive. How to Write a Descriptive Essay: Tips to Consider
Essay Descriptive. How to Write a Descriptive Essay: Tips to ConsiderEssay Descriptive. How to Write a Descriptive Essay: Tips to Consider
Essay Descriptive. How to Write a Descriptive Essay: Tips to Consider
 
01 Machine Learning Introduction
01 Machine Learning Introduction 01 Machine Learning Introduction
01 Machine Learning Introduction
 
Robertclass200801
Robertclass200801Robertclass200801
Robertclass200801
 
Robertclass2008 04 15
Robertclass2008 04 15Robertclass2008 04 15
Robertclass2008 04 15
 
Writing Lessons Lesson1-Paragraph
Writing Lessons Lesson1-ParagraphWriting Lessons Lesson1-Paragraph
Writing Lessons Lesson1-Paragraph
 
IE: Named Entity Recognition (NER)
IE: Named Entity Recognition (NER)IE: Named Entity Recognition (NER)
IE: Named Entity Recognition (NER)
 
Class 2
Class 2Class 2
Class 2
 
University Of Michigan Supplemental Essays 2013
University Of Michigan Supplemental Essays 2013University Of Michigan Supplemental Essays 2013
University Of Michigan Supplemental Essays 2013
 
Stock Market Game Essay
Stock Market Game EssayStock Market Game Essay
Stock Market Game Essay
 
Edward Schaefer
Edward SchaeferEdward Schaefer
Edward Schaefer
 
Gmo Essay Example
Gmo Essay ExampleGmo Essay Example
Gmo Essay Example
 
Can computers think
Can computers thinkCan computers think
Can computers think
 
005 Essay Example Write My For Me Cheap Non Plagiarized Lines Paper ...
005 Essay Example Write My For Me Cheap Non Plagiarized Lines Paper ...005 Essay Example Write My For Me Cheap Non Plagiarized Lines Paper ...
005 Essay Example Write My For Me Cheap Non Plagiarized Lines Paper ...
 

Recently uploaded

Electric vehicle and photovoltaic advanced roles in enhancing the financial p...
Electric vehicle and photovoltaic advanced roles in enhancing the financial p...Electric vehicle and photovoltaic advanced roles in enhancing the financial p...
Electric vehicle and photovoltaic advanced roles in enhancing the financial p...
IJECEIAES
 
Understanding Inductive Bias in Machine Learning
Understanding Inductive Bias in Machine LearningUnderstanding Inductive Bias in Machine Learning
Understanding Inductive Bias in Machine Learning
SUTEJAS
 
Unit-III-ELECTROCHEMICAL STORAGE DEVICES.ppt
Unit-III-ELECTROCHEMICAL STORAGE DEVICES.pptUnit-III-ELECTROCHEMICAL STORAGE DEVICES.ppt
Unit-III-ELECTROCHEMICAL STORAGE DEVICES.ppt
KrishnaveniKrishnara1
 
A SYSTEMATIC RISK ASSESSMENT APPROACH FOR SECURING THE SMART IRRIGATION SYSTEMS
A SYSTEMATIC RISK ASSESSMENT APPROACH FOR SECURING THE SMART IRRIGATION SYSTEMSA SYSTEMATIC RISK ASSESSMENT APPROACH FOR SECURING THE SMART IRRIGATION SYSTEMS
A SYSTEMATIC RISK ASSESSMENT APPROACH FOR SECURING THE SMART IRRIGATION SYSTEMS
IJNSA Journal
 
Eric Nizeyimana's document 2006 from gicumbi to ttc nyamata handball play
Eric Nizeyimana's document 2006 from gicumbi to ttc nyamata handball playEric Nizeyimana's document 2006 from gicumbi to ttc nyamata handball play
Eric Nizeyimana's document 2006 from gicumbi to ttc nyamata handball play
enizeyimana36
 
Advanced control scheme of doubly fed induction generator for wind turbine us...
Advanced control scheme of doubly fed induction generator for wind turbine us...Advanced control scheme of doubly fed induction generator for wind turbine us...
Advanced control scheme of doubly fed induction generator for wind turbine us...
IJECEIAES
 
22CYT12-Unit-V-E Waste and its Management.ppt
22CYT12-Unit-V-E Waste and its Management.ppt22CYT12-Unit-V-E Waste and its Management.ppt
22CYT12-Unit-V-E Waste and its Management.ppt
KrishnaveniKrishnara1
 
5214-1693458878915-Unit 6 2023 to 2024 academic year assignment (AutoRecovere...
5214-1693458878915-Unit 6 2023 to 2024 academic year assignment (AutoRecovere...5214-1693458878915-Unit 6 2023 to 2024 academic year assignment (AutoRecovere...
5214-1693458878915-Unit 6 2023 to 2024 academic year assignment (AutoRecovere...
ihlasbinance2003
 
New techniques for characterising damage in rock slopes.pdf
New techniques for characterising damage in rock slopes.pdfNew techniques for characterising damage in rock slopes.pdf
New techniques for characterising damage in rock slopes.pdf
wisnuprabawa3
 
DEEP LEARNING FOR SMART GRID INTRUSION DETECTION: A HYBRID CNN-LSTM-BASED MODEL
DEEP LEARNING FOR SMART GRID INTRUSION DETECTION: A HYBRID CNN-LSTM-BASED MODELDEEP LEARNING FOR SMART GRID INTRUSION DETECTION: A HYBRID CNN-LSTM-BASED MODEL
DEEP LEARNING FOR SMART GRID INTRUSION DETECTION: A HYBRID CNN-LSTM-BASED MODEL
gerogepatton
 
Literature Review Basics and Understanding Reference Management.pptx
Literature Review Basics and Understanding Reference Management.pptxLiterature Review Basics and Understanding Reference Management.pptx
Literature Review Basics and Understanding Reference Management.pptx
Dr Ramhari Poudyal
 
Textile Chemical Processing and Dyeing.pdf
Textile Chemical Processing and Dyeing.pdfTextile Chemical Processing and Dyeing.pdf
Textile Chemical Processing and Dyeing.pdf
NazakatAliKhoso2
 
Embedded machine learning-based road conditions and driving behavior monitoring
Embedded machine learning-based road conditions and driving behavior monitoringEmbedded machine learning-based road conditions and driving behavior monitoring
Embedded machine learning-based road conditions and driving behavior monitoring
IJECEIAES
 
ML Based Model for NIDS MSc Updated Presentation.v2.pptx
ML Based Model for NIDS MSc Updated Presentation.v2.pptxML Based Model for NIDS MSc Updated Presentation.v2.pptx
ML Based Model for NIDS MSc Updated Presentation.v2.pptx
JamalHussainArman
 
A review on techniques and modelling methodologies used for checking electrom...
A review on techniques and modelling methodologies used for checking electrom...A review on techniques and modelling methodologies used for checking electrom...
A review on techniques and modelling methodologies used for checking electrom...
nooriasukmaningtyas
 
Recycled Concrete Aggregate in Construction Part II
Recycled Concrete Aggregate in Construction Part IIRecycled Concrete Aggregate in Construction Part II
Recycled Concrete Aggregate in Construction Part II
Aditya Rajan Patra
 
学校原版美国波士顿大学毕业证学历学位证书原版一模一样
学校原版美国波士顿大学毕业证学历学位证书原版一模一样学校原版美国波士顿大学毕业证学历学位证书原版一模一样
学校原版美国波士顿大学毕业证学历学位证书原版一模一样
171ticu
 
Presentation of IEEE Slovenia CIS (Computational Intelligence Society) Chapte...
Presentation of IEEE Slovenia CIS (Computational Intelligence Society) Chapte...Presentation of IEEE Slovenia CIS (Computational Intelligence Society) Chapte...
Presentation of IEEE Slovenia CIS (Computational Intelligence Society) Chapte...
University of Maribor
 
Heat Resistant Concrete Presentation ppt
Heat Resistant Concrete Presentation pptHeat Resistant Concrete Presentation ppt
Heat Resistant Concrete Presentation ppt
mamunhossenbd75
 
KuberTENes Birthday Bash Guadalajara - K8sGPT first impressions
KuberTENes Birthday Bash Guadalajara - K8sGPT first impressionsKuberTENes Birthday Bash Guadalajara - K8sGPT first impressions
KuberTENes Birthday Bash Guadalajara - K8sGPT first impressions
Victor Morales
 

Recently uploaded (20)

Electric vehicle and photovoltaic advanced roles in enhancing the financial p...
Electric vehicle and photovoltaic advanced roles in enhancing the financial p...Electric vehicle and photovoltaic advanced roles in enhancing the financial p...
Electric vehicle and photovoltaic advanced roles in enhancing the financial p...
 
Understanding Inductive Bias in Machine Learning
Understanding Inductive Bias in Machine LearningUnderstanding Inductive Bias in Machine Learning
Understanding Inductive Bias in Machine Learning
 
Unit-III-ELECTROCHEMICAL STORAGE DEVICES.ppt
Unit-III-ELECTROCHEMICAL STORAGE DEVICES.pptUnit-III-ELECTROCHEMICAL STORAGE DEVICES.ppt
Unit-III-ELECTROCHEMICAL STORAGE DEVICES.ppt
 
A SYSTEMATIC RISK ASSESSMENT APPROACH FOR SECURING THE SMART IRRIGATION SYSTEMS
A SYSTEMATIC RISK ASSESSMENT APPROACH FOR SECURING THE SMART IRRIGATION SYSTEMSA SYSTEMATIC RISK ASSESSMENT APPROACH FOR SECURING THE SMART IRRIGATION SYSTEMS
A SYSTEMATIC RISK ASSESSMENT APPROACH FOR SECURING THE SMART IRRIGATION SYSTEMS
 
Eric Nizeyimana's document 2006 from gicumbi to ttc nyamata handball play
Eric Nizeyimana's document 2006 from gicumbi to ttc nyamata handball playEric Nizeyimana's document 2006 from gicumbi to ttc nyamata handball play
Eric Nizeyimana's document 2006 from gicumbi to ttc nyamata handball play
 
Advanced control scheme of doubly fed induction generator for wind turbine us...
Advanced control scheme of doubly fed induction generator for wind turbine us...Advanced control scheme of doubly fed induction generator for wind turbine us...
Advanced control scheme of doubly fed induction generator for wind turbine us...
 
22CYT12-Unit-V-E Waste and its Management.ppt
22CYT12-Unit-V-E Waste and its Management.ppt22CYT12-Unit-V-E Waste and its Management.ppt
22CYT12-Unit-V-E Waste and its Management.ppt
 
5214-1693458878915-Unit 6 2023 to 2024 academic year assignment (AutoRecovere...
5214-1693458878915-Unit 6 2023 to 2024 academic year assignment (AutoRecovere...5214-1693458878915-Unit 6 2023 to 2024 academic year assignment (AutoRecovere...
5214-1693458878915-Unit 6 2023 to 2024 academic year assignment (AutoRecovere...
 
New techniques for characterising damage in rock slopes.pdf
New techniques for characterising damage in rock slopes.pdfNew techniques for characterising damage in rock slopes.pdf
New techniques for characterising damage in rock slopes.pdf
 
DEEP LEARNING FOR SMART GRID INTRUSION DETECTION: A HYBRID CNN-LSTM-BASED MODEL
DEEP LEARNING FOR SMART GRID INTRUSION DETECTION: A HYBRID CNN-LSTM-BASED MODELDEEP LEARNING FOR SMART GRID INTRUSION DETECTION: A HYBRID CNN-LSTM-BASED MODEL
DEEP LEARNING FOR SMART GRID INTRUSION DETECTION: A HYBRID CNN-LSTM-BASED MODEL
 
Literature Review Basics and Understanding Reference Management.pptx
Literature Review Basics and Understanding Reference Management.pptxLiterature Review Basics and Understanding Reference Management.pptx
Literature Review Basics and Understanding Reference Management.pptx
 
Textile Chemical Processing and Dyeing.pdf
Textile Chemical Processing and Dyeing.pdfTextile Chemical Processing and Dyeing.pdf
Textile Chemical Processing and Dyeing.pdf
 
Embedded machine learning-based road conditions and driving behavior monitoring
Embedded machine learning-based road conditions and driving behavior monitoringEmbedded machine learning-based road conditions and driving behavior monitoring
Embedded machine learning-based road conditions and driving behavior monitoring
 
ML Based Model for NIDS MSc Updated Presentation.v2.pptx
ML Based Model for NIDS MSc Updated Presentation.v2.pptxML Based Model for NIDS MSc Updated Presentation.v2.pptx
ML Based Model for NIDS MSc Updated Presentation.v2.pptx
 
A review on techniques and modelling methodologies used for checking electrom...
A review on techniques and modelling methodologies used for checking electrom...A review on techniques and modelling methodologies used for checking electrom...
A review on techniques and modelling methodologies used for checking electrom...
 
Recycled Concrete Aggregate in Construction Part II
Recycled Concrete Aggregate in Construction Part IIRecycled Concrete Aggregate in Construction Part II
Recycled Concrete Aggregate in Construction Part II
 
学校原版美国波士顿大学毕业证学历学位证书原版一模一样
学校原版美国波士顿大学毕业证学历学位证书原版一模一样学校原版美国波士顿大学毕业证学历学位证书原版一模一样
学校原版美国波士顿大学毕业证学历学位证书原版一模一样
 
Presentation of IEEE Slovenia CIS (Computational Intelligence Society) Chapte...
Presentation of IEEE Slovenia CIS (Computational Intelligence Society) Chapte...Presentation of IEEE Slovenia CIS (Computational Intelligence Society) Chapte...
Presentation of IEEE Slovenia CIS (Computational Intelligence Society) Chapte...
 
Heat Resistant Concrete Presentation ppt
Heat Resistant Concrete Presentation pptHeat Resistant Concrete Presentation ppt
Heat Resistant Concrete Presentation ppt
 
KuberTENes Birthday Bash Guadalajara - K8sGPT first impressions
KuberTENes Birthday Bash Guadalajara - K8sGPT first impressionsKuberTENes Birthday Bash Guadalajara - K8sGPT first impressions
KuberTENes Birthday Bash Guadalajara - K8sGPT first impressions
 

flat studies into.pdf

  • 1. Introduction to Information Retrieval http://informationretrieval.org IIR 1: Boolean Retrieval Hinrich Schütze Center for Information and Language Processing, University of Munich 2014-04-09 1 / 60
  • 2. Take-away Boolean Retrieval: Design and data structures of a simple information retrieval system What topics will be covered in this class? 2 / 60
  • 3. Outline 1 Introduction 2 Inverted index 3 Processing Boolean queries 4 Query optimization 5 Course overview 3 / 60
  • 4. Definition of information retrieval Information retrieval (IR) is finding material (usually documents) of an unstructured nature (usually text) that satisfies an information need from within large collections (usually stored on computers). 4 / 60
  • 5.
  • 6.
  • 7. Boolean retrieval The Boolean model is arguably the simplest model to base an information retrieval system on. Queries are Boolean expressions, e.g., Caesar and Brutus The search engine returns all documents that satisfy the Boolean expression. Does Google use the Boolean model? 7 / 60
  • 8. Does Google use the Boolean model? On Google, the default interpretation of a query [w_1 w_2 . . . w_n] is w_1 AND w_2 AND . . . AND w_n Cases where you get hits that do not contain one of the w_i: anchor text page contains variant of w_i (morphology, spelling correction, synonym) long queries (n large) Boolean expression generates very few hits Simple Boolean vs. Ranking of result set Simple Boolean retrieval returns matching documents in no particular order. Google (and most well-designed Boolean engines) rank the result set – they rank good hits (according to some estimator of relevance) higher than bad hits. 8 / 60
  • 9. Outline 1 Introduction 2 Inverted index 3 Processing Boolean queries 4 Query optimization 5 Course overview 9 / 60
  • 10. Unstructured data in 1650: Shakespeare 10 / 60
  • 11. Unstructured data in 1650 Which plays of Shakespeare contain the words Brutus and Caesar, but not Calpurnia? One could grep all of Shakespeare’s plays for Brutus and Caesar, then strip out lines containing Calpurnia. Why is grep not the solution? Slow (for large collections) grep is line-oriented, IR is document-oriented “not Calpurnia” is non-trivial Other operations (e.g., find the word Romans near countryman) not feasible 11 / 60
  • 12. Term-document incidence matrix

                 Anthony &  Julius  The      Hamlet  Othello  Macbeth  ...
                 Cleopatra  Caesar  Tempest
      Anthony    1          1       0        0       0        1
      Brutus     1          1       0        1       0        0
      Caesar     1          1       0        1       1        1
      Calpurnia  0          1       0        0       0        0
      Cleopatra  1          0       0        0       0        0
      mercy      1          0       1        1       1        1
      worser     1          0       1        1       1        0
      ...

    Entry is 1 if term occurs. Example: Calpurnia occurs in Julius Caesar. Entry is 0 if term doesn’t occur. Example: Calpurnia doesn’t occur in The Tempest. 12 / 60
  • 13. Incidence vectors So we have a 0/1 vector for each term. To answer the query Brutus and Caesar and not Calpurnia: Take the vectors for Brutus, Caesar, and Calpurnia Complement the vector of Calpurnia Do a (bitwise) and on the three vectors 110100 and 110111 and 101111 = 100100 13 / 60
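The bitwise evaluation on this slide can be sketched in Python, representing each term's 0/1 vector as an integer (a sketch; the variable names and the 6-document universe are illustrative):

```python
# Incidence vectors over the six plays from the matrix
# (Anthony & Cleopatra, Julius Caesar, The Tempest, Hamlet, Othello, Macbeth).
brutus    = 0b110100
caesar    = 0b110111
calpurnia = 0b010000

N_DOCS = 6
mask = (1 << N_DOCS) - 1  # 0b111111: complement must stay within 6 bits

# Brutus AND Caesar AND NOT Calpurnia
result = brutus & caesar & (~calpurnia & mask)
print(format(result, "06b"))  # -> 100100
```

The set bits of the result (positions 1 and 4) pick out Anthony and Cleopatra and Hamlet, matching the answer slide that follows.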
  • 14. 0/1 vectors and result of bitwise operations

                 Anthony &  Julius  The      Hamlet  Othello  Macbeth  ...
                 Cleopatra  Caesar  Tempest
      Anthony    1          1       0        0       0        1
      Brutus     1          1       0        1       0        0
      Caesar     1          1       0        1       1        1
      Calpurnia  0          1       0        0       0        0
      Cleopatra  1          0       0        0       0        0
      mercy      1          0       1        1       1        1
      worser     1          0       1        1       1        0
      ...
      result:    1          0       0        1       0        0

    14 / 60
  • 15. Answers to query Anthony and Cleopatra, Act III, Scene ii Agrippa [Aside to Domitius Enobarbus]: Why, Enobarbus, When Antony found Julius Caesar dead, He cried almost to roaring; and he wept When at Philippi he found Brutus slain. Hamlet, Act III, Scene ii Lord Polonius: I did enact Julius Caesar: I was killed i’ the Capitol; Brutus killed me. 15 / 60
  • 16. Bigger collections Consider N = 10^6 documents, each with about 1000 tokens ⇒ total of 10^9 tokens On average 6 bytes per token, including spaces and punctuation ⇒ size of document collection is about 6 · 10^9 bytes = 6 GB Assume there are M = 500,000 distinct terms in the collection (Notice that we are making a term/token distinction.) 16 / 60
  • 17. Can’t build the incidence matrix M = 500,000 × 10^6 = half a trillion 0s and 1s. But the matrix has no more than one billion 1s. Matrix is extremely sparse. What is a better representation? We only record the 1s. 17 / 60
  • 18. Inverted Index For each term t, we store a list of all documents that contain t. Brutus −→ 1 2 4 11 31 45 173 174 Caesar −→ 1 2 4 5 6 16 57 132 . . . Calpurnia −→ 2 31 54 101 . . . (dictionary) (postings) 18 / 60
  • 19. Inverted index construction 1 Collect the documents to be indexed: Friends, Romans, countrymen. So let it be with Caesar . . . 2 Tokenize the text, turning each document into a list of tokens: Friends Romans countrymen So . . . 3 Do linguistic preprocessing, producing a list of normalized tokens, which are the indexing terms: friend roman countryman so . . . 4 Index the documents that each term occurs in by creating an inverted index, consisting of a dictionary and postings. 19 / 60
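The four construction steps above can be sketched as a toy indexer (a simplification: tokenization and normalization here are just lowercase-and-split, not the full linguistic preprocessing the slide describes):

```python
from collections import defaultdict

def build_index(docs):
    """Build a toy inverted index: term -> sorted list of docIDs."""
    index = defaultdict(set)
    for doc_id, text in docs.items():
        for token in text.lower().split():  # crude tokenization/normalization
            index[token].add(doc_id)
    # Sort terms (the dictionary) and docIDs (the postings lists)
    return {term: sorted(ids) for term, ids in sorted(index.items())}

docs = {
    1: "I did enact Julius Caesar I was killed i' the Capitol Brutus killed me",
    2: "So let it be with Caesar The noble Brutus hath told you Caesar was ambitious",
}
index = build_index(docs)
print(index["brutus"])  # -> [1, 2]
print(index["hath"])    # -> [2]
```

Document frequency falls out for free as `len(index[term])`, matching the "create postings lists, determine document frequency" step on the later slide.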
  • 20. Tokenization and preprocessing Doc 1. I did enact Julius Caesar: I was killed i’ the Capitol; Brutus killed me. Doc 2. So let it be with Caesar. The noble Brutus hath told you Caesar was ambitious: =⇒ Doc 1. i did enact julius caesar i was killed i’ the capitol brutus killed me Doc 2. so let it be with caesar the noble brutus hath told you caesar was ambitious 20 / 60
  • 21. Generate postings Doc 1. i did enact julius caesar i was killed i’ the capitol brutus killed me Doc 2. so let it be with caesar the noble brutus hath told you caesar was ambitious =⇒ term docID i 1 did 1 enact 1 julius 1 caesar 1 i 1 was 1 killed 1 i’ 1 the 1 capitol 1 brutus 1 killed 1 me 1 so 2 let 2 it 2 be 2 with 2 caesar 2 the 2 noble 2 brutus 2 hath 2 told 2 you 2 caesar 2 was 2 ambitious 2 21 / 60
  • 22. Sort postings term docID i 1 did 1 enact 1 julius 1 caesar 1 i 1 was 1 killed 1 i’ 1 the 1 capitol 1 brutus 1 killed 1 me 1 so 2 let 2 it 2 be 2 with 2 caesar 2 the 2 noble 2 brutus 2 hath 2 told 2 you 2 caesar 2 was 2 ambitious 2 =⇒ term docID ambitious 2 be 2 brutus 1 brutus 2 capitol 1 caesar 1 caesar 2 caesar 2 did 1 enact 1 hath 2 i 1 i 1 i’ 1 it 2 julius 1 killed 1 killed 1 let 2 me 1 noble 2 so 2 the 1 the 2 told 2 you 2 was 1 was 2 with 2 22 / 60
  • 23. Create postings lists, determine document frequency term docID ambitious 2 be 2 brutus 1 brutus 2 capitol 1 caesar 1 caesar 2 caesar 2 did 1 enact 1 hath 2 i 1 i 1 i’ 1 it 2 julius 1 killed 1 killed 1 let 2 me 1 noble 2 so 2 the 1 the 2 told 2 you 2 was 1 was 2 with 2 =⇒ term doc. freq. → postings lists ambitious 1 → 2 be 1 → 2 brutus 2 → 1 → 2 capitol 1 → 1 caesar 2 → 1 → 2 did 1 → 1 enact 1 → 1 hath 1 → 2 i 1 → 1 i’ 1 → 1 it 1 → 2 julius 1 → 1 killed 1 → 1 let 1 → 2 me 1 → 1 noble 1 → 2 so 1 → 2 the 2 → 1 → 2 told 1 → 2 you 1 → 2 was 2 → 1 → 2 with 1 → 2 23 / 60
  • 24. Split the result into dictionary and postings file Brutus −→ 1 2 4 11 31 45 173 174 Caesar −→ 1 2 4 5 6 16 57 132 . . . Calpurnia −→ 2 31 54 101 . . . (dictionary) (postings file) 24 / 60
  • 25. Later in this course Index construction: how can we create inverted indexes for large collections? How much space do we need for dictionary and index? Index compression: how can we efficiently store and process indexes for large collections? Ranked retrieval: what does the inverted index look like when we want the “best” answer? 25 / 60
  • 26. Outline 1 Introduction 2 Inverted index 3 Processing Boolean queries 4 Query optimization 5 Course overview 26 / 60
  • 27. Simple conjunctive query (two terms) Consider the query: Brutus AND Calpurnia To find all matching documents using inverted index: 1 Locate Brutus in the dictionary 2 Retrieve its postings list from the postings file 3 Locate Calpurnia in the dictionary 4 Retrieve its postings list from the postings file 5 Intersect the two postings lists 6 Return intersection to user 27 / 60
  • 28. Intersecting two postings lists Brutus −→ 1 → 2 → 4 → 11 → 31 → 45 → 173 → 174 Calpurnia −→ 2 → 31 → 54 → 101 Intersection =⇒ 2 → 31 This is linear in the length of the postings lists. Note: This only works if postings lists are sorted. 28 / 60
  • 29. Intersecting two postings lists Intersect(p1, p2) 1 answer ← ⟨ ⟩ 2 while p1 ≠ nil and p2 ≠ nil 3 do if docID(p1) = docID(p2) 4 then Add(answer, docID(p1)) 5 p1 ← next(p1) 6 p2 ← next(p2) 7 else if docID(p1) < docID(p2) 8 then p1 ← next(p1) 9 else p2 ← next(p2) 10 return answer 29 / 60
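The Intersect pseudocode above translates directly into Python over sorted lists (a sketch; a real system would walk linked postings rather than index into arrays):

```python
def intersect(p1, p2):
    """Merge-intersect two sorted postings lists in O(len(p1) + len(p2))."""
    answer = []
    i = j = 0
    while i < len(p1) and j < len(p2):
        if p1[i] == p2[j]:
            answer.append(p1[i])   # docID in both lists
            i += 1
            j += 1
        elif p1[i] < p2[j]:
            i += 1                 # advance the pointer with the smaller docID
        else:
            j += 1
    return answer

brutus    = [1, 2, 4, 11, 31, 45, 173, 174]
calpurnia = [2, 31, 54, 101]
print(intersect(brutus, calpurnia))  # -> [2, 31]
```

As the slide notes, this only works because both postings lists are sorted by docID.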
  • 30. Query processing: Exercise france −→ 1 → 2 → 3 → 4 → 5 → 7 → 8 → 9 → 11 → 12 → 13 → 14 → 15 paris −→ 2 → 6 → 10 → 12 → 14 lear −→ 12 → 15 Compute hit list for ((paris AND NOT france) OR lear) 30 / 60
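One way to check your answer to the exercise (a sketch using plain Python sets instead of postings-list merging; note that NOT is taken here against the documents seen in these three lists, whereas a real system would complement against the full collection — try the exercise by hand before running this):

```python
france = {1, 2, 3, 4, 5, 7, 8, 9, 11, 12, 13, 14, 15}
paris  = {2, 6, 10, 12, 14}
lear   = {12, 15}

# Universe for NOT: all docs appearing in any of the three postings lists
all_docs = france | paris | lear

# ((paris AND NOT france) OR lear)
hits = (paris & (all_docs - france)) | lear
print(sorted(hits))  # -> [6, 10, 12, 15]
```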
  • 31. Boolean retrieval model: Assessment The Boolean retrieval model can answer any query that is a Boolean expression. Boolean queries are queries that use and, or and not to join query terms. Views each document as a set of terms. Is precise: Document matches condition or not. Primary commercial retrieval tool for 3 decades Many professional searchers (e.g., lawyers) still like Boolean queries. You know exactly what you are getting. Many search systems you use are also Boolean: Spotlight, email, intranet, etc. 31 / 60
  • 32. Commercially successful Boolean retrieval: Westlaw Largest commercial legal search service in terms of the number of paying subscribers Over half a million subscribers performing millions of searches a day over tens of terabytes of text data The service was started in 1975. In 2005, Boolean search (called “Terms and Connectors” by Westlaw) was still the default, and used by a large percentage of users . . . . . . although ranked retrieval has been available since 1992. 32 / 60
  • 33. Westlaw: Example queries Information need: Information on the legal theories involved in preventing the disclosure of trade secrets by employees formerly employed by a competing company Query: “trade secret” /s disclos! /s prevent /s employe! Information need: Requirements for disabled people to be able to access a workplace Query: disab! /p access! /s work-site work-place (employment /3 place) Information need: Cases about a host’s responsibility for drunk guests Query: host! /p (responsib! liab!) /p (intoxicat! drunk!) /p guest 33 / 60
  • 34. Westlaw: Comments Proximity operators: /3 = within 3 words, /s = within a sentence, /p = within a paragraph Space is disjunction, not conjunction! (This was the default in search pre-Google.) Long, precise queries: incrementally developed, not like web search Why professional searchers often like Boolean search: precision, transparency, control When are Boolean queries the best way of searching? Depends on: information need, searcher, document collection, . . . 34 / 60
  • 35. Outline 1 Introduction 2 Inverted index 3 Processing Boolean queries 4 Query optimization 5 Course overview 35 / 60
  • 36. Query optimization Consider a query that is an and of n terms, n > 2 For each of the terms, get its postings list, then and them together Example query: Brutus AND Calpurnia AND Caesar What is the best order for processing this query? 36 / 60
  • 37. Query optimization Example query: Brutus AND Calpurnia AND Caesar Simple and effective optimization: Process in order of increasing frequency Start with the shortest postings list, then keep cutting further In this example, first Caesar, then Calpurnia, then Brutus Brutus −→ 1 → 2 → 4 → 11 → 31 → 45 → 173 → 174 Calpurnia −→ 2 → 31 → 54 → 101 Caesar −→ 5 → 31 37 / 60
  • 38. Optimized intersection algorithm for conjunctive queries Intersect(⟨t1, . . . , tn⟩) 1 terms ← SortByIncreasingFrequency(⟨t1, . . . , tn⟩) 2 result ← postings(first(terms)) 3 terms ← rest(terms) 4 while terms ≠ nil and result ≠ nil 5 do result ← Intersect(result, postings(first(terms))) 6 terms ← rest(terms) 7 return result 38 / 60
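The optimized conjunctive algorithm above can be sketched in Python (a sketch; it uses set membership for each pairwise intersection for brevity, where the pseudocode would reuse the two-list merge):

```python
def intersect_many(postings_lists):
    """Intersect several sorted postings lists, cheapest list first."""
    terms = sorted(postings_lists, key=len)  # SortByIncreasingFrequency
    result = terms[0]
    for postings in terms[1:]:
        if not result:           # result already empty: stop early
            break
        members = set(postings)
        result = [d for d in result if d in members]
    return result

brutus    = [1, 2, 4, 11, 31, 45, 173, 174]
calpurnia = [2, 31, 54, 101]
caesar    = [5, 31]
print(intersect_many([brutus, calpurnia, caesar]))  # -> [31]
```

Because Caesar's list is shortest, every later intersection scans at most two candidate docIDs, which is exactly the payoff of processing in order of increasing frequency.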
  • 39. More general optimization Example query: (madding or crowd) and (ignoble or strife) Get frequencies for all terms Estimate the size of each or by the sum of its frequencies (conservative) Process in increasing order of or sizes 39 / 60
  • 40. Outline 1 Introduction 2 Inverted index 3 Processing Boolean queries 4 Query optimization 5 Course overview 40 / 60
  • 41. Course overview We are done with Chapter 1 of IIR (IIR 01). Plan for the rest of the semester: 18–20 of the 21 chapters of IIR In what follows: teasers for most chapters – to give you a sense of what will be covered. 41 / 60
  • 42. IIR 02: The term vocabulary and postings lists Phrase queries: “Stanford University” Proximity queries: Gates near Microsoft We need an index that captures position information for phrase queries and proximity queries. 42 / 60
  • 43. IIR 03: Dictionaries and tolerant retrieval [Figure: B-tree over the dictionary — the subtree under prefix bo- spans terms from aboard to border] 43 / 60
  • 44. IIR 04: Index construction [Figure: MapReduce index construction — a master assigns input splits to parsers (map phase) and term partitions a-f, g-p, q-z to inverters (reduce phase), which produce postings from segment files] 44 / 60
  • 45. IIR 05: Index compression [Figure: Zipf’s law — log10 cf plotted against log10 rank] 45 / 60
  • 46. IIR 06: Scoring, term weighting and the vector space model Ranking search results Boolean queries only give inclusion or exclusion of documents. For ranked retrieval, we measure the proximity between the query and each document. One formalism for doing this: the vector space model Key challenge in ranked retrieval: evidence accumulation for a term in a document 1 vs. 0 occurrence of a query term in the document 3 vs. 2 occurrences of a query term in the document Usually: more is better But by how much? Need a scoring function that translates frequency into score or weight 46 / 60
  • 47. IIR 07: Scoring in a complete search system [Figure: complete search system architecture — indexers (parsing, linguistics) build tiered inverted positional, k-gram, and zone/field metadata indexes plus a document cache; the user query passes through a free text query parser and spell correction into scoring and ranking (scoring parameters, MLR training set, inexact top K retrieval) to produce the results page] 47 / 60
  • 48. IIR 08: Evaluation and dynamic summaries 48 / 60
  • 49. IIR 09: Relevance feedback & query expansion 49 / 60
  • 50. IIR 12: Language models

      w     P(w|q1)      w      P(w|q1)
      STOP  0.2          toad   0.01
      the   0.2          said   0.03
      a     0.1          likes  0.02
      frog  0.01         that   0.04
      ...

    This is a one-state probabilistic finite-state automaton – a unigram language model – and the state emission distribution for its one state q1. STOP is not a word, but a special symbol indicating that the automaton stops. Example string: frog said that toad likes frog STOP P(string) = 0.01 · 0.03 · 0.04 · 0.01 · 0.02 · 0.01 · 0.2 = 0.0000000000048 50 / 60
  • 51. IIR 13: Text classification & Naive Bayes Text classification = assigning documents automatically to predefined classes Examples: Language (English vs. French) Adult content Region 51 / 60
  • 52. IIR 14: Vector classification [Figure: labeled training points in a vector space and an unlabeled point ∗ to classify] 52 / 60
  • 53. IIR 15: Support vector machines 53 / 60
  • 54. IIR 16: Flat clustering 54 / 60
  • 55. IIR 17: Hierarchical clustering http://news.google.com 55 / 60
  • 56. IIR 18: Latent Semantic Indexing 56 / 60
  • 57. IIR 19: The web and its challenges Unusual and diverse documents Unusual and diverse users and information needs Beyond terms and text: exploit link analysis, user data How do web search engines work? How can we make them better? 57 / 60
  • 58. IIR 21: Link analysis / PageRank 58 / 60
  • 59. Take-away Boolean Retrieval: Design and data structures of a simple information retrieval system What topics will be covered in this class? 59 / 60
  • 60. Resources Chapter 1 of IIR http://cislmu.org course schedule information retrieval links Shakespeare search engine 60 / 60