Prof. Pier Luca Lanzi
Text Mining
Data Mining and Text Mining (UIC 583 @ Politecnico di Milano)
(Taken from ChengXiang Zhai, CS 397 – Fall 2003)
Natural Language Processing (NLP)
Why Natural Language Processing?
Gov. Schwarzenegger helps inaugurate pricey new bridge approach
The Associated Press
Article Launched: 04/11/2008 01:40:31 PM PDT
SAN FRANCISCO—It briefly looked like a scene out of a "Terminator" movie, with Governor
Arnold Schwarzenegger standing in the middle of San Francisco wielding a blow-torch in his
hands. Actually, the governor was just helping to inaugurate a new approach to the San Francisco-
Oakland Bay Bridge.
Caltrans thinks the new approach will make it faster for commuters to get on the bridge from the
San Francisco side.
The new section of the highway is scheduled to open tomorrow morning and cost 429 million dollars
to construct.
• Schwarzenegger
• Bridge
• Caltrans
• Governor
• Scene
• Terminator
• Does the article talk about entertainment or politics?
• Using just the words (bag-of-words model) has severe limitations
Natural Language Processing
Consider the sentence "A dog is chasing a boy on the playground":
• Lexical analysis (part-of-speech tagging): Det Noun Aux Verb Det Noun Prep Det Noun
• Syntactic analysis (parsing): noun phrases, a complex verb, and a prepositional phrase combine into verb phrases and, finally, the sentence
• Semantic analysis: Dog(d1). Boy(b1). Playground(p1). Chasing(d1,b1,p1).
• Inference: from Scared(x) if Chasing(_,x,_) we can derive Scared(b1)
• Pragmatic analysis (speech act): a person saying this may be reminding another person to get the dog back…
(Taken from ChengXiang Zhai, CS 397cxz – Fall 2003)
Natural Language Processing is Difficult!
• Natural language is designed to make human communication
efficient. As a result,
§We omit a lot of common sense knowledge, which we
assume the hearer/reader possesses.
§We keep a lot of ambiguities, which we assume the
hearer/reader knows how to resolve.
• This makes every step of NLP very difficult
§Ambiguity
§Common sense reasoning
What are the Difficulties?
• Word-level Ambiguity
§“design” can be a verb or a noun
§“root” has multiple meanings
• Syntactic Ambiguity
§Natural language processing
§A man saw a boy with a telescope
• Presupposition
§“He has quit smoking” implies he smoked
• Text Mining NLP Approach
§Locate promising fragments using fast methods
(bag-of-tokens)
§Only apply slow NLP techniques to promising fragments
State of the Art?
For the example sentence "A dog is chasing a boy on the playground":
• POS tagging reaches about 97% accuracy
• Parsing (recovering the full phrase structure) exceeds 90%
• Semantic analysis works only for some aspects: entity-relation extraction, word sense disambiguation, sentiment analysis, …
• Inference and speech acts remain open
(Taken from ChengXiang Zhai, CS 397cxz – Fall 2003)
What are the Open Problems?
• 100% POS tagging
§“he turned off the highway” vs “he turned off the light”
• General complete parsing
§“a man saw a boy with a telescope”
• Precise deep semantic analysis
• Robust and general NLP methods tend to be shallow while deep
understanding does not scale up easily
Information Retrieval
Information Retrieval Systems
• Information retrieval deals with the problem of locating relevant
documents with respect to the user input or preference
• Typical systems
§Online library catalogs
§Online document management systems
• Typical issues
§Management of unstructured documents
§Approximate search
§Relevance
Two Modes of Text Access
• Pull Mode (search engines)
§Users take initiative
§Ad hoc information need
• Push Mode (recommender systems)
§Systems take initiative
§Stable information need or system has good knowledge about
a user’s need
Precision, Recall, and the F-Score

precision = |{Relevant} ∩ {Retrieved}| / |{Retrieved}|
recall = |{Relevant} ∩ {Retrieved}| / |{Relevant}|
F-score = (recall × precision) / ((recall + precision) / 2)

(The slide shows a Venn diagram: the Relevant and Retrieved sets, with their intersection R&R, inside the set of All Documents.)
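These measures translate directly into code; a minimal sketch using sets of document ids:

```python
def precision_recall_f(relevant, retrieved):
    """Compute precision, recall, and F-score from two sets of document ids."""
    hits = len(relevant & retrieved)                      # |{Relevant} ∩ {Retrieved}|
    precision = hits / len(retrieved) if retrieved else 0.0
    recall = hits / len(relevant) if relevant else 0.0
    # F-score: harmonic mean of precision and recall
    f = (recall * precision) / ((recall + precision) / 2) if hits else 0.0
    return precision, recall, f

# 4 relevant documents, 3 retrieved, 2 of which are relevant
p, r, f = precision_recall_f({1, 2, 3, 4}, {2, 3, 5})
# precision = 2/3, recall = 2/4
```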
Text Retrieval Methods
• Document Selection (keyword-based retrieval)
§The query defines a set of requirements
§Only the documents that satisfy the query are returned
§A typical approach is the Boolean Retrieval Model
• Document Ranking (similarity-based retrieval)
§Documents are ranked on the basis of their relevance with
respect to the user query
§For each document a “degree of relevance” is computed with
respect to the query
§A typical approach is the Vector Space Model
Vector Space Model
• A document and a query are represented as vectors in a high-dimensional space whose dimensions correspond to the keywords
• Relevance is measured with an appropriate similarity measure defined over the vector space
• Issues
§How to select keywords to capture “basic concepts”?
§How to assign weights to each term?
§How to measure the similarity?
Bag of Words
Boolean Retrieval Model
• A query is composed of keywords linked by the three logical
connectives: not, and, or
• For example, “car and repair”, “plane or airplane”
• In the Boolean model each document is either relevant or non-relevant, depending on whether it matches the query
• Limitations
§Generally not suitable for satisfying an information need
§Useful only in very specific domains where users have deep expertise
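A sketch of Boolean retrieval over a bag of tokens; the tuple-based query format used here is invented for illustration:

```python
def boolean_match(doc_words, query):
    """Evaluate a tiny Boolean query against a document's word set.

    query is a plain keyword string, or a tuple such as
    ('and', 'car', 'repair'), ('or', 'plane', 'airplane'), ('not', 'car').
    """
    if isinstance(query, str):
        return query in doc_words
    op, *args = query
    if op == 'and':
        return all(boolean_match(doc_words, a) for a in args)
    if op == 'or':
        return any(boolean_match(doc_words, a) for a in args)
    if op == 'not':
        return not boolean_match(doc_words, args[0])
    raise ValueError(op)

doc = {'car', 'repair', 'shop'}
boolean_match(doc, ('and', 'car', 'repair'))     # True: document matches
boolean_match(doc, ('or', 'plane', 'airplane'))  # False: neither word occurs
```

Each document is simply in or out of the result set; there is no notion of degree of relevance.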
Example: with the vocabulary {Java, Microsoft, Starbucks}, a document that mentions only Microsoft and Starbucks is represented as d = (0, 1, 1), while a query mentioning all three terms is q = (1, 1, 1).
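The Java/Microsoft/Starbucks toy example can be reproduced in a few lines (raw term counts; the document and query texts are made up):

```python
import math

def to_vector(words, vocabulary):
    """Bag-of-words vector over a fixed keyword vocabulary (raw counts)."""
    return [words.count(term) for term in vocabulary]

def cosine(u, v):
    """Cosine similarity between two term vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

vocab = ['java', 'microsoft', 'starbucks']
d = to_vector('microsoft buys starbucks coffee for microsoft offices'.split(), vocab)
q = to_vector('java microsoft starbucks'.split(), vocab)
# d == [0, 2, 1]; q == [1, 1, 1]
```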
Document Ranking
• Query q = q1, …, qm where qi is a word
• Document d = d1, …, dn where di is a word (bag of words model)
• Ranking function f(q, d) which returns a real value
• A good ranking function should rank relevant documents on top of
non-relevant ones
• The key challenge is how to measure the likelihood that document d is
relevant to query q
• The retrieval model gives a formalization of relevance in computational
terms
Retrieval Models
• Similarity-based models: f(q,d) = similarity(q,d)
§Vector space model
• Probabilistic models: f(q,d) = p(R=1|d,q)
§Classic probabilistic model
§Language model
§Divergence-from-randomness model
• Probabilistic inference model: f(q,d) = p(d->q)
• Axiomatic model: f(q,d) must satisfy a set of constraints
Basic Vector Space Model (VSM)
How Would You Rank These Docs?
Example with Basic VSM
Basic VSM returns the number of distinct query words
matched in d.
Two Problems of Basic VSM
Term Frequency Weighting (TFW)
Ranking using TFW
“presidential” vs “about”
Adding Inverse Document Frequency
Inverse Document Frequency (k denotes the document frequency, i.e., the number of documents containing the term)
“presidential” vs “about”
Inverse Document Frequency Ranking
Combining Term Frequency (TF) with Inverse Document Frequency Ranking
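A sketch of the combined TF-IDF ranking function; the idf variant log((N + 1) / k) used here is one common choice and may differ from the slide's exact formula:

```python
import math

def tf_idf_score(query_terms, doc_terms, corpus):
    """Ranking function f(q, d): sum over query terms of tf(t, d) * idf(t).

    idf(t) = log((N + 1) / k), where N is the corpus size and k the
    number of documents containing t (the document frequency).
    """
    n = len(corpus)
    score = 0.0
    for t in set(query_terms):
        tf = doc_terms.count(t)
        k = sum(1 for d in corpus if t in d)   # document frequency of t
        if tf and k:
            score += tf * math.log((n + 1) / k)
    return score

# 'about' appears in 2 of 3 docs, so it contributes less than 'presidential'
corpus = [['presidential', 'campaign'], ['about', 'campaign'], ['about', 'news']]
```

This reproduces the "presidential" vs "about" effect: the rarer term earns a higher weight.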
TF Transformation
BM25 Transformation
Ranking with BM25-TF
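A sketch of the BM25 term-frequency transformation; the value k = 1.2 is a conventional tuning default, not taken from the slide:

```python
def bm25_tf(tf, k=1.2):
    """BM25 term-frequency transformation: grows with tf but saturates at k + 1.

    k controls how quickly repeated occurrences stop adding weight.
    """
    return (k + 1) * tf / (tf + k)

bm25_tf(0)    # 0.0: an absent term contributes nothing
bm25_tf(1)    # 1.0, regardless of k
bm25_tf(100)  # close to k + 1 = 2.2: heavy repetition is capped
```

The upper bound k + 1 is what prevents a document from dominating the ranking just by repeating a query word.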
What about Document Length?
Document Length Normalization
• Penalize a long doc with a doc length normalizer
§Long doc has a better chance to match any query
§Need to avoid over-penalization
• A document may be long because
§it uses more words, in which case it should be penalized more
§it has more content, in which case it should be penalized less
• Pivoted length normalizer
§It uses average doc length as “pivot”
§The normalizer is 1 if the doc length (|d|) is equal to
the average doc length (avdl)
Pivot Length Normalizer
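A sketch of the pivoted length normalizer; the slope parameter b is a tuning choice (0.75 here is illustrative):

```python
def pivoted_length_normalizer(doc_len, avdl, b=0.75):
    """Pivoted document length normalizer: 1 - b + b * |d| / avdl.

    Equals 1 when |d| == avdl (the pivot); b in [0, 1] controls how
    strongly long documents are penalized.
    """
    return 1 - b + b * doc_len / avdl

pivoted_length_normalizer(100, 100)  # 1.0 at the pivot
pivoted_length_normalizer(200, 100)  # > 1: the score is divided by this,
                                     # so longer-than-average docs are penalized
```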
State of the Art VSM Ranking Functions
• Pivoted Length Normalization VSM [Singhal et al 96]
• BM25/Okapi [Robertson & Walker 94]
Preprocessing
Keywords Selection
• Text is preprocessed through tokenization
• Stop word list and word stemming are used to isolate and thus identify
significant keywords
• Stop words elimination
§Stop words are elements that are considered uninteresting with
respect to the retrieval and thus are eliminated
§For instance, “a”, “the”, “always”, “along”
• Word stemming
§Different words that share a common prefix are simplified and
replaced by their common prefix
§For instance, “computer”, “computing”, “computerize” are replaced
by “comput”
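A toy version of this preprocessing pipeline; the stop-word list and suffix table are tiny illustrative stand-ins for a real stop list and a real stemmer such as Porter's:

```python
STOP_WORDS = {'a', 'the', 'always', 'along', 'is', 'to', 'and'}
SUFFIXES = ('ization', 'izing', 'ing', 'ize', 'er', 'ers', 's')  # crude, illustrative

def stem(word):
    """Toy suffix-stripping stemmer (a real system would use Porter's algorithm)."""
    for suffix in SUFFIXES:
        if word.endswith(suffix) and len(word) > len(suffix) + 2:
            return word[:-len(suffix)]
    return word

def keywords(text):
    tokens = text.lower().split()                        # tokenization
    tokens = [t for t in tokens if t not in STOP_WORDS]  # stop word removal
    return [stem(t) for t in tokens]                     # stemming

keywords('the computer is computing')  # ['comput', 'comput']
```

Both "computer" and "computing" collapse onto the common prefix "comput", so they are treated as the same keyword.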
Dimensionality Reduction
• The approaches presented so far involve a high-dimensional space (a huge number of keywords)
§Computationally expensive
§Difficult to deal with synonymy and polysemy problems
§“vehicle” is similar to “car”
§“mining” has different meanings in different contexts
• Dimensionality reduction techniques
§Latent Semantic Indexing (LSI)
§Locality Preserving Indexing (LPI)
§Probabilistic Latent Semantic Indexing (PLSI)
Latent Semantic Indexing (LSI)
• Let xi be the vectors representing the documents and let X (the term-frequency matrix) be the whole set of documents
• We use the singular value decomposition (SVD) to reduce the size of the frequency table
• Approximate X with Xk, obtained from the first k singular vectors of U
• It can be shown that this transformation minimizes the reconstruction error of X
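Assuming NumPy is available, the truncated-SVD approximation can be sketched as:

```python
import numpy as np

def lsi(X, k):
    """Rank-k approximation of the term-document matrix X via truncated SVD.

    X = U S V^T; keeping the k largest singular values yields the X_k that
    minimizes the Frobenius-norm reconstruction error among rank-k matrices.
    """
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    Xk = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]
    docs_k = np.diag(s[:k]) @ Vt[:k, :]   # documents in the k-dim latent space
    return Xk, docs_k

# toy term-document matrix: rows = terms, columns = documents
X = np.array([[2.0, 1.0, 0.0],
              [1.0, 2.0, 0.0],
              [0.0, 0.0, 3.0]])
Xk, docs = lsi(X, 2)
```

Each document is now described by k latent coordinates instead of one weight per keyword, which also merges near-synonymous terms that co-occur across documents.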
Text Classification
Document Classification
• Solves the problem of automatically labeling text documents on the basis of
§Topic
§Style
§Purpose
• Usual classification techniques can be used to learn from a
training set of manually labeled documents
• Which features? The keywords can number in the thousands…
• Major approaches
§Similarity-based
§Dimensionality reduction
§Naïve Bayes text classifiers
Similarity-based Text Classifiers
• Exploits Information Retrieval and the k-nearest-neighbor classifier
§For a new document to classify, the k most similar documents
in the training set are retrieved
§Documents are classified on the basis of the class distribution
among the k documents retrieved using the majority vote or a
weighted vote
§Tuning k is very important to achieve a good performance
• Limitations
§Space overhead to store all the documents in training set
§Time overhead to retrieve the similar documents
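A sketch of the similarity-based classifier with a majority vote (term-count dictionaries and cosine similarity; the training data is made up):

```python
import math
from collections import Counter

def cosine(u, v):
    """Cosine similarity between two term-count dictionaries."""
    dot = sum(w * v.get(t, 0) for t, w in u.items())
    nu = math.sqrt(sum(w * w for w in u.values()))
    nv = math.sqrt(sum(w * w for w in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def knn_classify(doc, training_set, k=3):
    """Majority vote over the k training documents most similar to doc.

    training_set is a list of (term-count dict, label) pairs; a weighted
    vote would add each neighbor's similarity instead of counting once.
    """
    ranked = sorted(training_set, key=lambda dl: cosine(doc, dl[0]), reverse=True)
    votes = Counter(label for _, label in ranked[:k])
    return votes.most_common(1)[0][0]

train = [(Counter('win game team'.split()), 'sport'),
         (Counter('match team score'.split()), 'sport'),
         (Counter('vote election party'.split()), 'politics')]
knn_classify(Counter('team game score'.split()), train, k=3)  # 'sport'
```

Note that the whole training set is kept in memory and scanned per query, which is exactly the space and time overhead the slide mentions.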
Dimensionality Reduction for Text Classification
• As in the Vector Space Model, the goal is to reduce the
number of features to represents text
• Usual dimensionality reduction approaches in Information
Retrieval are based on the distribution of keywords
among the whole documents database
• In text classification it is important to consider also the
correlation between keywords and classes
§Rare keywords have a high TF-IDF but might be uniformly
distributed among classes
§LSI and LPI do not take class distributions into account
• Usual classification techniques, such as SVMs and Naïve Bayes classifiers, can then be applied on the reduced feature space
Naïve Bayes for Text Classification
• Document categories {C1, …, Cn}
• Document to classify D
• Probabilistic model:
  P(Ci | D) = P(D | Ci) P(Ci) / P(D)
• We choose the class C* that maximizes P(Ci | D)
• Issues
§Which features?
§How to compute the probabilities?
Naïve Bayes for Text Classification
• Features can be simply defined as the words in the
document
• Let ai be the keyword in position i of the document and wj a word in the vocabulary; we estimate P(ai = wj | C)
Example
H={like,dislike}
D= “Our approach to representing arbitrary text documents is disturbingly simple”
Naïve Bayes for Text Classification
• Features can be simply defined as the words in the
document
• Let ai be a keyword in the doc, and wj a word in the vocabulary
• Assumptions
§Keyword distributions are inter-independent
§Keyword distributions are order-independent
Naïve Bayes for Text Classification
• How to compute probabilities?
§ Simply counting the occurrences may lead to wrong results when
probabilities are small
• The m-estimate approach adapted for text gives P(wk | C) = (Nc,k + 1) / (Nc + |Vocabulary|), where
§ Nc is the whole number of word positions in documents of class C
§ Nc,k is the number of occurrences of wk in documents of class C
§ |Vocabulary| is the number of distinct words in training set
§ Uniform priors are assumed
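A sketch of the m-estimate probabilities and the resulting classifier, assuming uniform priors as on the slide (the training data is made up):

```python
import math
from collections import Counter

def train_nb(docs_by_class):
    """m-estimate word probabilities: P(w|C) = (N_{C,w} + 1) / (N_C + |V|)."""
    vocab = {w for docs in docs_by_class.values() for d in docs for w in d}
    model = {}
    for c, docs in docs_by_class.items():
        counts = Counter(w for d in docs for w in d)
        n_c = sum(counts.values())  # number of word positions in class C
        model[c] = {w: (counts[w] + 1) / (n_c + len(vocab)) for w in vocab}
    return model

def classify(model, doc):
    """argmax over classes of sum of log P(w|C); uniform priors are assumed."""
    return max(model, key=lambda c: sum(math.log(model[c].get(w, 1e-9)) for w in doc))

data = {'like': [['simple', 'approach'], ['simple', 'text']],
        'dislike': [['disturbingly', 'complex']]}
model = train_nb(data)
classify(model, ['simple', 'text'])  # 'like'
```

Working in log space avoids the underflow that makes naive multiplication of small probabilities fail, which is the numerical issue the previous slide warns about.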
Naïve Bayes for Text Classification
• Final classification is performed by choosing the class that maximizes the product of the keyword probabilities P(ai | C)
• Despite their simplicity, Naïve Bayes classifiers work very well in practice
• Applications
§Newsgroup post classification
§NewsWeeder (news recommender)
Clustering Text
Text Clustering
• It involves the same steps as other text mining algorithms to preprocess the text data into an adequate representation
§Tokenizing and stemming each document
§Transforming the corpus into vector space using TF-IDF
§Calculating the cosine distance between each pair of documents as a measure of similarity
§Clustering the documents using the similarity measure
• Everything works as with ordinary data, except that text requires a rather long preprocessing stage.
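The steps above, up to the pairwise distance computation, can be sketched as follows (tokens are assumed already stemmed; the resulting distances can feed any standard clustering algorithm):

```python
import math
from collections import Counter

def tf_idf_vectors(corpus):
    """TF-IDF vectors for a tokenized corpus (here idf = log(N / df))."""
    n = len(corpus)
    df = Counter(t for doc in corpus for t in set(doc))  # document frequencies
    return [{t: tf * math.log(n / df[t]) for t, tf in Counter(doc).items()}
            for doc in corpus]

def cosine_distance(u, v):
    """Cosine distance = 1 - cosine similarity, over sparse dict vectors."""
    dot = sum(w * v.get(t, 0.0) for t, w in u.items())
    nu = math.sqrt(sum(w * w for w in u.values()))
    nv = math.sqrt(sum(w * w for w in v.values()))
    return 1.0 - (dot / (nu * nv) if nu and nv else 0.0)

corpus = [['game', 'team', 'win'], ['team', 'match', 'win'], ['vote', 'party']]
vecs = tf_idf_vectors(corpus)
# the two sport-like documents are closer to each other than to the third:
cosine_distance(vecs[0], vecs[1]) < cosine_distance(vecs[0], vecs[2])  # True
```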
Run the KNIME workflow and
Python notebook for the lecture
More Related Content

What's hot

DMTM Lecture 15 Clustering evaluation
DMTM Lecture 15 Clustering evaluationDMTM Lecture 15 Clustering evaluation
DMTM Lecture 15 Clustering evaluationPier Luca Lanzi
 
DMTM 2015 - 16 Data Preparation
DMTM 2015 - 16 Data PreparationDMTM 2015 - 16 Data Preparation
DMTM 2015 - 16 Data PreparationPier Luca Lanzi
 
DMTM Lecture 12 Hierarchical clustering
DMTM Lecture 12 Hierarchical clusteringDMTM Lecture 12 Hierarchical clustering
DMTM Lecture 12 Hierarchical clusteringPier Luca Lanzi
 
DMTM Lecture 04 Classification
DMTM Lecture 04 ClassificationDMTM Lecture 04 Classification
DMTM Lecture 04 ClassificationPier Luca Lanzi
 
DMTM Lecture 13 Representative based clustering
DMTM Lecture 13 Representative based clusteringDMTM Lecture 13 Representative based clustering
DMTM Lecture 13 Representative based clusteringPier Luca Lanzi
 
DMTM 2015 - 07 Hierarchical Clustering
DMTM 2015 - 07 Hierarchical ClusteringDMTM 2015 - 07 Hierarchical Clustering
DMTM 2015 - 07 Hierarchical ClusteringPier Luca Lanzi
 
Recurrent networks and beyond by Tomas Mikolov
Recurrent networks and beyond by Tomas MikolovRecurrent networks and beyond by Tomas Mikolov
Recurrent networks and beyond by Tomas MikolovBhaskar Mitra
 
Tomáš Mikolov - Distributed Representations for NLP
Tomáš Mikolov - Distributed Representations for NLPTomáš Mikolov - Distributed Representations for NLP
Tomáš Mikolov - Distributed Representations for NLPMachine Learning Prague
 
DMTM 2015 - 15 Classification Ensembles
DMTM 2015 - 15 Classification EnsemblesDMTM 2015 - 15 Classification Ensembles
DMTM 2015 - 15 Classification EnsemblesPier Luca Lanzi
 
AINL 2016: Castro, Lopez, Cavalcante, Couto
AINL 2016: Castro, Lopez, Cavalcante, CoutoAINL 2016: Castro, Lopez, Cavalcante, Couto
AINL 2016: Castro, Lopez, Cavalcante, CoutoLidia Pivovarova
 
Intro to Deep Learning for Question Answering
Intro to Deep Learning for Question AnsweringIntro to Deep Learning for Question Answering
Intro to Deep Learning for Question AnsweringTraian Rebedea
 
Search problems in Artificial Intelligence
Search problems in Artificial IntelligenceSearch problems in Artificial Intelligence
Search problems in Artificial Intelligenceananth
 
[KDD 2018 tutorial] End to-end goal-oriented question answering systems
[KDD 2018 tutorial] End to-end goal-oriented question answering systems[KDD 2018 tutorial] End to-end goal-oriented question answering systems
[KDD 2018 tutorial] End to-end goal-oriented question answering systemsQi He
 
Knowledge Patterns SSSW2016
Knowledge Patterns SSSW2016Knowledge Patterns SSSW2016
Knowledge Patterns SSSW2016Aldo Gangemi
 
Sentence representations and question answering (YerevaNN)
Sentence representations and question answering (YerevaNN)Sentence representations and question answering (YerevaNN)
Sentence representations and question answering (YerevaNN)YerevaNN research lab
 
Deep Learning for Natural Language Processing
Deep Learning for Natural Language ProcessingDeep Learning for Natural Language Processing
Deep Learning for Natural Language ProcessingJonathan Mugan
 

What's hot (20)

DMTM Lecture 15 Clustering evaluation
DMTM Lecture 15 Clustering evaluationDMTM Lecture 15 Clustering evaluation
DMTM Lecture 15 Clustering evaluation
 
DMTM 2015 - 16 Data Preparation
DMTM 2015 - 16 Data PreparationDMTM 2015 - 16 Data Preparation
DMTM 2015 - 16 Data Preparation
 
DMTM Lecture 12 Hierarchical clustering
DMTM Lecture 12 Hierarchical clusteringDMTM Lecture 12 Hierarchical clustering
DMTM Lecture 12 Hierarchical clustering
 
DMTM Lecture 04 Classification
DMTM Lecture 04 ClassificationDMTM Lecture 04 Classification
DMTM Lecture 04 Classification
 
DMTM Lecture 13 Representative based clustering
DMTM Lecture 13 Representative based clusteringDMTM Lecture 13 Representative based clustering
DMTM Lecture 13 Representative based clustering
 
DMTM 2015 - 07 Hierarchical Clustering
DMTM 2015 - 07 Hierarchical ClusteringDMTM 2015 - 07 Hierarchical Clustering
DMTM 2015 - 07 Hierarchical Clustering
 
NLP from scratch
NLP from scratch NLP from scratch
NLP from scratch
 
Recurrent networks and beyond by Tomas Mikolov
Recurrent networks and beyond by Tomas MikolovRecurrent networks and beyond by Tomas Mikolov
Recurrent networks and beyond by Tomas Mikolov
 
Tomáš Mikolov - Distributed Representations for NLP
Tomáš Mikolov - Distributed Representations for NLPTomáš Mikolov - Distributed Representations for NLP
Tomáš Mikolov - Distributed Representations for NLP
 
DMTM 2015 - 15 Classification Ensembles
DMTM 2015 - 15 Classification EnsemblesDMTM 2015 - 15 Classification Ensembles
DMTM 2015 - 15 Classification Ensembles
 
AINL 2016: Castro, Lopez, Cavalcante, Couto
AINL 2016: Castro, Lopez, Cavalcante, CoutoAINL 2016: Castro, Lopez, Cavalcante, Couto
AINL 2016: Castro, Lopez, Cavalcante, Couto
 
Intro to Deep Learning for Question Answering
Intro to Deep Learning for Question AnsweringIntro to Deep Learning for Question Answering
Intro to Deep Learning for Question Answering
 
AINL 2016: Nikolenko
AINL 2016: NikolenkoAINL 2016: Nikolenko
AINL 2016: Nikolenko
 
Search problems in Artificial Intelligence
Search problems in Artificial IntelligenceSearch problems in Artificial Intelligence
Search problems in Artificial Intelligence
 
AINL 2016: Filchenkov
AINL 2016: FilchenkovAINL 2016: Filchenkov
AINL 2016: Filchenkov
 
[KDD 2018 tutorial] End to-end goal-oriented question answering systems
[KDD 2018 tutorial] End to-end goal-oriented question answering systems[KDD 2018 tutorial] End to-end goal-oriented question answering systems
[KDD 2018 tutorial] End to-end goal-oriented question answering systems
 
NLP and LSA getting started
NLP and LSA getting startedNLP and LSA getting started
NLP and LSA getting started
 
Knowledge Patterns SSSW2016
Knowledge Patterns SSSW2016Knowledge Patterns SSSW2016
Knowledge Patterns SSSW2016
 
Sentence representations and question answering (YerevaNN)
Sentence representations and question answering (YerevaNN)Sentence representations and question answering (YerevaNN)
Sentence representations and question answering (YerevaNN)
 
Deep Learning for Natural Language Processing
Deep Learning for Natural Language ProcessingDeep Learning for Natural Language Processing
Deep Learning for Natural Language Processing
 

Similar to DMTM Lecture 17 Text mining

Artificial Intelligence Notes Unit 4
Artificial Intelligence Notes Unit 4Artificial Intelligence Notes Unit 4
Artificial Intelligence Notes Unit 4DigiGurukul
 
Adnan: Introduction to Natural Language Processing
Adnan: Introduction to Natural Language Processing Adnan: Introduction to Natural Language Processing
Adnan: Introduction to Natural Language Processing Mustafa Jarrar
 
Natural Language Processing
Natural Language ProcessingNatural Language Processing
Natural Language ProcessingToine Bogers
 
Natural Language Processing (NLP)
Natural Language Processing (NLP)Natural Language Processing (NLP)
Natural Language Processing (NLP)Abdullah al Mamun
 
A Panorama of Natural Language Processing
A Panorama of Natural Language ProcessingA Panorama of Natural Language Processing
A Panorama of Natural Language ProcessingTed Xiao
 
Introduction to Natural Language Processing (NLP)
Introduction to Natural Language Processing (NLP)Introduction to Natural Language Processing (NLP)
Introduction to Natural Language Processing (NLP)VenkateshMurugadas
 
DMTM Lecture 14 Density based clustering
DMTM Lecture 14 Density based clusteringDMTM Lecture 14 Density based clustering
DMTM Lecture 14 Density based clusteringPier Luca Lanzi
 
Deep Learning and Modern Natural Language Processing (AnacondaCon2019)
Deep Learning and Modern Natural Language Processing (AnacondaCon2019)Deep Learning and Modern Natural Language Processing (AnacondaCon2019)
Deep Learning and Modern Natural Language Processing (AnacondaCon2019)Zachary S. Brown
 
Scientific and Technical Translation in English: Week 2
Scientific and Technical Translation in English: Week 2Scientific and Technical Translation in English: Week 2
Scientific and Technical Translation in English: Week 2Ron Martinez
 
[系列活動] 無所不在的自然語言處理—基礎概念、技術與工具介紹
[系列活動] 無所不在的自然語言處理—基礎概念、技術與工具介紹[系列活動] 無所不在的自然語言處理—基礎概念、技術與工具介紹
[系列活動] 無所不在的自然語言處理—基礎概念、技術與工具介紹台灣資料科學年會
 
NLP_guest_lecture.pdf
NLP_guest_lecture.pdfNLP_guest_lecture.pdf
NLP_guest_lecture.pdfSoha82
 
Natural Language Processing: Lecture 255
Natural Language Processing: Lecture 255Natural Language Processing: Lecture 255
Natural Language Processing: Lecture 255deffa5
 
Language Models for Information Retrieval
Language Models for Information RetrievalLanguage Models for Information Retrieval
Language Models for Information RetrievalNik Spirin
 
NLP introduced and in 47 slides Lecture 1.ppt
NLP introduced and in 47 slides Lecture 1.pptNLP introduced and in 47 slides Lecture 1.ppt
NLP introduced and in 47 slides Lecture 1.pptOlusolaTop
 
UCU NLP Summer Workshops 2017 - Part 2
UCU NLP Summer Workshops 2017 - Part 2UCU NLP Summer Workshops 2017 - Part 2
UCU NLP Summer Workshops 2017 - Part 2Yuriy Guts
 
Natural Language Processing
Natural Language ProcessingNatural Language Processing
Natural Language ProcessingYasir Khan
 
Text Representations for Deep learning
Text Representations for Deep learningText Representations for Deep learning
Text Representations for Deep learningZachary S. Brown
 
Introduction to search engine-building with Lucene
Introduction to search engine-building with LuceneIntroduction to search engine-building with Lucene
Introduction to search engine-building with LuceneKai Chan
 

Similar to DMTM Lecture 17 Text mining (20)

Artificial Intelligence Notes Unit 4
Artificial Intelligence Notes Unit 4Artificial Intelligence Notes Unit 4
Artificial Intelligence Notes Unit 4
 
Adnan: Introduction to Natural Language Processing
Adnan: Introduction to Natural Language Processing Adnan: Introduction to Natural Language Processing
Adnan: Introduction to Natural Language Processing
 
Natural Language Processing
Natural Language ProcessingNatural Language Processing
Natural Language Processing
 
Natural Language Processing (NLP)
Natural Language Processing (NLP)Natural Language Processing (NLP)
Natural Language Processing (NLP)
 
A Panorama of Natural Language Processing
A Panorama of Natural Language ProcessingA Panorama of Natural Language Processing
A Panorama of Natural Language Processing
 
Introduction to Natural Language Processing (NLP)
Introduction to Natural Language Processing (NLP)Introduction to Natural Language Processing (NLP)
Introduction to Natural Language Processing (NLP)
 
Some Information Retrieval Models and Our Experiments for TREC KBA
Some Information Retrieval Models and Our Experiments for TREC KBASome Information Retrieval Models and Our Experiments for TREC KBA
Some Information Retrieval Models and Our Experiments for TREC KBA
 
DMTM Lecture 14 Density based clustering
DMTM Lecture 14 Density based clusteringDMTM Lecture 14 Density based clustering
DMTM Lecture 14 Density based clustering
 
Deep Learning and Modern Natural Language Processing (AnacondaCon2019)
Deep Learning and Modern Natural Language Processing (AnacondaCon2019)Deep Learning and Modern Natural Language Processing (AnacondaCon2019)
Deep Learning and Modern Natural Language Processing (AnacondaCon2019)
 
Scientific and Technical Translation in English: Week 2
Scientific and Technical Translation in English: Week 2Scientific and Technical Translation in English: Week 2
Scientific and Technical Translation in English: Week 2
 
[系列活動] 無所不在的自然語言處理—基礎概念、技術與工具介紹
[系列活動] 無所不在的自然語言處理—基礎概念、技術與工具介紹[系列活動] 無所不在的自然語言處理—基礎概念、技術與工具介紹
[系列活動] 無所不在的自然語言處理—基礎概念、技術與工具介紹
 
NLP_guest_lecture.pdf
NLP_guest_lecture.pdfNLP_guest_lecture.pdf
NLP_guest_lecture.pdf
 
Natural Language Processing: Lecture 255
Natural Language Processing: Lecture 255Natural Language Processing: Lecture 255
Natural Language Processing: Lecture 255
 
Language Models for Information Retrieval
Language Models for Information RetrievalLanguage Models for Information Retrieval
Language Models for Information Retrieval
 
NLP introduced and in 47 slides Lecture 1.ppt
NLP introduced and in 47 slides Lecture 1.pptNLP introduced and in 47 slides Lecture 1.ppt
NLP introduced and in 47 slides Lecture 1.ppt
 
UCU NLP Summer Workshops 2017 - Part 2
UCU NLP Summer Workshops 2017 - Part 2UCU NLP Summer Workshops 2017 - Part 2
UCU NLP Summer Workshops 2017 - Part 2
 
Natural Language Processing
Natural Language ProcessingNatural Language Processing
Natural Language Processing
 
Text Representations for Deep learning
Text Representations for Deep learningText Representations for Deep learning
Text Representations for Deep learning
 
4.3.pdf
4.3.pdf4.3.pdf
4.3.pdf
 
Introduction to search engine-building with Lucene
Introduction to search engine-building with LuceneIntroduction to search engine-building with Lucene
Introduction to search engine-building with Lucene
 

More from Pier Luca Lanzi

11 Settembre 2021 - Giocare con i Videogiochi
11 Settembre 2021 - Giocare con i Videogiochi11 Settembre 2021 - Giocare con i Videogiochi
11 Settembre 2021 - Giocare con i VideogiochiPier Luca Lanzi
 
Breve Viaggio al Centro dei Videogiochi
Breve Viaggio al Centro dei VideogiochiBreve Viaggio al Centro dei Videogiochi
Breve Viaggio al Centro dei VideogiochiPier Luca Lanzi
 
Global Game Jam 19 @ POLIMI - Morning Welcome
Global Game Jam 19 @ POLIMI - Morning WelcomeGlobal Game Jam 19 @ POLIMI - Morning Welcome
Global Game Jam 19 @ POLIMI - Morning WelcomePier Luca Lanzi
 
Data Driven Game Design @ Campus Party 2018
Data Driven Game Design @ Campus Party 2018Data Driven Game Design @ Campus Party 2018
Data Driven Game Design @ Campus Party 2018Pier Luca Lanzi
 
GGJ18 al Politecnico di Milano - Presentazione che precede la presentazione d...
GGJ18 al Politecnico di Milano - Presentazione che precede la presentazione d...GGJ18 al Politecnico di Milano - Presentazione che precede la presentazione d...
GGJ18 al Politecnico di Milano - Presentazione che precede la presentazione d...Pier Luca Lanzi
 
GGJ18 al Politecnico di Milano - Presentazione di apertura
GGJ18 al Politecnico di Milano - Presentazione di aperturaGGJ18 al Politecnico di Milano - Presentazione di apertura
GGJ18 al Politecnico di Milano - Presentazione di aperturaPier Luca Lanzi
 
Presentation for UNITECH event - January 8, 2018
Presentation for UNITECH event - January 8, 2018Presentation for UNITECH event - January 8, 2018
Presentation for UNITECH event - January 8, 2018Pier Luca Lanzi
 
DMTM Lecture 19 Data exploration
DMTM Lecture 19 Data explorationDMTM Lecture 19 Data exploration
DMTM Lecture 19 Data explorationPier Luca Lanzi
 
DMTM Lecture 16 Association rules
DMTM Lecture 16 Association rulesDMTM Lecture 16 Association rules
DMTM Lecture 16 Association rulesPier Luca Lanzi
 
DMTM Lecture 10 Classification ensembles
DMTM Lecture 10 Classification ensemblesDMTM Lecture 10 Classification ensembles
DMTM Lecture 10 Classification ensemblesPier Luca Lanzi
 
DMTM Lecture 09 Other classificationmethods
DMTM Lecture 09 Other classificationmethodsDMTM Lecture 09 Other classificationmethods
DMTM Lecture 09 Other classificationmethodsPier Luca Lanzi
 
DMTM Lecture 08 Classification rules
DMTM Lecture 08 Classification rulesDMTM Lecture 08 Classification rules
DMTM Lecture 08 Classification rulesPier Luca Lanzi
 
DMTM Lecture 07 Decision trees
DMTM Lecture 07 Decision treesDMTM Lecture 07 Decision trees
DMTM Lecture 07 Decision treesPier Luca Lanzi
 
DMTM Lecture 06 Classification evaluation
DMTM Lecture 06 Classification evaluationDMTM Lecture 06 Classification evaluation
DMTM Lecture 06 Classification evaluationPier Luca Lanzi
 
DMTM Lecture 05 Data representation
DMTM Lecture 05 Data representationDMTM Lecture 05 Data representation
DMTM Lecture 05 Data representationPier Luca Lanzi
 
DMTM Lecture 03 Regression
DMTM Lecture 03 RegressionDMTM Lecture 03 Regression
DMTM Lecture 03 RegressionPier Luca Lanzi
 
DMTM Lecture 01 Introduction
DMTM Lecture 01 IntroductionDMTM Lecture 01 Introduction
DMTM Lecture 01 IntroductionPier Luca Lanzi
 
DMTM Lecture 02 Data mining
DMTM Lecture 02 Data miningDMTM Lecture 02 Data mining
DMTM Lecture 02 Data miningPier Luca Lanzi
 
VDP2016 - Lecture 16 Rendering pipeline
DMTM Lecture 17 Text mining

  • 1. Prof. Pier Luca Lanzi Text Mining Data Mining and Text Mining (UIC 583 @ Politecnico di Milano)
  • 2. Prof. Pier Luca Lanzi (Taken from ChengXiang Zhai, CS 397 – Fall 2003)
  • 3. Prof. Pier Luca Lanzi (Taken from ChengXiang Zhai, CS 397 – Fall 2003)
  • 4. Prof. Pier Luca Lanzi (Taken from ChengXiang Zhai, CS 397 – Fall 2003)
  • 5. Prof. Pier Luca Lanzi (Taken from ChengXiang Zhai, CS 397 – Fall 2003)
  • 6. Prof. Pier Luca Lanzi Natural Language Processing (NLP)
  • 7. Prof. Pier Luca Lanzi Why Natural Language Processing? Gov. Schwarzenegger helps inaugurate pricey new bridge approach The Associated Press Article Launched: 04/11/2008 01:40:31 PM PDT SAN FRANCISCO—It briefly looked like a scene out of a "Terminator" movie, with Governor Arnold Schwarzenegger standing in the middle of San Francisco wielding a blow-torch in his hands. Actually, the governor was just helping to inaugurate a new approach to the San Francisco-Oakland Bay Bridge. Caltrans thinks the new approach will make it faster for commuters to get on the bridge from the San Francisco side. The new section of the highway is scheduled to open tomorrow morning and cost 429 million dollars to construct. • Schwarzenegger • Bridge • Caltrans • Governor • Scene • Terminator • Does the article talk about entertainment or politics? • Using just the words (bag of words model) has severe limitations 7
  • 8. Prof. Pier Luca Lanzi Natural Language Processing 8 A dog is chasing a boy on the playground Det Noun Aux Verb Det Noun Prep Det Noun Noun Phrase Complex Verb Noun Phrase Noun Phrase Prep Phrase Verb Phrase Verb Phrase Sentence Dog(d1). Boy(b1). Playground(p1). Chasing(d1,b1,p1). Semantic analysis Lexical analysis (part-of-speech tagging) Syntactic analysis (Parsing) A person saying this may be reminding another person to get the dog back… Pragmatic analysis (speech act) Scared(x) if Chasing(_,x,_). + Scared(b1) Inference (Taken from ChengXiang Zhai, CS 397cxz – Fall 2003)
  • 9. Prof. Pier Luca Lanzi Natural Language Processing is Difficult! • Natural language is designed to make human communication efficient. As a result, §We omit a lot of common sense knowledge, which we assume the hearer/reader possesses. §We keep a lot of ambiguities, which we assume the hearer/reader knows how to resolve. • This makes every step of NLP very difficult §Ambiguity §Common sense reasoning 9
  • 10. Prof. Pier Luca Lanzi What are the Difficulties? • Word-level Ambiguity §“design” can be a verb or a noun §“root” has multiple meanings • Syntactic Ambiguity §Natural language processing §A man saw a boy with a telescope • Presupposition §“He has quit smoking” implies he smoked • Text Mining NLP Approach §Locate promising fragments using fast methods (bag-of-tokens) §Only apply slow NLP techniques to promising fragments 10
  • 12. Prof. Pier Luca Lanzi State of the Art? A dog is chasing a boy on the playground Det Noun Aux Verb Det Noun Prep Det Noun Noun Phrase Complex Verb Noun Phrase Noun Phrase Prep Phrase Verb Phrase Verb Phrase Sentence Semantic analysis (some aspects) Entity-Relation Extraction Word sense disambiguation Sentiment analysis … Inference? POS Tagging 97% Parsing > 90% (Taken from ChengXiang Zhai, CS 397cxz – Fall 2003) 12 Speech act?
  • 13. Prof. Pier Luca Lanzi What are the Open Problems? • 100% POS tagging §“he turned off the highway” vs “he turned off the light” • General complete parsing §“a man saw a boy with a telescope” • Precise deep semantic analysis • Robust and general NLP methods tend to be shallow while deep understanding does not scale up easily 13
  • 14. Prof. Pier Luca Lanzi 14
  • 15. Prof. Pier Luca Lanzi Information Retrieval
  • 16. Prof. Pier Luca Lanzi Information Retrieval Systems • Information retrieval deals with the problem of locating relevant documents with respect to the user input or preference • Typical systems §Online library catalogs §Online document management systems • Typical issues §Management of unstructured documents §Approximate search §Relevance 16
  • 17. Prof. Pier Luca Lanzi Two Modes of Text Access • Pull Mode (search engines) §Users take initiative §Ad hoc information need • Push Mode (recommender systems) §Systems take initiative §Stable information need or system has good knowledge about a user’s need 17
  • 18. Prof. Pier Luca Lanzi precision = |{Relevant} ∩ {Retrieved}| / |{Retrieved}|, recall = |{Relevant} ∩ {Retrieved}| / |{Relevant}|, F-score = (recall × precision) / ((recall + precision) / 2). (Venn diagram: Relevant, Retrieved, and their intersection R&R within All Documents)
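The measures above can be computed directly from sets of document IDs; a minimal sketch (the IDs are made up for illustration):

```python
# Precision, recall, and F-score from sets of document IDs (made-up data).
relevant = {1, 2, 3, 4}
retrieved = {2, 3, 5}

hits = relevant & retrieved                      # relevant AND retrieved
precision = len(hits) / len(retrieved)           # 2/3
recall = len(hits) / len(relevant)               # 2/4
# F-score as on the slide: ratio of product to arithmetic mean,
# i.e. the harmonic mean of precision and recall.
f_score = (recall * precision) / ((recall + precision) / 2)
```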
  • 19. Prof. Pier Luca Lanzi Text Retrieval Methods • Document Selection (keyword-based retrieval) §Query defines a set of requisites §Only the documents that satisfy the query are returned §A typical approach is the Boolean Retrieval Model • Document Ranking (similarity-based retrieval) §Documents are ranked on the basis of their relevance with respect to the user query §For each document a “degree of relevance” is computed with respect to the query §A typical approach is the Vector Space Model 19
  • 20. Prof. Pier Luca Lanzi • A document and a query are represented as vectors in high- dimensional space corresponding to all the keywords • Relevance is measured with an appropriate similarity measure defined over the vector space • Issues §How to select keywords to capture “basic concepts” ? §How to assign weights to each term? §How to measure the similarity? Vector Space Model 20
  • 21. Prof. Pier Luca Lanzi Bag of Words
  • 22. Prof. Pier Luca Lanzi Boolean Retrieval Model • A query is composed of keywords linked by the three logical connectives: not, and, or • For example, “car and repair”, “plane or airplane” • In the Boolean model each document is either relevant or non-relevant, depending on whether or not it matches the query • Limitations §Generally not suitable to satisfy information need §Useful only in very specific domains where users have deep expertise 22
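A Boolean query can be evaluated with plain set operations over an inverted index; a sketch with a made-up index:

```python
# Inverted index: each term maps to the set of IDs of documents
# containing it. Documents and postings are illustrative.
index = {
    "car": {1, 3},
    "repair": {3, 4},
    "plane": {2},
    "airplane": {5},
}
all_docs = {1, 2, 3, 4, 5}

and_result = index["car"] & index["repair"]     # "car AND repair"
or_result = index["plane"] | index["airplane"]  # "plane OR airplane"
not_result = all_docs - index["car"]            # "NOT car"
```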
  • 24. Prof. Pier Luca Lanzi Java Microsoft Starbucks Query DOC d = (0,1,1) q = (1,1,1)
  • 25. Prof. Pier Luca Lanzi Document Ranking • Query q = q1, …, qm where qi is a word • Document d = d1, …, dn where di is a word (bag of words model) • Ranking function f(q, d) which returns a real value • A good ranking function should rank relevant documents on top of non-relevant ones • The key challenge is how to measure the likelihood that document d is relevant to query q • The retrieval model gives a formalization of relevance in computational terms 25
  • 26. Prof. Pier Luca Lanzi Retrieval Models • Similarity-based models: f(q,d) = similarity(q,d) §Vector space model • Probabilistic models: f(q,d) = p(R=1|d,q) §Classic probabilistic model §Language model §Divergence-from-randomness model • Probabilistic inference model: f(q,d) = p(d->q) • Axiomatic model: f(q,d) must satisfy a set of constraints 26
  • 27. Prof. Pier Luca Lanzi Basic Vector Space Model (VSM)
  • 28. Prof. Pier Luca Lanzi How Would You Rank These Docs?
  • 29. Prof. Pier Luca Lanzi Example with Basic VSM
  • 30. Prof. Pier Luca Lanzi Basic VSM returns the number of distinct query words matched in d.
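The basic VSM score described above can be sketched as follows (the example documents are illustrative):

```python
def basic_vsm_score(query, doc):
    """Count distinct query words that appear in the document:
    the bit-vector dot product of the basic vector space model."""
    return len(set(query.lower().split()) & set(doc.lower().split()))

docs = [
    "news about presidential campaign",
    "news about organic food campaign",
    "news of presidential campaign",
]
query = "news about presidential campaign"
ranking = sorted(docs, key=lambda d: basic_vsm_score(query, d), reverse=True)
```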
  • 31. Prof. Pier Luca Lanzi Two Problems of Basic VSM
  • 32. Prof. Pier Luca Lanzi Term Frequency Weighting (TFW)
  • 33. Prof. Pier Luca Lanzi Term Frequency Weighting (TFW)
  • 34. Prof. Pier Luca Lanzi Ranking using TFW
  • 35. Prof. Pier Luca Lanzi “presidential” vs “about”
  • 36. Prof. Pier Luca Lanzi Adding Inverse Document Frequency
  • 37. Prof. Pier Luca Lanzi Inverse Document Frequency (K -> document frequency)
  • 38. Prof. Pier Luca Lanzi “presidential” vs “about”
  • 39. Prof. Pier Luca Lanzi Inverse Document Frequency Ranking
  • 40. Prof. Pier Luca Lanzi Combining Term Frequency (TF) with Inverse Document Frequency Ranking
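One common way to combine the two weights, sketched here with raw TF and a smoothed IDF (many weighting variants exist; this is illustrative, not the lecture's exact formula):

```python
import math
from collections import Counter

def tf_idf_score(query, doc, corpus):
    """Score doc against query: sum over query terms of raw term
    frequency in doc times a (smoothed) inverse document frequency."""
    n = len(corpus)
    tf = Counter(doc.split())
    score = 0.0
    for term in query.split():
        df = sum(1 for d in corpus if term in d.split())
        if df and tf[term]:
            score += tf[term] * math.log((n + 1) / df)
    return score

corpus = [
    "news about presidential campaign",
    "news about organic food campaign",
    "news of presidential campaign",
]
```

Rarer terms receive a larger IDF, so a match on "presidential" counts more than a match on "news", which occurs in every document.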
  • 41. Prof. Pier Luca Lanzi TF Transformation
  • 42. Prof. Pier Luca Lanzi BM25 Transformation
  • 43. Prof. Pier Luca Lanzi Ranking with BM25-TF
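The BM25 TF transformation replaces raw term frequency with a saturating function bounded by k + 1; a sketch:

```python
def bm25_tf(tf, k=1.2):
    """BM25 term-frequency transformation: grows with tf but
    saturates, approaching k + 1 as tf goes to infinity."""
    return (k + 1) * tf / (tf + k)
```

With the default k, the first occurrence of a term contributes 1.0 and further repetitions add less and less, which prevents a single heavily repeated term from dominating the score.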
  • 44. Prof. Pier Luca Lanzi What about Document Length?
  • 45. Prof. Pier Luca Lanzi Document Length Normalization • Penalize a long doc with a doc length normalizer §Long doc has a better chance to match any query §Need to avoid over-penalization • A document is long because §it uses more words, then it should be penalized more §it has more content, then it should be penalized less • Pivoted length normalizer §It uses average doc length as “pivot” §The normalizer is 1 if the doc length (|d|) is equal to the average doc length (avdl) 45
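The pivoted length normalizer described above can be sketched as follows (b is a free parameter in [0, 1] controlling the amount of penalization):

```python
def pivoted_length_norm(doc_len, avdl, b=0.75):
    """Pivoted document length normalizer: equals 1 when the document
    has average length, >1 for longer docs, <1 for shorter ones."""
    return 1 - b + b * doc_len / avdl
```

Dividing a term's TF weight by this normalizer penalizes above-average-length documents and slightly boosts short ones; with b = 0 no normalization is applied.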
  • 46. Prof. Pier Luca Lanzi Pivot Length Normalizer
  • 47. Prof. Pier Luca Lanzi State of the Art VSM Ranking Function • Pivoted Length Normalization VSM [Singhal et al 96] • BM25/Okapi [Robertson & Walker 94] 47
  • 48. Prof. Pier Luca Lanzi Preprocessing
  • 49. Prof. Pier Luca Lanzi Keyword Selection • Text is preprocessed through tokenization • A stop word list and word stemming are used to isolate and thus identify significant keywords • Stop word elimination §Stop words are elements that are considered uninteresting with respect to the retrieval and thus are eliminated §For instance, “a”, “the”, “always”, “along” • Word stemming §Different words that share a common stem are simplified and replaced by that stem §For instance, “computer”, “computing”, “computerize” are replaced by “comput” 50
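Stop-word elimination and stemming can be sketched as below; the stop list and the suffix-stripping stemmer are deliberately tiny toys (a real system would use a full stop list and, e.g., the Porter stemmer):

```python
import re

STOP_WORDS = {"a", "the", "always", "along", "is", "to", "and", "of"}

def naive_stem(word):
    """Toy suffix-stripping stemmer, for illustration only."""
    for suffix in ("erizes", "erize", "ing", "ers", "er", "s"):
        if word.endswith(suffix) and len(word) > len(suffix) + 2:
            return word[: -len(suffix)]
    return word

def preprocess(text):
    """Tokenize, drop stop words, stem the rest."""
    tokens = re.findall(r"[a-z]+", text.lower())
    return [naive_stem(t) for t in tokens if t not in STOP_WORDS]
```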
  • 50. Prof. Pier Luca Lanzi Dimensionality Reduction • The approaches presented so far involve a high dimensional space (huge number of keywords) §Computationally expensive §Difficult to deal with synonymy and polysemy problems §“vehicle” is similar to “car” §“mining” has different meanings in different contexts • Dimensionality reduction techniques §Latent Semantic Indexing (LSI) §Locality Preserving Indexing (LPI) §Probabilistic Latent Semantic Indexing (PLSI) 51
  • 51. Prof. Pier Luca Lanzi Latent Semantic Indexing (LSI) • Let xi be vectors representing documents and X (the term frequency matrix) the whole set of documents: • Let us use the singular value decomposition (SVD) to reduce the size of the frequency table: • Approximate X with Xk that is obtained from the first k vectors of U • It can be shown that such transformation minimizes the error for the reconstruction of X 52
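The k = 1 case of the truncated SVD behind LSI can be sketched in pure Python with power iteration (production code would simply call an SVD routine such as numpy.linalg.svd; this sketch only illustrates the idea of a low-rank approximation):

```python
import random

def rank1_approx(X, iters=200):
    """Best rank-1 approximation of a term-document matrix via power
    iteration: the k = 1 case of the truncated SVD used by LSI."""
    m, n = len(X), len(X[0])
    v = [random.random() + 0.1 for _ in range(n)]
    for _ in range(iters):
        # One step of power iteration: u = X v, then v = X^T u, normalized.
        u = [sum(X[i][j] * v[j] for j in range(n)) for i in range(m)]
        v = [sum(X[i][j] * u[i] for i in range(m)) for j in range(n)]
        norm = sum(x * x for x in v) ** 0.5 or 1.0
        v = [x / norm for x in v]
    u = [sum(X[i][j] * v[j] for j in range(n)) for i in range(m)]
    # Reconstruction is the outer product u v^T.
    return [[u[i] * v[j] for j in range(n)] for i in range(m)]
```

For a matrix that is already rank 1, the approximation reconstructs it (up to floating-point error); for a general matrix it minimizes the reconstruction error among all rank-1 matrices.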
  • 52. Prof. Pier Luca Lanzi Text Classification
  • 53. Prof. Pier Luca Lanzi Document Classification • Solves the problem of automatically labeling text documents on the basis of §Topic §Style §Purpose • Usual classification techniques can be used to learn from a training set of manually labeled documents • Which features? There can be thousands of keywords… • Major approaches §Similarity-based §Dimensionality reduction §Naïve Bayes text classifiers 54
  • 54. Prof. Pier Luca Lanzi Similarity-based Text Classifiers • Exploits Information Retrieval and the k-nearest-neighbor classifier §For a new document to classify, the k most similar documents in the training set are retrieved §Documents are classified on the basis of the class distribution among the k documents retrieved using the majority vote or a weighted vote §Tuning k is very important to achieve a good performance • Limitations §Space overhead to store all the documents in the training set §Time overhead to retrieve the similar documents 55
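A minimal sketch of the similarity-based classifier, using the number of shared distinct words as the similarity and majority voting (the training data is made up for illustration):

```python
from collections import Counter

def knn_classify(doc_tokens, training, k=3):
    """k-nearest-neighbor text classification sketch.
    `training` is a list of (token_set, label) pairs; similarity is
    word overlap, and the class is decided by majority vote."""
    neighbors = sorted(
        training,
        key=lambda item: len(item[0] & set(doc_tokens)),
        reverse=True,
    )[:k]
    votes = Counter(label for _, label in neighbors)
    return votes.most_common(1)[0][0]
```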
  • 55. Prof. Pier Luca Lanzi Dimensionality Reduction for Text Classification • As in the Vector Space Model, the goal is to reduce the number of features to represent text • Usual dimensionality reduction approaches in Information Retrieval are based on the distribution of keywords among the whole documents database • In text classification it is important to consider also the correlation between keywords and classes §Rare keywords have a high TF-IDF but might be uniformly distributed among classes §LSA and LPA do not take into account class distributions • Usual classification techniques that can then be applied on the reduced feature space include SVMs and Naïve Bayes classifiers 56
  • 56. Prof. Pier Luca Lanzi Naïve Bayes for Text Classification • Document categories {C1 , …, Cn} • Document to classify D • Probabilistic model: P(Ci | D) = P(D | Ci) P(Ci) / P(D) • We choose the class C* that maximizes P(Ci | D) • Issues §Which features? §How to compute the probabilities? 57
  • 57. Prof. Pier Luca Lanzi Naïve Bayes for Text Classification • Features can be simply defined as the words in the document • Let ai be a keyword in the doc, and wj a word in the vocabulary, we get: Example H={like,dislike} D= “Our approach to representing arbitrary text documents is disturbingly simple” 58
  • 58. Prof. Pier Luca Lanzi Naïve Bayes for Text Classification • Features can be simply defined as the words in the document • Let ai be a keyword in the doc, and wj a word in the vocabulary • Assumptions §Keyword distributions are inter-independent §Keyword distributions are order-independent 59
  • 59. Prof. Pier Luca Lanzi Naïve Bayes for Text Classification • How to compute probabilities? § Simply counting the occurrences may lead to wrong results when probabilities are small • M-estimate approach adapted for text: § Nc is the whole number of word positions in documents of class C § Nc,k is the number of occurrences of wk in documents of class C § |Vocabulary| is the number of distinct words in training set § Uniform priors are assumed 60
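The smoothed estimate above, with uniform priors and log-probabilities to avoid underflow, can be sketched as follows (the tiny training set is made up for illustration):

```python
import math
from collections import Counter

def train_nb(docs_by_class):
    """Multinomial Naive Bayes with the smoothed estimate from the slide:
    P(w|C) = (N_{C,w} + 1) / (N_C + |Vocabulary|)."""
    vocab = {w for docs in docs_by_class.values() for d in docs for w in d}
    model = {}
    for c, docs in docs_by_class.items():
        counts = Counter(w for d in docs for w in d)
        total = sum(counts.values())  # N_C: word positions in class C
        model[c] = {w: (counts[w] + 1) / (total + len(vocab)) for w in vocab}
    return model

def classify_nb(model, doc):
    """Pick the class maximizing the sum of log word probabilities
    (uniform priors, as assumed on the slide)."""
    return max(
        model,
        key=lambda c: sum(math.log(model[c].get(w, 1e-9)) for w in doc),
    )
```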
  • 60. Prof. Pier Luca Lanzi Naïve Bayes for Text Classification • Final classification is performed by picking the most probable class • Despite its simplicity, Naïve Bayes classifiers work very well in practice • Applications §Newsgroup post classification §NewsWeeder (news recommender)
  • 61. Prof. Pier Luca Lanzi Clustering Text
  • 62. Prof. Pier Luca Lanzi Text Clustering • It involves the same steps as other text mining algorithms to preprocess the text data into an adequate representation §Tokenizing and stemming each synopsis §Transforming the corpus into vector space using TF-IDF §Calculating the cosine distance between each pair of documents as a measure of similarity §Clustering the documents using the similarity measure • Everything works as with the usual data except that text requires rather long preprocessing. 63
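The cosine similarity step can be sketched over bag-of-words vectors; cosine distance is then 1 − similarity and can feed any clustering algorithm:

```python
import math
from collections import Counter

def cosine_similarity(a_tokens, b_tokens):
    """Cosine similarity between two documents represented as
    bag-of-words vectors (token counts)."""
    a, b = Counter(a_tokens), Counter(b_tokens)
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0
```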
  • 64. Prof. Pier Luca Lanzi Run the KNIME workflow and Python notebook for the lecture