1. Represents text documents as graph-of-words and extracts subgraph features through frequent subgraph mining, casting text categorization as a graph classification problem.
2. Uses the gSpan algorithm to efficiently mine frequent subgraphs from the graphs-of-words and selects the optimal minimum support threshold using the elbow method.
3. Evaluates the approach on four datasets, improving accuracy over bag-of-words models by extracting long-distance n-gram features through subgraph mining.
2. Outline
Section 1 Introduction
Section 2 Review of the related work
Section 3 Preliminary concepts
Section 4 Proposed approaches
Section 5 Experimental evaluation
Section 6 Conclusion
References
3. 1. What is text mining?
2. Bag-of-words and its issues
3. Graph-of-words - A new approach
Introduction
4. Introduction
What is text mining?
Search engines
Understand users’ queries, e.g. “What is Google?”
Find matching websites or documents (ranking).
Product recommendation
Understand product descriptions.
Understand product reviews.
5. Introduction
Bag-of-words and its issues
Definition
A text (such as a sentence or a document) is represented as the bag (multiset)
of its words.
6. Introduction
Bag-of-words and its issues
Example
“He likes watching action movies, she likes watching romantic movies”
⇒ [ “He”, “likes”, “watching”, “action”, “movies”, “she”, “likes”, “watching”, “romantic”, “movies” ].
The sentence has 10 tokens but only 7 distinct words; using the indexes of the vocabulary, it can be represented by a 7-entry count vector: [ 1, 2, 2, 1, 2, 1, 1 ]
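As a quick cross-check (not part of the original slides), the same kind of count vector can be reproduced with scikit-learn's CountVectorizer; lowercase=False is an assumption here so that “He” is kept as written:

```python
# Minimal bag-of-words sketch with scikit-learn's CountVectorizer.
from sklearn.feature_extraction.text import CountVectorizer

sentence = "He likes watching action movies, she likes watching romantic movies"

# lowercase=False keeps "He" as a distinct token, matching the slide's example
vectorizer = CountVectorizer(lowercase=False)
counts = vectorizer.fit_transform([sentence])

print(vectorizer.get_feature_names_out())  # 7 distinct words (alphabetical order)
print(counts.toarray()[0])                 # their counts, e.g. "likes" -> 2
```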
7. Introduction
Bag-of-words and its issues
Problems
There are millions of n-gram features when dealing with thousands of news articles, yet only a few hundred are actually present in each article, and there are only tens of class labels.
N-grams fail to capture word inversion and subset matching (e.g., “article about news” vs. “news article”).
8. Introduction
Graph-of-words - A new approach
Consider the task of text categorization as a graph classification problem.
Represent textual documents as graph-of-words instead of traditional n-gram bag-of-words.
Extract more discriminative features that correspond to long-distance n-grams through frequent subgraph mining.
9. Introduction
Graph-of-words - A new approach
Summary:
1. Construct a graph-of-words for each document in the collection
2. For each graph from step 1, extract its main core (for cost-effectiveness)
3. Find all frequent subgraphs of size n in the set of graphs obtained in step 2
4. Remove isomorphic subgraphs to reduce the total number of features
5. Finally, extract n-gram features on the remaining text
10. ● Subgraph feature mining on graph-of-words representations by Markov et al. (2007)
● Kudo and Matsumoto (2004), Matsumoto et al. (2005), Jiang et al. (2010) and Arora et al. (2010) suggested using parse and dependency tree representations for text categorization, but the support value (i.e., the total number of features) was not discussed and could potentially lead to millions of subgraphs on standard datasets.
Review of the related work
12. Definition
An undirected graph G = (V, E), where
V is the set of vertices, which represent the unique terms of the document
E is the set of edges, which represent co-occurrences between the terms within a fixed-size sliding window
Preliminary Concepts
Graph-of-words model
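As an illustration (not from the original slides), a minimal graph-of-words construction could look like this; networkx is assumed available, and the function name and window size are illustrative choices rather than the paper's exact settings:

```python
# Graph-of-words sketch: nodes are unique terms, undirected edges link terms
# that co-occur within a fixed-size sliding window over the token stream.
import networkx as nx

def graph_of_words(tokens, window_size=4):
    g = nx.Graph()
    g.add_nodes_from(set(tokens))
    for i, term in enumerate(tokens):
        # link the current term to the next window_size - 1 terms
        for other in tokens[i + 1 : i + window_size]:
            if other != term:
                g.add_edge(term, other)
    return g

g = graph_of_words("to be or not to be".split(), window_size=3)
print(sorted(g.edges()))
```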
13. Definition
Given two graphs G and H, an isomorphism of G and H is a bijection f between the vertex sets of G and H such that any two vertices u and v of G are adjacent in G if and only if f(u) and f(v) are adjacent in H.
Example
Preliminary Concepts
Subgraph isomorphism
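A small sketch of both checks with networkx (assumed available): is_isomorphic tests whole-graph isomorphism, while GraphMatcher's subgraph test looks for a subgraph of the first graph that is isomorphic to the second:

```python
import networkx as nx
from networkx.algorithms.isomorphism import GraphMatcher

G = nx.path_graph(4)                          # 0 - 1 - 2 - 3
H = nx.relabel_nodes(G, {0: "a", 1: "b", 2: "c", 3: "d"})

# Graph isomorphism: same structure under some bijection f of the vertices
print(nx.is_isomorphic(G, H))                 # True

# Subgraph isomorphism: is a 2-edge path contained somewhere in G?
P = nx.path_graph(3)
print(GraphMatcher(G, P).subgraph_is_isomorphic())  # True
```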
14. Definition
A subgraph H = (V’, E’) induced by the subset of vertices V’ ⊆ V and the subset of edges E’ ⊆ E of a graph G = (V, E) is called a k-core, where k is an integer, if and only if H is the maximal subgraph satisfying ∀ v ∈ V’, deg(v) >= k.
k-core: a maximal connected subgraph whose vertices all have degree at least k within that subgraph.
main core: the k-core with the largest k.
Preliminary Concepts
K-core and main core
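A minimal main-core sketch, assuming networkx; its core_number and k_core helpers implement exactly the peeling described by the definition above:

```python
import networkx as nx

# triangle a-b-c plus a pendant vertex d attached to c
g = nx.Graph([("a", "b"), ("a", "c"), ("b", "c"), ("c", "d")])

core_numbers = nx.core_number(g)     # largest k such that each node is in a k-core
k_max = max(core_numbers.values())   # k of the main core
main_core = nx.k_core(g, k=k_max)    # the main core itself

print(k_max)                         # 2
print(sorted(main_core.nodes()))     # ['a', 'b', 'c']: d (degree 1) is peeled off
```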
16. 1. Unsupervised feature mining using gSpan
2. Find frequent subgraphs using gSpan
3. Unsupervised support selection
4. Considered classifiers
5. Multiclass scenario
6. Main core mining using gSpan
Proposed approaches
17. Idea
● Consider the task of text categorization as a graph classification problem
● Represent textual documents as graph-of-words and then extract subgraph features to train a graph classifier
● Each document is a separate graph-of-words and the collection of documents thus corresponds to a set of graphs
Proposed approaches
Unsupervised feature mining using gSpan
18. Given
● D = {G0, G1, G2, ..., GN} a graph dataset
● support(g): the number of graphs in D of which g is a subgraph
● minSup: the minimum support threshold
Problem
Find all subgraphs g such that support(g) >= minSup
Proposed approaches
Find frequent subgraphs using gSpan
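A naive sketch of the support computation (this is the problem statement above, not gSpan itself); networkx is assumed, and support is an illustrative helper name:

```python
import networkx as nx
from networkx.algorithms.isomorphism import GraphMatcher

def support(g, dataset):
    # number of graphs in the dataset containing g as a subgraph
    return sum(GraphMatcher(big, g).subgraph_is_isomorphic() for big in dataset)

D = [nx.path_graph(3), nx.cycle_graph(4), nx.complete_graph(3)]
g = nx.path_graph(2)   # a single edge

min_sup = 2
print(support(g, D))              # 3: an edge occurs in every graph of D
print(support(g, D) >= min_sup)   # True: g is frequent
```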
19. Frequent subgraph: a subgraph that occurs in multiple graphs of D (at least minSup of them)
Proposed approaches
Find frequent subgraphs using gSpan
20. Baseline solution
● Enumerate all the subgraphs and test for isomorphism throughout the collection => very expensive
Proposed solution
● Use gSpan (graph-based Substructure pattern mining)
Proposed approaches
Find frequent subgraphs using gSpan
21. gSpan Idea:
1. For each graph, build a lexicographic order of all the edges using depth-first search (DFS) traversal
2. Assign to each of them a unique minimum DFS code.
3. Based on all these DFS codes, a hierarchical search tree is constructed at the collection level.
4. By pre-order traversal of this tree, gSpan discovers all frequent subgraphs with the required support.
Proposed approaches
Find frequent subgraphs using gSpan
22. Note:
● Given two graphs G and G’,
G is isomorphic to G’ if and only if minDFS(G) = minDFS(G’)
A lower support results in:
1. more features
2. longer mining
3. longer feature vector generation
4. longer learning.
Proposed approaches
Find frequent subgraphs using gSpan
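To see why the minDFS equality works, note that minDFS(G) is a canonical form: an encoding that is equal for two graphs exactly when they are isomorphic. The brute-force canonical code below (a hypothetical stand-in for gSpan's minimum DFS code, only usable on tiny unlabeled graphs) has the same property:

```python
from itertools import permutations
import networkx as nx

def canonical_code(g):
    # minimum, over all vertex orderings, of the sorted relabeled edge list;
    # like minDFS, two graphs share this code iff they are isomorphic
    best = None
    for perm in permutations(g.nodes()):
        index = {v: i for i, v in enumerate(perm)}
        code = tuple(sorted(tuple(sorted((index[u], index[v])))
                            for u, v in g.edges()))
        if best is None or code < best:
            best = code
    return best

G = nx.path_graph(3)
H = nx.relabel_nodes(G, {0: "x", 1: "y", 2: "z"})
print(canonical_code(G) == canonical_code(H))   # True: G and H are isomorphic
```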
23. Given
D = {G0, G1, G2, ..., GN} a graph dataset
Support(g) denotes the number of graphs (in D) in which g is a subgraph
minSup denotes the minimum support threshold
Proposed approaches
Unsupervised support selection (Select best minSup)
24. Situation
The classifier can only improve its goodness of fit with more features
=> It is likely that the lowest support will lead to the best test accuracy
As the support decreases, the number of features increases slowly up to a point where it increases exponentially
=> This makes both the feature vector generation and the learning expensive, especially with multiple classes.
Proposed approaches
Unsupervised support selection (Select best minSup)
26. Elbow method
Example: selecting the number of clusters in k-means clustering
Choose a number of clusters so that adding another cluster doesn’t give much better modeling of the data
Proposed approaches
Unsupervised support selection (Select best minSup)
27. Elbow method
In our case:
Choose a minSup so that decreasing this value by one unit will:
not give much better accuracy
but increase the number of features significantly
Proposed approaches
Unsupervised support selection (Select best minSup)
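A rough sketch of this heuristic with hypothetical feature counts (not the paper's numbers): scan candidate supports in decreasing order and keep the last one before the feature count explodes:

```python
# Candidate minSup values (descending) and hypothetical mined-feature counts.
supports = [10, 8, 6, 4, 2, 1]
n_features = [50, 80, 130, 300, 2000, 90000]

# Relative growth of the feature count at each step down in support;
# the "elbow" is the support just before the largest blow-up.
growth = [n_features[i + 1] / n_features[i] for i in range(len(supports) - 1)]
elbow = max(range(len(growth)), key=growth.__getitem__)
best_min_sup = supports[elbow]

print(best_min_sup)   # 2: going from minSup=2 to minSup=1 multiplies features by 45
```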
28. Standard baseline classifiers
K-nearest neighbors (kNN) (Larkey and Croft, 1996)
Naive Bayes (NB) (McCallum and Nigam, 1998)
Linear Support Vector Machines (SVM) (Joachims, 1998)
Proposed approaches
Considered classifiers
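A toy sketch of the linear SVM baseline on binary subgraph-presence vectors, assuming scikit-learn; the matrix and labels are placeholders, not data from the paper:

```python
import numpy as np
from sklearn.svm import LinearSVC

# rows = documents, columns = mined frequent subgraphs (1 if present in the doc)
X = np.array([[1, 0, 1, 1],
              [0, 1, 0, 1],
              [1, 1, 1, 0],
              [0, 0, 1, 1]])
y = np.array([0, 1, 0, 1])   # class labels

clf = LinearSVC().fit(X, y)
print(clf.predict([[1, 0, 1, 0]]))   # predicted class for a new document
```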
29. Problem
A single support value might lead to some classes generating a tremendous number of features (hundreds of thousands) and some others only a few (a few hundred subgraphs)
⇒ An extremely low support is then needed to include discriminative features for these minority classes
⇒ Resulting in an exponential number of features because of the majority classes.
Proposed approaches
Multiclass scenario
30. Solution
Mine frequent subgraphs per class using the same relative support (in %)
Then aggregate the per-class feature sets into a global one, at the cost of a supervised process (but one that still avoids cross-validation).
Proposed approaches
Multiclass scenario
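A hedged sketch of this per-class strategy; edge_miner is a deliberately trivial stand-in for a real gSpan call (it only mines single-edge “subgraphs”), and the relative support value is illustrative:

```python
from collections import Counter
import networkx as nx

def edge_miner(graphs, min_sup):
    # toy stand-in for gSpan: frequent "subgraphs" here are single edges
    counts = Counter()
    for g in graphs:
        counts.update({tuple(sorted(e)) for e in g.edges()})
    return [e for e, c in counts.items() if c >= min_sup]

def per_class_features(graphs_by_class, relative_support=0.5):
    # mine each class at the same relative support, then union the feature sets
    features = set()
    for graphs in graphs_by_class.values():
        min_sup = max(1, round(relative_support * len(graphs)))
        features |= set(edge_miner(graphs, min_sup))
    return features

docs = {"pos": [nx.path_graph(3), nx.path_graph(4)], "neg": [nx.cycle_graph(3)]}
print(per_class_features(docs))
```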
31. Problem
The number of features (subgraphs) to be extracted is very large when mining frequent subgraphs directly!
How can we extract discriminative features while maintaining word dependence and retaining as much classification information as possible?
Solution
Reduce the graphs’ size by keeping only the densest subgraphs (their main cores).
Proposed approaches
Main core mining using gSpan
33. 1. Datasets
2. Results
3. Unsupervised support selection
4. Distributions of mined n-grams
Experimental evaluation
34. Experimental evaluation
Datasets
● WebKB: 4 most frequent categories among labeled web pages from various CS departments (2,803 for training and 1,396 for test)
● R8: 8 most frequent categories of Reuters-21578, a set of labeled news articles from the 1987 Reuters newswire (5,485 for training and 2,189 for test)
● LingSpam: 2,893 emails classified as spam or legitimate messages (10 sets for 10-fold cross-validation)
● Amazon: 8,000 product reviews over four different sub-collections (books, DVDs, electronics and kitchen appliances) classified as positive or negative (1,600 for training and 400 for test)
35. Experimental evaluation
Datasets
● Multi-class document categorization: WebKB and R8
● Spam detection: LingSpam
● Opinion mining: Amazon
Together these cover all the main subtasks of text categorization.
36. Table 1: Total number of features (n-grams or subgraphs) vs. number of features present only in main
cores along with the reduction of the dimension of the feature space on all four datasets.
Experimental evaluation
Results
37. Table 2: Test accuracy and macro-average F1-score on four standard datasets. Bold font marks the best performance in a column; * indicates statistical significance at p < 0.05 using the micro sign test with respect to the SVM baseline of the same column. MC corresponds to unsupervised feature selection using the main core of each graph-of-words to extract n-gram and subgraph features. gSpan mining support values are 1.6% (WebKB), 7% (R8), 4% (LingSpam) and 0.5% (Amazon).
Experimental evaluation
Results
38. Figure 2: Distribution of non-zero n-gram feature values before and after unsupervised feature selection (main core retention) on the R8 dataset.
Experimental evaluation
Results
39. Figure 3: Number of subgraph features / test accuracy per support (%) on the WebKB (left) and R8 (right) datasets: in black, the support value selected via the elbow method; in red, the test accuracy of the SVM baseline.
Experimental evaluation
Unsupervised support selection
40. Figure 4: Distribution of n-grams (standard and long-distance ones) among all the features on the WebKB dataset.
Experimental evaluation
Distribution of mined n-grams
41. Figure 5: Distribution of n-grams (standard and long-distance ones) among the top 5% most discriminative features for SVM on the WebKB dataset.
Experimental evaluation
Distribution of mined n-grams
42. Conclusion
A new graph-of-words approach for text mining.
Considers text categorization as a graph classification problem.
Achieved:
Extraction of more discriminative features that correspond to long-distance n-grams through frequent subgraph mining
43. References
Text Categorization as a Graph Classification Problem (François Rousseau, Emmanouil Kiagias, Michalis Vazirgiannis)
http://www.aclweb.org/anthology/P15-1164
gSpan: Graph-Based Substructure Pattern Mining (Xifeng Yan and Jiawei Han)
http://cs.ucsb.edu/~xyan/papers/gSpan-short.pdf
Determining the number of clusters in a data set - The Elbow Method
https://en.wikipedia.org/wiki/Determining_the_number_of_clusters_in_a_data_set
Graph isomorphism
https://en.wikipedia.org/wiki/Graph_isomorphism