"Effective Extraction of Thematically Grouped Key Terms From Text". Presentation for AAAI-SSS-09: Social Semantic Web: Where Web 2.0 Meets Web 3.0:

1. Effective Extraction of Thematically Grouped Key Terms From Text
   Maria Grineva, Ph.D., research scientist at the Institute for System Programming of RAS
2. Outline
   - Key terms extraction: traditional approaches and applications
   - Using Wikipedia as a knowledge base for Natural Language Processing
   - Main techniques of our approach:
     - Wikipedia-based semantic relatedness
     - Network analysis algorithm to detect community structure in networks
   - Our method
   - Experimental evaluation
3. Key Terms Extraction
   - A basic step for various NLP tasks:
     - document classification
     - document clustering
     - text summarization
     - inferring a more general topic of a text document
   - A core task of Internet content-based advertising systems, such as Google AdSense and Yahoo! Contextual Match
4. Approaches to Key Terms Extraction
   - Based on statistical learning:
     - use, for example, a frequency criterion (TFxIDF model), keyphrase frequency, or the distance between terms normalized by the number of words in the document (KEA)
     - compute statistical features over the Wikipedia corpus (Wikify!)
     - require a training set
   - Based on analyzing syntactic or semantic term relatedness within a document:
     - compute semantic relatedness between terms (using, for example, Wikipedia)
     - model the document as a semantic graph of terms and apply graph analysis techniques to it (TextRank)
     - no training set required
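To make the statistical baseline concrete, here is a minimal TFxIDF sketch in Python. The tokenization and corpus handling are simplifying assumptions for illustration, not the KEA or Wikify! implementations, which add further features such as first-occurrence position.

```python
import math
from collections import Counter

def tfidf_scores(doc_tokens, corpus):
    """Score each term of a tokenized document by TFxIDF.

    doc_tokens: list of tokens for the target document.
    corpus: list of token lists used to estimate document frequency.
    """
    tf = Counter(doc_tokens)
    n_docs = len(corpus)
    scores = {}
    for term, count in tf.items():
        df = sum(1 for doc in corpus if term in doc)  # document frequency
        idf = math.log(n_docs / (1 + df))             # +1 avoids division by zero
        scores[term] = (count / len(doc_tokens)) * idf
    return scores
```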
5. Using Wikipedia as a Knowledge Base for Natural Language Processing
   - Wikipedia (www.wikipedia.org) is a free, open encyclopedia
     - Today Wikipedia is the biggest encyclopedia (more than 2.7 million articles in the English Wikipedia)
     - It is always up to date thanks to millions of editors around the world
     - It has a huge network of cross-references between articles, a large number of categories, redirect pages, and disambiguation pages => a rich resource for bootstrapping NLP and IR tasks
6. Basic Techniques of Our Method: Semantic Relatedness of Terms
   - Semantic relatedness assigns a score to a pair of terms that represents the strength of relatedness between them
   - It can be computed over a dictionary or thesaurus; we use Wikipedia
   - Wikipedia-based semantic relatedness for two terms can be computed using:
     - the links found within their corresponding Wikipedia articles
     - the Wikipedia category structure
     - the articles' textual content
7. Basic Techniques of Our Method: Semantic Relatedness of Terms (cont.)
   - We use the Dice measure for Wikipedia-based semantic relatedness
   - Denis Turdakov, Pavel Velikhov, "Semantic Relatedness Metric for Wikipedia Concepts Based on Link Analysis and its Application to Word Sense Disambiguation", SYRCoDIS, 2008
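The Dice coefficient itself is simple. A minimal sketch, assuming each article is represented by the set of titles it links to; the exact link sets used are defined in the Turdakov and Velikhov paper:

```python
def dice_relatedness(links_a, links_b):
    """Dice measure over the Wikipedia link sets of two articles:
    2 * |A & B| / (|A| + |B|), a score in [0, 1]."""
    if not links_a and not links_b:
        return 0.0
    return 2 * len(links_a & links_b) / (len(links_a) + len(links_b))
```

For example, `dice_relatedness({"Music", "Apple Inc."}, {"Apple Inc.", "IPod"})` returns 0.5: one shared link out of four total.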
8. Basic Techniques of Our Method: Detecting Community Structure in Networks
   - Community: a densely interconnected group of nodes in a network
   - Girvan-Newman algorithm for detecting community structure in networks:
     - betweenness: how much an edge lies "in between" different communities
     - modularity: a partition is a good one if there are many edges within communities and only a few between them
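A minimal sketch of this step using networkx, which ships both Girvan-Newman clustering and the modularity measure. Keeping the partition with the highest modularity is one common stopping rule, assumed here rather than taken from the slides:

```python
from networkx.algorithms import community

def detect_communities(graph):
    """Run Girvan-Newman (iteratively removing the edge with the highest
    betweenness) and return the partition that maximizes modularity."""
    best_partition, best_q = None, float("-inf")
    for partition in community.girvan_newman(graph):
        q = community.modularity(graph, partition)
        if q > best_q:
            best_partition, best_q = partition, q
    return [set(c) for c in best_partition], best_q
```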
9. Our Method
   1. Candidate terms extraction
   2. Word sense disambiguation
   3. Building the semantic graph
   4. Detecting community structure of the semantic graph
   5. Selecting valuable communities
10. Our Method: Candidate Terms Extraction
    - Goal: extract all terms from the document and, for each term, prepare a set of Wikipedia articles that can describe its meaning
    - Parse the input document and extract all possible n-grams
    - For each n-gram (plus its morphological variations), provide a set of Wikipedia article titles
      - "drinks", "drinking", "drink" => [Wikipedia:] Drink; Drinking
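A minimal sketch of the n-gram pass. The maximum n-gram length of 3 is an assumption, and the morphological normalization and the Wikipedia title index are left out:

```python
import re

def candidate_ngrams(text, max_n=3):
    """Extract all word n-grams up to max_n words as candidate terms.
    Each candidate would then be looked up (together with its
    morphological variants) in an index of Wikipedia article titles."""
    tokens = re.findall(r"[\w']+", text.lower())
    candidates = set()
    for n in range(1, max_n + 1):
        for i in range(len(tokens) - n + 1):
            candidates.add(" ".join(tokens[i:i + n]))
    return candidates
```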
11. Our Method: Word Sense Disambiguation
    - Goal: choose the most appropriate Wikipedia article from the set of candidate articles for each ambiguous term extracted in the previous step
    - Uses Wikipedia disambiguation and redirect pages to obtain candidate meanings of ambiguous terms
    - Denis Turdakov, Pavel Velikhov, "Semantic Relatedness Metric for Wikipedia Concepts Based on Link Analysis and its Application to Word Sense Disambiguation", SYRCoDIS, 2008
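The full disambiguation algorithm is given in the cited paper; the sketch below only illustrates the general idea under a simplifying assumption: for each ambiguous term, pick the candidate article most related to the candidate articles of the other terms in the document.

```python
def disambiguate(term_candidates, relatedness):
    """term_candidates: dict mapping each term to its list of candidate
    Wikipedia article titles (from disambiguation and redirect pages).
    relatedness: pairwise score function, e.g. dice_relatedness composed
    with a link-set lookup."""
    chosen = {}
    for term, candidates in term_candidates.items():
        context = [c for t, cands in term_candidates.items()
                   if t != term for c in cands]
        chosen[term] = max(candidates,
                           key=lambda a: sum(relatedness(a, b) for b in context))
    return chosen
```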
12. Our Method: Building the Semantic Graph
    - Goal: build the document's semantic graph using semantic relatedness between terms
    - Example: semantic graph built from the news article "Apple to Make ITunes More Accessible For the Blind"
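A minimal sketch of graph construction, assuming edges below a relatedness threshold are dropped; the threshold value is illustrative, as the slides do not specify one:

```python
import networkx as nx

def build_semantic_graph(terms, relatedness, threshold=0.1):
    """One node per disambiguated term; an edge between two terms
    whenever their semantic relatedness exceeds the threshold."""
    graph = nx.Graph()
    graph.add_nodes_from(terms)
    for i, a in enumerate(terms):
        for b in terms[i + 1:]:
            score = relatedness(a, b)
            if score > threshold:
                graph.add_edge(a, b, weight=score)
    return graph
```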
13. Our Method: Detecting Community Structure of the Semantic Graph
    - Dense communities represent the main topics of the document
    - Disambiguation mistakes become isolated vertices
    - Modularity for semantic graphs: 0.3-0.5
14. Our Method: Selecting Valuable Communities
    - Goal: rank term communities so that:
      - the highest-ranked communities contain key terms
      - the lowest-ranked communities contain unimportant terms and possible disambiguation mistakes
    - Use:
      - density of a community: the sum of the community's inner edges divided by the number of vertices in it
      - informativeness: the sum of the keyphraseness measure (a Wikipedia-based TFxIDF analogue) over the community's terms
    - Community rank: density * informativeness
    - Take the 2-3 communities with the highest rank (a sketch of this step follows below)
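A minimal sketch of the ranking step, assuming the graph and communities from the previous steps and a precomputed keyphraseness function; density follows the slide's definition (inner edges over vertices):

```python
def rank_communities(graph, communities, keyphraseness, top_k=3):
    """Rank communities by density * informativeness and keep the top few.

    keyphraseness: term -> float, the Wikipedia-based TFxIDF analogue
    (assumed to be precomputed from Wikipedia statistics)."""
    ranked = []
    for comm in communities:
        inner_edges = graph.subgraph(comm).number_of_edges()
        density = inner_edges / len(comm)
        informativeness = sum(keyphraseness(t) for t in comm)
        ranked.append((density * informativeness, comm))
    ranked.sort(key=lambda pair: pair[0], reverse=True)
    return [comm for _, comm in ranked[:top_k]]
```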
15. Advantages of the Method
    - No training. Instead of training the system with hand-created examples, we use semantic information derived from Wikipedia
    - Thematically grouped key terms. These significantly improve further inference of document topics using, for example, spreading activation over the Wikipedia category graph
    - High accuracy. Evaluated using human judgments (further in this presentation)
16. Experimental Evaluation: Creating the Test Set
    - 30 blog posts from technical blogs
    - 5 people took part in the evaluation and were asked to:
      - identify from 5 to 10 key terms for each blog post
      - ensure each key term is present in the blog post and identified using Wikipedia article names as the allowed vocabulary
      - choose key terms that cover the several main topics of the blog post
    - Eventually, a key term was considered valid if at least two of the participants identified the same key term for the blog post
17. Experimental Evaluation: Precision and Recall
    - 30 blog posts; 180 key terms extracted manually; 297 key terms extracted by our method; 123 of the manually extracted key terms were also extracted by our method
    - Recall: 68%
    - Precision: 41%
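These figures follow directly from the counts: recall = 123/180 ≈ 68% (the share of manually chosen key terms that the method also found) and precision = 123/297 ≈ 41% (the share of automatically extracted key terms that humans also chose).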
18. Experimental Evaluation: Revision of Precision and Recall
    - Our method typically extracts more related terms in each thematic group than a human does (possibly, our method produces better term coverage for a specific topic than an average human) => we revisited precision and recall
    - Each participant reviewed the key terms extracted automatically for every blog post and, where possible, extended their manually identified key terms with some from the automatically extracted set
    - Recall after revision: 73%
    - Precision after revision: 52%
19. Your Questions