Unknown Word 08


    1. Pattern Mining for Chinese Unknown Word Extraction. Second-year CS master's student 955202037, 楊傑程, 2008/08/12
    2. Outline <ul><li>Introduction </li></ul><ul><li>Related Works </li></ul><ul><li>Unknown Word Detection </li></ul><ul><li>Unknown Word Extraction </li></ul><ul><li>Experiments </li></ul><ul><li>Conclusions </li></ul>
    3. Introduction <ul><li>With the growing popularity of Chinese, Chinese text processing has become a popular research task in recent years. </li></ul><ul><li>Before the knowledge in Chinese texts can be utilized, some preprocessing work must be done, such as Chinese word segmentation. </li></ul><ul><ul><li>There are no blanks to mark word boundaries in Chinese texts. </li></ul></ul>
    4. Introduction <ul><li>Chinese word segmentation encounters two major problems: ambiguity and unknown words. </li></ul><ul><li>Ambiguity </li></ul><ul><ul><li>One un-segmented Chinese character string can have different segmentations depending on the context. </li></ul></ul><ul><ul><ul><li>Ex: the sentence “ 研究生命起源” can be segmented into </li></ul></ul></ul><ul><ul><ul><ul><li>“ 研究 生命 起源” or </li></ul></ul></ul></ul><ul><ul><ul><ul><li>“ 研究生 命 起源” . </li></ul></ul></ul></ul><ul><li>Unknown Words </li></ul><ul><ul><li>Also known as out-of-vocabulary (OOV) words; mostly unfamiliar proper nouns or newly coined words. </li></ul></ul><ul><ul><ul><li>Ex: the sentence “ 王義氣熱衷於研究生命” would be segmented into </li></ul></ul></ul><ul><ul><ul><ul><li>“ 王 義氣 熱衷 於 研究 生命” </li></ul></ul></ul></ul><ul><ul><ul><ul><li>because “ 王義氣” is an uncommon personal name that is not in the vocabulary. </li></ul></ul></ul></ul>
    5. Introduction- types of unknown words <ul><li>In this paper, we focus on the Chinese unknown word problem. </li></ul>Types of Chinese unknown words: Proper Names: Personal names (Ex: 王小明), Organization names (Ex: 華碩電腦), Abbreviations (Ex: 中油、中大); Derived Words (Ex: 總經理、電腦化); Compounds (Ex: 電腦桌、搜尋法); Numeric type compounds (Ex: 1986 年、19 巷)
    6. Introduction- unknown word identification <ul><li>Chinese word segmentation process: </li></ul><ul><li>Initial segmentation (dictionary assisted) </li></ul><ul><ul><li>Correctly identified words are called known words. </li></ul></ul><ul><ul><li>Unknown words are wrongly segmented into two or more parts. </li></ul></ul><ul><ul><ul><li>Ex: the personal name 王小明, after initial segmentation, </li></ul></ul></ul><ul><ul><ul><li>becomes 王 小 明 </li></ul></ul></ul><ul><li>Unknown word identification </li></ul><ul><ul><li>Characters belonging to one unknown word should be combined together. </li></ul></ul><ul><ul><ul><li>Ex: combine 王 小 明 together as 王小明 </li></ul></ul></ul>
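As background, the dictionary-assisted initial segmentation can be sketched with greedy forward maximum matching; the tiny lexicon and the maximum word length below are illustrative assumptions, not the actual Libtabe setup:

```python
def forward_max_match(text, lexicon, max_len=4):
    """Dictionary-assisted initial segmentation: at each position take the
    longest lexicon entry (up to max_len), falling back to one character."""
    out, i = [], 0
    while i < len(text):
        for n in range(min(max_len, len(text) - i), 0, -1):
            if n == 1 or text[i:i + n] in lexicon:
                out.append(text[i:i + n])
                i += n
                break
    return out

# The slides' ambiguity example: greedy matching picks one of the two splits.
lexicon = {"研究", "研究生", "生命", "起源"}
print(forward_max_match("研究生命起源", lexicon))  # ['研究生', '命', '起源']
```

A longest-match segmenter also illustrates why unknown words fragment: any string absent from the lexicon (e.g. 王義氣) falls through to single characters.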
    7. Introduction- unknown word identification <ul><li>How does unknown word identification work? </li></ul><ul><ul><li>A character can be a word ( 馬 ) or part of an unknown word ( 馬 + 英 + 九 ). </li></ul></ul><ul><li>Unknown word detection rules </li></ul><ul><ul><li>With the help of syntactic and context information </li></ul></ul><ul><li>Then we focus only on the detected morphemes and combine them. </li></ul>
    8. Introduction- detection and extraction <ul><li>In this paper, we apply continuity pattern mining to discover unknown word detection rules. </li></ul><ul><li>Then, we utilize syntactic, context, and heuristic statistical information to correctly extract unknown words. </li></ul>
    9. Introduction- applied techniques <ul><li>We adopt sequential data learning methods and machine learning algorithms to carry out unknown word extraction. </li></ul><ul><li>Our unknown word extraction method is a general method: </li></ul><ul><ul><li>it does not limit extraction to specific types of unknown words through hand-crafted rules. </li></ul></ul>
    10. Related Works- particular methods <ul><li>So far, research on Chinese word segmentation has lasted for a decade. </li></ul><ul><li>At first, researchers applied different kinds of information to discover particular kinds of unknown words. </li></ul><ul><ul><li>Patterns, frequency, context information </li></ul></ul><ul><ul><ul><li>Proper nouns ([Chen & Li, 1996], [Chen & Chen, 2000]) </li></ul></ul></ul>
    11. Related Works- general methods (Rule-based) <ul><li>Later, researchers began developing methods that extract all kinds of unknown words. </li></ul><ul><li>Rule-based detection: </li></ul><ul><ul><li>Distinguish monosyllabic words from monosyllabic morphemes ([Chen et al., 1998]) </li></ul></ul><ul><ul><li>Combine morphological rules with statistical rules to extract personal names, transliteration names, and compound nouns ([Chen et al., 2002]) <Precision: 89%, Recall: 68%> </li></ul></ul><ul><ul><li>Utilize the context-free grammar concept and propose a bottom-up merging algorithm </li></ul></ul><ul><ul><ul><li>Adopt morphological rules and general rules to extract all kinds of unknown words ([Ma et al., 2003]) <Precision: 76%, Recall: 57%> </li></ul></ul></ul>
    12. Related Works- general methods (Statistical Model-based) <ul><li>Statistical model-based detection: </li></ul><ul><ul><li>Apply machine learning algorithms and sequential supervised learning. </li></ul></ul><ul><ul><li>Direct method: </li></ul></ul><ul><ul><ul><li>Generate one corresponding statistical model </li></ul></ul></ul><ul><ul><ul><li>Initial segmentation and role tagging (HMM, CRF) </li></ul></ul></ul><ul><ul><ul><li>Chunking (SVM) </li></ul></ul></ul><ul><ul><ul><li>[Goh et al., 2006]: HMM+SVM, <Precision: 63.8%, Recall: 58.3%> </li></ul></ul></ul><ul><ul><ul><li>[Tsai et al., 2006]: CRF, <Recall: 73%> </li></ul></ul></ul>
    13. Related Works – Data <ul><li>Sequential supervised learning: </li></ul><ul><ul><li>Direct methods, like HMM and CRF </li></ul></ul><ul><ul><li>Indirect methods, like Sliding Window and Recurrent Sliding Windows </li></ul></ul><ul><ul><ul><li>Transform the sequential learning problem into a classification problem ([T. G. Dietterich, 2002]) </li></ul></ul></ul><ul><li>Imbalanced data problem </li></ul><ul><ul><li>([Seyda et al., 2007]) </li></ul></ul><ul><ul><ul><li>Select the most informative instances. </li></ul></ul></ul><ul><ul><ul><li>Randomly sample 59 instances in each iteration, then pick the instance closest to the hyper-plane. </li></ul></ul></ul>
    14. Unknown Word Detection & Extraction <ul><li>Our idea is similar to [Chen et al., 2002]: </li></ul><ul><ul><li>Unknown word detection </li></ul></ul><ul><ul><ul><li>Continuity pattern mining to derive detection rules. </li></ul></ul></ul><ul><ul><li>Unknown word extraction </li></ul></ul><ul><ul><ul><li>Utilize natural language information, content & context information, and statistical information to extract unknown words. </li></ul></ul></ul><ul><li>Sequential supervised learning methods (indirect) and machine-learning-based models are used. </li></ul>
    15. Unknown Word Detection <ul><li>We call unknown word detection the “Phase 1” process, and unknown word extraction the “Phase 2” process. </li></ul><ul><li>The following figure is the flow chart of unknown word detection (Phase 1). </li></ul>
    16. [Flow chart of Phase 1. Training: the training data (8/10 of the balanced corpus) undergo initial segmentation with a dictionary (Libtabe lexicon) and POS tagging (TnT); continuity pattern mining then derives the detection rules. Testing: un-segmented testing data (1/10 of the balanced corpus) undergo the same initial segmentation and POS tagging, then unknown word detection with the derived rules; the detected output is labeled as Phase 2 training data.]
    17. Unknown word detection- Pattern Mining <ul><li>Pattern mining: </li></ul><ul><ul><li>Sequential pattern: </li></ul></ul><ul><ul><ul><li>“ 因為… , 所以…” </li></ul></ul></ul><ul><ul><ul><li>Required items must match the pattern order. </li></ul></ul></ul><ul><ul><ul><li>Noise is allowed between the required items. </li></ul></ul></ul><ul><ul><li>Continuity pattern: </li></ul></ul><ul><ul><ul><li>“ 打球” => “ 打球” : match, “ 打籃球” : no match </li></ul></ul></ul><ul><ul><ul><li>Strict constraints on every item and its order. </li></ul></ul></ul><ul><ul><ul><li>Efficient pattern mining </li></ul></ul></ul>
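The contrast between the two pattern types can be sketched with two illustrative helper functions (not part of the system):

```python
def matches_continuity(seq, pattern):
    """Continuity pattern: pattern must appear as a contiguous run."""
    m = len(pattern)
    return any(seq[i:i + m] == pattern for i in range(len(seq) - m + 1))

def matches_sequential(seq, pattern):
    """Sequential pattern: items must appear in order, gaps allowed."""
    it = iter(seq)
    return all(item in it for item in pattern)

print(matches_continuity(list("打籃球"), list("打球")))  # False: 籃 intervenes
print(matches_sequential(list("打籃球"), list("打球")))  # True: order preserved
```

The stricter continuity definition is what makes the mining efficient: candidates only arise from adjacent items, so the search space stays small.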
    18. Unknown word detection- Continuity Pattern Mining <ul><li>Prowl </li></ul><ul><ul><li>([Huang et al., 2004]) </li></ul></ul><ul><ul><li>Starts with 1-frequent patterns. </li></ul></ul><ul><ul><li>Extends to a 2-pattern by joining two adjacent 1-frequent patterns, then evaluates its frequency. </li></ul></ul>
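A minimal sketch of this level-wise idea (my own simplified reading of the Prowl approach, not its actual implementation): count 1-frequent patterns, then extend frequent patterns one adjacent item at a time and keep only those meeting the support threshold.

```python
from collections import Counter

def frequent_continuity_patterns(sequences, min_support):
    """Level-wise mining: a (k+1)-pattern is counted only when it extends a
    frequent k-pattern with one adjacent frequent item (Prowl-style)."""
    counts = Counter(tuple(s[i:i + 1]) for s in sequences for i in range(len(s)))
    frequent = {p: c for p, c in counts.items() if c >= min_support}
    result = dict(frequent)
    k = 1
    while frequent:
        cand = Counter()
        for s in sequences:
            for i in range(len(s) - k):
                p = tuple(s[i:i + k + 1])
                # extend only frequent prefixes by a frequent single item
                if p[:-1] in frequent and p[-1:] in result:
                    cand[p] += 1
        frequent = {p: c for p, c in cand.items() if c >= min_support}
        result.update(frequent)
        k += 1
    return result

pats = frequent_continuity_patterns(["abab", "abc"], min_support=2)
print(pats[("a", "b")])  # 3: 'ab' occurs contiguously three times
```

In the actual system the "items" would be (character, POS) pairs from the encoded training corpus rather than single letters.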
    19. Encoding <ul><li>The original segmentation labels each word based on lexicon matching: known (Y) or unknown (N) </li></ul><ul><ul><li>“ 葡萄” , in the lexicon => “ 葡萄” is labeled as a known word (Y) </li></ul></ul><ul><ul><li>“ 葡萄皮” , not in the lexicon => “ 葡萄皮” is labeled as an unknown word (N) </li></ul></ul><ul><li>Encoding examples: </li></ul><ul><ul><li>葡萄 (Na)  葡 (Na) Y + 萄 (Na) Y </li></ul></ul><ul><ul><li>葡萄皮 (Na)  葡 (Na) N + 萄 (Na) N + 皮 (Na) N </li></ul></ul>
    20. Create detection rules <ul><li>This pattern rule means: when “ 葡 (Na), 萄 (Na)” appears, the probability that “ 葡 (Na)” is a known word (or an unknown word) is 0.5, since each case is observed once: </li></ul>( 葡 (Na) , 萄 Y) : 1 ( 葡 (Na) , 萄 N) : 1
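The rule statistics can be sketched as counting (context, label) pairs over the encoded training tokens. The toy counts below assume 葡萄 and 葡萄皮 each occur once, which is what yields a probability of 0.5 for this context:

```python
from collections import Counter

# Each training token is (character, POS, label), where label "Y" = known
# morpheme and "N" = part of an unknown word, following the encoding slide.
tokens = [("葡", "Na", "Y"), ("萄", "Na", "Y"),               # 葡萄 (known)
          ("葡", "Na", "N"), ("萄", "Na", "N"), ("皮", "Na", "N")]  # 葡萄皮

rule_counts = Counter(((ch, pos), label) for ch, pos, label in tokens)

def rule_accuracy(context, label):
    """Estimated probability that this (character, POS) context has the label."""
    total = sum(c for (ctx, _), c in rule_counts.items() if ctx == context)
    return rule_counts[(context, label)] / total if total else 0.0

print(rule_accuracy(("葡", "Na"), "N"))  # 0.5: one Y and one N occurrence
```

This accuracy is the quantity later used as the threshold when selecting detection rules in the experiments.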
    21. [Flow chart of Phase 2. The Phase 2 training data are stored as sequential data (term + term_attribute + POS). Sliding windows generate examples: positive examples come from finding BIES combinations; negative examples are learned and dropped. Term frequency per document is calculated, and three SVM models are trained (2-gram, 3-gram, and 4-gram). At testing time, the merging evaluation solves overlap and conflict (SVM) and calculates precision/recall against the correct segmentation of the 1/10 balanced corpus.]
    22. Unknown Word Extraction <ul><li>After initial segmentation and applying the detection rules, each term carries a “ term_attribute ” label. </li></ul><ul><li>The six “term_attributes” are as follows: </li></ul><ul><ul><li>ms() monosyllabic word , Ex: 你、我、他 </li></ul></ul><ul><ul><li>ms(?) morpheme of an unknown word , Ex: “ 王 ”、“ 小 ”、“ 明 ” in “ 王小明 ” </li></ul></ul><ul><ul><li>ds() double-syllabic word , Ex: 學校 </li></ul></ul><ul><ul><li>ps() poly-syllabic word , Ex: 筆記型電腦 </li></ul></ul><ul><ul><li>dot() punctuation , Ex: “ ,”、 “。”… </li></ul></ul><ul><ul><li>none() none of the information above, or a new term </li></ul></ul><ul><li>The targets of unknown word extraction are the terms whose “term_attribute” is labeled “ms(?)”. </li></ul>
    23. Positive / Negative Judgment <ul><li>A term should be either a word or part of an unknown word. Based upon the position of a term in the sentence, we use the following four position labels: </li></ul><ul><ul><li>B Begin ex: “ 王” of “ 王姿分” </li></ul></ul><ul><ul><li>I Intermediate ex: “ 姿” of “ 王姿分” </li></ul></ul><ul><ul><li>E End ex: “ 分” of “ 王姿分” </li></ul></ul><ul><ul><li>S Singular ex: “ 我”、“你” </li></ul></ul><ul><li>Find B + I* (zero or more) + E combinations (positive) </li></ul><ul><ul><li>王 (?) B </li></ul></ul><ul><ul><li>姿 (?) I </li></ul></ul><ul><ul><li>分 (?) E </li></ul></ul><ul><li>Combine them as a new word ( 王姿分 ) </li></ul><ul><li>Randomly pick the same number of positive examples as negative ones for the training model. </li></ul>
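Collecting the B + I* + E combinations can be sketched as a simple scan over position-labeled morphemes (an illustrative helper, not the system's code):

```python
def find_positive_spans(labeled):
    """Collect B I* E runs over detected morphemes; each run becomes a
    positive unknown-word candidate (e.g. 王 B + 姿 I + 分 E -> 王姿分)."""
    spans, buf = [], []
    for term, tag in labeled:
        if tag == "B":
            buf = [term]                 # start a new candidate
        elif tag == "I" and buf:
            buf.append(term)             # extend an open candidate
        elif tag == "E" and buf:
            buf.append(term)
            spans.append("".join(buf))   # close the candidate
            buf = []
        else:                            # "S" or a stray tag breaks the run
            buf = []
    return spans

print(find_positive_spans([("王", "B"), ("姿", "I"), ("分", "E"), ("我", "S")]))
```

Runs that never reach an E label (or start without a B) are simply discarded, which matches the positive/negative judgment above.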
    24. Data Processing- Sliding Window <ul><li>Sequential supervised learning </li></ul><ul><ul><li>Indirect method: transform sequential learning into classification learning </li></ul></ul><ul><ul><li>Sliding Window </li></ul></ul><ul><li>Each time we choose n+2 terms (n core terms plus prefix & suffix) as one data instance, then shift one token to the right to generate the next, and so on. </li></ul><ul><ul><li>Note: at least one ms(?) must exist among the n core terms. </li></ul></ul><ul><li>We offer three choices of n (2, 3, 4); namely, three SVM models that extract unknown words of different lengths. </li></ul><ul><li>We call them N-gram data (models). </li></ul>
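The window generation can be sketched as follows (a minimal illustration with hypothetical sentence-boundary padding tokens; the real feature encoding is richer):

```python
def sliding_windows(terms, attrs, n):
    """Generate n+2-term windows (prefix + n core terms + suffix), shifting
    one token at a time; keep only windows whose core contains an ms(?)."""
    padded = ["<s>"] + terms + ["</s>"]   # boundary padding (assumption)
    pattrs = [""] + attrs + [""]
    windows = []
    for i in range(len(terms) - n + 1):
        core_attrs = pattrs[i + 1:i + 1 + n]
        if "ms(?)" in core_attrs:
            windows.append(padded[i:i + n + 2])
    return windows

terms = ["甲班", "王", "姿", "分", "本校"]
attrs = ["ds()", "ms(?)", "ms(?)", "ms(?)", "ds()"]
print(sliding_windows(terms, attrs, 3))
```

Each returned window corresponds to one classification instance for the n-gram SVM model; windows without any detected morpheme in the core are discarded, as on the 3-gram example slide.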
    25. Ex: 3-gram model. Sentence: 運動會 ()  ‧ ()  四年 ()  甲班 ()  王 (?)  姿 (?)  分 (?)  ‧ ()  本校 ()  為 ()  響 ()  應 (). Windows (shifted one token at a time): 運動會 ‧ 四年 甲班 王 (?) : discard (no ms(?) in the core); ‧ 四年 甲班 王 (?) 姿 (?) : negative; 四年 甲班 王 (?) 姿 (?) 分 (?) : negative; 甲班 王 (?) B 姿 (?) I 分 (?) E ‧ : positive; 王 (?) 姿 (?) 分 (?) ‧ 本校 : negative
    26. Statistical Information <ul><li>For each n-gram data instance, we compute the following records: </li></ul><ul><ul><li>POS tag of each term </li></ul></ul><ul><ul><li>Term_attribute (ms(), ms(?), ds()…) </li></ul></ul><ul><ul><li>Statistical information (exemplified by the 3-gram model): </li></ul></ul><ul><ul><ul><li>Frequency of the 3-gram. </li></ul></ul></ul><ul><ul><ul><li>p( prefix | 3-gram), e.g. p( prefix | t1~t3) </li></ul></ul></ul><ul><ul><ul><li>p( suffix | 3-gram), e.g. p( suffix | t1~t3) </li></ul></ul></ul><ul><ul><ul><li>p( first term of n | other n-1 consecutive terms), e.g. p( t1 | t2~t3) </li></ul></ul></ul><ul><ul><ul><li>p( last term of n | other n-1 preceding terms), e.g. p( t3 | t1~t2) </li></ul></ul></ul><ul><ul><ul><li>pos_freq(prefix) / pos_freq(prefix in training positives) </li></ul></ul></ul><ul><ul><ul><li>pos_freq(suffix) / pos_freq(suffix in training positives) </li></ul></ul></ul>Window layout: prefix (0) t1 t2 t3 suffix (4)
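The conditional probabilities can be sketched as ratios of window counts. This is an illustrative estimator over toy windows of the form [prefix, t1, t2, t3, suffix], showing only the first three features:

```python
def ngram_features(corpus_windows, window):
    """Estimate the 3-gram frequency and the context probabilities
    p(prefix | t1~t3) and p(suffix | t1~t3) from raw window counts."""
    core = tuple(window[1:4])
    core_count = sum(1 for w in corpus_windows if tuple(w[1:4]) == core)
    with_prefix = sum(1 for w in corpus_windows
                      if tuple(w[0:4]) == tuple(window[0:4]))
    with_suffix = sum(1 for w in corpus_windows
                      if tuple(w[1:5]) == tuple(window[1:5]))
    return {
        "freq(3-gram)": core_count,
        "p(prefix | 3-gram)": with_prefix / core_count,
        "p(suffix | 3-gram)": with_suffix / core_count,
    }

ws = [["a", "x", "y", "z", "b"], ["c", "x", "y", "z", "b"]]
print(ngram_features(ws, ws[0]))
```

Intuitively, a low p(prefix | 3-gram) means the candidate core occurs with many different left contexts, evidence that its left boundary is a real word boundary.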
    27. Experiments <ul><li>Unknown word detection. </li></ul><ul><li>Unknown word extraction. </li></ul>
    28. Unknown Word Detection <ul><li>8/10 of the balanced corpus (575m words) as training data. </li></ul><ul><li>Use the pattern mining tool Prowl [Huang et al., 2004]. </li></ul><ul><li>Randomly pick 1/10 of the balanced corpus (not covered by the training data) as testing data. </li></ul><ul><li>Use accuracy as the threshold of the detection rules. </li></ul>Threshold (Accuracy) | Precision | Recall | F-measure (our system) | F-measure (AS system)
0.70 | 0.9324 | 0.4305 | 0.5890 | 0.7125
0.80 | 0.9008 | 0.5289 | 0.6665 | 0.7524
0.90 | 0.8343 | 0.7148 | 0.7699 | 0.7696
0.95 | 0.7640 | 0.8288 | 0.7951 | 0.7655
0.98 | 0.6860 | 0.8786 | 0.7704 | 0.7440
    29. Unknown Word Extraction <ul><li>The rest of the Sinica corpus data is used as testing data in Phase 2. </li></ul><ul><li>[Chen et al., 2002] evaluates unknown word extraction mainly on Chinese personal names, foreign transliteration names, and compound nouns. </li></ul><ul><li>We apply our extraction method to all types of unknown words. </li></ul>
    30. Unknown Word Extraction <ul><li>In judging the overlap and conflict problems among different combinations of unknown words: </li></ul><ul><ul><li>[Chen et al., 2002] : frequency (w) * length (w) . </li></ul></ul><ul><ul><ul><li>Ex: “ 律師 班 奈 特” => freq( 律師 + 班 )*2 vs. freq( 班 + 奈 + 特 )*3 </li></ul></ul></ul><ul><ul><li>Our method: </li></ul></ul><ul><ul><li>First solve the overlap problem within identical N-gram data: </li></ul></ul><ul><ul><ul><li>P( prefix | overlap) vs. P( suffix | overlap) </li></ul></ul></ul><ul><ul><ul><ul><li>Ex: “ 義民 廟 中” : P( 義民 | 廟 ) vs. P( 中 | 廟 ) </li></ul></ul></ul></ul><ul><ul><li>Then solve the conflict problem across different N-gram data by comparing: </li></ul></ul><ul><ul><ul><li>Real frequency </li></ul></ul></ul><ul><ul><ul><ul><li>freq (X) - freq (Y) , if X is included in Y; ex: X=“ 醫學”、“學院” , Y=“ 醫學院” </li></ul></ul></ul></ul><ul><ul><ul><li>Freq( N-gram) * Freq( POS_N-gram*), N: 2~4 </li></ul></ul></ul>
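The [Chen et al., 2002] tie-breaking criterion can be sketched in a few lines; the frequency counts below are hypothetical, chosen only to illustrate the comparison on the slide's 律師班奈特 example:

```python
def freq_times_length(candidates, freq):
    """[Chen et al., 2002]-style conflict resolution: among conflicting
    merge candidates, keep the one maximizing freq(w) * len(w)."""
    return max(candidates, key=lambda w: freq.get(w, 0) * len(w))

# Hypothetical counts for the conflicting candidates over 律師班奈特:
freq = {"律師班": 2, "班奈特": 5}
print(freq_times_length(["律師班", "班奈特"], freq))  # 班奈特 (5*3 > 2*3)
```

The paper's own method replaces this single score with the prefix/suffix conditional probabilities and real-frequency comparisons listed above.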
    31. Testing result <ul><li>We also evaluate the three kinds of unknown words considered in [Chen et al., 2002]: </li></ul><ul><ul><li>3-gram unknown words: recall=0.73 </li></ul></ul><ul><ul><li>2-gram unknown words: recall=0.70 </li></ul></ul><ul><ul><li>3-gram and 2-gram combined: recall=0.68 </li></ul></ul><ul><li>[Chen et al., 2002] : </li></ul><ul><ul><li>Only morphological rules: F1 score=0.62 (precision=0.92, recall=0.47) </li></ul></ul><ul><ul><li>Only statistical rules: F1 score=0.52 (precision=0.78, recall=0.39) </li></ul></ul><ul><ul><li>Combination: F1 score=0.77 (precision=0.89, recall=0.68) </li></ul></ul>
    32. SVM testing result <ul><li>For general purpose: </li></ul>N-gram | F1 score | Precision | Recall
Only 4-gram | 0.164 | 0.1 | 0.57
Only 3-gram | 0.377 | 0.257 | 0.70
Only 2-gram | 0.587 | 0.492 | 0.73
Three n-gram models combined | 0.524 | 0.457 | 0.614
    33. Ongoing Experiments <ul><li>Two experimental directions: </li></ul><ul><li>Sampling policy </li></ul><ul><ul><li>([Seyda et al., 2007]): </li></ul></ul><ul><ul><ul><ul><li>In SVM, the instances close to the hyper-plane are the most informative for learning. </li></ul></ul></ul></ul><ul><ul><ul><ul><li>Weka classification confidence </li></ul></ul></ul></ul><ul><ul><ul><ul><li>Split the whole training data to obtain confidence values </li></ul></ul></ul></ul><ul><li>Ensemble methods </li></ul><ul><ul><li>Bagging, AdaBoost </li></ul></ul>
inst# | actual | predicted | error | prediction
1 | 2:-1 | 2:-1 | - | 0.984
2 | 1:1 | 1:1 | - | 0.933
…
116 | 2:-1 | 1:1 | + | 0.505
    34. Result:
Gram | Sample By | Algorithm (inside) | Precision | Recall | F-Measure
3 | Confidence=0.97 + all p | Bagging (SMO) | 0.825 | 0.688 | 0.75
3 | Confidence=0.97 + all p | Libsvm | 0.829 | 0.674 | 0.743
3 | P:N = 1:4 | Libsvm | 0.717 | 0.722 | 0.72
2 | Confidence=0.95 + error + all p | Libsvm | 0.759 | 0.612 | 0.678
2 | P:N = 1:2 | Libsvm | 0.637 | 0.716 | 0.674