Simulation of Language Acquisition
Walter Daelemans


1. Simulation of Language Acquisition
   Walter Daelemans (CNTS, University of Antwerp)
   [email_address]
   http://www.cnts.ua.ac.be/~walter
   EMLAR 2005, Utrecht

2. Overview
  - Theories, computational models and simulations
  - Machine Learning
    - Generalization versus abstraction
    - Eager versus lazy learning
  - Memory-based models of language acquisition and processing
  - Case Study 1: Stress acquisition
  - TiMBL crash course and demonstration
  - Case Study 2: German plural

3. Simulation (1)
  - Theory
    - Explains and predicts empirical data (observations, experimental results)
    - In cognitive science: in terms of a knowledge representation, acquisition, and processing framework
    - Problems
      - Verbal
      - Sometimes vague, underspecified
      - Every theoretical description, however exact, turns out to contain errors when you try to implement it (~ Hugo Brandt Corstius, second law of Computational Linguistics)

4. Simulation (2)
  - Computational Model
    - Translation of a theory into a specific symbol representation and processing framework (algorithms and data structures)
    - Advantages
      - Precise formulation
      - Explicit in all details
      - Consistency and completeness can sometimes be proven
      - Falsifiable through simulations
  - Simulations
    - A computational model with specific parameter settings, used to mimic specific empirical data

5. Machine Learning as a model for acquisition
  - Cognitive architecture
    - Competence (knowledge representation)
    - Performance (search)
    - Acquisition (search)
  - Bias
    - Restrictions on input and output representations
    - Restrictions on the learning algorithm
    - Restrictions on the knowledge representation formalism

6. [Diagram: a learning component searches a space of representations (Ri, Rj, Rk, Rl) on the basis of experience and under a given bias; the resulting representation is used by a performance component that maps input to output.]

7. Generalisation × Abstraction

   |                  | + abstraction                                             | - abstraction         |
   | + generalisation | Rule induction, connectionism, statistics                 | Memory-Based Learning |
   | - generalisation | Handcrafting ... (fill in your most hated linguist here)  | Table lookup          |

8. Nativism × Rule-Based

   |              | nativist                                                              | empiricist                                       |
   | + rule-based | Innate mental rules                                                   | Rule induction                                   |
   | - rule-based | Hard-wired neural networks, innate probabilities?, innate exemplars?  | Connectionism, statistics, Memory-Based Learning |

9. Machine Learning crash course
  - "The field of machine learning is concerned with the question of how to construct computer programs that automatically improve with experience." (Mitchell, 1997)
  - Dynamic process: learner L shows improvement on task T after learning.
  - Getting rid of programming.
  - Handcrafting versus learning.
  - Machine Learning is task-independent.

10. Machine Learning: Roots
  - Information theory
  - Artificial intelligence
  - Pattern recognition
  - Took off during the 1970s
  - Major algorithmic improvements during the 1980s
  - Forking: neural networks, data mining

11. Machine Learning: 2 types
  - Theoretical ML (what can be proven to be learnable, and by what?)
    - Gold: identification in the limit
    - Valiant: probably approximately correct (PAC) learning
  - Empirical ML (on real or artificial data)
    - Evaluation criteria:
      - Accuracy
      - Quality of solutions
      - Time complexity
      - Space complexity
      - Noise resistance

12. Empirical ML: Key Terms 1
  - Instances: individual examples of input-output mappings of a particular type
  - Input consists of features
  - Features have values
  - Values can be
    - Symbolic (e.g. letters, words, ...)
    - Binary (e.g. indicators)
    - Numeric (e.g. counts, signal measurements)
  - Output can be
    - Symbolic (classification: linguistic symbols, ...)
    - Binary (discrimination, detection, ...)
    - Numeric (regression)

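As a minimal illustration of these terms, an instance can be represented as a tuple of symbolic feature values plus an output class; the concrete feature and class names below are made up for the example and are not from the original slides.

```python
# One labeled instance: symbolic input features and a symbolic output class.
features = ("b", "oo", "k")    # hypothetical onset, nucleus, coda of a syllable
label = "penultimate"          # hypothetical stress class to predict
instance = (features, label)
```
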
13. Empirical ML: Key Terms 2
  - A set of instances is an instance base
  - Instance bases come as labeled training sets or unlabeled test sets (you know the labeling, the learner does not)
  - An ML experiment consists of training on the training set, followed by testing on the disjoint test set
  - Generalization performance (accuracy, precision, recall, F-score) is measured on the output predicted for the test set
  - Splits into train and test sets should be systematic: n-fold cross-validation (see the sketch below)
    - 10-fold CV
    - Leave-one-out testing
  - Significance tests on pairs or sets of (average) CV outcomes

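A minimal sketch of n-fold cross-validation over such an instance base, assuming a hypothetical classifier given as a pair of train/classify functions; with n equal to the number of instances this reduces to leave-one-out testing. This is an illustration, not part of the original slides.

```python
import random

def n_fold_cv(instance_base, n, train, classify, seed=42):
    """Estimate generalization accuracy with n-fold cross-validation.
    instance_base: list of (features, label) pairs; assumes len(instance_base) >= n."""
    instances = list(instance_base)
    random.Random(seed).shuffle(instances)
    folds = [instances[i::n] for i in range(n)]          # n roughly equal folds
    accuracies = []
    for i, test_fold in enumerate(folds):
        train_set = [inst for j, fold in enumerate(folds) if j != i for inst in fold]
        model = train(train_set)                          # train on the other n-1 folds
        correct = sum(1 for features, label in test_fold
                      if classify(model, features) == label)
        accuracies.append(correct / len(test_fold))       # accuracy on the held-out fold
    return sum(accuracies) / n
```
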
14. Empirical ML: 2 Flavors
  - Eager
    - Learning: abstract a model from the data
    - Classification: apply the abstracted model to new data
  - Lazy
    - Learning: store the data in memory
    - Classification: compare new data to the data in memory

15. Eager vs Lazy Learning
  - Eager:
    - Decision tree induction: CART, C4.5
    - Rule induction: CN2, Ripper
    - Hyperplane discriminators: Winnow, perceptron, backprop, SVM
    - Probabilistic: Naive Bayes, maximum entropy, HMM
    - (Hand-made rule sets)
  - Lazy:
    - k-Nearest Neighbour: MBL, AM; local regression

16. [Figure: Rule Induction on the Dutch diminutive task - a decision boundary over the nucleus and coda of the last syllable separating the -etje and -kje classes.]

17. [Figure: MBL on the same instance space - a new item "?" is classified by comparing it to its nearest neighbours among the stored -etje and -kje exemplars (nucleus and coda of the last syllable).]

18. Eager vs Lazy Learning
  - Decision trees keep the smallest set of informative decision boundaries (in the spirit of MDL; Rissanen, 1983)
  - Rule induction keeps the smallest number of rules with the highest coverage and accuracy (MDL)
  - Hyperplane discriminators keep just one hyperplane (or the vectors that support it)
  - Probabilistic classifiers convert the data to probability matrices
  - k-NN retains every piece of information available at training time

19. Eager vs Lazy Learning
  - Minimal Description Length principle:
    - Ockham's razor
    - Length of the abstracted model (covering the core)
    - Length of the productive exceptions not covered by the core (the periphery)
    - The sum of both sizes should be minimal
    - Smaller models are better
  - The "learning = compression" dogma
  - In ML the focus has been on the length of the abstracted model, not on storing the periphery

20. Eager vs Lazy: So?
  - Highly relevant to language modeling
  - In language data, what is core? What is periphery?
  - Often little or no noise, but productive exceptions
  - (Sub-)subregularities, pockets of exceptions
  - "Disjunctiveness" and "polymorphism"
  - Some important elements of language have distributions that differ from the "normal" one
    - e.g. word forms have a Zipfian distribution
  - Hard to distinguish noise from exceptions on the basis of
    - Frequency
    - Typicality

22. ML and Natural Language
  - Apparent conclusion: ML could be an interesting tool for psycholinguistic modeling
    - Next to probability theory, information theory, and statistical analysis (natural allies)
  - More and more annotated data available
  - Skyrocketing computing power and memory

23. Case Study: Exemplar-based acquisition of Dutch stress (Durieux / Gillis / Daelemans)

24. "This 'rule of nearest neighbor' has considerable elementary intuitive appeal and probably corresponds to practice in many situations. For example, it is possible that much medical diagnosis is influenced by the doctor's recollection of the subsequent history of an earlier patient whose symptoms resemble in some way those of the current patient." (Fix and Hodges, 1952, p. 43)
    MBL: use memory traces of experiences as a basis for analogical reasoning, rather than rules or other abstractions extracted from the experiences and replacing them.

25. MBL Acquisition
  - A language process is represented by a set of exemplars in memory
    - Exemplars act as models
    - Learning is incremental storage of exemplars
    - Compression and metrics
  - An exemplar consists of a set of (mostly symbolic) features

26. MBL Processing
  - New instances of a performance process are solved through
    - Memory retrieval
    - Analogical (similarity-based) reasoning
  - Similarity metric
    - Language(-faculty)-independent
    - Adaptive (feature and exemplar weighting)

27. Operationalization
  - Basis: the k-nearest neighbour algorithm (sketched below):
    - store all examples in memory
    - to classify a new instance X, look up the k examples in memory with the smallest distance D(X,Y) to X
    - let each nearest neighbour vote with its class
    - classify instance X with the class that has the most votes in the nearest neighbour set
  - Choices:
    - similarity metric
    - number of nearest neighbours (k)
    - voting weights

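A minimal Python sketch of this k-NN classifier, added here as an illustration. The distance function is passed in so the metrics of the following slides can be plugged in; ties between classes are resolved arbitrarily.

```python
from collections import Counter

def knn_classify(memory, x, k, distance):
    """memory: list of (features, label) exemplars; x: a new feature tuple.
    Look up the k exemplars closest to x and return the majority class."""
    neighbours = sorted(memory, key=lambda exemplar: distance(exemplar[0], x))[:k]
    votes = Counter(label for _, label in neighbours)
    return votes.most_common(1)[0][0]
```
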
28. The Overlap distance function
  - "Count the number of mismatching features" (sketch below)

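The Overlap metric as a distance function for the k-NN sketch above, assuming instances of equal length; an illustration, not from the original slides.

```python
def overlap_distance(x, y):
    """Count the number of feature positions on which x and y mismatch."""
    return sum(1 for xi, yi in zip(x, y) if xi != yi)
```

For example, `knn_classify(memory, ("b", "oo", "k"), k=1, distance=overlap_distance)` classifies a new instance by its single most similar stored exemplar.
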
29. The MVDM distance function
  - Estimate a numeric "distance" between pairs of values
    - "e" is more like "i" than like "p" in a phonetic task
    - "book" is more like "document" than like "the" in a parsing task

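A sketch of the MVDM idea: two values of a feature are close when they occur with similar class distributions in the training data. The class-conditional probabilities are estimated by simple counting over the stored exemplars; smoothing and the handling of unseen values are ignored in this illustration.

```python
from collections import Counter, defaultdict

def value_class_distributions(memory, feature_index):
    """Estimate P(class | value) for one feature from the exemplars."""
    counts = defaultdict(Counter)
    for features, label in memory:
        counts[features[feature_index]][label] += 1
    return {value: {c: n / sum(class_counts.values())
                    for c, n in class_counts.items()}
            for value, class_counts in counts.items()}

def mvdm(dist1, dist2, classes):
    """MVDM value difference: sum over classes of |P(c|v1) - P(c|v2)|."""
    return sum(abs(dist1.get(c, 0.0) - dist2.get(c, 0.0)) for c in classes)
```
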
30. Feature weighting in the distance function
  - A mismatch on a more important feature gives a larger distance
  - Factor in the distance function (see the formula below)

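The weight factor was shown as a formula image that did not survive this export; the standard feature-weighted Overlap distance used in memory-based learning, with per-feature weights $w_i$ (e.g. information gain or gain ratio), is:

```latex
\Delta(X,Y) = \sum_{i=1}^{n} w_i \, \delta(x_i, y_i),
\qquad
\delta(x_i, y_i) =
\begin{cases}
0 & \text{if } x_i = y_i \\
1 & \text{otherwise}
\end{cases}
```
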
31. Entropy & IG: Formulas

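The formulas on this slide were likewise lost in the export; the standard definitions of class entropy, information gain, and gain ratio for a feature $f$ with value set $V_f$ and class set $C$ are:

```latex
H(C) = -\sum_{c \in C} P(c) \log_2 P(c)
\qquad
IG(f) = H(C) - \sum_{v \in V_f} P(v)\, H(C \mid v)
\qquad
GR(f) = \frac{IG(f)}{si(f)}, \quad si(f) = -\sum_{v \in V_f} P(v) \log_2 P(v)
```
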
32. Exemplar weighting
  - Scale the distance to a memory instance by some externally computed factor
  - Smaller distance for "good" instances
  - Larger distance for "bad" instances

33. Distance weighting
  - Relation between a larger k and smoothing
  - Make more distant neighbours contribute less to the class vote (common weighting schemes below)
    - Linear inverse of distance (w.r.t. the maximum)
    - Inverse of distance
    - Exponential decay

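A hedged reconstruction of common forms of these distance weights (the exact parameterization in TiMBL may differ): with $d_j$ the distance of neighbour $j$, $d_1$ the nearest and $d_k$ the furthest distance in the neighbour set, and $\epsilon$ a small constant avoiding division by zero,

```latex
w_j = \frac{d_k - d_j}{d_k - d_1} \ \text{(inverse linear)}, \qquad
w_j = \frac{1}{d_j + \epsilon} \ \text{(inverse distance)}, \qquad
w_j = e^{-\alpha d_j^{\beta}} \ \text{(exponential decay)}
```
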
34. Learning word stress: A case study
  - Learn primary stress
  - Compare MBL with P&P/UG
  - Match acquisition and processing data
  - Durieux, G. (2003). "Computermodellen en klemtoon." Fonologische Kruispunten, BICN.
  - Daelemans, W., Gillis, S., and Durieux, G. (1994). The acquisition of stress: A data-oriented approach. Computational Linguistics 20: 421-451.
  - Daelemans, W., Gillis, S., Durieux, G., and Van den Bosch, A. (1993). Learnability and markedness: Dutch stress assignment. In T. M. Ellison and J. M. Scobbie (Eds.), Computational Phonology. Edinburgh Working Papers in Cognitive Science, 8, pp. 157-178.

35. MBL for psychology
  - Similarity metric: analogy engine
  - Feature weighting: relevance assignment, information fusion
  - Value weighting: implicit concept formation
  - Exemplar weighting: recency, priming
  - Distance-weighted extrapolation: distributions, probabilities
  - Local modeling: heterogeneity and density

36. Dominant Linguistic Approach
  - Principles and Parameters, UG
    - Typology
    - Acquisition
  - Formalism: metrical trees, metrical grids
  - Stress = prominence relations between constituents in a hierarchical structure

37. YOUPIE (Dresher & Kaye, 1990)
  - Assumptions
    - 11 parameters (216 "languages")
    - Task-specific system for learning stress (domain knowledge)
    - Core grammar only
  - Learning
    - Cue-based parameter setting results in a grammar of stress
  - Performance
    - Generate a tree with the grammar and algorithmically determine the stress location

38. [Diagram: YOUPIE architecture - words from the primary linguistic data (PLD) feed a cue-based learning component, which produces a parameter setting (e.g. 1 0 1 0 0 0 0 1 1 0 1) for the UG stress grammar and assignment rules.]

39. Parameters (with setting for Dutch)
  - P1: Word tree right/left dominant
  - P2: Binary/unbounded feet
  - P3: Feet assigned from the left/right edge
  - P4: Feet right/left dominant
  - P5: Feet are/are not quantity-sensitive
  - P6: Feet are quantity-sensitive w.r.t. rime/nucleus
  - P7: Strong node in foot must/mustn't branch
  - P8: Left-/right-most syllable is extra-metrical
  - P8A: There isn't/is an extra-metrical syllable
  - P9: Weak foot loses/doesn't lose foot status in a clash
  - P10: Feet are/aren't assigned iteratively

40. MBL
  - Assumptions
    - Lexical storage and generalization
    - Generic learning method, no task-specific linguistic knowledge
    - Core and periphery
  - Learning
    - Based on storage of exemplars
  - Performance
    - Similarity-based reasoning with feature weighting on stored exemplars

41. [Diagram: MBL architecture - words from the PLD are stored as syllable-structure representations; the stress pattern of a new word is determined by retrieval or similarity-based reasoning on the stored exemplars.]

42. YOUPIE tested
  - Experimental design
    - 216 languages
    - 117 items per language, generated by the YOUPIE performance component (no exceptions, core only)
    - For each language, a grammar is learned with the YOUPIE cue-based learning component
  - Results
    - For 60% of the languages, YOUPIE reconstructs the original parameter setting with which the words were generated
    - For 21%, convergence is to a compatible setting
    - For 19% of the languages, there are errors in one or more stress patterns
  - Upper boundary!
    - Perfect input, no exceptions to be learned

43. MBLP vs. YOUPIE

   | System and level | Score | Sd    | Accuracy |
   | MBLP-words       | 104   | 15.01 | 89%      |
   | YOUPIE-words     | 105   | 28.24 | 90%      |
   | MBLP-syllables   |       | 3.7   | 97%      |
   | YOUPIE-syllables |       | 11.88 | 95%      |
   | MBLP-languages   | 89    |       | 41%      |
   | YOUPIE-languages | 176   |       | 81%      |

44. Discussion
  - No significant quantitative difference in performance
  - Clear qualitative difference
    - YOUPIE: more languages learned perfectly
    - MBLP: fewer errors per language
  - Issues:
    - Real language data
    - Core and periphery
    - Acquisition
    - Processing

45. Dutch stress
  - Stress falls on one of the last three syllables
  - Predictable, but not completely
    - e.g. py-A-ma, CA-na-da, pa-ra-PLU (stressed syllable in capitals)
  - Words not covered by the parameter configuration for Dutch need lexical marking with exception features (one, two, or completely idiosyncratic)

46. MBLP on Dutch data
  - CELEX, 4868 monomorphemes
  - Exemplar encoding schemes: for each of the three final syllables,
    - S1: syllable weight (SL, L, H, SH)
    - S2: nucleus and coda (complete rhymes, VC)
    - S3: nucleus and coda (separate features, phonemes)
    - S4: onset, nucleus, and coda (phonemes)
  - Class: final, penultimate, antepenultimate

47. Results

48. Language Acquisition
  - Learning rules or learning lexical items?
  - Rules (Hochberg '88, Spanish; Nouveau '93, Dutch)
    - Lexical learning lacks generalization capacity
    - Lexical learning is incompatible with the acquisition data
  - Imitation task
    - Errors increase with irregularity
    - Tendency to regularization (but irregularization occurs)
      - By stress shift
      - By changing the structure of the repeated word

49. Error Percentages

50. Discussion
  - MBLP error correlates with markedness, like children's errors
  - MBLP has a tendency towards regularization, like children
    - Direction of stress shifts
    - Structural changes, from inspection of the nearest neighbours
  - Irregularization, and the differences between 3- and 4-year-olds on marked patterns, are hard to explain in a rule-based context
  - Rule learning is not the only possible explanation for the language acquisition data

51. Adult processing
  - Rule-based: a stress grammar and a set of irregular words, marked in the lexicon
    - Known words: rule application, except when blocked by the lexicon
    - Unknown words: rule application
  - MBLP: lexical storage and analogy
    - Known words: look-up
    - Unknown words: analogy

52. Experimental set-up
  - Stimuli
    - Create pseudo-words and transcribe them (encoding 4)
    - Have a machine learner assign stress (regular or irregular)

   |            | Bisyllabic | Trisyllabic |
   | Regulars   | 60         | 60          |
   | Irregulars | 60         | 60          |

53. Experimental set-up
  - Method
    - 18 adult participants
    - Reading task
    - 3 independent judges, consensus
  - Results
    - Main effect of the regularity variable (ANOVA, p < .001); regular stress only in the regular conditions
    - In all conditions, participants do the same as the model predicts (ANOVA, p < .001)

54. Results

55. Results

56. Discussion
  - Adult speakers sometimes prefer marked stress patterns for non-words
  - These cases are partially predictable with an MBLP model and are problematic for a rule-based model (regularization only)
  - BUT:
    - MBLP has a significantly better match with participant behaviour in the regular conditions
    - Hypothesis: differences between the mental lexicon and CELEX
      - A set-up with a population of machine "learners", each using a different sample from CELEX, explains the variability

57. Summary
  - Goal: put MBLP to the test on a concrete linguistic problem of sufficient complexity by comparing it to
    - Linguistic theory
    - Child language acquisition data
    - Adult processing data
  - Results:
    - MBLP and YOUPIE (P&P/UG) are comparable
    - MBLP can learn the core as well as the periphery using superficial representations
    - MBLP shows the same errors and tendencies as children learning stress placement
    - MBLP is a better predictor of adult human behaviour with non-words

58. Overall Conclusion
  - Exemplar-based models should be taken as a serious alternative to rule-based/P&P/UG/dual-route type theories
    - Workable operationalisation of analogy
    - Adequacy
      - Similar results in morphology and syntax (grammatical relations, chunking, PP-attachment)
      - We'll see ...

59. Simulation with TiMBL - Demonstration: German plural

60. TiMBL (http://ilk.uvt.nl/timbl)
  - Tilburg Memory-Based Learner
  - Available for research and education
  - Lazy learning, extending k-NN and IB1
  - Optimized search for nearest neighbours
    - Internal structure: a tree, not a flat instance base
    - Tree ordered by the chosen feature weight
    - Many built-in optional metrics: feature weights, distance functions, distance weights, exemplar weights, ...

61. Current practice
  - Default TiMBL settings:
    - k=1, Overlap, GR, no distance weighting
    - Work well for some morpho-phonological tasks
  - Rules of thumb (an example command line follows below):
    - Combine MVDM with a bigger k
    - Combine distance weighting with a bigger k
    - A very good bet: higher k, MVDM, GR, distance weighting
    - Especially for sentence- and text-level tasks

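A possible invocation illustrating that last rule of thumb, using only options documented on the usage slides that follow; train.data and test.data are hypothetical file names. It selects MVDM (-mM), k = 5, gain-ratio feature weighting (-w 1), and inverse-linear distance weighting (-d IL):

```
Timbl -f train.data -t test.data -mM -k 5 -w 1 -d IL
```
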
62. usage: Timbl -f data-file {-t test-file} [options]
    Algorithm and Metric options:
    -a n : algorithm.
         0 or IB1   : IB1 (default)
         1 or IG    : IGTree
         2 or TRIBL : TRIBL
         3 or IB2   : IB2
         4 or TRIBL2: TRIBL2
    -m s : use feature metrics as specified in string s:
           format: GlobalMetric:MetricRange:MetricRange
           e.g.: -mO:N3:I2,5-7
         D: Dot product. (Global only. numeric features implied)
         O: weighted Overlap. (default)
         M: Modified value difference.
         N: numeric values.
         I: Ignore named values.

63. -w 0 : No Weighting.
         1 : Weight using GainRatio. (default)
         2 : Weight using InfoGain.
         3 : Weight using Chi-square.
         4 : Weight using Shared Variance.
         f : use Weights from file 'f'.
    -b n : number of lines used for bootstrapping (IB2 only).
    -d val : weight neighbors as function of their distance:
         Z      : all the same weight. (default)
         ID     : Inverse Distance.
         IL     : Inverse Linear.
         ED:a   : Exponential Decay with factor a. (no whitespace!)
         ED:a:b : Exponential Decay with factor a and b. (no whitespace!)
    -k n : k nearest neighbors (default n = 1).

64. -q n : TRIBL treshold at level n.
    -L n : MVDM treshold at level n.
    -R n : solve ties at random with seed n.
    -t f : test using file 'f'.
    -t leave_one_out  : test with Leave One Out, using IB1.
    -t cross_validate : Cross Validate Test, using IB1.
    @f   : test using files and options described in file 'f'.
           Supported options: d e F k m o p q R t u v w x % -
           -t <file> is mandatory

65. Input options:
    -f f : read from Datafile 'f'.
    -f f : OR: use filenames from 'f' for CV test.
    -F format : Assume the specified inputformat.
                (Compact, C4.5, ARFF, Columns, Binary, Sparse)
    -l n : length of Features (Compact format only).
    -i f : read the InstanceBase from file 'f'. (skips phase 1 & 2)
    -u f : read value_class probabilities from file 'f'.
    -P d : read data using path 'd'.
    -s   : use exemplar weights from the input file.
    -s0  : silently ignore the exemplar weights from the input file.

66. Output options:
    -e n : estimate time until n patterns tested.
    -I f : dump the InstanceBase in file 'f'.
    -n f : create names file 'f'.
    -p n : show progress every n lines. (default p = 100,000)
    -U f : save value_class probabilities in file 'f'.
    -V   : Show VERSION.
    +v or -v level : set or unset verbosity level, where level is
         s  : work silently.
         o  : show all options set.
         f  : show Calculated Feature Weights. (default)
         p  : show MVD matrices.
         e  : show exact matches.
         as : show advanced statistics. (memory consuming)
         cm : show Confusion Matrix.
         cs : show per Class Statistics. (implies +vas)
         di : add distance to output file.
         db : add distribution of best matched to output file.
         k  : add a summary for all k neigbors to output file. (sets -x)
         n  : add nearest neigbors to output file. (sets -x and --)
         You may combine levels using '+', e.g. +v p+db or -v o+di.

67. -W f : save current Weights in file 'f'.
    +% or -% : do or don't save test result (%) to file.
    -o s : use s as output filename.
    -O d : save output using path 'd'.
    Internal representation options:
    -B n : number of bins used for discretization of numeric feature values.
    -c n : clipping frequency for prestoring MVDM matrices.
    -D   : Don't store distributions.
           (saves memory, but disables +vDB option)
    +H or -H : write hashed trees (default +H).
    -M n : size of MaxBests Array.
    -N n : Number of features (default 2500).
    -T n : ordering of the Tree:
         DO : none.
         GRO: using GainRatio.
         IGO: using InformationGain.
         (... and many others)
    +x or -x : Do or don't use the exact match shortcut.
               (IB only, default is -x)

68. Data & Representation
  - Symbolic features
    - segmental information (syllable structure)
    - stress
    - gender
  - German plural (~25,000 nouns from CELEX)
    - Vorlesung (lecture): l e - z U N F, class: en
    - Classes: e, (e)n, s, er, -, U-, Uer, Ue

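As an illustration of this representation, the Vorlesung exemplar written as one line of a TiMBL instance file in space-separated (Columns) format, with the class as the last field:

```
l e - z U N F en
```

A file of such lines could then be used as training data with, for example, `Timbl -f german-plural.train -t german-plural.test` (both file names hypothetical).
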
69. Cognitive Architectures of Inflectional Morphology
  - Dual Route (Pinker, Clahsen, Marcus, ...)
    - Rules for regular cases
      - (over)generalization
      - default behaviour
    - Associative memory for exceptions
      - irregularization / family effects
  - Single Route (R&M, MacWhinney, Plunkett, Elman, ...)
    - Frequency-based regularity
  [Diagram: dual-route architecture - input features are looked up in an associative memory (pattern associator); on memory failure the rule applies; either route outputs a suffix class.]

70. German Plural
  - Notoriously complex, but routinely acquired (by age 5)
  - Evidence for Dual Route?
    - The -s suffix is the default/regular suffix (novel words, surnames, acronyms, ...)
    - The -s suffix is infrequent (the least frequent of the five most important suffixes)

72. The default status of -s
  - Similar item missing: Fnöhk-s
  - Surname, product name: Mann-s
  - Borrowings: Kiosk-s
  - Acronyms: BMW-s
  - Lexicalized phrases: Vergissmeinnicht-s
  - Onomatopoeia, truncated roots, derived nouns, ...

74. Discussion
  - Three "classes" of plurals: ((-en, -)(-e, -er))(-s)
    - The first four suffixes seem "regular" and can be learned accurately using information from phonology and gender
    - -s is learned reasonably well, but information is lacking
      - Hypothesis: more "features" are needed (syntactic, semantic, meta-linguistic, ...) to enrich the "lexical similarity space"
  - No difference in accuracy and speed of learning with and without Umlaut
  - Overall generalization accuracy is very high: 95% (90%)
  - Schema-based learning (Köpcke), e.g. the schema *,*,*,*,i,r,M → e

77. Acquisition Data: Summary of previous studies
  - Existing nouns (Park 78; Veit 86; Mills 86; Schamer-Wolles 88; Clahsen et al. 93; Sedlak et al. 98):
    - Children mainly overapply -e or -(e)n
    - -s plurals are learned late
  - Novel words (Mugdan 77; MacWhinney 78; Phillis & Bouma 80; Schöler & Kany 89):
    - Children inflect novel words with -e or -(e)n
    - More "irregular" plural forms are produced than "defaults"

78. MBLP simulation
  - The model overapplies mainly -en and -e
  - -s is learned late and imperfectly
  - Mainly, but not completely, parallel to input frequency (more overgeneralization to -s than to -er)

79. Bartke, Marcus, Clahsen (1995)
  - 37 children, age 3;6 to 6;6
  - Pictures of imaginary things, presented as neologisms
    - names or roots
    - rhymes of existing words or not
    - choice between -en and -s
  - Results:
    - Children are aware that unusual-sounding words require the default
    - Children are aware that names require the default

80. MBLP simulation
  - Sort the CELEX data according to rhyme
  - Compare overgeneralization
    - to -en versus to -s
    - as a percentage of the total number of errors
  - Results:
    - When new words don't rhyme with existing words, more errors are made
    - Overgeneralization to -en drops below the level of overgeneralization to -s

81. Conclusions
  - Computational models in language acquisition need not be connectionist
    - From rule induction to exemplar-based models
  - TiMBL may be useful as software for computational psycholinguistics
