The Evolution of Speech Segmentation: A Computer Simulation

These are the slides for my undergraduate dissertation on word segmentation.

  1. The Evolution of Speech Segmentation: A Computational Simulation
     Richard Littauer (Edinburgh)
  2. Outline
     - The Problem
     - The Possible Solution
     - Conclusions and Implications
  3. The Research Problem
     - Word Segmentation
  4. The Problem
     - Fluent listeners hear speech as a sequence of discrete words.
     - But there are no pauses in the waveform…
  5. The Problem
     - The listener's problem:
       jakɑrəmnə (or thereishope)
     - Solution!
       - Find all boundaries
       - Don't find any boundaries
  6. The Problem
     - Suggestions:
       - Allophonic variation
       - Coarticulation
       - Prosody
       - Phonotactics
       - Combining any of these
       - Or…
  7. The Problem
     - Recent studies have shown that 8-month-olds can segment continuous strings of speech syllables into word-like units using only statistical computation over syllables (Aslin et al. 1997, 1998; Mattys et al. 1999).
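The statistical mechanism those studies tested can be sketched as follows. This is my own toy illustration of transitional-probability segmentation, not the dissertation's actual code: the syllable stream is invented, and the 0.75 boundary threshold is an arbitrary choice that happens to separate the within-word transitions (probability 1.0 in this toy) from the cross-boundary ones (0.5).

```python
from collections import Counter

def transition_probabilities(syllables):
    """Forward transitional probability P(next | current) for each
    adjacent pair of syllables in a continuous stream."""
    pair_counts = Counter(zip(syllables, syllables[1:]))
    first_counts = Counter(syllables[:-1])
    return {pair: n / first_counts[pair[0]] for pair, n in pair_counts.items()}

def segment(syllables, tps, threshold=0.75):
    """Insert a word boundary wherever the transitional probability
    between two adjacent syllables falls below the threshold."""
    words, current = [], [syllables[0]]
    for prev, syl in zip(syllables, syllables[1:]):
        if tps[(prev, syl)] < threshold:
            words.append("".join(current))
            current = []
        current.append(syl)
    words.append("".join(current))
    return words

# Toy stream built from three "words" (tupiro, golabu, bidaku):
# within-word transitions always co-occur, cross-boundary ones vary,
# so the stream segments back into the original words.
stream = "tu pi ro go la bu tu pi ro bi da ku go la bu bi da ku tu pi ro".split()
words = segment(stream, transition_probabilities(stream))
```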
  8. The Problem
     - These studies looked at syllable transition probability, but didn't look at the possibility that the children may simply be counting the syllables.
  9. The Problem
     - Furthermore, while Aslin, Saffran, & Newport (1996, 1998) did show that children can use statistical probability, they didn't examine how that type of analysis would influence language over time.
  10. The Problem
      - No one has done this (as far as I am aware).
  11. The Problem
      - So why does this matter? Because the child has no lexicon to fall back on, the information the child is exposed to must be what it uses to learn how to segment properly.
  12. My Simulation
      - Code four different possible transitional segmentation strategies, then use an Iterated Learning Model to see how well they do when culturally replicated.
  13. My Simulation
      - Coded four different methods:
        - If you have seen one of the two test words before and not the other, choose the one you have seen before.
        - If one of the test words has occurred more frequently than the other, choose the more frequent one.
        - If one of the test words contains more frequent transitions, choose that one.
        - If one of the test words contains more probable transitions, choose that one.
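The four methods above can be read as forced-choice rules over a pair of candidate test words. The sketch below is my own interpretation (function names, the tuple-of-syllables word representation, and the `None` return for an undecided comparison are all my assumptions, not the dissertation's implementation):

```python
import math
from collections import Counter

def transitions(word):
    """Adjacent syllable pairs inside a word (word = tuple of syllables)."""
    return list(zip(word, word[1:]))

def by_recognition(a, b, seen):
    """Method 1: if exactly one candidate has been seen before, choose it."""
    if (a in seen) != (b in seen):
        return a if a in seen else b
    return None  # undecided

def by_frequency(a, b, freq):
    """Method 2: choose the candidate that has occurred more often."""
    if freq[a] != freq[b]:
        return a if freq[a] > freq[b] else b
    return None

def by_transition_count(a, b, pair_counts):
    """Method 3: choose the candidate whose internal transitions have
    been encountered more often in total."""
    score_a = sum(pair_counts[p] for p in transitions(a))
    score_b = sum(pair_counts[p] for p in transitions(b))
    if score_a != score_b:
        return a if score_a > score_b else b
    return None

def by_transition_prob(a, b, tps):
    """Method 4: choose the candidate whose internal transitions are
    jointly more probable (product of transitional probabilities)."""
    score_a = math.prod(tps.get(p, 0.0) for p in transitions(a))
    score_b = math.prod(tps.get(p, 0.0) for p in transitions(b))
    if score_a != score_b:
        return a if score_a > score_b else b
    return None
```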
  14. My Simulation
      - Variables:
        - word recognition
        - word frequency
        - syllable transition count
        - syllable transition probability
  15. My Simulation
      - Variables:
        - word length
        - number of 'syllables'
        - number of words
        - number of words used
        - fixed lexicons
  16. My Simulation
      - Types of pairings:
        - two random words
        - one lexicon word, one random word with the same phonemes
        - one scrambled word
        - one chopped-up word
  17. My Simulation
      - The ILM
        - All of this was run through an Iterated Learning Model: a generational model in which each learner's output becomes the next learner's input.
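The generational loop of an Iterated Learning Model has a very small skeleton. This is a generic sketch, not the dissertation's code; `learn` and `produce` are placeholder callables standing in for whatever segmentation strategy and production process a given run uses:

```python
def iterate(initial_corpus, learn, produce, generations=10):
    """Skeleton Iterated Learning Model: each generation induces a
    grammar (e.g. a segmented lexicon) from the previous generation's
    output, then produces the data the next generation learns from."""
    corpus = initial_corpus
    history = [corpus]
    for _ in range(generations):
        grammar = learn(corpus)    # e.g. segment the corpus into a lexicon
        corpus = produce(grammar)  # e.g. generate utterances from that lexicon
        history.append(corpus)
    return history
```

With the history of corpora in hand, measures such as lexical retention or edit distance can then be computed between any two generations.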
  18. My Simulation
      - What I judged the output on:
        - the original generation
        - the first generation
  19. My Simulation
      - What I measured:
        - lexical retention
        - lexical size
        - Hamming distance
        - Levenshtein distance
        - phonotactic development
        - transitional probability
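Two of the measures above, Hamming and Levenshtein distance, have standard definitions; the sketch below implements those textbook definitions rather than reproducing the dissertation's code:

```python
def hamming(a, b):
    """Hamming distance: number of positions at which two equal-length
    strings differ (substitutions only)."""
    if len(a) != len(b):
        raise ValueError("Hamming distance requires equal-length strings")
    return sum(x != y for x, y in zip(a, b))

def levenshtein(a, b):
    """Levenshtein distance: minimum number of insertions, deletions,
    and substitutions turning a into b (rolling one-row DP)."""
    prev = list(range(len(b) + 1))
    for i, x in enumerate(a, start=1):
        curr = [i]
        for j, y in enumerate(b, start=1):
            curr.append(min(prev[j] + 1,              # deletion
                            curr[j - 1] + 1,          # insertion
                            prev[j - 1] + (x != y)))  # substitution
        prev = curr
    return prev[-1]
```

Levenshtein is the natural choice when the two generations' words differ in length; Hamming only applies when they do not.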
  20. Results
      - Word recognition: pretty unsuccessful.
      - Word frequency: wildly successful (100%).
      - Transitional probability: alright.
      - Transitional counting: better than alright, and each generation got better.
  21. Results
  22. Results
      - Controls:
        - The WPT was very influential.
        - Fixed original corpus: word recognition and frequency did too well, while the transitional processes looked most like real language.
        - Random original corpus: none of them did well.
  23. Results
      - Controls:
        - More words are better.
        - Shorter words are better.
        - Longer runs aren't needed but are useful.
  24. Disclaimers
      - Online processing
      - Memory constraints
      - The WPT is unrealistic
      - Words aren't isolated
      - Abstraction
      - The digram analysis
  25. Future Work?
      - What about a Bayesian analysis?
      - How exactly would transition count and probability be used in sequence?
      - And anything you might raise now that shows I need to redo this?
      - thatisitiamdonenowthanks
