The Geometry of Learning

Latent Semantic Analysis (LSA) is a mathematical technique for computationally modeling the meaning of words and larger units of text. LSA works by applying Singular Value Decomposition (SVD) to a term-document matrix that holds the frequency counts of every word in every document or passage of the corpus. After this SVD step, the meaning of a word is represented as a vector in a multidimensional semantic space, which makes it possible to compare word meanings, for instance by computing the cosine between two word vectors.

LSA has been successfully used in a large variety of language-related applications, from automatic grading of student essays to predicting click trails in website navigation. In Coh-Metrix (Graesser et al. 2004), a computational tool that produces indices of the linguistic and discourse representations of a text, LSA serves as a measure of text cohesion, on the assumption that cohesion increases as a function of higher cosine scores between adjacent sentences.

Besides being interesting as a technique for building programs that need to deal with semantics, LSA is also interesting as a model of human cognition: it can match human performance on word association tasks and vocabulary tests. In this talk, Fridolin will focus on LSA as a tool for modeling language acquisition. After framing the talk by sketching the key concepts of learning, information, and competence acquisition, and after outlining its presuppositions, he gives an introduction to meaningful interaction analysis (MIA). MIA is a means to inspect learning with the support of language analysis that is geometrical in nature; it fuses latent semantic analysis (LSA) with network analysis (NA/SNA). LSA, NA/SNA, and MIA are illustrated with several examples.

Published in: Technology, Education

    1. 1. The Geometry of Learning<br />November 17th, 2009, Utrecht, The Netherlands<br />Fridolin Wild, KMi, The Open University<br />
    2. 2. (createdwith<br />
    3. 3. Outline<br />Context & Framing Theories<br />Latent Semantic Analysis (LSA)<br />Social Network Analysis (SNA)<br />Meaningful Interaction Analysis (MIA)<br />Conclusion & Outlook<br />
    4. 4. Context & Theories<br />
    5. 5. Information & Knowledge<br />Information could be the quality of a certain signal.<br />Information could be a logical abstractor, the release mechanism.<br />Knowledge could be the delta at the receiver (a paper, a human, a library).<br />
    6. 6. What is learning about?<br />Learning is change<br />Learning is about competence development<br />Competence becomes visible in performance<br />Professional competence is mainly about (re-)constructing and processing information and knowledge from cues<br />Professional competence development is much about learning concepts from language<br />Professional performance is much about demonstrating conceptual knowledge with language<br />Language!<br />
    7. 7. Tying shoelaces<br />Douglas Adams’ ‘Meaning of Liff’:<br />Epping: The futile movements of forefingers and eyebrows used when failing to attract the attention of waiters and barmen.<br />Shoeburyness: The vague uncomfortable feeling you get when sitting on a seat which is still warm from somebody else's bottom.<br />I have been convincingly Sapir-Whorfed by this book.<br />Non-textual concepts: things we can’t (easily) learn from language<br />
    8. 8. Latent Semantic Analysis<br />
    9. 9. Word Choice<br />An educated adult understands ~100,000 word forms.<br />An average sentence contains 20 tokens.<br />Thus there are 100,000^20 possible combinations of words in a sentence<br /> a maximum of log2(100,000^20) = 332 bits in word choice alone.<br />20! = 2.4 x 10^18 possible orders of 20 words = a maximum of 61 bits from the order of the words.<br />332/(61 + 332) = 84% word choice<br />(Landauer, 2007)<br />
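Landauer's back-of-the-envelope estimate on this slide is easy to verify. A minimal Python check of the arithmetic (the 100,000-word vocabulary and 20-token sentence are the slide's own assumptions):

```python
from math import log2, factorial

# Bits carried by word choice vs. word order in a 20-token sentence,
# assuming a vocabulary of ~100,000 word forms (Landauer, 2007)
choice_bits = log2(100_000 ** 20)   # choosing 20 words from ~100,000 forms
order_bits = log2(factorial(20))    # ordering those 20 words
share = choice_bits / (choice_bits + order_bits)
print(round(choice_bits), round(order_bits), round(share * 100))  # 332 61 84
```

So roughly 84% of the available information in a sentence sits in word choice alone, which is the motivation for a bag-of-words model like LSA.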
    10. 10. Latent Semantic Analysis<br />“Humans learn word meanings and how to combine them into passage meaning through experience with ~paragraph unitized verbal environments.”<br />“They don’t remember all the separate words of a passage; they remember its overall gist or meaning.”<br />“LSA learns by ‘reading’ ~paragraph unitized texts that represent the environment.”<br />“It doesn’t remember all the separate words of a text; it remembers its overall gist or meaning.”<br />(Landauer, 2007)<br />
    11. 11. Latent Semantics<br />In other words:<br />Assumption: language utterances have a semantic structure<br />Problem: the structure is obscured by word usage (noise, synonymy, polysemy, …)<br />Solution: map the doc-term matrix into a latent-semantic space using conceptual indices derived statistically (truncated SVD) and make similarity comparisons using angles<br />
    12. 12. Input (e.g., documents)<br />term = feature<br />vocabulary = ordered set of features<br />Only the red terms appear in more than one document, so strip the rest.<br />TEXTMATRIX<br />{ M } = <br />Deerwester, Dumais, Furnas, Landauer, and Harshman (1990): Indexing by Latent Semantic Analysis, In: Journal of the American Society for Information Science, 41(6):391-407<br />
    13. 13. Singular Value Decomposition<br />M = TSD’<br />
    14. 14. Truncated SVD<br />latent-semantic space<br />… we will get a different matrix (different values, but still of the same format as M).<br />
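The truncation step on this slide can be sketched in a few lines of numpy; the matrix values below are invented for illustration:

```python
import numpy as np

# Toy term-document matrix M (rows = terms, columns = documents)
M = np.array([[1., 0., 1.],
              [0., 1., 1.],
              [1., 1., 0.],
              [0., 0., 1.]])

# Full SVD: M = T S D'  (numpy returns the factor D' directly)
T, s, Dt = np.linalg.svd(M, full_matrices=False)

# Truncate to the k largest singular values and multiply back out
k = 2
Mk = T[:, :k] @ np.diag(s[:k]) @ Dt[:k, :]

# Mk has the same format as M, but different (smoothed) values
print(M.shape == Mk.shape)  # True
```

With all singular values kept the product reproduces M exactly; dropping the smallest ones yields the reduced matrix of the same shape that the slide describes.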
    15. 15. (Landauer, 2007)<br />
    16. 16. Reconstructed, Reduced Matrix<br />m4: Graph minors: A survey<br />
    17. 17. Similarity in a Latent-Semantic Space<br />Query<br />Y dimension<br />Target 1<br />Angle 1<br />Angle 2<br />Target 2<br />X dimension<br />(Landauer, 2007)<br />
    18. 18. doc2doc similarities<br />Unreduced = pure vector space model<br />- based on M = TSD’<br />- Pearson correlation over document vectors<br />Reduced<br />- based on M2 = TS2D’ (S truncated to the two largest singular values)<br />- Pearson correlation over document vectors<br />
    19. 19. Typical, simple workflow<br />tm = textmatrix("dir/")<br />tm = lw_logtf(tm) * gw_idf(tm)<br />space = lsa(tm, dims=dimcalc_share())<br />tm3 = fold_in(tm, space)<br />as.textmatrix(space)<br />
    20. 20. Processing Pipeline (with Options)<br />4 x 12 x 7 x 2 x 3 = 2016 Combinations<br />
    21. 21. Projecting by Folding-In<br />a) SVD factor stability<br />SVD calculates factors over a given text base; different texts – different factors<br />Problem: avoid unwanted factor changes<br />Solution: folding-in instead of recalculating<br />b) SVD is computationally expensive<br />From seconds (lower hundreds of documents, optimised linear algebra libraries, truncated SVD)<br />To minutes (hundreds to thousands of documents)<br />To hours (tens and hundreds of thousands)<br />
    22. 22. Folding-In in Detail (cf. Berry et al., 1995)<br />(1) convert the original vector v to „Dk“-format: dk = vT Tk Sk^-1<br />(2) convert the „Dk“-format vector to „Mk“-format: mk = Tk Sk dkT<br />
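The two fold-in steps from Berry et al. can be sketched in numpy; the matrix and the new document vector below are invented toy data:

```python
import numpy as np

# Reduced space from a prior truncated SVD (toy data, k = 2)
rng = np.random.default_rng(0)
M = rng.random((6, 4))                    # 6 terms x 4 documents
T, s, Dt = np.linalg.svd(M, full_matrices=False)
k = 2
Tk, Sk = T[:, :k], np.diag(s[:k])

# Fold in a new document vector v (raw term frequencies)
v = rng.random(6)
dk = v @ Tk @ np.linalg.inv(Sk)           # (1) v -> "Dk"-format row
mk = Tk @ Sk @ dk                         # (2) back to "Mk"-format column
print(mk.shape)  # (6,)
```

The new document gets coordinates in the existing space without recomputing the SVD, which is exactly why folding-in keeps the factors stable and cheap.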
    23. 23. The Value of Singular Values<br />[Chart: Pearson(jahr, wien) and Pearson(eu, österreich) as a function of the number of singular values retained]<br />
    24. 24. Simple LSA application<br />
    25. 25. Summary Writing: Working Principle<br />(Landauer, 2007)<br />
    26. 26. Summary Writing<br />Gold Standard 1<br />Gold <br />Standard 2<br />Y dimension<br />Gold Standard 3<br />Essay 1<br />Essay 2<br />X dimension<br />
    27. 27. ‘Dumb’ Summary Writing (Code)<br />library("lsa") # load package<br /># load training texts<br />trm = textmatrix("trainingtexts/")<br />trm = lw_bintf(trm) * gw_idf(trm) # weighting<br />space = lsa(trm) # create an LSA space<br /># fold in summaries to be tested (including gold standard text)<br />tem = textmatrix("testessays/", vocabulary=rownames(trm))<br />tem_red = fold_in(tem, space)<br /># score a summary by comparing with<br /># gold standard text (very simple method!)<br />cor(tem_red[,"goldstandard.txt"], tem_red[,"E1.txt"])<br />=> 0.7<br />
    28. 28. Evaluating Effectiveness<br />Compare Machine Scores with Human Scores<br />Human-to-Human Correlation<br />Usually around .6<br />Increased by familiarity between assessors, tighter assessment schemes, …<br />Scores vary even more strongly with decreasing subject familiarity (.8 at high familiarity, worst test -.07)<br />Test Collection: 43 German essays, scored from 0 to 5 points (ratio scaled), average length: 56.4 words<br />
    29. 29. Training Collection: 3 ‘golden essays’, plus 302 documents from a marketing glossary, average length: 56.1 words<br />(Positive) Evaluation Results<br />LSA machine scores:<br />Spearman's rank correlation rho<br />data: humanscores[names(machinescores), ] and machinescores<br />S = 914.5772, p-value = 0.0001049<br />alternative hypothesis: true rho is not equal to 0<br />sample estimates: rho = 0.687324<br />Pure vector space model:<br />Spearman's rank correlation rho<br />data: humanscores[names(machinescores), ] and machinescores<br />S = 1616.007, p-value = 0.02188<br />alternative hypothesis: true rho is not equal to 0<br />sample estimates: rho = 0.4475188<br />
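The evaluation above uses Spearman's rank correlation between human and machine scores. A minimal Python sketch of the measure itself (the score lists are invented, not the talk's data; the tie-free ranking shortcut only works because no scores repeat):

```python
import numpy as np

def spearman_rho(x, y):
    # rank-transform both score lists (valid here because there are no
    # ties), then take the Pearson correlation of the ranks
    rx = np.argsort(np.argsort(x))
    ry = np.argsort(np.argsort(y))
    return np.corrcoef(rx, ry)[0, 1]

human = np.array([1, 3, 2, 5, 4])               # invented human scores
machine = np.array([1.2, 2.9, 2.1, 4.8, 4.1])   # invented machine scores
print(round(spearman_rho(human, machine), 2))   # 1.0 (identical ordering)
```

Because only the rank ordering matters, the measure is robust to the machine scores living on a different scale than the human 0-5 points.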
    30. 30. (S)NA<br />
    31. 31. Social Network Analysis<br />Has existed for a long time (the term was coined in 1954)<br />Basic idea:<br />Actors and relationships between them (e.g. interactions)<br />Actors can be people (groups, media, tags, …)<br />Actors and ties form a graph (nodes and edges)<br />Within that graph, certain structures can be investigated:<br />Betweenness, degree centrality, density, cohesion<br />Structural patterns can be identified (e.g. the troll)<br />
    32. 32. Forum Messages<br />
    33. 33. Incidence Matrix<br />msg_id = incident, authors appear in incidents<br />
    34. 34. Derive Adjacency Matrix<br />A = t(im) %*% im<br />
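The R expression on this slide translates directly to numpy; the incidence matrix below is invented toy data:

```python
import numpy as np

# Incidence matrix: rows = messages, columns = authors
# im[i, j] = 1 if author j is involved in message i
im = np.array([[1, 1, 0],
               [0, 1, 1],
               [1, 0, 1]])

# R's  t(im) %*% im  is  im.T @ im  in numpy:
A = im.T @ im
# diagonal: messages per author; off-diagonal: messages shared by a pair
print(A.tolist())  # [[2, 1, 1], [1, 2, 1], [1, 1, 2]]
```

The resulting author-by-author matrix is the adjacency matrix of the co-occurrence graph that the sociogramme on the next slide visualizes.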
    35. 35. Visualization: Sociogramme<br />
    36. 36. Measuring Techniques (Sample)<br />Closeness: how close to all others<br />Degree centrality: number of (in/out) connections to others<br />Betweenness: how often intermediary<br />Components: e.g. k-means cluster (k=3)<br />
    37. 37. SNA applications<br />
    38. 38. Co-Authorship Network WI (2005)<br />
    39. 39. Paper Collaboration Prolearn<br />e.g. co-authorships of ~30 deliverables of three work packages (ProLearn NoE)<br />Roles: reviewer (red), editor (green), contributor<br />Size: Prestige()<br />But: type of interaction? content of interaction? => not possible!<br />
    40. 40. TEL Project Cooperation (2004-2007)<br />
    41. 41. iCamp Collaboration (Y1)<br />Shades of yellow: WP leadership<br />Red: coordinator<br />
    42. 42. MIA<br />
    43. 43. Meaningful Interaction Analysis (MIA)<br />Fusion: combining LSA with SNA<br />Terms and documents (or anything else represented with column or row vectors) are mapped into the same space by LSA<br />Semantic proximity can be measured between them: how close is a term to a document?<br />(S)NA makes it possible to analyse the resulting graph structures<br />e.g. by cluster or component analysis<br />e.g. by identifying central descriptors for these<br />
    44. 44. The mathemagics behind<br />Meaningful Interaction Analysis<br />
    45. 45. Truncated SVD<br />latent-semantic space<br />… we will get a different matrix (different values, but still of the same format as M).<br />
    46. 46. Knowledge Proxy: LSA Part<br />Tk = left-hand matrix = ‘term loadings’ on the singular values<br />Dk = right-hand matrix = ‘document loadings’ on the singular values<br />Multiply them into the same space:<br />VT = TkSk<br />VD = DkSk<br />Cosine distance matrix over ... = a graph<br />Extension: add author vectors VA through cluster centroids or vector addition of their publication vectors<br />Of course: use the existing space and fold in the whole sets of vectors<br />
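A minimal numpy sketch of the mapping on this slide, using an invented toy matrix: scaling the term loadings and document loadings by the singular values puts both in the same k-dimensional space, so term-to-document proximity becomes a simple cosine.

```python
import numpy as np

# Toy space; in the talk's notation VT = Tk Sk and VD = Dk Sk
rng = np.random.default_rng(1)
M = rng.random((8, 5))                    # 8 terms x 5 documents
T, s, Dt = np.linalg.svd(M, full_matrices=False)
k = 2
Tk, Sk, Dk = T[:, :k], np.diag(s[:k]), Dt[:k, :].T

VT = Tk @ Sk     # one row per term
VD = Dk @ Sk     # one row per document

# term-to-document proximity in the shared space
def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

print(VT.shape, VD.shape)  # (8, 2) (5, 2)
```

The cosine matrix over all rows of VT and VD is the graph that the SNA part then filters and analyses.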
    47. 47. Knowledge Proxy: SNA Part: Filter the Network<br />Every vector has a cosine distance to every other (maybe negative)!<br />So: filter for the desired similarity strength<br />
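The filtering step can be sketched as thresholding a pairwise cosine matrix; the vectors and the 0.5 cut-off below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)
V = rng.standard_normal((6, 2))           # 6 vectors in a shared 2-d space

# Pairwise cosine similarities (may be negative)
U = V / np.linalg.norm(V, axis=1, keepdims=True)
S = U @ U.T

# Keep only edges above the desired similarity strength
threshold = 0.5
adj = (S >= threshold) & ~np.eye(len(V), dtype=bool)
print(adj.shape)  # (6, 6)
```

The boolean matrix `adj` is the filtered graph that standard (S)NA measures such as betweenness or components can then be computed on.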
    48. 48. ConSpect: monitoring conceptual development<br />
    49. 49.
    50. 50. TopicProxy (30 people, 2005)<br />
    51. 51. Spot unwanted fragmentation<br />e.g. two authors work on the same topic, but with different collaborator groups and different literature<br />Intervention instrument: automatically recommend holding a flashmeeting<br />Bringing together what belongs together<br />Wild, Ochoa, Heinze, Crespo, Quick (2009, to appear)<br />
    52. 52. //eof.<br />