NLP and LSA getting started
An introduction to Natural Language Processing and Latent Semantic Analysis


Presentation Transcript

  • Latent semantic analysis: getting started. Latent semantic analysis (LSA) is a technique in natural language processing, in particular in vectorial semantics, for analyzing relationships between a set of documents and the terms they contain by producing a set of concepts related to the documents and terms. (Wikipedia)
  • Natural language processing (NLP) is a field of computer science, artificial intelligence, and linguistics concerned with the interactions between computers and human (natural) languages. (Wikipedia) NLP can be divided into four phases: lexical analysis, grammar analysis, syntactic analysis, and semantic analysis. Apache OpenNLP is a machine-learning-based toolkit for the processing of natural language text (http://opennlp.apache.org/). LSA can be seen as a part of NLP.
  • Apache OpenNLP usage examples: lexical analysis (tokenization), grammar analysis (part-of-speech tagging), syntactic analysis (chunker, parser). NOTE: before the lexical analysis it is possible to use a sentence analysis tool, the sentence detector (Apache OpenNLP). A console sketch of these tools follows.
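A hedged sketch of the corresponding command line tools (tool and model file names follow the OpenNLP manual; the input/output file names and the redirection chain are assumptions):

    opennlp SentenceDetector en-sent.bin < article.txt > sentences.txt
    opennlp TokenizerME en-token.bin < sentences.txt > tokens.txt
    opennlp POSTagger en-pos-maxent.bin < tokens.txt > tagged.txt
    opennlp ChunkerME en-chunker.bin < tagged.txt > chunked.txt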
  • Supervised machine learning concepts. Humans take the INPUT DATA (e.g. a Wikipedia corpus) and produce a finite set of (INPUT, OUTPUT) pairs: this is the training set, and it can be seen as a discrete function. A machine learning algorithm (e.g. linear regression, maximum entropy, perceptron) then produces a MODEL, which can be seen as a continuous function. At prediction time the INPUT DATA are taken from an infinite set (e.g. just a document) and the machine, using the model and the input, produces the expected OUTPUT DATA (e.g. that document POS-tagged).
  • LSA key concepts. LSA assumes that words that are close in meaning will occur in similar pieces of text; it is a method for discovering hidden concepts in document data. Given a set of documents, each containing several words, the LSA algorithm takes the documents and words and evaluates vectors in a semantic vector space using: a documents/words matrix and singular value decomposition (SVD). If word1 and word2 end up close in this space, it means that their (latent) meanings are related.
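In standard LSA notation the SVD step looks as follows (the truncation rank k is a free parameter, not stated in the deck):

    A = U \Sigma V^T            (full SVD of the terms-by-documents matrix A)
    A_k = U_k \Sigma_k V_k^T    (rank-k truncation: keep the k largest singular values)

The rows of U_k \Sigma_k give the term vectors and the rows of V_k \Sigma_k give the document vectors; closeness between these vectors (e.g. cosine similarity) is what the semantic vector space measures.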
  • Example: the words/documents matrix.

            Doc1  Doc2  Doc3  Doc4
      Word1   1     0     1     0
      Word2   1     0     1     1
      Word3   0     1     0     1
      ...

    1: the i-th word occurs in the j-th document. 0: the i-th word does not occur in the j-th document. The matrix is very big (thousands of words, hundreds of documents), so SVD decomposition is used to reduce its dimension. The Semantic Vectors or JLSI libraries perform the SVD decomposition and build the semantic vector space; UIMA can be used to manage the solution.
  • Online references:
      http://opennlp.apache.org/documentation/manual/opennlp.html
      https://code.google.com/p/semanticvectors/
      http://hlt.fbk.eu/en/technology/jlsi
      http://uima.apache.org/
      http://en.wikipedia.org/wiki/Singular_value_decomposition
      http://en.wikipedia.org/wiki/Eigenvalues_and_eigenvectors
    Coursera video references:
      http://www.coursera.org/course/nlangp
      http://www.coursera.org/course/ml
  • Some snippets and console commands. OpenNLP has a command line tool which is used to train the models; its output is a trained model file. A hedged example follows.
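A sketch of model training from the console (the trainer tool names and flags follow the OpenNLP manual; the training data file names are assumptions):

    opennlp TokenizerTrainer -model en-token.bin -lang en -data en-token.train -encoding UTF-8
    opennlp POSTaggerTrainer -model en-pos-maxent.bin -lang en -data en-pos.train -encoding UTF-8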
  • Models and documents to manage. The snippet below takes 4 files as input (the sentence detector, tokenizer and POS tagger models, plus the document itself) and evaluates a new file that is sentence-detected, tokenized and POS-tagged, and that could then be indexed, for example, in a search engine like Apache Solr.
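A minimal reconstruction of such a snippet against the OpenNLP API (the model file names are the stock English models from the OpenNLP site; the class name and sample text are made up):

    import java.io.FileInputStream;
    import opennlp.tools.postag.POSModel;
    import opennlp.tools.postag.POSTaggerME;
    import opennlp.tools.sentdetect.SentenceDetectorME;
    import opennlp.tools.sentdetect.SentenceModel;
    import opennlp.tools.tokenize.TokenizerME;
    import opennlp.tools.tokenize.TokenizerModel;

    public class PipelineSketch {
        public static void main(String[] args) throws Exception {
            // Three trained models plus the document itself: the "4 files" of the slide.
            SentenceModel sentModel = new SentenceModel(new FileInputStream("en-sent.bin"));
            TokenizerModel tokModel = new TokenizerModel(new FileInputStream("en-token.bin"));
            POSModel posModel = new POSModel(new FileInputStream("en-pos-maxent.bin"));

            SentenceDetectorME sentenceDetector = new SentenceDetectorME(sentModel);
            TokenizerME tokenizer = new TokenizerME(tokModel);
            POSTaggerME tagger = new POSTaggerME(posModel);

            // Hypothetical input document; in the deck this is read from a file.
            String document = "Abraham was the father of Isaac. Isaac dwelt in Gerar.";

            // Pipeline: sentence detection -> tokenization -> POS tagging.
            for (String sentence : sentenceDetector.sentDetect(document)) {
                String[] tokens = tokenizer.tokenize(sentence);
                String[] tags = tagger.tag(tokens);
                StringBuilder tagged = new StringBuilder();
                for (int i = 0; i < tokens.length; i++) {
                    tagged.append(tokens[i]).append('/').append(tags[i]).append(' ');
                }
                // The tagged text could now be written out and indexed, e.g. in Apache Solr.
                System.out.println(tagged.toString().trim());
            }
        }
    }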
  • SemanticVectors has two main functions: 1. building wordSpace models (to build the wordSpace model, SemanticVectors needs indexes created by Apache Lucene); 2. searching through the vectors in such models. Note that lucene-core is a transitive dependency, so a .bat file can be used to load the classpath. E.g.: a Bible chapter indexed by Lucene, as sketched below.
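A sketch of that setup (the jar names/versions and folder names are placeholders; IndexFiles is the stock Lucene demo indexer, and its flags here follow the Lucene 4.x demo):

    REM classpath.bat -- jar versions are placeholders
    set CLASSPATH=.;semanticvectors.jar;lucene-core.jar;lucene-demo.jar

    REM index the corpus (e.g. one file per document of the Bible chapter)
    java org.apache.lucene.demo.IndexFiles -docs bible_chapter -index index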
  • 1. Building wordSpace models using the pitt.search.semanticvectors.LSA class from the index created by Apache Lucene (from a Bible chapter). In this example the Bible chapter contains 29 documents, and in total there are 2460 terms. SemanticVectors builds: 1. 29 vectors that represent the documents (docvectors.bin); 2. 2460 vectors that represent the terms (termvectors.bin). These two files represent the wordSpace. Note that it is also possible to use the pitt.search.semanticvectors.BuildIndex class, which uses Random Projection instead of LSA to reduce the dimensionality of the representation.
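A hedged build command (the -luceneindexpath flag appears in the SemanticVectors documentation; the index folder name is an assumption):

    java pitt.search.semanticvectors.LSA -luceneindexpath index

and, for the Random Projection alternative mentioned above:

    java pitt.search.semanticvectors.BuildIndex -luceneindexpath index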
  • 2. Searching through docvectors and termvectors. 2.1 Searching for documents using terms: search for the document vectors closest to the vector "abraham", as sketched below.
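A hedged example (the -queryvectorfile and -searchvectorfile flags follow the SemanticVectors documentation):

    java pitt.search.semanticvectors.Search -queryvectorfile termvectors.bin -searchvectorfile docvectors.bin abraham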
  • 2.2 Using a document file as a source of queries: find the terms most closely related to Chapter 1 of Chronicles.
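A sketch; the document identifier depends on how the corpus files were named at indexing time, so the path used here is an assumption:

    java pitt.search.semanticvectors.Search -queryvectorfile docvectors.bin -searchvectorfile termvectors.bin bible_chapter/Chronicles_1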
  • 2.3 Searching for a general word: find the terms most closely related to "abraham".
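Since Search reads termvectors.bin by default (per the SemanticVectors documentation), the command reduces to:

    java pitt.search.semanticvectors.Search abraham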
  • 2.4 Comparing words: compare "abraham" with "isaac"; compare "abraham" with "massimo".
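A hedged example using the CompareTerms class, which per the SemanticVectors documentation prints the similarity of the two term vectors (a related pair such as abraham/isaac should score much higher than an unrelated one):

    java pitt.search.semanticvectors.CompareTerms abraham isaac
    java pitt.search.semanticvectors.CompareTerms abraham massimo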