Natural language processing (NLP) introduction

Introduction to natural language processing (NLP), goals, theory, TF-IDF, bag-of-words, machine learning, libraries, python

  1. 1. Natural language processing (NLP) introduction, by Robert Lujo
  2. 2. About me • software • 18 years professionally • python >= 2.0, django >= 0.96 • freelancer • … (linkedin)
  3. 3. NLP is …
  4. 4. NLP Natural language processing (NLP) a field of computer science … concerned with the interactions between computers and human (natural) languages. ! https://en.wikipedia.org/wiki/Natural_language_processing
  5. 5. NLP “between computers and human (natural) languages” 1. computer -> human language 2. human language -> computer
  6. 6. NLP trend • the Internet is a huge and easily accessible resource of information • BUT - information is mainly unstructured • usually simple scraping (scrapy) is sufficient, but sometimes it is not • NLP solves, or helps with, converting free text (unstructured information) into structured form
  7. 7. NLP goals some examples
  8. 8. NLP goals - group 1 • cleanup, tokenization • stemming • lemmatization • part-of-speech tagging • query expansion • sentence segmentation
  9. 9. NLP goals - group 2 • information extraction • named entity recognition (NER) • sentiment analysis • word sense disambiguation • text similarity
  10. 10. NLP goals - group 3 • machine translation • automatic summarisation • natural language generation • question answering
  11. 11. NLP goals - group 4 • optical character recognition (OCR) • speech processing • speech recognition • text-to-speech
  12. 12. NLP theory
  13. 13. Word, term, feature • word <> term • a document or text chunk is a unit / entity / object! • terms are features of the document! • each term has properties: • normalized form -> term.baseform + term.transformation • position(s) in the document -> term.position(s) • frequency -> term.frequency
  14. 14. Text, document, chunk • what is document? • text segmentation • hard problem • usually we consider whole document as one unit (entity)
  15. 15. Terms, features • converting words -> terms • term frequency is usually the most important feature! • how to get the list of terms with frequencies: • preprocessing - e.g. remove all but words, remove stopwords, tokenization (regexp) • word normalization dog ~ dogs zeleno ~ najzelenijih (Croatian: "green" ~ "greenest") • .lower(), regexp, stemming, lemmatization • much harder for inflectional languages, e.g. Croatian, see text-hr :)
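The preprocessing pipeline described above (lowercasing, regexp tokenization, stopword removal, frequency counting) can be sketched with the standard library alone; the stopword list here is a tiny illustrative stand-in, not a real one:

```python
import re
from collections import Counter

STOPWORDS = {"the", "a", "is", "of", "and"}  # tiny illustrative list

def terms_with_frequencies(text):
    """Lowercase, tokenize with a regexp, drop stopwords, count frequencies."""
    words = re.findall(r"[a-z]+", text.lower())  # keep only word runs
    terms = [w for w in words if w not in STOPWORDS]
    return Counter(terms)

freqs = terms_with_frequencies("The dog chased a dog and the cat")
# Counter({'dog': 2, 'chased': 1, 'cat': 1})
# note: without stemming/lemmatization, 'dog' and 'dogs' would count separately
```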
  16. 16. Term weight - TF-IDF • term frequency – inverse document frequency • variables: • t - term • d - one document • D - all documents (corpus) • TF - term frequency function - a measure of how much information the term carries within one document • IDF - inverse document frequency function - an inverse measure of how common the term is across all documents (corpus): terms that appear everywhere carry little information
  17. 17. Terms position, syntax • sometimes term position is important • neighbours, collocation, phrase extraction, NER • from regexp to parsers • syntax trees • complex, cpu intensive
  18. 18. Terms position, syntax In their public lectures they have even claimed that the only evidence that Khufu built the pyramid is the graffiti found in the five chambers.
  19. 19. Bag of words
  20. 20. Bag of words • simplified and effective way to process documents by: • disregarding grammar (term.baseform?) • disregarding word order (term.position) • keeping only multiplicity (term.frequency)
  21. 21. Bag of words • sparse matrix • numbers can be: • binary - 0/1 • simple term frequency • weight - e.g. TF-IDF
  22. 22. Bag of words • very simple -> very fast • frequently used: • in index servers • in database for simple full-text-search operations • for processing of large datasets
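The bag-of-words representation from the slides above (grammar and word order discarded, only multiplicity kept) can be sketched as a document-term matrix; for clarity this builds dense rows, while real systems store the sparse matrix mentioned earlier:

```python
from collections import Counter

def bag_of_words(docs):
    """Map tokenized documents to count vectors over a shared, sorted vocabulary."""
    vocab = sorted({t for doc in docs for t in doc})
    index = {t: i for i, t in enumerate(vocab)}
    matrix = []
    for doc in docs:
        row = [0] * len(vocab)
        for term, freq in Counter(doc).items():
            row[index[term]] = freq  # word order discarded, only counts kept
        matrix.append(row)
    return vocab, matrix

vocab, m = bag_of_words([["dog", "bites", "dog"], ["dog", "sleeps"]])
# vocab = ['bites', 'dog', 'sleeps']; m = [[1, 2, 0], [0, 1, 1]]
```

The counts could be replaced by 0/1 flags or TF-IDF weights, as the slide notes.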
  23. 23. NLP techniques
  24. 24. Machine learning • NLP is one of the applications of machine learning • after text is converted to entities with features, machine learning techniques can be applied
  25. 25. Machine learning • categorisation of ML algorithm families • supervised - classification (discrete output), regression (numerical output) • unsupervised - clustering • a lot of different method/algorithm families: statistical, probabilistic, … decision trees, neural networks / deep learning, support vector machines, bayesian networks, markov models, genetic algorithms
  26. 26. Machine learning
  27. 27. Usual NLP methods • Naive Bayes • Markov models • SVM • Neural networks / Deep learning
  28. 28. NLP libraries ! mainly python
  29. 29. Basic string manipulation • keep it simple and stupid .lower(), .strip(), .split(), .join(), iterators, … • regexp • not only matching, but also transformation, extraction (groups), backreferences etc. • re flags (e.g. re.MULTILINE), repl can be a function: def repl(m): … re.sub(pattern, repl, string)
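The `re.sub` call with a callable replacement mentioned above looks like this; the pattern and text are made-up examples:

```python
import re

def repl(m):
    """The replacement is computed from the match object."""
    return m.group(0).upper()

# uppercase every word that starts with 'p'
result = re.sub(r"\bp\w+", repl, "python parses plain text")
# -> 'PYTHON PARSES PLAIN text'
```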
  30. 30. NLTK http://www.nltk.org/ the biggest, the most popular, the most comprehensive; free book available
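As one small taste of NLTK, its Porter stemmer covers the stemming step from the earlier slides; this sketch assumes `nltk` is installed (the stemmer itself needs no extra data downloads):

```python
from nltk.stem import PorterStemmer

stemmer = PorterStemmer()
stems = [stemmer.stem(w) for w in ["dogs", "running", "easily"]]
# 'dogs' -> 'dog', 'running' -> 'run'
```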
  31. 31. Scikit-Learn http://scikit-learn.org/stable/index.html machine learning in Python
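Scikit-learn ties together several ideas from this deck: `CountVectorizer` builds a sparse bag-of-words matrix and `MultinomialNB` is the Naive Bayes method listed among the usual NLP algorithms. A minimal sentiment-classification sketch on made-up toy data:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

train_texts = ["great movie, loved it", "wonderful great acting",
               "terrible movie, hated it", "awful terrible acting"]
train_labels = ["pos", "pos", "neg", "neg"]

vectorizer = CountVectorizer()            # bag-of-words features
X = vectorizer.fit_transform(train_texts)  # sparse document-term matrix
clf = MultinomialNB().fit(X, train_labels)

pred = clf.predict(vectorizer.transform(["great wonderful movie"]))
# -> ['pos']
```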
  32. 32. spaCy http://honnibal.github.io/spaCy/ new kid on the block - 2015-01 text processing in Python and Cython “… industrial-strength NLP … … the fastest NLP software …”
  33. 33. Stanford NLP • http://nlp.stanford.edu/software/index.shtml • statistical NLP, deep learning NLP, and rule-based NLP tools for major computational linguistics problems • famous • Java
  34. 34. Misc … • data analysis libraries - numpy, pandas, matplotlib, shapely … • parsers - BLLIP, pyparsing, parserator • MonkeyLearn service … • Java, C/C++ • effective memory representation, permanent storage etc. • lots of free resources - books, reddit, blogs, etc.
  35. 35. All done … Thank you for your patience. Q/A? robert.lujo@gmail.com @trebor74hr
