5. NLP practice
R - research work:
set a goal →
devise an algorithm →
train the algorithm →
test its accuracy
6. NLP practice
D - development work:
implement the algorithm as an API with
sufficient performance and scaling
characteristics
7. Research
1. Set a goal
Business goal:
* Develop a spellchecker that is
the best / good enough / better
than Word's / etc.
* Develop a set of grammar rules
that will catch errors according
to MLA Style
* Develop a thesaurus that will
suggest synonyms relevant to
the context
8. Translate it into a measurable goal
* On a test corpus of 10,000 sentences with
common errors, achieve a smaller number
of FNs (and FPs) than other spellcheckers /
Word's spellchecker / etc.
* On a corpus of example sentences with
each kind of error (and similar sentences
without that kind of error), flag all
erroneous sentences and none of the
correct ones
* On a test corpus of 1000 sentences
suggest synonyms for all meaningful words
that will be considered relevant by human
linguists in 90% of the cases
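The first measurable goal above boils down to counting FNs and FPs over a labeled corpus. A minimal sketch, assuming a hypothetical detector `has_error(sentence)` and a corpus of (sentence, is_erroneous) pairs (all names are mine, not from the slides):

```python
def count_errors(has_error, corpus):
    """Count false negatives and false positives of a detector over
    a corpus of (sentence, is_erroneous) pairs."""
    fn = fp = 0
    for sentence, is_erroneous in corpus:
        flagged = has_error(sentence)
        if is_erroneous and not flagged:
            fn += 1  # a real error was missed
        elif flagged and not is_erroneous:
            fp += 1  # a correct sentence was flagged
    return fn, fp

# Toy detector: flags any sentence containing the typo "teh".
corpus = [("I saw teh cat", True),
          ("I saw the cat", False),
          ("Their going home", True)]
fn, fp = count_errors(lambda s: "teh" in s, corpus)
# fn == 1 (missed "Their going home"), fp == 0
```

Comparing spellcheckers then means comparing these two counts on the same corpus.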
9. A Note on
Terminology
FN and FP instead of
precision (P) and recall (R):
FN = 1-R (the miss rate)
FP = 1-P (strictly, 1-P is the
false-discovery rate, not the
textbook false-positive rate)
F1 = 2*P*R/(P+R) =
2*(1-FN-FP+FN*FP)/(2-(FN+FP))
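Since F1 = 2*P*R/(P+R), substituting P = 1-FP and R = 1-FN gives F1 = 2*(1-FN-FP+FN*FP)/(2-(FN+FP)). A quick numeric sketch (function names are mine) checks the identity:

```python
def f1_from_rates(fn, fp):
    """F1 computed from the miss rate FN = 1-R and the rate FP = 1-P."""
    return 2 * (1 - fn - fp + fn * fp) / (2 - (fn + fp))

def f1_from_pr(p, r):
    """Standard F1 = 2PR / (P + R)."""
    return 2 * p * r / (p + r)

p, r = 0.8, 0.6
# Both routes agree: f1_from_pr(0.8, 0.6) == f1_from_rates(0.4, 0.2)
assert abs(f1_from_rates(1 - r, 1 - p) - f1_from_pr(p, r)) < 1e-12
```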
12. 4. Test its performance
ML: one corpus, divided into
training, development, and test sets
13. 4. Test its performance
Often — different corpora:
* for training some part (not
whole) of the algorithm
* for testing the whole
system
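The one-corpus split mentioned above can be sketched as follows; the 80/10/10 fractions and the function name are common defaults of mine, not something the slides prescribe:

```python
import random

def split_corpus(sentences, dev_frac=0.1, test_frac=0.1, seed=42):
    """Shuffle a corpus and split it into training/development/test parts."""
    rng = random.Random(seed)  # fixed seed for a reproducible split
    shuffled = sentences[:]
    rng.shuffle(shuffled)
    n = len(shuffled)
    n_dev, n_test = int(n * dev_frac), int(n * test_frac)
    test = shuffled[:n_test]
    dev = shuffled[n_test:n_test + n_dev]
    train = shuffled[n_test + n_dev:]
    return train, dev, test

train, dev, test = split_corpus([f"sentence {i}" for i in range(100)])
# lengths: 80 / 10 / 10
```

Tuning on the development part and reporting only on the held-out test part is what keeps the accuracy numbers honest.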
19. Pre/post-processing
What ultimately matters is
not raw performance, but
acceptance by users (much
harder to measure, and it
depends on the domain).
The real world is messier
than any lab set-up.
20. Examples of
pre-processing
For spellcheck:
* some people use words
separated by slashes, e.g.
spell/grammar check
* handling of abbreviations
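A sketch of how such pre-processing might look, with a hypothetical slash-compound expander and an abbreviation whitelist (both invented for illustration):

```python
import re

# Known abbreviations the spellchecker should not flag (a toy whitelist).
ABBREVIATIONS = {"etc.", "e.g.", "i.e.", "Dr.", "Mr."}

def expand_slash_compound(phrase):
    """'spell/grammar check' -> ['spell check', 'grammar check'],
    so each reading can be checked separately."""
    m = re.match(r"(\w+)/(\w+)\s+(\w+)", phrase)
    if not m:
        return [phrase]
    first, second, head = m.groups()
    return [f"{first} {head}", f"{second} {head}"]

def is_abbreviation(token):
    return token in ABBREVIATIONS
```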
21. Where to get data?
Well-known sources:
* Penn Tree Bank
* Wordnet
* Web1T Google N-gram Corpus
* Linguistic Data Consortium
(http://www.ldc.upenn.edu/)
22. More data
Also well-known sources, but
with a twist:
* Wikipedia & Wiktionary,
DBPedia
* OpenWeb Common Crawl
(updated: 2010)
* Public APIs of some
services: Twitter, Wordnik
26. And remember...
“Data is ten times more
powerful than algorithms.”
-- Peter Norvig, “The Unreasonable
Effectiveness of Data.”
http://youtu.be/yvDCzhbjYWs
31. Specific NLP
requirements
* Good support for statistics
& number-crunching (matrices)
– Statistical AI
* Good support for working
with trees & symbols
– Symbolic AI
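Both requirements can be illustrated with toy examples: number-crunching as a matrix-vector product, and symbolic work as a traversal of a parse tree stored as nested tuples. A Python sketch (real systems would reach for dedicated numeric and tree libraries):

```python
# Statistical side: a matrix-vector product, the kind of primitive
# that must be fast and convenient.
def matvec(m, v):
    return [sum(a * b for a, b in zip(row, v)) for row in m]

# Symbolic side: parse trees as nested tuples (label, children...),
# with a traversal collecting the leaves back into a sentence.
def leaves(tree):
    if isinstance(tree, str):
        return [tree]
    _label, *children = tree
    return [leaf for child in children for leaf in leaves(child)]

parse = ("S",
         ("NP", "I"),
         ("VP", ("V", "saw"),
                ("NP", ("Det", "the"), ("N", "cat"))))
# matvec([[1, 2], [3, 4]], [1, 1]) == [3, 7]
# leaves(parse) == ["I", "saw", "the", "cat"]
```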
36. Heterogeneous
systems
You have to split the system
into parts and make them
communicate: the "Java" way
vs. the "Unix" way
* Sockets, Redis, ZeroMQ, etc.
for communication
* JSON, SEXPs, etc. for data
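A minimal sketch of the "Unix" way: two components exchanging newline-delimited JSON messages, with a local socketpair standing in for Redis/ZeroMQ/etc. (the message format here is invented for illustration):

```python
import json
import socket

def send_msg(sock, obj):
    # Serialize one message as a single JSON line.
    sock.sendall((json.dumps(obj) + "\n").encode("utf-8"))

def recv_msg(sock_file):
    # Read one JSON line back into a Python object.
    return json.loads(sock_file.readline())

# A local socketpair stands in for a real network transport.
a, b = socket.socketpair()
send_msg(a, {"op": "tokenize", "text": "spell/grammar check"})
reader = b.makefile("r", encoding="utf-8")
request = recv_msg(reader)
a.close(); reader.close(); b.close()
```

The same framing works unchanged whichever transport sits underneath, which is the point of the "Unix" way.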
43. Lisp FTW
* truly interactive
environment
* very flexible => DSLs
* native tree support
* fast and solid
44. Take-aways
* Take nlp-class
* Data is key, collect it, build tools
to work with it easily and efficiently
* A good language for R&D should be
first of all interactive & malleable,
with as few barriers as possible
* ... it also helps if you don't need to
port your code for production
* Lisp is one of the good examples