When Korean meets NLTK and Gensim (한국어와 NLTK, Gensim의 만남)
PyCon Korea 2015, 2015-08-29
Lucy Park

Slides (HTML): http://lucypark.kr/slides/2015-pyconkr
Subtitle: sentiment analysis of movie reviews, by representing them in a form a computer can understand

Transcript (69 slides):
1. When Korean meets NLTK and Gensim (한국어와 NLTK, Gensim의 만남)
   PyCon Korea 2015 / 2015-08-29 / Lucy Park
2. In this talk: representing Korean documents as bag-of-words and document embeddings, and putting KoNLPy, NLTK, and Gensim to work together.
   KoNLPy: http://konlpy.org
   NLTK: http://nltk.org
   Gensim: http://radimrehurek.com/gensim
3. About me: Lucy Park (a.k.a. lucypark, echojuliett, e9t). CAVEAT: I work on KoNLPy.
   Just another yak shaver...

   11:49 <@sanxiyn> Once again, the mysterious world of yak shaving
   11:51 <@sanxiyn> You all know what yak shaving is, right?
   11:51 <디토군> Just went and looked it up
   11:51 <@mana> (quietly waiting for an explanation)
   11:51 <@sanxiyn> You set out to fell a tree,
   11:52 <@sanxiyn> and while swinging the axe,
   11:52 <@sanxiyn> you figure a sharper axe would fell the tree more easily,
   11:52 <@sanxiyn> so you set about honing the blade,
   11:52 <@sanxiyn> and you figure a better grindstone would hone the blade faster,
   11:52 <@sanxiyn> so you ask around for a good whetstone,
   11:52 <@mana> …
   11:52 <&홍민희> That is typical me
   11:52 <@sanxiyn> and you hear the world's best whetstone comes from somewhere far away,
   11:52 <@sanxiyn> so you set off to ride a yak there,
   11:52 <@mana> I could not type because that is exactly what I always do
   11:52 <@sanxiyn> and you shave the yak's hair…
   11:52 <@sanxiyn> etc.
4. Why KoNLPy, NLTK, and Gensim(?) ... Gensim ... ?
5. 1997 ... 20 ...
6. IBM ...
7. ... -- Bengio et al., Deep Learning, Ch. 1, book in preparation for MIT Press, 2015.
  8. 8. " " Representations of text 8 / 69
9. ...?
10. { , } ≈ ⊂ ...
11. Representing a word: as a (binary) set, or as a one-hot vector (a.k.a. 1-of-V):

    w₁ = [1 0 0 ⋯ 0]⊤,  w₂ = [0 1 0 ⋯ 0]⊤,  w₃ = [0 0 1 ⋯ 0]⊤,  ...,  w_|V| = [0 0 0 ⋯ 1]⊤

    where wᵢ is the vector for the i-th word in a vocabulary of size |V|.
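A minimal one-hot sketch in Python; the toy vocabulary and helper name are illustrative, not from the slides:

    # Build 1-of-V vectors over a toy vocabulary.
    vocab = ['왕', '남자', '여자', '영화']            # |V| = 4
    word_index = {w: i for i, w in enumerate(vocab)}

    def one_hot(word):
        v = [0] * len(vocab)
        v[word_index[word]] = 1
        return v

    print(one_hot('남자'))  # => [0, 1, 0, 0]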
12. Problem: one-hot vectors carry no notion of similarity.
    ex: any two distinct words, however close in meaning, have dot product 0:

    (wᵃ)⊤ wᵇ = (wᶜ)⊤ wᵈ = 0

    * BOW-style (discrete) codes are local representations; contrast with the distributed representations below.
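To see the orthogonality concretely (a toy check, not from the slides):

    # The dot product of any two distinct one-hot vectors is 0,
    # no matter how similar the underlying words are.
    w_a = [1, 0, 0, 0]
    w_b = [0, 1, 0, 0]
    print(sum(x * y for x, y in zip(w_a, w_b)))  # => 0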
13. Representing a document: bag of words (BOW)
    - binary / term existence
    - occurrence count n (a.k.a. term frequency, TF)
    - ex: term frequency–inverse document frequency (TF-IDF)

    d₁(binary) = [1 1 0 ⋯ 0]⊤,  d₁(tf) = [3 1 0 ⋯ 0]⊤
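A small sketch of the binary and TF variants over a fixed vocabulary; the toy documents and names are illustrative, not from the slides:

    from collections import Counter

    # Term-existence and term-frequency vectors over a toy vocabulary.
    vocab = ['영화', '최고', '지루하다', '배우']

    def bow(tokens):
        counts = Counter(tokens)
        tf = [counts[w] for w in vocab]    # term frequency
        binary = [int(c > 0) for c in tf]  # term existence
        return binary, tf

    print(bow(['영화', '영화', '영화', '최고']))
    # => ([1, 1, 0, 0], [3, 1, 0, 0])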
14. But BOW vectors are long and sparse, and high dimensionality hurts: the "curse of dimensionality".*
    * Figure: Vincent Spruyt, The Curse of Dimensionality in classification, 2014.
  15. 15. " ?" (WordNet) X is-a Y (hypernym) (DAG; directed acyclic graph) . ( : .) from nltk.corpus import wordnet as wn wn.synsets('computer') # "computer" #=> [Synset('computer.n.01'), Synset('calculator.n.01')] : ( ) / 15 / 69
16. Problems with this approach:
    1. New words are missing.

       from nltk.corpus import wordnet as wn
       wn.synsets('nginx')
       #=> []

    2. So is slang.

       wn.synsets('yolo')   # yolo: you only live once
       #=> []

    3. And NLTK's WordNet does not support Korean... (yet!)

       wn.synsets(' ', lang='kor')
       #=> WordNetError: Language is not supported.
17. ...?
18. ...
19. You shall know a word by the company it keeps. -- J.R. Firth (1957)
20. Quiz 1: what goes in the blanks?
    __ ...
    , __ ...
    __ ...
    Answer: [ ]
21. Quiz 2: ...?
22. ... SO ...
23. ;;
24. A word can be characterized by its contexts.
    context := the words within a window around the target word, typically 1-8 words on each side (3-17 words in total)*
    ex: does "PyCon" appear in contexts more like those of "R" or of "Python"?
    * Jurafsky & Martin, Speech and Language Processing, 3rd ed. §19.1, book in preparation, 2015.
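A sketch of collecting context windows; the window size and toy sentence are illustrative, not from the slides:

    # Collect the window of words around each occurrence of `target`.
    def contexts(tokens, target, window=2):
        return [tokens[max(0, i - window):i] + tokens[i + 1:i + 1 + window]
                for i, t in enumerate(tokens) if t == target]

    print(contexts(['나는', 'PyCon', '에서', 'Python', '을', '배웠다'], 'Python'))
    # => [['PyCon', '에서', '을', '배웠다']]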
  25. 25. "Co-occurrence" 25 / 69
26. Co-occurrence

    documents = [[" ", " ", " ", " "],
                 [" ", "R", " ", " "]]

    1. Term-document matrix (Xtd ∈ ℝ^(|V|×M), for M documents): computation-friendly(!)

       x_td = [[1, 1],
               [1, 0],
               [0, 1],   # R
               [1, 1],
               [1, 1]]

    2. Term-term matrix (Xtt ∈ ℝ^(|V|×|V|)): captures syntactic/semantic association (window == 5):

       x_tt = [[0, 1, 1, 2, 0],
               [1, 0, 0, 1, 1],
               [1, 0, 0, 1, 1],   # R
               [2, 1, 1, 0, 2],
               [0, 1, 1, 2, 0]]
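A sketch of building such a term-term matrix, using the whole document as the window; the toy documents are illustrative, not the (stripped) ones from the slide:

    from itertools import combinations

    documents = [['파이콘', '에서', '파이썬', '을', '배우다'],
                 ['강남', '에서', 'R', '을', '배우다']]

    vocab = sorted({w for doc in documents for w in doc})
    index = {w: i for i, w in enumerate(vocab)}

    # Count every co-occurring pair within the window (here, the document).
    cooc = [[0] * len(vocab) for _ in vocab]
    for doc in documents:
        for a, b in combinations(doc, 2):
            cooc[index[a]][index[b]] += 1
            cooc[index[b]][index[a]] += 1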
27. Rows of a co-occurrence matrix can be compared with cosine similarity.

    import math

    def dot_product(v1, v2):
        return sum(map(lambda x: x[0] * x[1], zip(v1, v2)))

    def cosine_measure(v1, v2):
        prod = dot_product(v1, v2)
        len1 = math.sqrt(dot_product(v1, v1))
        len2 = math.sqrt(dot_product(v2, v2))
        return prod / (len1 * len2)

    Problems: raw counts are skewed (a few very frequent words dominate) and not very discriminative (ex: list(' ')). ...?
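A usage sketch, reusing cosine_measure from the slide above on two rows of the x_tt matrix from slide 26:

    x_tt = [[0, 1, 1, 2, 0],
            [1, 0, 0, 1, 1],
            [1, 0, 0, 1, 1],
            [2, 1, 1, 0, 2],
            [0, 1, 1, 2, 0]]

    print(cosine_measure(x_tt[1], x_tt[2]))  # identical contexts => 1.0
    print(cosine_measure(x_tt[0], x_tt[1]))  # partially shared contexts => ~0.47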
28. Neural embeddings
    Idea: let a neural network learn the representation!
    Train it as a language model: ex: given [" ", " ", " ", " "], which token comes next? (ex: "!", "." or "?")
    Neural network language model (NNLM)*
    * Bengio, Ducharme, Vincent, A Neural Probabilistic Language Model, {NIPS, 2000; JMLR, 2003}.
29. word2vec*
    T. Mikolov's word2vec made neural embeddings popular!
    (maps each word from an n-dimensional one-hot code to an m-dimensional dense vector)
    * Mikolov, Sutskever, Chen, Corrado, Dean, Distributed Representations of Words and Phrases and their Compositionality, NIPS, 2013.
30. Application: word2vec*
    * ... (1946-2015) ..., 49(2), 2015.
31. Can documents get a neural embedding, too?
32. doc2vec*
    Idea: embed documents (sentences, paragraphs), not just words!
    Unlike |V|-dimensional BOW vectors, the learned document vectors are short and dense.
    * Le & Mikolov, Distributed Representations of Sentences and Documents, ICML, 2014. (doc2vec was proposed under the name "paragraph vector")
33. Before: term frequency -> one |V|-dimensional vector per document. (A possible implementation of the placeholder follows this slide.)

    documents = [
        [' ', ' ', ' ', ' ', ' '],
        [' ', ' ', ' ', ' ', ' '],
        [' ', ' ', ' ', ' ', ' '],
        [' ', ' ', ' ', ' ', ' '],
        [' ', ' ', ' ', ' ', ' '],
        [' ', ' ', ' ', ' ', ' ']
    ]
    words = set(sum(documents, []))
    print(words)
    # => {' ', ' ', ' ', ' ', ' ', ' ', ' ', ' '}

    def term_frequency(document):
        # do something
        return vector

    vectors = [term_frequency(doc) for doc in documents]
    print(vectors)
    # => {
    #   'S1': [1, 1, 0, 1, 0, 0, 1, 1],
    #   'S2': [1, 1, 0, 1, 0, 0, 1, 1],
    #   'S3': [1, 0, 0, 1, 1, 1, 1, 0],
    #   'S4': [1, 1, 0, 1, 1, 1, 0, 0],
    #   'S5': [1, 0, 1, 1, 1, 0, 1, 0],
    #   'S6': [1, 1, 1, 1, 1, 0, 0, 0]
    # }
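The slide leaves term_frequency as an exercise; a minimal sketch, where the sorted vocabulary is an assumption to fix a stable column order:

    from collections import Counter

    def term_frequency(document, vocabulary):
        counts = Counter(document)
        return [counts[w] for w in vocabulary]

    docs = [['영화', '재밌다'], ['영화', '지루하다']]   # toy documents
    vocabulary = sorted({w for d in docs for w in d})
    print([term_frequency(d, vocabulary) for d in docs])
    # => [[1, 1, 0], [1, 0, 1]]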
34. After: doc2vec -> one short dense vector per document.

    documents = [
        [' ', ' ', ' ', ' ', ' '],
        [' ', ' ', ' ', ' ', ' '],
        [' ', ' ', ' ', ' ', ' '],
        [' ', ' ', ' ', ' ', ' '],
        [' ', ' ', ' ', ' ', ' '],
        [' ', ' ', ' ', ' ', ' ']
    ]
    words = set(sum(documents, []))
    print(words)
    # => {' ', ' ', ' ', ' ', ' ', ' ', ' ', ' '}

    def doc2vec(document):
        # do something else (use Gensim!)
        return vector

    vectors = [doc2vec(doc) for doc in documents]
    print(vectors)
    # => {
    #   'S1': [-0.01599941, 0.02135301, 0.0927715],
    #   'S2': [0.02333261, 0.00211833, 0.00254255],
    #   'S3': [-0.00117272, -0.02043504, 0.00593186],
    #   'S4': [0.0089237, -0.00397806, -0.13199195],
    #   'S5': [0.02555059, 0.01561624, 0.03603476],
    #   'S6': [-0.02114407, -0.01552016, 0.01289922]
    # }
35. Summary so far:

    Sparse, long vectors                Dense, short vectors
    ----------------------------------  --------------------
    1-hot-vectors                       word2vec
    bag-of-words                        doc2vec
    (term existence, TF, TF-IDF ...)
36. Sentiment analysis for Korean movie reviews
37. ...?
38. ...!
39. Naver sentiment movie corpus v1.0
    Built in the style of Maas et al., 2011:
    - short reviews of up to 140 characters (Naver Movies' "100자평")
    - 200K reviews sampled (from 640K+)
    - ratings_train.txt: 150K reviews, ratings_test.txt: 50K reviews
    - positive/negative labels are balanced (i.e., random guess yields 50% accuracy)
    - size: 19MB
    - URL: http://github.com/e9t/nsmc/
40. README coming soon
41. $ cat ratings_train.txt | head -n 25
    id       document         label
    9976970  <review text>    0
    3819312  <review text>    1
    ...
    Each row: review id, review text, and label (0 = negative, 1 = positive).
42. So, what shall we do with it?
    1. Preprocess the data with KoNLPy
    2. Explore the data with NLTK
    3. Classify with bag-of-words (term existence) features
    4. Classify with Gensim's doc2vec
43. 1. Data preprocessing (feat. KoNLPy)

    def read_data(filename):
        with open(filename, 'r') as f:
            data = [line.split('\t') for line in f.read().splitlines()]
            data = data[1:]   # remove the header row
        return data

    train_data = read_data('ratings_train.txt')
    test_data = read_data('ratings_test.txt')

    NOTE: if you use pandas.read_csv() instead, set the quoting parameter to csv.QUOTE_NONE (3), or rows containing stray quote characters break (ex: ...). A sketch follows this slide.
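A minimal sketch of the pandas route mentioned in the NOTE, assuming the tab-separated layout shown on slide 41:

    import csv
    import pandas as pd

    # quoting=csv.QUOTE_NONE keeps stray quote characters in reviews
    # from breaking the parser.
    train_df = pd.read_csv('ratings_train.txt', sep='\t', quoting=csv.QUOTE_NONE)
    print(train_df.columns.tolist())  # => ['id', 'document', 'label']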
44. 1. Data preprocessing (feat. KoNLPy)

    # check the numbers of rows and columns
    print(len(train_data))      # nrows: 150000
    print(len(train_data[0]))   # ncols: 3
    print(len(test_data))       # nrows: 50000
    print(len(test_data[0]))    # ncols: 3
45. 1. Data preprocessing (feat. KoNLPy)

    from konlpy.tag import Twitter
    pos_tagger = Twitter()

    def tokenize(doc):
        # norm and stem are optional
        return ['/'.join(t) for t in pos_tagger.pos(doc, norm=True, stem=True)]

    train_docs = [(tokenize(row[1]), row[2]) for row in train_data]
    test_docs = [(tokenize(row[1]), row[2]) for row in test_data]

    # check the result
    from pprint import pprint
    pprint(train_docs[0])
    # => [([' /Exclamation',
    #      ' /Noun',
    #      '../Punctuation',
    #      ' /Noun',
    #      ' /Noun',
    #      ' /Verb',
    #      ' /Noun'],
    #     '0')]

    NOTE: ...
46. 1. Data preprocessing (feat. KoNLPy)
    "Why tokenize?" ...*
    "Why attach part-of-speech (POS) tags to tokens?"
    ex: to distinguish ' /Josa' from ' /Noun' (the same surface form, different roles)
    * https://github.com/karpathy/char-rnn
47. 2. Data exploration (feat. NLTK)
    Count the tokens in the training data:

    tokens = [t for d in train_docs for t in d[0]]
    print(len(tokens))
    # => 2194536
48. 2. Data exploration (feat. NLTK)
    nltk.Text() is handy! (it wraps a token list with exploration utilities)
    It works on any tokenized document, regardless of language.

    import nltk
    text = nltk.Text(tokens, name='NMSC')
    print(text)
    # => <Text: NMSC>
49. 2. Data exploration (feat. NLTK)
    Count tokens:

    print(len(text.tokens))         # returns number of tokens
    # => 2194536
    print(len(set(text.tokens)))    # returns number of unique tokens
    # => 48765
    pprint(text.vocab().most_common(10))   # returns frequency distribution
    # => [('./Punctuation', 68630),
    #     (' /Noun', 51365),
    #     (' /Verb', 50281),
    #     (' /Josa', 39123),
    #     (' /Verb', 34764),
    #     (' /Josa', 30480),
    #     ('../Punctuation', 29055),
    #     (' /Josa', 27108),
    #     (' /Josa', 26696),
    #     (' /Josa', 23481)]
50. 2. Data exploration (feat. NLTK)
    Plot tokens by frequency:

    text.plot(50)   # Plot sorted frequency of top 50 tokens

    ;;
51. 2. Data exploration (feat. NLTK)
    Hangul in plots!
    Troubleshooting: for those who see rectangles instead of letters in the saved plot file, include the following configuration before drawing the plot:

    from matplotlib import font_manager, rc
    font_fname = 'c:/windows/fonts/gulim.ttc'   # A font of your choice
    font_name = font_manager.FontProperties(fname=font_fname).get_name()
    rc('font', family=font_name)

    Some example fonts:
    Mac OS: /Library/Fonts/AppleGothic.ttf
52. 2. Data exploration (feat. NLTK)
    ...
53. 2. Data exploration (feat. NLTK)
    Collocations: words that frequently appear together (ex: "text" + "mining"):

    text.collocations()
    # => /Determiner /Noun; /Suffix /Josa; /Determiner /Noun; /Noun
    #    /Verb; /Noun /Josa; 10/Number /Noun; /Noun /Suffix; /Noun
    #    /Adjective; ...

    NOTE: when nltk.Text() doesn't have the function you need, check the source code and the API docs.
54. 3. Sentiment classification with term existence*
    Turn each document into term-existence features.

    # use the top 2000 tokens only
    # WARNING: this is not time/memory efficient
    selected_words = [f[0] for f in text.vocab().most_common(2000)]

    def term_exists(doc):
        return {'exists({})'.format(word): (word in set(doc)) for word in selected_words}

    # use a subset of the training corpus here
    train_docs = train_docs[:10000]
    train_xy = [(term_exists(d), c) for d, c in train_docs]
    test_xy = [(term_exists(d), c) for d, c in test_docs]

    Try this: what happens with only the top 50 tokens? With TF or TF-IDF weights? With a sparse matrix, or scikit-learn's TfidfVectorizer()! (see the sketch after this slide)

    * Natural Language Processing with Python, Chapter 6: Learning to classify text
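A minimal sketch of the TfidfVectorizer() suggestion; the identity analyzer is an assumption, so the vectorizer consumes the already-tokenized documents instead of re-tokenizing raw strings:

    from sklearn.feature_extraction.text import TfidfVectorizer

    # Each document is already a list of word/POS tokens, so pass an
    # identity analyzer; the result is a sparse TF-IDF matrix.
    vectorizer = TfidfVectorizer(analyzer=lambda doc: doc, max_features=2000)
    train_tfidf = vectorizer.fit_transform([d for d, _ in train_docs])
    test_tfidf = vectorizer.transform([d for d, _ in test_docs])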
55. 3. Sentiment classification with term existence
    Now train a classifier!
    NLTK comes with several trainable classifiers:
    - Naive Bayes Classifiers
    - Decision Tree Classifiers
    - Maximum Entropy Classifiers
    (scikit-learn classifiers can be plugged in, too.)
    Here, NLTK's Naive Bayes Classifier!
56. 3. Sentiment classification with term existence
    Naive Bayes classifier:

    # train and evaluate
    classifier = nltk.NaiveBayesClassifier.train(train_xy)
    print(nltk.classify.accuracy(classifier, test_xy))
    # => 0.80418

    classifier.show_most_informative_features(10)
    # => Most Informative Features
    #    exists( /Noun) = True     1 : 0 = 38.0 : 1.0
    #    exists( /Noun) = True     0 : 1 = 30.1 : 1.0
    #    exists(♥/Foreign) = True  1 : 0 = 24.5 : 1.0
    #    exists( /Noun) = True     0 : 1 = 22.1 : 1.0
    #    exists( /Noun) = True     0 : 1 = 19.5 : 1.0
    #    exists( /Noun) = True     0 : 1 = 19.4 : 1.0
    #    exists( /Noun) = True     1 : 0 = 18.9 : 1.0
    #    exists( /Noun) = True     0 : 1 = 16.9 : 1.0
    #    exists( /Noun) = True     1 : 0 = 16.9 : 1.0
    #    exists( /Noun) = True     1 : 0 = 15.9 : 1.0

    Since positive/negative labels are balanced, the baseline accuracy is 0.50; we get 0.80. Not bad, right?
57. 4. Sentiment classification with doc2vec (feat. Gensim)
    doc2vec learns word and document vectors.
    Gensim implements it.*
    Author: Radim Řehůřek

    * These snippets were run against Gensim 0.12.0; the API may differ in other versions!
58. 4. Sentiment classification with doc2vec (feat. Gensim)
    ...!!!
    Gensim ...
59. 4. Sentiment classification with doc2vec (feat. Gensim)
    Feed documents to doc2vec in its tagged-document form.

    from gensim.models.doc2vec import TaggedDocument   # this is fine, or define it yourself:

    from collections import namedtuple
    TaggedDocument = namedtuple('TaggedDocument', 'words tags')

    # tag the 150K training documents
    tagged_train_docs = [TaggedDocument(d, [c]) for d, c in train_docs]
    tagged_test_docs = [TaggedDocument(d, [c]) for d, c in test_docs]
60. 4. Sentiment classification with doc2vec (feat. Gensim)

    from gensim.models import doc2vec

    # build the model
    doc_vectorizer = doc2vec.Doc2Vec(size=300, alpha=0.025, min_alpha=0.025, seed=1234)
    doc_vectorizer.build_vocab(tagged_train_docs)

    # Train document vectors!
    for epoch in range(10):
        doc_vectorizer.train(tagged_train_docs)
        doc_vectorizer.alpha -= 0.002                    # decrease the learning rate
        doc_vectorizer.min_alpha = doc_vectorizer.alpha  # fix the learning rate, no decay

    # To save
    # doc_vectorizer.save('doc2vec.model')

    Try this: change the vector size and the number of epochs.
61. 4. Sentiment classification with doc2vec (feat. Gensim)
    Query similar words! (1/3)

    pprint(doc_vectorizer.most_similar(' /Noun'))
    # => [(' /Noun', 0.5669919848442078),
    #     (' /Noun', 0.5522832274436951),
    #     (' /Noun', 0.5021427869796753),
    #     (' /Noun', 0.5000861287117004),
    #     (' /Noun', 0.4368450343608856),
    #     (' /Noun', 0.42848479747772217),
    #     (' /Noun', 0.42714330554008484),
    #     (' /Noun', 0.41590073704719543),
    #     (' /Noun', 0.41056352853775024),
    #     (' /Noun', 0.4052993059158325)]

    ...
62. 4. Sentiment classification with doc2vec (feat. Gensim)
    Query similar words! (2/3)

    pprint(doc_vectorizer.most_similar(' /KoreanParticle'))
    # => [(' /KoreanParticle', 0.5768033862113953),
    #     (' /KoreanParticle', 0.4822016954421997),
    #     ('!!!!/Punctuation', 0.4395076632499695),
    #     ('!!!/Punctuation', 0.4077949523925781),
    #     ('!!/Punctuation', 0.4026390314102173),
    #     ('~/Punctuation', 0.40038347244262695),
    #     ('~~/Punctuation', 0.39946430921554565),
    #     ('!/Punctuation', 0.3899948000907898),
    #     ('^^/Punctuation', 0.3852730989456177),
    #     ('~^^/Punctuation', 0.3797937035560608)]

    ...!
63. 4. Sentiment classification with doc2vec (feat. Gensim)
    And the famous analogy: v(KING) - v(MAN) + v(WOMAN) = v(QUEEN)
    Does it hold here, too?
64. 4. Sentiment classification with doc2vec (feat. Gensim)
    Query similar words! (3/3)

    pprint(doc_vectorizer.most_similar(positive=[' /Noun', ' /Noun'], negative=[' /Noun']))
    # => [(' /Noun', 0.32974398136138916),
    #     (' /Noun', 0.305545836687088),
    #     (' /Noun', 0.2899821400642395),
    #     (' /Noun', 0.2856029272079468),
    #     (' /Noun', 0.2840743064880371),
    #     (' /Noun', 0.28247544169425964),
    #     (' /Noun', 0.2795347571372986),
    #     (' /Noun', 0.2794691324234009),
    #     (' /Noun', 0.27567577362060547),
    #     (' /Noun', 0.2734225392341614)]

    ...? Hmm?
65. 4. Sentiment classification with doc2vec (feat. Gensim)
    Not so convincing ;;
    Let's look at the word's contexts. Could multiple word senses be the culprit?*

    text.concordance(' /Noun', lines=10)
    # => Displaying 10 of 145 matches: ...

    * Huang, Socher, Improving Word Representations via Global Context and Multiple Word Prototypes, ACL, 2012.
66. 4. Sentiment classification with doc2vec (feat. Gensim)
    Infer a vector for every document.

    train_x = [doc_vectorizer.infer_vector(doc.words) for doc in tagged_train_docs]
    train_y = [doc.tags[0] for doc in tagged_train_docs]
    len(train_x)       # same number of documents as the term-existence case
    # => 150000
    len(train_x[0])
    # => 300

    test_x = [doc_vectorizer.infer_vector(doc.words) for doc in tagged_test_docs]
    test_y = [doc.tags[0] for doc in tagged_test_docs]
    len(test_x)
    # => 50000
    len(test_x[0])
    # => 300
67. 4. Sentiment classification with doc2vec (feat. Gensim)
    Classify with scikit-learn's LogisticRegression().

    from sklearn.linear_model import LogisticRegression
    classifier = LogisticRegression(random_state=1234)
    classifier.fit(train_x, train_y)
    classifier.score(test_x, test_y)
    # => 0.78246000000000004

    Accuracy is 0.78, about 2% below the term-existence classifier, but remember: term existence used 2000-dimensional features, while doc2vec gets there with just 300 dimensions!
68. Wrapping up.
    We tokenized Korean text with KoNLPy, explored it with NLTK, and vectorized it with Gensim.
    We compared term-existence features with document vectors.
    Try applying them to your own task!
    ...*
    ...!
    * ..., SNUDM Technical Reports, Apr 14, 2015.
69. Thanks! :D
    http://lucypark.kr
    @echojuliett