
NLP using Deep Learning

We explore how to implement natural language processing using LSTMs, and illustrate the underlying principles behind creating Word2Vec.


  1. NLP using Deep Learning, by Babu Priyavrat
  2. Contents • Sentiment Analysis • Word2Vec • FFN, RNN & LSTM • Translation • Live video commentary using CNN-LSTM
  3. Sentiment analysis: the sentence is tokenized into words, the sentiment of each word is scored, and the individual scores are combined to conclude the overall sentiment of the sentence (a toy sketch follows).
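     A toy illustration of that word-by-word idea using a hand-made lexicon (purely illustrative; the rest of the deck uses learned models rather than a lexicon):

     # Toy word-level sentiment scoring: illustrative only, not the LSTM model used later in the deck.
     lexicon = {"good": 1, "great": 2, "bad": -1, "terrible": -2}   # hypothetical word scores

     def sentence_sentiment(sentence):
         tokens = sentence.lower().split()              # tokenize the sentence into words
         scores = [lexicon.get(t, 0) for t in tokens]   # score each word, 0 if unknown
         total = sum(scores)
         return "positive" if total > 0 else "negative" if total < 0 else "neutral"

     print(sentence_sentiment("The movie was great but the ending was bad"))   # -> positive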
  4. Word2Vec: words with similar (e.g. positive) sentiment end up near each other in the vector space. Word2vec is a two-layer neural net that processes text: its input is a text corpus and its output is a set of feature vectors, one for each word in that corpus. While Word2vec is not itself a deep neural network, it turns text into a numerical form that deep nets can understand.
  5. Constructing Word2Vec (https://github.com/tensorflow/tensorflow/blob/master/tensorflow/examples/tutorials/word2vec/word2vec_basic.py)
     1. Identify the dataset used to create the Word2Vec relations: http://mattmahoney.net/dc/
     2. Read the data into a list of strings.
     3. Build the dictionary and replace rare words with a unique UNK token (see the sketch after this list):
        • Dictionary: map of words (strings) to their codes (integers)
        • Count: map of words (strings) to their number of occurrences
        • Reverse dictionary: map of codes (integers) to words (strings)
     4. Generate a training batch for the skip-gram model.
     5. Build and train a skip-gram model:
        • Construct the SGD optimizer using a learning rate of 1.0.
        • Compute the cosine similarity between minibatch examples and all embeddings.
     6. Begin training.
     7. Visualize the embeddings.
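     A minimal sketch of step 3 (building the dictionary, count, and reverse dictionary), closely following the structure of the referenced word2vec_basic.py; the function and variable names here are illustrative:

     import collections

     def build_dataset(words, vocabulary_size):
         # Keep the (vocabulary_size - 1) most common words; everything else becomes UNK.
         count = [['UNK', -1]]
         count.extend(collections.Counter(words).most_common(vocabulary_size - 1))
         dictionary = {word: index for index, (word, _) in enumerate(count)}
         data, unk_count = [], 0
         for word in words:
             index = dictionary.get(word, 0)   # code 0 is reserved for UNK
             unk_count += (index == 0)
             data.append(index)
         count[0][1] = unk_count
         reverse_dictionary = dict(zip(dictionary.values(), dictionary.keys()))
         return data, count, dictionary, reverse_dictionary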
  6. Skip-gram model: the skip-gram model is a neural-network implementation that estimates the probability of two words occurring adjacent to (or near) each other; given a centre word, it predicts its surrounding context words.
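     A hedged sketch of the skip-gram training graph in TensorFlow 1.x style, similar in structure to the referenced word2vec_basic.py (sizes are illustrative; that script uses an NCE loss as an approximation to the full softmax):

     import tensorflow as tf   # TensorFlow 1.x API, as in the referenced tutorial

     vocabulary_size, embedding_size, num_sampled = 50000, 128, 64   # illustrative sizes

     train_inputs = tf.placeholder(tf.int32, shape=[None])        # centre-word ids
     train_labels = tf.placeholder(tf.int32, shape=[None, 1])     # context-word ids

     embeddings = tf.Variable(tf.random_uniform([vocabulary_size, embedding_size], -1.0, 1.0))
     embed = tf.nn.embedding_lookup(embeddings, train_inputs)

     nce_weights = tf.Variable(tf.truncated_normal([vocabulary_size, embedding_size], stddev=0.1))
     nce_biases = tf.Variable(tf.zeros([vocabulary_size]))

     # Noise-contrastive estimation approximates the softmax over the full vocabulary.
     loss = tf.reduce_mean(tf.nn.nce_loss(weights=nce_weights, biases=nce_biases,
                                          labels=train_labels, inputs=embed,
                                          num_sampled=num_sampled, num_classes=vocabulary_size))
     optimizer = tf.train.GradientDescentOptimizer(1.0).minimize(loss)   # SGD, learning rate 1.0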
  7. Word2Vec: Visualizing Embeddings
  8. Why is FFN not fit for NLP? [Diagram: a two-layer feed-forward network with inputs x1, x2, weight matrices W(1), W(2), and activations z(2), z(3)] There is no communication between neurons in the same layer, so the network cannot relate the context of one word to another.
  9. Is RNN better? [Diagram: the same network with recurrent connections added] An RNN seems to solve the problem, but it lacks the memory to retain context from earlier sentences.
  10. LSTM cell: an LSTM cell has a built-in memory (the cell state) which is passed on to the next LSTM cell and to higher LSTM layers. Note that this cell state is passed along without any squashing filter such as softmax being applied to it.
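     For reference, the standard LSTM gate equations (a textbook formulation, not taken from the slides); they show that the cell state c_t is carried forward through element-wise gating only, with no softmax-style filter applied:

     f_t = \sigma(W_f [h_{t-1}, x_t] + b_f)            (forget gate)
     i_t = \sigma(W_i [h_{t-1}, x_t] + b_i)            (input gate)
     \tilde{c}_t = \tanh(W_c [h_{t-1}, x_t] + b_c)     (candidate memory)
     c_t = f_t \odot c_{t-1} + i_t \odot \tilde{c}_t   (cell state: carried forward unfiltered)
     o_t = \sigma(W_o [h_{t-1}, x_t] + b_o)            (output gate)
     h_t = o_t \odot \tanh(c_t)                        (hidden state passed to the next layer/timestep)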
  11. Loading Word2Vec (https://www.oreilly.com/learning/perform-sentiment-analysis-with-lstms-using-tensorflow): load an already trained Word2Vec model instead of building one from scratch.
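     A minimal sketch of loading pre-trained vectors as NumPy arrays; the file names wordsList.npy and wordVectors.npy are assumptions based on the referenced O'Reilly tutorial:

     import numpy as np

     words_list = np.load('wordsList.npy').tolist()            # vocabulary, one entry per word (assumed file name)
     words_list = [w.decode('UTF-8') for w in words_list]      # decode if the entries were saved as bytes
     word_vectors = np.load('wordVectors.npy')                 # shape: (vocab_size, embedding_dim) (assumed file name)

     index = words_list.index('good')                          # look a word up by its vocabulary index
     print(word_vectors[index][:5])                            # first few dimensions of its embedding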
  12. Loading Training Data
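     One common way to prepare the training data for the LSTM, sketched under the assumption that each review is plain text and the vocabulary list from the previous step is available (the fixed-length truncation follows the referenced tutorial; the helper name and length are illustrative):

     import numpy as np

     max_seq_length = 250   # pad/truncate every review to this many tokens (illustrative)

     def review_to_ids(review, words_list):
         # Map each token to its vocabulary index; unknown words fall back to a reserved id.
         ids = np.zeros(max_seq_length, dtype='int32')
         for i, token in enumerate(review.lower().split()[:max_seq_length]):
             ids[i] = words_list.index(token) if token in words_list else len(words_list) - 1
         return ids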
  13. Implementing LSTM
  14. Implementing LSTM (https://www.oreilly.com/learning/perform-sentiment-analysis-with-lstms-using-tensorflow)
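     A hedged sketch of the LSTM sentiment classifier in TensorFlow 1.x style, in the spirit of the referenced tutorial; hyperparameters, variable names, and the dropout value are illustrative, not the author's exact settings:

     import numpy as np
     import tensorflow as tf   # TensorFlow 1.x API

     batch_size, lstm_units, num_classes, max_seq_length = 24, 64, 2, 250   # illustrative sizes
     word_vectors = np.load('wordVectors.npy').astype('float32')            # pre-trained embeddings (assumed file name)

     input_data = tf.placeholder(tf.int32, [batch_size, max_seq_length])    # word ids per review
     labels = tf.placeholder(tf.float32, [batch_size, num_classes])         # one-hot sentiment labels

     data = tf.nn.embedding_lookup(word_vectors, input_data)                # (batch, time, embedding_dim)

     lstm_cell = tf.contrib.rnn.BasicLSTMCell(lstm_units)
     lstm_cell = tf.contrib.rnn.DropoutWrapper(lstm_cell, output_keep_prob=0.75)
     outputs, _ = tf.nn.dynamic_rnn(lstm_cell, data, dtype=tf.float32)

     # Classify from the output at the last timestep.
     weight = tf.Variable(tf.truncated_normal([lstm_units, num_classes], stddev=0.1))
     bias = tf.Variable(tf.constant(0.1, shape=[num_classes]))
     prediction = tf.matmul(outputs[:, -1, :], weight) + bias

     loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=prediction, labels=labels))
     optimizer = tf.train.AdamOptimizer().minimize(loss)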
  15. LSTM accuracy
  16. Stacked LSTM

     def lstm_cell():
         return tf.contrib.rnn.BasicLSTMCell(lstm_size)

     stacked_lstm = tf.contrib.rnn.MultiRNNCell(
         [lstm_cell() for _ in range(number_of_layers)])

     initial_state = state = stacked_lstm.zero_state(batch_size, tf.float32)
     for i in range(num_steps):
         # The value of state is updated after processing each batch of words.
         # Feed the current word batch through the stacked cell (standard MultiRNNCell usage).
         output, state = stacked_lstm(words[:, i], state)
         # The rest of the code.
         # ...

     Stacked LSTMs improve performance, but only up to about 8 layers; beyond that the gains fall away.
  17. Questions & Answers
  18. Next • Is Word2Vec good enough? • Can we use LSTM for translation? • Can we combine LSTM with other networks like CNN?
  19. References
     • https://github.com/NLeSC/mcfly/wiki/User-manual
     • https://www.oreilly.com/learning/perform-sentiment-analysis-with-lstms-using-tensorflow
     • http://www.asimovinstitute.org/neural-network-zoo-prequel-cells-layers/
