These are my slides for introducing the sequence-to-sequence model and Recurrent Neural Networks (RNNs) to my laboratory colleagues.
Hyemin Ahn, @CPSLAB, Seoul National University (SNU)
2. Recurrent Neural Networks : For what?
Humans remember and use the patterns of sequences.
• Try ‘a b c d e f g…’
• But how about ‘z y x w v u t s…’?
The idea behind RNNs is to make use of sequential information.
Let’s learn the pattern of a sequence, and then use it (to estimate, generate, etc.)!
But HOW?
3. Recurrent Neural Networks : Typical RNNs
(Figure: a recurrent cell. The INPUT feeds into the HIDDEN STATE, which loops back into itself with a ONE-STEP DELAY and produces the OUTPUT.)
RNNs are called “RECURRENT” because they perform the same task for every element of a sequence, with the output depending on the previous computations.
RNNs have a “memory” which captures information about what has been computed so far.
The hidden state h_t captures some information about the sequence so far.
If we use f = tanh, the vanishing/exploding gradient problem happens.
To overcome this, we use LSTM/GRU.
(Figure: the RNN cell with input x_t, hidden state h_t, and output y_t; U, W, and V are the input-to-hidden, hidden-to-hidden, and hidden-to-output weight matrices.)
h_t = f(U x_t + W h_{t-1} + b)
y_t = V h_t + c
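To make this concrete, here is a minimal NumPy sketch of one step of such a vanilla RNN; the dimensions and random initialization are illustrative assumptions, not part of the slides.

```python
import numpy as np

def rnn_step(x_t, h_prev, U, W, V, b, c):
    """One vanilla RNN step: h_t = tanh(U x_t + W h_{t-1} + b), y_t = V h_t + c."""
    h_t = np.tanh(U @ x_t + W @ h_prev + b)  # new hidden state (the "memory")
    y_t = V @ h_t + c                        # output at this time step
    return h_t, y_t

# Toy dimensions, chosen only for illustration: input 4, hidden 8, output 3.
rng = np.random.default_rng(0)
U = rng.normal(size=(8, 4))
W = rng.normal(size=(8, 8))
V = rng.normal(size=(3, 8))
b, c = np.zeros(8), np.zeros(3)

h = np.zeros(8)                              # initial hidden state h_0
for x_t in rng.normal(size=(5, 4)):          # a toy sequence of 5 input vectors
    h, y = rnn_step(x_t, h, U, W, V, b, c)   # the SAME U, W, V are reused at every step
```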
4. Recurrent Neural Networks : LSTM
Let’s think about a machine which guesses the dinner menu from the things in a shopping bag.
Umm…
Carbonara!
5. Recurrent Neural Networks : LSTM
C_t: the cell state, an internal memory unit. Like a conveyor belt!
(Figure: an LSTM cell with input x_t, hidden state h_t, and cell state C_t.)
6. Recurrent Neural Networks : LSTM
(Figure: the same LSTM cell, with cell state C_t, the conveyor-belt internal memory, input x_t, and hidden state h_t.)
First: forget some memories!
7. Recurrent Neural Networks : LSTM
(Figure: the forget step acting on the cell state C_t, given input x_t and hidden state h_t.)
Forget some memories!
LSTM learns (1) how to forget a memory when h_{t-1} and the new input x_t are given,
and (2) then how to add a new memory given h_{t-1} and x_t.
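As a sketch of step (1): the forget gate looks at h_{t-1} and x_t and outputs, for every entry of the cell state, a number between 0 and 1 saying how much of that memory to keep. This is a minimal NumPy sketch; the names W_f, b_f follow the referenced post, and the shapes are assumed.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forget_gate(h_prev, x_t, W_f, b_f):
    """f_t = sigmoid(W_f [h_{t-1}, x_t] + b_f): one value in (0, 1) per cell-state entry."""
    z = np.concatenate([h_prev, x_t])  # the gate sees both the old hidden state and the new input
    return sigmoid(W_f @ z + b_f)

# The old cell state is then scaled element-wise:
# entries where f_t is near 0 are forgotten, entries near 1 are kept.
# C_after_forget = forget_gate(h_prev, x_t, W_f, b_f) * C_prev
```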
8. Recurrent Neural Networks : LSTM
(Figure: the insert step acting on the cell state C_t, given input x_t and hidden state h_t.)
Insert some memories!
LSTM learns (1) how to forget a memory when h_{t-1} and the new input x_t are given,
and (2) then how to add a new memory given h_{t-1} and x_t.
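Step (2), inserting a new memory, combines an input gate i_t with a candidate memory C~_t. Again a minimal NumPy sketch, with the weight names W_i, b_i, W_C, b_C taken from the referenced post and the shapes assumed.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def new_memory(h_prev, x_t, W_i, b_i, W_C, b_C):
    """i_t  = sigmoid(W_i [h_{t-1}, x_t] + b_i)  -- how much of the candidate to write
       C~_t = tanh(W_C [h_{t-1}, x_t] + b_C)     -- the candidate memory content"""
    z = np.concatenate([h_prev, x_t])
    i_t = sigmoid(W_i @ z + b_i)
    c_tilde = np.tanh(W_C @ z + b_C)
    return i_t * c_tilde                 # the gated new memory to be added to the cell state
```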
9. Recurrent Neural Networks : LSTM
(Figure: the LSTM cell state C_t, the conveyor-belt memory, with input x_t and hidden state h_t.)
LSTM learns (1) how to forget a memory when h_{t-1} and the new input x_t are given,
and (2) then how to add a new memory given h_{t-1} and x_t.
10. Recurrent Neural Networks : LSTM
(Figure: updating the cell state C_t on the conveyor belt, with input x_t and hidden state h_t.)
LSTM learns (1) how to forget a memory when h_{t-1} and the new input x_t are given,
and (2) then how to add a new memory given h_{t-1} and x_t.
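Combining the two learned operations gives the conveyor-belt update of the cell state. A minimal sketch, reusing the quantities f_t, i_t, and C~_t from the previous sketches:

```python
import numpy as np

def update_cell_state(C_prev, f_t, i_t, c_tilde):
    """C_t = f_t * C_{t-1} + i_t * C~_t (element-wise): forget first, then insert."""
    return f_t * C_prev + i_t * c_tilde

# Toy example: the first entry is kept untouched, the second is erased and overwritten.
C_t = update_cell_state(C_prev=np.array([0.5, -1.0]),
                        f_t=np.array([1.0, 0.0]),
                        i_t=np.array([0.0, 1.0]),
                        c_tilde=np.array([0.9, 0.3]))
# C_t == array([0.5, 0.3])
```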
11. Recurrent Neural Networks : LSTM
(Figure: the LSTM cell now also emits the new hidden state h_t and an output y_t, alongside the cell state C_t and input x_t.)
LSTM learns (1) how to forget a memory when h_{t-1} and the new input x_t are given,
and (2) then how to add a new memory given h_{t-1} and x_t.
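Finally, the new hidden state h_t is a gated view of the updated cell state, and an output y_t can be read out from it (e.g. y_t = V h_t + c, as in the vanilla RNN). A minimal NumPy sketch; W_o, b_o follow the referenced post, the shapes are assumed.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_output(h_prev, x_t, C_t, W_o, b_o):
    """o_t = sigmoid(W_o [h_{t-1}, x_t] + b_o),  h_t = o_t * tanh(C_t)"""
    z = np.concatenate([h_prev, x_t])
    o_t = sigmoid(W_o @ z + b_o)      # which parts of the memory to expose
    return o_t * np.tanh(C_t)         # the new hidden state h_t
```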
12. Recurrent Neural Networks : LSTM
LSTM learns (1) how to forget a memory when h_{t-1} and the new input x_t are given,
and (2) then how to add a new memory given h_{t-1} and x_t.
Figures from http://colah.github.io/posts/2015-08-Understanding-LSTMs/
13. Recurrent Neural Networks : LSTM
LSTM learns (1) how to forget a memory when h_{t-1} and the new input x_t are given,
and (2) then how to add a new memory given h_{t-1} and x_t.
Figures from http://colah.github.io/posts/2015-08-Understanding-LSTMs/
14. Recurrent Neural Networks : LSTM
LSTM learns (1) how to forget a memory when h_{t-1} and the new input x_t are given,
and (2) then how to add a new memory given h_{t-1} and x_t.
Figures from http://colah.github.io/posts/2015-08-Understanding-LSTMs/
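Putting the steps shown in the figures together, the full LSTM update (in the notation of the referenced post, with sigma the logistic sigmoid and all products element-wise) is:

```latex
\begin{aligned}
f_t &= \sigma\!\left(W_f\,[h_{t-1}, x_t] + b_f\right) && \text{forget gate} \\
i_t &= \sigma\!\left(W_i\,[h_{t-1}, x_t] + b_i\right) && \text{input gate} \\
\tilde{C}_t &= \tanh\!\left(W_C\,[h_{t-1}, x_t] + b_C\right) && \text{candidate memory} \\
C_t &= f_t \odot C_{t-1} + i_t \odot \tilde{C}_t && \text{cell-state update} \\
o_t &= \sigma\!\left(W_o\,[h_{t-1}, x_t] + b_o\right) && \text{output gate} \\
h_t &= o_t \odot \tanh(C_t) && \text{new hidden state}
\end{aligned}
```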
16. Sequence to Sequence Model: What is it?
(Figure: an LSTM/GRU Encoder reads the input sequence into hidden states h_e(1), …, h_e(5); an LSTM/GRU Decoder then produces hidden states h_d(1), …, h_d(T_e).)
Example: a “Western food to Korean food” transition.
17. Sequence to Sequence Model: Implementation
The simplest way to implement a sequence-to-sequence model is to just pass the last hidden state of the encoder, h_T, to the first GRU cell of the decoder!
However, this method gets weaker as the decoder needs to generate a longer sequence.
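A minimal sketch of that simplest approach; gru_step and readout are placeholder callables standing in for a GRU cell and an output layer, not a specific library API.

```python
def encode(xs, h0, gru_step):
    """Run the encoder over the whole input sequence and keep only its last hidden state h_T."""
    h = h0
    for x_t in xs:
        h = gru_step(x_t, h)
    return h

def decode(h_T, y_start, n_steps, gru_step, readout):
    """Initialize the decoder's first cell with the encoder's h_T, then generate step by step."""
    h, y = h_T, y_start           # everything known about the input is squeezed into h_T
    outputs = []
    for _ in range(n_steps):
        h = gru_step(y, h)        # feed the previous output back in as the next input
        y = readout(h)            # e.g. y_t = V h_t + c, or a softmax over a vocabulary
        outputs.append(y)
    return outputs
```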