Dueling Network Architectures for
Deep Reinforcement Learning
2016-06-28
Taehoon Kim
Motivation
• Recent advances
• Design improved control and RL algorithms
• Incorporate existing NN into RL methods
• We
• focus on innovating a NN architecture that is better suited for model-free RL
• Separate
• the representation of state values
• and (state-dependent) action advantages
2
Overview
3
Figure: a single Q-network with a shared convolutional feature learning module, split into a state value function stream and an advantage function stream, combined by an aggregating layer into the state-action value function
Dueling network
• Single Q-network with two streams
• produces separate estimates of the state value function and the advantage function
• without any extra supervision
• learns which states are (or are not) valuable
• without having to learn the effect of each action at each state
4
Saliency map on the Atari game Enduro
5
Value stream: 1. focuses on the horizon, where new cars appear; 2. focuses on the score
Advantage stream: pays little attention when there are no cars in front
Advantage stream: attends to the car immediately in front, making its choice of action very relevant
Definitions
• Value $V(s)$: how good it is to be in a particular state $s$
• Advantage $A(s, a)$: how much better action $a$ is than the state's value $V(s)$
• Policy $\pi$
• Return $R_t = \sum_{\tau=t}^{\infty} \gamma^{\tau-t} r_\tau$, where $\gamma \in [0,1]$
• Q function $Q^\pi(s, a) = \mathbb{E}\left[ R_t \mid s_t = s, a_t = a, \pi \right]$
• State-value function $V^\pi(s) = \mathbb{E}_{a \sim \pi(s)}\left[ Q^\pi(s, a) \right]$
6
Bellman equation
• Computed recursively with dynamic programming
• $Q^\pi(s, a) = \mathbb{E}_{s'}\left[ r + \gamma\, \mathbb{E}_{a' \sim \pi(s')}\left[ Q^\pi(s', a') \right] \mid s, a, \pi \right]$
• Optimal $Q^*(s, a) = \max_\pi Q^\pi(s, a)$
• Deterministic policy $a = \arg\max_{a' \in \mathcal{A}} Q^*(s, a')$
• Optimal $V^*(s) = \max_a Q^*(s, a)$
• Bellman equation $Q^*(s, a) = \mathbb{E}_{s'}\left[ r + \gamma \max_{a'} Q^*(s', a') \mid s, a \right]$ (tabular sketch below)
7
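To make the Bellman optimality backup concrete, a minimal tabular sketch in Python (the small MDP, its transition tensor P and reward matrix R are hypothetical and assumed known; deep Q-learning below replaces them with sampled transitions):

import numpy as np

# Hypothetical small MDP: P[s, a, s'] transition probabilities, R[s, a] expected rewards
n_states, n_actions, gamma = 4, 2, 0.99
rng = np.random.default_rng(0)
P = rng.dirichlet(np.ones(n_states), size=(n_states, n_actions))   # each P[s, a] sums to 1
R = rng.normal(size=(n_states, n_actions))

Q = np.zeros((n_states, n_actions))
for _ in range(500):
    # Bellman optimality backup: Q*(s,a) <- E_s'[ r + gamma * max_a' Q*(s',a') ]
    Q = R + gamma * P @ Q.max(axis=1)

V = Q.max(axis=1)                 # V*(s) = max_a Q*(s,a)
policy = Q.argmax(axis=1)         # deterministic policy a = argmax_a' Q*(s,a')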
Advantage function
• Bellman equation $Q^*(s, a) = \mathbb{E}_{s'}\left[ r + \gamma \max_{a'} Q^*(s', a') \mid s, a \right]$
• Advantage function $A^\pi(s, a) = Q^\pi(s, a) - V^\pi(s)$
• $\mathbb{E}_{a \sim \pi(s)}\left[ A^\pi(s, a) \right] = 0$
8
Advantage function
• Value $V(s)$: how good it is to be in a particular state $s$
• $Q(s, a)$: the value of choosing a particular action $a$ when in state $s$
• $A = Q - V$: a relative measure of the importance of each action
9
Deep Q-network (DQN)
• Model-free
• states and rewards are produced by the environment
• Off-policy
• states and rewards are obtained with a behavior policy (ε-greedy)
• different from the online policy that is being learned
10
Deep Q-network: 1) Target network
• Deep Q-network $Q(s, a; \boldsymbol{\theta})$
• Target network $Q(s, a; \boldsymbol{\theta}^-)$
• $L_i(\theta_i) = \mathbb{E}_{s,a,r,s'}\left[ \left( y_i^{\mathrm{DQN}} - Q(s, a; \boldsymbol{\theta}_i) \right)^2 \right]$
• $y_i^{\mathrm{DQN}} = r + \gamma \max_{a'} Q(s', a'; \boldsymbol{\theta}^-)$
• Freeze the target parameters $\boldsymbol{\theta}^-$ for a fixed number of iterations
• $\nabla_{\theta_i} L_i(\theta_i) = \mathbb{E}_{s,a,r,s'}\left[ \left( y_i^{\mathrm{DQN}} - Q(s, a; \boldsymbol{\theta}_i) \right) \nabla_{\theta_i} Q(s, a; \boldsymbol{\theta}_i) \right]$ (PyTorch sketch below)
11
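A minimal PyTorch sketch of the target-network loss above, assuming q_net and target_net are two networks with identical architecture (all names are illustrative, not the paper's code):

import torch
import torch.nn.functional as F

def dqn_loss(q_net, target_net, batch, gamma=0.99):
    # batch: tensors s [B, ...], a [B], r [B], s_next [B, ...], done [B]
    s, a, r, s_next, done = batch
    q_sa = q_net(s).gather(1, a.unsqueeze(1)).squeeze(1)    # Q(s, a; theta_i)
    with torch.no_grad():                                    # theta^- stays frozen
        y = r + gamma * (1 - done) * target_net(s_next).max(dim=1).values
    return F.mse_loss(q_sa, y)

# Every fixed number of iterations, copy the online parameters into the target network:
# target_net.load_state_dict(q_net.state_dict())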
Deep Q-network: 2) Experience memory
• Experience $e_t = (s_t, a_t, r_t, s_{t+1})$
• Accumulate a dataset $\mathcal{D}_t = \{ e_1, e_2, \dots, e_t \}$
• $L_i(\theta_i) = \mathbb{E}_{(s,a,r,s') \sim \mathcal{U}(\mathcal{D})}\left[ \left( y_i^{\mathrm{DQN}} - Q(s, a; \boldsymbol{\theta}_i) \right)^2 \right]$ (replay-buffer sketch below)
12
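A minimal sketch of the experience memory with uniform sampling (capacity and names are illustrative):

import random
from collections import deque

class ReplayBuffer:
    def __init__(self, capacity=100_000):
        self.buffer = deque(maxlen=capacity)     # oldest experiences are dropped first

    def add(self, s, a, r, s_next, done):
        self.buffer.append((s, a, r, s_next, done))

    def sample(self, batch_size):
        # uniform sampling over D, matching the expectation in the loss above
        return random.sample(self.buffer, batch_size)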
Double Deep Q-network (DDQN)
• In DQN
• the max operator uses the same values to both select and evaluate an action
• this can lead to overoptimistic value estimates
• $y_i^{\mathrm{DQN}} = r + \gamma \max_{a'} Q(s', a'; \boldsymbol{\theta}^-)$
• To mitigate this problem, DDQN uses
• $y_i^{\mathrm{DDQN}} = r + \gamma\, Q\left( s', \arg\max_{a'} Q(s', a'; \theta_i); \boldsymbol{\theta}^- \right)$ (sketch below)
13
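The only change from the DQN sketch is the target: the online network selects the next action and the target network evaluates it (names remain illustrative):

import torch

def ddqn_target(q_net, target_net, r, s_next, done, gamma=0.99):
    with torch.no_grad():
        a_star = q_net(s_next).argmax(dim=1, keepdim=True)        # select with theta_i
        q_eval = target_net(s_next).gather(1, a_star).squeeze(1)  # evaluate with theta^-
        return r + gamma * (1 - done) * q_eval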
Prioritized Replay (Schaul et al., 2016)
• To increase the replay probability of experience tuples
• that have a high expected learning progress
• measured via the proxy of absolute TD-error; importance-sampling weights correct the induced bias
• transitions with high absolute TD-errors are sampled more often (sketch below)
• Led to faster learning and to a better final policy quality
14
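A minimal sketch of proportional prioritization, assuming each stored transition keeps its latest absolute TD-error as its priority (a production implementation would use a sum-tree; the alpha and beta values are illustrative):

import numpy as np

def sample_prioritized(priorities, batch_size, alpha=0.6, beta=0.4):
    # priorities: array of |TD-error| for each stored transition
    p = priorities ** alpha
    p = p / p.sum()                                        # sampling distribution
    idx = np.random.choice(len(p), size=batch_size, p=p)
    # importance-sampling weights correct the bias of non-uniform sampling
    w = (len(p) * p[idx]) ** (-beta)
    return idx, w / w.max()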
Dueling Network Architecture : Key insight
• For many states
• it is unnecessary to estimate the value of each action choice
• for example, moving left or right only matters when a collision is imminent
• in most states, the choice of action has no effect on what happens
• For bootstrapping-based algorithms
• the estimation of state value is of great importance for every state
• bootstrapping: updating estimates on the basis of other estimates
15
Formulation
• $A^\pi(s, a) = Q^\pi(s, a) - V^\pi(s)$
• $V^\pi(s) = \mathbb{E}_{a \sim \pi(s)}\left[ Q^\pi(s, a) \right]$
• $A^\pi(s, a) = Q^\pi(s, a) - \mathbb{E}_{a \sim \pi(s)}\left[ Q^\pi(s, a) \right]$
• $\mathbb{E}_{a \sim \pi(s)}\left[ A^\pi(s, a) \right] = 0$
• For a deterministic policy, $a^* = \arg\max_{a' \in \mathcal{A}} Q(s, a')$
• so $Q(s, a^*) = V(s)$ and $A(s, a^*) = 0$
16
Formulation
• Dueling network = CNN + two streams of fully-connected layers that output
• a scalar $V(s; \theta, \beta)$
• an $|\mathcal{A}|$-dimensional vector $A(s, a; \theta, \alpha)$
• It is tempting to construct the aggregating module as
• $Q(s, a; \theta, \alpha, \beta) = V(s; \theta, \beta) + A(s, a; \theta, \alpha)$
17
Aggregation module 1: simple add
• But $Q(s, a; \theta, \alpha, \beta)$ is only a parameterized estimate of the true Q-function
• Unidentifiable
• given $Q$, $V$ and $A$ cannot be recovered uniquely
• Force $A$ to be zero at the chosen action:
• $Q(s, a; \theta, \alpha, \beta) = V(s; \theta, \beta) + \left( A(s, a; \theta, \alpha) - \max_{a' \in \mathcal{A}} A(s, a'; \theta, \alpha) \right)$
18
Aggregation module 2: subtract max
• For $a^* = \arg\max_{a' \in \mathcal{A}} Q(s, a'; \theta, \alpha, \beta) = \arg\max_{a' \in \mathcal{A}} A(s, a'; \theta, \alpha)$
• we obtain $Q(s, a^*; \theta, \alpha, \beta) = V(s; \theta, \beta)$
• i.e. $Q(s, a^*) = V(s)$
• An alternative module replaces the max operator with an average:
• $Q(s, a; \theta, \alpha, \beta) = V(s; \theta, \beta) + \left( A(s, a; \theta, \alpha) - \frac{1}{|\mathcal{A}|} \sum_{a'} A(s, a'; \theta, \alpha) \right)$
19
Aggregation module 3: subtract average
• An alternative module replaces the max operator with an average:
• $Q(s, a; \theta, \alpha, \beta) = V(s; \theta, \beta) + \left( A(s, a; \theta, \alpha) - \frac{1}{|\mathcal{A}|} \sum_{a'} A(s, a'; \theta, \alpha) \right)$
• $V$ and $A$ now lose their original semantics
• because they are off-target by a constant, $\frac{1}{|\mathcal{A}|} \sum_{a'} A(s, a'; \theta, \alpha)$
• But this increases the stability of the optimization
• the advantages only need to change as fast as the mean
• instead of having to compensate for any change to the optimal action's advantage $\max_{a' \in \mathcal{A}} A(s, a'; \theta, \alpha)$
20
Aggregation module 3: subtract average
• Subtracting the mean works best
• it helps identifiability
• it does not change the relative rank of the $A$ values (and hence of $Q$)
• The aggregation module is part of the network, not an algorithmic step
• training the dueling network requires only back-propagation (sketch below)
21
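A minimal PyTorch sketch of the two-stream head with the subtract-average aggregation (layer sizes and names are illustrative; a shared convolutional trunk is assumed to produce the flat feature vector):

import torch.nn as nn

class DuelingHead(nn.Module):
    def __init__(self, feature_dim, n_actions, hidden=512):
        super().__init__()
        # two streams on top of the shared convolutional features
        self.value = nn.Sequential(nn.Linear(feature_dim, hidden), nn.ReLU(),
                                   nn.Linear(hidden, 1))              # V(s; theta, beta)
        self.advantage = nn.Sequential(nn.Linear(feature_dim, hidden), nn.ReLU(),
                                       nn.Linear(hidden, n_actions))  # A(s, a; theta, alpha)

    def forward(self, features):
        v = self.value(features)        # [B, 1]
        a = self.advantage(features)    # [B, |A|]
        # Q = V + (A - mean_a' A): the subtract-average aggregating module
        return v + a - a.mean(dim=1, keepdim=True)

Because the aggregation is just tensor arithmetic inside forward, training needs nothing beyond ordinary back-propagation, as noted above.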
Compatibility
• Because the output of the dueling network is a Q function, it can be combined with existing algorithms
• DQN
• DDQN
• SARSA
• on-policy or off-policy alike
22
Definition: Generalized policy iteration
23
Experiments: Policy evaluation
• Useful for evaluating a network architecture
• devoid of confounding factors such as the choice of exploration strategy and the interaction between policy improvement and policy evaluation
• In the experiments, temporal-difference learning is employed
• optimizing $y_i = r + \gamma\, \mathbb{E}_{a' \sim \pi(s')}\left[ Q(s', a'; \theta_i) \right]$ (sketch below)
• Corridor environment
• the exact $Q^\pi(s, a)$ can be computed separately for all $(s, a) \in \mathcal{S} \times \mathcal{A}$
24
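A sketch of the expected TD target above for an ε-greedy policy π (the ε-greedy form and the value of eps are assumptions for illustration, not necessarily the paper's exact evaluation setup):

import torch

def expected_td_target(q_net, r, s_next, gamma=0.99, eps=0.05):
    # y_i = r + gamma * E_{a' ~ pi(s')}[ Q(s', a'; theta_i) ], with pi = eps-greedy
    with torch.no_grad():
        q_next = q_net(s_next)                 # [B, |A|]
        greedy = q_next.max(dim=1).values
        uniform = q_next.mean(dim=1)
        return r + gamma * ((1 - eps) * greedy + eps * uniform)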
Experiments: Policy evaluation
• Tested with 5, 10, and 20 actions (first tackled by DDQN)
• The stream $V(s; \theta, \beta)$ learns a general value shared across many similar actions at $s$
• hence leading to faster convergence
25
Figure: the performance gap increases with the number of actions
Experiments: General Atari Game-Playing
• Architecture similar to DQN (Mnih et al., 2015), with fully-connected layers added for the two streams
• Rescale the combined gradient entering the last convolutional layer by $1/\sqrt{2}$, which mildly increases stability
• Clip gradients so their norm is less than or equal to 10 (sketch of both tricks below)
• gradient clipping is not a standard practice in RL
26
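A hedged sketch of the two training tricks above in PyTorch (where exactly the 1/√2 rescale hook is attached is an assumption about a particular implementation; module names are illustrative):

import math
import torch.nn as nn
from torch.nn.utils import clip_grad_norm_

class DuelingDQN(nn.Module):
    def __init__(self, conv_trunk, dueling_head):
        super().__init__()
        self.conv_trunk, self.head = conv_trunk, dueling_head

    def forward(self, x):
        features = self.conv_trunk(x).flatten(start_dim=1)
        if features.requires_grad:
            # rescale the combined gradient from both streams entering the
            # last convolutional layer by 1/sqrt(2)
            features.register_hook(lambda g: g / math.sqrt(2))
        return self.head(features)

def train_step(model, optimizer, loss):
    optimizer.zero_grad()
    loss.backward()
    clip_grad_norm_(model.parameters(), max_norm=10.0)   # clip gradient norm to <= 10
    optimizer.step()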
Performance: Up-to 30 no-op random start
• Duel Clip > Single Clip > Single
• Good job Dueling network
27
Performance: Human start
28
• An agent does not necessarily have to generalize well to play the Atari games
• it can achieve good performance by simply remembering sequences of actions
• To obtain a more robust measure, use 100 starting points sampled from a human expert's trajectory
• from each starting point, evaluate for up to 108,000 frames
• again, good job, Dueling network
Combining with Prioritized Experience Replay
• Prioritization and the dueling architecture address very different aspects of the learning process
• Although orthogonal in their objectives, these extensions (prioritization, dueling and gradient clipping) interact in subtle ways
• Prioritization interacts with gradient clipping
• sampling transitions with high absolute TD-errors more often leads to gradients with higher norms, so the hyperparameters were re-tuned
29
References
1. [Wang, 2015] Wang, Z., de Freitas, N., & Lanctot, M. (2015). Dueling network architectures for
deep reinforcement learning. arXiv preprint arXiv:1511.06581.
2. [Van, 2015] Van Hasselt, H., Guez, A., & Silver, D. (2015). Deep reinforcement learning with
double Q-learning. CoRR, abs/1509.06461.
3. [Schaul, 2015] Schaul, T., Quan, J., Antonoglou, I., & Silver, D. (2015). Prioritized experience
replay. arXiv preprint arXiv:1511.05952.
4. [Sutton, 1998] Sutton, R. S., & Barto, A. G. (1998). Reinforcement learning: An introduction (Vol. 1, No. 1). Cambridge: MIT Press.
30