Continuous Control With Deep Reinforcement Learning,
Lillicrap et al, 2015.
옥찬호
utilForever@gmail.com
Introduction
• Deep Q Network solves problems with:
• High-dimensional observation spaces
• Discrete and low-dimensional action spaces
• Limitation of DQN
• For example, a 7-degree-of-freedom system with the coarsest
discretization 𝑎𝑖 ∈ {−𝑘, 0, 𝑘} for each joint leads to an action space
with dimensionality 3⁷ = 2187.
→ How can we solve the high-dimensional action space problem?
Introduction
• Use Deterministic Policy Gradient (DPG) algorithm!
• However, a naive application of this actor-critic method with
neural function approximators is unstable for challenging problems.
→ How?
• Combine actor-critic approach with insights from Deep Q Network (DQN)
• Use replay buffer to minimize correlations between samples.
• Use target Q network to give consistent targets.
• Use batch normalization.
Background
• The action-value function is used in many reinforcement learning
algorithms. It describes the expected return after taking an action 𝑎𝑡
in state 𝑠𝑡 and thereafter following policy 𝜋:
• Action-value function using Bellman-equation
• If target policy is deterministic,
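The equations referenced in the bullets above appeared as images in the original slides; restated here in the paper's standard form (action-value definition, its Bellman form, and the deterministic-policy case):

Q^{\pi}(s_t, a_t) = \mathbb{E}_{r_{i \ge t},\, s_{i > t} \sim E,\, a_{i > t} \sim \pi}\left[ R_t \mid s_t, a_t \right]

Q^{\pi}(s_t, a_t) = \mathbb{E}_{r_t,\, s_{t+1} \sim E}\left[ r(s_t, a_t) + \gamma\, \mathbb{E}_{a_{t+1} \sim \pi}\left[ Q^{\pi}(s_{t+1}, a_{t+1}) \right] \right]

Q^{\mu}(s_t, a_t) = \mathbb{E}_{r_t,\, s_{t+1} \sim E}\left[ r(s_t, a_t) + \gamma\, Q^{\mu}(s_{t+1}, \mu(s_{t+1})) \right]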
Background
• Q-learning, a commonly used off-policy algorithm, uses the greedy
policy 𝜇(𝑠) = argmax𝑎 𝑄(𝑠, 𝑎). We consider function approximators
parameterized by 𝜃^𝑄, which we optimize by minimizing the loss:
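The loss on this slide also appeared as an image; restated from the paper:

L(\theta^Q) = \mathbb{E}\left[ \left( Q(s_t, a_t \mid \theta^Q) - y_t \right)^2 \right], \qquad y_t = r(s_t, a_t) + \gamma\, Q(s_{t+1}, \mu(s_{t+1}) \mid \theta^Q)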
Algorithm
• It is not possible to straightforwardly apply Q-learning
to continuous action spaces.
• In continuous spaces, finding the greedy policy requires
an optimization over 𝑎𝑡 at every timestep:
𝜇(𝑠) = argmax𝑎 𝑄(𝑠, 𝑎)
• This optimization is too slow to be practical with large,
unconstrained function approximators and nontrivial action spaces.
• Instead, here we used an actor-critic approach
based on the DPG algorithm.
Algorithm
• The DPG algorithm maintains a parameterized actor function 𝜇(𝑠|𝜃^𝜇)
which specifies the current policy by deterministically
mapping states to a specific action.
• Critic network (Value): updated using the Bellman equation, as in DQN
• Actor network (Policy): updated with the deterministic policy gradient shown below
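The actor update referenced above (the deterministic policy gradient, shown as an image in the original slide), restated from the paper:

\nabla_{\theta^{\mu}} J \approx \mathbb{E}\left[ \nabla_a Q(s, a \mid \theta^Q)\big|_{a = \mu(s)} \, \nabla_{\theta^{\mu}} \mu(s \mid \theta^{\mu}) \right]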
Replay buffer
• As with Q learning, introducing non-linear function approximators means
that convergence is no longer guaranteed. However, such approximators
appear essential in order to learn and generalize on large state spaces.
• NFQCA, which uses the same update rules as DPG but with neural network function
approximators, relies on batch learning for stability; this is intractable for large
networks. A minibatch version of NFQCA that does not reset the policy at
each update, as would be required to scale to large networks, is equivalent to the original DPG.
• Our contribution here is to provide modifications to DPG, inspired by the
success of DQN, which allow it to use neural network function approximators
to learn in large state and action spaces online. We refer to our algorithm as
Deep DPG (DDPG, Algorithm 1).
Replay buffer
• As in DQN, we used a replay buffer to address these issues. The replay buffer
is a finite-sized cache ℛ. Transitions were sampled from the environment
according to the exploration policy and the tuple (𝑠𝑡, 𝑎𝑡, 𝑟𝑡, 𝑠𝑡+1) was stored in
the replay buffer. When the replay buffer was full, the oldest samples were
discarded. At each timestep the actor and critic are updated by sampling a
minibatch uniformly from the buffer. Because DDPG is an off-policy
algorithm, the replay buffer can be large, allowing the algorithm to benefit
from learning across a set of uncorrelated transitions.
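A minimal Python sketch of such a replay buffer; the class name, method names, and default capacity are illustrative, not taken from the paper's implementation:

```python
import random
from collections import deque

class ReplayBuffer:
    """Finite-sized cache R of transitions (s_t, a_t, r_t, s_t+1)."""

    def __init__(self, capacity=1_000_000):
        # a deque with maxlen discards the oldest transitions once full
        self.buffer = deque(maxlen=capacity)

    def store(self, state, action, reward, next_state):
        self.buffer.append((state, action, reward, next_state))

    def sample(self, batch_size=64):
        # uniform sampling decorrelates the transitions within a minibatch
        batch = random.sample(self.buffer, batch_size)
        states, actions, rewards, next_states = zip(*batch)
        return states, actions, rewards, next_states

    def __len__(self):
        return len(self.buffer)
```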
Soft target update
• Directly implementing Q-learning (equation 4) with neural networks proved
to be unstable in many environments. Since the network 𝑄(𝑠, 𝑎|𝜃^𝑄) being
updated is also used in calculating the target value (equation 5), the Q
update is prone to divergence.
• Our solution is similar to the target network used in (Mnih et al., 2013) but
modified for actor-critic and using “soft” target updates, rather than directly
copying the weights.
Soft target update
• We create a copy of the actor and critic networks, 𝑄′(𝑠, 𝑎|𝜃^𝑄′) and 𝜇′(𝑠|𝜃^𝜇′)
respectively, that are used for calculating the target values. The weights of
these target networks are then updated by having them slowly track the
learned networks: 𝜽′ ← 𝝉𝜽 + (𝟏 − 𝝉)𝜽′ with 𝝉 ≪ 𝟏. This means that the target
values are constrained to change slowly, greatly improving the stability of
learning.
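A minimal PyTorch sketch of the soft target update; the function name is illustrative, while τ = 0.001 is the value used in the paper:

```python
import torch.nn as nn

def soft_update(target_net: nn.Module, source_net: nn.Module, tau: float = 0.001) -> None:
    """theta' <- tau * theta + (1 - tau) * theta', applied parameter-wise."""
    for target_param, param in zip(target_net.parameters(), source_net.parameters()):
        target_param.data.copy_(tau * param.data + (1.0 - tau) * target_param.data)

# Example: the target critic starts as an exact copy, then slowly tracks the learned critic.
critic = nn.Linear(4, 1)
critic_target = nn.Linear(4, 1)
critic_target.load_state_dict(critic.state_dict())
soft_update(critic_target, critic)
```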
Batch normalization
• When learning from low dimensional feature vector observations, the
different components of the observation may have different physical units
(for example, positions versus velocities) and the ranges may vary across
environments. This can make it difficult for the network to learn effectively
and may make it difficult to find hyper-parameters which generalize across
environments with different scales of state values.
Batch normalization
• One approach to this problem is to manually scale the features so they are in
similar ranges across environments and units. We address this issue by
adapting a recent technique from deep learning called batch normalization.
This technique normalizes each dimension across the samples in a minibatch
to have zero mean and unit variance.
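A minimal PyTorch sketch of an actor network that applies batch normalization to the state input and hidden layers. The 400-300 layer sizes and the tanh output follow the paper's low-dimensional setup; the exact placement of the BatchNorm layers and the class name are illustrative:

```python
import torch
import torch.nn as nn

class Actor(nn.Module):
    def __init__(self, state_dim: int, action_dim: int, action_bound: float = 1.0):
        super().__init__()
        self.net = nn.Sequential(
            nn.BatchNorm1d(state_dim),   # normalize each observation dimension over the minibatch
            nn.Linear(state_dim, 400),
            nn.BatchNorm1d(400),
            nn.ReLU(),
            nn.Linear(400, 300),
            nn.BatchNorm1d(300),
            nn.ReLU(),
            nn.Linear(300, action_dim),
            nn.Tanh(),                   # bounded actions in [-1, 1]
        )
        self.action_bound = action_bound

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        return self.action_bound * self.net(state)
```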
Noise process
• A major challenge of learning in continuous action spaces is exploration. An
advantage of off-policy algorithms such as DDPG is that we can treat the
problem of exploration independently from the learning algorithm.
• We constructed an exploration policy 𝜇′ by adding noise sampled from
a noise process 𝒩 to our actor policy: 𝜇′(𝑠𝑡) = 𝜇(𝑠𝑡|𝜃𝑡^𝜇) + 𝒩, where 𝒩 can be
chosen to suit the environment.
• As detailed in the supplementary materials, we used an Ornstein-Uhlenbeck
process (Uhlenbeck & Ornstein, 1930) to generate temporally correlated
exploration for exploration efficiency in physical control problems with inertia
(a similar use of autocorrelated noise was introduced in (Wawrzynski, 2015)).
Noise process
• Ornstein–Uhlenbeck process
• It is a stochastic process that reverts to its mean. 𝜃 is a parameter that
controls how quickly the process returns to the mean 𝜇, 𝜎 controls the
variability of the process, and 𝑊𝑡 is a Wiener process.
• Each sample is therefore temporally correlated with the previous noise.
• The reason for using a temporally correlated noise process is that it is
more effective for learning in inertial environments such as physical
control tasks.
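A minimal Python sketch of the Ornstein-Uhlenbeck noise process using the Euler-Maruyama discretization of dx = 𝜃(𝜇 − x)dt + 𝜎 dW. The values 𝜃 = 0.15 and 𝜎 = 0.2 follow the paper's supplementary settings; dt and the class name are illustrative:

```python
import numpy as np

class OrnsteinUhlenbeckNoise:
    """Temporally correlated noise: each sample drifts back toward the mean mu
    while remaining correlated with the previous sample."""

    def __init__(self, action_dim: int, mu: float = 0.0,
                 theta: float = 0.15, sigma: float = 0.2, dt: float = 1e-2):
        self.mu = mu * np.ones(action_dim)
        self.theta = theta
        self.sigma = sigma
        self.dt = dt
        self.reset()

    def reset(self) -> None:
        # restart the process at the mean at the beginning of each episode
        self.x = np.copy(self.mu)

    def sample(self) -> np.ndarray:
        dx = (self.theta * (self.mu - self.x) * self.dt
              + self.sigma * np.sqrt(self.dt) * np.random.randn(*self.x.shape))
        self.x = self.x + dx
        return self.x
```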
Algorithm
• Network structure
[Diagram: network structure with Actor Network 1, Critic Network 1, and Critic Network 2; inputs 𝑠, 𝑎, 𝑠′ and output 𝑄(𝑠, 𝑎).]
Algorithm
• Critic network
L = \frac{1}{N} \sum_i \left( y_i - Q(s_i, a_i \mid \theta^Q) \right)^2

[Diagram: same network structure as above; the critic is trained by minimizing this minibatch loss.]
Algorithm
• Actor network
The actor is updated by chaining the gradient from the critic through the actor:

\frac{\partial Q(s, a \mid \theta^Q)}{\partial a} \cdot \frac{\partial a}{\partial \theta^{\mu}}, \qquad a = \mu(s \mid \theta^{\mu})

[Diagram: same network structure as above; the critic supplies 𝜕𝑄/𝜕𝑎, which flows back through the actor via 𝜕𝑎/𝜕𝜃.]
Algorithm
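The full procedure is given as Algorithm 1 in the paper. Below is a minimal PyTorch sketch of one DDPG training step on a sampled minibatch, tying together the pieces above. The function and argument names are illustrative (the critic is assumed to take a state and an action as separate inputs), γ = 0.99 and τ = 0.001 follow the paper, and terminal-state masking is omitted for brevity:

```python
import torch
import torch.nn.functional as F

def ddpg_update(batch, actor, critic, actor_target, critic_target,
                actor_opt, critic_opt, gamma=0.99, tau=0.001):
    """One DDPG update: critic regression to the Bellman target, actor step
    along dQ/da * da/dtheta, then soft target updates."""
    states, actions, rewards, next_states = batch  # torch tensors, rewards shaped (N, 1)

    # Critic: minimize (y_i - Q(s_i, a_i | theta^Q))^2 with y_i from the target networks
    with torch.no_grad():
        target_q = rewards + gamma * critic_target(next_states, actor_target(next_states))
    critic_loss = F.mse_loss(critic(states, actions), target_q)
    critic_opt.zero_grad()
    critic_loss.backward()
    critic_opt.step()

    # Actor: maximize Q(s, mu(s)), i.e. follow the deterministic policy gradient
    actor_loss = -critic(states, actor(states)).mean()
    actor_opt.zero_grad()
    actor_loss.backward()
    actor_opt.step()

    # Soft target updates: theta' <- tau * theta + (1 - tau) * theta'
    for target, source in ((actor_target, actor), (critic_target, critic)):
        for p_t, p in zip(target.parameters(), source.parameters()):
            p_t.data.copy_(tau * p.data + (1.0 - tau) * p_t.data)
```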
Results
Conclusion
• The work combines insights from recent advances in deep learning
and reinforcement learning, resulting in an algorithm that robustly
solves challenging problems across a variety of domains with
continuous action spaces, even when using raw pixels for
observations.
• As with most reinforcement learning algorithms, the use of non-linear
function approximators nullifies any convergence guarantees;
however, our experimental results demonstrate stable learning
without the need for any modifications between environments.
Conclusion
• Interestingly, all of our experiments used substantially fewer steps
of experience than DQN required to find solutions in the Atari domain.
• Most notably, as with most model-free reinforcement learning approaches,
DDPG requires a large number of training episodes to find solutions.
However, we believe that a robust model-free approach may be an
important component of larger systems which may attack these
limitations.
References
• https://nervanasystems.github.io/coach/components/agents/policy_optimization/ddpg.html
• https://youtu.be/h2WSVBAC1t4
• https://reinforcement-learning-kr.github.io/2018/06/27/2_dpg/
• https://reinforcement-learning-kr.github.io/2018/06/26/3_ddpg/
Thank you!