Proximal Policy Optimization Algorithms,
Schulman et al, 2017
옥찬호
utilForever@gmail.com
Introduction
• In recent years, several different approaches have been
proposed for reinforcement learning with neural network
function approximators. The leading contenders are deep Q-
learning, “vanilla” policy gradient methods, and trust region /
natural policy gradient methods.
• However, there is room for improvement in developing a
method that is scalable (to large models and parallel
implementations), data efficient, and robust (i.e., successful
on a variety of problems without hyperparameter tuning).
Introduction
• DQN
• fails on many simple problems and is poorly understood
• A3C – “Vanilla” policy gradient methods
• have poor data efficiency and robustness
• TRPO
• relatively complicated
• is not compatible with architectures that include noise (such as dropout) or
parameter sharing (between the policy and value function, or with auxiliary
tasks)
Introduction
• We propose a novel objective with clipped probability ratios,
which forms a pessimistic estimate (i.e., lower bound) of the
performance of the policy.
• This paper seeks to improve the current state of affairs by
introducing an algorithm that attains the data efficiency and
reliable performance of TRPO, while using only first-order
optimization.
Introduction
• To optimize policies, we alternate between sampling data from
the policy and performing several epochs of optimization on
the sampled data.
Introduction
• Our experiments compare the performance of various different
versions of the surrogate objective, and find that the version
with the clipped probability ratios performs best.
• We also compare PPO to several previous algorithms from the
literature.
• On continuous control tasks, it performs better than the algorithms we
compare against.
• On Atari, it performs significantly better (in terms of sample complexity) than
A2C and similarly to ACER though it is much simpler.
Background: Policy Optimization
• Policy gradient methods work by computing an estimator of the policy gradient and plugging it into a stochastic gradient ascent algorithm. The most commonly used gradient estimator has the form
$$\hat{g} = \hat{\mathbb{E}}_t\!\left[\nabla_\theta \log \pi_\theta(a_t \mid s_t)\, \hat{A}_t\right]$$
• where $\pi_\theta$ is a stochastic policy and $\hat{A}_t$ is an estimator of the advantage function at timestep $t$. Here, the expectation $\hat{\mathbb{E}}_t[\ldots]$ indicates the empirical average over a finite batch of samples, in an algorithm that alternates between sampling and optimization.
Background: Policy Optimization
• Implementations that use automatic differentiation software work by constructing an objective function whose gradient is the policy gradient estimator; the estimator $\hat{g}$ is obtained by differentiating the objective
$$L^{PG}(\theta) = \hat{\mathbb{E}}_t\!\left[\log \pi_\theta(a_t \mid s_t)\, \hat{A}_t\right]$$
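As an illustration (not part of the original slides), here is a minimal PyTorch-style sketch of this construction; the tensors `logp` and `advantages`, the optimizer, and the surrounding training loop are assumed to exist elsewhere.

```python
import torch

def pg_objective(logp, advantages):
    """Objective L^PG(theta) whose gradient is the estimator g_hat.

    logp:       log pi_theta(a_t | s_t) for a sampled batch of timesteps
                (differentiable through the policy network).
    advantages: advantage estimates A_hat_t, treated as constants.
    """
    # L^PG(theta) = E_hat_t[ log pi_theta(a_t | s_t) * A_hat_t ]
    return (logp * advantages.detach()).mean()

# Usage sketch: ascend L^PG by descending its negation.
# loss = -pg_objective(logp, advantages)
# loss.backward(); optimizer.step()
```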
Background: Policy Optimization
• While it is appealing to perform multiple steps of optimization on this loss $L^{PG}$ using the same trajectory, doing so is not well-justified, and empirically it often leads to destructively large policy updates.
Background: Policy Optimization
• In TRPO, an objective function (the “surrogate” objective) is maximized subject to a constraint on the size of the policy update. Specifically,
$$\underset{\theta}{\text{maximize}}\ \hat{\mathbb{E}}_t\!\left[\frac{\pi_\theta(a_t \mid s_t)}{\pi_{\theta_{\text{old}}}(a_t \mid s_t)}\, \hat{A}_t\right]$$
$$\text{subject to}\quad \hat{\mathbb{E}}_t\!\left[\mathrm{KL}\!\left[\pi_{\theta_{\text{old}}}(\cdot \mid s_t),\, \pi_\theta(\cdot \mid s_t)\right]\right] \le \delta$$
• Here, $\theta_{\text{old}}$ is the vector of policy parameters before the update.
Background: Policy Optimization
• This problem can efficiently be approximately solved using the
conjugate gradient algorithm, after making a linear
approximation to the objective and a quadratic approximation
to the constraint.
Background: Policy Optimization
• The theory justifying TRPO actually suggests using a penalty instead of a constraint, i.e., solving the unconstrained optimization problem
$$\underset{\theta}{\text{maximize}}\ \hat{\mathbb{E}}_t\!\left[\frac{\pi_\theta(a_t \mid s_t)}{\pi_{\theta_{\text{old}}}(a_t \mid s_t)}\, \hat{A}_t - \beta\, \mathrm{KL}\!\left[\pi_{\theta_{\text{old}}}(\cdot \mid s_t),\, \pi_\theta(\cdot \mid s_t)\right]\right]$$
for some coefficient $\beta$.
Background: Policy Optimization
• This follows from the fact that a certain surrogate objective
(which computes the max KL over states instead of the mean)
forms a lower bound (i.e., a pessimistic bound) on the
performance of the policy 𝜋.
• TRPO uses a hard constraint rather than a penalty because it is
hard to choose a single value of 𝜷 that performs well across
different problems—or even within a single problem, where the
characteristics change over the course of learning.
Clipped Surrogate Objective
• Let $r_t(\theta)$ denote the probability ratio $r_t(\theta) = \pi_\theta(a_t \mid s_t) / \pi_{\theta_{\text{old}}}(a_t \mid s_t)$, so $r_t(\theta_{\text{old}}) = 1$. TRPO maximizes a “surrogate” objective
$$L^{CPI}(\theta) = \hat{\mathbb{E}}_t\!\left[\frac{\pi_\theta(a_t \mid s_t)}{\pi_{\theta_{\text{old}}}(a_t \mid s_t)}\, \hat{A}_t\right] = \hat{\mathbb{E}}_t\!\left[r_t(\theta)\, \hat{A}_t\right]$$
• The superscript CPI refers to conservative policy iteration,
where this objective was proposed.
Clipped Surrogate Objective
• Without a constraint, maximization of $L^{CPI}$ would lead to an excessively large policy update; hence, we now consider how to modify the objective, to penalize changes to the policy that move $r_t(\theta)$ away from 1.
• The main objective we propose is the following:
$$L^{CLIP}(\theta) = \hat{\mathbb{E}}_t\!\left[\min\!\left(r_t(\theta)\, \hat{A}_t,\ \mathrm{clip}\!\left(r_t(\theta),\, 1 - \epsilon,\, 1 + \epsilon\right) \hat{A}_t\right)\right]$$
• where $\epsilon$ is a hyperparameter, say, $\epsilon = 0.2$.
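A minimal sketch of this clipped surrogate, assuming PyTorch; the tensor names (per-timestep log-probabilities under the new and old policies, and advantage estimates) are illustrative rather than taken from the paper.

```python
import torch

def clipped_surrogate(new_logp, old_logp, advantages, eps=0.2):
    """L^CLIP(theta), averaged over a batch of timesteps."""
    # r_t(theta) = pi_theta(a_t|s_t) / pi_theta_old(a_t|s_t), computed in log space.
    ratio = torch.exp(new_logp - old_logp.detach())
    adv = advantages.detach()

    unclipped = ratio * adv
    clipped = torch.clamp(ratio, 1.0 - eps, 1.0 + eps) * adv

    # Elementwise minimum gives the pessimistic (lower-bound) objective.
    return torch.min(unclipped, clipped).mean()
```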
Clipped Surrogate Objective
• The motivation for this objective is as follows.
• The first term inside the min is $L^{CPI}(\theta)$.
• The second term, $\mathrm{clip}\!\left(r_t(\theta), 1 - \epsilon, 1 + \epsilon\right) \hat{A}_t$, modifies the surrogate objective by clipping the probability ratio, which removes the incentive for moving $r_t(\theta)$ outside of the interval $[1 - \epsilon, 1 + \epsilon]$.
• Finally, we take the minimum of the clipped and unclipped objective, so the final objective is a lower bound (i.e., a pessimistic bound) on the unclipped objective.
Clipped Surrogate Objective
Figure 1: Plots showing one term (i.e., a single timestep) of the surrogate function $L^{CLIP}$ as a function of the probability ratio $r$, for positive advantages (left) and negative advantages (right). The red circle on each plot shows the starting point for the optimization, i.e., $r = 1$. Note that $L^{CLIP}$ sums many of these terms.
Clipped Surrogate Objective
Figure 2: Surrogate objectives, as we interpolate between the initial policy parameter $\theta_{\text{old}}$ and the updated policy parameter, which we compute after one iteration of PPO. The updated policy has a KL divergence of about 0.02 from the initial policy, and this is the point at which $L^{CLIP}$ is maximal.
Adaptive KL Penalty Coefficient
• Another approach, which can be used as an alternative to the clipped surrogate objective, or in addition to it, is to use a penalty on KL divergence, and to adapt the penalty coefficient so that we achieve some target value of the KL divergence $d_{\text{targ}}$ each policy update.
• In our experiments, we found that the KL penalty performed worse than the clipped surrogate objective; however, we include it here because it is an important baseline.
Adaptive KL Penalty Coefficient
• In the simplest instantiation of this algorithm, we perform the following steps in each policy update:
• Using several epochs of minibatch SGD, optimize the KL-penalized objective
$$L^{KLPEN}(\theta) = \hat{\mathbb{E}}_t\!\left[\frac{\pi_\theta(a_t \mid s_t)}{\pi_{\theta_{\text{old}}}(a_t \mid s_t)}\, \hat{A}_t - \beta\, \mathrm{KL}\!\left[\pi_{\theta_{\text{old}}}(\cdot \mid s_t),\, \pi_\theta(\cdot \mid s_t)\right]\right]$$
• Compute $d = \hat{\mathbb{E}}_t\!\left[\mathrm{KL}\!\left[\pi_{\theta_{\text{old}}}(\cdot \mid s_t),\, \pi_\theta(\cdot \mid s_t)\right]\right]$
• If $d < d_{\text{targ}} / 1.5$, then $\beta \leftarrow \beta / 2$
• If $d > d_{\text{targ}} \times 1.5$, then $\beta \leftarrow \beta \times 2$
• The updated $\beta$ is used for the next policy update.
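The coefficient update itself is only a few lines. The following sketch assumes the mean KL `measured_kl` has already been estimated on fresh rollouts after the policy update; it is an illustration, not code from the paper.

```python
def update_kl_coefficient(beta, measured_kl, d_targ):
    """Adapt the KL penalty coefficient beta after a policy update.

    measured_kl: d = E_hat_t[ KL[pi_theta_old(.|s_t), pi_theta(.|s_t)] ].
    d_targ:      desired KL divergence per policy update.
    """
    if measured_kl < d_targ / 1.5:
        beta /= 2.0   # policy changed too little; weaken the penalty
    elif measured_kl > d_targ * 1.5:
        beta *= 2.0   # policy changed too much; strengthen the penalty
    return beta
```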
Algorithm
• Most techniques for computing variance-reduced advantage-function estimators make use of a learned state-value function $V(s)$:
• Generalized advantage estimation
• The finite-horizon estimators
Algorithm
• If using a neural network architecture that shares parameters
between the policy and value function, we must use a loss function
that combines the policy surrogate and a value function error term.
• This objective can further be augmented by adding an entropy bonus
to ensure sufficient exploration, as suggested in past work (A3C).
Algorithm
• Combining these terms, we obtain the following objective, which is (approximately) maximized each iteration:
$$L_t^{CLIP+VF+S}(\theta) = \hat{\mathbb{E}}_t\!\left[L_t^{CLIP}(\theta) - c_1 L_t^{VF}(\theta) + c_2 S[\pi_\theta](s_t)\right]$$
• where $c_1, c_2$ are coefficients, $S$ denotes an entropy bonus, and $L_t^{VF}$ is a squared-error loss $\left(V_\theta(s_t) - V_t^{\text{targ}}\right)^2$.
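A hedged PyTorch-style sketch of this combined objective follows. The default coefficient values and tensor names below are placeholders chosen for illustration, not the paper's settings; `returns` stands in for the value targets $V_t^{\text{targ}}$ and `entropy` for the per-timestep policy entropy.

```python
import torch

def combined_objective(new_logp, old_logp, advantages, values, returns,
                       entropy, eps=0.2, c1=0.5, c2=0.01):
    """L_t^{CLIP+VF+S}: clipped policy term minus value error plus entropy bonus."""
    ratio = torch.exp(new_logp - old_logp.detach())
    adv = advantages.detach()
    policy_term = torch.min(ratio * adv,
                            torch.clamp(ratio, 1.0 - eps, 1.0 + eps) * adv).mean()

    # L_t^VF: squared-error value loss (V_theta(s_t) - V_t^targ)^2.
    value_loss = ((values - returns.detach()) ** 2).mean()

    # Objective to maximize: policy term - c1 * value loss + c2 * entropy bonus.
    return policy_term - c1 * value_loss + c2 * entropy.mean()
```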
Algorithm
• One style of policy gradient implementation, popularized in
A3C and well-suited for use with recurrent neural networks,
runs the policy for 𝑻 timesteps (where 𝑻 is much less than the
episode length), and uses the collected samples for an update.
Algorithm
• This style requires an advantage estimator that does not look beyond timestep $T$. The estimator used by A3C is
$$\hat{A}_t = -V(s_t) + r_t + \gamma r_{t+1} + \cdots + \gamma^{T-t-1} r_{T-1} + \gamma^{T-t} V(s_T)$$
• where $t$ specifies the time index in $[0, T]$, within a given length-$T$ trajectory segment.
Algorithm
• Generalizing this choice, we can use a truncated version of generalized advantage estimation, which reduces to the previous equation when $\lambda = 1$:
$$\hat{A}_t = \delta_t + (\gamma\lambda)\,\delta_{t+1} + \cdots + (\gamma\lambda)^{T-t-1}\,\delta_{T-1}, \qquad \text{where } \delta_t = r_t + \gamma V(s_{t+1}) - V(s_t)$$
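A minimal NumPy sketch of this truncated estimator, written with the standard backward recursion; the default values of `gamma` and `lam` and the array names are illustrative, and with `lam = 1.0` it reduces to the finite-horizon estimator above.

```python
import numpy as np

def truncated_gae(rewards, values, gamma=0.99, lam=0.95):
    """Truncated GAE over one length-T segment; lam=1.0 recovers the A3C estimator.

    rewards: r_0 ... r_{T-1}     (length T)
    values:  V(s_0) ... V(s_T)   (length T + 1; the last entry bootstraps the tail)
    """
    T = len(rewards)
    advantages = np.zeros(T)
    gae = 0.0
    for t in reversed(range(T)):
        delta = rewards[t] + gamma * values[t + 1] - values[t]  # TD residual delta_t
        gae = delta + gamma * lam * gae                         # recursive accumulation
        advantages[t] = gae
    return advantages
```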
Algorithm
• A proximal policy optimization (PPO) algorithm that uses fixed-length trajectory segments is shown below.
• Each iteration, each of the $N$ (parallel) actors collects $T$ timesteps of data.
• Then we construct the surrogate loss on these $NT$ timesteps of data, and optimize it with minibatch SGD (or, usually for better performance, Adam) for $K$ epochs, as sketched below.
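A schematic sketch of this loop, under the assumption of hypothetical helpers `collect_rollout`, `compute_advantages`, `ppo_loss`, and a `batch` container with `shuffle`/`split` methods; none of these names come from the paper, they simply stand in for the data-collection and objective code described above.

```python
# Schematic PPO training loop with fixed-length trajectory segments.
def ppo_train(envs, policy, optimizer, iterations, T, K, minibatch_size):
    for _ in range(iterations):
        # 1) Each of the N (parallel) actors runs the current policy for T timesteps.
        rollouts = [collect_rollout(env, policy, T) for env in envs]

        # 2) Build advantage estimates (e.g., truncated GAE) over the N*T samples.
        batch = compute_advantages(rollouts, policy)

        # 3) Optimize the surrogate with minibatch SGD/Adam for K epochs.
        for _ in range(K):
            for minibatch in batch.shuffle().split(minibatch_size):
                loss = -ppo_loss(policy, minibatch)  # ascend the surrogate
                optimizer.zero_grad()
                loss.backward()
                optimizer.step()
```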
Experiments
• First, we compare several different surrogate objectives under different hyperparameters. Here, we compare the surrogate objective $L^{CLIP}$ to several natural variations and ablated versions.
• No clipping or penalty: $L_t(\theta) = r_t(\theta)\, \hat{A}_t$
• Clipping: $L_t(\theta) = \min\!\left(r_t(\theta)\, \hat{A}_t,\ \mathrm{clip}\!\left(r_t(\theta), 1 - \epsilon, 1 + \epsilon\right) \hat{A}_t\right)$
• KL penalty (fixed or adaptive): $L_t(\theta) = r_t(\theta)\, \hat{A}_t - \beta\, \mathrm{KL}\!\left[\pi_{\theta_{\text{old}}},\, \pi_\theta\right]$
Experiments (the remaining Experiments slides contain result figures only)
Conclusion
• We have introduced proximal policy optimization, a family of
policy optimization methods that use multiple epochs of
stochastic gradient ascent to perform each policy update.
Conclusion
• These methods have the stability and reliability of trust-region methods but are much simpler to implement, requiring only a few lines of code change to a vanilla policy gradient (A3C) implementation; they are applicable in more general settings (for example, when using a joint architecture for the policy and value function) and have better overall performance.
References
• https://reinforcement-learning-kr.github.io/2018/06/22/7_ppo/
• https://lynnn.tistory.com/73
• https://jay.tech.blog/2018/10/09/trpo%EC%99%80-ppo/
• https://talkingaboutme.tistory.com/entry/RL-Policy-Gradient-Algorithms
Thank you!