This is a presentation of my second-year internship on optimal stochastic control theory, how it can be applied to some financial problems, and how such problems can be solved using finite difference methods.
Enjoy it!
Runtime Analysis of Population-based Evolutionary Algorithms (PK Lehre)
Populations are at the heart of evolutionary algorithms (EAs). They provide the genetic variation which selection acts upon. A complete picture of EAs can only be obtained if we understand their population dynamics. A rich theory on runtime analysis (also called time-complexity analysis) of EAs has been developed over the last 20 years. The goal of this theory is to show, via rigorous mathematical means, how the performance of EAs depends on their parameter settings and the characteristics of the underlying fitness landscapes. Initially, runtime analysis of EAs was mostly restricted to simplified EAs that do not employ large populations, such as the (1+1) EA. This tutorial introduces more recent techniques that enable runtime analysis of EAs with realistic population sizes.
The tutorial begins with a brief overview of the population‐based EAs that are covered by the techniques. We recall the common stochastic selection mechanisms and how to measure the selection pressure they induce. The main part of the tutorial covers in detail widely applicable techniques tailored to the analysis of populations. We discuss random family trees and branching processes, drift and concentration of measure in populations, and level‐based analyses.
To illustrate how these techniques can be applied, we consider several fundamental questions: When are populations necessary for efficient optimisation with EAs? What is the appropriate balance between exploration and exploitation and how does this depend on relationships between mutation and selection rates? What determines an EA's tolerance for uncertainty, e.g. in form of noisy or partially available fitness?
This tutorial was presented at the 2015 IEEE Congress on Evolutionary Computation at Sendai, Japan, May 25th 2015.
Rao-Blackwellisation schemes for accelerating Metropolis-Hastings algorithms (Christian Robert)
Aggregate of three different papers on Rao-Blackwellisation, from Casella & Robert (1996), to Douc & Robert (2010), to Banterle et al. (2015), presented during an OxWaSP workshop on MCMC methods, Warwick, Nov 20, 2015
Reinforcement Learning: Hidden Theory and New Super-Fast Algorithms (Sean Meyn)
A tutorial, and very new algorithms -- more details on arXiv and at NIPS 2017 https://arxiv.org/abs/1707.03770
Part of the Data Science Summer School at École Polytechnique: http://www.ds3-datascience-polytechnique.fr/program/
---------
2018 Updates:
See Zap slides from ISMP 2018 for new inverse-free optimal algorithms
Simons tutorial, March 2018 [one month before most discoveries announced at ISMP]
Part I (Basics, with focus on variance of algorithms)
https://www.youtube.com/watch?v=dhEF5pfYmvc
Part II (Zap Q-learning)
https://www.youtube.com/watch?v=Y3w8f1xIb6s
Big 2017 survey on variance in SA:
Fastest convergence for Q-learning
https://arxiv.org/abs/1707.03770
You will find the infinite-variance Q result there.
Our NIPS 2017 paper is distilled from this.
Reinforcement learning: hidden theory, and new super-fast algorithms
Lecture presented at the Center for Systems and Control (CSC@USC) and Ming Hsieh Institute for Electrical Engineering,
February 21, 2018
Stochastic Approximation algorithms are used to approximate solutions to fixed point equations that involve expectations of functions with respect to possibly unknown distributions. The most famous examples today are TD- and Q-learning algorithms. The first half of this lecture will provide an overview of stochastic approximation, with a focus on optimizing the rate of convergence. A new approach to optimize the rate of convergence leads to the new Zap Q-learning algorithm. Analysis suggests that its transient behavior is a close match to a deterministic Newton-Raphson implementation, and numerical experiments confirm super fast convergence.
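The fixed-point idea in the abstract can be sketched with the classical Robbins-Monro recursion (a minimal illustration, not the Zap algorithm discussed in the lecture; the Gaussian samples and step sizes are assumptions):

```python
import random

def robbins_monro(sample, theta0=0.0, n_iter=20000, seed=1):
    """Approximate the root theta* of E[f(theta, W)] = 0 for
    f(theta, w) = w - theta, i.e. theta* = E[W], from noisy samples."""
    rng = random.Random(seed)
    theta = theta0
    for n in range(n_iter):
        w = sample(rng)              # one noisy observation W_n
        a_n = 1.0 / (n + 1)          # diminishing step size
        theta += a_n * (w - theta)   # theta_{n+1} = theta_n + a_n f(theta_n, W_n)
    return theta

# Noisy samples W ~ N(2, 1); the fixed point is E[W] = 2.
est = robbins_monro(lambda rng: rng.gauss(2.0, 1.0))
```

With the step sizes a_n = 1/(n+1) this recursion is exactly the running sample mean; the rate-of-convergence question in the lecture is about how the choice of step size (or matrix gain, as in Zap) affects the asymptotic variance of such recursions.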
Based on
@article{devmey17a,
Title = {Fastest Convergence for {Q-learning}},
Author = {Devraj, Adithya M. and Meyn, Sean P.},
Journal = {NIPS 2017 and ArXiv e-prints},
Year = 2017}
Non-linear optimization applications in finance including volatility estimation with ARCH and GARCH models, line search methods, Newton's method, steepest descent method, golden section search method, and conjugate gradient method.
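Of the methods listed, golden section search is the easiest to sketch; a minimal generic implementation (the quadratic test function is only an illustration):

```python
import math

def golden_section_min(f, a, b, tol=1e-8):
    """Golden section search: minimize a unimodal f on [a, b] by
    shrinking the bracket by the factor 1/phi at every step."""
    invphi = (math.sqrt(5.0) - 1.0) / 2.0   # 1/phi, about 0.618
    c, d = b - invphi * (b - a), a + invphi * (b - a)
    while b - a > tol:
        if f(c) < f(d):        # minimum lies in [a, d]
            b, d = d, c
            c = b - invphi * (b - a)
        else:                  # minimum lies in [c, b]
            a, c = c, d
            d = a + invphi * (b - a)
    return (a + b) / 2.0

xmin = golden_section_min(lambda x: (x - 2.0) ** 2, 0.0, 5.0)
```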
A numerical method to solve fractional Fredholm-Volterra integro-differential... (Octavian Postavaru)
The golden ratio is famous for the predictability it provides both in the microscopic world and in the dynamics of macroscopic structures of the universe. The extension of the Fibonacci series to the Fibonacci polynomials gives us the opportunity to use this powerful tool in the study of Fredholm-Volterra integro-differential equations. In this paper, we define a new hybrid fractional function consisting of block-pulse functions and Fibonacci polynomials (FHBPF). For this, in the Fibonacci polynomials we perform the transformation $x\to x^{\alpha}$, with $\alpha$ a real parameter. In the method developed in this paper, we propose that the unknown function $D^{\alpha}f(x)$ be written as a linear combination of FHBPF. We consider the fractional derivative $D^{\alpha}$ in the Caputo sense. Using theoretical considerations, we can write both the function $f(x)$ and other involved functions of type $D^{\beta}f(x)$ on the same basis. For this operation, we have to define an integral operator of Riemann-Liouville type associated to FHBPF, and with the help of hypergeometric functions, we can express this operator exactly. All these ingredients, together with collocation at the Newton-Cotes nodes, transform the integro-differential equation into an algebraic system that we solve by applying Newton's iterative method. We conclude the paper with some examples to demonstrate that the proposed method is simple to implement and accurate. There are situations when, by simply considering $\alpha\ne1$, we obtain an improvement in accuracy of 12 orders of magnitude.
We apply tensor train (TT) data format to solve an elliptic PDE with uncertain coefficients. We reduce complexity and storage from exponential to linear. Post-processing in TT format is also provided.
Inference for stochastic differential equations via approximate Bayesian comp... (Umberto Picchini)
Despite the title the methods are appropriate for more general dynamical models (including state-space models). Presentation given at Nordstat 2012, Umeå. Relevant research paper at http://arxiv.org/abs/1204.5459 and software code at https://sourceforge.net/projects/abc-sde/
Bayesian Experimental Design for Stochastic Kinetic Models (Colin Gillespie)
In recent years, the use of the Bayesian paradigm for estimating the optimal experimental design has increased. However, standard techniques are computationally intensive for even relatively small stochastic kinetic models. One solution to this problem is to couple cloud computing with a model emulator. By running simulations simultaneously in the cloud, the large design space can be explored. A Gaussian process is then fitted to this output, enabling the optimal design parameters to be estimated.
Research internship on optimal stochastic control theory with financial applications, using finite difference methods for the numerical resolution
1. 2nd Year Internship at LAMSIN: Optimal stochastic
control problem with financial applications
Asma BEN SLIEMENE
ENSIIE
asma.ben-slimene@polytechnique.fr
from June 2016 to September 2016
2. Overview
1 Optimal stochastic problem theory
Dynamic Programming Principle
Hamilton Jacobi Bellman equation
2 Resolution methods
Probabilistic approach
Numerical/Deterministic approach with PDEs
3 Financial applications
Merton portfolio allocation Problem
Investment/consumption Problem
4 Numerical results on C++ and Scilab
For the investment problem
For the investment/consumption problem
3. LAMSIN
Training objective: an open door into financial mathematics research
Located at École Nationale d'Ingénieurs de Tunis (Tunisia)
Comprises 83 researchers, including 40 doctoral students. Each year, 6 to 8 students complete their Master's theses within the laboratory.
1983: creation of a research group in numerical analysis at ENIT.
2001: becomes a research laboratory associated with INRIA (e-didon team).
July 2003: selected by the Agence Universitaire de la Francophonie (AUF) to be a regional center of excellence in Applied Mathematics.
Fields of research: inverse problems, financial mathematics including optimization and control problems, etc.
4. Optimal stochastic problem theory
Resolution methods
Financial applications
Numerical results on C++ and Scilab
Dynamic Programming Principle
Hamilton Jacobi Bellman equation
I) Introduction to optimal stochastic problem
1 Optimal stochastic problem theory
2 Applications in finance
3 Dynamic programming principle
4 Hamilton Jacobi Bellman equation
4 / 74
6. 1 State of the system: X_t(ω) and its dynamics through an SDE:
dX_t = b(X_t, α_t) dt + σ(X_t, α_t) dW_t,  (1)
2 Control: a process α = (α_t)_t that satisfies some constraints and is defined in A, the set of admissible controls.
3 Performance/cost criterion: maximize (or minimize) J(X, α) over all admissible controls.
Consider objective functionals of the form
E[ ∫_0^T f(X_s, ω, α_s) ds + g(X_T, ω) | X_0 = x ], on a finite horizon T,
and
E[ ∫_0^∞ e^{-βs} f(X_s, ω, α_s) ds | X_0 = x ], on an infinite horizon,
where f is a running profit function, g is a terminal reward function, and β > 0 is a discount factor.
Objective: find the value function v(x) = sup_α J(x, α).
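The controlled dynamics (1) can be simulated with a simple Euler-Maruyama scheme; a minimal sketch under an assumed constant control and illustrative linear coefficients (not the slides' model):

```python
import random

def euler_maruyama(b, sigma, x0, a, T=1.0, n_steps=100, rng=None):
    """Simulate dX_t = b(X_t, a) dt + sigma(X_t, a) dW_t on [0, T]
    with a constant control a, using the Euler-Maruyama scheme."""
    rng = rng or random.Random(0)
    dt = T / n_steps
    x = x0
    for _ in range(n_steps):
        dw = rng.gauss(0.0, dt ** 0.5)              # Brownian increment
        x += b(x, a) * dt + sigma(x, a) * dw        # one Euler step
    return x

# Illustrative linear coefficients (an assumption, not from the slides):
b = lambda x, a: a * x
sigma = lambda x, a: 0.2 * x
xT = euler_maruyama(b, sigma, x0=1.0, a=0.05)
```

Averaging the running profit and terminal reward over many such paths is exactly the probabilistic approach described later in the deck.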
8. Optimal stochastic problem theory
Resolution methods
Financial applications
Numerical results on C++ and Scilab
Dynamic Programming Principle
Hamilton Jacobi Bellman equation
Portfolio allocation
Production-consumption model
Irreversible investment model
Quadratic hedging of options
Superreplication cost in uncertain volatility
Optimal selling of an asset
Valuation of natural resources
Ergodic and risk-sensitive control problems
Superreplication under gamma constraints
Robust utility maximization problem and risk measures
Forward performance criterion
8 / 74
12. Definition
Bellman's principle of optimality:
"An optimal policy has the property that whatever the initial state and initial decision are, the remaining decisions must constitute an optimal policy with regard to the state resulting from the first decision."
Mathematical formulation of Bellman's principle, or Dynamic Programming Principle (DPP):
the usual version of the DPP is written as
v(t, x) = sup_{α ∈ A(t,x)} E[ ∫_t^θ f(s, X_s^{t,x}, α_s) ds + v(θ, X_θ^{t,x}) ]
for any stopping time θ ∈ T_{t,T} (the set of stopping times valued in [t, T]).
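In discrete time, the DPP is plain backward induction. A toy sketch (hypothetical running reward f(x) = x, terminal reward g(x) = x, five states with moves of ±1; none of this is the slides' model):

```python
def backward_induction(n_steps=3, n_states=5):
    """Backward induction on a toy deterministic control problem:
    running reward f(x) = x, terminal reward g(x) = x, actions +1/-1,
    states clipped to {0, ..., n_states - 1}."""
    clip = lambda x: max(0, min(n_states - 1, x))
    # Terminal condition: V_N(x) = g(x) = x
    V = [x for x in range(n_states)]
    for _ in range(n_steps):
        # Bellman step: V_t(x) = f(x) + max_a V_{t+1}(x + a)
        V = [x + max(V[clip(x - 1)], V[clip(x + 1)]) for x in range(n_states)]
    return V

V0 = backward_induction()   # value function at time 0
```

Here the optimal policy is always to move up, so from state 0 with 3 steps the value is 0 + 1 + 2 + 3 = 6; the continuous-time DPP on the slide is the limit of exactly this recursion.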
13. Usual version of the DPP
(1) Finite horizon: let (t, x) ∈ [0, T] × R^n. Then, for all θ ∈ T_{t,T}:
v(t, x) = sup_{α ∈ A(t,x)} sup_{θ ∈ T_{t,T}} E[ ∫_t^θ f(s, X_s^{t,x}, α_s) ds + v(θ, X_θ^{t,x}) ]  (2)
= sup_{α ∈ A(t,x)} inf_{θ ∈ T_{t,T}} E[ ∫_t^θ f(s, X_s^{t,x}, α_s) ds + v(θ, X_θ^{t,x}) ]  (3)
(2) Infinite horizon: let x ∈ R^n. Then, for all θ ∈ T, we have
v(x) = sup_{α ∈ A(x)} sup_{θ ∈ T} E[ ∫_0^θ e^{-βs} f(X_s^x, α_s) ds + e^{-βθ} v(X_θ^x) ]  (4)
= sup_{α ∈ A(x)} inf_{θ ∈ T} E[ ∫_0^θ e^{-βs} f(X_s^x, α_s) ds + e^{-βθ} v(X_θ^x) ]  (5)
14. Strong version of the DPP
Lemma (Dynamic Programming Principle)
(i) For all α ∈ A(t, x) and θ ∈ T_{t,T}:
v(t, x) ≥ E[ ∫_t^θ f(s, X_s^{t,x}, α_s) ds + v(θ, X_θ^{t,x}) ]  (6)
(ii) For all ε > 0, there exists α^ε ∈ A(t, x) such that, for all θ ∈ T_{t,T}:
v(t, x) − ε ≤ E[ ∫_t^θ f(s, X_s^{t,x}, α_s^ε) ds + v(θ, X_θ^{t,x}) ]  (7)
We can then conclude that:
v(t, x) = sup_{α ∈ A(t,x)} E[ ∫_t^θ f(s, X_s^{t,x}, α_s) ds + v(θ, X_θ^{t,x}) ]  (8)
for any stopping time θ ∈ T_{t,T}.
17. Formal derivation of HJB
Assume that the value function is smooth enough (i.e. is C²) to apply Itô's formula.
For any α ∈ A and a controlled process X^{t,x}, apply Itô's formula to v(s, X_s^{t,x}) between s = t and s = t + h:
v(t + h, X_{t+h}^{t,x}) = v(t, x) + ∫_t^{t+h} (∂v/∂t + L^a v)(s, X_s^{t,x}) ds + (local) martingale,
where, for a ∈ A, L^a is the second-order operator associated to the diffusion X with constant control a:
L^a w = b(x, a) · ∇_x w + (1/2) tr(σ(x, a) σᵀ(x, a) ∇²_x w)
Plug into the DPP, divide by h, send h to zero, and obtain, by the mean-value theorem, the so-called HJB equation.
18. Formal derivation of HJB
The parabolic HJB equation:
−∂v/∂t(t, x) − H1(t, x, ∇_x v(t, x), ∇²_x v(t, x)) = 0, ∀(t, x) ∈ [0, T[ × R^n,  (9)
where, for all (t, x, p, M) ∈ [0, T[ × R^n × R^n × S^n:
H1(t, x, p, M) = sup_{a ∈ A} { b(x, a) · p + (1/2) tr(σσᵀ(x, a) M) + f(t, x, a) }.  (10)
The elliptic HJB equation:
βv(x) − H2(x, ∇_x v(x), ∇²_x v(x)) = 0, ∀x ∈ R^n,
where, for all (x, p, M) ∈ R^n × R^n × S^n:
H2(x, p, M) = sup_{a ∈ A} { b(x, a) · p + (1/2) tr(σ(x, a)σᵀ(x, a) M) + f(x, a) }.
20. Optimal stochastic problem theory
Resolution methods
Financial applications
Numerical results on C++ and Scilab
Probabilistic approach
Numerical/Deterministic approach with PDEs
II) Resolution methods
1 Probabilistic approach
2 PDE approach
20 / 74
22. Probabilistic approach
Approximate the process X_t with a Markov chain started at x; under some conditions, the chain converges in law to X_t.
Monte Carlo algorithms are one of the methods widely used to obtain a numerical approximation.
Case g = 0: let X^(1), ..., X^(n) be an i.i.d. sample drawn from the distribution of X_T^{t,x}, and compute the mean:
v̂_n(t, x) := (1/n) Σ_{i=1}^n f(X^(i)).
Law of Large Numbers: v̂_n(t, x) → v(t, x) P-a.s.
Central Limit Theorem: √n (v̂_n(t, x) − v(t, x)) → N(0, Var f(X_T^{t,x})) in distribution.
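The estimator above can be sketched in a few lines; the lognormal terminal law is an illustrative assumption, chosen so that E[X_T] is known in closed form and the Monte Carlo estimate can be checked against it:

```python
import math
import random

def mc_estimate(n=100_000, x0=1.0, mu=0.05, sig=0.2, T=1.0, seed=0):
    """Monte Carlo estimate of v = E[f(X_T)] with f(x) = x and X_T a
    geometric Brownian motion, plus the CLT-based standard error."""
    rng = random.Random(seed)
    vals = []
    for _ in range(n):
        z = rng.gauss(0.0, 1.0)
        xT = x0 * math.exp((mu - 0.5 * sig * sig) * T + sig * math.sqrt(T) * z)
        vals.append(xT)
    mean = sum(vals) / n
    var = sum((v - mean) ** 2 for v in vals) / (n - 1)
    stderr = math.sqrt(var / n)   # CLT: v_hat is approximately N(v, Var/n)
    return mean, stderr

est, se = mc_estimate()
exact = math.exp(0.05)            # E[X_T] = x0 * exp(mu * T)
```

The standard error shrinks like 1/√n, which is exactly the CLT rate quoted on the slide.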
28. Optimal stochastic problem theory
Resolution methods
Financial applications
Numerical results on C++ and Scilab
Probabilistic approach
Numerical/Deterministic approach with PDEs
Steps
The PDE approach is based on:
Step 1: discretization of the time and space sets / approximation of the derivatives
Step 2: discretization of the boundary conditions (Dirichlet/Neumann)
Step 3: solving the discrete problem (policy/value iteration, Howard's algorithm)
Outputs: v, the value function, and the optimal control strategy/stopping time
28 / 74
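Step 3's policy iteration (Howard's algorithm) alternates exact policy evaluation with greedy improvement; a sketch on a tiny made-up discrete problem (the transition matrices and rewards are illustrative assumptions, not the slides' discretized HJB):

```python
import numpy as np

# Toy discounted problem: 3 states, 2 actions (hypothetical numbers).
# P[a, i, j] = transition probability, r[a, i] = reward for action a in state i.
gamma = 0.9
P = np.array([[[0.9, 0.1, 0.0], [0.1, 0.8, 0.1], [0.0, 0.1, 0.9]],
              [[0.5, 0.5, 0.0], [0.0, 0.5, 0.5], [0.5, 0.0, 0.5]]])
r = np.array([[1.0, 0.0, 2.0], [0.5, 1.5, 0.0]])

def policy_iteration(P, r, gamma):
    """Howard's algorithm: exact policy evaluation + greedy improvement."""
    n = P.shape[1]
    pol = np.zeros(n, dtype=int)
    while True:
        # Evaluation: solve (I - gamma * P_pol) v = r_pol exactly
        Ppol = P[pol, np.arange(n)]
        rpol = r[pol, np.arange(n)]
        v = np.linalg.solve(np.eye(n) - gamma * Ppol, rpol)
        # Improvement: greedy policy with respect to v
        q = r + gamma * (P @ v)          # shape (actions, states)
        new = q.argmax(axis=0)
        if np.array_equal(new, pol):
            return v, pol
        pol = new

v_star, pol_star = policy_iteration(P, r, gamma)
```

Value iteration would instead iterate the Bellman operator v ← max_a (r_a + γ P_a v); both converge to the same fixed point, but Howard's method typically needs far fewer (more expensive) iterations.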
30. Time and space discretization
Let Ω = [0, 1], Δt = T/N with N ∈ ℕ*, t_k := kΔt for k = 0, ..., N, and
x_j := jh, where h is the space step. Ω_h, L_h^α, v_j^k, b_j^{k,α}, a_j^{k,α}
approximate Ω, L^α, v(t_k, x_j), b(t_k, x_j, α), a(t_k, x_j, α).
Approximation of the first derivative:
∂v/∂x (t_k, x_j) := (v_{j+1}^k − v_{j−1}^k) / (2h)   (11)
∂v/∂x (t_k, x_j) := (v_{j+1}^k − v_j^k) / h   (12)
or
∂v/∂x (t_k, x_j) := (v_j^k − v_{j−1}^k) / h   (13)
Approximation of the second derivative:
∂²v/∂x² (t_k, x_j) := (v_{j+1}^k − 2v_j^k + v_{j−1}^k) / h²   (14)
Approximation of the time derivative:
∂v/∂t (t_k, x_j) := (v_j^k − v_j^{k−1}) / Δt   (15)
or
∂v/∂t (t_k, x_j) := (v_j^{k+1} − v_j^k) / Δt   (16)
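Formulas (11) to (14) can be checked numerically. The sketch below, in C++ like the rest of the implementation, applies them to a smooth test function (sin, an illustrative choice) rather than to the value function.

```cpp
#include <cassert>
#include <cmath>

// Central difference (11): O(h^2) approximation of v'(x).
double fd_central(double (*v)(double), double x, double h) {
    return (v(x + h) - v(x - h)) / (2.0 * h);
}
// Forward difference (12): O(h) approximation of v'(x).
double fd_forward(double (*v)(double), double x, double h) {
    return (v(x + h) - v(x)) / h;
}
// Backward difference (13): O(h) approximation of v'(x).
double fd_backward(double (*v)(double), double x, double h) {
    return (v(x) - v(x - h)) / h;
}
// Second difference (14): O(h^2) approximation of v''(x).
double fd_second(double (*v)(double), double x, double h) {
    return (v(x + h) - 2.0 * v(x) + v(x - h)) / (h * h);
}

inline double test_sin(double x) { return std::sin(x); }
```

Since (sin)′ = cos and (sin)″ = −sin, each formula can be compared against the exact derivative at a sample point.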
32. Boundary conditions
Dirichlet boundary conditions: v = g on ∂Ω × [0, T[.
Neumann boundary conditions: ∂v/∂x = g₂ on ∂Ω × [0, T[.
In the case f = 0 and g(x) = x^p / p, p ∈ ]0, 1[:
v_j^N = g_j = x_j^p / p
and, since ∂v/∂x = x^{p−1} = (p/x) v here, the discrete Neumann condition reads
(v_M^k − v_{M−1}^k) / h = (p / x_M) v_M^k,  k ∈ 0..N−1, j ∈ 0..M.
Alternative boundary conditions: v_M^k = v_{M−1}^k, or v_M^k = 0; and v_0^k = 0.
NB: In the portfolio allocation problem → Black-Scholes-Merton model for the stock:
dS_t = S_t (µ dt + σ dW_t),
dS_t^0 = r S_t^0 dt
33. III) Financial applications
1 Merton portfolio allocation Problem
2 Investment/consumption Problem
35. Application 1: Merton portfolio allocation problem in finite horizon
An agent invests at any time t a proportion α_t of his wealth X in a stock of
price S and 1 − α_t in a bond of price S^0 with interest rate r.
The dynamics of the controlled wealth process are:
dX_t = (X_t α_t / S_t) dS_t + (X_t (1 − α_t) / S_t^0) dS_t^0
"Utility maximization problem at a finite horizon T":
v(t, x) = sup_{α∈A} E[ U(X_T^{t,x}) ],  ∀ (t, x) ∈ [0, T] × (0, ∞).
HJB equation for Merton's problem:
v_t + r x v_x + sup_{a∈A} [ a (µ − r) x v_x + (1/2) x² a² σ² v_xx ] = 0   (17)
v(T, x) = U(x)   (18)
36. Utility function
U is C¹, strictly increasing and concave on (0, ∞), and satisfies the Inada
conditions:
U′(0) = ∞, U′(∞) = 0.
Convex conjugate of U:
Û(y) := sup_{x>0} [ U(x) − xy ]
We use the CRRA utility function:
U(x) = x^p / p,  p < 1, p ≠ 0
Relative risk aversion (RRA): −x U″(x) / U′(x) = 1 − p.
→ If the person experiences an increase in wealth, he/she will choose to
increase (or keep unchanged, or decrease) the fraction of the portfolio
held in the risky asset if relative risk aversion is decreasing (or constant,
or increasing).
38. Investment/consumption problem on infinite horizon
The SDE governing the wealth process:
dX_t = X_t (α_t µ + (1 − α_t) r − c_t) dt + X_t α_t σ dW_t
The goal is to maximize over strategies (α, c) the expected utility from
intertemporal consumption up to a random time horizon τ:
v(x) = sup_{(α,c)∈A×C} E[ ∫_0^τ e^{−βt} u(c_t X_t^x) dt ].
τ is independent of F_∞; denote by F(t) = P[τ ≤ t] = P[τ ≤ t | F_∞] the
distribution function of τ.
Assume an exponential distribution for the random time horizon:
1 − F(t) = e^{−λt}
for some positive constant λ.
Infinite-horizon problem:
v(x) = sup_{(α,c)∈A×C} E[ ∫_0^∞ e^{−(β+λ)t} u(c_t X_t^x) dt ]
39. The associated HJB equation is
β̂ v(x) − sup_{a∈A, c≥0} [ L^{a,c} v(x) + u(cx) ] = 0,  x ≥ 0,   (19)
where L^{a,c} v(x) = x (aµ + (1 − a) r − c) v′(x) + (1/2) x² a² σ² v″(x)
and β̂ := β + λ.
Explicit solution
The discount factor β must satisfy β > ρ − λ.
v(x) = K u(x) solves the HJB equation, where
K = ( (1 − p) / (β + λ − ρ) )^{1−p}  and  ρ = ((µ − r)² / (2σ²)) · p / (1 − p) + r p
The optimal controls are constant, given by (â, ĉ):
â = argmax_{a∈A} [ a (µ − r) + r − (1/2) a² (1 − p) σ² ]
ĉ = (1/x) (v′(x))^{1/(p−1)}.
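These closed-form constants can be computed and cross-checked in a short C++ sketch. It uses the unconstrained maximizer â = (µ − r) / ((1 − p)σ²) and the fact that, for v = K x^p / p, ĉ = K^{1/(p−1)} simplifies to (β + λ − ρ)/(1 − p); the parameter values in the check are purely illustrative.

```cpp
#include <cassert>
#include <cmath>

// Closed-form constants of the infinite-horizon investment/consumption
// problem, as on the slide.
struct MertonSolution { double rho, K, a_hat, c_hat; };

MertonSolution merton_explicit(double mu, double r, double sigma,
                               double p, double beta, double lambda) {
    MertonSolution s;
    // rho = (mu - r)^2 / (2 sigma^2) * p / (1 - p) + r p
    s.rho = (mu - r) * (mu - r) / (2.0 * sigma * sigma) * p / (1.0 - p)
          + r * p;
    // requires beta > rho - lambda for K to be well defined
    s.K = std::pow((1.0 - p) / (beta + lambda - s.rho), 1.0 - p);
    // unconstrained argmax of a(mu - r) + r - a^2 (1 - p) sigma^2 / 2
    s.a_hat = (mu - r) / ((1.0 - p) * sigma * sigma);
    // c_hat = K^{1/(p-1)}, which equals (beta + lambda - rho) / (1 - p)
    s.c_hat = std::pow(s.K, 1.0 / (p - 1.0));
    return s;
}
```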
43. Why the Markov chain approach?
Solving the discretized system requires some conditions on the matrix A
of the differential operator L^α.
In the case where A is not positive definite, we can still obtain a
discretized system that satisfies the "discrete maximum principle".
Under a specific condition on the space step h, we get a convergent Markov
chain [page 89, J.-P. Chancelier, A. Sulem, Méthode numérique en contrôle
stochastique].
The convergence of the scheme can be established using standard arguments
provided by H.J. Kushner [Numerical Methods for Stochastic Control Problems
in Continuous Time].
NB: Depending on the sign of the drift b of X_t, we use the right-hand-side
upwind scheme when b is positive and the left-hand-side upwind scheme when
b is negative, in order to obtain valid transition probabilities (∈ [0, 1]).
44. IV) Numerical results on C++ and Scilab
1. Results for the investment problem
Approximated scheme
Resolution method/Coding
Results
2. Results for the investment/consumption problem
Approximated scheme
Resolution method/Coding
Results
45. Approximated scheme
Two different schemes were used.
The forward upwind scheme
The approximated HJB equation is:
v_j^{k−1} = sup_α { [1 − (Δt/h)|b_j^{k,α}| − (Δt/h²) a_j^{k,α}] v_j^k
+ [(Δt/h)(b_j^{k,α})⁺ + (1/2)(Δt/h²) a_j^{k,α}] v_{j+1}^k
+ [(Δt/h)(b_j^{k,α})⁻ + (1/2)(Δt/h²) a_j^{k,α}] v_{j−1}^k }
v_j^N = g_j
Denote by
p_j^α = p(x_j, x_j | α),  p_{j+}^α = p(x_j, x_{j+1} | α),  p_{j−}^α = p(x_j, x_{j−1} | α)
the transition probabilities that define the transition matrix A^α.
Matrix notation: v^{k−1} = sup_α (I − Δt A^α) v^k
The explicit solution is given in [1].
47. Algorithm (C++)
Algorithm of the forward scheme
Initialization: ∀j ∈ {0, ..., M}, v_j^N = √x_j
For k from N − 1 down to 0:
    v_0^k = 0
    compute v_j^k := v(t_k, x_j) = sup_{α_i} w(t_k, x_j, α_i):
    for all j in 1, ..., M − 1:
        for each α_i in [α̂ − , α̂ + ]:
            compute (b_j^{α_i})⁺ and (b_j^{α_i})⁻
            solve
            v_j^k = sup_{α_i} { [1 − (Δt/h)|b_j^{α_i}| − (Δt/h²) a_j^{α_i}] v_j^{k+1}
            + [(Δt/h)(b_j^{α_i})⁺ + (1/2)(Δt/h²) a_j^{α_i}] v_{j+1}^{k+1}
            + [(Δt/h)(b_j^{α_i})⁻ + (1/2)(Δt/h²) a_j^{α_i}] v_{j−1}^{k+1} }
    v_M^k = v_{M−1}^k
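As a minimal runnable sketch of this sweep, the code below performs the backward-in-time pass for a single fixed control (the sup over α_i is omitted; it would wrap the update in a loop over candidate controls) and holds Dirichlet values at both boundaries. Coefficients and grid sizes are illustrative.

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>
#include <vector>

// One run of the explicit forward upwind scheme: starting from the
// terminal data v_j^N = g_j, repeatedly apply
//   v_j^{k-1} = (1 - dt/h |b_j| - dt/h^2 a_j) v_j^k
//             + (dt/h b_j^+ + dt/(2h^2) a_j) v_{j+1}^k
//             + (dt/h b_j^- + dt/(2h^2) a_j) v_{j-1}^k.
std::vector<double> forward_scheme(std::vector<double> v,  // terminal data g_j
                                   double (*b)(double), double (*a)(double),
                                   double h, double dt, int n_steps) {
    const int M = static_cast<int>(v.size()) - 1;
    std::vector<double> w(v);
    for (int k = 0; k < n_steps; ++k) {
        for (int j = 1; j < M; ++j) {
            const double x = j * h, bj = b(x), aj = a(x);
            const double bp = std::max(bj, 0.0), bm = std::max(-bj, 0.0);
            w[j] = (1.0 - dt / h * std::fabs(bj) - dt / (h * h) * aj) * v[j]
                 + (dt / h * bp + 0.5 * dt / (h * h) * aj) * v[j + 1]
                 + (dt / h * bm + 0.5 * dt / (h * h) * aj) * v[j - 1];
        }
        v = w;  // move from time t_k to t_{k-1}
    }
    return v;
}

inline double b_zero(double) { return 0.0; }   // illustrative: no drift
inline double a_const(double) { return 0.02; } // illustrative: constant diffusion
```

With b = 0 the update preserves linear data exactly (the second difference of a linear function vanishes), which gives a cheap correctness check; note the CFL-type condition Δt(|b|/h + a/h²) ≤ 1 must hold for the diagonal coefficient to stay nonnegative.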
49. Results
The shapes of the approximated value function and of the explicit solution
are very close at time 0.
A very small difference is observed at the boundary x = x_M.
50. Results
Error in the value function (of order 10⁻³).
The implementation requires a large number of points (the larger N is, the
larger M must be).
51. Results
Control: the results are satisfying.
The error grows from one time step to the next near the boundary of the
state space Ω.
53. The shape of the value function
We can draw the shape of the approximated value function as a function of
time and space, since we store the different values in an Excel file.
54. Backward scheme
The backward upwind scheme
The approximated HJB equation is:
v_j^k = v_j^{k+1} + sup_α { [(Δt/h)(−|b_j^α|) − (Δt/h²) a_j^α] v_j^k
+ [(Δt/h)(b_j^α)⁺ + (1/2)(Δt/h²) a_j^α] v_{j+1}^k
+ [(Δt/h)(b_j^α)⁻ + (1/2)(Δt/h²) a_j^α] v_{j−1}^k }
v_j^N = g_j
(v_M^k − v_{M−1}^k) / h = (p / x_M) v_M^k,  k ∈ 0..N−1, j ∈ 0..M
Denote
p_j^α = (Δt/h)(−|b_j^α|) − (Δt/h²) a_j^α,
p_{j+}^α = (Δt/h)(b_j^α)⁺ + (1/2)(Δt/h²) a_j^α,
p_{j−}^α = (Δt/h)(b_j^α)⁻ + (1/2)(Δt/h²) a_j^α
the transition probabilities that define a Markov chain with transition
matrix A^α.
Matrix notation: sup_α (I + Δt A_h^α) v^{k+1} − v^k = 0
56. Algorithm in Scilab
The Howard algorithm
We set up the Howard algorithm [3][7], which allows us to solve
min_{α∈A} (B(α) x − b), where B(α) is defined by B(α)_ij = B(α_i)_ij = (I + δt A(α_i))_ij.
1. Initialize α^0 in A.
2. Iterate for k ≥ 0:
(i) find x^k ∈ ℝ^N solution of B(α^k) x^k = b.
(ii) α^{k+1} := argmin_{α∈A^N} (B(α) x^k − b).
3. k = k + 1
Note that at each iteration we have to find the control value of α.
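The iteration above is exactly policy iteration. Below is a self-contained sketch on a toy two-state, two-action discounted control problem, in C++ rather than Scilab for consistency with the other code here; in the slides' notation B(α) plays the role of (I − γP(α)) and b the cost vector, and all concrete numbers are illustrative.

```cpp
#include <cassert>
#include <cmath>

// Tiny 2-state, 2-action discounted Markov decision problem.
struct MDP {
    double gamma;      // discount factor
    double P[2][2][2]; // P[a][i][j]: transition probability i -> j under a
    double c[2][2];    // c[a][i]: running cost of action a in state i
};

// Step (i): solve (I - gamma P_pi) x = c_pi, here by Cramer's rule (2x2).
static void evaluate_policy(const MDP& m, const int pi[2], double x[2]) {
    double A[2][2], rhs[2];
    for (int i = 0; i < 2; ++i) {
        for (int j = 0; j < 2; ++j)
            A[i][j] = (i == j ? 1.0 : 0.0) - m.gamma * m.P[pi[i]][i][j];
        rhs[i] = m.c[pi[i]][i];
    }
    const double det = A[0][0] * A[1][1] - A[0][1] * A[1][0];
    x[0] = (rhs[0] * A[1][1] - A[0][1] * rhs[1]) / det;
    x[1] = (A[0][0] * rhs[1] - rhs[0] * A[1][0]) / det;
}

// Howard's algorithm: evaluate, then improve greedily, until stable.
double howard_value_at_0(const MDP& m) {
    int pi[2] = {0, 0};
    double x[2] = {0.0, 0.0};
    for (int it = 0; it < 100; ++it) {
        evaluate_policy(m, pi, x);                 // step (i)
        bool stable = true;
        for (int i = 0; i < 2; ++i) {              // step (ii): argmin over a
            int best = pi[i];
            double best_q = 1e300;
            for (int a = 0; a < 2; ++a) {
                const double q = m.c[a][i]
                    + m.gamma * (m.P[a][i][0] * x[0] + m.P[a][i][1] * x[1]);
                if (q < best_q) { best_q = q; best = a; }
            }
            if (best != pi[i]) { pi[i] = best; stable = false; }
        }
        if (stable) break;
    }
    return x[0];
}
```

In the check below, action 1 in state 0 jumps to a cost-free absorbing state for cost 2, while action 0 loops at cost 1 per step (value 1/(1 − γ) = 10 for γ = 0.9), so the optimal value at state 0 is 2.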
62. Results: error between value functions
Let's illustrate the error between both functions: it is of around 10⁻³.
The error increases near the boundary of the state x; this can be explained
by the boundary conditions used in the model.
63. Results: optimal control α
The shape of the optimal control α compared to the explicit solution.
Same comments regarding the terminal condition imposed on x.
64. Results: error between control solutions
In the Howard algorithm, both Dirichlet-type and then Neumann-type boundary
conditions were used ⇒ Neumann conditions give better results.
66. Introduction to the Markov chain approach
There exist k > 0 and a Markov matrix M_h^α verifying
A_h^α = −β̂ I_h + (1/k)(M_h^α − I_h),  or  M_h^α = I_h + k (A_h^α + β̂ I_h)   (20)
Hence
(M_h^α)_ij = 1 + k (β̂ + (A_h^α)_ii) if i = j,  and (M_h^α)_ij = k (A_h^α)_ij if i ≠ j.
We choose k such that k ≤ 1 / (β̂ + |(A_h^α)_ii|), ∀i = 1, ..., d, which makes
all matrix coefficients (M_h^α)_ij nonnegative, with row sums
Σ_j (M_h^α)_ij = 1 if Neumann,  < 1 if Dirichlet.
(20) can be rewritten as: sup_{α∈A} [ (M_h^α − I_h − β̂ k I_h) v_h + k û_h ] = 0
⇒ the HJB equation of a control problem for a Markov chain with discount
rate β̂, instantaneous cost k û_h and transition matrix M_h^α.
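The construction of M_h^α from A_h^α can be sketched directly. The 3×3 generator used in the check is illustrative, and β̂ is set to 0 there so that rows of A sum to zero and rows of M sum to one (the Neumann case above).

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>
#include <vector>

// Build M = I + k (A + beta I) as in equation (20), choosing
// k <= 1 / (beta + |A_ii|) for every row so that all entries of M
// are nonnegative.
std::vector<std::vector<double>> markovize(
        const std::vector<std::vector<double>>& A, double beta) {
    const int d = static_cast<int>(A.size());
    double k = 1e300;
    for (int i = 0; i < d; ++i)
        k = std::min(k, 1.0 / (beta + std::fabs(A[i][i])));
    std::vector<std::vector<double>> M(A);
    for (int i = 0; i < d; ++i)
        for (int j = 0; j < d; ++j)
            M[i][j] = (i == j) ? 1.0 + k * (beta + A[i][i]) : k * A[i][j];
    return M;
}

// Row sum of M, used to check the (sub)stochastic property.
inline double row_sum(const std::vector<std::vector<double>>& M, int i) {
    double s = 0.0;
    for (double mij : M[i]) s += mij;
    return s;
}
```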
71. Comments
72. Conclusion
Optimal stochastic control: an interesting field of research.
Merton portfolio allocation without/with consumption as classic examples.
Numerical methods (forward and backward schemes, Howard and policy
iteration) approximate the optimal solutions; they must verify stability,
consistency and convergence ⇒ a controlled Markov chain has been used.
The numerical results were satisfying, despite the errors related to the
sophisticated boundary conditions.
The DPP supposes a minimum of smoothness of the value function in order to
apply Itô's formula, which is not always the case ⇒ the viscosity-solution
approach is widely used in finance.
Consider more complicated problems, such as investment problems with
transaction costs (singular optimal control problems): what methods should
be used to model their solutions?
73. References
D. Lamberton and B. Lapeyre,
Une Introduction au Calcul Stochastique Appliquée à la Finance.
Éditions Eyrolles, 1997.
H. Pham.
Continuous-time Stochastic Control and Optimization with Financial Applications.
Springer, 2008.
Jean-Philippe Chancelier and Agnès Sulem.
Méthode numérique en contrôle stochastique.
Le Cermics, 22 February 2005.
H.J. Kushner and P. Dupuis.
Numerical Methods for Stochastic Control Problems in Continuous Time.
Springer Verlag, 1992.
S. Crépey.
Financial Modeling.
Springer, 2013.
http://www.cmap.polytechnique.fr/~touzi/Fields-LN.pdf
http://www.math.fsu.edu/~pgarreau/files/merton.pdf