# Coding and Equity pricing


Giulio Laudani #12 Cod. 20247

APPLIED NUMERICAL FINANCE

Contents:

- Discrete time framework
  - How to compute expected value
  - American options
  - Lattice approach
- Continuous time framework
  - A brief review of the original Black's model
  - Modeling more than one security
  - American options
  - Jump diffusion processes
- Monte Carlo
  - What is it about?
  - A passage through bias and efficiency
  - Discretization procedure
  - Variance reduction techniques

Discrete time framework:

This section is basically the Ortus part; we spend a few words only on what is new or remarkable.

How to compute expected value:

The first cornerstone of finance is the equivalence in value between the price of an asset and its replicating portfolio, where the replicating strategy is a self-financing one (European case). The replicating strategy can be computed by a backward recursion that involves the conditional covariance of the option value with the underlying S1,

$$\Delta(t) = \frac{\mathrm{Cov}_t\big(V(t+1),\,S_1(t+1)\big)}{\mathrm{Var}_t\big(S_1(t+1)\big)},$$

which is also called the delta of the portfolio; it is also the regression coefficient between the option value and the underlying.

The second cornerstone is that the conditional expected value under Q of the discounted option payoff is equal to today's price itself:

$$V(t) = \mathbb{E}^Q\big[e^{-r(T-t)}X \mid \mathcal{F}_t\big].$$

The backward recursion formula exploits (and is equivalent to) the Q-martingality of the discounted value of the European derivative X. Starting from the terminal value V(T) = X, we determine V(t) by backward induction from the value at the following step, i.e. at t+1, for t = T-1, ..., 0. This approach is precious when dealing with American options, because it can be generalized to account for the early exercise premium.
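The backward recursion above can be sketched on a binomial tree. The following is a minimal illustration, not code from the notes: it assumes the Cox-Ross-Rubinstein parameterization, and the American early-exercise comparison (maximum of continuation value and immediate payoff, discussed in the next section) sits behind a flag; all parameter values are illustrative.

```python
import math

def crr_price(S0, K, r, sigma, T, n, payoff, american=False):
    """Price an option on a CRR binomial tree by backward recursion.

    payoff: function of the spot price, e.g. lambda s: max(K - s, 0).
    """
    dt = T / n
    u = math.exp(sigma * math.sqrt(dt))       # up factor
    d = 1.0 / u                               # down factor
    disc = math.exp(-r * dt)
    q = (math.exp(r * dt) - d) / (u - d)      # risk-neutral up probability

    # terminal payoffs V(T) = X
    values = [payoff(S0 * u**j * d**(n - j)) for j in range(n + 1)]

    # backward induction: V(t) = disc * E_Q[V(t+1)]; for American options
    # compare the continuation value with the immediate payoff at each node
    for i in range(n - 1, -1, -1):
        for j in range(i + 1):
            cont = disc * (q * values[j + 1] + (1 - q) * values[j])
            if american:
                cont = max(cont, payoff(S0 * u**j * d**(i - j)))
            values[j] = cont
    return values[0]

# a European put is always worth no more than the American put
eur = crr_price(100, 100, 0.05, 0.2, 1.0, 200, lambda s: max(100 - s, 0))
amr = crr_price(100, 100, 0.05, 0.2, 1.0, 200, lambda s: max(100 - s, 0), american=True)
```

For these illustrative parameters the European value converges to the Black-Scholes put price, while the American value lies strictly above it, the gap being the early exercise premium.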
American Option:

The American option price is

$$V(t) = \sup_{\tau \in \mathcal{T}_{t,T}} \mathbb{E}^Q\big[e^{-r(\tau-t)} f(S_\tau) \mid \mathcal{F}_t\big],$$

where the stopping time $\tau$ is a random variable representing the investor's optimal time to exercise the option before maturity, using the information available up to time t. This expectation is called the Snell envelope and its properties are:

1. It is a super-martingale, hence it must have a decreasing mean, since the early exercise premium loses value over time.
2. The super-martingale must be the lowest one among all the available dominating processes; this is an important condition from the seller's perspective.
3. The stopping time is chosen as the minimum time at which the option value equals the immediate payoff; this condition states that waiting any longer means losing money.

The American option algorithm used in the binomial model is equivalent to the free boundary solution: at each node we take the maximum between the expected present value of continuation and the immediate payoff. A consequence of this pricing formula is that there is no self-financing replicating strategy, since the option may generate intermediate cash flows (this consideration is important for hedging purposes). To solve this problem, together with the usual replicating strategy we introduce a consumption process C(t), which is an increasing (not strictly) process [through this process the writer of the option decides how much he will consume at the beginning of each period, and this strategy is equivalent to the optimal buyer's strategy]. The consumption variation is equal to the decrease in the expected value of the option (this money represents the value that the writer earns if the buyer does not exercise early when he is supposed to).

Markovianity is a useful feature of a price process.
It allows pricing derivative securities whose payoff depends only on the current underlying stock price in a fast way. Hence, instead of computing the entire information structure for a price process S1, we can compute only the tree that describes the evolution of S1: at step t we need t+1 nodes (the evolution up to t-1 plus the two new possible moves, in the binomial case) instead of $2^t$.

A lookback American option, whose payoff depends on the running maximum of the underlying, is not a Markovian process (better: we cannot simply use the simplified tree method seen above); we need to Markovianize the process through the following procedure:

1. Introduce a state vector variable, defined (to have a better understanding) as the running maximum $F(t) = \max_{s \le t} S(s)$.
2. Construct the tree for S and F jointly; the pair (S(t), F(t)) is Markovian, so we can rewrite the option value as a function of S(t) and F(t) only. This method does not make F(t) recombine, however, so the nodes to be considered are many more than in the simple tree: their number grows quadratically (still manageable).
3. Various approximations have been introduced to reduce the time required to compute the option price:
   a. The first one is called the forward shooting grid approach, where we introduce an auxiliary vector representing the running maximum (this method is suitable for Asian options as well).
   b. Then we compute the immediate payoff at each node.
   c. The backward part consists of using the backward pricing formula; here the binomial tree features allow us to say that, in case of an up movement (with probability q), the updated running maximum is $\max(F(t), S(t+1))$, while it stays F(t) in case of a down movement.
   d. We use this state vector to compute the continuation value of the option; however, there could be a mismatch between the updated running maximum and the grid of values F(t+1), so we need to define a selection procedure to proxy the result:
      i. Choose the F(t+1) closest to the updated F(t).
      ii. Choose the two values of F(t+1) which bound the updated one and interpolate.
   e. Check then, node by node, whether the immediate payoff is higher than the expected continuation value.

The price will depend on the algorithm chosen for both the forward and the backward part.

Lattice approach:

We present a possible framework to develop tree analysis for pricing path-dependent options (J. Cox, S.A. Ross and Rubinstein, 1979), where the usual methodology does not provide enough information. Our aim is to add more information to each node of the tree by means of an auxiliary state vector. The state vector is used to capture the specific path-dependent feature of the option contract. To enhance the accuracy of lattice methods without burdening the computational cost, it is also possible to refine the tree representation of the underlying in option-specific regions (S. Figlewski and B. Gao, 1999).

The Adaptive Mesh Model (AMM) sharply reduces the nonlinearity error. The nonlinearity error refers to the fact that when the option value is highly nonlinear with respect to the underlying asset (for instance around the strike at expiration), a uniform refinement of the step size does not efficiently increase accuracy, because much of the computational effort is wasted on unimportant regions. The idea of the AMM is to graft one or more small sections of fine, high-resolution lattice onto a tree with coarser time and price steps, to increase the computational accuracy only in those regions where it is needed. The AMM approach can be adapted to a wide variety of contingent claims.
For some common problems, accuracy increases by several orders of magnitude with no increase in execution time.

Discrete barrier options are often approximated with continuous barrier options (i.e. options where the barrier is monitored continuously in time), by using the closed formula that can be derived in the continuous-time framework. Such an approximation systematically overprices the discrete knock-in option and underprices the discrete knock-out option. To reduce this error one can apply a suitable correction to the option barrier (Broadie, Glasserman, & Kou, 1997): basically we use a suitably higher or lower barrier (depending on the initial position), shifted by the factor $e^{\pm\beta\sigma\sqrt{\Delta t}}$ with $\beta \approx 0.5826$.

Continuous time Framework:

These models are the most used, since daily trading activity happens on a continuous basis. The models discussed in this section are the basic Black model and the more advanced topics of jump diffusion models, mean reversion, and tailoring the pricing to the fat-tailed empirical evidence.

A brief review of Original Black's:

First of all, this model is based on a Gaussian distribution assumption, with a continuous payoff evolution, and we assume that information comes into the market following a filtration rule. The dynamics used to model the risk-free asset is simply a time-dependent function, while the securities are assumed to be driven by a deterministic drift plus a stochastic component (diffusion) which is assumed to be a Brownian motion. (The properties of the motion are: zero mean; variance growing linearly in t; Gaussian increments; and increments over disjoint time intervals independent of each other.)
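As a sketch of these dynamics (not part of the original notes), the risk-neutral geometric Brownian motion $dS = rS\,dt + \sigma S\,dW$ has the exact solution $S(T) = S(0)\exp((r - \sigma^2/2)T + \sigma\sqrt{T}Z)$ with Z standard normal; parameter values below are illustrative.

```python
import math, random

def simulate_gbm_terminal(S0, r, sigma, T, n_paths, seed=0):
    """Draw terminal values of a geometric Brownian motion under Q,
    using the exact lognormal solution of the Black-Scholes SDE."""
    rng = random.Random(seed)
    drift = (r - 0.5 * sigma**2) * T
    vol = sigma * math.sqrt(T)
    return [S0 * math.exp(drift + vol * rng.gauss(0, 1)) for _ in range(n_paths)]

paths = simulate_gbm_terminal(100, 0.05, 0.2, 1.0, 100_000)
mean_ST = sum(paths) / len(paths)   # should approach S0 * exp(r*T)
```

Because the exact solution is sampled, there is no discretization bias here; the only error is the Monte Carlo sampling error discussed later in these notes.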
The presence of the diffusion element makes the pricing equation depend on a stochastic integral, which does not allow using the ordinary calculus solution methods. To solve the equation we transform the process so as to eliminate the dependence of the drift and the diffusion on S(t); we can do that by applying the Itô formula to the logarithm:

$$d\ln S(t) = \Big(\mu - \tfrac{1}{2}\sigma^2\Big)dt + \sigma\,dW(t).$$

Thanks to this trick we can solve the SDE and obtain the security dynamics:

$$S(t) = S(0)\exp\Big(\big(\mu - \tfrac{1}{2}\sigma^2\big)t + \sigma W(t)\Big).$$

Modeling more than one security:

In order to describe a given correlation structure among the log-returns of the risky securities, we employ many risk factors. In particular, a k-dimensional Brownian motion on the filtered probability space (Ω, F, P) is used to represent the riskiness of the market. The classic single-factor approach allows only perfectly correlated assets, hence we need something more powerful to model a non-trivial Var-Cov matrix.

The model consists of defining k independent Brownian motions and, for each security, a diffusion coefficient vector (in the simplest case constant) representing the sensitivity to each of the k Brownian motions:

$$\frac{dS_i(t)}{S_i(t)} = \mu_i\,dt + \sum_{j=1}^{k}\sigma_{ij}\,dW_j(t).$$

From these vectors we obtain the Var-Cov matrix of the whole market; note that the covariance between two securities is the scalar product of their diffusion vectors, while the variance of each is the sum of its squared sensitivity coefficients.

The next step is to define the EMM for all the securities involved: basically we need to find the unique vector θ which defines the price of risk, solving $\sigma\theta = \mu - r\mathbf{1}$. We achieve this result by applying the usual Girsanov theorem and transforming the Brownian motion under P into the motion under Q by the usual drift change, $dW^Q(t) = dW^P(t) + \theta\,dt$.

To solve the SDE for each security we can still apply the Itô formula, adding the cross-derivative terms to account for the presence of more than one risk factor.
The resulting PDE for a claim F on the basket is

$$\frac{\partial F}{\partial t} + \sum_i r S_i \frac{\partial F}{\partial S_i} + \frac{1}{2}\sum_{i,l}\Sigma_{il}\,S_i S_l\,\frac{\partial^2 F}{\partial S_i \partial S_l} = rF, \qquad \Sigma = \sigma\sigma^{\top},$$

obtained from the multi-dimensional Itô formula together with the change of drift above. (We can achieve the same result of modeling the correlation by assuming correlated Brownian motions.)
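To make the k-factor construction concrete, here is a minimal two-asset sketch (not from the notes; all names and parameter values are illustrative): each asset loads on the independent Brownian factors through a row of the Cholesky factor of the target correlation matrix, which is exactly the footnote's remark that correlated Brownian motions give the same result.

```python
import math, random

def chol2(rho):
    """Cholesky factor of the 2x2 correlation matrix [[1, rho], [rho, 1]]."""
    return [[1.0, 0.0], [rho, math.sqrt(1.0 - rho * rho)]]

def simulate_two_assets(S0, r, sigma, rho, T, n_paths, seed=0):
    """Terminal prices of two GBMs driven by one 2-dimensional Brownian
    motion: each asset loads on the independent factors through a row of
    the Cholesky factor, so the log-returns have correlation rho."""
    rng = random.Random(seed)
    load = chol2(rho)
    pairs = []
    for _ in range(n_paths):
        z = (rng.gauss(0, 1), rng.gauss(0, 1))        # independent factors
        spot = []
        for i in range(2):
            w = load[i][0] * z[0] + load[i][1] * z[1]  # correlated draw
            spot.append(S0[i] * math.exp((r - 0.5 * sigma[i] ** 2) * T
                                         + sigma[i] * math.sqrt(T) * w))
        pairs.append(tuple(spot))
    return pairs

pairs = simulate_two_assets((100, 100), 0.05, (0.2, 0.3), 0.6, 1.0, 50_000)
```

The sample correlation of the simulated log-returns recovers the target rho up to sampling error, which is the Var-Cov statement of the text in simulation form.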
The parameters are those of the Q-dynamics. The hedging strategy in this case is similar to the one-dimensional case; it will change depending on the underlying.

American Option:

The most common analytic approaches to state and solve the American option problem in the continuous-time framework are the variational inequality and the free boundary problem.

As a preliminary step we need the concept of super-martingale, since in continuous time we cannot use the backward recursive formula. To convert this concept to continuous time we need to formalize the stopping time τ*: once this event happens we no longer stay in the continuation region and we lose the martingale property; instead we are in the early exercise region, where the discounted process is a super-martingale. Hence the option is equal to the European option in the continuation region and equal to the immediate payoff elsewhere.

The first procedure (the variational inequality) requires a non-positive drift under the risk-neutral measure for the discounted super-martingale payoff, a terminal value anchored to the final payoff, and the condition that only two cases are possible: either we are in the continuation region, where the discounted process is a Q-martingale, or we are in the immediate-payoff region. Writing $\mathcal{L}F = \frac{\partial F}{\partial t} + rS\frac{\partial F}{\partial S} + \frac{1}{2}\sigma^2 S^2\frac{\partial^2 F}{\partial S^2} - rF$, the conditions are:

1. $\mathcal{L}F \le 0$
2. $F(t,S) \ge f(S)$
3. $(\mathcal{L}F)\,\big(F(t,S) - f(S)\big) = 0$
4. $F(T,S) = f(S)$

The differential operator in the first condition is the Itô formula applied to a portfolio short the derivative and long h units of the underlying, imposing that the portfolio is risk-free, hence $h = \partial F/\partial S$.

In the variational inequality problem, as we have seen, the description of the continuation region and the early exercise region is implicit. The variational inequality problem can be tackled with numerical techniques such as finite difference schemes or finite element techniques.

Another way to address the American option problem is to first describe the continuation region and the early exercise region, and then impose the Black-Scholes PDE only on the continuation region. This approach leads to the free boundary problem.
The free boundary is the line dividing the continuation region from the early exercise region. Its features depend on the payoff you are considering and on the parameters of the model. For the put, the free boundary conditions are:

1. $F(t, S^*(t)) = K - S^*(t)$ (value matching)
2. $\frac{\partial F}{\partial S}(t, S^*(t)) = -1$ (smooth pasting)
3. $\mathcal{L}F = 0$ for $S > S^*(t)$ (Black-Scholes PDE on the continuation region)
4. $F(T, S) = (K - S)^+$

We focus on the case of the put option, where the immediate payoff is $f(S) = (K - S)^+$. It can be proved that the American put option price F(t, S) inherits from the payoff function f the convexity with respect to the underlying S and the decreasing monotonicity with respect to S. Moreover, F is decreasing with respect to t.

Basically we want to find the critical value S*(t) which defines, at any given time, the end of the continuation region and the beginning of the early exercise region. The value S*(t) is called the critical price of S at t, and it can be defined as the threshold under which it is optimal to exercise the option at t. Unfortunately, no analytical formula is available to compute S* as a function of time (and of the other parameters of the American option problem); this is why no closed formula is available in the finite-maturity case for plain vanilla put options. (For infinite maturity a closed formula exists.) The critical price evolves as an increasing (convex) function of time and at maturity it coincides with the strike level K.

For the infinite-maturity (perpetual) put the time dependence disappears: the solution in the continuation region is of the form $F(S) = c\,S^{a}$, with immediate exercise below S*. The solution comes from applying the Itô formula to $S^a$ and imposing condition 3, which leads to a quadratic equation in a; of the two possible solutions we should take the negative one, since we want a function decreasing in S. Note that the value of this perpetual option always dominates the value of the finite-maturity one.

Jump diffusion process:

In this section we add to our motion jumps occurring at random times with stochastic amplitude, to model discontinuities in the option payoff. The basis of our study is the original work of Merton (Merton, 1976), later generalized with Lévy processes and marked point processes (Schönbucher, 2003).

The risk-free asset is modeled as in the usual Black's world; the securities are instead modeled as

$$\frac{dS(t)}{S(t^-)} = \mu\,dt + \sigma\,dW(t) + dJ(t), \qquad J(t) = \sum_{i=1}^{N(t)}(Y_i - 1),$$

where Y is a random variable and N(t) is a counting process for the number of jumps up to and including t, right-continuous. This last process is distributed according to a Poisson distribution, with mean and variance equal to λt and marginal probability of occurrence λdt, where λ is called the intensity of the process. (We have chosen this distribution since it ensures independent, stationary increments of size 1, right-continuity and non-decreasing paths; it is possible to model the intensity as a function of time, giving an inhomogeneous process, or as a stochastic function, giving a Cox process.) The J process is a compound Poisson process, where the number of jumps is distributed as in the standard case, but the size is defined by a sequence of i.i.d. random variables, the Y.

Now we need to solve the SDE; first of all we need a better insight into the jump dynamics: before the event the price evolution is equal to the classic Black's dynamics, while at the jump the price becomes $S(t) = S(t^-)\,Y$, so the jump effect prevails over all the other time-evolution effects. Hence the solution is

$$S(t) = S(0)\exp\Big(\big(\mu - \tfrac{1}{2}\sigma^2\big)t + \sigma W(t)\Big)\prod_{i=1}^{N(t)} Y_i.$$

Since all the processes involved, W, J and N, are independent, we can easily compute the first moment and the variance of the solution; in particular $\mathbb{E}[S(t)] = S(0)\,e^{\mu t + \lambda t(\mathbb{E}[Y]-1)}$.
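A minimal simulation of these dynamics (not from the notes; lognormal jump sizes and all parameter values are illustrative assumptions) draws the exact GBM solution and multiplies in a Poisson number of lognormal jumps:

```python
import math, random

def poisson_draw(rng, mean):
    """Knuth-style Poisson sampler by inversion (fine for small means)."""
    limit = math.exp(-mean)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            return k
        k += 1

def simulate_merton_terminal(S0, mu, sigma, lam, a, b, T, n_paths, seed=0):
    """Terminal values of a Merton jump diffusion: the exact GBM solution
    multiplied by N(T) ~ Poisson(lam*T) lognormal jumps Y = exp(a + b*Z)."""
    rng = random.Random(seed)
    out = []
    for _ in range(n_paths):
        # diffusion part: exact lognormal solution of the Black SDE
        x = (mu - 0.5 * sigma**2) * T + sigma * math.sqrt(T) * rng.gauss(0, 1)
        # jump part: sum of the log jump sizes
        for _ in range(poisson_draw(rng, lam * T)):
            x += a + b * rng.gauss(0, 1)
        out.append(S0 * math.exp(x))
    return out

# one jump per year on average, each with mean log-size -0.1
paths = simulate_merton_terminal(100, 0.05, 0.2, 1.0, -0.1, 0.15, 1.0, 20_000)
```

With lam = 0 the simulator collapses to the plain GBM of the previous section; with jumps switched on, the log-return variance rises to $\sigma^2 T + \lambda T(a^2 + b^2)$, the fat-tail effect discussed next.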
It is easy to see that the variance in this case is higher than for the simple lognormal process; this is a good property to better fit the fatter tails of the empirical data. You need to note that E[Y] - 1 = 0 only if each jump leaves the value unchanged with probability 1 (no jump effect). Note that those moments are under P.

This model grants no-arbitrage; however it is incomplete, in fact there exist many equivalent martingale measures. We need to apply Girsanov's theorem to change the probability measure of the jump process (the conditional jump-size distribution is unchanged and the size of each event is unaffected by the change of measure; only the intensity, i.e. the number of jumps, changes; there exist other forms of Girsanov's theorem which allow changing the jump size as well), and jointly we need to change the drift of the Wiener process.

The SDE under Q of the discounted stock then contains a compensated jump term; if we compute the expected value we notice that this last term is a pure-jump martingale under Q, hence its mean is zero. Imposing the drift to be zero (the no-arbitrage requirement) does not give a unique solution, since we have one equation and two parameters (the Wiener drift change and the new jump intensity). Merton proposes to set $\lambda^Q = \lambda$, since in his opinion the jump risk can be perfectly diversified away, hence investors must be neutral on it. By substituting, the drift under Q becomes $r - \lambda\,\mathbb{E}[Y-1]$ and the volatility is unchanged, so to compute the first and second moments we can simply change the drift; as required, each traded security earns the risk-free rate under the chosen EMM Q.

To price options we can use an intuitive approach, based on conditioning on the number n of jumps during the tenor, or we can apply the Itô formula. For the intuitive one, besides the conditioning trick, we assume that Y is lognormal(a, b²); conditional on N(T) = n the stock is again lognormal, so the expected present value of the option payoff is a Poisson-weighted sum of Black-Scholes prices,

$$F = \sum_{n\ge 0} e^{-\lambda' T}\frac{(\lambda' T)^n}{n!}\,BS(S_0, K, T, \sigma_n, r_n), \qquad \lambda' = \lambda\,\mathbb{E}[Y],\quad \sigma_n^2 = \sigma^2 + \frac{n b^2}{T},$$

with per-term rate $r_n = r - \lambda(\mathbb{E}[Y]-1) + n\ln\mathbb{E}[Y]/T$; this last change replaces r so as to fit the BS formula.

The solution of the SDE via the Itô formula is a modified version of the standard one: to the usual terms we add the jump contribution to dF(t); integrating, the jump part becomes a sum over the jump times, and all the other terms are expressed as integrals, since we are looking for the value at a point in time and not the infinitesimal increment. Choosing the usual log transformation we obtain the solution seen above.

Monte Carlo

In this section we discuss three main topics: what an MC simulation is, how to improve efficiency together with the possible drawbacks, and finally some comments on practical examples.

What is it about?
MC simulations are used to estimate quantities whose equations have no analytical solution; basically we use an estimator based on the law of large numbers, i.e. on the convergence, for large samples, of the estimator to the correct value (our original quantity of interest).

Since this is an estimate, it is not a single number: it carries with itself a distribution and an error. That is why we attach a confidence interval to the estimate, which we need to shrink in order to improve the conclusions based on those results. The MC method has a rate of convergence equal to $O(n^{-1/2})$, which is better than solving the integral numerically when we consider high-dimensional problems (more than about 4 dimensions).

To perform a simulation we need to know, or to model, the distribution of the underlying random number we are going to draw; besides the theoretical considerations on what to use, here we discuss how to use it. The inverse function method is a general statement that allows obtaining any distribution starting from the Uniform(0, 1); in fact all computer applications provide a random number generator based on the uniform distribution. This important property allows retrieving all continuous and discrete distributions:

- In the continuous case there is no problem: just find the percentile as a function of U, i.e. $X = F^{-1}(U)$.
- In the discrete case we need to define ranges such that any value of U is assigned the correct probability mass; basically we look for the first value whose cumulative probability covers U (right-extreme inclusion is a convention).

The proof of this relationship is based on the fact that $P(F^{-1}(U) \le x) = P(U \le F(x)) = F(x)$, so $F^{-1}(U)$ has distribution function F.

A passage through Bias and Efficiency:

After that brief introduction we can describe the twin concepts of bias and efficiency. Here we speak of bias referring to the discretization problem (which arises when we have to find the Greeks of an option, i.e. when we need to estimate derivatives/marginal variations with respect to given factors); the plain estimator is by definition unbiased. We define efficiency as a multi-dimensional measure: we look both to reduce the radius of the confidence interval and the time needed to perform the simulation (we usually face a time constraint).
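The inverse function method described above can be sketched in a few lines (a minimal illustration, not from the notes; the exponential target and the probability values are arbitrary examples):

```python
import math, random

def exponential_inverse(u, lam):
    """Continuous case: invert F(x) = 1 - exp(-lam*x) to get X = F^{-1}(U)."""
    return -math.log(1.0 - u) / lam

def discrete_inverse(u, values, probs):
    """Discrete case: return the first value whose cumulative probability
    covers u (right-extreme inclusion convention)."""
    cum = 0.0
    for v, p in zip(values, probs):
        cum += p
        if u <= cum:
            return v
    return values[-1]   # guard against floating-point round-off

rng = random.Random(0)
draws = [exponential_inverse(rng.random(), lam=2.0) for _ in range(100_000)]
mean = sum(draws) / len(draws)   # should approach 1/lam = 0.5
```

The sample mean converging to 1/lam is exactly the law-of-large-numbers statement above, with the usual $O(n^{-1/2})$ error.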
These two objectives play contradictory roles, or better, they are inversely influenced by the same elements; that is why we use the mean square error to improve our estimate:

$$MSE(\hat{\alpha}) = \mathbb{E}\big[(\hat{\alpha}-\alpha)^2\big] = \mathrm{Bias}(\hat{\alpha})^2 + \mathrm{Var}(\hat{\alpha}).$$

Thanks to this mathematical device we can jointly control the bias and variance contributions that reduce the quality of the estimate. We are usually interested in minimizing the variance, except in the case of American options. The two cited elements are the size of the discretization interval h and the number of samples used n, which both contribute to the radius of the confidence interval in the discrete approximation case.

Time efficiency is considered as follows: given a fixed computational budget, the radius is expressed in terms of time per simulation, where the variance is the one produced by the procedure using step h. (In case of stochastic time needed per simulation, as in the barrier option case, we can use the expected time per simulation.)

Discretization procedure:
This is a technique both to estimate path-dependent payoffs and to compute the option Greeks: we simulate both the payoff evolution and the marginal change for a given change in some key factor.

Speaking about the Greeks, there are three possible methodologies:

Finite difference: we approximate the first derivative with respect to the given factor by discretizing its limit definition. The method depends on the size of the marginal increment h considered and on the number of simulations performed. It is a non-consistent approach, since the discretization bias plays a big role; the bias can be minimized by reducing the size of h, but then we need to control the variance explosion problem. There are two possible schemes:

- Forward difference: $\hat{\Delta}_F = \frac{F(\theta + h) - F(\theta)}{h}$. The bias in this case decreases only linearly in h, regardless of the number of Taylor expansion terms in the proxy used: the bias is $O(h)$.
- Central difference: $\hat{\Delta}_C = \frac{F(\theta + h) - F(\theta - h)}{2h}$. The bias goes to zero faster than in the forward case, $O(h^2)$ if the function is sufficiently many times continuously differentiable; however this procedure is more time-demanding, since we need to compute two marginal changes.
- Speaking about the variance effect, we need to consider two possible estimation procedures: independent sampling, or the same seed for both samples. (With independent draws, both for the marginal increase and for the original, the variance of the difference quotient grows like $1/h^2$ as h shrinks; common random numbers tame this explosion.)

The pathwise method consists of determining the sensitivity by differentiating the payoff with respect to the parameter you are interested in, swapping the expectation with the derivative operator:

$$\frac{\partial}{\partial\theta}\,\mathbb{E}\big[f(S(T))\big] = \mathbb{E}\Big[\frac{\partial f}{\partial S(T)}\,\frac{\partial S(T)}{\partial\theta}\Big];$$

basically we estimate the sample mean of that quantity.

- This method is unbiased; however it can be applied only under given hypotheses, i.e. sufficient smoothness of the payoff (digital options do not allow using this methodology; note that requiring full differentiability is too much, Lipschitz continuity with almost-everywhere differentiability suffices).
- For practical use, the estimator is the product $\frac{\partial f}{\partial S(T)}\cdot\frac{\partial S(T)}{\partial\theta}$, where S(T) is the underlying and θ is the parameter with respect to which we are computing the derivative.
- The first factor is computed from the value assumed by the payoff derivative at maturity; in the case of a European call we have $\frac{\partial f}{\partial S(T)} = \mathbf{1}_{\{S(T) > K\}}$, since $f(S) = (S - K)^+$.
- The second factor is the derivative of the underlying dynamics; in the case of the European call with respect to S(0) (the delta), for a geometric Brownian motion $\frac{\partial S(T)}{\partial S(0)} = \frac{S(T)}{S(0)}$.
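The finite-difference and pathwise delta estimators described above can be compared side by side. A minimal sketch (not from the notes; parameter values are illustrative): the central difference reuses the same normal draws for both bumped prices (the same-seed variant), and the pathwise estimator averages $e^{-rT}\mathbf{1}_{\{S_T>K\}}S_T/S_0$.

```python
import math, random

def gbm_terminal(S0, r, sigma, T, zs):
    """Map standard normal draws to terminal GBM prices."""
    drift, vol = (r - 0.5 * sigma**2) * T, sigma * math.sqrt(T)
    return [S0 * math.exp(drift + vol * z) for z in zs]

def delta_estimators(S0, K, r, sigma, T, n, h=1.0, seed=0):
    """Central-difference delta with common random numbers vs pathwise
    delta for a European call under GBM."""
    rng = random.Random(seed)
    zs = [rng.gauss(0, 1) for _ in range(n)]
    disc = math.exp(-r * T)
    # same-seed (common random numbers) central difference
    up = gbm_terminal(S0 + h, r, sigma, T, zs)
    dn = gbm_terminal(S0 - h, r, sigma, T, zs)
    fd = sum(max(u - K, 0.0) - max(d - K, 0.0) for u, d in zip(up, dn))
    fd *= disc / (2.0 * h * n)
    # pathwise: disc * 1{S_T > K} * S_T / S0
    base = gbm_terminal(S0, r, sigma, T, zs)
    pw = disc * sum(s / S0 for s in base if s > K) / n
    return fd, pw

fd, pw = delta_estimators(100, 100, 0.05, 0.2, 1.0, 200_000)
```

Both estimates land on the Black-Scholes delta N(d1); the common random numbers are what keep the finite-difference variance from exploding as h shrinks.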
- In the European case the delta estimator is therefore $e^{-rT}\,\mathbf{1}_{\{S(T)>K\}}\,\frac{S(T)}{S(0)}$; if we want to estimate the Vega we need to change just the second factor.
- This method can be applied to any diffusion process by freezing the coefficients path-wise (Euler discretization).

The likelihood ratio method has been introduced to overcome the limits of the previous method, hence it is more general. It consists of differentiating the simulated payoff density, which is far smoother than the original payoff, using the continuous definition of the expected value:

$$\frac{\partial}{\partial\theta}\int f(y)\,g_\theta(y)\,dy = \int f(y)\,\frac{\partial g_\theta(y)/\partial\theta}{g_\theta(y)}\,g_\theta(y)\,dy = \mathbb{E}\Big[f(Y)\,\frac{\partial \ln g_\theta(Y)}{\partial\theta}\Big],$$

where $g_\theta$ is the density function of Y for a fixed parameter θ. This estimator is consistent and unbiased, and extendable to the multidimensional case. In concrete terms, to apply this method we first need to find the risk-neutral density of the payoff-relevant variable (in the European option case it is tied to the d1 term). Here is an example for the delta of a European call:

- At first compute the density function with respect to the parameter, which is S0: under Black-Scholes, $\ln S(T)$ is normal with mean $\ln S_0 + (r - \sigma^2/2)T$ and variance $\sigma^2 T$.
- Then compute the derivative of g(x) with respect to S0; the resulting score is $\frac{\partial \ln g}{\partial S_0} = \frac{\ln(x/S_0) - (r - \sigma^2/2)T}{S_0\,\sigma^2 T}$.

This method can be used in a multidimensional world, as well as in path-dependent option estimation, where Y is the vector of one-dimensional random variables with the same density g(x).

Variance reduction technique:

Efficiency is an important goal; here we describe the most important techniques:

- Antithetic variates: really easy, it consists of using, for each simulation, the given percentile and its opposite (U and 1-U, hence Z and -Z), so that the two draws have the same distribution but are not independent: they are negatively correlated. The variance of the averaged pair is smaller.
- Control variates: based on using the error in the estimate of a known quantity to reduce the error in the estimate of the unknown one. We use the combination of the known variable X and the unknown one Y, $\hat{Y}(b) = \bar{Y} - b(\bar{X} - \mathbb{E}[X])$, as the estimator.
  - This estimator is unbiased for any fixed b.
  - So we need to choose the parameter b to minimize the new estimator's variance, so as to ensure $\mathrm{Var}(\hat{Y}(b)) \le \mathrm{Var}(\bar{Y})$.
This method reduces the variance if the control variate is correlated with the unknown quantity; the sign of the correlation does not matter, only its size, the higher the better, with the trivial requirement that the expectation of the control is known.
  - If we jointly estimate b and the payoff on the same sample we introduce a bias, since the two estimates are correlated. To solve this issue we need to run two independent simulations: the first regressing Y on X to obtain b (it converges to the correct value b*), and the second running the simulation for the estimator itself.
  - The optimal b comes from minimizing $\mathrm{Var}(\bar{Y}) - 2b\,\mathrm{Cov}(\bar{X},\bar{Y}) + b^2\,\mathrm{Var}(\bar{X})$; we can compute the FOC, or just notice that it is a parabola in b, so the vertex is the minimum as well: $b^* = \mathrm{Cov}(X, Y)/\mathrm{Var}(X)$.
- Matching the underlying asset: the key idea is to match the moments of the underlying asset to reduce the risk of mispricing derivatives. There are two possibilities, both of them assuming a geometric Brownian motion:
  - Simple moment matching: rescale the simulated values so that the first sample moment matches the theoretical one (which grants a positive payoff); matching higher moments this way is hard. The correction can be multiplicative, $\tilde{S}_i = S_i\,\mathbb{E}[S]/\bar{S}$, or additive, $\tilde{S}_i = S_i + \mathbb{E}[S] - \bar{S}$, which however does not preserve positivity. Note that the first approach does not grant that the corrected values are distributed as the original ones, while the second does.
  - Weighted MC: weight the paths $S_i(T)$, for i = 1, ..., n, with weights $w_i$ such that the moments of S are matched, and then use the same weights to estimate the expected payoff, $\hat{F} = \sum_i w_i\,f(S_i(T))$. Those weights are chosen to minimize the (negative entropy) distance from the uniform weights, $\sum_i w_i \ln(n w_i)$, under the moment-matching constraint.
    - Basically we are forcing the estimator to have the same moments as the theoretical distribution.
    - We need to write the Lagrangian and find the FOC; the resulting weights are exponential in the matched statistic, and the Lagrange multiplier can be read as a risk-aversion coefficient ν.
- Importance sampling (weighted MC): we want to increase the importance of the paths of f(X) that have a greater impact on determining the expected value. We proceed to choose the weights as follows:
  - At first we write the target as a continuous mean, $\alpha = \int f(x)\,p(x)\,dx$.
  - We apply the Radon-Nikodym derivative to change the density measure: under the new measure, $\alpha = \int f(x)\,\frac{p(x)}{g(x)}\,g(x)\,dx$.
  - The new estimator is the sample mean of $f(X_i)\,p(X_i)/g(X_i)$ with the $X_i$ drawn from g; it converges by the strong law of large numbers, hence it is unbiased.
  - Now we want to find the g(x) that minimizes the variance; we might choose $g(x) = f(x)\,p(x)/a$, where a is the expected value of f(X). However we cannot do that, since we do not know the distribution ex ante (a is exactly the quantity we are estimating); we only know that the optimal g(x) is proportional to $f(x)\,p(x)$. We can instead apply an exponential twisting, $g_\theta(x) \propto e^{\theta x}\,p(x)$, a rescaling family which depends on only one parameter.
    - For a normal p, the log of the moment generating function is a parabola, which simplifies the computation (first derivatives are easy to match), and the twisted distribution is again a Normal. The twist is designed to shift the mean, e.g. decreasing it before a key time and increasing it after, to push the paths closer to the significant ones.
    - The new target function is the weighted payoff under the twisted measure; note that the multi-dimensional case is the one used in practice: there will be a g(x) for each period considered.
    - The θ is chosen depending on the underlying dynamics, and it will change depending on the event to match; the parameter is computed from the FOC.
There will be two equations, one for each parameter; by exploiting the properties of the normal distribution we have $X = Z_1 + \dots + Z_n$ and that, under the g measure, the new variable X is distributed as a normal with the same variance but a different (shifted) mean, $\mu' = \mu + \theta\sigma^2$.
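As a closing illustration of the variance-reduction ideas above, here is a minimal sketch (not from the notes; parameter values are illustrative) comparing plain MC with antithetic variates for a European call under GBM: each draw z is paired with -z and the two discounted payoffs are averaged, which cuts the variance for this monotone payoff.

```python
import math, random

def call_price_mc(S0, K, r, sigma, T, n, antithetic=False, seed=0):
    """Plain vs antithetic-variates MC price of a European call under GBM.
    Returns the sample mean and the per-sample variance of the estimator."""
    rng = random.Random(seed)
    disc = math.exp(-r * T)
    drift = (r - 0.5 * sigma**2) * T
    vol = sigma * math.sqrt(T)
    samples = []
    for _ in range(n):
        z = rng.gauss(0, 1)
        p1 = disc * max(S0 * math.exp(drift + vol * z) - K, 0.0)
        if antithetic:
            # antithetic pair: same distribution, negative correlation
            p2 = disc * max(S0 * math.exp(drift - vol * z) - K, 0.0)
            samples.append(0.5 * (p1 + p2))
        else:
            samples.append(p1)
    mean = sum(samples) / n
    var = sum((x - mean) ** 2 for x in samples) / (n - 1)
    return mean, var

plain_mean, plain_var = call_price_mc(100, 100, 0.05, 0.2, 1.0, 100_000)
anti_mean, anti_var = call_price_mc(100, 100, 0.05, 0.2, 1.0, 100_000, antithetic=True)
```

Both estimates converge to the same Black-Scholes price, but the antithetic per-sample variance is visibly smaller, which is the whole point of the technique.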