Consider a portfolio with a one-day VAR of $1 million. Assume that the market is trending, with an autocorrelation of 0.1. Under this scenario, what would you expect the two-day VAR to be?
Suppose we have a portfolio of $10 million in shares of Microsoft. We want to calculate VAR at the 99% confidence level over a 10-day horizon. The volatility of Microsoft is 2% per day. Calculate the VAR.
Now consider a combined portfolio of AT&T and Microsoft shares. Assume the returns on the two shares have a bivariate normal distribution with a correlation of 0.3. What is the portfolio VAR?
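Under the delta-normal approach, all three exercises reduce to a few lines of arithmetic. The Microsoft figures below come from the question; the AT&T position is not specified in the text, so the $5 million size and 1% daily volatility are illustrative assumptions only.

```python
from statistics import NormalDist

# z-score for the 99% confidence level (one-tailed)
z99 = NormalDist().inv_cdf(0.99)  # approx. 2.326

# Q2: $10M Microsoft position, 2% daily volatility, 10-day horizon
msft_var = 10_000_000 * 0.02 * z99 * 10 ** 0.5  # approx. $1.47M

# Q3: combined portfolio. The AT&T figures ($5M at 1% daily volatility)
# are assumed for illustration, not taken from the question.
att_var = 5_000_000 * 0.01 * z99 * 10 ** 0.5
rho = 0.3
port_var = (msft_var**2 + att_var**2 + 2 * rho * msft_var * att_var) ** 0.5

# Q1: with autocorrelation rho = 0.1, the variance of the two-day return
# is sigma^2 * (2 + 2*rho) rather than 2 * sigma^2, so
two_day_var = 1_000_000 * (2 * (1 + 0.1)) ** 0.5  # approx. $1.48M
```

Note that the combined VAR is less than the sum of the individual VARs whenever the correlation is below 1, which is the diversification effect.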
Mapping: if the portfolio consists of a large number of instruments, it would be too complex to model each instrument separately.
Instruments are replaced by positions on a limited number of risk factors.
Local valuation methods make use of the valuation of the instruments at the current point, along with the first and perhaps the second partial derivatives.
The portfolio is valued only once.
Full valuation methods reprice the instruments over a broad range of values for the risk factors.
The Monte Carlo Simulation Method is similar to the historical simulation, except that movements in risk factors are generated by drawings from some pre-specified distribution.
The risk manager samples pseudo random numbers from this distribution and then generates pseudo-dollar returns as before.
Finally, the returns are sorted to produce the desired VAR.
This method uses computer simulations to generate random price paths.
Monte Carlo methods are by far the most powerful approach to VAR.
They can account for a wide range of risks, including price risk, volatility risk, fat tails, extreme scenarios and complex interactions.
Nonlinear exposures and complex pricing patterns can also be handled.
Monte Carlo analysis can deal with the time decay of options, daily settlements and associated cash flows, and the effect of pre-specified trading or hedging strategies.
The Monte Carlo approach requires users to make assumptions about the stochastic process and to understand the sensitivity of the results to these assumptions.
Different random numbers will lead to different results.
A large number of iterations may be needed to converge to a stable VAR measure.
When all the risk factors have a normal distribution and exposures are linear, the method should converge to the delta-normal VAR.
The Monte Carlo approach is computationally quite demanding.
It requires marking to market the whole portfolio over a large number of realisations of underlying random variables.
To speed up the process, methods have been devised to break the link between the number of Monte Carlo draws and the number of times the portfolio is repriced.
In the grid Monte Carlo approach, the portfolio is exactly valued over a limited number of grid points.
For each simulation, the portfolio is valued using a linear interpolation from the exact values at adjoining grid points.
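A minimal sketch of the grid idea, where a made-up option-like payoff stands in for the expensive exact repricing of a real portfolio:

```python
import numpy as np

rng = np.random.default_rng(42)

def exact_price(s):
    """Stand-in for an expensive exact repricing of the portfolio at
    underlying price s (illustrative option-like payoff, not a real model)."""
    return np.maximum(s - 100.0, 0.0) + 0.1 * s

# Reprice exactly only at a limited number of grid points...
grid = np.linspace(50.0, 150.0, 11)
grid_values = exact_price(grid)

# ...then value each of the many simulated scenarios by linear
# interpolation between the exact values at adjoining grid points.
scenarios = rng.normal(100.0, 10.0, size=100_000)
approx_values = np.interp(scenarios, grid, grid_values)
```

The number of exact valuations (11 here) is now fixed, however many Monte Carlo draws are used; the cost is interpolation error near any kinks in the payoff.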
The first and most crucial step consists of choosing a particular stochastic model for the behaviour of prices.
A commonly used model in Monte Carlo simulation is the geometric Brownian motion model, which assumes that movements in the market price are uncorrelated over time and that small movements in prices can be described by

dS_t = μ_t S_t dt + σ_t S_t dz

where dz is a random variable distributed normally with mean zero and variance dt.
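A sketch of the resulting simulation for a single risk factor. The price, drift, volatility, horizon and path count below are illustrative assumptions, not figures from the text; the discretisation uses the exact log-normal step implied by geometric Brownian motion.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameters (assumptions): current price, daily drift,
# daily volatility, 10-day horizon, number of simulated paths.
s0, mu, sigma = 100.0, 0.0, 0.02
horizon, n_paths = 10, 100_000

# Discretised GBM with dt = 1 day:
# S_{t+1} = S_t * exp((mu - sigma^2 / 2) + sigma * dz)
dz = rng.standard_normal((n_paths, horizon))
log_paths = np.cumsum((mu - 0.5 * sigma**2) + sigma * dz, axis=1)
terminal = s0 * np.exp(log_paths[:, -1])

# Sort the simulated dollar returns and read off the 99% quantile loss.
pnl = terminal - s0
var_99 = -np.quantile(pnl, 0.01)
```

With these parameters the 10-day 99% VAR comes out near 14% of the position, close to the delta-normal figure, as expected for a linear exposure with normal shocks.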
Sample along the paths that are most important to the problem at hand.
If the goal is to measure a tail quantile accurately, there is no point in doing simulations that will generate observations in the centre of the distribution.
To increase the accuracy of the VAR estimator, we can partition the simulation region into two or more zones.
An appropriate number of observations is drawn from each region.
If the stochastic process chosen for the price is unrealistic, so will be the estimate of VAR.
For example, the geometric Brownian motion model adequately describes the behaviour of stock prices and exchange rates but not that of fixed income securities.
In Brownian motion models, price shocks are never reversed and prices move as a random walk.
This cannot be the price process for default free bond prices which must converge to their face value at expiration.
VAR Applications
- Passive (reporting risk): disclosure to shareholders, management reports, regulatory requirements
- Defensive (controlling risks): setting risk limits
- Active (allocating risk): performance evaluation, capital allocation, strategic business decisions
Centralization makes sense for credit risk management too.
A financial institution may have myriad transactions with the same counterparty, coming from various desks such as currencies, fixed income, commodities and so on.
Even though all the desks may have a reasonable exposure when considered on an individual basis, these exposures may add up to an unacceptable risk.
Also, with netting agreements, the total exposure depends on the net current value of contracts covered by the agreements.
All these steps are not possible in the absence of a global measurement system.
EVT extends the central limit theorem, which deals with the distribution of the average of independently and identically distributed variables drawn from an unknown distribution, to the distribution of their tails.
The EVT approach is useful for estimating tail probabilities of extreme events.
For very high confidence levels (>99%), the normal distribution generally underestimates potential losses.
A properly working model would still produce two to three exceptions a year.
But the existence of clusters of exceptions indicated that something was seriously wrong.
Credit Suisse reported 11 exceptions at the 99% confidence level in the third quarter, Lehman Brothers three at 95%, Goldman Sachs five at 95%, Morgan Stanley six at 95%, Bear Stearns 10 at 99% and UBS 16 at 99%.
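These counts can be put in perspective with a simple binomial calculation. Assuming roughly 63 trading days in a quarter and independent daily outcomes (both simplifying assumptions), a correct 99% model produces well under one exception per quarter, and 11 exceptions is astronomically unlikely.

```python
from math import comb

def prob_at_least(k, n, p):
    """P(X >= k) for X ~ Binomial(n, p): probability of seeing at least
    k exceptions in n days if the model's exception rate p is correct."""
    return sum(comb(n, i) * p**i * (1 - p) ** (n - i) for i in range(k, n + 1))

# A correct 99% model over a ~63-day quarter and a ~250-day year:
quarterly_expected = 63 * 0.01    # approx. 0.63 exceptions per quarter
annual_expected = 250 * 0.01      # 2.5 -- the "two to three a year" above

# Probability of 11 or more exceptions in a quarter under a correct model
p_11_or_more = prob_at_least(11, 63, 0.01)
```

Under these assumptions the probability of Credit Suisse's 11 exceptions is effectively zero, so the cluster is evidence against the model rather than bad luck.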
With the benefit of hindsight, the type of VAR model that would actually have worked best in the second half of 2007 would most likely have been one driven by a frequently updated short data history, or one that weights more recent observations more heavily than more distant ones.
In the wake of the recent credit crisis, there is a strong case for increasing the frequency of updating.
Monthly, quarterly or even weekly updating of the data series would improve the responsiveness of the model to a sudden change of conditions.
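One common way to weight recent observations more heavily is an exponentially weighted moving average (EWMA) variance estimate in the RiskMetrics style. The decay factor 0.94 is the classic daily choice; the simulated return series below, with volatility doubling halfway through, is purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(7)

# Simulated daily returns whose volatility doubles halfway through
# the sample (1% for 250 days, then 2% for 250 days).
returns = np.concatenate([rng.normal(0, 0.01, 250),
                          rng.normal(0, 0.02, 250)])

# EWMA variance: each day's variance is a blend of yesterday's variance
# and the latest squared return, so recent data dominate.
lam = 0.94
var = returns[0] ** 2
for r in returns[1:]:
    var = lam * var + (1 - lam) * r**2
ewma_vol = var ** 0.5            # tracks the recent 2% regime

# An equally weighted estimate over the whole year averages the two
# regimes and so reacts far more slowly to the change in conditions.
flat_vol = returns.std()
```

After the regime change, the EWMA estimate sits near the new 2% level while the equally weighted estimate is still dragged down by the old 1% data, which is exactly the responsiveness argument made above.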