This document presents the pricing, hedging, and risk management of a portfolio containing two basket Asian options. It first describes the contractual terms of the two options and the stocks in their respective baskets. It then outlines the necessary preliminary computations, including bootstrapping the discount curve, determining reset dates, and analyzing historical stock data. The document models the stock price dynamics under both Normal Inverse Gaussian (NIG) and Geometric Brownian Motion (GBM) and prices the options using Monte Carlo simulation. It further examines the Greeks, hedging strategy, and calculates Value at Risk both with and without hedging using different approximations.
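As an illustration of the Monte Carlo pricing step described above, the following sketch prices an arithmetic-average basket Asian call under GBM. The contract terms, parameter names, and flat discount rate are hypothetical simplifications, not the portfolio's actual terms:

```python
import numpy as np

def basket_asian_call_mc(s0, sigma, corr, weights, strike, r, reset_dates,
                         n_paths=50_000, seed=0):
    """Monte Carlo price of an arithmetic-average basket Asian call under GBM.

    s0, sigma: per-stock spots and volatilities; corr: correlation matrix;
    reset_dates: averaging dates in years. All names are illustrative.
    """
    rng = np.random.default_rng(seed)
    s0 = np.asarray(s0, dtype=float)
    sigma = np.asarray(sigma, dtype=float)
    weights = np.asarray(weights, dtype=float)
    reset_dates = np.asarray(reset_dates, dtype=float)
    chol = np.linalg.cholesky(np.asarray(corr, dtype=float))  # correlate drivers
    dt = np.diff(np.concatenate(([0.0], reset_dates)))
    log_s = np.tile(np.log(s0), (n_paths, 1))
    avg = np.zeros(n_paths)
    for h in dt:
        z = rng.standard_normal((n_paths, len(s0))) @ chol.T
        log_s += (r - 0.5 * sigma**2) * h + sigma * np.sqrt(h) * z
        avg += (np.exp(log_s) @ weights) / len(reset_dates)  # running basket average
    payoff = np.maximum(avg - strike, 0.0)
    return float(np.exp(-r * reset_dates[-1]) * payoff.mean())
```

Replacing the Gaussian increments with NIG increments, as the document does, changes only the sampling line.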
CVA In Presence Of Wrong Way Risk and Early Exercise - Chiara Annicchiarico, ... - Michele Beretta
We will show how to calibrate the main parameter of the model and how we used it to evaluate the CVA and the CVAW of a one-derivative portfolio with the possibility of early exercise.
This document provides an introduction to statistical modeling of financial time series. It begins with concepts like arithmetic and geometric returns that are used to analyze changes in financial prices over time. It then discusses common time series models like the random walk model and autoregressive models. Subsequent sections cover modeling volatility with GARCH models, analyzing return distributions, building multivariate models, and applications like forecasting and risk management. The overall aim is to help practitioners apply statistical methods to quantitatively analyze and model financial time series data.
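The arithmetic and geometric return concepts mentioned above can be computed as follows (a minimal sketch; function names are illustrative):

```python
import numpy as np

def arithmetic_returns(prices):
    """Period-over-period arithmetic (simple) returns: P_t / P_{t-1} - 1."""
    p = np.asarray(prices, dtype=float)
    return p[1:] / p[:-1] - 1.0

def geometric_returns(prices):
    """Continuously compounded (log) returns: ln(P_t / P_{t-1})."""
    p = np.asarray(prices, dtype=float)
    return np.diff(np.log(p))
```

Log returns are often preferred in time series modeling because they sum across periods: the sum of the log returns equals the log return over the whole horizon.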
Peter Warken - Effective Pricing of Cliquet Options - Master's thesis, 12/2015 - Peter Warken
This thesis examines pricing methods for cliquet options, a type of path-dependent option linked to equity indices. The author develops a semi-closed form pricing formula for cliquet options in a Black-Scholes market and compares it to Monte Carlo simulation. The impact of stochastic volatility and interest rates on pricing is also examined. Numerical experiments illustrate how market parameters affect cliquet option prices. The pricing approach is also applied to similar products like sum cap contracts.
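A minimal Monte Carlo sketch of a cliquet-style payoff under Black-Scholes dynamics is shown below. The payoff here, the sum of locally floored and capped period returns, is a common textbook variant and not necessarily the thesis's exact contract:

```python
import numpy as np

def cliquet_mc(r, sigma, n_periods, local_floor, local_cap,
               n_paths=100_000, seed=0):
    """Monte Carlo price (per unit notional) of a cliquet paying the sum of
    locally floored/capped period returns, with yearly resets under
    Black-Scholes dynamics. Contract terms are illustrative."""
    rng = np.random.default_rng(seed)
    dt = 1.0                                   # one reset per year
    z = rng.standard_normal((n_paths, n_periods))
    # lognormal period returns under the risk-neutral measure
    period_ret = np.exp((r - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z) - 1.0
    payoff = np.clip(period_ret, local_floor, local_cap).sum(axis=1)
    return float(np.exp(-r * n_periods) * payoff.mean())
```

Because each period return is clipped independently, the price depends strongly on the cap level, which is the smile sensitivity the thesis studies.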
Hierarchical Deterministic Quadrature Methods for Option Pricing under the Ro... - Chiheb Ben Hammouda
Seminar talk at École des Ponts ParisTech about our recently published work "Hierarchical adaptive sparse grids and quasi-Monte Carlo for option pricing under the rough Bergomi model". - Link of the paper: https://www.tandfonline.com/doi/abs/10.1080/14697688.2020.1744700
This document discusses numerical methods for pricing financial derivatives. It covers discrete and continuous time frameworks, American and path-dependent options, and Monte Carlo simulation. The key points are:
1) Discrete models compute expected value through backward recursion on a lattice, allowing early exercise of American options. Continuous models generalize Black-Scholes.
2) Path-dependent options like lookbacks require Markovianization by introducing an auxiliary state variable. Lattice methods can be refined non-uniformly using adaptive meshing.
3) Monte Carlo simulation prices derivatives through discretization and sampling, with techniques to reduce variance like control variates.
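Point 1 above, backward recursion on a lattice with early exercise, can be sketched for an American put on a CRR binomial tree (parameters and step count are illustrative):

```python
import math

def american_put_crr(s0, strike, r, sigma, maturity, steps=200):
    """Price an American put by backward recursion on a CRR binomial lattice.

    At each node we take the maximum of the continuation value (discounted
    risk-neutral expectation) and the early-exercise payoff."""
    dt = maturity / steps
    u = math.exp(sigma * math.sqrt(dt))
    d = 1.0 / u
    q = (math.exp(r * dt) - d) / (u - d)       # risk-neutral up probability
    disc = math.exp(-r * dt)
    # payoffs at maturity
    values = [max(strike - s0 * u**j * d**(steps - j), 0.0) for j in range(steps + 1)]
    for i in range(steps - 1, -1, -1):
        for j in range(i + 1):
            cont = disc * (q * values[j + 1] + (1 - q) * values[j])
            exercise = max(strike - s0 * u**j * d**(i - j), 0.0)
            values[j] = max(cont, exercise)    # early exercise allowed
    return values[0]
```

Dropping the `max(cont, exercise)` step (keeping only `cont`) recovers the European price, which shows directly why the lattice handles American exercise so naturally.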
Solution to Black-Scholes P.D.E. via Finite Difference Methods (MATLAB) - Fynn McKay
A simple implementation of numerical analysis to solve the Black-Scholes PDE via finite difference methods for the fair price of a European option.
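The original is in MATLAB; a minimal explicit finite-difference sketch of the same idea in Python might look as follows. Grid sizes are illustrative choices made small enough for the explicit scheme to be stable:

```python
import numpy as np

def bs_explicit_fd(s0, strike, r, sigma, maturity,
                   s_max=300.0, n_s=300, n_t=20_000):
    """Explicit finite-difference solution of the Black-Scholes PDE for a
    European call, stepping backward from the payoff at maturity."""
    ds = s_max / n_s
    dt = maturity / n_t                        # small dt keeps the scheme stable
    s = np.linspace(0.0, s_max, n_s + 1)
    v = np.maximum(s - strike, 0.0)            # terminal payoff
    j = np.arange(1, n_s)
    a = 0.5 * dt * (sigma**2 * j**2 - r * j)   # standard explicit coefficients
    b = 1.0 - dt * (sigma**2 * j**2 + r)
    c = 0.5 * dt * (sigma**2 * j**2 + r * j)
    for step in range(n_t):
        tau = (step + 1) * dt                  # time to maturity after this step
        v[1:-1] = a * v[:-2] + b * v[1:-1] + c * v[2:]
        v[0] = 0.0                             # boundary at S = 0
        v[-1] = s_max - strike * np.exp(-r * tau)  # deep in-the-money boundary
    return float(np.interp(s0, s, v))
```

An implicit or Crank-Nicolson scheme would allow far fewer time steps at the cost of solving a tridiagonal system per step.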
The Effectiveness of Interest Rate Swaps - Roy Meekel
This master's thesis analyzes the effectiveness of interest rate swaps for hedging interest rate risk in a pension fund portfolio. The author, Roy Meekel, uses yield curve simulation to evaluate how well an interest rate swap portfolio hedges the interest rate risk arising from a duration mismatch between a fictional pension fund's assets and liabilities. Three models for simulating yield curves are analyzed: a basic model, an adjusted-lambda model, and a modified data model that incorporates an ultimate forward rate to reduce volatility of rates at long maturities. The results of 10,000 yield curve simulations for each model are used to assess how effectively the interest rate swaps hedge interest rate risk for the pension fund.
Abstract:
Banks are often confronted with the situation where they have to construct a portfolio from scratch. To decide which assets to use so that the portfolio makes a profit with as little risk as possible, we have developed an engine that allows users to import assets, check their characteristics, and then generate an optimal portfolio. The portfolio has been optimized so that, from 2008 to 2018, it carries the lowest possible risk even in high-volatility phases, and the Sharpe ratio is improved by rebalancing the assets. For optimization purposes, various rolling windows were used, which performed differently. What stood out was that the Sharpe ratio improved by almost 30% through the use of a specific method. The paper shows that the Sharpe ratio of the portfolio can best be improved by taking assets from different asset allocation classes and minimizing volatility by diversifying the portfolio as much as possible.
This document is a project report submitted by Stephen Arthur Bradley that empirically calculates an optimal hedging method. It contains an acknowledgment of sources, table of contents, abstract, and sections on put call parity, volatility modeling using historical and implied methods, the Greeks (delta, gamma, vega), Black-Scholes model assumptions and equations, Heston and GARCH models, and a performance comparison of different hedging methods using these models. Code for delta hedging using Black-Scholes, Heston, and GARCH models is included in the appendices.
The document presents a methodology for removing serial correlation from hedge fund return time series data in order to determine the "true" underlying returns. It describes the Okunev White model, which can eliminate autocorrelation of any order from a time series. The document then applies this model to various hedge fund indices, finding significant reductions in autocorrelation and changes to the distributions and risk measures of the returns. Key impacts included a right shift of negatively skewed distributions and reduced kurtosis, as well as lower values for risk ratios like the Sharpe ratio.
EAD Parameter: A stochastic way to model the Credit Conversion Factor - Genest Benoit
This white paper aims at estimating credit risk by modelling the Credit Conversion Factor (CCF) parameter related to the Exposure-at-Default (EAD). The estimation is performed using stochastic processes rather than the usual statistical methodologies (such as classification trees or GLMs).
Our paper focuses on two types of model: the Ornstein-Uhlenbeck (OU) model, part of the ARMA family, and the Geometric Brownian Motion (GBM) model. First, we describe, implement, and calibrate each model to ensure the relevance and robustness of our results. We then focus on the GBM model to model the CCF.
This report analyzes the Three Stars investment fund. It first examines the fund's return characteristics, finding the fund had a higher mean return but also higher risk than the market index. It then evaluates the fund's performance using several metrics. Sharpe and Treynor ratios found the fund offered greater risk-adjusted returns than the market. The report also conducts market timing analysis and concludes the fund was able to time the market to some degree to maximize returns. Overall, the analysis finds the fund performed well but also carried higher risk than the market.
This document discusses time series prediction using reservoir computing. It introduces different types of non-stationarity that can occur in time series, including non-stationarity in mean, variance, and both mean and variance. It then describes four time series prediction tasks that will be used to evaluate different detrending techniques: three artificial tasks exhibiting different types of non-stationarity, and one natural task of predicting stock prices. The document provides background on reservoir computing and outlines the goals and structure of analyzing how detrending impacts time series prediction performance.
The objective of this tool was to give a measure of the Value at Risk of the given asset class using techniques like Historical simulation and Monte Carlo simulation. I was involved in the design of a package for estimating the Initial Margin requirement for OTC Derivatives like FX Forward Contracts and Interest Rate Swaps using Historical Value at Risk. I also designed a prototype for running a Monte Carlo simulation on a given stock using Geometric Brownian Motion.
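The two techniques named above can be sketched together: historical-simulation VaR as a loss quantile, and a GBM Monte Carlo feeding the same quantile. Names and parameters are illustrative, not the tool's actual interface:

```python
import numpy as np

def historical_var(pnl_scenarios, level=0.99):
    """Value at Risk by historical simulation: take the loss quantile of the
    P&L outcomes across scenarios. Illustrative sketch."""
    losses = -np.asarray(pnl_scenarios, dtype=float)
    return float(np.quantile(losses, level))

def gbm_mc_var(s0, mu, sigma, horizon, level=0.99, n_paths=100_000, seed=0):
    """Monte Carlo VaR for a single stock simulated under GBM over `horizon`
    years: simulate terminal prices, then apply the same loss quantile."""
    rng = np.random.default_rng(seed)
    z = rng.standard_normal(n_paths)
    s_t = s0 * np.exp((mu - 0.5 * sigma**2) * horizon + sigma * np.sqrt(horizon) * z)
    return historical_var(s_t - s0, level)
```

The same quantile step works whether scenarios come from historical returns or simulated paths, which is why the two methods share so much machinery in practice.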
This paper analyzes the swap rates based on the China Inter-bank Offered Rate (CHIBOR) and selects one-year FR007 daily data from January 1st, 2019 to June 30th, 2019 as a sample. To fit the data, we conduct Monte Carlo simulation with several typical continuous short-term swap rate models such as the Merton model, the Vasicek model, and the CIR model. These models include both linear and nonlinear forms, and each has both drift and diffusion terms. After empirical analysis, we obtain the parameter values in the Euler-Maruyama scheme and the relevant statistical characteristics of each model. The results show that most of the short-term swap rate models can fit the swap rates and reflect the change of trend, while the CKLSO model performs best.
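The Euler-Maruyama scheme referenced above can be sketched for the Vasicek model (parameters are hypothetical; the paper fits several short-rate models, not only this one):

```python
import numpy as np

def euler_maruyama_vasicek(r0, kappa, theta, sigma, horizon, n_steps,
                           n_paths=10_000, seed=0):
    """Simulate the Vasicek short rate dr = kappa*(theta - r) dt + sigma dW
    with the Euler-Maruyama scheme. kappa is the mean-reversion speed,
    theta the long-run level, sigma the volatility. Returns terminal rates."""
    rng = np.random.default_rng(seed)
    dt = horizon / n_steps
    r = np.full(n_paths, r0, dtype=float)
    for _ in range(n_steps):
        dw = np.sqrt(dt) * rng.standard_normal(n_paths)  # Brownian increment
        r += kappa * (theta - r) * dt + sigma * dw       # discretized SDE
    return r
```

Swapping the drift and diffusion terms gives the other models in the paper's family, e.g. `sigma * np.sqrt(np.maximum(r, 0)) * dw` for CIR.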
This document appears to be an assignment submission for a financial engineering course. It includes a plagiarism declaration signed by the student, Andrew Hair. The assignment contains 11 questions addressing interest rate derivatives and modeling using the Vasicek model. Code is provided in MATLAB to generate simulations and analyze interest rate data based on the questions.
This document summarizes a master's thesis on integrating market views into quantitative portfolio allocation. It introduces the concepts of optimal asset allocation, mean-variance optimization, and dimension reduction using linear factor models. It then discusses incorporating investor views through information sets and the efficient market hypothesis. The Black-Litterman model uses a Gaussian market assumption, CAPM reverse optimization, and Bayesian updating to integrate views. An alternative approach uses f-divergences to measure distortions between probability distributions and translate views into information gain. The thesis concludes by proposing directions for future research.
This document provides an introduction and preface to a lecture on econometric analysis of financial market data. It discusses using the R programming language to model financial data and complete assignments. It encourages downloading R and additional packages to facilitate statistical analysis and simulations. The document outlines installing and using R, and provides references for further reading on empirical finance techniques.
Pricing with a smile: an approach using NIG distributions with SABR-like para... - CharvetXavier
This paper outlines an approach to option pricing using the Normal Inverse Gaussian (NIG) distribution. The NIG distribution can take on various shapes to fit different volatility smiles. The paper derives a parameterization of the NIG distribution similar to the SABR model, allowing users to control the smile through familiar SABR-like parameters. Empirical results demonstrate the NIG distribution fitting smiles for inflation options on UK RPI and interest rate options on CHF Libor. The paper also discusses how modeling asset marginals as NIG distributions could benefit multi-asset option pricing.
This document presents a final year project report on using quasi-Monte Carlo methods for market risk management. It first outlines two existing variance reduction methods - importance sampling and stratified importance sampling - and how they are applied to estimate tail loss probabilities in a stock portfolio model. The report then introduces quasi-Monte Carlo and proposes combining it with stratified importance sampling to achieve further variance reduction. Numerical results show that this combined method does not significantly improve variance reduction compared to existing methods.
This thesis examines the ex-dividend day effect on the Stockholm stock exchange from 2000 to 2011. Using event study methodology, the authors estimate abnormal returns on the ex-dividend day and compare them to the dividend yield. They find no strong statistical or economic evidence of an ex-dividend day effect. They also control for abnormal returns on days surrounding the ex-dividend day and find no evidence of unusual price movements. Therefore, they cannot conclude the market is inefficient.
This thesis examines systemic risk between major UK banks and insurance companies during periods of different GDP growth using vine copula models. It analyzes dependence structures in 2008 during economic recession and 2009 during recovery. The document outlines the goals of investigating how dependence changes between the two periods and which institutions have the greatest influence when subjected to shocks. Time series GARCH models are used to prepare the data for vine copula construction and simulation.
Statistical Arbitrage
Pairs Trading, Long-Short Strategy
Cyrille BEN LEMRID

Contents

1 Pairs Trading Model
   1.1 General discussion
   1.2 Cointegration
   1.3 Spread dynamics
2 State of the art and model overview
   2.1 Stochastic Dependencies in Financial Time Series
   2.2 Cointegration-based trading strategies
   2.3 Formulation as a Stochastic Control Problem
   2.4 Fundamental analysis
3 Strategies Analysis
   3.1 Roadmap for strategy design
   3.2 Identification of potential pairs
   3.3 Testing cointegration
   3.4 Risk control and feasibility
4 Results

Introduction
This report presents my research work carried out at Credit Suisse from May to September 2012. This study has been pursued in collaboration with the Global Arbitrage Strategies team.
Quantitative analysis strategy developers use sophisticated statistical and optimization techniques to discover and construct new algorithms. These algorithms take advantage of short-term deviations from the "fair" prices of securities. Pairs trading is one such quantitative strategy - it is a process of identifying securities that generally move together but are currently "drifting away".
Pairs trading is a common strategy among many hedge funds and banks. However, there is not a significant amount of academic literature devoted to it due to its proprietary nature. For a review of some of the existing academic models, see [6], [8], [11] .
Our focus for this analysis is the study of two quantitative approaches to the problem of pairs trading. The first uses the properties of cointegrated financial time series as the basis for a trading strategy; in the second, we model the log-relationship between a pair of stock prices as an Ornstein-Uhlenbeck process and use this to formulate a portfolio-optimization-based stochastic control problem.
This study was performed to show that under certain assumptions the two approaches are equivalent.
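As a rough illustration of trading a mean-reverting spread such as the Ornstein-Uhlenbeck log-relationship above, a rolling z-score entry/exit rule might look as follows. The window and thresholds are hypothetical choices, not the report's calibrated values:

```python
import numpy as np

def zscore_signal(log_spread, window=60, entry=2.0, exit=0.5):
    """Generate pairs-trading signals from a mean-reverting log-spread.

    Short the spread when its rolling z-score exceeds `entry`, go long when it
    falls below -`entry`, and close once it reverts inside `exit`."""
    s = np.asarray(log_spread, dtype=float)
    signal = np.zeros(len(s), dtype=int)       # +1 long spread, -1 short, 0 flat
    pos = 0
    for t in range(window, len(s)):
        hist = s[t - window:t]
        z = (s[t] - hist.mean()) / hist.std()
        if pos == 0 and z > entry:
            pos = -1                           # spread rich: short it
        elif pos == 0 and z < -entry:
            pos = +1                           # spread cheap: long it
        elif pos != 0 and abs(z) < exit:
            pos = 0                            # reverted: close the trade
        signal[t] = pos
    return signal
```

Under the OU view the same thresholds fall out of the stationary variance of the process, which is one way the two approaches line up.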
Practitioners most often use a fundamentally driven approach, analyzing the performance of stocks around a market event and implementing strategies using back-tested trading levels.
We also study an example of a fundamentally driven strategy, using market reaction to a stock being dropped or added to the MSCI World Standard, as a signal for a pair trading strategy on those stocks once their inclusion/exclusion has been made effective.
This report is organized as follows. Section 1 provides some background on the pairs trading strategy. The theoretical results are described in Section 2. Section 3 presents the strategies analysis, and Section 4 the results.
This document appears to be a thesis submitted by Amit Kumar Sinha to the National University of Singapore's Risk Management Institute to earn a Master of Science in Financial Engineering degree. The thesis focuses on pricing and exposure measurement of interest rate derivatives using a short rate model approach. It discusses motivation for counterparty exposure measurement, defines quantitative measures of counterparty exposure at the trade, counterparty, and portfolio levels. It also covers credit risk mitigation techniques like netting and collateral agreements and their impact on exposure measurement. The document outlines the implementation of short rate models like Cox-Ingersoll-Ross and Hull-White 1-factor for term structure modeling, derivatives pricing, and simulation of risk factors to measure potential future exposure.
The document analyzes the decline of the Italian company Saras S.p.A. It begins with an overview of the company and analysis of its stock performance from 2007-2012, noting a decline coinciding with the 2008 financial crisis. It then performs a change point analysis that identifies a reduction in stock volatility in January 2012 following announcements denying rumors of delisting. The document next discusses option valuation models, focusing on the Black-Scholes model for pricing European stock options under assumptions such as stock prices following a continuous random process.
The Vasicek model is one of the earliest stochastic models for modeling the term structure of interest rates. It represents the movement of interest rates as a function of market risk, time, and the equilibrium value the rate tends to revert to. This document discusses parameter estimation techniques for the Vasicek one-factor model using least squares regression and maximum likelihood estimation on historical interest rate data. It also covers simulating the term structure and pricing zero-coupon bonds under the Vasicek model. The two-factor Vasicek model is introduced as an extension of the one-factor model.
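The least-squares estimation mentioned above exploits the fact that the exact discretization of the Vasicek model is an AR(1), so an ordinary regression of consecutive rates recovers the parameters. A sketch (illustrative, not the document's code):

```python
import numpy as np

def calibrate_vasicek_ols(rates, dt):
    """Least-squares calibration of the one-factor Vasicek model.

    The exact discretization r_{t+1} = a + b r_t + eps has b = exp(-kappa*dt)
    and a = theta*(1 - b), so regressing r_{t+1} on r_t gives kappa and theta;
    sigma is recovered from the residual variance."""
    x, y = np.asarray(rates[:-1], dtype=float), np.asarray(rates[1:], dtype=float)
    b, a = np.polyfit(x, y, 1)                 # slope, intercept
    kappa = -np.log(b) / dt
    theta = a / (1.0 - b)
    resid = y - (a + b * x)
    sigma = resid.std(ddof=2) * np.sqrt(2 * kappa / (1 - b**2))
    return kappa, theta, sigma
```

Maximum likelihood gives the same point estimates here because the AR(1) innovations are Gaussian; the two methods diverge only for models like CIR where the transition density is not normal.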
Wealth Accumulation under Equity Trading; A Rational Approach - HangukQuant
This document discusses rational approaches to wealth accumulation through equity trading. It argues that pursuing rapid growth is distinct from "instant-wealth" strategies that involve high risk of ruin. The document outlines several trading strategies that aim to provide positively skewed returns with limited downside risk, including long positions in common stocks and call options, as well as strategies leveraging structural, latent, and behavioral factors. It emphasizes the importance of diversification across strategies of varying timeframes to balance opportunities for outsized gains with preservation of capital.
Scenario generation and stochastic programming models for asset liability man... - Nicha Tatsaneeyapan
This document discusses scenario generation methods for asset liability management models. It proposes a multi-stage stochastic programming model for a Dutch pension fund to determine optimal investment policies. Two methods for generating scenarios are explored: randomly sampled event trees and event trees that fit the mean and covariance of returns. Rolling horizon simulations are used to compare the performance of the stochastic programming approach to a fixed mix model. The results show that appropriately generated scenarios can significantly improve the performance of the stochastic programming model relative to the fixed mix benchmark.
Statistical Arbitrage
Pairs Trading, Long-Short Strategy
Cyrille BEN LEMRID

1 Pairs Trading Model 5
1.1 Generaldiscussion ................................ 5 1.2 Cointegration ................................... 6 1.3 Spreaddynamics ................................. 7
2 State of the art and model overview 9
2.1 StochasticDependenciesinFinancialTimeSeries . . . . . . . . . . . . . . . 9 2.2 Cointegration-basedtradingstrategies ..................... 10 2.3 FormulationasaStochasticControlProblem. . . . . . . . . . . . . . . . . . 13 2.4 Fundamentalanalysis............................... 16
3 Strategies Analysis 19
3.1 Roadmapforstrategydesign .......................... 19 3.2 Identificationofpotentialpairs ......................... 19 3.3 Testingcointegration ............................... 20 3.4 Riskcontrolandfeasibility............................ 20
4 Results
22
2
Contents

Introduction
This report presents my research work carried out at Credit Suisse from May to September 2012. This study has been pursued in collaboration with the Global Arbitrage Strategies team.
Quantitative analysis strategy developers use sophisticated statistical and optimization techniques to discover and construct new algorithms. These algorithms take advantage of the short term deviation from the ”fair” securities’ prices. Pairs trading is one such quantitative strategy - it is a process of identifying securities that generally move together but are currently ”drifting away”.
Pairs trading is a common strategy among many hedge funds and banks. However, there is not a significant amount of academic literature devoted to it due to its proprietary nature. For a review of some of the existing academic models, see [6], [8], [11] .
Our focus for this analysis is the study of two quantitative approaches to the problem of pairs trading, the first one uses the properties of co-integrated financial time series as a basis for trading strategy, in the second one we model the log-relationship between a pair of stock prices as an Ornstein-Uhlenbeck process and use this to formulate a portfolio optimization based stochastic control problem.
This study was performed to show that under certain assumptions the two approaches are equivalent.
Practitioners most often use a fundamentally driven approach, analyzing the performance of stocks around a market event and implement strategies using back-tested trading levels.
We also study an example of a fundamentally driven strategy, using market reaction to a stock being dropped or added to the MSCI World Standard, as a signal for a pair trading strategy on those stocks once their inclusion/exclusion has been made effective.
This report is organized as follows. Section 1 provides some background on pairs trading strategy. The theoretical results are described in Section 2. Section 3
This document appears to be a thesis submitted by Amit Kumar Sinha to the National University of Singapore's Risk Management Institute to earn a Master of Science in Financial Engineering degree. The thesis focuses on pricing and exposure measurement of interest rate derivatives using a short rate model approach. It discusses motivation for counterparty exposure measurement, defines quantitative measures of counterparty exposure at the trade, counterparty, and portfolio levels. It also covers credit risk mitigation techniques like netting and collateral agreements and their impact on exposure measurement. The document outlines the implementation of short rate models like Cox-Ingersoll-Ross and Hull-White 1-factor for term structure modeling, derivatives pricing, and simulation of risk factors to measure potential future exposure.
The document analyzes the decline of the Italian company Saras S.p.A. It begins with an overview of the company and analysis of its stock performance from 2007-2012, noting a decline coinciding with the 2008 financial crisis. It then performs a change point analysis that identifies a reduction in stock volatility in January 2012 following announcements denying rumors of delisting. The document next discusses option valuation models, focusing on the Black-Scholes model for pricing European stock options under assumptions such as stock prices following a continuous random process.
The Vasicek model is one of the earliest stochastic models for modeling the term structure of interest rates. It represents the movement of interest rates as a function of market risk, time, and the equilibrium value the rate tends to revert to. This document discusses parameter estimation techniques for the Vasicek one-factor model using least squares regression and maximum likelihood estimation on historical interest rate data. It also covers simulating the term structure and pricing zero-coupon bonds under the Vasicek model. The two-factor Vasicek model is introduced as an extension of the one-factor model.
Wealth Accumulation under Equity Trading; A Rational ApproachHangukQuant
This document discusses rational approaches to wealth accumulation through equity trading. It argues that pursuing rapid growth is distinct from "instant-wealth" strategies that involve high risk of ruin. The document outlines several trading strategies that aim to provide positively skewed returns with limited downside risk, including long positions in common stocks and call options, as well as strategies leveraging structural, latent, and behavioral factors. It emphasizes the importance of diversification across strategies of varying timeframes to balance opportunities for outsized gains with preservation of capital.
Scenario generation and stochastic programming models for asset liabiltiy man...Nicha Tatsaneeyapan
This document discusses scenario generation methods for asset liability management models. It proposes a multi-stage stochastic programming model for a Dutch pension fund to determine optimal investment policies. Two methods for generating scenarios are explored: randomly sampled event trees and event trees that fit the mean and covariance of returns. Rolling horizon simulations are used to compare the performance of the stochastic programming approach to a fixed mix model. The results show that appropriately generated scenarios can significantly improve the performance of the stochastic programming model relative to the fixed mix benchmark.
[4:55 p.m.] Bryan Oates
OJPs are becoming a critical resource for policy-makers and researchers who study the labour market. LMIC continues to work with Vicinity Jobs’ data on OJPs, which can be explored in our Canadian Job Trends Dashboard. Valuable insights have been gained through our analysis of OJP data, including LMIC research lead
Suzanne Spiteri’s recent report on improving the quality and accessibility of job postings to reduce employment barriers for neurodivergent people.
Decoding job postings: Improving accessibility for neurodivergent job seekers
Improving the quality and accessibility of job postings is one way to reduce employment barriers for neurodivergent people.
BONKMILLON Unleashes Its Bonkers Potential on Solana.pdfcoingabbar
Introducing BONKMILLON - The Most Bonkers Meme Coin Yet
Let's be real for a second – the world of meme coins can feel like a bit of a circus at times. Every other day, there's a new token promising to take you "to the moon" or offering some groundbreaking utility that'll change the game forever. But how many of them actually deliver on that hype?
"Does Foreign Direct Investment Negatively Affect Preservation of Culture in the Global South? Case Studies in Thailand and Cambodia."
Do elements of globalization, such as Foreign Direct Investment (FDI), negatively affect the ability of countries in the Global South to preserve their culture? This research aims to answer this question by employing a cross-sectional comparative case study analysis utilizing methods of difference. Thailand and Cambodia are compared as they are in the same region and have a similar culture. The metric of difference between Thailand and Cambodia is their ability to preserve their culture. This ability is operationalized by their respective attitudes towards FDI; Thailand imposes stringent regulations and limitations on FDI while Cambodia does not hesitate to accept most FDI and imposes fewer limitations. The evidence from this study suggests that FDI from globally influential countries with high gross domestic products (GDPs) (e.g. China, U.S.) challenges the ability of countries with lower GDPs (e.g. Cambodia) to protect their culture. Furthermore, the ability, or lack thereof, of the receiving countries to protect their culture is amplified by the existence and implementation of restrictive FDI policies imposed by their governments.
My study abroad in Bali, Indonesia, inspired this research topic as I noticed how globalization is changing the culture of its people. I learned their language and way of life which helped me understand the beauty and importance of cultural preservation. I believe we could all benefit from learning new perspectives as they could help us ideate solutions to contemporary issues and empathize with others.
1. Elemental Economics - Introduction to mining.pdfNeal Brewster
After this first you should: Understand the nature of mining; have an awareness of the industry’s boundaries, corporate structure and size; appreciation the complex motivations and objectives of the industries’ various participants; know how mineral reserves are defined and estimated, and how they evolve over time.
OJP data from firms like Vicinity Jobs have emerged as a complement to traditional sources of labour demand data, such as the Job Vacancy and Wages Survey (JVWS). Ibrahim Abuallail, PhD Candidate, University of Ottawa, presented research relating to bias in OJPs and a proposed approach to effectively adjust OJP data to complement existing official data (such as from the JVWS) and improve the measurement of labour demand.
Falcon stands out as a top-tier P2P Invoice Discounting platform in India, bridging esteemed blue-chip companies and eager investors. Our goal is to transform the investment landscape in India by establishing a comprehensive destination for borrowers and investors with diverse profiles and needs, all while minimizing risk. What sets Falcon apart is the elimination of intermediaries such as commercial banks and depository institutions, allowing investors to enjoy higher yields.
Solution Manual For Financial Accounting, 8th Canadian Edition 2024, by Libby...Donc Test
Solution Manual For Financial Accounting, 8th Canadian Edition 2024, by Libby, Hodge, Verified Chapters 1 - 13, Complete Newest Version Solution Manual For Financial Accounting, 8th Canadian Edition by Libby, Hodge, Verified Chapters 1 - 13, Complete Newest Version Solution Manual For Financial Accounting 8th Canadian Edition Pdf Chapters Download Stuvia Solution Manual For Financial Accounting 8th Canadian Edition Ebook Download Stuvia Solution Manual For Financial Accounting 8th Canadian Edition Pdf Solution Manual For Financial Accounting 8th Canadian Edition Pdf Download Stuvia Financial Accounting 8th Canadian Edition Pdf Chapters Download Stuvia Financial Accounting 8th Canadian Edition Ebook Download Stuvia Financial Accounting 8th Canadian Edition Pdf Financial Accounting 8th Canadian Edition Pdf Download Stuvia
Solution Manual For Financial Accounting, 8th Canadian Edition 2024, by Libby...
Asian basket options
POLITECNICO DI MILANO
MASTER OF SCIENCE IN MATHEMATICAL ENGINEERING
Financial Engineering 2016/2017
Final Project
Pricing, hedging and risk management of a portfolio of basket
Asian options
Luca Bardella
Victor Bontemps
Elena Cirillo
1 Introduction
The aim of the project is to evaluate a portfolio consisting of two basket Asian options within two different modelling frameworks and to formulate a risk assessment of the portfolio with and without a hedging strategy.
The key issues we want to tackle are the following:
• the influence of the different dynamics of the underlyings of the basket Asian options on the price and the hedging strategy;
• the optimal hedging strategy for the portfolio;
• the impact of hedging on the Value at Risk.
1.1 Description of the problem
On 8th August 2012, the investment portfolio of Bank XX accounts for 1 000 000 € invested in an equal number of Asian basket put options with the following contractual terms:
• Option 1
Basket: Volkswagen, Gas de France, Santander
Reset dates: 5 weekly reset dates starting on 8th November 2012
Strike: 1
• Option 2
Basket: ENI, Total, Deutsche Bank
Reset dates: 13 monthly reset dates starting on 8th August 2014
Strike: 1
The payoff of the options is a function of the arithmetic average across the reset dates of the equally weighted average of the returns of each stock:

Payoff = ( 1 − (1/m) Σ_{i=1}^{m} Σ_{n=1}^{3} (1/3) · E^n_{t_i} / E^n_0 )_+
Here are the key parameters for each stock, i.e. the dividend yield d and the NIG parameters σ, η and κ, that will be used to evaluate the prices of the put options:
STOCK d σ η κ
Volkswagen 0 24% 3.6 1.1
Gas de France 0 21% 3.2 1.1
Santander 0 25% 2.9 1.1
ENI 3.5% 17.8% 3.9 1.1
Total 4.3% 19% 4.1 1.1
Deutsche Bank 2.1% 22% 4.3 1.1
2 Preliminary computations
In order to evaluate the prices of the basket Asian options in our portfolio we need to perform a few preliminary computations. The first preliminary operation is the computation of the discounting curve via a bootstrap technique. Next, given the zero-rates curve, we compute the forward rates across the reset dates that will be used for the simulation of the dynamics of the underlyings. Lastly, we need to estimate, from the previous year's returns of the six considered stocks, the average returns, the variance of the returns and the correlation among the stocks.
2.1 Bootstrap
In order to discount future cash flows linked to the options in our portfolio, we need to construct the discounting curve. We have considered Overnight Indexed Swaps (OIS) as inputs to build the Effective OverNight Index Average (EONIA) curve via a bootstrap technique. The paper by Cassaro and Baviera [1] provides the formulas to compute the discount factors.
Before computing all the discount factors, we need to transform the market data into exploitable elements for Matlab and to modify some dates in order to respect the modified following business day convention. The main function of this section, bootstrap_EONIA, returns the discount factors and the zero rates across the considered maturities. Using this function, the different discount factors are computed iteratively from the market rates thanks to the formulas used to price OIS given the corresponding EONIA rates (reference [1]). Finally, we computed the zero rates Z(s, t_i) directly from the respective discount factors B(s, t_i); in fact, by definition, we have:

Z(s, t) = − ln B(s, t) / (t − s)
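The relation between discount factors and zero rates can be sketched in a few lines (Python here rather than the project's MATLAB; the discount factors below are illustrative placeholders, not the bootstrapped EONIA values):

```python
import numpy as np

def zero_rates(t, B, s=0.0):
    """Continuously compounded zero rates: Z(s, t) = -ln B(s, t) / (t - s)."""
    t = np.asarray(t, dtype=float)
    B = np.asarray(B, dtype=float)
    return -np.log(B) / (t - s)

# Illustrative discount factors at 1y and 2y (placeholders, not bootstrapped values)
Z = zero_rates([1.0, 2.0], [0.995, 0.988])
```

As a sanity check, a flat curve B(s, t) = e^{−r(t−s)} must return Z ≡ r at every maturity.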
Results
The following plots present the EONIA discounting curve and the zero-rates curve
resulting from the bootstrap.
Figure 1: EONIA discount and zero-rates curves.
2.2 Reset dates
The first piece of information we need to be able to identify the outline of future cash flows is the set of reset dates of both Asian put options. We assume the settlement date of the options to be the same as for the bootstrap, i.e. the 10th of August 2012, while for the subsequent reset dates, computed according to the modified preceding convention, we assume value and settlement date to coincide. The following table presents the reset dates computed under the above-mentioned assumptions within the framework of the TARGET calendar:
Option 1 Option 2
08-11-12 08-08-14
15-11-12 08-09-14
22-11-12 08-10-14
29-11-12 07-11-14
06-12-12 08-12-14
08-01-15
06-02-15
06-03-15
08-04-15
08-05-15
08-06-15
08-07-15
07-08-15
2.3 Forward Rates
The computation of the forward rates, given the discount curve, is straightforward and relies on the following relations:

B(t_0, t_i, t_{i+1}) = B(t_0, t_0, t_{i+1}) / B(t_0, t_0, t_i)

B(t_0, t_i, t_{i+1}) = e^{−r_{i,i+1}(t_{i+1} − t_i)}

By combining the two equations we get the following relation between spot zero rates and forward zero rates:

r_{i,i+1} = [ r_{0,i+1}(t_{i+1} − t_0) − r_{0,i}(t_i − t_0) ] / (t_{i+1} − t_i)

where the t_i are the reset dates of our contracts.
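The same relation in code form (a Python sketch, not the project's MATLAB; the dates and zero rates below are illustrative):

```python
import numpy as np

def forward_rates(t, z, t0=0.0):
    """Forward zero rates r_{i,i+1} = [r_{0,i+1}(t_{i+1}-t_0) - r_{0,i}(t_i-t_0)] / (t_{i+1}-t_i).

    t : reset dates t_1..t_m as year fractions from t0
    z : spot zero rates r_{0,i} = Z(t0, t_i) for the same dates
    """
    t = np.concatenate(([t0], np.asarray(t, dtype=float)))
    z = np.concatenate(([0.0], np.asarray(z, dtype=float)))  # r_{0,0} carries weight zero
    num = z[1:] * (t[1:] - t0) - z[:-1] * (t[:-1] - t0)
    return num / (t[1:] - t[:-1])

fwd = forward_rates([1.0, 2.0], [0.01, 0.02])   # illustrative zero rates
```

Here fwd comes out as [0.01, 0.03]: a 2% two-year zero rate must be the average of 1% over the first year and 3% over the second, and a flat zero curve reproduces itself as forwards.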
2.4 Historical data analysis
The last input parameters for the computation of the prices are the initial values of the stocks, from which we start the simulation of the dynamics, and the correlation matrix of the stocks within each basket. For the computation of the VaR we also need the average return and the variance-covariance matrix, estimated as the sample mean and the sample variance of the returns observed in the previous year. This information is needed for the simulation of the 10-day dynamics of the stocks. The estimation is performed with the usual unbiased mean and variance estimators.
3 Pricing of basket Asian options
We computed the price of the two exotic derivatives (basket Asian options) in Bank XX's portfolio first within the NIG modelling framework and then assuming GBM dynamics for the underlyings.
Asian options are path-dependent options because their payoff depends on the average price of the underlying asset over some predetermined reset dates across the lifetime of the option. Since the underlying asset of both derivatives is a basket of stocks, we must take into account that the options' payoffs are also affected by the correlation among the stocks within each basket.
Given the expression of the underlying, we immediately observe that the weighted average of log-normal variables has no known distribution, thus there is no explicit analytical formula for the price of a put option with a payoff of this kind.
This being the case, there are two possible ways to follow:
1. make some assumptions on the distribution of the underlying and eventually approximate it in such a way as to get a closed formula;
2. use a Monte Carlo simulation to obtain the price of our exotic derivatives numerically.
The pros and cons of both strategies are well known and fall within the usual trade-off between simplification of the model and affinity to the real dynamics. Our choice was to perform a Monte Carlo simulation, which in this case, due to the small number of reset dates and the few underlyings, does not require excessive computational cost.
The Monte Carlo approach requires the simulation of multiple paths for the three stocks in the basket, according to the selected model under the risk-neutral measure. From the simulation we get the expected payoff in a risk-neutral world, and then by discounting this payoff at the risk-free rate we get the price of the instrument. Therefore we can value each of the two exotic derivatives as follows:
1. Simulate a random path for the three stocks in the basket in a risk-neutral world.
2. Calculate the corresponding payoff of the derivative.
3. Repeat steps 1 and 2 to get many sample values of the payoff of the derivative.
4. Calculate the mean of the sample payoffs to get an estimate of the expected payoff in a risk-neutral world.
5. Discount this expected payoff at the risk-free rate to get an estimate of the value of the derivative.
In the specific case of our payoff, we are not interested in simulating continuous trajectories, since the payoff only depends on the values of the stocks at the finite set of reset dates.
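The five steps above can be sketched for a single-stock toy version of the payoff (Python rather than the project's MATLAB MC_MAT; the flat rate, the plain GBM dynamics and the parameters below are illustrative assumptions):

```python
import numpy as np

def mc_asian_put(S0, r, sigma, reset_times, n_paths=100_000, seed=0):
    """Steps 1-5: simulate GBM at the reset dates only, average, take the
    put payoff on the normalized price S/S0 with strike 1, discount the mean."""
    rng = np.random.default_rng(seed)
    t = np.asarray(reset_times, dtype=float)
    dt = np.diff(np.concatenate(([0.0], t)))            # steps between reset dates
    g = rng.standard_normal((n_paths, t.size))          # one Gaussian per step and path
    log_steps = (r - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * g
    S = S0 * np.exp(np.cumsum(log_steps, axis=1))       # step 1: stock at reset dates
    payoff = np.maximum(1.0 - S.mean(axis=1) / S0, 0.0) # steps 2-3: payoff samples
    return np.exp(-r * t[-1]) * payoff.mean()           # steps 4-5: discounted mean

price = mc_asian_put(S0=100.0, r=0.01, sigma=0.2, reset_times=[0.25, 0.5, 0.75, 1.0])
```

Note that only the values at the reset dates are simulated, mirroring the remark above that continuous trajectories are unnecessary for this payoff.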
3.1 Dynamics for the underlyings
For both models, in order to implement a Monte Carlo simulation, we must derive the stochastic dynamics of the value of each stock under the risk-neutral measure in which the forward is a martingale between reset dates.
NIG model
Within the NIG model one has, for j = 1, 2, 3:

f^(j)_t = ln(F_t / F_0) = √(t − t_0) σ_j √G g_j − (1/2 + η_j)(t − t_0) σ_j² G − ln L(η_j)

Since F^(j)(t_1, t_1) = S^(j)_{t_1} and F^(j)(t_0, t_1) = S^(j)_0 e^{(r_{0,1} − d_j)(t_1 − t_0)}, this leads to:

S^(j)_{t_1} = S^(j)_0 e^{(r_{0,1} − d_j)(t_1 − t_0)} exp{ √(t_1 − t_0) σ_j √G g_j − (1/2 + η_j)(t_1 − t_0) σ_j² G − ln L(η_j) }

Iteratively one gets:

S^(j)_{t_2} = S^(j)_{t_1} e^{(r_{1,2} − d_j)(t_2 − t_1)} exp{ √(t_2 − t_1) σ_j √G g_j − (1/2 + η_j)(t_2 − t_1) σ_j² G − ln L(η_j) }

In this way we obtain the spot dynamics that generalizes Garman–Kohlhagen/Black–Scholes to this model with mean-variance mixture:

S^(j)_{t_i} = S^(j)_{t_{i−1}} e^{(r_i − d_j)∆t} exp{ √∆t σ_j √G g_j − (1/2 + η_j) ∆t σ_j² G − ln L(η_j) }

where
• j = 1, 2, 3 is the stock's index within the basket;
• i = 1, …, m, and t_1, …, t_m are the reset dates of the derivative;
• ∆t = t_i − t_{i−1};
• σ_j, η_j, k_j are the NIG parameters of the j-th stock;
• ln L(η_j) = (∆t / k_j) [ 1 − √(1 + 2 k_j η_j σ_j²) ] is the natural logarithm of the Laplace transform in the IG case;
• r_i = r(t_0, t_{i−1}, t_i) is the forward rate between t_{i−1} and t_i;
• d_j is the dividend yield of the j-th stock (if a stock pays no dividend we set d_j = 0);
• G is the mixing variable, distributed as IG(1, ∆t/k_j), consistently with the Laplace transform above;
• g_j is the j-th component of a multivariate standardized normal vector (whose correlation matrix is estimated from market data).

This enables the value of S^(j) at time t_i to be computed from the previous value at time t_{i−1}. Thus each simulation trial requires m independent samples of the mixing variable and the same number of samples of g_j for each j.
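A single step of these spot dynamics can be sketched as follows (a Python sketch of the MATLAB implementation; NumPy's `wald` sampler draws the Inverse Gaussian mixing variable with the scale ∆t/κ matching the Laplace-transform expression above, and the parameters are the Volkswagen row of the table together with an illustrative forward rate):

```python
import numpy as np

def nig_step(S_prev, r, d, sigma, eta, k, dt, g, rng):
    """One step of S_ti = S_{ti-1} e^{(r - d) dt} exp{ sqrt(dt) sigma sqrt(G) g
    - (1/2 + eta) dt sigma^2 G - ln L(eta) } for a single stock."""
    G = rng.wald(1.0, dt / k, size=np.shape(g))          # mixing variable, unit mean
    lnL = (dt / k) * (1.0 - np.sqrt(1.0 + 2.0 * k * eta * sigma**2))
    mix = np.sqrt(dt) * sigma * np.sqrt(G) * g - (0.5 + eta) * dt * sigma**2 * G - lnL
    return S_prev * np.exp((r - d) * dt + mix)

rng = np.random.default_rng(1)
g = rng.standard_normal(200_000)
# Volkswagen-like parameters from the table, one quarterly step, illustrative rate
S1 = nig_step(np.full(200_000, 100.0), r=0.005, d=0.0, sigma=0.24, eta=3.6, k=1.1,
              dt=0.25, g=g, rng=rng)
```

The compensator ln L(η) makes the dividend- and rate-adjusted spot a martingale over the step, so the sample mean of S1 e^{−(r−d)∆t} should reproduce the initial value of 100 up to Monte Carlo error.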
GBM model
By assuming that the stocks within the basket follow correlated geometric Brownian motion processes, one has (with the same notation as above):

S^(j)_{t_i} = S^(j)_{t_{i−1}} exp{ [ (r_i − d_j) − σ_j²/2 ] ∆t + σ_j √∆t g_j }

where this time σ_j is the implied volatility of the j-th stock in the basket, computed by requiring the NIG prices of plain vanilla options to equal the market prices. We used the Lewis formula, via quadrature, to compute the market prices and, by inverting the relation, we derived the implied volatilities. The following table compares the six implied volatilities with the NIG volatility parameters:
STOCK σimplied σ η κ
Volkswagen 21.89% 24% 3.6 1.1
Gas de France 18.19% 21% 3.2 1.1
Santander 21.83% 25% 2.9 1.1
ENI 18.71% 17.8% 3.9 1.1
Total 20.36% 19% 4.1 1.1
Deutsche Bank 25.6% 22% 4.3 1.1
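The correlated GBM step can be sketched as follows (Python sketch; the correlation matrix is an illustrative placeholder, not the one estimated from historical data, while the volatilities are the implied values of the first basket):

```python
import numpy as np

def gbm_basket_step(S_prev, r, d, sigma, corr, dt, rng):
    """One correlated GBM step: S_ti^j = S_{ti-1}^j exp{[(r - d_j) - sigma_j^2/2] dt
    + sigma_j sqrt(dt) g_j}, with g correlated via the Cholesky factor of `corr`."""
    L = np.linalg.cholesky(np.asarray(corr))                 # correlate the Gaussians
    g = L @ rng.standard_normal((len(sigma), S_prev.shape[1]))
    drift = ((r - d) - 0.5 * sigma**2) * dt
    return S_prev * np.exp(drift[:, None] + (sigma * np.sqrt(dt))[:, None] * g)

rng = np.random.default_rng(2)
corr = np.array([[1.0, 0.5, 0.3], [0.5, 1.0, 0.4], [0.3, 0.4, 1.0]])  # placeholder
sigma = np.array([0.2189, 0.1819, 0.2183])       # implied vols of basket 1
S1 = gbm_basket_step(np.full((3, 100_000), 1.0), r=0.005, d=np.zeros(3),
                     sigma=sigma, corr=corr, dt=1/52, rng=rng)
```

As with the NIG step, the discounted spot is a martingale over the step: the row-wise sample means of S1 should match e^{r ∆t} up to Monte Carlo error.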
3.2 Pricing with MC_MAT
The function MC_MAT computes the price and the sensitivities of our exotic derivatives both within the NIG and the GBM framework, according to the input parameter 'model'. In each scenario we simulate m values for the underlyings (where m is the number of reset dates of the derivative), thus we need to generate a random source of dimension 3×m in order to obtain at each step:

(S^1_0, S^2_0, S^3_0) → (S^1_{t_1}, S^2_{t_1}, S^3_{t_1}) → · · · → (S^1_{t_i}, S^2_{t_i}, S^3_{t_i}) → · · · → (S^1_{t_m}, S^2_{t_m}, S^3_{t_m})

We chose to exploit MATLAB's efficiency in vectorized computations by adopting a 3-dimensional simulation process: first of all we replicated the initial condition in order to obtain a 3×N matrix (where N is the number of Monte Carlo scenarios), and then we moved forward in time using a for cycle over the reset dates. In this way, at the end of the procedure, we get a 3×N×m array containing N different simulated paths for the three stocks. The simulation pattern is the following:
[ S_0 replicated N times ] → · · · → [ N scenarios at t_i ] → · · · → [ N scenarios at expiry t_m ]

where each block is a 3×N matrix whose (j, n) entry S^{(j,n)}_{t_i} is the value of stock j in scenario n at reset date t_i, and the first block repeats the initial condition (S^1_0, S^2_0, S^3_0) in each of its N columns.
Taking as example the NIG case, MC_MAT functions as follows:
• It uses the function randomgenerator, which creates m times a 3×N matrix whose columns contain random samples from a multivariate (3-dimensional) normal distribution (whose correlation matrix is estimated from historical data), and also generates another 3×N×m array containing random samples of the mixing variable G according to an IG distribution (taking into account the correct variance, since the first step of the simulation process is done over a different time interval). In order to reduce the variance across the simulations, the function randomgenerator takes advantage of antithetic variates both for the multivariate normal and for the Inverse Gaussian.
• The function simulationNIG takes in input the random source (i.e. the output of randomgenerator) and uses a for cycle over the reset dates in order to generate the N scenarios at time t_i starting from those at time t_{i−1}.
• Given the evolution of the underlyings in time, the function computes the payoff and, from it, the price.
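The antithetic-variates idea used in randomgenerator can be sketched for the Gaussian part (Python sketch; the payoff below is a toy monotone function, and the antithetic construction for the Inverse Gaussian draws is omitted):

```python
import numpy as np

def antithetic_normals(n_pairs, dim, rng):
    """Standard normal draws in antithetic pairs: each column g is matched
    by -g, so odd moments cancel exactly across the sample."""
    g = rng.standard_normal((dim, n_pairs))
    return np.concatenate([g, -g], axis=1)

rng = np.random.default_rng(3)
n = 50_000
g = antithetic_normals(n, 1, rng)[0]
payoff = np.maximum(1.0 - np.exp(0.2 * g), 0.0)      # toy monotone payoff
pairs = 0.5 * (payoff[:n] + payoff[n:])              # pair-averaged estimator
```

For a monotone payoff the two halves of each pair are negatively correlated, so the variance of the pair average drops below half the plain variance, i.e. below what simply doubling the sample size would give.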
3.3 Results
The following table presents the prices computed with the number of scenarios N equal to 10^6. The choice of the number of simulations is constrained by the limited number of elements within a MATLAB matrix: by increasing the number of scenarios to 10^7, MATLAB runs out of memory. In order to avoid the problem one should implement a for cycle, which is what we do in the case of the VaR, where this constraint becomes quite limiting. However, in the pricing algorithm, with N = 10^6 the Monte Carlo error falls below one basis point for the first option and is of the order of a basis point for the second one, thus we can be satisfied with the results.
              NIG        GBM
Option 1      0.035 €    0.0365 €
Option 2      0.1412 €   0.1548 €
We observe that the prices are very close, although the difference is not negligible. We can attribute the discrepancy to the different assumptions behind the implemented models. We also observe that the difference is greater for the option with the longer maturity and the larger number of reset dates. This is consistent with the fact that the values of the underlyings are recomputed across time starting from different values, and thus the difference keeps increasing.
One of the reasons behind the implementation of an Exponential Lévy model is to better fit market data with regard to both the volatility smile and the distribution of returns. The NIG dynamics better suits the case of returns which are not normally distributed, with a peak at the mean and fatter tails at extreme values. Therefore the NIG model is likely to be a better approximation, especially when the underlyings are correlated. In the following plots we present the distributions of the previous year's returns together with those generated according to the GBM and the NIG models.
Figure 2: NIG, GBM and historical returns.
Despite the fact that the distribution above is computed from just the previous year's data, i.e. 259 sample values, we still gain insight into the goodness of fit of the two distributions. The Exponential Lévy model via NIG provides a slightly better approximation of the actual distribution, thus we are more likely to prefer the NIG result to the one given by the GBM framework. A good way to check the quality of our models is to verify whether put-call parity holds. In the case of arithmetic Asian basket options (h underlyings and m reset dates), we can write the price of a call and the price of a put as the discounted expectation of the final payoff under the risk-neutral measure, namely:
C = B(0, t_m) E_0[ max( (1/(m·h)) Σ_{i=1}^{m} Σ_{k=1}^{h} S^k_{t_i} − 1, 0 ) ]

P = B(0, t_m) E_0[ max( 1 − (1/(m·h)) Σ_{i=1}^{m} Σ_{k=1}^{h} S^k_{t_i}, 0 ) ]

Thus we can consider the difference between the two and use the linearity of the expectation:

C − P = B(0, t_m) E_0[ (1/(m·h)) Σ_{i=1}^{m} Σ_{k=1}^{h} S^k_{t_i} − 1 ]

C − P = B(0, t_m) (1/(m·h)) Σ_{i=1}^{m} Σ_{k=1}^{h} E_0[ S^k_{t_i} − 1 ]

Now, for each term in the sum, we consider the forward contract written on underlying k with time to maturity t_i. The forward is a martingale under the risk-neutral measure, so we just need the relation between the forward and the underlying; here we have used the Garman–Kohlhagen relation:

C − P = B(0, t_m) (1/(m·h)) Σ_{i=1}^{m} Σ_{k=1}^{h} ( S^k_{t_0} e^{Z(t_0, t_i)(t_i − t_0)} − 1 )
We applied put-call parity to our results and observed that the NIG prices are closer to perfect parity than the GBM prices. As expected, parity for option 1 is fulfilled with an error of the order of magnitude of 10^−5, while the error is one order of magnitude bigger for the second option. Given the numerical approximation, we are satisfied with the level of accuracy of the results.
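The parity check can be reproduced on a toy configuration (Python sketch with a single normalized underlying, h = 1, a flat zero rate and no dividends; not the project's MC_MAT check):

```python
import numpy as np

rng = np.random.default_rng(4)
r, sigma = 0.01, 0.2
t = np.array([0.25, 0.5, 0.75, 1.0])                       # illustrative reset dates
dt = np.diff(np.concatenate(([0.0], t)))
g = rng.standard_normal((200_000, t.size))
S = np.exp(np.cumsum((r - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * g, axis=1))
A = S.mean(axis=1)                                         # arithmetic average, S_0 = 1
B0 = np.exp(-r * t[-1])                                    # B(0, t_m), flat curve
C = B0 * np.maximum(A - 1.0, 0.0).mean()                   # call price
P = B0 * np.maximum(1.0 - A, 0.0).mean()                   # put price
parity_rhs = B0 * (np.exp(r * t).mean() - 1.0)             # B(0,t_m)[(1/m) sum e^{r t_i} - 1]
```

Because the same samples feed both payoffs, the max(·, 0) parts cancel exactly in C − P, and the residual against the right-hand side is purely the Monte Carlo error on the sample mean of the average.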
4 Greeks
The values of the Greeks are fundamental information to hedge a portfolio of derivatives. Several methods exist to compute the Greeks, although in this case, since no closed formulas exist for exotic derivatives of our kind, the only possible approach is a Monte Carlo simulation. For our purposes we need the values of three Greeks:
• Delta: ∆ = ∂V/∂S_i, with V the price of the option and S_i the price of the i-th underlying, represents the sensitivity of a financial instrument to shifts in the price of the underlying.
• Vega: ν = ∂V/∂σ_i, with σ_i the volatility of the i-th underlying, represents the sensitivity to shifts in the volatility of the underlying. In the NIG case, for simplicity, we have shifted the mean of the volatility smile σ, while in the GBM case we have shifted the implied volatility.
• Gamma: Γ = ∂∆/∂S_i, represents the sensitivity of delta to variations in the underlying price.
Thanks to the Feynman–Kac theorem we can write the price of a derivative with time to expiry T as the expectation of the payoff φ under the risk-neutral measure:

e^{−rT} E[ φ(S_i, σ_i) ]

Under some assumptions concerning the continuity of the payoff function, we can write (case of ∆_1):

∂/∂S_1 ( e^{−rT} E[ φ(S_1, S_2, S_3, σ_1, σ_2, σ_3) ] ) = e^{−rT} E[ ∂φ(S_1, S_2, S_3, σ_1, σ_2, σ_3)/∂S_1 ]

Then, to obtain this result numerically, we first use the method of finite differences to approximate the derivatives:

∂φ/∂S_i ≈ [ φ(…, S_i + h_i, …) − φ(…, S_i − h_i, …) ] / (2 h_i)

∂φ/∂σ_i ≈ [ φ(…, σ_i + h_i, …) − φ(…, σ_i − h_i, …) ] / (2 h_i)

∂∆/∂S_i ≈ [ φ(…, S_i + h_i, …) + φ(…, S_i − h_i, …) − 2 φ(…, S_i, …) ] / h_i²

and then we implement a Monte Carlo simulation to get the prices, as explained in the previous section. At this step it is crucial to compute the payoff in both the incremented and the non-incremented case using the same sample of generated random variables. This is an important variance reduction technique for the numerical estimation of sensitivities, since the use of different sampling sets would introduce unnecessary noise.
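The common-random-numbers recipe can be sketched on the same kind of single-stock toy payoff (Python sketch; phi here is a stand-in for the basket payoff, and all parameters are illustrative):

```python
import numpy as np

def phi(S0, r, sigma, t, g):
    """Discounted Asian-put payoff samples for one GBM stock with strike 1
    (toy stand-in for the basket payoff)."""
    dt = np.diff(np.concatenate(([0.0], t)))
    S = S0 * np.exp(np.cumsum((r - 0.5 * sigma**2) * dt
                              + sigma * np.sqrt(dt) * g, axis=1))
    return np.exp(-r * t[-1]) * np.maximum(1.0 - S.mean(axis=1), 0.0)

rng = np.random.default_rng(5)
t = np.array([0.25, 0.5, 0.75, 1.0])
g = rng.standard_normal((200_000, t.size))        # ONE common random source
S0, h = 1.0, 0.01
up = phi(S0 + h, 0.01, 0.2, t, g).mean()          # same g in every scenario
down = phi(S0 - h, 0.01, 0.2, t, g).mean()
mid = phi(S0, 0.01, 0.2, t, g).mean()
delta = (up - down) / (2 * h)                     # central difference
gamma = (up + down - 2 * mid) / h**2              # second central difference
```

Reusing the same g in the shifted and unshifted scenarios is exactly the variance reduction described above: the difference of payoffs is smooth path by path, instead of being the difference of two independent noisy estimates.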
4.1 Analysis of the numerical approximations
The estimation of the sensitivities, like any numerical differentiation, is a delicate procedure, since it is easy to incur undesired losses of accuracy. With the method of finite differences we know that, if we denote by h the increment, we have an error that is o(h) for the first-order derivatives and o(h²) for the second-order ones. Therefore we want h as small as possible. At the same time, thanks to the Central Limit Theorem, we know that with Monte Carlo the order of magnitude of the approximation error is Σ/√N, with Σ the empirical standard deviation and N the number of simulations. By combining both methods, if we denote by χ the theoretical Greek and by χ̃ the numerical result, we have for the first order:

χ̃ = χ + o(h) + O(Σ/√N)

and for the second order:

χ̃ = χ + o(h²) + O(Σ/√N)

Consequently, we calibrate our increments so that h (for the first order) or h² (for the second order) is equal to Σ/√N. In the function eps_calibration we just use the function MC_MAT: we compute the variance of the different payoffs corresponding to the different simulations, and then we set the increment so as to satisfy the above relation.
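The calibration rule can be sketched as follows (Python sketch; eps_calibration itself is a MATLAB routine, and the standard deviation below is an illustrative placeholder rather than one computed by MC_MAT):

```python
import numpy as np

def calibrate_increment(payoff_std, n_sims, order=1):
    """Match the increment to the Monte Carlo error: h = Sigma/sqrt(N) for
    first-order Greeks, h^2 = Sigma/sqrt(N), i.e. h = (Sigma/sqrt(N))^(1/2),
    for second-order ones."""
    mc_err = payoff_std / np.sqrt(n_sims)
    return mc_err if order == 1 else np.sqrt(mc_err)

h_first = calibrate_increment(payoff_std=0.05, n_sims=10**6, order=1)   # Delta, Vega
h_second = calibrate_increment(payoff_std=0.05, n_sims=10**6, order=2)  # Gamma
```

Note that the second-order increment comes out much larger than the first-order one, consistently with the weaker h² requirement.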
4.2 Results
Here we present the per-option values of the main Greeks considered: vega, delta and gamma under the GBM assumption, and just the first two within the NIG modelling framework.
Delta
                  NIG       GBM
Volkswagen      -0.0009   -0.0011
Gas de France   -0.0071   -0.0082
Santander       -0.0259   -0.0299

                  NIG       GBM
ENI             -0.0094   -0.0094
Total           -0.0041   -0.0041
Deutsche Bank   -0.0064   -0.0064

Vega
                  NIG        GBM
Volkswagen      0.0558 €   0.0574 €
Gas de France   0.0521 €   0.0613 €
Santander       0.0536 €   0.0590 €

                  NIG        GBM
ENI             0.1592 €   0.1691 €
Total           0.1636 €   0.1661 €
Deutsche Bank   0.1954 €   0.1744 €
Gamma

First basket (values in €⁻¹):
                Volkswagen   Gas de France   Santander
Volkswagen        0.0000        0.0002         0.0006
Gas de France     0.0002        0.0019         0.0036
Santander         0.0006        0.0036         0.0188

Second basket (values in 10⁻³ €⁻¹):
                  ENI       Total    Deutsche Bank
ENI              0.4869    0.2069       0.2981
Total            0.2069    0.0930       0.1965
Deutsche Bank    0.2981    0.1965       0.2720
A few observations on the results:
• The signs of the deltas are consistent with the fact that we hold put options, whose value moves inversely to the underlying. We can also observe that the delta terms are much smaller than the corresponding plain vanilla values. This is consistent with the fact that the payoff of an Asian basket option, thanks to the averaging across time and across the underlyings, is more stable.
• The vega terms of the second option are much bigger than those of the first option. This is consistent with the fact that vega increases with time to maturity, since a longer time to maturity means larger chances of price fluctuations. Concerning the difference between the two methods, we cannot make any comparison, since the shifted quantities differ: the NIG parameter σ in one case and the implied volatility in the other.
• Concerning the gamma terms, we observe consistency with the theory: gamma increases as time to maturity decreases, so the first option must show higher gammas than the second one.
5 Hedging
The implementation of a hedging strategy requires considering some key issues of the investment strategy:
• Which risks we want to keep in our portfolio, and to what extent;
• Our preferences regarding the instruments used to hedge the portfolio (e.g. liquidity, minimization of the present-day outflow, etc.);
• The choice between static and dynamic hedging and, in the case of dynamic hedging, the frequency of adjustment of the hedging portfolio.
In the specific case of Bank XX, the portfolio consists of an equal number of put basket Asian options on both baskets, to be hedged with ATM plain vanillas and stocks in order to become both delta and vega neutral. To do so we chose to proceed in the following way:
1. Compute the vega of the portfolio with respect to the volatility of each underlying stock in the baskets and offset them with the vegas of the plain vanilla options. Observe that at this step the choice between call and put is irrelevant, since the vega is the same. The result is the number N of plain vanillas to buy or sell to achieve vega neutrality. It is important to hedge vega first, since the introduction of new derivatives in our portfolio changes the overall delta exposure.
2. Compute the delta of the portfolio with respect to the price of each underlying stock in the baskets and add to it first the delta of N calls, then the delta of N puts. Offset the resulting delta by buying or selling the underlying. The result is two alternative strategies, a call strategy and a put strategy.
3. The last step is to choose between the two strategies the one with the smaller outflow of money at time 0. This criterion is discretionary; one could adopt different approaches at this step according to one's preferences.
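Per underlying, the two steps reduce to simple linear algebra. The sketch below is illustrative (hedge_ratios and all the numbers are made up, not the report's figures); stocks have delta 1 and zero vega, so they close the residual delta without disturbing the vega hedge.

```python
import numpy as np

def hedge_ratios(vega_pf, delta_pf, vega_pv, delta_pv_call, delta_pv_put):
    """Per-underlying static hedge, following the two-step recipe above:
    first offset each vega with plain vanillas (call and put vegas are
    equal), then offset the resulting delta with the stock itself.
    All inputs are arrays with one entry per underlying."""
    n_pv = -vega_pf / vega_pv                        # vanillas per underlying (vega neutral)
    stocks_call = -(delta_pf + n_pv * delta_pv_call)  # stock leg of the call strategy
    stocks_put  = -(delta_pf + n_pv * delta_pv_put)   # stock leg of the put strategy
    return n_pv, stocks_call, stocks_put

vega_pf  = np.array([0.056, 0.052])    # portfolio vegas (made-up numbers)
delta_pf = np.array([-0.001, -0.007])  # portfolio deltas (made-up numbers)
n_pv, s_call, s_put = hedge_ratios(vega_pf, delta_pf,
                                   vega_pv=np.array([0.40, 0.35]),
                                   delta_pv_call=np.array([0.55, 0.52]),
                                   delta_pv_put=np.array([-0.45, -0.48]))
```

Both candidate strategies are returned; step 3 then picks the one with the smaller outflow at time 0.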
The suggested strategy is a static hedge under the assumptions of no transaction costs and no arbitrage. In the case of our portfolio we observe that both put Asian options are vega positive, so the hedging portfolio requires selling either put or call options. According to the criterion mentioned above, both the NIG and GBM hedging strategies have a smaller cost at time 0 when selling puts instead of calls. Here we present the combination of instruments in the portfolio that achieves vega and delta neutrality in the GBM case, with a portfolio of 5232201 put Asian options of both types:
                Plain Vanilla Put    Stocks
Volkswagen           -10356             772
Gas de France        -82194            3306
Santander           -292290          166656
ENI                  -93725            1369
Total                -42564            -268
Deutsche Bank        -66270            3777
The total cost of hedging is: -576954.15 €.
In the NIG case our portfolio consists of 5673561 Asian options and the hedging strategy is the following:

                Plain Vanilla Put    Stocks
Volkswagen            -9956            2122
Gas de France        -75938           13800
Santander           -282253           50230
ENI                  -73759           19817
Total                -33307            8246
Deutsche Bank        -53953           16325
The total cost of hedging is: 1.16591116 €.
We observe that the two portfolios have a very similar composition, although the total cost of the hedging strategy is very different. The reason lies in the different values of the sensitivities in the GBM and NIG cases: the values of vega for the plain vanillas differ significantly between the two models, especially for the options with longer maturity.
6 Risk management
We computed the classical risk measure, the Value at Risk (VaR), to test the strength of the proposed hedging strategy. The computation was performed following three different approaches:
• Full Monte Carlo evaluation;
• Delta normal approximation;
• Delta-Gamma approximation.
Let us analyse each approach in detail.
6.1 VaR via Full Monte Carlo evaluation
The Full Monte Carlo approach is the most versatile among the techniques for computing the VaR. It can be used for non-linear portfolios and for any model of the changes in the risk factors. However, this flexibility comes at a price: the procedure requires the revaluation of the portfolio in each scenario, which can be a substantial computational burden, especially for portfolios with complex instruments that themselves require a Monte Carlo simulation to be evaluated. This is exactly what happens with our portfolio: the presence of two basket Asian options, whose price can be computed only via Monte Carlo simulation, forces us to run a simulation for each 10-day scenario. Thus we had to face a trade-off:
• decrease the number of scenarios considered in the simulations in order to reduce the computational time to an acceptable level, at the cost of losing accuracy;
• keep a good number of scenarios for the Monte Carlo evaluation to have a good level of accuracy, although significantly increasing the computation time.
We decided to reduce the number of scenarios for the loss distribution to 10^4, while the number of simulations within each scenario is kept at 10^4, in order to keep the computation time within 10 minutes. The use of antithetic variates for variance reduction marginally offsets the accuracy reduction caused by the decreased number of simulations. The procedure we used is as follows:
1. We sampled from a multivariate normal distribution the 10-day returns x_t for our scenarios. The parameters of the distribution are:
(a) the average historical daily returns;
(b) the Cholesky factorization of the daily variance-covariance matrix.
To obtain the desired 10-day lag we time-scaled these values, multiplying the mean by the length of the lag and the Cholesky matrix by the square root of the lag.
2. We computed the values of the underlyings according to the relation:

   S_{t+Δt} = S_t · e^{x_t}

3. For each S_{t+Δt} we evaluated the prices of all instruments in our portfolio. In particular, for the Asian options we set S_{t+Δt} as the new starting point for the simulation of the dynamics of the underlying in the pricing.
4. We computed the difference between the initial value of the portfolio and the new value in each scenario to get the distribution of losses.
5. We sorted the losses and picked the value corresponding to the 99th percentile.
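The five steps can be condensed into the following skeleton, with the expensive repricing abstracted behind a revalue callback (for the Asian options this callback is itself a Monte Carlo pricer). full_mc_var and the toy linear portfolio below are illustrative assumptions, not the report's implementation.

```python
import numpy as np

def full_mc_var(S0, mu_daily, cov_daily, revalue, n0, lag=10,
                n_scen=10_000, alpha=0.99, seed=0):
    """Full-revaluation VaR along the five steps above. revalue(S)
    returns the vector of instrument prices for spot vector S and is
    the expensive part; n0 holds the position sizes."""
    rng = np.random.default_rng(seed)
    L = np.linalg.cholesky(cov_daily)
    # Step 1: 10-day log-returns, mean and Cholesky factor time-scaled.
    x = lag * mu_daily + np.sqrt(lag) * rng.standard_normal((n_scen, len(S0))) @ L.T
    base = revalue(S0) @ n0
    losses = np.empty(n_scen)
    for k in range(n_scen):
        S_new = S0 * np.exp(x[k])                 # step 2
        losses[k] = base - revalue(S_new) @ n0    # steps 3-4
    return np.quantile(losses, alpha)             # step 5

# Toy check: a linear "portfolio" holding the two stocks themselves.
S0 = np.array([100.0, 50.0])
cov = np.array([[4e-4, 1e-4], [1e-4, 2.25e-4]])
var99 = full_mc_var(S0, np.zeros(2), cov, revalue=lambda S: S,
                    n0=np.array([1.0, 1.0]), n_scen=20_000)
```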
The results are presented in the following table:
  Hedging        No Hedging
67881.61 €     566059.23 €
 16.04 %         56.60 %
We observe a considerable reduction in the VaR thanks to the hedging strategy we implemented. An important remark is that the notional of the hedged portfolio is much smaller than that of the non-hedged one, so in percentage terms the reduction in VaR is not as impressive as it looks. There are several reasons why the hedged VaR is not as close to zero as one might expect. The first, and most important, is that options are not linear instruments, so it is naive to believe that vega and delta neutrality are enough to hedge the portfolio; this will be clearly observed with the delta normal approximation. The second is that we implemented a static hedging strategy which, by construction, fully hedges only at the present date. Last, and unfortunately in our case not least, the accuracy of the method we implemented is below the desired one.
6.2 VaR via Delta Normal approximation
The delta normal approach was developed to reduce the computational cost of the full Monte Carlo evaluation by simplifying the revaluation of the portfolio in each scenario. This is done by assuming a linear relation between the changes in every risk factor and the changes in the price of each instrument in the portfolio. This approach significantly reduces the computational cost, although it is important to question the validity of the approximation for the specific instruments in the portfolio. In our case both the Asian options and the plain vanillas are non-linear instruments, so we expect the delta normal approximation to be insufficient to represent the VaR of the portfolio with accuracy. We performed the delta normal evaluation following the same first steps as in the full evaluation to get the distribution of x_t. Then we linearized the problem:
ΔS = S_{t+Δt} − S_t = S_t(e^{x_t} − 1) ≈ S_t x_t
Given a generic instrument C we have:

δ = ∂C/∂S,    ΔC = δ · S_t x_t

and thus the total loss of the portfolio is:

L(x_t) = Σ_{i=1}^{n} ΔC_i · n_i

where n_i is the number of units of instrument i in our portfolio. The results are presented in the following table:
 Hedging       No Hedging
 8.42 €       813022.11 €
  0 %           81.30 %
We observe that our hedging strategy works very well under the assumption of a linear portfolio. However, we know that options are not linear, so we should not trust this optimistic result. Moreover, the near-perfect hedge is not consistent with the result found with the full Monte Carlo evaluation, so we have reason to believe that a large exposure is not captured by the linear approximation.
6.3 VaR via Delta-Gamma approximation
The Delta-Gamma evaluation of the VaR follows exactly the same principle as the delta normal, but the approximation is not limited to the linear term: it also includes the second-order derivatives, which in our case are just the gamma terms. The procedure is the same as for the delta normal but requires the computation of the gamma and cross-gamma terms, so that we have:
L(x_t) = Σ_{i=1}^{n} δ_i S_i(t) x_i(t) · n_i + (1/2) Σ_{i,j=1}^{n} γ_{ij} S_i(t) S_j(t) x_i(t) x_j(t) · n_i
The results are presented in the following table:

  Hedging        No Hedging
103652.89 €    424646.29 €
  24.55 %        42.46 %
As expected, this result is more coherent with the full Monte Carlo evaluation, since the quadratic approximation is closer to the actual distribution of the losses. Also as expected, the largest gamma contribution comes from the first option, which has the highest values of gamma. The results are consistent with our gamma exposure: without the hedging, only the gamma of the Asian options is taken into account, so we are gamma positive. This means that the delta normal losses are overestimated, because the gains deriving from the positive gamma exposure are not taken into account. The opposite happens with the hedging: our positive gamma position vanishes due to the sale of the put options, which are gamma positive as well. Therefore the overall gamma exposure is now negative, which is coherent with the fact that the delta normal approach underestimates the potential losses.
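The linear and quadratic portfolio-change approximations used in the delta normal and delta-gamma VaR can be sketched as follows; pnl_approximations and the single-asset example are illustrative, and the n_i weighting inside the quadratic term follows the formula as written above.

```python
import numpy as np

def pnl_approximations(x, S, delta, gamma, n):
    """Delta-normal and delta-gamma portfolio change for return
    scenarios x (shape n_scen x n_assets): the linear term
    delta_i * S_i * x_i per position, plus, for delta-gamma, the
    quadratic gamma / cross-gamma term."""
    lin = (x * S * delta) @ n                       # sum_i delta_i S_i x_i n_i
    Sx = x * S
    # 0.5 * sum_{i,j} gamma_ij (S_i x_i)(S_j x_j) n_i, per scenario k
    quad = 0.5 * np.einsum('ki,ij,kj,i->k', Sx, gamma, Sx, n)
    return lin, lin + quad

# Single-asset sanity example (illustrative numbers).
x = np.array([[0.1]])
dn, dg = pnl_approximations(x, S=np.array([100.0]), delta=np.array([0.5]),
                            gamma=np.array([[0.01]]), n=np.array([2.0]))
```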
Concerning the comparison with the full Monte Carlo results, we observe that the values are quite close and the trend is maintained. As mentioned before, the differences can be explained by the terms we did not consider in the delta-gamma approximation, i.e. the vega terms and the other first- and second-order derivatives, and by possible numerical errors.
7 Conclusions
In conclusion, there are a few key points that we want to underline:
• We observed that a different choice for the dynamics of the underlying can change the results significantly. This points out the crucial role of choosing a model that fits the actual distribution of the risk factors.
• In many applications the trade-off between accuracy and computation time is critical. In the case of Asian options there are several analytical approximations that could be considered for pricing purposes and that, ceteris paribus, could potentially lead to better results, although the analytical derivation can be challenging.
• It is important to take into account the limits of linear hedging. As observed for the considered portfolio, whenever we hold non-linear instruments the non-neutralized risk can be quite relevant.