This document discusses two techniques for improving the efficiency of option pricing methods: numerical smoothing and optimal damping. Numerical smoothing involves approximating non-smooth option payoffs with smooth functions, allowing for faster convergence of quadrature and multilevel Monte Carlo methods. Optimal damping adds a regularization term when pricing options under Lévy models using Fourier methods to improve stability and accuracy. The document outlines the theoretical underpinnings of how smoothness affects integration error and complexity, and presents numerical results demonstrating the effectiveness of these techniques.
Numerical Smoothing and Hierarchical Approximations for Efficient Option Pricing and Density Estimation - Chiheb Ben Hammouda
My talk at the "Stochastic Numerics and Statistical Learning: Theory and Applications" Workshop at KAUST (King Abdullah University of Science and Technology), May 23, 2022, about my recent works "Numerical Smoothing with Hierarchical Adaptive Sparse Grids and Quasi-Monte Carlo Methods for Efficient Option Pricing" and "Multilevel Monte Carlo combined with numerical smoothing for robust and efficient option pricing and density estimation".
My talk entitled "Numerical Smoothing and Hierarchical Approximations for Efficient Option Pricing and Density Estimation", that I gave at the "International Conference on Computational Finance (ICCF)", Wuppertal June 6-10, 2022. The talk is related to our recent works "Numerical Smoothing with Hierarchical Adaptive Sparse Grids and Quasi-Monte Carlo Methods for Efficient Option Pricing" (link: https://arxiv.org/abs/2111.01874) and "Multilevel Monte Carlo combined with numerical smoothing for robust and efficient option pricing and density estimation" (link: https://arxiv.org/abs/2003.05708). In these two works, we introduce the numerical smoothing technique that improves the regularity of observables when approximating expectations (or the related integration problems). We provide a smoothness analysis and we show how this technique leads to better performance for the different methods that we used (i) adaptive sparse grids, (ii) Quasi-Monte Carlo, and (iii) multilevel Monte Carlo. Our applications are option pricing and density estimation. Our approach is generic and can be applied to solve a broad class of problems, particularly for approximating distribution functions, financial Greeks computation, and risk estimation.
My talk at the "15th International Conference on Monte Carlo and Quasi-Monte Carlo Methods in Scientific Computing" (MCQMC) at Johannes Kepler Universität Linz, July 20, 2022, about my recent works "Numerical Smoothing with Hierarchical Adaptive Sparse Grids and Quasi-Monte Carlo Methods for Efficient Option Pricing" and "Multilevel Monte Carlo combined with numerical smoothing for robust and efficient option pricing and density estimation".
Workshop: Numerical Analysis of Stochastic Partial Differential Equations (NASPDE), in Network Eurandom at Eindhoven University of Technology, May 16, 2023, about my recent works (i) "Numerical Smoothing with Hierarchical Adaptive Sparse Grids and Quasi-Monte Carlo Methods for Efficient Option Pricing" (link: https://doi.org/10.1080/14697688.2022.2135455), and (ii) "Multilevel Monte Carlo with Numerical Smoothing for Robust and Efficient Computation of Probabilities and Densities" (link: https://arxiv.org/abs/2003.05708).
Hierarchical Deterministic Quadrature Methods for Option Pricing under the Rough Bergomi Model - Chiheb Ben Hammouda
Conference talk at the SIAM Conference on Financial Mathematics and Engineering, held in virtual format, June 1-4, 2021, about our recently published work "Hierarchical adaptive sparse grids and quasi-Monte Carlo for option pricing under the rough Bergomi model".
- Link of the paper: https://www.tandfonline.com/doi/abs/10.1080/14697688.2020.1744700
My talk at the International Conference on Monte Carlo Methods and Applications (MCM 2023), related to advances in mathematical aspects of stochastic simulation and Monte Carlo methods, at Sorbonne Université, June 28, 2023, about my recent works (i) "Numerical Smoothing with Hierarchical Adaptive Sparse Grids and Quasi-Monte Carlo Methods for Efficient Option Pricing" (link: https://doi.org/10.1080/14697688.2022.2135455), and (ii) "Multilevel Monte Carlo with Numerical Smoothing for Robust and Efficient Computation of Probabilities and Densities" (link: https://arxiv.org/abs/2003.05708).
This document summarizes a research paper about using hierarchical deterministic quadrature methods for option pricing under the rough Bergomi model. It discusses the rough Bergomi model and the challenges of pricing options under this model numerically. It then describes the methodology used, which involves analytic smoothing, adaptive sparse-grid quadrature, and quasi-Monte Carlo, coupled with hierarchical representations and Richardson extrapolation. Several figures are included to illustrate the adaptive construction of sparse grids and the simulation of the rough Bergomi dynamics.
Numerical smoothing and hierarchical approximations for efficient option pricing and density estimation - Chiheb Ben Hammouda
1. The document presents a numerical smoothing technique to improve the efficiency of option pricing and density estimation when analytic smoothing is not possible.
2. The technique involves numerically determining discontinuities in the integrand and computing the integral only over the smooth regions. It also uses hierarchical representations and Brownian bridges to reduce the effective dimension of the problem.
3. The numerical smoothing approach outperforms Monte Carlo methods for high dimensional cases and improves the complexity of multilevel Monte Carlo from O(TOL^-2.5) to O(TOL^-2 log(TOL)^2).
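The pre-integration idea behind numerical smoothing can be sketched on a toy two-dimensional digital payoff (an illustrative example, not one from the papers): the discontinuity location is found numerically by root finding, the inner variable is then integrated exactly over the smooth region, and the resulting smooth function is handled by deterministic quadrature.

```python
import numpy as np
from scipy.optimize import brentq
from scipy.stats import norm

# Toy digital payoff 1{a*y1 + b*y2 > K} with (y1, y2) ~ N(0, I);
# the payoff is discontinuous, so plain quadrature converges poorly.
a, b, K = 1.0, 0.5, 0.2

def smoothed_integrand(y2):
    """Pre-integration step: locate the discontinuity in y1 numerically,
    then integrate the indicator exactly over the smooth region."""
    y1_star = brentq(lambda y1: a * y1 + b * y2 - K, -50.0, 50.0)
    return norm.sf(y1_star)             # P(y1 > y1*), smooth in y2

# Outer integral over y2 with Gauss-Hermite quadrature (weight e^{-x^2/2})
nodes, weights = np.polynomial.hermite_e.hermegauss(32)
price = sum(w * smoothed_integrand(x) for x, w in zip(nodes, weights)) / np.sqrt(2 * np.pi)

# Closed form for this toy problem: a*y1 + b*y2 ~ N(0, a^2 + b^2)
exact = norm.sf(K / np.hypot(a, b))
print(price, exact)   # the smoothed quadrature matches the exact value
```

After pre-integration the outer integrand is analytic, which is what restores the fast convergence of sparse-grid and QMC rules.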
The smile calibration problem is a mathematical conundrum in finance that has challenged quantitative analysts for decades. Through his research, Aitor Muguruza has discovered a novel resolution to this classic problem.
International Journal of Engineering and Mathematical Modelling, Vol. 2, No. 1, 2015 - IJEMM
Our efforts are mostly concentrated on improving the convergence rate of the numerical procedures, both from the viewpoint of cost-efficiency and accuracy, by handling the parametrization of the shape to be optimized. We employ nested parameterization supports of either shape or shape deformation, and the classical process of degree elevation, resulting in exact geometrical data transfer from coarse to fine representations. The algorithms mimic classical multigrid strategies and are found to be very effective in terms of convergence acceleration. In this paper, we analyse and demonstrate the efficiency of the two-level correction algorithm, which is the basic building block of a more general multilevel strategy.
The document discusses uncertainty quantification (UQ) using quasi-Monte Carlo (QMC) integration methods. It introduces parametric operator equations for modeling input uncertainty in partial differential equations. Both forward and inverse UQ problems are considered. QMC methods like interlaced polynomial lattice rules are discussed for approximating high-dimensional integrals arising in UQ, with convergence rates superior to standard Monte Carlo. Algorithms for single-level and multilevel QMC are presented for solving forward and inverse UQ problems.
This document summarizes techniques for approximating marginal likelihoods and Bayes factors, which are important quantities in Bayesian inference. It discusses Geyer's 1994 logistic regression approach, links to bridge sampling, and how mixtures can be used as importance sampling proposals. Specifically, it shows how optimizing the logistic pseudo-likelihood relates to the bridge sampling optimal estimator. It also discusses non-parametric maximum likelihood estimation based on simulations.
Hierarchical Deterministic Quadrature Methods for Option Pricing under the Rough Bergomi Model - Chiheb Ben Hammouda
Seminar talk at École des Ponts ParisTech about our recently published work "Hierarchical adaptive sparse grids and quasi-Monte Carlo for option pricing under the rough Bergomi model". - Link of the paper: https://www.tandfonline.com/doi/abs/10.1080/14697688.2020.1744700
Talk by Michael Samet, entitled "Optimal Damping with Hierarchical Adaptive Quadrature for Efficient Fourier Pricing of Multi-Asset Options in Lévy Models", at the International Conference on Computational Finance (ICCF), Wuppertal, June 6-10, 2022.
R package bayesImageS: Scalable Inference for Intractable Likelihoods - Matt Moores
There are many approaches to Bayesian computation with intractable likelihoods, including the exchange algorithm and approximate Bayesian computation (ABC). A serious drawback of these algorithms is that they do not scale well for models with a large state space. Markov random fields, such as the Ising/Potts model and exponential random graph model (ERGM), are particularly challenging because the number of discrete variables increases linearly with the size of the image or graph. The likelihood of these models cannot be computed directly, due to the presence of an intractable normalising constant. In this context, it is necessary to employ algorithms that provide a suitable compromise between accuracy and computational cost.
Bayesian indirect likelihood (BIL) is a class of methods that approximate the likelihood function using a surrogate model. This model can be trained using a pre-computation step, utilising massively parallel hardware to simulate auxiliary variables. We review various types of surrogate model that can be used in BIL. In the case of the Potts model, we introduce a parametric approximation to the score function that incorporates its known properties, such as heteroskedasticity and critical temperature. We demonstrate this method on 2D satellite remote sensing and 3D computed tomography (CT) images. We achieve a hundredfold improvement in the elapsed runtime, compared to the exchange algorithm or ABC. Our algorithm has been implemented in the R package “bayesImageS,” which is available from CRAN.
This document summarizes a thesis on numerical methods for stochastic systems subject to generalized Lévy noise. It includes:
1) Motivation for studying such systems from both mathematical and applied perspectives, such as in mathematical finance and chaotic flows.
2) An introduction to Lévy processes and the probabilistic collocation method (PCM) for uncertainty quantification (UQ).
3) Details on improving PCM through a multi-element approach and constructing orthogonal polynomials for discrete measures.
This document discusses various methods for approximating marginal likelihoods and Bayes factors, including:
1. Geyer's 1994 logistic regression approach for approximating marginal likelihoods using importance sampling.
2. Bridge sampling and its connection to Geyer's approach. Optimal bridge sampling requires knowledge of unknown normalizing constants.
3. Using mixtures of importance distributions and the target distribution as proposals to estimate marginal likelihoods through Rao-Blackwellization. This connects to bridge sampling estimates.
4. The historical development of, and connections between, these different techniques for approximating marginal likelihoods and comparing hypotheses using Bayes factors.
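A minimal bridge-sampling sketch makes items 2 and 3 concrete. This uses the simple geometric bridge function rather than the iterative optimal bridge discussed above, on a toy pair of Gaussian densities whose normalizing-constant ratio is known to be 1:

```python
import numpy as np

rng = np.random.default_rng(0)

# Two unnormalised densities with the SAME normalising constant sqrt(2*pi),
# so the true ratio of constants is r = 1 (a built-in check for the estimator).
q1 = lambda x: np.exp(-x**2 / 2)            # N(0,1), unnormalised
q2 = lambda x: np.exp(-(x - 1)**2 / 2)      # N(1,1), unnormalised

x1 = rng.normal(0.0, 1.0, 100_000)          # draws from the normalised p1
x2 = rng.normal(1.0, 1.0, 100_000)          # draws from the normalised p2

# Geometric-bridge estimator with bridge function alpha = (q1*q2)^(-1/2):
# r = E_p2[sqrt(q1/q2)] / E_p1[sqrt(q2/q1)]
r_hat = np.mean(np.sqrt(q1(x2) / q2(x2))) / np.mean(np.sqrt(q2(x1) / q1(x1)))
print(r_hat)   # close to 1
```

The optimal bridge replaces the geometric bridge function with one that depends on the unknown ratio itself, which is why it is computed iteratively and why it connects to Geyer's logistic pseudo-likelihood.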
This document discusses the use of Monte Carlo simulations to evaluate measurement uncertainty according to Supplement 1 to the Guide to the Expression of Uncertainty in Measurement (GUM+1). It provides examples of generating random numbers and using them in Monte Carlo simulations. One example evaluates the uncertainty in measuring the volume of a cylinder based on uncertainties in diameter and height measurements. The document concludes with an example of calibrating a thermometer and determining the uncertainties in the calibration curve parameters.
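The cylinder-volume example can be sketched as a GUM+1-style Monte Carlo propagation; the numerical values for the diameter and height and their standard uncertainties below are illustrative, not taken from the document:

```python
import numpy as np

rng = np.random.default_rng(42)
N = 1_000_000

# Illustrative inputs: d = 10.00 mm (u = 0.05 mm), h = 50.0 mm (u = 0.2 mm)
d = rng.normal(10.00, 0.05, N)
h = rng.normal(50.0, 0.2, N)

V = np.pi * (d / 2) ** 2 * h            # propagate draws through the model

V_mean, u_V = V.mean(), V.std(ddof=1)   # best estimate and standard uncertainty
lo, hi = np.percentile(V, [2.5, 97.5])  # 95 % coverage interval (GUM+1 style)
print(f"V = {V_mean:.0f} mm^3, u(V) = {u_V:.0f} mm^3, 95% interval [{lo:.0f}, {hi:.0f}]")
```

Unlike the first-order law-of-propagation formula in the GUM itself, the Monte Carlo approach also delivers the full output distribution, from which the coverage interval is read off directly.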
Saltwater intrusion occurs when sea levels rise and saltwater moves onto the land. Usually, this occurs during storms, high tides, droughts, or when saltwater penetrates freshwater aquifers and raises the groundwater table. Since groundwater is an essential nutrition and irrigation resource, its salinization may lead to catastrophic consequences. Many acres of farmland may be lost because they can become too wet or salty to grow crops. Therefore, accurate modeling of different scenarios of saline flow is essential to help farmers and researchers develop strategies to improve the soil quality and decrease saltwater intrusion effects.
Saline flow is density-driven and described by a system of time-dependent nonlinear partial differential equations (PDEs). It features convection dominance and can exhibit very complicated behavior. As a specific model, we consider a Henry-like problem with uncertain permeability and porosity, since these parameters may strongly affect the flow and transport of salt.
This document presents a framework for efficiently pricing multi-asset options using Fourier methods. It discusses using the Fourier transform to map option pricing problems to frequency space, where the integrand may have better regularity. A damping parameter is introduced to ensure the transformed functions have sufficient decay at infinity. However, literature provides no guidance on choosing optimal damping parameters. The document proposes a method called Optimal Damping with Hierarchical Adaptive Quadrature to select damping parameters that improve the convergence rate of quadrature pricing methods in Fourier space. It applies this method to price options under various multi-dimensional models in numerical experiments.
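The role of the damping parameter can be illustrated with a one-dimensional Black-Scholes stand-in (the framework above targets multi-asset Lévy models such as VG and NIG; all parameter values here are illustrative). The damping choice below, minimizing the damped integrand at the origin, is a simple proxy in the spirit of, but not identical to, the optimal-damping rule the document proposes:

```python
import numpy as np
from scipy.integrate import trapezoid
from scipy.optimize import minimize_scalar
from scipy.stats import norm

S0, K, r, sigma, T = 100.0, 110.0, 0.02, 0.25, 1.0
k = np.log(K)

def phi(u):
    """Characteristic function of log S_T under Black-Scholes."""
    m = np.log(S0) + (r - 0.5 * sigma**2) * T
    return np.exp(1j * u * m - 0.5 * sigma**2 * u**2 * T)

def damped_transform(v, alpha):
    """Carr-Madan transform of the exp(alpha*k)-damped call price."""
    return (np.exp(-r * T) * phi(v - 1j * (alpha + 1))
            / (alpha**2 + alpha - v**2 + 1j * (2 * alpha + 1) * v))

def call_price(alpha, vmax=200.0, n=200_000):
    v = np.linspace(1e-10, vmax, n)
    integrand = np.real(np.exp(-1j * v * k) * damped_transform(v, alpha))
    return np.exp(-alpha * k) / np.pi * trapezoid(integrand, v)

# Heuristic damping choice: minimise the damped integrand at v = 0
res = minimize_scalar(lambda a: np.exp(-a * k) * np.real(damped_transform(0.0, a)),
                      bounds=(0.1, 10.0), method="bounded")
alpha_star = res.x

# Reference value from the Black-Scholes closed form
d1 = (np.log(S0 / K) + (r + 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
bs = S0 * norm.cdf(d1) - K * np.exp(-r * T) * norm.cdf(d1 - sigma * np.sqrt(T))
print(call_price(alpha_star), bs)   # the two prices agree closely
```

Without damping (alpha = 0) the transform is singular at v = 0; with a well-chosen alpha the Fourier integrand is smooth and rapidly decaying, which is what the hierarchical adaptive quadrature exploits.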
International Journal of Engineering Research and Development (IJERD) - IJERD Editor
This document presents a methodology for designing low error fixed width adaptive multipliers. It begins by discussing Baugh-Wooley multiplication, which produces a 2n-bit output from n-bit inputs. For digital signal processing applications, only an n-bit output is required. Direct truncation introduces errors. The methodology proposes using a generalized index and binary thresholding to derive an error-compensation bias to reduce truncation errors. It defines different types of binary thresholding and analyzes statistics to determine average bias values. The proposed fixed width multiplier is intended to have better error performance than other existing multiplier structures.
Sparse data formats and efficient numerical methods for uncertainties in numerical aerodynamics - Alexander Litvinenko
Description of methodologies and overview of numerical methods, which we used for modeling and quantification of uncertainties in numerical aerodynamics
The document summarizes key concepts from Chapter 8 of the textbook "Fundamentals of Multimedia" on lossy compression algorithms. It introduces lossy compression and discusses distortion measures, rate-distortion theory, quantization techniques including uniform, non-uniform, and vector quantization. It also covers transform coding techniques such as the discrete cosine transform and its use in image compression standards to remove spatial redundancies by transforming pixel values into frequency coefficients.
To describe the dynamics taking place in networks that structurally change over time, we propose an approach to search for attributes whose value changes impact the topology of the graph. In several applications, it appears that the variations of a group of attributes are often followed by structural changes in the graph that they may be assumed to generate. We formalize the triggering pattern discovery problem as a method jointly rooted in sequence mining and graph analysis. We apply our approach to three real-world dynamic graphs of different natures - a co-authoring network, an airline network, and a social bookmarking system - assessing the relevance of the triggering pattern mining approach.
We approach the screening problem - i.e. detecting which inputs of a computer model significantly impact the output - from a formal Bayesian model selection point of view. That is, we place a Gaussian process prior on the computer model and consider the $2^p$ models that result from assuming that each of the subsets of the $p$ inputs affect the response. The goal is to obtain the posterior probabilities of each of these models. In this talk, we focus on the specification of objective priors on the model-specific parameters and on convenient ways to compute the associated marginal likelihoods. These two problems, normally seen as unrelated, have challenging connections, since the priors proposed in the literature are specifically designed to have posterior modes on the boundary of the parameter space, hence precluding the application of approximate integration techniques based on e.g. Laplace approximations. We explore several ways of circumventing this difficulty, comparing different methodologies with synthetic examples taken from the literature.
Authors: Gonzalo Garcia-Donato (Universidad de Castilla-La Mancha) and Rui Paulo (Universidade de Lisboa)
Random Matrix Theory and Machine Learning - Part 4 - Fabian Pedregosa
Deep learning models with millions or billions of parameters should overfit according to classical theory, but they do not. The emerging theory of double descent seeks to explain why larger neural networks can generalize well. Random matrix theory provides a tractable framework to model double descent through random feature models, where the number of random features controls model capacity. In the high-dimensional limit, the test error of random feature regression exhibits a double descent shape that can be computed analytically.
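A small random-feature experiment reproduces the double descent shape empirically (the setup below, a noisy linear teacher fitted by min-norm regression on random ReLU features, is an illustrative sketch, not the talk's analytical computation):

```python
import numpy as np

rng = np.random.default_rng(1)
n, d, n_test = 100, 20, 2000

# Noisy linear teacher; the student is random-feature regression
beta_true = rng.normal(size=d)
X, X_test = rng.normal(size=(n, d)), rng.normal(size=(n_test, d))
y = X @ beta_true + 0.5 * rng.normal(size=n)
y_test = X_test @ beta_true

def rf_test_error(p):
    """Min-norm least squares on p random ReLU features."""
    W = rng.normal(size=(d, p)) / np.sqrt(d)
    F, F_test = np.maximum(X @ W, 0), np.maximum(X_test @ W, 0)
    coef = np.linalg.lstsq(F, y, rcond=None)[0]   # min-norm interpolant for p >= n
    return float(np.mean((F_test @ coef - y_test) ** 2))

# The number of random features p controls capacity; p = n = 100 is the
# interpolation threshold where classical theory predicts the worst overfitting
errors = {p: rf_test_error(p) for p in (50, 100, 400)}
print(errors)   # test error peaks near p = n and falls again beyond it
```

The blow-up at p = n comes from the near-singular feature matrix amplifying label noise; in the high-dimensional limit this curve is exactly what random matrix theory computes analytically.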
This document summarizes a talk given by Heiko Strathmann on using partial posterior paths to estimate expectations from large datasets without full posterior simulation. The key ideas are:
1. Construct a path of "partial posteriors" by sequentially adding mini-batches of data and computing expectations over these posteriors.
2. "Debias" the path of expectations to obtain an unbiased estimator of the true posterior expectation using a technique from stochastic optimization literature.
3. This approach allows estimating posterior expectations with sub-linear computational cost in the number of data points, without requiring full posterior simulation or imposing restrictions on the likelihood.
Experiments on synthetic and real-world examples demonstrate competitive performance versus standard MCMC.
Efficient Fourier Pricing of Multi-Asset Options: Quasi-Monte Carlo & Domain Transformation - Chiheb Ben Hammouda
My talk at ICCF24 with abstract: Efficiently pricing multi-asset options poses a significant challenge in quantitative finance. While the Monte Carlo (MC) method remains a prevalent choice, its slow convergence rate can impede practical applications. Fourier methods, leveraging the knowledge of the characteristic function, have shown promise in valuing single-asset options but face hurdles in the high-dimensional context. This work advocates using the randomized quasi-MC (RQMC) quadrature to improve the scalability of Fourier methods with high dimensions. The RQMC technique benefits from the smoothness of the integrand and alleviates the curse of dimensionality while providing practical error estimates. Nonetheless, the applicability of RQMC on the unbounded domain, $\mathbb{R}^d$, requires a domain transformation to $[0,1]^d$, which may result in singularities of the transformed integrand at the corners of the hypercube, and deteriorate the rate of convergence of RQMC. To circumvent this difficulty, we design an efficient domain transformation procedure based on the derived boundary growth conditions of the integrand. This transformation preserves the sufficient regularity of the integrand and hence improves the rate of convergence of RQMC. To validate this analysis, we demonstrate the efficiency of employing RQMC with an appropriate transformation to evaluate options in the Fourier space for various pricing models, payoffs, and dimensions. Finally, we highlight the computational advantage of applying RQMC over quadrature methods in the Fourier domain, and over the MC method in the physical domain for options with up to 15 assets.
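The basic mapping from $[0,1]^d$ to $\mathbb{R}^d$ can be sketched with scrambled Sobol points and the inverse normal CDF (a toy expectation with a closed form; the work above derives a tailored transformation from boundary growth conditions, which this sketch does not include):

```python
import numpy as np
from scipy.stats import norm, qmc

d = 5
a = np.full(d, 0.3)
exact = np.exp(0.5 * np.sum(a**2))      # E[exp(a.Z)] for Z ~ N(0, I_d)

# Randomised (scrambled) Sobol points on the unit cube [0,1]^d ...
sampler = qmc.Sobol(d=d, scramble=True, seed=7)
u = sampler.random_base2(m=16)          # 2^16 points

# ... mapped to R^d through the inverse normal CDF; this is the step whose
# corner singularities the boundary-growth analysis has to control
z = norm.ppf(u)
est = np.mean(np.exp(z @ a))
print(est, exact)   # RQMC estimate vs the closed-form value
```

Scrambling both randomizes the point set, enabling practical error estimates from independent replications, and keeps the points away from the corners of the hypercube where the transformed integrand can blow up.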
The document summarizes key concepts from Chapter 8 of the textbook "Fundamentals of Multimedia" on lossy compression algorithms. It introduces lossy compression and discusses distortion measures, rate-distortion theory, quantization techniques including uniform, non-uniform, and vector quantization. It also covers transform coding techniques such as the discrete cosine transform and its use in image compression standards to remove spatial redundancies by transforming pixel values into frequency coefficients.
To describe the dynamics taking place in networks that structurally change over time, we propose an approach to search for attributes whose value changes impact the topology of the graph. In several applications, it appears that the variations of a group of attributes are often followed by some structural changes in the graph that one may assume they generate. We formalize the triggering pattern discovery problem as a method jointly rooted in sequence mining and graph analysis. We apply our approach on three real-world dynamic graphs of different natures - a co-authoring network, an airline network, and a social bookmarking system - assessing the relevancy of the triggering pattern mining approach.
We approach the screening problem - i.e. detecting which inputs of a computer model significantly impact the output - from a formal Bayesian model selection point of view. That is, we place a Gaussian process prior on the computer model and consider the $2^p$ models that result from assuming that each of the subsets of the $p$ inputs affect the response. The goal is to obtain the posterior probabilities of each of these models. In this talk, we focus on the specification of objective priors on the model-specific parameters and on convenient ways to compute the associated marginal likelihoods. These two problems that normally are seen as unrelated, have challenging connections since the priors proposed in the literature are specifically designed to have posterior modes in the boundary of the parameter space, hence precluding the application of approximate integration techniques based on e.g. Laplace approximations. We explore several ways of circumventing this difficulty, comparing different methodologies with synthetic examples taken from the literature.
Authors: Gonzalo Garcia-Donato (Universidad de Castilla-La Mancha) and Rui Paulo (Universidade de Lisboa)
Random Matrix Theory and Machine Learning - Part 4Fabian Pedregosa
Deep learning models with millions or billions of parameters should overfit according to classical theory, but they do not. The emerging theory of double descent seeks to explain why larger neural networks can generalize well. Random matrix theory provides a tractable framework to model double descent through random feature models, where the number of random features controls model capacity. In the high-dimensional limit, the test error of random feature regression exhibits a double descent shape that can be computed analytically.
This document summarizes a talk given by Heiko Strathmann on using partial posterior paths to estimate expectations from large datasets without full posterior simulation. The key ideas are:
1. Construct a path of "partial posteriors" by sequentially adding mini-batches of data and computing expectations over these posteriors.
2. "Debias" the path of expectations to obtain an unbiased estimator of the true posterior expectation using a technique from stochastic optimization literature.
3. This approach allows estimating posterior expectations with sub-linear computational cost in the number of data points, without requiring full posterior simulation or imposing restrictions on the likelihood.
Experiments on synthetic and real-world examples demonstrate competitive performance versus standard M
Similar to Talk_HU_Berlin_Chiheb_benhammouda.pdf (20)
Efficient Fourier Pricing of Multi-Asset Options: Quasi-Monte Carlo & Domain ...Chiheb Ben Hammouda
My talk at ICCF24 with abstract: Efficiently pricing multi-asset options poses a significant challenge in quantitative finance. While the Monte Carlo (MC) method remains a prevalent choice, its slow convergence rate can impede practical applications. Fourier methods, leveraging the knowledge of the characteristic function, have shown promise in valuing single-asset options but face hurdles in the high-dimensional context. This work advocates using the randomized quasi-MC (RQMC) quadrature to improve the scalability of Fourier methods with high dimensions. The RQMC technique benefits from the smoothness of the integrand and alleviates the curse of dimensionality while providing practical error estimates. Nonetheless, the applicability of RQMC on the unbounded domain, $\mathbb{R}^d$, requires a domain transformation to $[0,1]^d$, which may result in singularities of the transformed integrand at the corners of the hypercube, and deteriorate the rate of convergence of RQMC. To circumvent this difficulty, we design an efficient domain transformation procedure based on the derived boundary growth conditions of the integrand. This transformation preserves the sufficient regularity of the integrand and hence improves the rate of convergence of RQMC. To validate this analysis, we demonstrate the efficiency of employing RQMC with an appropriate transformation to evaluate options in the Fourier space for various pricing models, payoffs, and dimensions. Finally, we highlight the computational advantage of applying RQMC over quadrature methods in the Fourier domain, and over the MC method in the physical domain for options with up to 15 assets.
Stochastic reaction networks (SRNs) are a particular class of continuous-time Markov chains used to model a wide range of phenomena, including biological/chemical reactions, epidemics, risk theory, queuing, and supply chain/social/multi-agents networks. In this context, we explore the efficient estimation of statistical quantities, particularly rare event probabilities, and propose two alternative importance sampling (IS) approaches [1,2] to improve the Monte Carlo (MC) estimator efficiency. The key challenge in the IS framework is to choose an appropriate change of probability measure to achieve substantial variance reduction, which often requires insights into the underlying problem. Therefore, we propose an automated approach to obtain a highly efficient path-dependent measure change based on an original connection between finding optimal IS parameters and solving a variance minimization problem via a stochastic optimal control formulation. We pursue two alternative approaches to mitigate the curse of dimensionality when solving the resulting dynamic programming problem. In the first approach [1], we propose a learning-based method to approximate the value function using a neural network, where the parameters are determined via a stochastic optimization algorithm. As an alternative, we present in [2] a dimension reduction method, based on mapping the problem to a significantly lower dimensional space via the Markovian projection (MP) idea. The output of this model reduction technique is a low dimensional SRN (potentially one dimension) that preserves the marginal distribution of the original high-dimensional SRN system. The dynamics of the projected process are obtained via a discrete $L^2$ regression. 
By solving a resulting projected Hamilton-Jacobi-Bellman (HJB) equation for the reduced-dimensional SRN, we get projected IS parameters, which are then mapped back to the original full-dimensional SRN system, and result in an efficient IS-MC estimator of the full-dimensional SRN. Our analysis and numerical experiments verify that both proposed IS (learning based and MP-HJB-IS) approaches substantially reduce the MC estimator’s variance, resulting in a lower computational complexity in the rare event regime than standard MC estimators. [1] Ben Hammouda, C., Ben Rached, N., and Tempone, R., and Wiechert, S. Learning-based importance sampling via stochastic optimal control for stochastic reaction net-works. Statistics and Computing 33, no. 3 (2023): 58. [2] Ben Hammouda, C., Ben Rached, N., and Tempone, R., and Wiechert, S. (2023). Automated Importance Sampling via Optimal Control for Stochastic Reaction Networks: A Markovian Projection-based Approach. To appear soon.
Seminar Talk: Multilevel Hybrid Split Step Implicit Tau-Leap for Stochastic R...Chiheb Ben Hammouda
The document describes a multilevel hybrid split-step implicit tau-leap method for simulating stochastic reaction networks. It begins with background on modeling biochemical reaction networks stochastically. It then discusses challenges with existing simulation methods like the chemical master equation and stochastic simulation algorithm. The document introduces the split-step implicit tau-leap method as an improvement over explicit tau-leap for stiff systems. It proposes a multilevel Monte Carlo estimator using this method to efficiently estimate expectations of observables with near-optimal computational work.
My talk in the MCQMC Conference 2016, Stanford University. The talk is about Multilevel Hybrid Split Step Implicit Tau-Leap
for Stochastic Reaction Networks.
Or: Beyond linear.
Abstract: Equivariant neural networks are neural networks that incorporate symmetries. The nonlinear activation functions in these networks result in interesting nonlinear equivariant maps between simple representations, and motivate the key player of this talk: piecewise linear representation theory.
Disclaimer: No one is perfect, so please mind that there might be mistakes and typos.
dtubbenhauer@gmail.com
Corrected slides: dtubbenhauer.com/talks.html
Unlocking the mysteries of reproduction: Exploring fecundity and gonadosomati...AbdullaAlAsif1
The pygmy halfbeak Dermogenys colletei, is known for its viviparous nature, this presents an intriguing case of relatively low fecundity, raising questions about potential compensatory reproductive strategies employed by this species. Our study delves into the examination of fecundity and the Gonadosomatic Index (GSI) in the Pygmy Halfbeak, D. colletei (Meisner, 2001), an intriguing viviparous fish indigenous to Sarawak, Borneo. We hypothesize that the Pygmy halfbeak, D. colletei, may exhibit unique reproductive adaptations to offset its low fecundity, thus enhancing its survival and fitness. To address this, we conducted a comprehensive study utilizing 28 mature female specimens of D. colletei, carefully measuring fecundity and GSI to shed light on the reproductive adaptations of this species. Our findings reveal that D. colletei indeed exhibits low fecundity, with a mean of 16.76 ± 2.01, and a mean GSI of 12.83 ± 1.27, providing crucial insights into the reproductive mechanisms at play in this species. These results underscore the existence of unique reproductive strategies in D. colletei, enabling its adaptation and persistence in Borneo's diverse aquatic ecosystems, and call for further ecological research to elucidate these mechanisms. This study lends to a better understanding of viviparous fish in Borneo and contributes to the broader field of aquatic ecology, enhancing our knowledge of species adaptations to unique ecological challenges.
Describing and Interpreting an Immersive Learning Case with the Immersion Cub...Leonel Morgado
Current descriptions of immersive learning cases are often difficult or impossible to compare. This is due to a myriad of different options on what details to include, which aspects are relevant, and on the descriptive approaches employed. Also, these aspects often combine very specific details with more general guidelines or indicate intents and rationales without clarifying their implementation. In this paper we provide a method to describe immersive learning cases that is structured to enable comparisons, yet flexible enough to allow researchers and practitioners to decide which aspects to include. This method leverages a taxonomy that classifies educational aspects at three levels (uses, practices, and strategies) and then utilizes two frameworks, the Immersive Learning Brain and the Immersion Cube, to enable a structured description and interpretation of immersive learning cases. The method is then demonstrated on a published immersive learning case on training for wind turbine maintenance using virtual reality. Applying the method results in a structured artifact, the Immersive Learning Case Sheet, that tags the case with its proximal uses, practices, and strategies, and refines the free text case description to ensure that matching details are included. This contribution is thus a case description method in support of future comparative research of immersive learning cases. We then discuss how the resulting description and interpretation can be leveraged to change immersion learning cases, by enriching them (considering low-effort changes or additions) or innovating (exploring more challenging avenues of transformation). The method holds significant promise to support better-grounded research in immersive learning.
When I was asked to give a companion lecture in support of ‘The Philosophy of Science’ (https://shorturl.at/4pUXz) I decided not to walk through the detail of the many methodologies in order of use. Instead, I chose to employ a long standing, and ongoing, scientific development as an exemplar. And so, I chose the ever evolving story of Thermodynamics as a scientific investigation at its best.
Conducted over a period of >200 years, Thermodynamics R&D, and application, benefitted from the highest levels of professionalism, collaboration, and technical thoroughness. New layers of application, methodology, and practice were made possible by the progressive advance of technology. In turn, this has seen measurement and modelling accuracy continually improved at a micro and macro level.
Perhaps most importantly, Thermodynamics rapidly became a primary tool in the advance of applied science/engineering/technology, spanning micro-tech, to aerospace and cosmology. I can think of no better a story to illustrate the breadth of scientific methodologies and applications at their best.
The debris of the ‘last major merger’ is dynamically youngSérgio Sacani
The Milky Way’s (MW) inner stellar halo contains an [Fe/H]-rich component with highly eccentric orbits, often referred to as the
‘last major merger.’ Hypotheses for the origin of this component include Gaia-Sausage/Enceladus (GSE), where the progenitor
collided with the MW proto-disc 8–11 Gyr ago, and the Virgo Radial Merger (VRM), where the progenitor collided with the
MW disc within the last 3 Gyr. These two scenarios make different predictions about observable structure in local phase space,
because the morphology of debris depends on how long it has had to phase mix. The recently identified phase-space folds in Gaia
DR3 have positive caustic velocities, making them fundamentally different than the phase-mixed chevrons found in simulations
at late times. Roughly 20 per cent of the stars in the prograde local stellar halo are associated with the observed caustics. Based
on a simple phase-mixing model, the observed number of caustics are consistent with a merger that occurred 1–2 Gyr ago.
We also compare the observed phase-space distribution to FIRE-2 Latte simulations of GSE-like mergers, using a quantitative
measurement of phase mixing (2D causticality). The observed local phase-space distribution best matches the simulated data
1–2 Gyr after collision, and certainly not later than 3 Gyr. This is further evidence that the progenitor of the ‘last major merger’
did not collide with the MW proto-disc at early times, as is thought for the GSE, but instead collided with the MW disc within
the last few Gyr, consistent with the body of work surrounding the VRM.
Current Ms word generated power point presentation covers major details about the micronuclei test. It's significance and assays to conduct it. It is used to detect the micronuclei formation inside the cells of nearly every multicellular organism. It's formation takes place during chromosomal sepration at metaphase.
1. Smoothing Techniques and Hierarchical Approximations for Efficient Option Pricing
Chiheb Ben Hammouda, Christian Bayer, Michael Samet, Antonis Papapantoleon, Raúl Tempone
Center for Uncertainty Quantification
Mathematical Finance Seminar, Humboldt University, Berlin
October 27, 2022
2. Related Manuscripts to the Talk
Part I
▸ Christian Bayer, Chiheb Ben Hammouda, and Raúl Tempone. "Numerical Smoothing with Hierarchical Adaptive Sparse Grids and Quasi-Monte Carlo Methods for Efficient Option Pricing". arXiv preprint arXiv:2111.01874 (2021). Accepted in Quantitative Finance (2022).
▸ Christian Bayer, Chiheb Ben Hammouda, and Raúl Tempone. "Multilevel Monte Carlo combined with numerical smoothing for robust and efficient option pricing and density estimation". arXiv preprint arXiv:2003.05708 (2022).
Part II
▸ Christian Bayer, Chiheb Ben Hammouda, Antonis Papapantoleon, Michael Samet, and Raúl Tempone. "Optimal Damping with Hierarchical Adaptive Quadrature for Efficient Fourier Pricing of Multi-Asset Options in Lévy Models". arXiv preprint arXiv:2203.08196 (2022).
3. 1 Motivation and Framework
2 Numerical Smoothing for Efficient Option Pricing
   The Numerical Smoothing Idea
   The Smoothness Theorem
   Error Discussion
   Numerical Experiments and Results
   Conclusions
3 Optimal Damping with Hierarchical Adaptive Quadrature for Efficient Fourier Pricing of Multi-Asset Options in Lévy Models
   Problem Setting and Motivation
   Optimal Damping Rule
   Numerical Experiments and Results
   Conclusions
4. Option Pricing as a High-Dimensional, Non-Smooth Numerical Integration Problem
Option pricing often corresponds to high-dimensional integration problems with non-smooth integrands.
▸ High-dimensional, due to:
(i) time-discretization of an SDE;
(ii) a large number of underlying assets;
(iii) path dependence on the whole trajectory of the underlying.
▸ Non-smooth: e.g., digital options: $1_{S_T > K}$; rainbow options: $\max(\min(S_1,\dots,S_d) - K, 0)$, . . .
Methods like quasi-Monte Carlo (QMC) or (adaptive) sparse grids quadrature (ASGQ) critically rely on the smoothness of the integrand and the dimension of the problem.
Monte Carlo (MC) does not care (in terms of convergence rates), BUT multilevel Monte Carlo (MLMC) does.
Figure 1.1: Integrand of a two-dimensional European basket option (Black–Scholes model).
6. Randomized QMC (rQMC)
A (rank-1) lattice rule (Sloan 1985; Nuyens 2014) with $n$ points:
$$Q_n(g) := \frac{1}{n}\sum_{k=0}^{n-1} (g \circ F^{-1})\!\left(\frac{k\mathbf{z} \bmod n}{n}\right),$$
where $\mathbf{z} = (z_1,\dots,z_d) \in \mathbb{N}^d$ is the generating vector and $F^{-1}(\cdot)$ is the inverse of the standard normal cumulative distribution function.
A randomly shifted lattice rule:
$$Q_{n,q}(g) = \frac{1}{q}\sum_{i=0}^{q-1} Q_n^{(i)}(g) = \frac{1}{q}\sum_{i=0}^{q-1}\left(\frac{1}{n}\sum_{k=0}^{n-1}(g \circ F^{-1})\!\left(\frac{(k\mathbf{z} + \Delta^{(i)}) \bmod n}{n}\right)\right),$$
where $\{\Delta^{(i)}\}_{i=0}^{q-1}$ are independent random shifts, and $M_{\mathrm{rQMC}} = q \times n$.
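The randomly shifted lattice rule above can be sketched in a few lines; a minimal illustration for $\mathrm{E}[g(Z)]$, $Z \sim \mathcal{N}(0, I_d)$, where the generating vector passed in is an arbitrary illustrative choice (not an optimized CBC-constructed vector):

```python
import numpy as np
from scipy.stats import norm

def shifted_lattice_estimate(g, z, n, q, rng):
    """Randomly shifted rank-1 lattice estimate of E[g(Z)], Z ~ N(0, I_d).
    g takes an (n, d) array of Gaussian points and returns n values;
    z is the generating vector, n the number of lattice points,
    q the number of independent random shifts."""
    z = np.asarray(z)
    d = len(z)
    k = np.arange(n)[:, None]
    base = (k * z[None, :] % n) / n          # rank-1 lattice in [0,1)^d
    shift_estimates = []
    for _ in range(q):
        u = (base + rng.random(d)) % 1.0     # random shift, wrapped mod 1
        x = norm.ppf(u)                      # map to R^d via F^{-1}
        shift_estimates.append(g(x).mean())  # Q_n^{(i)}(g)
    est = float(np.mean(shift_estimates))    # Q_{n,q}(g)
    err = float(np.std(shift_estimates, ddof=1) / np.sqrt(q))  # practical error estimate
    return est, err
```

The independent shifts are what provide the practical error estimate mentioned later in the talk: each $Q_n^{(i)}(g)$ is an unbiased estimator, so their sample standard deviation over the mean is a confidence measure.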
7. How Does Regularity Affect QMC Performance?
(Niederreiter 1992): convergence rate of $O\!\left(M_{\mathrm{rQMC}}^{-\frac{1}{2}-\delta}\right)$, where $0 \le \delta \le \frac{1}{2}$ is related to the degree of regularity of the integrand $g$.
(Dick, Kuo, and Sloan 2013): a convergence rate of $O\!\left(M_{\mathrm{rQMC}}^{-1}\right)$ can be observed for the lattice rule if the integrand $g$ belongs to $W_{d,\gamma}$, equipped with the norm
$$\|g\|^2_{W_{d,\gamma}} = \sum_{\alpha \subseteq \{1:d\}} \frac{1}{\gamma_\alpha} \int_{[0,1]^{|\alpha|}} \left(\int_{[0,1]^{d-|\alpha|}} \frac{\partial^{|\alpha|}}{\partial y_\alpha} g(y)\, dy_{-\alpha}\right)^2 dy_\alpha,$$
where $y_\alpha := (y_j)_{j\in\alpha}$ and $y_{-\alpha} := (y_j)_{j\in\{1:d\}\setminus\alpha}$.
(Nuyens and Suzuki 2021): Higher mixed smoothness and periodicity requirements can lead to higher-order convergence rates when using scaled lattice rules.
Notation
$W_{d,\gamma}$: a $d$-dimensional weighted Sobolev space of functions with square-integrable mixed first derivatives.
$\gamma := \{\gamma_\alpha > 0 : \alpha \subseteq \{1,2,\dots,d\}\}$: a given collection of weights; $d$: the dimension of the problem.
8. Quadrature Methods (I)
Naive quadrature for $\mathrm{E}[g]$ suffers from the curse of dimension:
$$Q^\beta_d[g] := \bigotimes_{i=1}^{d} Q^{m(\beta_i)}[g] \equiv \sum_{k_1=1}^{m(\beta_1)} \cdots \sum_{k_d=1}^{m(\beta_d)} w^{\beta_1}_{k_1}\cdots w^{\beta_d}_{k_d}\, g\!\left(x^{\beta_1}_{k_1},\dots,x^{\beta_d}_{k_d}\right)$$
Notation:
$\{x^{\beta_i}_{k_i}\}_{k_i=1}^{m(\beta_i)}$ and $\{\omega^{\beta_i}_{k_i}\}_{k_i=1}^{m(\beta_i)}$ are respectively the sets of quadrature points and corresponding quadrature weights.
$\beta = (\beta_i)_{i=1}^{d} \in \mathbb{N}^d$ is a multi-index, and $m : \mathbb{N} \to \mathbb{N}$ is a level-to-nodes function mapping a given level to the number of quadrature points.
Better quadrature estimate of $\mathrm{E}[g]$:
$$Q^{\mathcal{I}}_d[g] = \sum_{\beta \in \mathcal{I}} \Delta Q^\beta_d[g], \quad \text{with the multivariate detail } \Delta Q^\beta_d[g] = \bigotimes_{i=1}^{d} \Delta_i Q^\beta_d[g]$$
and the univariate detail ($e_i$ denotes the $i$th $d$-dimensional unit vector):
$$\Delta_i Q^\beta_d[g] := \begin{cases} \big(Q^\beta_d - Q^{\beta'}_d\big)[g], \text{ with } \beta' = \beta - e_i, & \text{if } \beta_i > 1,\\[2pt] Q^\beta_d[g], & \text{if } \beta_i = 1.\end{cases}$$
Illustration:
$$\Delta Q^\beta_2 = \Delta_2\Delta_1 Q^{(\beta_1,\beta_2)}_2 = Q^{(\beta_1,\beta_2)}_2 - Q^{(\beta_1,\beta_2-1)}_2 - Q^{(\beta_1-1,\beta_2)}_2 + Q^{(\beta_1-1,\beta_2-1)}_2$$
$$\sum_{\|\beta\|_\infty \le 2} \Delta Q^\beta_2[g] = \Delta Q^{(1,1)}_2[g] + \Delta Q^{(2,1)}_2[g] + \Delta Q^{(1,2)}_2[g] + \Delta Q^{(2,2)}_2[g] = Q^{(2,2)}_2[g]$$
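The telescoping identity above is easy to verify numerically. A minimal sketch, where the level-to-nodes map $m(\beta) = \beta$ with Gauss–Legendre nodes on $[0,1]$ is an illustrative choice (the talk does not fix one):

```python
import numpy as np
from itertools import product

def q1d(level):
    """Univariate Gauss-Legendre rule on [0,1] with m(level) = level nodes
    (an illustrative level-to-nodes choice)."""
    x, w = np.polynomial.legendre.leggauss(level)
    return 0.5 * (x + 1.0), 0.5 * w

def Q(beta, g):
    """Tensor-product quadrature Q^beta_d[g] over [0,1]^d."""
    pts, wts = zip(*(q1d(b) for b in beta))
    total = 0.0
    for idx in product(*(range(b) for b in beta)):
        x = np.array([pts[i][j] for i, j in enumerate(idx)])
        w = np.prod([wts[i][j] for i, j in enumerate(idx)])
        total += w * g(x)
    return total

def dQ(beta, g):
    """Multivariate detail ΔQ^beta_d[g], expanded by inclusion-exclusion over
    backward neighbours (directions with beta_i = 1 simply keep Q)."""
    total = 0.0
    for signs in product([0, 1], repeat=len(beta)):
        bp = tuple(b - s for b, s in zip(beta, signs))
        if min(bp) >= 1:
            total += (-1) ** sum(signs) * Q(bp, g)
    return total
```

Summing `dQ` over all $\|\beta\|_\infty \le 2$ reproduces `Q((2, 2), g)` exactly, which is precisely the illustration on the slide.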
9. Quadrature Methods (II)
$$\mathrm{E}[g] \approx Q^{\mathcal{I}}_d[g] = \sum_{\beta \in \mathcal{I}} \Delta Q^\beta_d[g]$$
Tensor product (TP) approach: $\mathcal{I} := \mathcal{I}_\ell = \{\|\beta\|_\infty \le \ell;\ \beta \in \mathbb{N}^d_+\}$.
Regular sparse grids: $\mathcal{I}_\ell = \{\|\beta\|_1 \le \ell + d - 1;\ \beta \in \mathbb{N}^d_+\}$.
Adaptive sparse grids quadrature (ASGQ): the construction of $\mathcal{I} = \mathcal{I}^{\mathrm{ASGQ}}$ is done a posteriori and adaptively by profit thresholding:
$$\mathcal{I}^{\mathrm{ASGQ}} = \{\beta \in \mathbb{N}^d_+ : P_\beta \ge \bar{T}\}.$$
▸ Profit of a hierarchical surplus: $P_\beta = \frac{|\Delta E_\beta|}{\Delta W_\beta}$.
▸ Error contribution: $\Delta E_\beta = |M^{\mathcal{I}\cup\{\beta\}} - M^{\mathcal{I}}|$.
▸ Work contribution: $\Delta W_\beta = \mathrm{Work}[M^{\mathcal{I}\cup\{\beta\}}] - \mathrm{Work}[M^{\mathcal{I}}]$.
Figure 1.2: Left: product grids $\Delta^{\beta_1} \otimes \Delta^{\beta_2}$ for $1 \le \beta_1,\beta_2 \le 3$. Right: the corresponding sparse grids construction.
Figure 1.3: Illustration of ASG grid construction.
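The profit-thresholding construction can be sketched as a greedy loop: accept the candidate multi-index with the largest profit, then expand its admissible forward neighbours. This is a minimal sketch only; the level-to-nodes map ($m(\beta) = \beta$, Gauss–Legendre on $[0,1]$) and the work proxy are illustrative assumptions, not the talk's choices:

```python
import numpy as np
from itertools import product

def gl_rule(m):
    # Gauss-Legendre on [0,1] with m nodes; m(beta) = beta is illustrative
    x, w = np.polynomial.legendre.leggauss(m)
    return 0.5 * (x + 1.0), 0.5 * w

def tensor_Q(beta, g):
    # tensor-product rule Q^beta_d[g]
    pts, wts = zip(*(gl_rule(b) for b in beta))
    total = 0.0
    for idx in product(*(range(b) for b in beta)):
        total += np.prod([wts[i][j] for i, j in enumerate(idx)]) * \
                 g(np.array([pts[i][j] for i, j in enumerate(idx)]))
    return total

def detail(beta, g):
    # Delta Q^beta_d[g] via inclusion-exclusion over backward neighbours
    total = 0.0
    for s in product([0, 1], repeat=len(beta)):
        bp = tuple(b - si for b, si in zip(beta, s))
        if min(bp) >= 1:
            total += (-1) ** sum(s) * tensor_Q(bp, g)
    return total

def asgq(g, d, max_indices=30):
    """Greedy ASGQ sketch: repeatedly accept the admissible multi-index with
    the largest profit P_beta = |dE_beta| / dW_beta."""
    work = lambda beta: float(np.prod(beta))   # crude proxy for dW_beta
    one = tuple([1] * d)
    accepted = {one}
    estimate = detail(one, g)
    active = {}                                # candidate beta -> dE_beta
    for i in range(d):
        nb = tuple(1 + (j == i) for j in range(d))
        active[nb] = detail(nb, g)
    for _ in range(max_indices):
        if not active:
            break
        beta = max(active, key=lambda b: abs(active[b]) / work(b))
        estimate += active.pop(beta)
        accepted.add(beta)
        for i in range(d):                     # admissible forward neighbours
            nb = tuple(b + (j == i) for j, b in enumerate(beta))
            if nb in active or nb in accepted:
                continue
            parents_ok = all(
                nb[k] == 1 or
                tuple(b - (j == k) for j, b in enumerate(nb)) in accepted
                for k in range(d)
            )
            if parents_ok:
                active[nb] = detail(nb, g)
    return estimate
```

Admissibility (all backward neighbours already accepted) keeps the index set downward-closed, which is what makes the telescoping sum of details a valid quadrature estimate.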
10. How Does Regularity Affect ASGQ Performance?
Product approach: $E_Q(M) = O(M^{-r/d})$ (for functions with bounded total derivatives up to order $r$).
Adaptive sparse grids quadrature (ASGQ): from the analysis in (Chen 2018; Ernst, Sprungk, and Tamellini 2018), $E_Q(M) = O(M^{-p/2})$, where¹ $p$ is independent of the problem dimension and is related to the order up to which the weighted mixed derivatives are bounded.
Notation: $M$: number of quadrature points; $E_Q$: quadrature error.
¹ $p$ characterizes the relation between the regularity of the integrand and the anisotropy of the integrand with respect to the different dimensions.
11. ASGQ Quadrature Error Convergence
[Plots: relative quadrature error versus $M_{\mathrm{ASGQ}}$ for ASGQ with and without numerical smoothing.]
Figure 1.4: Digital option under the Heston model: comparison of the relative quadrature error convergence for the ASGQ method with/without numerical smoothing. (a) Without Richardson extrapolation ($N = 8$); (b) with Richardson extrapolation ($N_{\text{fine level}} = 8$).
12. Multilevel Monte Carlo (MLMC)
(Heinrich 2001; Kebaier et al. 2005; Giles 2008b)
Aim: improve MC complexity when estimating $\mathrm{E}[g(X(T))]$.
Setting:
▸ A hierarchy of nested meshes of $[0,T]$, indexed by $\{\ell\}_{\ell=0}^{L}$.
▸ $\Delta t_\ell = K^{-\ell}\Delta t_0$: the time step size for levels $\ell \ge 1$; $K > 1$, $K \in \mathbb{N}$.
▸ $X_\ell := X_{\Delta t_\ell}$: the approximate process generated using a step size of $\Delta t_\ell$.
MLMC idea
$$\mathrm{E}[g(X_L(T))] = \mathrm{E}[g(X_0(T))] + \sum_{\ell=1}^{L} \mathrm{E}[g(X_\ell(T)) - g(X_{\ell-1}(T))] \tag{1}$$
$$\mathrm{Var}[g(X_0(T))] \gg \mathrm{Var}[g(X_\ell(T)) - g(X_{\ell-1}(T))] \searrow \text{ as } \ell \nearrow, \qquad M_0 \gg M_\ell \searrow \text{ as } \ell \nearrow$$
MLMC estimator:
$$\hat{Q} := \sum_{\ell=0}^{L} \hat{Q}_\ell, \quad \hat{Q}_0 := \frac{1}{M_0}\sum_{m_0=1}^{M_0} g\big(X_{0,[m_0]}(T)\big); \quad \hat{Q}_\ell := \frac{1}{M_\ell}\sum_{m_\ell=1}^{M_\ell} \Big(g\big(X_{\ell,[m_\ell]}(T)\big) - g\big(X_{\ell-1,[m_\ell]}(T)\big)\Big),\ 1 \le \ell \le L$$
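The estimator above can be sketched for a scalar geometric Brownian motion with Euler–Maruyama, where the coarse and fine paths on each level share the same Brownian increments (this coupling is what makes the level variances decay); the model and parameters are illustrative assumptions, not from the talk:

```python
import numpy as np

def mlmc_gbm(rng, g, L, M, x0=1.0, mu=0.05, sigma=0.2, T=1.0):
    """Minimal MLMC sketch for E[g(X_T)] under GBM, Euler-Maruyama,
    refinement factor K = 2, with N_l = 2 * 2**l time steps on level l.
    M is the list of samples per level, M[0] >> M[L]."""
    K = 2
    est = 0.0
    for l in range(L + 1):
        Ml = M[l]
        Nf = 2 * K**l
        dtf = T / Nf
        dWf = rng.normal(0.0, np.sqrt(dtf), size=(Ml, Nf))
        Xf = np.full(Ml, x0)
        for n in range(Nf):                      # fine Euler path
            Xf = Xf + mu * Xf * dtf + sigma * Xf * dWf[:, n]
        if l == 0:
            est += g(Xf).mean()                  # E[g(X_0(T))]
        else:
            Nc = Nf // K
            dtc = T / Nc
            dWc = dWf.reshape(Ml, Nc, K).sum(axis=2)  # coupled coarse increments
            Xc = np.full(Ml, x0)
            for n in range(Nc):                  # coarse Euler path
                Xc = Xc + mu * Xc * dtc + sigma * Xc * dWc[:, n]
            est += (g(Xf) - g(Xc)).mean()        # E[g(X_l) - g(X_{l-1})]
    return est
```

For a fixed-level sketch like this, decreasing `M[l]` with the level mirrors the $M_0 \gg M_\ell$ allocation on the slide; a full implementation would choose $M_\ell$ from estimated $V_\ell$ and $W_\ell$.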
13. How Does Regularity Affect MLMC Complexity?
MLMC complexity (Cliffe, Giles, Scheichl, and Teckentrup 2011):
$$O\!\left(\mathrm{TOL}^{-2-\max\left(0,\frac{\gamma-\beta}{\alpha}\right)} \log(\mathrm{TOL})^{2 \times 1_{\{\beta=\gamma\}}}\right) \tag{2}$$
i) Weak rate: $|\mathrm{E}[g(X_\ell(T)) - g(X(T))]| \le c_1 2^{-\alpha\ell}$;
ii) Variance decay rate: $\mathrm{Var}[g(X_\ell(T)) - g(X_{\ell-1}(T))] =: V_\ell \le c_2 2^{-\beta\ell}$;
iii) Work growth rate: $W_\ell \le c_3 2^{\gamma\ell}$ ($W_\ell$: expected cost).
For Euler–Maruyama ($\gamma = 1$):
▸ If $g$ is Lipschitz $\Rightarrow V_\ell \simeq \Delta t_\ell$ due to the strong rate $1/2$, that is $\beta = \gamma$, and the MLMC complexity is $O\!\left(\mathrm{TOL}^{-2}\log(\mathrm{TOL})^2\right)$;
▸ Otherwise (without any smoothing or adaptivity techniques): $\beta < \gamma \Rightarrow$ worst-case complexity $O\!\left(\mathrm{TOL}^{-\frac{5}{2}}\right)$.
14. How Does Regularity Affect MLMC Robustness?
For non-Lipschitz payoffs (without any smoothing or adaptivity techniques): the kurtosis
$$\kappa_\ell := \frac{\mathrm{E}\big[(Y_\ell - \mathrm{E}[Y_\ell])^4\big]}{(\mathrm{Var}[Y_\ell])^2}$$
is of $O\big(\Delta t_\ell^{-1/2}\big)$ for Euler–Maruyama.
Large kurtosis problem: discussed previously in (Ben Hammouda, Moraes, and Tempone 2017; Ben Hammouda, Ben Rached, and Tempone 2020) ⇒ expensive cost for reliable/robust estimates of sample statistics.
Why is large kurtosis bad?
$$\sigma_{S^2(Y_\ell)} = \frac{\mathrm{Var}[Y_\ell]}{\sqrt{M_\ell}}\sqrt{(\kappa_\ell - 1) + \frac{2}{M_\ell - 1}}; \quad \text{this requires } M_\ell \gg \kappa_\ell.$$
Why are accurate variance estimates $V_\ell = \mathrm{Var}[Y_\ell]$ important?
$$M^*_\ell \propto \sqrt{V_\ell W_\ell^{-1}} \sum_{\ell'=0}^{L} \sqrt{V_{\ell'} W_{\ell'}}.$$
Notation
$Y_\ell := g(X_\ell(T)) - g(X_{\ell-1}(T))$;
$\sigma_{S^2(Y_\ell)}$: standard deviation of the sample variance of $Y_\ell$;
$M^*_\ell$: optimal number of samples per level; $W_\ell$: cost per sample path.
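The kurtosis blow-up for non-Lipschitz payoffs can be observed empirically: for a digital payoff, the coupled level difference $Y_\ell$ is zero except on the rare event that fine and coarse paths disagree about the strike, so $\kappa_\ell \approx 1/\mathrm{P}(Y_\ell \neq 0)$ grows as $\Delta t_\ell$ shrinks. A minimal sketch under driftless GBM (all parameters are illustrative assumptions):

```python
import numpy as np

def level_diff_digital(rng, M, N, K=1.0, x0=1.0, sigma=0.5, T=1.0):
    """Coupled MLMC level difference Y_l = 1_{Xf(T)>K} - 1_{Xc(T)>K} for a
    digital payoff under driftless GBM, Euler with N fine and N/2 coarse
    steps sharing the same Brownian increments."""
    dt = T / N
    dW = rng.normal(0.0, np.sqrt(dt), size=(M, N))
    Xf = np.full(M, x0)
    for n in range(N):
        Xf = Xf + sigma * Xf * dW[:, n]
    Nc = N // 2
    dWc = dW.reshape(M, Nc, 2).sum(axis=2)   # coarse increments
    Xc = np.full(M, x0)
    for n in range(Nc):
        Xc = Xc + sigma * Xc * dWc[:, n]
    return (Xf > K).astype(float) - (Xc > K).astype(float)

def kurtosis(y):
    c = y - y.mean()
    return (c**4).mean() / (c**2).mean() ** 2
```

Running this for increasing `N` shows the sample kurtosis of $Y_\ell$ growing on finer levels, which is exactly why robust variance estimates (and hence the slide's sample-size requirement $M_\ell \gg \kappa_\ell$) become expensive without smoothing.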
16. Framework
Approximate efficiently $\mathrm{E}[g(X(T))]$.
Given a (smooth) $\varphi : \mathbb{R}^d \to \mathbb{R}$, the payoff $g : \mathbb{R}^d \to \mathbb{R}$ has either jumps or kinks:
▸ Hockey-stick functions: $g(x) = \max(\varphi(x), 0)$ (put or call payoffs, . . . ).
▸ Indicator functions: $g(x) = 1_{(\varphi(x) \ge 0)}$ (digital options, financial Greeks, probabilities, . . . ).
▸ Dirac delta functions: $g(x) = \delta_{(\varphi(x) = 0)}$ (density estimation, financial Greeks, . . . ).
The process $X$ is approximated (via a discretization scheme) by $\bar{X}$, e.g. (for illustration):
▸ Stochastic volatility model, e.g., the Heston model:
$$dX_t = \mu X_t\, dt + \sqrt{v_t}\, X_t\, dW^X_t$$
$$dv_t = \kappa(\theta - v_t)\, dt + \xi\sqrt{v_t}\, dW^v_t,$$
$(W^X_t, W^v_t)$: correlated Wiener processes with correlation $\rho$.
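A discretization $\bar{X}$ of the Heston dynamics above can be sketched as follows; the "full truncation" variant of Euler–Maruyama (flooring $v$ at zero inside drift and diffusion so that $\sqrt{v}$ stays real) is an assumed scheme choice for illustration, not the one fixed by the talk:

```python
import numpy as np

def heston_euler(rng, M, N, T, x0, v0, mu, kappa, theta, xi, rho):
    """Full-truncation Euler-Maruyama for the Heston model:
    M paths, N time steps; returns terminal (X_T, v_T).
    Correlated drivers are built from two independent normals."""
    dt = T / N
    X = np.full(M, x0)
    v = np.full(M, v0)
    for _ in range(N):
        Z1 = rng.standard_normal(M)
        Z2 = rho * Z1 + np.sqrt(1.0 - rho**2) * rng.standard_normal(M)  # corr(W^X, W^v) = rho
        vp = np.maximum(v, 0.0)                    # full truncation: v floored at 0
        X = X + mu * X * dt + np.sqrt(vp * dt) * X * Z1
        v = v + kappa * (theta - vp) * dt + xi * np.sqrt(vp * dt) * Z2
    return X, v
```

Plugging a non-smooth payoff such as `(X > K).astype(float)` into paths generated this way produces exactly the non-smooth integration problem that the numerical smoothing steps on the following slides address.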
18. Numerical Smoothing Steps
Guiding example: $g : \mathbb{R}^d \to \mathbb{R}$ payoff function; $\bar{X}^{\Delta t}_T$ ($\Delta t = \frac{T}{N}$) Euler discretization of a $d$-dimensional SDE, e.g., $dX^{(i)}_t = a_i(X_t)\, dt + \sum_{j=1}^{d} b_{ij}(X_t)\, dW^{(j)}_t$, where $\{W^{(j)}\}_{j=1}^{d}$ are standard Brownian motions.
$$\bar{X}^{\Delta t}_T = \bar{X}^{\Delta t}_T\big(\underbrace{\Delta W^{(1)}_1,\dots,\Delta W^{(1)}_N,\dots,\Delta W^{(d)}_1,\dots,\Delta W^{(d)}_N}_{:=\Delta W}\big). \qquad \mathrm{E}\big[g\big(\bar{X}^{\Delta t}_T\big)\big] = ?$$
1 Identify a hierarchical representation of the integration variables ⇒ locate the discontinuity in a smaller-dimensional space and create more anisotropy:
(a) $\bar{X}^{\Delta t}_T(\Delta W) \equiv \bar{X}^{\Delta t}_T(Z)$, $Z = (Z_i)_{i=1}^{dN} \sim \mathcal{N}(0, I_{dN})$, such that "$Z_1 := (Z^{(1)}_1,\dots,Z^{(d)}_1)$ (coarse random variables) substantially contribute even for $\Delta t \to 0$". E.g., Brownian bridges of $\Delta W$ / Haar wavelet construction of $W$.
(b) Design of a sub-optimal smoothing direction ($A$: a rotation matrix which is easy to construct and whose structure depends on the payoff $g$): $Y = A Z_1$. E.g., for an arithmetic basket option with payoff $\max\big(\sum_{i=1}^{d} c_i X_i(T) - K, 0\big)$, a suitable $A$ is a rotation matrix whose first row leads to $Y_1 = \sum_{i=1}^{d} Z^{(i)}_1$ up to rescaling, without any constraint on the remaining rows (using the Gram–Schmidt procedure).
20. Numerical Smoothing Steps
2 Pre-integrate over the smoothing direction:
$$\mathrm{E}[g(X(T))] \approx \mathrm{E}\big[g\big(\bar{X}^{\Delta t}(T)\big)\big] = \int_{\mathbb{R}^{d\times N}} G(z)\,\rho_{d\times N}(z)\, dz^{(1)}_1 \cdots dz^{(1)}_N \cdots dz^{(d)}_1 \cdots dz^{(d)}_N$$
$$= \int_{\mathbb{R}^{dN-1}} I\big(y_{-1}, z^{(1)}_{-1},\dots,z^{(d)}_{-1}\big)\,\rho_{d-1}(y_{-1})\, dy_{-1}\,\rho_{dN-d}\big(z^{(1)}_{-1},\dots,z^{(d)}_{-1}\big)\, dz^{(1)}_{-1} \cdots dz^{(d)}_{-1}$$
$$= \mathrm{E}\big[I\big(Y_{-1}, Z^{(1)}_{-1},\dots,Z^{(d)}_{-1}\big)\big] \approx \mathrm{E}\big[\bar{I}\big(Y_{-1}, Z^{(1)}_{-1},\dots,Z^{(d)}_{-1}\big)\big]. \tag{3}$$
How to obtain $I(\cdot)$ and $\bar{I}(\cdot)$? ⇒ See next slide.
Notation
$G := g \circ \Phi \circ (\psi^{(1)},\dots,\psi^{(d)})$ maps the $N \times d$ Gaussian random inputs to $g\big(\bar{X}^{\Delta t}(T)\big)$, where
▸ $\psi^{(j)} : (Z^{(j)}_1,\dots,Z^{(j)}_N) \mapsto (B^{(j)}_1,\dots,B^{(j)}_N)$ denotes the mapping of the Brownian bridge/Haar wavelet construction;
▸ $\Phi : (\Delta t, B) \mapsto \bar{X}^{\Delta t}(T)$ denotes the mapping of the time-stepping scheme.
$x_{-j}$ denotes the vector of length $d-1$ of all the variables other than $x_j$ in $x \in \mathbb{R}^d$.
$\rho_{d\times N}(z) = \frac{1}{(2\pi)^{d\times N/2}} e^{-\frac{1}{2} z^T z}$.
21. Numerical Smoothing Steps
3. Split the inner one-dimensional integral at the discontinuity and approximate it by Laguerre quadrature:
$$I\big(y_{-1}, z_{-1}^{(1)},\dots,z_{-1}^{(d)}\big) = \int_{\mathbb{R}} G\big(y_1, y_{-1}, z_{-1}^{(1)},\dots,z_{-1}^{(d)}\big)\,\rho_1(y_1)\,dy_1$$
$$= \int_{-\infty}^{y_1^*} G\big(y_1, y_{-1}, z_{-1}^{(1)},\dots,z_{-1}^{(d)}\big)\,\rho_1(y_1)\,dy_1 + \int_{y_1^*}^{+\infty} G\big(y_1, y_{-1}, z_{-1}^{(1)},\dots,z_{-1}^{(d)}\big)\,\rho_1(y_1)\,dy_1$$
$$\approx \bar I\big(y_{-1}, z_{-1}^{(1)},\dots,z_{-1}^{(d)}\big) := \sum_{k=0}^{M_{\mathrm{Lag}}} \eta_k\, G\big(\zeta_k(\bar y_1^*), y_{-1}, z_{-1}^{(1)},\dots,z_{-1}^{(d)}\big), \qquad (4)$$
where
▸ $y_1^*\big(y_{-1}, z_{-1}^{(1)},\dots,z_{-1}^{(d)}\big)$: the exact discontinuity location, s.t.
$$\phi\big(X^{\Delta t}(T)\big) = P\big(y_1^*; y_{-1}, z_{-1}^{(1)},\dots,z_{-1}^{(d)}\big) = 0; \qquad (5)$$
▸ $\bar y_1^*\big(y_{-1}, z_{-1}^{(1)},\dots,z_{-1}^{(d)}\big)$: the approximated discontinuity location, obtained via root finding applied to (5);
▸ $M_{\mathrm{Lag}}$: the number of Laguerre quadrature points $\zeta_k \in \mathbb{R}$, with corresponding weights $\eta_k$.
4. Compute the remaining $(dN-1)$-dimensional integral in (3) by ASGQ, QMC, or MLMC.
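Steps 2 and 3 above can be sketched on a toy problem. The following is our own minimal illustration (all parameters, the digital payoff $\mathbf{1}\{S_T > K\}$, and the driftless GBM Euler scheme are assumptions, not taken from the papers): root-find the discontinuity $y_1^*$ with the fine bridge variables frozen, then pre-integrate with Gauss-Laguerre. For a digital payoff the exact conditional value is $1 - \Phi(y_1^*)$, which the quadrature should reproduce:

```python
import numpy as np
from scipy.optimize import brentq
from scipy.stats import norm

# Assumed toy setup: digital payoff 1{S_T > K}, dS = sigma * S dW (Euler),
# Brownian path built from a bridge so that y1 (rescaled terminal value)
# is the single coarse "smoothing" variable, z_rest the fine variables.
S0, K, sigma, T, N = 100.0, 100.0, 0.2, 1.0, 8
t = np.linspace(0.0, T, N + 1)
rng = np.random.default_rng(0)
z_rest = rng.standard_normal(N - 1)        # fine bridge variables, held fixed

def terminal(y1):
    """Euler approximation of S_T as a function of y1 only."""
    W = np.empty(N + 1)
    W[0], W[N] = 0.0, np.sqrt(T) * y1      # W(T) = sqrt(T) * y1
    for i in range(1, N):                  # sequential Brownian-bridge fill-in
        frac = (T - t[i]) / (T - t[i - 1])
        mean = frac * W[i - 1] + (1.0 - frac) * W[N]
        std = np.sqrt((t[i] - t[i - 1]) * frac)
        W[i] = mean + std * z_rest[i - 1]
    S = S0
    for i in range(N):
        S += sigma * S * (W[i + 1] - W[i])
    return S

# Step 2: locate the discontinuity y1* by root finding (S_T is monotone
# increasing in y1 here, so a bracketing solver suffices).
y1_star = brentq(lambda y: terminal(y) - K, -10.0, 10.0)

# Step 3: pre-integrate over y1 with Gauss-Laguerre on [y1*, inf); for the
# digital payoff G = 1 there, so only the Gaussian density remains.
nodes, weights = np.polynomial.laguerre.laggauss(32)
I_bar = np.sum(weights * np.exp(nodes) * norm.pdf(y1_star + nodes))
print(abs(I_bar - (1.0 - norm.cdf(y1_star))))   # small quadrature error
```

In the actual method the payoff is not piecewise constant, so $G$ is re-evaluated at the shifted Laguerre nodes on each side of $\bar y_1^*$; the digital case is chosen here only because it makes the pre-integral checkable in closed form.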
22. Extending Numerical Smoothing for Multiple Discontinuities
Multiple discontinuities arise from the payoff structure or from the use of Richardson extrapolation.
Given $R$ ordered roots $\{y_i^*\}_{i=1}^R$, the smoothed integrand is
$$I\big(y_{-1}, z_{-1}^{(1)},\dots,z_{-1}^{(d)}\big) = \int_{-\infty}^{y_1^*} G\big(y_1, y_{-1}, \dots\big)\rho_1(y_1)\,dy_1 + \int_{y_R^*}^{+\infty} G\big(y_1, y_{-1}, \dots\big)\rho_1(y_1)\,dy_1 + \sum_{i=1}^{R-1}\int_{y_i^*}^{y_{i+1}^*} G\big(y_1, y_{-1}, \dots\big)\rho_1(y_1)\,dy_1,$$
and its approximation $\bar I$ is given by
$$\bar I\big(y_{-1}, z_{-1}^{(1)},\dots,z_{-1}^{(d)}\big) := \sum_{k=0}^{M_{\mathrm{Lag},1}} \eta_k^{\mathrm{Lag}}\, G\big(\zeta_{k,1}^{\mathrm{Lag}}(\bar y_1^*), y_{-1}, \dots\big) + \sum_{k=0}^{M_{\mathrm{Lag},R}} \eta_k^{\mathrm{Lag}}\, G\big(\zeta_{k,R}^{\mathrm{Lag}}(\bar y_R^*), y_{-1}, \dots\big) + \sum_{i=1}^{R-1}\Big(\sum_{k=0}^{M_{\mathrm{Leg},i}} \eta_k^{\mathrm{Leg}}\, G\big(\zeta_{k,i}^{\mathrm{Leg}}(\bar y_i^*, \bar y_{i+1}^*), y_{-1}, \dots\big)\Big),$$
where $\{\bar y_i^*\}_{i=1}^R$ are the approximated discontinuity locations; $M_{\mathrm{Lag},1}$ and $M_{\mathrm{Lag},R}$ are the numbers of Laguerre quadrature points $\zeta^{\mathrm{Lag}}_{\cdot,\cdot} \in \mathbb{R}$ with corresponding weights $\eta^{\mathrm{Lag}}_\cdot$; $\{M_{\mathrm{Leg},i}\}_{i=1}^{R-1}$ are the numbers of Legendre quadrature points $\zeta^{\mathrm{Leg}}_{\cdot,\cdot}$ with corresponding weights $\eta^{\mathrm{Leg}}_\cdot$.
$\bar I$ can be approximated further, depending on (i) the decay of $G \times \rho_1$ in the semi-infinite domains and (ii) how close the roots are to each other.
23. Extending Numerical Smoothing for Density Estimation
Goal: Approximate the density $\rho_X$ at $u$, for a stochastic process $X$:
$$\rho_X(u) = E[\delta(X - u)], \quad \delta \text{ the Dirac delta function.}$$
Without any smoothing technique (regularization, kernel density, ...), MC and MLMC fail due to the infinite variance caused by the singularity of $\delta$.
Strategy in (Bayer, Ben Hammouda, and Tempone 2022): conditioning with respect to the Brownian bridge:
$$\rho_X(u) = \frac{1}{\sqrt{2\pi}}\, E\Big[\exp\big(-(Y_1^*(u))^2/2\big)\,\Big|\frac{dY_1^*}{du}(u)\Big|\Big] \approx \frac{1}{\sqrt{2\pi}}\, E\Big[\exp\big(-(\bar Y_1^*(u))^2/2\big)\,\Big|\frac{d\bar Y_1^*}{du}(u)\Big|\Big], \qquad (6)$$
where $Y_1^*(u; Z_{-1})$ is the exact discontinuity and $\bar Y_1^*(u; Z_{-1})$ is the approximated discontinuity.
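A minimal sketch of (6), under assumptions of our own: a model simple enough (one exact simulation step of GBM, so $X = S_T$ depends on a single Gaussian $y_1$) that the outer conditional expectation is trivial, with the discontinuity located by root finding and its $u$-derivative taken numerically. The result can be checked against the known lognormal density:

```python
import numpy as np
from scipy.optimize import brentq
from scipy.stats import norm, lognorm

# Assumed toy parameters: S_T = S0 * exp(-sigma^2 T/2 + sigma sqrt(T) y1)
S0, sigma, T = 100.0, 0.2, 1.0

def terminal(y1):
    return S0 * np.exp(-0.5 * sigma**2 * T + sigma * np.sqrt(T) * y1)

def y1_star(u):
    """Discontinuity location Y1*(u): the y1 with S_T(y1) = u."""
    return brentq(lambda y: terminal(y) - u, -12.0, 12.0)

def density(u, h=1e-5):
    dy_du = (y1_star(u + h) - y1_star(u - h)) / (2.0 * h)  # |dY1*/du|
    return norm.pdf(y1_star(u)) * abs(dy_du)               # formula (6)

u = 95.0
exact = lognorm.pdf(u, s=sigma * np.sqrt(T),
                    scale=S0 * np.exp(-0.5 * sigma**2 * T))
print(density(u), exact)   # the two values agree
```

With a genuine time-stepping scheme, `terminal` would also take the frozen bridge variables, and (6) would average the expression in `density` over samples of those variables.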
24. Why not Kernel Density Techniques in Multiple Dimensions?
Similar to approaches based on parametric regularization, as in (Giles, Nagapetyan, and Ritter 2015).
This class of approaches has a pointwise error that increases exponentially with respect to the dimension of the state vector $X$.
For a $d$-dimensional problem, a kernel density estimator with bandwidth matrix $H = \mathrm{diag}(h,\dots,h)$ has
$$\mathrm{MSE} \approx c_1 M^{-1} h^{-d} + c_2 h^4, \qquad (7)$$
where $M$ is the number of samples, and $c_1$ and $c_2$ are constants.
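A one-line optimization of the MSE model (7) — an added step of ours, valid under the stated model — makes the dimensional deterioration explicit:

```latex
% Minimize MSE(h) = c_1 M^{-1} h^{-d} + c_2 h^4 over h:
%   d/dh: -d c_1 M^{-1} h^{-d-1} + 4 c_2 h^3 = 0
\[
  h^* = \Big(\frac{d\, c_1}{4\, c_2}\Big)^{\frac{1}{d+4}} M^{-\frac{1}{d+4}}
  \quad\Longrightarrow\quad
  \mathrm{MSE}(h^*) = O\big(M^{-\frac{4}{d+4}}\big),
\]
% so the rate in M degrades from M^{-4/5} at d = 1 toward M^0 as d grows.
```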
Our approach in high dimensions: for $u \in \mathbb{R}^d$,
$$\rho_X(u) = E[\delta(X - u)] = E\big[\rho_d\big(Y^*(u)\big)\,\big|\det(J(u))\big|\big] \approx E\big[\rho_d\big(\bar Y^*(u)\big)\,\big|\det(J(u))\big|\big], \qquad (8)$$
where
▸ $Y^*(u;\cdot)$: the exact discontinuity; $\bar Y^*(u;\cdot)$: the approximated discontinuity;
▸ $J$ is the Jacobian matrix, with $J_{ij} = \frac{\partial y_i^*}{\partial u_j}$; $\rho_d(\cdot)$ is the multivariate Gaussian density.
Thanks to the exact conditional expectation with respect to the Brownian bridge, the smoothing error in our approach is insensitive to the dimension of the problem.
25. 1 Motivation and Framework
2 Numerical Smoothing for Efficient Option Pricing
The Numerical Smoothing Idea
The Smoothness Theorem
Error Discussion
Numerical Experiments and Results
Conclusions
3 Optimal Damping with Hierarchical Adaptive Quadrature for Efficient Fourier Pricing of Multi-Asset Options in Lévy Models
Problem Setting and Motivation
Optimal Damping Rule
Numerical Experiments and Results
Conclusions
26. Notations
The Haar basis functions $\psi_{n,k}$ of $L^2([0,1])$, with support $[2^{-n}k,\, 2^{-n}(k+1)]$:
$$\psi_{-1}(t) := 1_{[0,1]}(t); \qquad \psi_{n,k}(t) := 2^{n/2}\,\psi(2^n t - k), \quad n \in \mathbb{N}_0,\ k = 0,\dots,2^n - 1,$$
where $\psi(\cdot)$ is the Haar mother wavelet:
$$\psi(t) := \begin{cases} 1, & 0 \le t < \tfrac12, \\ -1, & \tfrac12 \le t < 1, \\ 0, & \text{else.} \end{cases}$$
Grid $\mathcal{D}_n := \{t_\ell^n \mid \ell = 0,\dots,2^{n+1}\}$ with $t_\ell^n := \frac{\ell}{2^{n+1}}\, T$.
For i.i.d. standard normal rdvs $Z_{-1}$, $Z_{n,k}$, $n \in \mathbb{N}_0$, $k = 0,\dots,2^n - 1$, we define the (truncated) standard Brownian motion
$$W_t^N := Z_{-1}\Psi_{-1}(t) + \sum_{n=0}^{N}\sum_{k=0}^{2^n - 1} Z_{n,k}\Psi_{n,k}(t),$$
where $\Psi_{-1}(\cdot)$ and $\Psi_{n,k}(\cdot)$ are the antiderivatives of the Haar basis functions.
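The truncated construction above can be sketched in a few lines (a hedged illustration assuming $T = 1$; the function names are ours). The antiderivative $\Psi_{n,k}$ is a triangle of height $2^{-n/2-1}$ peaking at $2^{-n}(k + \tfrac12)$, and $\Psi_{-1}(t) = t$; a deterministic sanity check is that $\mathrm{Var}(W_t^N) = t^2 + \sum_{n \le N,\,k}\Psi_{n,k}(t)^2 \to t$:

```python
import numpy as np

def schauder(n, k, t):
    """Antiderivative of the Haar function psi_{n,k} on [0, 1]."""
    mid = 2.0**-n * (k + 0.5)
    return 2.0**(n / 2.0) * np.maximum(0.0, 2.0**(-n - 1) - np.abs(t - mid))

def haar_brownian(N, t, rng):
    """One sample of the truncated Brownian motion W_t^N at times t."""
    t = np.asarray(t, dtype=float)
    W = rng.standard_normal() * t                 # Z_{-1} * Psi_{-1}(t)
    for n in range(N + 1):
        for k in range(2**n):
            W = W + rng.standard_normal() * schauder(n, k, t)
    return W

# Variance sanity check at the dyadic point t = 1/2, where the sum is
# already exact at level n = 0 (all finer Psi_{n,k} vanish there).
t0 = 0.5
var = t0**2 + sum(schauder(n, k, t0)**2
                  for n in range(13) for k in range(2**n))
print(var)  # 0.5
```

The hierarchy is what step 1(a) of the numerical smoothing exploits: $Z_{-1}$ (and the coarsest $Z_{n,k}$) carry $O(1)$ contributions, while level-$n$ coefficients contribute only $O(2^{-n/2})$.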
27. Notations
We define the corresponding increments, for any function or process $F$, as
$$\Delta_\ell^N F := F(t_{\ell+1}^N) - F(t_\ell^N).$$
The Euler scheme for $dX_t = b(X_t)\,dW_t$, $X_0 = x \in \mathbb{R}$, along $\mathcal{D}_N$ reads
$$X_{\ell+1}^N := X_\ell^N + b(X_\ell^N)\,\Delta_\ell^N W, \quad \ell = 0,\dots,2^N - 1, \qquad (9)$$
with $X_0^N := X_0 = x$; for convenience, we also define $X_T^N := X_{2^N}^N$.
We define the deterministic function $H^N: \mathbb{R}^{2^{N+1}-1} \to \mathbb{R}$ as
$$H^N(z^N) := E_{Z_{-1}}\big[g\big(X_T^N(Z_{-1}, z^N)\big)\big], \qquad (10)$$
where $Z^N := (Z_{n,k})_{n=0,\dots,N,\ k=0,\dots,2^n-1}$.
28. The Smoothness Theorem
Theorem 2.1 ((Bayer, Ben Hammouda, and Tempone 2022))
Assume that $X_T^N$, defined by (9), satisfies Assumptions 3.1 and 3.2.ᵃ Then, for any $p \in \mathbb{N}$ and indices $n_1,\dots,n_p$ and $k_1,\dots,k_p$ (satisfying $0 \le k_j < 2^{n_j}$), the function $H^N$ defined in (10) satisfies the following (with constants independent of $n_j, k_j$)ᵇ:
$$\frac{\partial^p H^N}{\partial z_{n_1,k_1}\cdots\partial z_{n_p,k_p}}(z^N) = \hat{O}_P\Big(2^{-\sum_{j=1}^p n_j/2}\Big).^{\mathrm{c}}$$
In particular, $H^N$ is of class $C^\infty$.
ᵃ We showed in (Bayer, Ben Hammouda, and Tempone 2022) that Assumptions 3.1 and 3.2 are valid for many pricing models.
ᵇ The constants increase in $p$ and $N$.
ᶜ For sequences of rdvs $F_N$, we write $F_N = \hat{O}_P(1)$ if there is a rdv $C$ with finite moments of all orders such that $|F_N| \le C$ a.s. for all $N$.
29. Sketch of the Proof
1. We consider a mollified version $g_\delta$ of $g$ and the corresponding function $H_\delta^N$ (defined by replacing $g$ with $g_\delta$ in (10)).
2. We prove that we can interchange integration and differentiation, which implies
$$\frac{\partial H_\delta^N(z^N)}{\partial z_{n,k}} = E\Big[g_\delta'\big(X_T^N(Z_{-1}, z^N)\big)\,\frac{\partial X_T^N(Z_{-1}, z^N)}{\partial z_{n,k}}\Big].$$
3. Multiplying and dividing by $\frac{\partial X_T^N(Z_{-1}, z^N)}{\partial y}$ (where $y := z_{-1}$) and replacing the expectation by an integral w.r.t. the standard normal density, we obtain
$$\frac{\partial H_\delta^N(z^N)}{\partial z_{n,k}} = \int_{\mathbb{R}} \frac{\partial g_\delta\big(X_T^N(y, z^N)\big)}{\partial y}\,\Big(\frac{\partial X_T^N}{\partial y}(y, z^N)\Big)^{-1}\frac{\partial X_T^N}{\partial z_{n,k}}(y, z^N)\,\frac{1}{\sqrt{2\pi}}\,e^{-\frac{y^2}{2}}\,dy. \qquad (11)$$
4. We show that integration by parts is possible, and then we can discard the mollification to obtain the smoothness of $H^N$, because
$$\frac{\partial H^N(z^N)}{\partial z_{n,k}} = -\int_{\mathbb{R}} g\big(X_T^N(y, z^N)\big)\,\frac{\partial}{\partial y}\Big[\Big(\frac{\partial X_T^N}{\partial y}(y, z^N)\Big)^{-1}\frac{\partial X_T^N}{\partial z_{n,k}}(y, z^N)\,\frac{1}{\sqrt{2\pi}}\,e^{-\frac{y^2}{2}}\Big]\,dy.$$
5. The proof relies on successively applying this technique of dividing by $\frac{\partial X_T^N}{\partial y}$ and then integrating by parts.
31. Error Discussion for ASGQ
$Q_N^{\mathrm{ASGQ}}$: the ASGQ estimator.
$$E[g(X(T))] - Q_N^{\mathrm{ASGQ}} = \underbrace{E[g(X(T))] - E\big[g(X^{\Delta t}(T))\big]}_{\text{Error I: bias (weak error), } O(\Delta t)} + \underbrace{E\big[I\big(Y_{-1}, Z_{-1}^{(1)},\dots,Z_{-1}^{(d)}\big)\big] - E\big[\bar I\big(Y_{-1}, Z_{-1}^{(1)},\dots,Z_{-1}^{(d)}\big)\big]}_{\text{Error II: numerical smoothing error, } O(M_{\mathrm{Lag}}^{-s/2}) + O(\mathrm{TOL}_{\mathrm{Newton}})} + \underbrace{E\big[\bar I\big(Y_{-1}, Z_{-1}^{(1)},\dots,Z_{-1}^{(d)}\big)\big] - Q_N^{\mathrm{ASGQ}}}_{\text{Error III: ASGQ error, } O(M_{\mathrm{ASGQ}}^{-p/2})}, \qquad (12)$$
Notations
$M_{\mathrm{ASGQ}}$: the number of quadrature points used by the ASGQ estimator, with $p > 0$.
$\bar y_1^*$: the approximated location of the non-smoothness, obtained by Newton iteration, so that $|y_1^* - \bar y_1^*| = \mathrm{TOL}_{\mathrm{Newton}}$.
$M_{\mathrm{Lag}}$: the number of points used by the Laguerre quadrature in the one-dimensional pre-integration step.
$s > 0$: on the parts of the domain separated by the discontinuity location, the derivatives of $G$ with respect to $y_1$ are bounded up to order $s$.
32. Work and Complexity Discussion for ASGQ
The optimal performance of ASGQ is given by
$$\min_{(M_{\mathrm{ASGQ}},\, M_{\mathrm{Lag}},\, \mathrm{TOL}_{\mathrm{Newton}})} \mathrm{Work}_{\mathrm{ASGQ}} \propto M_{\mathrm{ASGQ}} \times M_{\mathrm{Lag}} \times \Delta t^{-1} \quad \text{s.t.} \quad E_{\mathrm{total,ASGQ}} = \mathrm{TOL}, \qquad (13)$$
$$E_{\mathrm{total,ASGQ}} := E[g(X(T))] - Q_N^{\mathrm{ASGQ}} = O(\Delta t) + O\big(M_{\mathrm{ASGQ}}^{-p/2}\big) + O\big(M_{\mathrm{Lag}}^{-s/2}\big) + O(\mathrm{TOL}_{\mathrm{Newton}}).$$
We show in (Bayer, Ben Hammouda, and Tempone 2022) that, under certain conditions on the regularity parameters $s$ and $p$ ($p, s \gg 1$):
▸ $\mathrm{Work}_{\mathrm{ASGQ}} = O\big(\mathrm{TOL}^{-1}\big)$ (best case), compared with $\mathrm{Work}_{\mathrm{MC}} = O\big(\mathrm{TOL}^{-3}\big)$ (best case for MC).
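The best-case rate can be read off (13) directly (an added back-of-the-envelope step of ours, allocating a tolerance of order TOL to each error term):

```latex
% Equating each term of E_{total,ASGQ} to O(TOL):
%   Δt ∝ TOL,   M_ASGQ ∝ TOL^{-2/p},   M_Lag ∝ TOL^{-2/s},
% so that
\[
  \mathrm{Work}_{\mathrm{ASGQ}} \;\propto\; M_{\mathrm{ASGQ}}\, M_{\mathrm{Lag}}\, \Delta t^{-1}
  \;=\; O\big(\mathrm{TOL}^{-1 - 2/p - 2/s}\big)
  \;\longrightarrow\; O\big(\mathrm{TOL}^{-1}\big) \quad \text{as } p, s \gg 1.
\]
```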
33. Error Discussion for rQMC
$\hat Q^{\mathrm{rQMC}}$: the rQMC estimator.
$$E[g(X(T))] - \hat Q^{\mathrm{rQMC}} = \underbrace{E[g(X(T))] - E\big[g(X^{\Delta t}(T))\big]}_{\text{Error I: bias (weak error), } O(\Delta t)} + \underbrace{E\big[I\big(Y_{-1}, Z_{-1}^{(1)},\dots,Z_{-1}^{(d)}\big)\big] - E\big[\bar I\big(Y_{-1}, Z_{-1}^{(1)},\dots,Z_{-1}^{(d)}\big)\big]}_{\text{Error II: numerical smoothing error, } O(M_{\mathrm{Lag}}^{-s/2}) + O(\mathrm{TOL}_{\mathrm{Newton}})} + \underbrace{E\big[\bar I\big(Y_{-1}, Z_{-1}^{(1)},\dots,Z_{-1}^{(d)}\big)\big] - \hat Q^{\mathrm{rQMC}}}_{\text{Error III: rQMC error, } O(M_{\mathrm{rQMC}}^{-1+\delta}),\ \delta > 0}, \qquad (14)$$
Notations
$M_{\mathrm{rQMC}}$: the number of rQMC points.
$\bar y_1^*$: the approximated location of the non-smoothness, obtained by Newton iteration, so that $|y_1^* - \bar y_1^*| = \mathrm{TOL}_{\mathrm{Newton}}$.
$M_{\mathrm{Lag}}$: the number of points used by the Laguerre quadrature in the one-dimensional pre-integration step.
$s > 0$: on the parts of the domain separated by the discontinuity location, the derivatives of $G$ with respect to $y_1$ are bounded up to order $s$.
34. Error Discussion for MLMC
$\hat Q^{\mathrm{MLMC}}$: the MLMC estimator.
$$E[g(X(T))] - \hat Q^{\mathrm{MLMC}} = \underbrace{E[g(X(T))] - E\big[g(X^{\Delta t_L}(T))\big]}_{\text{Error I: bias (weak error), } O(\Delta t_L)} + \underbrace{E\big[I_L\big(Y_{-1}, Z_{-1}^{(1)},\dots,Z_{-1}^{(d)}\big)\big] - E\big[\bar I_L\big(Y_{-1}, Z_{-1}^{(1)},\dots,Z_{-1}^{(d)}\big)\big]}_{\text{Error II: numerical smoothing error, } O(M_{\mathrm{Lag},L}^{-s/2}) + O(\mathrm{TOL}_{\mathrm{Newton},L})} + \underbrace{E\big[\bar I_L\big(Y_{-1}, Z_{-1}^{(1)},\dots,Z_{-1}^{(d)}\big)\big] - \hat Q^{\mathrm{MLMC}}}_{\text{Error III: MLMC statistical error, } O\big(\sqrt{\sum_{\ell=L_0}^{L}\big(\sqrt{M_{\mathrm{Lag},\ell}} + \log(\mathrm{TOL}_{\mathrm{Newton},\ell}^{-1})\big)}\big)}. \qquad (15)$$
The level-$\ell$ approximation of $I$ in $\hat Q^{\mathrm{MLMC}}$, $\bar I_\ell := \bar I_\ell\big(y_{-1}^\ell, z_{-1}^{(1),\ell},\dots,z_{-1}^{(d),\ell}\big)$, is computed with: step size $\Delta t_\ell$; $M_{\mathrm{Lag},\ell}$ Laguerre quadrature points; and $\mathrm{TOL}_{\mathrm{Newton},\ell}$ as the tolerance of the Newton method at level $\ell$.
35. Multilevel Monte Carlo with Numerical Smoothing: Variance Decay, Complexity, and Robustness
Notation: $\bar I_\ell := \bar I_\ell\big(y_{-1}^\ell, z_{-1}^{(1),\ell},\dots,z_{-1}^{(d),\ell}\big)$: the level-$\ell$ approximation of $I$, computed with step size $\Delta t_\ell$, $M_{\mathrm{Lag},\ell}$ Laguerre points, and $\mathrm{TOL}_{\mathrm{Newton},\ell}$ as the Newton tolerance at level $\ell$.
Theorem 2.2 ((Bayer, Ben Hammouda, and Tempone 2022))
Under Assumptions 3.1 and 3.2 and further regularity assumptions on the drift and diffusion, $V_\ell := \mathrm{Var}\big[\bar I_\ell - \bar I_{\ell-1}\big] = O\big(\Delta t_\ell^{1-\epsilon}\big)$ for any $\epsilon > 0$, compared with $O\big(\Delta t_\ell^{1/2}\big)$ for MLMC without smoothing.
Corollary 2.3 ((Bayer, Ben Hammouda, and Tempone 2022))
Under the same assumptions, the complexity of MLMC combined with numerical smoothing using the Euler scheme is $O\big(\mathrm{TOL}^{-2-\epsilon}\big)$ for any $\epsilon > 0$, compared with $O\big(\mathrm{TOL}^{-2.5}\big)$ for MLMC without smoothing.
Corollary 2.4 ((Bayer, Ben Hammouda, and Tempone 2022))
Let $\kappa_\ell$ be the kurtosis of the random variable $Y_\ell := \bar I_\ell - \bar I_{\ell-1}$; then, under the same assumptions, $\kappa_\ell = O(1)$, compared with $O\big(\Delta t_\ell^{-1/2}\big)$ for MLMC without smoothing.
37. ASGQ Quadrature Error Convergence
[Figure: two log-log plots of the relative quadrature error vs. $M_{\mathrm{ASGQ}}$, comparing ASGQ with and without numerical smoothing.]
Figure 2.1: Digital option under the Heston model: comparison of the relative quadrature error convergence for the ASGQ method with/without numerical smoothing. (a) Without Richardson extrapolation ($N = 8$), (b) with Richardson extrapolation ($N_{\text{fine level}} = 8$).
38. QMC Error Convergence
[Figure: two log-log plots of the 95% statistical error vs. the number of QMC samples; fitted slopes: (a) $-0.52$ without smoothing vs. $-0.85$ with numerical smoothing, (b) $-0.64$ vs. $-0.92$.]
Figure 2.2: Comparison of the 95% statistical error convergence for rQMC with/without numerical smoothing. (a) Digital option under Heston, (b) digital option under GBM.
39. Errors in the Numerical Smoothing
[Figure: (a) log-log plot of the relative numerical smoothing error vs. $M_{\mathrm{Lag}}$, with fitted slope $-4.02$; (b) relative numerical smoothing error (on a $10^{-4}$ scale) vs. $\mathrm{TOL}_{\mathrm{Newton}}$.]
Figure 2.3: The relative numerical smoothing error for a fixed number of ASGQ points $M_{\mathrm{ASGQ}} = 10^3$, plotted against (a) different values of $M_{\mathrm{Lag}}$ with a fixed Newton tolerance $\mathrm{TOL}_{\mathrm{Newton}} = 10^{-10}$, (b) different values of $\mathrm{TOL}_{\mathrm{Newton}}$ with a fixed number of Laguerre quadrature points $M_{\mathrm{Lag}} = 128$.
40. Comparing ASGQ with MC
In (Bayer, Ben Hammouda, and Tempone 2022):
We consider digital/call/basket options under the Heston or discretized GBM models.
ASGQ with numerical smoothing is 10-100× faster than MC in dimensions around 20.
[Figure: log-log plot of computational work vs. total relative error for MC (with/without Richardson extrapolation) and for ASGQ with smoothing (with/without Richardson extrapolation).]
Figure 2.4: Digital option under Heston: computational work comparison for ASGQ with numerical smoothing and MC, for the different configurations in terms of the level of Richardson extrapolation.
42. Conclusions
Contributions in the context of deterministic quadrature methods
1. A novel numerical smoothing technique, combined with ASGQ/QMC, the hierarchical Brownian bridge, and Richardson extrapolation.
2. Our approach substantially outperforms the MC method for moderate/high dimensions and for dynamics where discretization is needed.
3. We provide a regularity analysis of the smoothed integrand in the time-stepping setting, together with a related error discussion of our approach.
4. We illustrate that traditional schemes for the Heston dynamics deteriorate the regularity, and we use an alternative smooth scheme to simulate the volatility process based on a sum of Ornstein-Uhlenbeck (OU) processes.
5. More analysis and numerical examples can be found in:
Christian Bayer, Chiheb Ben Hammouda, and Raúl Tempone. "Numerical Smoothing with Hierarchical Adaptive Sparse Grids and Quasi-Monte Carlo Methods for Efficient Option Pricing". arXiv:2111.01874 (2021). Accepted in Quantitative Finance (2022).
Contributions in the context of MLMC methods
1. A numerical smoothing approach that can be combined with the MLMC estimator for efficient option pricing and density estimation.
2. Compared to the case without smoothing:
☀ We significantly reduce the kurtosis at the deep levels of MLMC, which improves the robustness of the estimator.
☀ We improve the strong convergence rate ⇒ improvement of the MLMC complexity from $O\big(\mathrm{TOL}^{-2.5}\big)$ to $O\big(\mathrm{TOL}^{-2}\big)$.
3. More analysis and numerical examples can be found in:
Christian Bayer, Chiheb Ben Hammouda, and Raúl Tempone. "Multilevel Monte Carlo combined with numerical smoothing for robust and efficient option pricing and density estimation". arXiv:2003.05708 (2022).
44. 1 Motivation and Framework
2 Numerical Smoothing for Efficient Option Pricing
The Numerical Smoothing Idea
The Smoothness Theorem
Error Discussion
Numerical Experiments and Results
Conclusions
3 Optimal Damping with Hierarchical Adaptive Quadrature for Efficient Fourier Pricing of Multi-Asset Options in Lévy Models
Problem Setting and Motivation
Optimal Damping Rule
Numerical Experiments and Results
Conclusions
45. Problem Setting
Using the inverse generalized Fourier transform theorem, the value of an option on $d$ stocks can be expressed as
$$V(\Theta_m, \Theta_p) = e^{-rT}\, E[P(\mathbf{X}_T)] = e^{-rT}\, E\Big[(2\pi)^{-d}\,\Re\Big(\int_{\mathbb{R}^d + iR} e^{i\langle u,\, \mathbf{X}_T\rangle}\,\widehat{P}(u)\,du\Big)\Big], \quad R \in \delta_P$$
$$= (2\pi)^{-d}\, e^{-rT}\,\Re\Big(\int_{\mathbb{R}^d + iR} \Phi_{\mathbf{X}_T}(u)\,\widehat{P}(u)\,du\Big), \quad R \in \delta_V := \delta_P \cap \delta_X$$
$$:= \int_{\mathbb{R}^d} g\big(u; R, \Theta_m, \Theta_p\big)\,du \qquad (16)$$
Notations
$\Phi_{\mathbf{X}_T}(u)$: the joint characteristic function of $\mathbf{X}_T := (X_T^1,\dots,X_T^d)$; $X_T^i := \log(S_T^i)$, where $\{S_T^i\}_{i=1}^d$ are the prices of the underlying assets at time $T$.
$\widehat{P}(\cdot)$: the conjugate of the Fourier transform of the payoff $P(\cdot)$.
$R$: vector of damping parameters ensuring $L^1$ integrability of $g(\cdot)$.
$\delta_V := \delta_P \cap \delta_X$: the strip of regularity of $g(\cdot)$; $\delta_P$: strip of regularity of $\widehat{P}(\cdot)$; $\delta_X$: strip of regularity of $\Phi_{\mathbf{X}_T}(\cdot)$.
$\Theta_m, \Theta_p$: the model and payoff parameters, respectively.
$i$: the imaginary unit; $\Re(\cdot)$: the real part of a complex number.
46. Motivation
The integrand in the frequency space often has higher regularity than in the physical space.
For the Fourier-based pricing approach, two key aspects affect the complexity of the numerical quadrature methods:
1. The choice of the damping parameters, which ensure integrability and control the regularity of the integrand.
There is no precise analysis of the effect of the damping parameters on the convergence speed, nor guidance on choosing them to improve the numerical performance, particularly in the multivariate setting.
2. The effective treatment of the high dimensionality.
Methodology
1. Smoothing the Fourier integrand via an optimized choice of damping parameters.
2. Sparsification and dimension-adaptivity techniques to accelerate the convergence of the quadrature in moderate/high dimensions.
47. Framework
Basket put:
$$P(\mathbf{X}_T) = \max\Big(K - \sum_{i=1}^d e^{X_T^i},\, 0\Big); \qquad \widehat{P}(z) = \frac{K^{1 - i\sum_{j=1}^d z_j}\,\prod_{j=1}^d \Gamma(-iz_j)}{\Gamma\big(-i\sum_{j=1}^d z_j + 2\big)}, \quad z \in \mathbb{C}^d,\ \Im[z] \in \delta_P.$$
Call on min:
$$P(\mathbf{X}_T) = \max\Big(\min\big(e^{X_T^1},\dots,e^{X_T^d}\big) - K,\, 0\Big); \qquad \widehat{P}(z) = \frac{K^{1 - i\sum_{j=1}^d z_j}}{\big(i\sum_{j=1}^d z_j - 1\big)\prod_{j=1}^d (iz_j)}, \quad z \in \mathbb{C}^d,\ \Im[z] \in \delta_P.$$
Table 1: Strip of regularity of $\widehat{P}(\cdot)$: $\delta_P$ (Eberlein, Glau, and Papapantoleon 2010).
Basket put: $\delta_P = \{R \in \mathbb{R}^d :\ R_i > 0\ \forall i \in \{1,\dots,d\}\}$.
Call on min: $\delta_P = \{R \in \mathbb{R}^d :\ R_i < 0\ \forall i \in \{1,\dots,d\},\ \sum_{i=1}^d R_i < -1\}$.
Notation: $\Gamma(\cdot)$: complex Gamma function; $\Im(\cdot)$: imaginary part of the argument.
48. Framework: Pricing Models
Table 2: $\Phi_{\mathbf{X}_T}(\cdot)$ for the different models; $\langle\cdot,\cdot\rangle$ is the standard dot product on $\mathbb{R}^d$.
GBM: $\Phi_{\mathbf{X}_T}(z) = \exp\big(i\langle z, \mathbf{X}_0\rangle\big)\,\exp\big(i\langle z,\, r\mathbf{1}_{\mathbb{R}^d} - \tfrac12\mathrm{diag}(\Sigma)\rangle T - \tfrac{T}{2}\langle z, \Sigma z\rangle\big)$, $\Im[z] \in \delta_X$.
VG: $\Phi_{\mathbf{X}_T}(z) = \exp\big(i\langle z, \mathbf{X}_0\rangle\big)\,\exp\big(i\langle z,\, r\mathbf{1}_{\mathbb{R}^d} + \mu_{VG}\rangle T\big)\,\big(1 - i\nu\langle\theta, z\rangle + \tfrac12\nu\langle z, \Sigma z\rangle\big)^{-1/\nu}$, $\Im[z] \in \delta_X$.
NIG: $\Phi_{\mathbf{X}_T}(z) = \exp\big(i\langle z, \mathbf{X}_0\rangle\big)\,\exp\big(i\langle z,\, r\mathbf{1}_{\mathbb{R}^d} + \mu_{NIG}\rangle T\big)\,\exp\Big(\delta T\big(\sqrt{\alpha^2 - \langle\beta, \Delta\beta\rangle} - \sqrt{\alpha^2 - \langle\beta + iz,\, \Delta(\beta + iz)\rangle}\big)\Big)$, $\Im[z] \in \delta_X$.
Table 3: Strip of regularity of $\Phi_{\mathbf{X}_T}(\cdot)$: $\delta_X$ (Eberlein, Glau, and Papapantoleon 2010).
GBM: $\delta_X = \mathbb{R}^d$.
VG: $\delta_X = \{R \in \mathbb{R}^d :\ 1 + \nu\langle\theta, R\rangle - \tfrac12\nu\langle R, \Sigma R\rangle > 0\}$.
NIG: $\delta_X = \{R \in \mathbb{R}^d :\ \alpha^2 - \langle(\beta - R),\, \Delta(\beta - R)\rangle > 0\}$.
Notation:
$\Sigma$: covariance matrix of the Geometric Brownian Motion (GBM) model.
$\nu, \theta, \sigma, \Sigma$: Variance Gamma (VG) model parameters.
$\alpha, \beta, \delta, \Delta$: Normal Inverse Gaussian (NIG) model parameters.
$\mu_{VG,i} = \frac{1}{\nu}\log\big(1 - \tfrac12\sigma_i^2\nu - \theta_i\nu\big)$, $i = 1,\dots,d$;
$\mu_{NIG,i} = -\delta\Big(\sqrt{\alpha^2 - \beta_i^2} - \sqrt{\alpha^2 - (\beta_i + 1)^2}\Big)$, $i = 1,\dots,d$.
49. Strip of Regularity: 2D Illustration
Figure 3.1: Strip of regularity of characteristic functions, $\delta_X$: (left) VG with $\sigma = (0.2, 0.8)$, $\theta = (-0.3, 0)$, $\nu = 0.257$; (right) NIG with $\alpha = 10$, $\beta = (-3, 0)$, $\delta = 0.2$, $\Delta = I_2$.
[Figure: two plots of the admissible damping regions in the $(R_1, R_2)$ plane.]
There is no guidance in the literature on how to choose the damping parameters $R$ to improve the error convergence of quadrature methods in the Fourier space.
50. Effect of the Damping Parameters: 2D Illustration
[Figure: four surface plots of the integrand $g(u_1, u_2)$ over $[-1, 1]^2$; the integrand's magnitude ranges from roughly $800$ at $R = (0.1, 0.1)$ down to about $6$ at $R = (2, 2)$, before blowing up to about $20000$ at $R = (3.3, 3.3)$.]
Figure 3.2: Effect of the damping parameters on the shape of the integrand for a basket put on 2 stocks under VG, with $\sigma = (0.4, 0.4)$, $\theta = (-0.3, -0.3)$, $\nu = 0.257$: (top left) $R = (0.1, 0.1)$, (top right) $R = (1, 1)$, (bottom left) $R = (2, 2)$, (bottom right) $R = (3.3, 3.3)$.
52. Optimal Damping Rule: 1D Motivation
Figure 3.3: (left) Shape of the integrand w.r.t. the damping parameter $R$; (right) convergence of the relative quadrature error w.r.t. the number of quadrature points, using Gauss-Laguerre quadrature, for the European put option under VG: $S_0 = K = 100$, $r = 0$, $T = 1$, $\sigma = 0.4$, $\theta = -0.3$, $\nu = 0.257$.
[Figure: (left) integrand $g(u)$ for $R \in \{1, 2, 3, 4\}$; (right) relative error vs. number of quadrature points for the same damping values.]
53. Optimal Damping Rule: Characterization
The analysis of the quadrature error can be performed through two representations:
1. Error estimates based on high-order derivatives, for a smooth function $g$:
▸ (−) High-order derivatives are usually challenging to estimate and control.
▸ (−) Would result in a complex rule for optimally choosing the damping parameters.
2. Error estimates valid for functions that can be extended holomorphically into the complex plane:
▸ (+) Corresponds to the case in Eq. (16).
▸ (+) Results in a simple rule for optimally choosing the damping parameters.
Error Estimate Based on Contour Integration
$$\big|E_{Q_N}[g]\big| = \Big|\frac{1}{2\pi i}\oint_C K_N(z)\,g(z)\,dz\Big| \le \frac{1}{2\pi}\,\sup_{z\in C}|g(z)|\oint_C |K_N(z)|\,|dz|, \qquad (17)$$
with $K_N(z) = \frac{H_N(z)}{\pi_N(z)}$ and $H_N(z) = \int_a^b \rho(x)\,\frac{\pi_N(x)}{z - x}\,dx$.
Extending (17) to multiple dimensions is straightforward using tensorization.
The upper bound on $\sup_{z\in C}|g(z)|$ is independent of the quadrature method.
Notation:
Quadrature error: $E_{Q_N}[g] := \int_a^b g(x)\rho(x)\,dx - \sum_{k=1}^N g(x_k)\,w_k$ ($a$, $b$ can be infinite).
$C$: a contour containing the interval $[a, b]$, within which $g(z)$ has no singularities.
$\pi_N(z)$: the orthogonal polynomial associated with the considered quadrature (its roots are the quadrature nodes).
54. Optimal Damping Heuristic Rule
We propose a heuristic optimization rule for choosing the damping parameters:
$$R^* := R^*(\Theta_m, \Theta_p) = \arg\min_{R \in \delta_V} \big\|g(u; R, \Theta_m, \Theta_p)\big\|_\infty, \qquad (18)$$
where $R^* := (R_1^*,\dots,R_d^*)$ denotes the vector of optimal damping parameters.
For the considered models, the integrand attains its maximum at the origin $u = \mathbf{0}_{\mathbb{R}^d}$; thus, solving (18) reduces to the simpler optimization problem
$$R^* = \arg\min_{R \in \delta_V} g\big(\mathbf{0}_{\mathbb{R}^d}; R, \Theta_m, \Theta_p\big).$$
We denote by $\bar R$ the numerical approximation of $R^*$ obtained by an interior-point method.
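A 1D sketch of this rule, on a toy instance of our own choosing: a European put under GBM (Black-Scholes), so the damped Fourier price can be checked against the closed-form put price. The parameters are assumptions, `minimize_scalar` stands in for the interior-point method mentioned above, and $\widehat{P}$ below is the $d = 1$ case of the basket-put transform in Table 1, with damping strip $R > 0$:

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import minimize_scalar
from scipy.stats import norm

# Assumed toy parameters; conventions follow (16) for d = 1:
# V = (2*pi)^{-1} e^{-rT} Re ∫ Phi_{X_T}(u + iR) P̂(u + iR) du,  R > 0.
S0, K, r, sigma, T = 100.0, 100.0, 0.0, 0.2, 1.0
m = np.log(S0) + (r - 0.5 * sigma**2) * T     # mean of X_T = log S_T

def phi(z):      # GBM characteristic function of X_T (complex argument)
    return np.exp(1j * z * m - 0.5 * sigma**2 * T * z**2)

def p_hat(z):    # put payoff transform: d = 1 case of the basket put row
    return K**(1.0 - 1j * z) / (1j * z * (1j * z - 1.0))

def integrand(u, R):
    return np.real(phi(u + 1j * R) * p_hat(u + 1j * R)) / (2.0 * np.pi)

# Rule (18), reduced to the origin: R* = argmin_{R in strip} g(0; R)
R_star = minimize_scalar(lambda R: integrand(0.0, R),
                         bounds=(1e-3, 10.0), method="bounded").x

# Price on the optimally damped contour and compare with Black-Scholes
V = np.exp(-r * T) * quad(integrand, -100.0, 100.0,
                          args=(R_star,), limit=200)[0]
d1 = (np.log(S0 / K) + (r + 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
bs_put = (K * np.exp(-r * T) * norm.cdf(-(d1 - sigma * np.sqrt(T)))
          - S0 * norm.cdf(-d1))
print(R_star, V, bs_put)   # V should match bs_put
```

In the multivariate setting of the slides, the scalar minimization is replaced by a constrained optimization of $g(\mathbf{0}_{\mathbb{R}^d}; R)$ over the strip $\delta_V$, and the 1D contour integral by ASGQ over $\mathbb{R}^d$.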
56. Effect of the Optimal Damping Rule on ASGQ
Figure 3.4: GBM model with anisotropic parameters: convergence of the relative quadrature error w.r.t. the number of ASGQ points, for different damping parameter values. (left) Basket put, (right) call on min.
[Figure: two log-log plots of relative error vs. number of quadrature points.]
Reference values are computed by the MC method using $M = 10^9$ samples. The parameters used are based on the model-calibration literature (Healy 2021).
57. Effect of the Optimal Damping Rule on ASGQ
Figure 3.5: VG model with anisotropic parameters: convergence of the relative quadrature error w.r.t. the number of ASGQ points, for different damping parameter values. (left) Basket put, (right) call on min.
[Figure: two log-log plots of relative error vs. number of quadrature points.]
Reference values are computed by the MC method using $M = 10^9$ samples. The parameters used are based on the model-calibration literature (Aguilar 2020).
58. Comparison of the Fourier Approach against MC
Table 4: Comparison of the Fourier approach against the MC method for the European basket put and call on min under the VG model.
Example | d | Relative Error | CPU Time Ratio
Basket put under VG (isotropic) | 4 | 3 × 10⁻⁴ | 4.7%
Basket put under VG (anisotropic) | 4 | 4 × 10⁻⁴ | 5.2%
Call on min under VG (isotropic) | 4 | 6 × 10⁻⁴ | 0.57%
Call on min under VG (anisotropic) | 4 | 9 × 10⁻⁴ | 0.56%
Basket put under VG (isotropic) | 6 | 8 × 10⁻⁴ | 42.6%
Basket put under VG (anisotropic) | 6 | 5 × 10⁻³ | 11%
Call on min under VG (isotropic) | 6 | 2 × 10⁻³ | 23%
Call on min under VG (anisotropic) | 6 | 3 × 10⁻³ | 1.3%
The CPU times in Table 4 are obtained by averaging the CPU time of 10 runs; CPU Time Ratio = [CPU(Quadrature) + CPU(Optimization)] / CPU(MC) × 100.
Reference values are computed by the MC method using $M = 10^9$ samples. The parameters used are based on the model-calibration literature (Aguilar 2020).
60. Conclusions
1. We propose a rule for choosing the damping parameters that accelerates the convergence of the numerical quadrature for Fourier-based option pricing.
2. We use adaptivity and sparsification techniques to alleviate the curse of dimensionality.
3. Our Fourier approach, combined with the optimal damping rule and adaptive sparse-grid quadrature, achieves substantial computational gains compared to the MC method for multi-asset options under GBM and Lévy models.
4. More analysis and numerical examples can be found in:
Christian Bayer, Chiheb Ben Hammouda, Antonis Papapantoleon, Michael Samet, and Raúl Tempone. "Optimal Damping with Hierarchical Adaptive Quadrature for Efficient Fourier Pricing of Multi-Asset Options in Lévy Models". arXiv:2203.08196 (2022).
61. Related References
Thank you for your attention!
[1] C. Bayer, C. Ben Hammouda, R. Tempone. Numerical Smoothing with Hierarchical Adaptive Sparse Grids and Quasi-Monte Carlo Methods for Efficient Option Pricing, arXiv:2111.01874 (2021). Accepted in Quantitative Finance (2022).
[2] C. Bayer, C. Ben Hammouda, R. Tempone. Multilevel Monte Carlo combined with numerical smoothing for robust and efficient option pricing and density estimation, arXiv:2003.05708 (2022).
[3] C. Bayer, C. Ben Hammouda, A. Papapantoleon, M. Samet, R. Tempone. Optimal Damping with Hierarchical Adaptive Quadrature for Efficient Fourier Pricing of Multi-Asset Options in Lévy Models, arXiv:2203.08196 (2022).
[4] C. Bayer, C. Ben Hammouda, R. Tempone. Hierarchical adaptive sparse grids and quasi-Monte Carlo for option pricing under the rough Bergomi model, Quantitative Finance, 2020.
62. Assumptions
Assumption 3.1
There are positive rdvs $C_p$ with finite moments of all orders such that, for all $N \in \mathbb{N}$ and all $\ell_1,\dots,\ell_p \in \{0,\dots,2^N - 1\}$:
$$\Big|\frac{\partial^p X_T^N}{\partial X_{\ell_1}^N\cdots\partial X_{\ell_p}^N}\Big| \le C_p \quad \text{a.s.},$$
which means that $\frac{\partial^p X_T^N}{\partial X_{\ell_1}^N\cdots\partial X_{\ell_p}^N} = \hat{O}_P(1)$.
Assumption 3.1 is natural, because it is fulfilled whenever the diffusion coefficient $b(\cdot)$ is smooth; this situation holds for many option pricing models.
63. Assumptions
Assumption 3.2
For any $p \in \mathbb{N}$,
$$\left( \frac{\partial X^N_T}{\partial y}(Z_{-1}, \mathbf{Z}^N) \right)^{-p} = \mathcal{O}_{\mathbb{P}}(1), \quad \text{where } y := z_{-1}.$$
In (Bayer, Ben Hammouda, and Tempone 2022), we give
sufficient conditions under which this assumption holds. For instance,
Assumption 3.2 is valid for
▸ one-dimensional models with a linear or constant diffusion;
▸ multivariate models with a linear drift and constant diffusion,
including the multivariate lognormal model (see (Bayer,
Siebenmorgen, and Tempone 2018)).
There may be some cases in which Assumption 3.2 is not fulfilled,
e.g., $X_T = W_T^2$. However, our method works well in such cases
since we have $g(X_T) = G(y_1^2)$.
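As a sanity check of the linear-diffusion case above, the following sketch (illustrative only; the scheme, parameter values, and bridge construction are assumptions for this example) bumps y in an Euler discretization of GBM and verifies by finite differences that $\partial X^N_T / \partial y$ is strictly positive and close to $\sigma\sqrt{T}\, X^N_T$, so that its negative powers stay under control.

```python
import numpy as np

rng = np.random.default_rng(0)
S0, r, sigma, T, N = 100.0, 0.05, 0.2, 1.0, 64
dt = T / N

def euler_gbm(y, bridge):
    # Conditional on W_T = sqrt(T)*y, each Brownian increment receives
    # the uniform share (dt/sqrt(T))*y plus a bridge part independent of y.
    dW = (dt / np.sqrt(T)) * y + bridge
    X = S0
    for n in range(N):
        X *= 1.0 + r * dt + sigma * dW[n]  # Euler step, linear diffusion
    return X

# Increments of a Brownian bridge on [0, T] (pinned to 0 at both ends),
# built from an unconditioned path by subtracting its linear interpolant.
W = np.concatenate(([0.0], np.cumsum(np.sqrt(dt) * rng.standard_normal(N))))
bridge = np.diff(W - (np.arange(N + 1) * dt / T) * W[-1])

y, h = 0.3, 1e-6
dXdy = (euler_gbm(y + h, bridge) - euler_gbm(y - h, bridge)) / (2 * h)
XT = euler_gbm(y, bridge)
print(dXdy, sigma * np.sqrt(T) * XT)  # derivative is positive, ~ sigma*sqrt(T)*X_T
```

Since $X^N_T$ is lognormal-like and bounded away from zero in probability, the same holds for $\partial X^N_T / \partial y \approx \sigma\sqrt{T}\, X^N_T$, which is the mechanism behind Assumption 3.2 in this case.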
64. References I
[1] Jean-Philippe Aguilar. “Some pricing tools for the Variance
Gamma model”. In: International Journal of Theoretical and
Applied Finance 23.04 (2020), p. 2050025.
[2] Christian Bayer, Chiheb Ben Hammouda, and Raúl Tempone.
“Hierarchical adaptive sparse grids and quasi-Monte Carlo for
option pricing under the rough Bergomi model”. In: Quantitative
Finance 20.9 (2020), pp. 1457–1473.
[3] Christian Bayer, Chiheb Ben Hammouda, and Raúl Tempone.
“Numerical Smoothing with Hierarchical Adaptive Sparse Grids
and Quasi-Monte Carlo Methods for Efficient Option Pricing”.
In: arXiv preprint arXiv:2111.01874 (2021). Accepted in
Quantitative Finance (2022).
65. References II
[4] Christian Bayer, Chiheb Ben Hammouda, and Raúl Tempone.
“Multilevel Monte Carlo combined with numerical smoothing for
robust and efficient option pricing and density estimation”. In:
arXiv preprint arXiv:2003.05708 (2022).
[5] Christian Bayer, Markus Siebenmorgen, and Rául Tempone.
“Smoothing the payoff for efficient computation of basket option
pricing.”. In: Quantitative Finance 18.3 (2018), pp. 491–505.
[6] Christian Bayer et al. “Optimal Damping with Hierarchical
Adaptive Quadrature for Efficient Fourier Pricing of Multi-Asset
Options in Lévy Models”. In: arXiv preprint arXiv:2203.08196
(2022).
[7] Chiheb Ben Hammouda, Nadhir Ben Rached, and Raúl Tempone.
“Importance sampling for a robust and efficient multilevel Monte
Carlo estimator for stochastic reaction networks”. In: Statistics
and Computing 30.6 (2020), pp. 1665–1689.
66. References III
[8] Chiheb Ben Hammouda, Alvaro Moraes, and Raúl Tempone.
“Multilevel hybrid split-step implicit tau-leap”. In: Numerical
Algorithms 74.2 (2017), pp. 527–560.
[9] Peng Chen. “Sparse quadrature for high-dimensional integration
with Gaussian measure”. In: ESAIM: Mathematical Modelling
and Numerical Analysis 52.2 (2018), pp. 631–657.
[10] K Andrew Cliffe et al. “Multilevel Monte Carlo methods and
applications to elliptic PDEs with random coefficients”. In:
Computing and Visualization in Science 14.1 (2011), p. 3.
[11] Josef Dick, Frances Y Kuo, and Ian H Sloan. “High-dimensional
integration: the quasi-Monte Carlo way”. In: Acta Numerica 22
(2013), pp. 133–288.
67. References IV
[12] Ernst Eberlein, Kathrin Glau, and Antonis Papapantoleon.
“Analysis of Fourier transform valuation formulas and
applications”. In: Applied Mathematical Finance 17.3 (2010),
pp. 211–240.
[13] Oliver G Ernst, Bjorn Sprungk, and Lorenzo Tamellini.
“Convergence of sparse collocation for functions of countably
many Gaussian random variables (with application to elliptic
PDEs)”. In: SIAM Journal on Numerical Analysis 56.2 (2018),
pp. 877–905.
[14] Thomas Gerstner and Michael Griebel. “Numerical integration
using sparse grids”. In: Numerical Algorithms 18.3 (1998),
pp. 209–232.
[15] Alexander D Gilbert, Frances Y Kuo, and Ian H Sloan.
“Preintegration is not smoothing when monotonicity fails”. In:
arXiv preprint arXiv:2112.11621 (2021).
68. References V
[16] Michael B Giles. “Improved multilevel Monte Carlo convergence
using the Milstein scheme”. In: Monte Carlo and Quasi-Monte
Carlo Methods 2006. Springer, 2008, pp. 343–358.
[17] Michael B Giles. “Multilevel Monte Carlo path simulation”. In:
Operations Research 56.3 (2008), pp. 607–617.
[18] Michael B Giles, Kristian Debrabant, and Andreas Rößler.
“Numerical analysis of multilevel Monte Carlo path simulation
using the Milstein discretisation”. In: arXiv preprint
arXiv:1302.4676 (2013).
[19] Michael B Giles and Abdul-Lateef Haji-Ali. “Multilevel Path
Branching for Digital Options”. In: arXiv preprint
arXiv:2209.03017 (2022).
69. References VI
[20] Michael B Giles, Tigran Nagapetyan, and Klaus Ritter.
“Multilevel Monte Carlo approximation of distribution functions
and densities”. In: SIAM/ASA Journal on Uncertainty
Quantification 3.1 (2015), pp. 267–295.
[21] Andreas Griewank et al. “High dimensional integration of kinks
and jumps—smoothing by preintegration”. In: Journal of
Computational and Applied Mathematics 344 (2018), pp. 259–274.
[22] Abdul-Lateef Haji-Ali, Jonathan Spence, and Aretha Teckentrup.
“Adaptive Multilevel Monte Carlo for Probabilities”. In: arXiv
preprint arXiv:2107.09148 (2021).
[23] Jherek Healy. “The Pricing of Vanilla Options with Cash
Dividends as a Classic Vanilla Basket Option Problem”. In:
arXiv preprint arXiv:2106.12971 (2021).
70. References VII
[24] Stefan Heinrich. “Multilevel Monte Carlo methods”. In:
International Conference on Large-Scale Scientific Computing.
Springer. 2001, pp. 58–67.
[25] Ahmed Kebaier. “Statistical Romberg extrapolation: a new
variance reduction method and applications to option pricing”.
In: The Annals of Applied Probability 15.4 (2005), pp. 2681–2705.
[26] J Lars Kirkby. “Efficient option pricing by frame duality with the
fast Fourier transform”. In: SIAM Journal on Financial
Mathematics 6.1 (2015), pp. 713–747.
[27] Harald Niederreiter. Random number generation and
quasi-Monte Carlo methods. Vol. 63. SIAM, 1992.
[28] Dirk Nuyens. The construction of good lattice rules and
polynomial lattice rules. 2014.
71. References VIII
[29] Dirk Nuyens and Yuya Suzuki. “Scaled lattice rules for
integration on $\mathbb{R}^d$ achieving higher-order convergence with error
analysis in terms of orthogonal projections onto periodic spaces”.
In: arXiv preprint arXiv:2108.12639 (2021).
[30] Ian H Sloan. “Lattice methods for multiple integration”. In:
Journal of Computational and Applied Mathematics 12 (1985),
pp. 131–143.