My talk at the 15th International Conference on Monte Carlo and Quasi-Monte Carlo Methods in Scientific Computing (MCQMC) at Johannes Kepler Universität Linz, July 20, 2022, about my recent works "Numerical Smoothing with Hierarchical Adaptive Sparse Grids and Quasi-Monte Carlo Methods for Efficient Option Pricing" and "Multilevel Monte Carlo combined with numerical smoothing for robust and efficient option pricing and density estimation".
Numerical Smoothing and Hierarchical Approximations for Efficient Option Pricin... - Chiheb Ben Hammouda
My talk at the "Stochastic Numerics and Statistical Learning: Theory and Applications" Workshop at KAUST (King Abdullah University of Science and Technology), May 23, 2022, about my recent works "Numerical Smoothing with Hierarchical Adaptive Sparse Grids and Quasi-Monte Carlo Methods for Efficient Option Pricing" and "Multilevel Monte Carlo combined with numerical smoothing for robust and efficient option pricing and density estimation".
My talk entitled "Numerical Smoothing and Hierarchical Approximations for Efficient Option Pricing and Density Estimation", which I gave at the International Conference on Computational Finance (ICCF), Wuppertal, June 6-10, 2022. The talk is related to our recent works "Numerical Smoothing with Hierarchical Adaptive Sparse Grids and Quasi-Monte Carlo Methods for Efficient Option Pricing" (link: https://arxiv.org/abs/2111.01874) and "Multilevel Monte Carlo combined with numerical smoothing for robust and efficient option pricing and density estimation" (link: https://arxiv.org/abs/2003.05708). In these two works, we introduce the numerical smoothing technique, which improves the regularity of observables when approximating expectations (or the related integration problems). We provide a smoothness analysis and show how this technique leads to better performance for the different methods we used: (i) adaptive sparse grids, (ii) quasi-Monte Carlo, and (iii) multilevel Monte Carlo. Our applications are option pricing and density estimation. Our approach is generic and can be applied to a broad class of problems, particularly approximating distribution functions, computing financial Greeks, and estimating risk.
My talk at the International Conference on Monte Carlo Methods and Applications (MCM 2023), on advances in the mathematical aspects of stochastic simulation and Monte Carlo methods, at Sorbonne Université, June 28, 2023, about my recent works (i) "Numerical Smoothing with Hierarchical Adaptive Sparse Grids and Quasi-Monte Carlo Methods for Efficient Option Pricing" (link: https://doi.org/10.1080/14697688.2022.2135455), and (ii) "Multilevel Monte Carlo with Numerical Smoothing for Robust and Efficient Computation of Probabilities and Densities" (link: https://arxiv.org/abs/2003.05708).
This document discusses two techniques for improving the efficiency of option pricing methods: numerical smoothing and optimal damping. Numerical smoothing approximates non-smooth option payoffs by smooth functions, allowing faster convergence of quadrature and multilevel Monte Carlo methods. Optimal damping multiplies the payoff by an exponential damping factor when pricing options under Lévy models with Fourier methods, improving stability and accuracy. The document outlines the theoretical underpinnings of how smoothness affects integration error and complexity, and presents numerical results demonstrating the effectiveness of both techniques.
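To make the damping mechanism concrete, the following sketch prices a European call via the damped (Carr-Madan) Fourier representation. It uses the Black-Scholes characteristic function as a stand-in for a Lévy model (the damping mechanics are identical), and the damping parameter alpha = 1.5 is an arbitrary illustrative choice, not the optimal value from the talks.

```python
import numpy as np

# Black-Scholes characteristic function of log(S_T); a stand-in for a
# Levy model's characteristic function.
def cf_bs(u, S0, r, sigma, T):
    mu = np.log(S0) + (r - 0.5 * sigma**2) * T
    return np.exp(1j * u * mu - 0.5 * sigma**2 * u**2 * T)

def call_carr_madan(S0, K, r, sigma, T, alpha, v_max=200.0, n=20000):
    """Damped Fourier (Carr-Madan) price of a European call.
    The factor exp(-alpha*k) makes the transformed payoff integrable."""
    k = np.log(K)
    v = np.linspace(1e-8, v_max, n)
    psi = np.exp(-r * T) * cf_bs(v - 1j * (alpha + 1), S0, r, sigma, T) \
          / (alpha**2 + alpha - v**2 + 1j * (2 * alpha + 1) * v)
    integrand = np.real(np.exp(-1j * v * k) * psi)
    dv = v[1] - v[0]
    integral = dv * (integrand.sum() - 0.5 * (integrand[0] + integrand[-1]))
    return np.exp(-alpha * k) / np.pi * integral

price = call_carr_madan(100.0, 100.0, 0.05, 0.2, 1.0, alpha=1.5)
print(round(price, 4))  # close to the Black-Scholes value 10.4506
```

Without the damping factor, the transformed call payoff is not integrable in v, which is precisely why a damping parameter must be chosen before any quadrature is applied.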
Workshop: Numerical Analysis of Stochastic Partial Differential Equations (NASPDE), in Network Eurandom at Eindhoven University of Technology, May 16, 2023, about my recent works (i) "Numerical Smoothing with Hierarchical Adaptive Sparse Grids and Quasi-Monte Carlo Methods for Efficient Option Pricing" (link: https://doi.org/10.1080/14697688.2022.2135455), and (ii) "Multilevel Monte Carlo with Numerical Smoothing for Robust and Efficient Computation of Probabilities and Densities" (link: https://arxiv.org/abs/2003.05708).
Hierarchical Deterministic Quadrature Methods for Option Pricing under the Ro... - Chiheb Ben Hammouda
Conference talk at the SIAM Conference on Financial Mathematics and Engineering, held in virtual format, June 1-4, 2021, about our recently published work "Hierarchical adaptive sparse grids and quasi-Monte Carlo for option pricing under the rough Bergomi model".
- Link of the paper: https://www.tandfonline.com/doi/abs/10.1080/14697688.2020.1744700
Numerical smoothing and hierarchical approximations for efficient option pric... - Chiheb Ben Hammouda
1. The document presents a numerical smoothing technique to improve the efficiency of option pricing and density estimation when analytic smoothing is not possible.
2. The technique involves numerically determining discontinuities in the integrand and computing the integral only over the smooth regions. It also uses hierarchical representations and Brownian bridges to reduce the effective dimension of the problem.
3. The numerical smoothing approach outperforms Monte Carlo methods in high-dimensional cases and improves the complexity of multilevel Monte Carlo from O(TOL^-2.5) to O(TOL^-2 log(TOL)^2).
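The core idea in point 2 can be sketched in one dimension (illustrative parameters; the papers treat path-dependent payoffs and hierarchical Brownian-bridge constructions): numerically locate the kink of the payoff in the terminal Gaussian factor by root-finding, then apply deterministic quadrature only on the region where the integrand is smooth.

```python
import numpy as np
from scipy.optimize import brentq
from scipy.stats import norm

# Toy pre-integration step behind numerical smoothing: find the payoff
# kink numerically, then integrate over the smooth region only.
S0, K, r, sigma, T = 100.0, 100.0, 0.05, 0.2, 1.0

def terminal_price(z):
    return S0 * np.exp((r - 0.5 * sigma**2) * T + sigma * np.sqrt(T) * z)

# Step 1: locate the discontinuity (kink) of max(S_T - K, 0) in z.
z_star = brentq(lambda z: terminal_price(z) - K, -10.0, 10.0)

# Step 2: Gauss-Legendre quadrature on the smooth region [z_star, z_star + 8],
# where the payoff is simply S_T - K (it vanishes below z_star).
nodes, weights = np.polynomial.legendre.leggauss(32)
a, b = z_star, z_star + 8.0
z = 0.5 * (b - a) * nodes + 0.5 * (b + a)
integrand = (terminal_price(z) - K) * norm.pdf(z)
expectation = 0.5 * (b - a) * np.dot(weights, integrand)

print(round(expectation, 4))  # E[max(S_T - K, 0)]; equals exp(r*T) times the Black-Scholes call price
```

Because the kink location is resolved exactly (up to root-finding tolerance) rather than sampled over, the quadrature sees a smooth integrand and converges at a fast deterministic rate; the same principle drives the sparse-grid, QMC, and MLMC gains described above.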
The smile calibration problem is a mathematical conundrum in finance that has challenged quantitative analysts for decades. Through his research, Aitor Muguruza has discovered a novel resolution to this classic problem.
Hierarchical Deterministic Quadrature Methods for Option Pricing under the Ro... - Chiheb Ben Hammouda
Seminar talk at École des Ponts ParisTech about our recently published work "Hierarchical adaptive sparse grids and quasi-Monte Carlo for option pricing under the rough Bergomi model". - Link of the paper: https://www.tandfonline.com/doi/abs/10.1080/14697688.2020.1744700
This document summarizes a research paper on hierarchical deterministic quadrature methods for option pricing under the rough Bergomi model. It discusses the rough Bergomi model and the challenges of pricing options under it numerically. It then describes the methodology, which combines analytic smoothing, adaptive sparse-grid quadrature, and quasi-Monte Carlo with hierarchical representations and Richardson extrapolation. Several figures illustrate the adaptive construction of sparse grids and the simulation of the rough Bergomi dynamics.
This document provides an overview of support vector machines (SVMs). It discusses how SVMs can be used to perform classification tasks by finding optimal separating hyperplanes that maximize the margin between different classes. The document outlines how SVMs solve an optimization problem to find these optimal hyperplanes using techniques like Lagrange duality, kernels, and soft margins. It also covers model selection methods like cross-validation and discusses extensions of SVMs to multi-class classification problems.
Sparse data formats and efficient numerical methods for uncertainties in nume... - Alexander Litvinenko
Description of the methodologies and an overview of the numerical methods we used for modeling and quantifying uncertainties in numerical aerodynamics.
The document summarizes key concepts from Chapter 8 of the textbook "Fundamentals of Multimedia" on lossy compression algorithms. It introduces lossy compression and discusses distortion measures, rate-distortion theory, quantization techniques including uniform, non-uniform, and vector quantization. It also covers transform coding techniques such as the discrete cosine transform and its use in image compression standards to remove spatial redundancies by transforming pixel values into frequency coefficients.
International Journal of Engineering and Mathematical Modelling, Vol. 2, No. 1, 2015 - IJEMM
Our efforts concentrate on improving the convergence rate of the numerical procedures, in terms of both cost-efficiency and accuracy, by handling the parametrization of the shape to be optimized. We employ nested parameterization supports of either shape or shape deformation, and the classical process of degree elevation, resulting in exact geometrical data transfer from coarse to fine representations. The algorithms mimic classical multigrid strategies and prove very effective in terms of convergence acceleration. In this paper, we analyse and demonstrate the efficiency of the two-level correction algorithm, which is the basic building block of a more general multilevel strategy.
This document discusses automatic Bayesian cubature for numerical integration. It begins with an introduction to multivariate integration and the challenges it poses. It then describes an automatic cubature algorithm that generates sample points and computes error bounds iteratively until a tolerance threshold is met. Next, it covers Bayesian cubature, which treats integrands as random functions to obtain probabilistic error bounds. It defines a Bayesian trio identity relating the integration error to discrepancies, variations, and alignments. The document concludes with discussions of future work.
To describe the dynamics taking place in networks that structurally change over time, we propose an approach to search for attributes whose value changes impact the topology of the graph. In several applications, it appears that the variations of a group of attributes are often followed by some structural changes in the graph that one may assume they generate. We formalize the triggering pattern discovery problem as a method jointly rooted in sequence mining and graph analysis. We apply our approach on three real-world dynamic graphs of different natures - a co-authoring network, an airline network, and a social bookmarking system - assessing the relevancy of the triggering pattern mining approach.
Talk by Michael Samet, entitled "Optimal Damping with Hierarchical Adaptive Quadrature for Efficient Fourier Pricing of Multi-Asset Options in Lévy Models", at the International Conference on Computational Finance (ICCF), Wuppertal, June 6-10, 2022.
Sequential quasi-Monte Carlo (SQMC) is a quasi-Monte Carlo (QMC) version of sequential Monte Carlo (or particle filtering), a popular class of Monte Carlo techniques used to carry out inference in state space models. In this talk I will first review the SQMC methodology as well as some theoretical results. Although SQMC converges faster than the usual Monte Carlo error rate, its performance deteriorates quickly as the dimension of the hidden variable increases. However, I will show with an example that SQMC may perform well for some "high" dimensional problems. I will conclude this talk with some open problems and potential applications of SQMC in complicated settings.
This document summarizes techniques for approximating marginal likelihoods and Bayes factors, which are important quantities in Bayesian inference. It discusses Geyer's 1994 logistic regression approach, links to bridge sampling, and how mixtures can be used as importance sampling proposals. Specifically, it shows how optimizing the logistic pseudo-likelihood relates to the bridge sampling optimal estimator. It also discusses non-parametric maximum likelihood estimation based on simulations.
This document discusses time series forecasting techniques for multivariate and hierarchical time series data. It presents several cases involving energy consumption forecasting, sales forecasting, and freight transportation forecasting. For each case, it describes the time series data and components, discusses feature generation methods like nonparametric transformations and the Haar wavelet transform to extract features, and evaluates different forecasting models and their ability to generate consistent forecasts while respecting any hierarchical relationships in the data. The focus is on generating accurate forecasts while maintaining properties like consistency, minimizing errors, and handling complex time series structures.
IRJET: Analytic Evaluation of the Head Injury Criterion (HIC) within the Fram... - IRJET Journal
This document presents an analytic evaluation of the Head Injury Criterion (HIC) within the framework of constrained optimization theory. The HIC is a weighted impulse function used to predict the probability of closed head injury based on measured head acceleration. Previous work analyzed the unclipped HIC function, but the clipped HIC formulation used in practice limits the evaluation window duration. The author develops analytic relationships for determining the window initiation and termination points to maximize the clipped HIC function. Example applications illustrate the general solutions for when head acceleration is defined by a single function or composite functions over the evaluation domain.
Hand gesture recognition using discrete wavelet transform and hidden Markov m... - TELKOMNIKA JOURNAL
This document presents research on hand gesture recognition using discrete wavelet transform and hidden Markov models. The researchers designed a system that uses discrete wavelet transform for feature extraction from images of hand gestures, representing the images as feature vectors. They then used hidden Markov models for classification, calculating the probability of each feature vector belonging to different gesture classes. The system was tested on a dataset of 150 training and 100 test images across five gesture classes and achieved an accuracy of 72%. Compared to another dataset, the new dataset and preprocessing methods improved accuracy by 14%, likely due to better brightness and contrast handling. The researchers concluded the system had good performance for hand gesture recognition.
Traveling Salesman Problem in Distributed Environment - csandit
In this paper, we focus on developing parallel algorithms for solving the traveling salesman problem (TSP) based on Nicos Christofides' algorithm published in 1976. The parallel algorithm is built in a distributed multi-processor (master-slave) environment. The algorithm is installed on the computer cluster system of the National University of Education in Hanoi, Vietnam (ccs1.hnue.edu.vn) and uses the PJ (Parallel Java) library. The results are evaluated and compared with other works.
TRAVELING SALESMAN PROBLEM IN DISTRIBUTED ENVIRONMENT - cscpconf
The document describes developing a parallel algorithm for solving the traveling salesman problem (TSP) based on Christofides' algorithm. It discusses implementing Christofides' algorithm in a distributed environment using multiple processors. The parallel algorithm divides the graph vertices and distance matrix across slave processors, which calculate the minimum spanning tree in parallel. The master processor then finds odd-degree vertices, performs matching, and finds the Hamiltonian cycle to solve TSP. The algorithm is tested on a computer cluster using graphs of 20,000 and 30,000 nodes, showing improved runtime over the sequential algorithm.
Efficient Fourier Pricing of Multi-Asset Options: Quasi-Monte Carlo & Domain ... - Chiheb Ben Hammouda
My talk at ICCF24 with abstract: Efficiently pricing multi-asset options poses a significant challenge in quantitative finance. While the Monte Carlo (MC) method remains a prevalent choice, its slow convergence rate can impede practical applications. Fourier methods, leveraging the knowledge of the characteristic function, have shown promise in valuing single-asset options but face hurdles in the high-dimensional context. This work advocates using the randomized quasi-MC (RQMC) quadrature to improve the scalability of Fourier methods with high dimensions. The RQMC technique benefits from the smoothness of the integrand and alleviates the curse of dimensionality while providing practical error estimates. Nonetheless, the applicability of RQMC on the unbounded domain, $\mathbb{R}^d$, requires a domain transformation to $[0,1]^d$, which may result in singularities of the transformed integrand at the corners of the hypercube, and deteriorate the rate of convergence of RQMC. To circumvent this difficulty, we design an efficient domain transformation procedure based on the derived boundary growth conditions of the integrand. This transformation preserves the sufficient regularity of the integrand and hence improves the rate of convergence of RQMC. To validate this analysis, we demonstrate the efficiency of employing RQMC with an appropriate transformation to evaluate options in the Fourier space for various pricing models, payoffs, and dimensions. Finally, we highlight the computational advantage of applying RQMC over quadrature methods in the Fourier domain, and over the MC method in the physical domain for options with up to 15 assets.
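The $\mathbb{R}^d \to [0,1]^d$ mapping central to this abstract can be sketched on a toy integrand (not the Fourier pricing integrand from the talk): transform scrambled Sobol points through the inverse normal CDF and average. The test function $E[\exp(a \cdot Z)] = \exp(|a|^2/2)$ and the parameters below are illustrative choices.

```python
import numpy as np
from scipy.stats import qmc, norm

# RQMC on an unbounded domain: scrambled Sobol points in (0,1)^d are
# pushed to R^d via the inverse normal CDF, then the integrand is averaged.
d = 5
a = np.full(d, 0.3)
sobol = qmc.Sobol(d, scramble=True, seed=42)
u = sobol.random_base2(m=14)   # 2^14 scrambled Sobol points in (0,1)^d
z = norm.ppf(u)                # inverse-CDF transform to R^d
estimate = np.mean(np.exp(z @ a))
exact = np.exp(0.5 * np.dot(a, a))
print(abs(estimate - exact))   # small RQMC error
```

Note that the scrambling both randomizes the point set (enabling practical error estimates) and avoids the unscrambled Sobol point at the origin, where norm.ppf would diverge; controlling how fast the transformed integrand grows near the corners of the hypercube is exactly the boundary-growth issue the abstract addresses.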
This document presents a framework for efficiently pricing multi-asset options using Fourier methods. It discusses using the Fourier transform to map option pricing problems to frequency space, where the integrand may have better regularity. A damping parameter is introduced to ensure the transformed functions have sufficient decay at infinity. However, literature provides no guidance on choosing optimal damping parameters. The document proposes a method called Optimal Damping with Hierarchical Adaptive Quadrature to select damping parameters that improve the convergence rate of quadrature pricing methods in Fourier space. It applies this method to price options under various multi-dimensional models in numerical experiments.
Stochastic reaction networks (SRNs) are a particular class of continuous-time Markov chains used to model a wide range of phenomena, including biological/chemical reactions, epidemics, risk theory, queuing, and supply chain/social/multi-agents networks. In this context, we explore the efficient estimation of statistical quantities, particularly rare event probabilities, and propose two alternative importance sampling (IS) approaches [1,2] to improve the Monte Carlo (MC) estimator efficiency. The key challenge in the IS framework is to choose an appropriate change of probability measure to achieve substantial variance reduction, which often requires insights into the underlying problem. Therefore, we propose an automated approach to obtain a highly efficient path-dependent measure change based on an original connection between finding optimal IS parameters and solving a variance minimization problem via a stochastic optimal control formulation. We pursue two alternative approaches to mitigate the curse of dimensionality when solving the resulting dynamic programming problem. In the first approach [1], we propose a learning-based method to approximate the value function using a neural network, where the parameters are determined via a stochastic optimization algorithm. As an alternative, we present in [2] a dimension reduction method, based on mapping the problem to a significantly lower dimensional space via the Markovian projection (MP) idea. The output of this model reduction technique is a low dimensional SRN (potentially one dimension) that preserves the marginal distribution of the original high-dimensional SRN system. The dynamics of the projected process are obtained via a discrete $L^2$ regression. 
By solving a resulting projected Hamilton-Jacobi-Bellman (HJB) equation for the reduced-dimensional SRN, we get projected IS parameters, which are then mapped back to the original full-dimensional SRN system, and result in an efficient IS-MC estimator of the full-dimensional SRN. Our analysis and numerical experiments verify that both proposed IS (learning based and MP-HJB-IS) approaches substantially reduce the MC estimator’s variance, resulting in a lower computational complexity in the rare event regime than standard MC estimators. [1] Ben Hammouda, C., Ben Rached, N., and Tempone, R., and Wiechert, S. Learning-based importance sampling via stochastic optimal control for stochastic reaction net-works. Statistics and Computing 33, no. 3 (2023): 58. [2] Ben Hammouda, C., Ben Rached, N., and Tempone, R., and Wiechert, S. (2023). Automated Importance Sampling via Optimal Control for Stochastic Reaction Networks: A Markovian Projection-based Approach. To appear soon.
Seminar Talk: Multilevel Hybrid Split Step Implicit Tau-Leap for Stochastic R...Chiheb Ben Hammouda
The document describes a multilevel hybrid split-step implicit tau-leap method for simulating stochastic reaction networks. It begins with background on modeling biochemical reaction networks stochastically. It then discusses challenges with existing simulation methods like the chemical master equation and stochastic simulation algorithm. The document introduces the split-step implicit tau-leap method as an improvement over explicit tau-leap for stiff systems. It proposes a multilevel Monte Carlo estimator using this method to efficiently estimate expectations of observables with near-optimal computational work.
My talk in the MCQMC Conference 2016, Stanford University. The talk is about Multilevel Hybrid Split Step Implicit Tau-Leap
for Stochastic Reaction Networks.
The use of Nauplii and metanauplii artemia in aquaculture (brine shrimp).pptxMAGOTI ERNEST
Although Artemia has been known to man for centuries, its use as a food for the culture of larval organisms apparently began only in the 1930s, when several investigators found that it made an excellent food for newly hatched fish larvae (Litvinenko et al., 2023). As aquaculture developed in the 1960s and ‘70s, the use of Artemia also became more widespread, due both to its convenience and to its nutritional value for larval organisms (Arenas-Pardo et al., 2024). The fact that Artemia dormant cysts can be stored for long periods in cans, and then used as an off-the-shelf food requiring only 24 h of incubation makes them the most convenient, least labor-intensive, live food available for aquaculture (Sorgeloos & Roubach, 2021). The nutritional value of Artemia, especially for marine organisms, is not constant, but varies both geographically and temporally. During the last decade, however, both the causes of Artemia nutritional variability and methods to improve poorquality Artemia have been identified (Loufi et al., 2024).
Brine shrimp (Artemia spp.) are used in marine aquaculture worldwide. Annually, more than 2,000 metric tons of dry cysts are used for cultivation of fish, crustacean, and shellfish larva. Brine shrimp are important to aquaculture because newly hatched brine shrimp nauplii (larvae) provide a food source for many fish fry (Mozanzadeh et al., 2021). Culture and harvesting of brine shrimp eggs represents another aspect of the aquaculture industry. Nauplii and metanauplii of Artemia, commonly known as brine shrimp, play a crucial role in aquaculture due to their nutritional value and suitability as live feed for many aquatic species, particularly in larval stages (Sorgeloos & Roubach, 2021).
EWOCS-I: The catalog of X-ray sources in Westerlund 1 from the Extended Weste...Sérgio Sacani
Context. With a mass exceeding several 104 M⊙ and a rich and dense population of massive stars, supermassive young star clusters
represent the most massive star-forming environment that is dominated by the feedback from massive stars and gravitational interactions
among stars.
Aims. In this paper we present the Extended Westerlund 1 and 2 Open Clusters Survey (EWOCS) project, which aims to investigate
the influence of the starburst environment on the formation of stars and planets, and on the evolution of both low and high mass stars.
The primary targets of this project are Westerlund 1 and 2, the closest supermassive star clusters to the Sun.
Methods. The project is based primarily on recent observations conducted with the Chandra and JWST observatories. Specifically,
the Chandra survey of Westerlund 1 consists of 36 new ACIS-I observations, nearly co-pointed, for a total exposure time of 1 Msec.
Additionally, we included 8 archival Chandra/ACIS-S observations. This paper presents the resulting catalog of X-ray sources within
and around Westerlund 1. Sources were detected by combining various existing methods, and photon extraction and source validation
were carried out using the ACIS-Extract software.
Results. The EWOCS X-ray catalog comprises 5963 validated sources out of the 9420 initially provided to ACIS-Extract, reaching a
photon flux threshold of approximately 2 × 10−8 photons cm−2
s
−1
. The X-ray sources exhibit a highly concentrated spatial distribution,
with 1075 sources located within the central 1 arcmin. We have successfully detected X-ray emissions from 126 out of the 166 known
massive stars of the cluster, and we have collected over 71 000 photons from the magnetar CXO J164710.20-455217.
ESPP presentation to EU Waste Water Network, 4th June 2024 “EU policies driving nutrient removal and recycling
and the revised UWWTD (Urban Waste Water Treatment Directive)”
The technology uses reclaimed CO₂ as the dyeing medium in a closed loop process. When pressurized, CO₂ becomes supercritical (SC-CO₂). In this state CO₂ has a very high solvent power, allowing the dye to dissolve easily.
Unlocking the mysteries of reproduction: Exploring fecundity and gonadosomati...AbdullaAlAsif1
The pygmy halfbeak Dermogenys colletei, is known for its viviparous nature, this presents an intriguing case of relatively low fecundity, raising questions about potential compensatory reproductive strategies employed by this species. Our study delves into the examination of fecundity and the Gonadosomatic Index (GSI) in the Pygmy Halfbeak, D. colletei (Meisner, 2001), an intriguing viviparous fish indigenous to Sarawak, Borneo. We hypothesize that the Pygmy halfbeak, D. colletei, may exhibit unique reproductive adaptations to offset its low fecundity, thus enhancing its survival and fitness. To address this, we conducted a comprehensive study utilizing 28 mature female specimens of D. colletei, carefully measuring fecundity and GSI to shed light on the reproductive adaptations of this species. Our findings reveal that D. colletei indeed exhibits low fecundity, with a mean of 16.76 ± 2.01, and a mean GSI of 12.83 ± 1.27, providing crucial insights into the reproductive mechanisms at play in this species. These results underscore the existence of unique reproductive strategies in D. colletei, enabling its adaptation and persistence in Borneo's diverse aquatic ecosystems, and call for further ecological research to elucidate these mechanisms. This study lends to a better understanding of viviparous fish in Borneo and contributes to the broader field of aquatic ecology, enhancing our knowledge of species adaptations to unique ecological challenges.
Travis Hills' Endeavors in Minnesota: Fostering Environmental and Economic Pr...Travis Hills MN
Travis Hills of Minnesota developed a method to convert waste into high-value dry fertilizer, significantly enriching soil quality. By providing farmers with a valuable resource derived from waste, Travis Hills helps enhance farm profitability while promoting environmental stewardship. Travis Hills' sustainable practices lead to cost savings and increased revenue for farmers by improving resource efficiency and reducing waste.
Authoring a personal GPT for your research and practice: How we created the Q...Leonel Morgado
Thematic analysis in qualitative research is a time-consuming and systematic task, typically done using teams. Team members must ground their activities on common understandings of the major concepts underlying the thematic analysis, and define criteria for its development. However, conceptual misunderstandings, equivocations, and lack of adherence to criteria are challenges to the quality and speed of this process. Given the distributed and uncertain nature of this process, we wondered if the tasks in thematic analysis could be supported by readily available artificial intelligence chatbots. Our early efforts point to potential benefits: not just saving time in the coding process but better adherence to criteria and grounding, by increasing triangulation between humans and artificial intelligence. This tutorial will provide a description and demonstration of the process we followed, as two academic researchers, to develop a custom ChatGPT to assist with qualitative coding in the thematic data analysis process of immersive learning accounts in a survey of the academic literature: QUAL-E Immersive Learning Thematic Analysis Helper. In the hands-on time, participants will try out QUAL-E and develop their ideas for their own qualitative coding ChatGPT. Participants that have the paid ChatGPT Plus subscription can create a draft of their assistants. The organizers will provide course materials and slide deck that participants will be able to utilize to continue development of their custom GPT. The paid subscription to ChatGPT Plus is not required to participate in this workshop, just for trying out personal GPTs during it.
Describing and Interpreting an Immersive Learning Case with the Immersion Cub...Leonel Morgado
Current descriptions of immersive learning cases are often difficult or impossible to compare. This is due to a myriad of different options on what details to include, which aspects are relevant, and on the descriptive approaches employed. Also, these aspects often combine very specific details with more general guidelines or indicate intents and rationales without clarifying their implementation. In this paper we provide a method to describe immersive learning cases that is structured to enable comparisons, yet flexible enough to allow researchers and practitioners to decide which aspects to include. This method leverages a taxonomy that classifies educational aspects at three levels (uses, practices, and strategies) and then utilizes two frameworks, the Immersive Learning Brain and the Immersion Cube, to enable a structured description and interpretation of immersive learning cases. The method is then demonstrated on a published immersive learning case on training for wind turbine maintenance using virtual reality. Applying the method results in a structured artifact, the Immersive Learning Case Sheet, that tags the case with its proximal uses, practices, and strategies, and refines the free text case description to ensure that matching details are included. This contribution is thus a case description method in support of future comparative research of immersive learning cases. We then discuss how the resulting description and interpretation can be leveraged to change immersion learning cases, by enriching them (considering low-effort changes or additions) or innovating (exploring more challenging avenues of transformation). The method holds significant promise to support better-grounded research in immersive learning.
The binding of cosmological structures by massless topological defectsSérgio Sacani
Assuming spherical symmetry and weak field, it is shown that if one solves the Poisson equation or the Einstein field
equations sourced by a topological defect, i.e. a singularity of a very specific form, the result is a localized gravitational
field capable of driving flat rotation (i.e. Keplerian circular orbits at a constant speed for all radii) of test masses on a thin
spherical shell without any underlying mass. Moreover, a large-scale structure which exploits this solution by assembling
concentrically a number of such topological defects can establish a flat stellar or galactic rotation curve, and can also deflect
light in the same manner as an equipotential (isothermal) sphere. Thus, the need for dark matter or modified gravity theory is
mitigated, at least in part.
The debris of the ‘last major merger’ is dynamically youngSérgio Sacani
The Milky Way’s (MW) inner stellar halo contains an [Fe/H]-rich component with highly eccentric orbits, often referred to as the
‘last major merger.’ Hypotheses for the origin of this component include Gaia-Sausage/Enceladus (GSE), where the progenitor
collided with the MW proto-disc 8–11 Gyr ago, and the Virgo Radial Merger (VRM), where the progenitor collided with the
MW disc within the last 3 Gyr. These two scenarios make different predictions about observable structure in local phase space,
because the morphology of debris depends on how long it has had to phase mix. The recently identified phase-space folds in Gaia
DR3 have positive caustic velocities, making them fundamentally different than the phase-mixed chevrons found in simulations
at late times. Roughly 20 per cent of the stars in the prograde local stellar halo are associated with the observed caustics. Based
on a simple phase-mixing model, the observed number of caustics are consistent with a merger that occurred 1–2 Gyr ago.
We also compare the observed phase-space distribution to FIRE-2 Latte simulations of GSE-like mergers, using a quantitative
measurement of phase mixing (2D causticality). The observed local phase-space distribution best matches the simulated data
1–2 Gyr after collision, and certainly not later than 3 Gyr. This is further evidence that the progenitor of the ‘last major merger’
did not collide with the MW proto-disc at early times, as is thought for the GSE, but instead collided with the MW disc within
the last few Gyr, consistent with the body of work surrounding the VRM.
Remote Sensing and Computational, Evolutionary, Supercomputing, and Intellige...University of Maribor
Slides from talk:
Aleš Zamuda: Remote Sensing and Computational, Evolutionary, Supercomputing, and Intelligent Systems.
11th International Conference on Electrical, Electronics and Computer Engineering (IcETRAN), Niš, 3-6 June 2024
Inter-Society Networking Panel GRSS/MTT-S/CIS Panel Session: Promoting Connection and Cooperation
https://www.etran.rs/2024/en/home-english/
Remote Sensing and Computational, Evolutionary, Supercomputing, and Intellige...
MCQMC_talk_Chiheb_Ben_hammouda.pdf
1. Quasi-Monte Carlo & Multilevel Monte Carlo Methods Combined with Numerical Smoothing for Efficient Option Pricing and Density Estimation
Chiheb Ben Hammouda
Christian Bayer, Raúl Tempone
Center for Uncertainty Quantification
15th International Conference on Monte Carlo and Quasi-Monte Carlo Methods in Scientific Computing, Linz, Austria
July 20, 2022
2. Related Manuscripts to the Talk
Christian Bayer, Chiheb Ben Hammouda, and Raúl Tempone. "Numerical Smoothing with Hierarchical Adaptive Sparse Grids and Quasi-Monte Carlo Methods for Efficient Option Pricing". In: arXiv preprint arXiv:2111.01874 (2021), to appear in Quantitative Finance (2022).
Christian Bayer, Chiheb Ben Hammouda, and Raúl Tempone. "Multilevel Monte Carlo combined with numerical smoothing for robust and efficient option pricing and density estimation". In: arXiv preprint arXiv:2003.05708 (2022).
3. 1 Motivation and Framework
2 The Numerical Smoothing Idea
3 The Smoothness Theorem
4 Error Discussion for rQMC and MLMC combined with Numerical
Smoothing
5 Numerical Experiments and Results
6 Conclusions
4. Option Pricing as a High-Dimensional, Non-Smooth Numerical Integration Problem
Option pricing often corresponds to high-dimensional integration problems with non-smooth integrands.
▸ High-dimensional: (i) time-discretization of an SDE; (ii) a large number of underlying assets; (iii) path dependence on the whole trajectory of the underlying.
▸ Non-smooth: e.g., call options $(S_T - K)^+$, digital options $\mathbf{1}_{S_T > K}$, . . .
Methods like quasi-Monte Carlo (QMC) or (adaptive) sparse grids quadrature (ASGQ) critically rely on the smoothness of the integrand and the dimension of the problem.
Monte Carlo (MC) does not care (in terms of convergence rates), BUT multilevel Monte Carlo (MLMC) does.
Figure 1.1: Integrand of a two-dimensional European basket option (Black–Scholes model)
5. Framework
Approximate efficiently $E[g(X(T))]$.
Given a (smooth) $\varphi: \mathbb{R}^d \to \mathbb{R}$, the payoff $g: \mathbb{R}^d \to \mathbb{R}$ has either jumps or kinks:
▸ Hockey-stick functions: $g(x) = \max(\varphi(x), 0)$ (put or call payoffs, . . . ).
▸ Indicator functions: $g(x) = \mathbf{1}_{\varphi(x) \ge 0}$ (digital options, financial Greeks, distribution functions, . . . ).
▸ Dirac delta functions: $g(x) = \delta_{\varphi(x) = 0}$ (density estimation, financial Greeks, . . . ).
The process $X$ is approximated (via a discretization scheme) by $\overline{X}$, e.g. (for illustration):
▸ One/multi-dimensional geometric Brownian motion (GBM) process.
▸ Multi-dimensional stochastic volatility model, e.g., the Heston model
$$dX_t = \mu X_t\,dt + \sqrt{v_t}\,X_t\,dW_t^X, \qquad dv_t = \kappa(\theta - v_t)\,dt + \xi\sqrt{v_t}\,dW_t^v,$$
where $(W_t^X, W_t^v)$ are correlated Wiener processes with correlation $\rho$.
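The Heston dynamics above can be simulated with a plain Euler–Maruyama scheme. The following minimal sketch (illustrative parameter values of my choosing, not the talk's; the variance is "fully truncated" with $v_+$ inside the square roots, a common fix to keep the scheme well defined when $v$ dips below zero) produces terminal values and a crude MC call price:

```python
import numpy as np

def heston_euler_paths(x0=100.0, v0=0.04, mu=0.0, kappa=1.0, theta=0.04,
                       xi=0.5, rho=-0.7, T=1.0, N=64, M=10000, seed=0):
    """Euler-Maruyama simulation of the Heston model with full truncation
    of the variance process inside the square roots."""
    rng = np.random.default_rng(seed)
    dt = T / N
    X = np.full(M, x0)
    v = np.full(M, v0)
    for _ in range(N):
        Z1 = rng.standard_normal(M)
        Z2 = rng.standard_normal(M)
        dWx = np.sqrt(dt) * Z1                                       # W^X increment
        dWv = np.sqrt(dt) * (rho * Z1 + np.sqrt(1.0 - rho**2) * Z2)  # correlated W^v
        vp = np.maximum(v, 0.0)                                      # full truncation
        X = X + mu * X * dt + np.sqrt(vp) * X * dWx
        v = v + kappa * (theta - v) * dt + xi * np.sqrt(vp) * dWv
    return X

XT = heston_euler_paths()
mc_call = np.mean(np.maximum(XT - 100.0, 0.0))  # crude MC price of (X_T - K)^+
```

The non-smooth payoff $\max(X_T - K, 0)$ applied to these paths is exactly the kind of integrand whose kink the numerical smoothing of the later slides targets.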
7. Randomized QMC (rQMC)
A (rank-1) lattice rule (Sloan 1985; Nuyens 2014) with $n$ points:
$$Q_n(g) := \frac{1}{n} \sum_{k=0}^{n-1} (g \circ F^{-1})\left(\frac{k\mathbf{z} \bmod n}{n}\right),$$
where $\mathbf{z} = (z_1, \dots, z_d) \in \mathbb{N}^d$ is the generating vector and $F^{-1}(\cdot)$ is the inverse of the standard normal cumulative distribution function.
A randomly shifted lattice rule:
$$Q_{n,q}(g) = \frac{1}{q} \sum_{i=0}^{q-1} Q_n^{(i)}(g) = \frac{1}{q} \sum_{i=0}^{q-1} \left( \frac{1}{n} \sum_{k=0}^{n-1} (g \circ F^{-1})\left(\frac{(k\mathbf{z} + \Delta^{(i)}) \bmod n}{n}\right) \right),$$
where $\{\Delta^{(i)}\}_{i=0}^{q-1}$ are independent random shifts, and $M_{\mathrm{rQMC}} = q \times n$.
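The randomly shifted rule above is short to sketch in code. In the snippet below the generating vector $\mathbf{z}=(1,377)$ with $n=610$ is a hypothetical choice (a two-dimensional Fibonacci lattice, not one from the talk), the toy integrand is smooth and periodic with known integral 1, and the composition with $F^{-1}$ used for Gaussian integrands is omitted for brevity:

```python
import numpy as np

def shifted_lattice_estimate(f, z, n, q, seed=0):
    """Randomly shifted rank-1 lattice rule on [0,1)^d: returns the mean over
    q independent shifts together with the shift-based standard error."""
    rng = np.random.default_rng(seed)
    d = len(z)
    k = np.arange(n)[:, None]                     # (n, 1)
    base = (k * np.asarray(z)[None, :] % n) / n   # unshifted lattice points
    vals = []
    for _ in range(q):
        shift = rng.random(d)                     # one uniform shift per randomization
        pts = (base + shift) % 1.0                # shifted points stay in [0,1)^d
        vals.append(np.mean(f(pts)))
    vals = np.array(vals)
    return vals.mean(), vals.std(ddof=1) / np.sqrt(q)

# Smooth periodic integrand with integral exactly 1 over [0,1]^2
f = lambda x: np.prod(1.0 + np.sin(2.0 * np.pi * x), axis=1)
est, se = shifted_lattice_estimate(f, z=(1, 377), n=610, q=16)
```

For this smooth periodic integrand the lattice rule is essentially exact, while the $q$ random shifts give the practical error estimate mentioned on the previous slide; a kinked integrand would destroy this fast convergence, which is the motivation for smoothing.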
8. How Does Regularity Affect QMC Performance?
(Niederreiter 1992): convergence rate of $O(M_{\mathrm{rQMC}}^{-1/2-\delta})$, where $0 \le \delta \le 1/2$ is related to the degree of regularity of the integrand $g$.
(Dick, Kuo, and Sloan 2013): a convergence rate of $O(M_{\mathrm{rQMC}}^{-1})$ can be observed for the lattice rule if the integrand $g$ belongs to $W_{d,\gamma}$, equipped with the norm
$$\|g\|^2_{W_{d,\gamma}} = \sum_{\alpha \subseteq \{1:d\}} \frac{1}{\gamma_\alpha} \int_{[0,1]^{|\alpha|}} \left( \int_{[0,1]^{d-|\alpha|}} \frac{\partial^{|\alpha|}}{\partial y_\alpha} g(y)\, dy_{-\alpha} \right)^2 dy_\alpha,$$
where $y_\alpha := (y_j)_{j\in\alpha}$ and $y_{-\alpha} := (y_j)_{j\in\{1:d\}\setminus\alpha}$.
(Nuyens and Suzuki 2021): higher mixed smoothness and periodicity requirements can lead to higher-order convergence rates when using scaled lattice rules.
Notation
$W_{d,\gamma}$: a $d$-dimensional weighted Sobolev space of functions with square-integrable mixed first derivatives; $\gamma := \{\gamma_\alpha > 0 : \alpha \subseteq \{1,2,\dots,d\}\}$ a given collection of weights; $d$ the dimension of the problem.
9. Multilevel Monte Carlo (MLMC)
(Heinrich 2001; Kebaier et al. 2005; Giles 2008b)
Aim: Improve MC complexity when estimating $E[g(X(T))]$.
Setting:
▸ A hierarchy of nested meshes of $[0,T]$, indexed by $\{\ell\}_{\ell=0}^{L}$.
▸ $\Delta t_\ell = K^{-\ell}\Delta t_0$: the time-step size for levels $\ell \ge 1$; $K > 1$, $K \in \mathbb{N}$.
▸ $\overline{X}_\ell := \overline{X}^{\Delta t_\ell}$: the approximate process generated using a step size of $\Delta t_\ell$.
MLMC idea
$$E[g(\overline{X}_L(T))] = E[g(\overline{X}_0(T))] + \sum_{\ell=1}^{L} E[g(\overline{X}_\ell(T)) - g(\overline{X}_{\ell-1}(T))] \quad (1)$$
$\mathrm{Var}[g(\overline{X}_0(T))] \gg \mathrm{Var}[g(\overline{X}_\ell(T)) - g(\overline{X}_{\ell-1}(T))] \searrow$ as $\ell \nearrow$, hence $M_0 \gg M_\ell \searrow$ as $\ell \nearrow$.
MLMC estimator:
$$\hat{Q} := \sum_{\ell=0}^{L} \hat{Q}_\ell, \qquad \hat{Q}_0 := \frac{1}{M_0}\sum_{m_0=1}^{M_0} g(\overline{X}_{0,[m_0]}(T)); \qquad \hat{Q}_\ell := \frac{1}{M_\ell}\sum_{m_\ell=1}^{M_\ell} \left( g(\overline{X}_{\ell,[m_\ell]}(T)) - g(\overline{X}_{\ell-1,[m_\ell]}(T)) \right), \quad 1 \le \ell \le L.$$
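The telescoping identity (1) can be sketched compactly for driftless GBM with a Lipschitz call payoff. This is my toy illustration (hypothetical parameters, a fixed geometric decay of the samples per level rather than the optimal $M_\ell^*$ of a later slide); the key point is that fine and coarse paths on each level share the same Brownian increments:

```python
import numpy as np

def mlmc_gbm_call(x0=100.0, K_strike=100.0, sigma=0.2, T=1.0,
                  L=5, M0=200000, seed=0):
    """Minimal MLMC estimator of E[(X_T - K)^+] for driftless GBM with the
    Euler scheme; level l uses 2^l steps, coupled via shared Brownian paths."""
    rng = np.random.default_rng(seed)
    payoff = lambda x: np.maximum(x - K_strike, 0.0)
    total = 0.0
    for l in range(L + 1):
        Ml = max(int(M0 * 2.0 ** (-1.5 * l)), 100)  # geometric sample decay
        Nf = 2 ** l
        dW = np.sqrt(T / Nf) * rng.standard_normal((Ml, Nf))
        Xf = np.full(Ml, x0)
        for k in range(Nf):                         # fine Euler path
            Xf = Xf + sigma * Xf * dW[:, k]
        if l == 0:
            total += np.mean(payoff(Xf))
        else:
            Xc = np.full(Ml, x0)
            dWc = dW[:, 0::2] + dW[:, 1::2]         # coarse increments from fine ones
            for k in range(Nf // 2):                # coarse path, same Brownian path
                Xc = Xc + sigma * Xc * dWc[:, k]
            total += np.mean(payoff(Xf) - payoff(Xc))
    return total

estimate = mlmc_gbm_call()  # Black-Scholes reference for these parameters is ~7.97
```

Because this payoff is Lipschitz, the level variances decay and the estimator is cheap; the next slides explain why a digital payoff breaks exactly this behavior.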
10. How Does Regularity Affect MLMC Complexity?
Complexity analysis for MLMC (Cliffe, Giles, Scheichl, and Teckentrup 2011):
$$O\left(\mathrm{TOL}^{-2-\max\left(0,\, \frac{\gamma-\beta}{\alpha}\right)} \log(\mathrm{TOL})^{2 \times \mathbf{1}_{\{\beta=\gamma\}}}\right) \quad (2)$$
i) Weak rate: $|E[g(\overline{X}_\ell(T)) - g(X(T))]| \le c_1 2^{-\alpha\ell}$.
ii) Variance decay rate: $V_\ell := \mathrm{Var}[g(\overline{X}_\ell(T)) - g(\overline{X}_{\ell-1}(T))] \le c_2 2^{-\beta\ell}$.
iii) Work growth rate: $W_\ell \le c_3 2^{\gamma\ell}$ ($W_\ell$: expected cost).
For Euler–Maruyama ($\gamma = 1$): if $g$ is Lipschitz $\Rightarrow V_\ell \simeq \Delta t_\ell$ due to strong rate $1/2$, that is, $\beta = \gamma$. Otherwise $\beta < \gamma \Rightarrow$ worst-case complexity.
Higher-order schemes, e.g., the Milstein scheme, may lead to better complexities even for non-Lipschitz observables (Giles, Debrabant, and Rößler 2013; Giles 2015). However,
▸ For moderate/high-dimensional dynamics, coupling issues may arise and the scheme becomes computationally expensive.
▸ The robustness of the MLMC estimator deteriorates because the kurtosis explodes as $\Delta t_\ell$ decreases: $O(\Delta t_\ell^{-1})$ for Milstein, compared with $O(\Delta t_\ell^{-1/2})$ for Euler–Maruyama without smoothing (Giles, Nagapetyan, and Ritter 2015) and $O(1)$ in (Bayer, Ben Hammouda, and Tempone 2022) (see next slides).
11. How Does Regularity Affect MLMC Robustness?
For non-Lipschitz payoffs: the kurtosis, $\kappa_\ell := \frac{E[(Y_\ell - E[Y_\ell])^4]}{(\mathrm{Var}[Y_\ell])^2}$, is of $O(\Delta t_\ell^{-1/2})$ for the Euler scheme and $O(\Delta t_\ell^{-1})$ for the Milstein scheme.
Large kurtosis problem (Giles 2015; Ben Hammouda, Moraes, and Tempone 2017; Ben Hammouda, Ben Rached, and Tempone 2020) ⇒ expensive cost for reliable/robust estimates of sample statistics.
Why is large kurtosis bad?
$$\sigma_{S^2}(Y_\ell) = \frac{\mathrm{Var}[Y_\ell]}{\sqrt{M_\ell}} \sqrt{(\kappa_\ell - 1) + \frac{2}{M_\ell - 1}}; \qquad \text{one needs } M_\ell \gg \kappa_\ell.$$
Why are accurate variance estimates, $V_\ell = \mathrm{Var}[Y_\ell]$, important? They determine the optimal number of samples per level:
$$M_\ell^* \propto \sqrt{V_\ell W_\ell^{-1}} \sum_{\ell=0}^{L} \sqrt{V_\ell W_\ell}.$$
Notation
$Y_\ell := g(\overline{X}_\ell(T)) - g(\overline{X}_{\ell-1}(T))$; $\sigma_{S^2}(Y_\ell)$: standard deviation of the sample variance of $Y_\ell$; $M_\ell^*$: optimal number of samples per level; $W_\ell$: cost per sample path.
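The kurtosis blow-up is easy to observe numerically. The sketch below (my toy setup: digital payoff $\mathbf{1}_{X_T > K}$, driftless GBM, Euler scheme, hypothetical parameters) computes the sample kurtosis of $Y_\ell$ at two levels; since $Y_\ell$ is essentially $\pm 1$ on a rare crossing event of probability $p_\ell$ and $0$ otherwise, $\kappa_\ell \approx 1/p_\ell$ grows as $\Delta t_\ell$ shrinks:

```python
import numpy as np

def level_kurtosis(l, x0=100.0, K_strike=100.0, sigma=0.2, T=1.0,
                   M=100000, seed=0):
    """Sample kurtosis of Y_l = g(Xbar_l(T)) - g(Xbar_{l-1}(T)) for the
    digital payoff g(x) = 1_{x > K}, Euler scheme on driftless GBM, with
    the coarse path driven by pairwise-summed fine Brownian increments."""
    rng = np.random.default_rng(seed)
    Nf = 2 ** l
    dW = np.sqrt(T / Nf) * rng.standard_normal((M, Nf))
    Xf = np.full(M, x0)
    for k in range(Nf):
        Xf = Xf + sigma * Xf * dW[:, k]
    Xc = np.full(M, x0)
    dWc = dW[:, 0::2] + dW[:, 1::2]
    for k in range(Nf // 2):
        Xc = Xc + sigma * Xc * dWc[:, k]
    Y = (Xf > K_strike).astype(float) - (Xc > K_strike).astype(float)
    Yc = Y - Y.mean()
    return float(np.mean(Yc**4) / np.mean(Yc**2) ** 2)

k3, k6 = level_kurtosis(3), level_kurtosis(6)  # kurtosis grows with the level
```

With many more samples than $\kappa_\ell$ the variance estimates stay usable; without smoothing, ever finer levels eventually violate $M_\ell \gg \kappa_\ell$.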
12. Alternative Smoothing Techniques for QMC and ASGQ
Bias-free mollification (e.g., (Bayer, Siebenmorgen, and Tempone 2018; Bayer, Ben Hammouda, and Tempone 2020)): taking conditional expectations or exact integration over a subset of the integration variables;
(-) Not always possible.
Mollification, e.g., by convolution with a Gaussian kernel or manually;
(-) Additional error that may scale exponentially with the dimension.
Mapping the problem to the frequency space (e.g., (Bayer, Ben Hammouda, Papapantoleon, Samet, and Tempone 2022)): better regularity compared to the physical space;
(-) Applicable only when the Fourier transform of the density function is available and cheap to compute.
Preintegration techniques (e.g., (Griewank, Kuo, Leövey, and Sloan 2018; Gilbert, Kuo, and Sloan 2021)).
13. Alternative Smoothing Techniques for MLMC
Implicit smoothing based on conditional expectation combined with the Milstein scheme (e.g., (Giles, Debrabant, and Rößler 2013));
(-) Not always possible, and inherits the drawbacks of the Milstein scheme.
Parametric smoothing: carefully constructing a regularized version of the observable (e.g., (Giles, Nagapetyan, and Ritter 2015)).
Malliavin calculus integration by parts (e.g., (Altmayer and Neuenkirch 2015)): splitting the payoff function into a smooth component (treated by standard MLMC) and a compactly supported discontinuous part (treated via Malliavin MLMC).
Adaptive sampling (e.g., (Haji-Ali, Spence, and Teckentrup 2021)).
14. 1 Motivation and Framework
2 The Numerical Smoothing Idea
3 The Smoothness Theorem
4 Error Discussion for rQMC and MLMC combined with Numerical
Smoothing
5 Numerical Experiments and Results
6 Conclusions
15. Numerical Smoothing Steps
Guiding example: $g: \mathbb{R}^d \to \mathbb{R}$ payoff function; $\overline{X}^{\Delta t}_T$ ($\Delta t = T/N$) Euler discretization of a $d$-dimensional SDE, e.g., $dX^{(i)}_t = a_i(X_t)\,dt + \sum_{j=1}^{d} b_{ij}(X_t)\,dW^{(j)}_t$, where $\{W^{(j)}\}_{j=1}^{d}$ are standard Brownian motions.
$$\overline{X}^{\Delta t}_T = \overline{X}^{\Delta t}_T\underbrace{(\Delta W^{(1)}_1, \dots, \Delta W^{(1)}_N, \dots, \Delta W^{(d)}_1, \dots, \Delta W^{(d)}_N)}_{:=\Delta W}. \qquad E[g(\overline{X}^{\Delta t}_T)] = ?$$
1 Identify a hierarchical representation of the integration variables ⇒ locate the discontinuity in a smaller-dimensional space and create more anisotropy.
(a) $\overline{X}^{\Delta t}_T(\Delta W) \equiv \overline{X}^{\Delta t}_T(Z)$, $Z = (Z_i)_{i=1}^{dN} \sim \mathcal{N}(0, I_{dN})$, such that "$Z_1 := (Z^{(1)}_1, \dots, Z^{(d)}_1)$ (coarse random variables) substantially contribute even for $\Delta t \to 0$". E.g., Brownian bridges of $\Delta W$ / Haar wavelet construction of $W$.
(b) Design of a sub-optimal smoothing direction ($A$: a rotation matrix that is easy to construct and whose structure depends on the payoff $g$): $Y = A Z_1$. E.g., for an arithmetic basket option, a suitable $A$ is a rotation matrix with the first row leading to $Y_1 = \sum_{i=1}^{d} Z^{(i)}_1$ up to rescaling, without any constraint on the remaining rows (using the Gram–Schmidt procedure).
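The Brownian-bridge construction referenced in step 1(a) can be sketched as follows (a standard construction, assuming $N$ is a power of two and a uniform grid on $[0,T]$): the first variable Z[0] alone determines $W(T)$, which is exactly the anisotropy that the hierarchical representation exploits.

```python
import numpy as np

def brownian_bridge_increments(Z, T=1.0):
    """Map i.i.d. N(0,1) variables Z (length N, a power of two) to Brownian
    increments on a uniform grid: Z[0] fixes W(T) (coarsest scale), the
    remaining entries refine interval midpoints hierarchically."""
    N = len(Z)
    W = np.zeros(N + 1)                  # W on the grid t_k = k * T / N
    W[N] = np.sqrt(T) * Z[0]             # terminal value from the coarse variable
    h, j = N, 1
    while h > 1:
        for left in range(0, N, h):      # condition each midpoint on the endpoints
            right, mid = left + h, left + h // 2
            W[mid] = 0.5 * (W[left] + W[right]) + np.sqrt(h * T / (4 * N)) * Z[j]
            j += 1
        h //= 2
    return np.diff(W)                    # increments Delta W

rng = np.random.default_rng(0)
dW = brownian_bridge_increments(rng.standard_normal(8))
```

Setting all refinement variables to zero yields the straight-line path determined by Z[0] alone, confirming that the coarse variable carries the terminal value regardless of $\Delta t$.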
17. Numerical Smoothing Steps
2 Pre-integrate over the smoothing direction:
$$E[g(X(T))] \approx E[g(\overline{X}^{\Delta t}(T))] = \int_{\mathbb{R}^{d\times N}} G(\mathbf{z})\, \rho_{d\times N}(\mathbf{z})\, dz^{(1)}_1 \dots dz^{(1)}_N \dots dz^{(d)}_1 \dots dz^{(d)}_N$$
$$= \int_{\mathbb{R}^{dN-1}} I(y_{-1}, z^{(1)}_{-1}, \dots, z^{(d)}_{-1})\, \rho_{d-1}(y_{-1})\, dy_{-1}\, \rho_{dN-d}(z^{(1)}_{-1}, \dots, z^{(d)}_{-1})\, dz^{(1)}_{-1} \dots dz^{(d)}_{-1}$$
$$= E[I(Y_{-1}, Z^{(1)}_{-1}, \dots, Z^{(d)}_{-1})] \approx E[\overline{I}(Y_{-1}, Z^{(1)}_{-1}, \dots, Z^{(d)}_{-1})]. \quad (3)$$
How to obtain $I(\cdot)$ and $\overline{I}(\cdot)$? ⇒ See next slide.
Notation
$G := g \circ \Phi \circ (\psi^{(1)}, \dots, \psi^{(d)})$ maps the $N \times d$ Gaussian random inputs to $g(\overline{X}^{\Delta t}(T))$, where
▸ $\psi^{(j)}: (Z^{(j)}_1, \dots, Z^{(j)}_N) \mapsto (B^{(j)}_1, \dots, B^{(j)}_N)$ denotes the mapping of the Brownian bridge/Haar wavelet construction.
▸ $\Phi: (\Delta t, B) \mapsto \overline{X}^{\Delta t}(T)$ denotes the mapping of the time-stepping scheme.
$x_{-j}$ denotes the vector of length $d-1$ of all the variables other than $x_j$ in $x \in \mathbb{R}^d$.
$\rho_{d\times N}(\mathbf{z}) = \frac{1}{(2\pi)^{d\times N/2}} e^{-\frac{1}{2}\mathbf{z}^T\mathbf{z}}$.
18. Numerical Smoothing Steps
3 Approximate the inner one-dimensional integral:
$$I(y_{-1}, z^{(1)}_{-1}, \dots, z^{(d)}_{-1}) = \int_{\mathbb{R}} G(y_1, y_{-1}, z^{(1)}_{-1}, \dots, z^{(d)}_{-1})\, \rho_1(y_1)\, dy_1$$
$$= \int_{-\infty}^{y^*_1} G(y_1, y_{-1}, z^{(1)}_{-1}, \dots, z^{(d)}_{-1})\, \rho_1(y_1)\, dy_1 + \int_{y^*_1}^{+\infty} G(y_1, y_{-1}, z^{(1)}_{-1}, \dots, z^{(d)}_{-1})\, \rho_1(y_1)\, dy_1$$
$$\approx \overline{I}(y_{-1}, z^{(1)}_{-1}, \dots, z^{(d)}_{-1}) := \sum_{k=0}^{M_{\mathrm{Lag}}} \eta_k\, G(\zeta_k(\overline{y}^*_1), y_{-1}, z^{(1)}_{-1}, \dots, z^{(d)}_{-1}), \quad (4)$$
where
▸ $y^*_1(y_{-1}, z^{(1)}_{-1}, \dots, z^{(d)}_{-1})$: the exact discontinuity location, such that $\varphi(\overline{X}^{\Delta t}(T)) = P(y^*_1; y_{-1}, z^{(1)}_{-1}, \dots, z^{(d)}_{-1}) = 0$;
▸ $\overline{y}^*_1(y_{-1}, z^{(1)}_{-1}, \dots, z^{(d)}_{-1})$: the approximated discontinuity location, obtained via root finding;
▸ $M_{\mathrm{Lag}}$: the number of Laguerre quadrature points $\zeta_k \in \mathbb{R}$, with corresponding weights $\eta_k$.
4 Compute the remaining $(dN-1)$-dimensional integral in (3) by ASGQ, QMC, or MLMC.
Our approach differs from previous MLMC techniques, which smooth out only at the final step with respect to $\Delta W$ ⇒ their smoothing effect vanishes as $\Delta t \to 0$.
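Steps 3–4 can be illustrated in one dimension. The sketch below is my toy example with hypothetical parameters, not the authors' implementation: it prices a digital option $E[\mathbf{1}_{S_T > K}]$ under one-step GBM by locating the discontinuity $y^*$ with bisection and then pre-integrating the discontinuous direction with Gauss–Laguerre quadrature, which recovers the (here analytically known) Gaussian tail probability:

```python
import math
import numpy as np

def smoothed_digital_price(x0=100.0, K=110.0, sigma=0.2, T=1.0, M_lag=64):
    """Numerical smoothing in 1D: (i) root-find the discontinuity y* of the
    digital payoff in the Gaussian input, (ii) pre-integrate the payoff
    times the Gaussian density over [y*, inf) by Gauss-Laguerre quadrature."""
    S = lambda y: x0 * math.exp(-0.5 * sigma**2 * T + sigma * math.sqrt(T) * y)
    # (i) bisection for S(y*) = K; S is increasing in y
    lo, hi = -10.0, 10.0
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        if S(mid) < K:
            lo = mid
        else:
            hi = mid
    y_star = 0.5 * (lo + hi)
    # (ii) int_{y*}^{inf} phi(y) dy with the substitution y = y* + x, so the
    # integrand matches the e^{-x} Gauss-Laguerre weight
    x, w = np.polynomial.laguerre.laggauss(M_lag)
    vals = np.exp(-0.5 * (y_star + x) ** 2 + x) / math.sqrt(2.0 * math.pi)
    return float(np.sum(w * vals))

price = smoothed_digital_price()
# closed-form tail probability for comparison
exact = 0.5 * math.erfc((math.log(110.0 / 100.0) + 0.5 * 0.2**2)
                        / (0.2 * math.sqrt(2.0)))
```

The pre-integrated quantity is a smooth function of the remaining variables (here there are none), which is what makes ASGQ/QMC/MLMC efficient in step 4.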
19. Extending Numerical Smoothing for Multiple Discontinuities
Multiple discontinuities arise due to the payoff structure or the use of Richardson extrapolation.
For $R$ different ordered roots, $\{y^*_i\}_{i=1}^{R}$, the smoothed integrand is
$$I(y_{-1}, z^{(1)}_{-1}, \dots, z^{(d)}_{-1}) = \int_{-\infty}^{y^*_1} G(y_1, y_{-1}, z^{(1)}_{-1}, \dots, z^{(d)}_{-1})\, \rho_1(y_1)\, dy_1 + \int_{y^*_R}^{+\infty} G(y_1, y_{-1}, z^{(1)}_{-1}, \dots, z^{(d)}_{-1})\, \rho_1(y_1)\, dy_1 + \sum_{i=1}^{R-1} \int_{y^*_i}^{y^*_{i+1}} G(y_1, y_{-1}, z^{(1)}_{-1}, \dots, z^{(d)}_{-1})\, \rho_1(y_1)\, dy_1,$$
and its approximation $\overline{I}$ is given by
$$\overline{I}(y_{-1}, z^{(1)}_{-1}, \dots, z^{(d)}_{-1}) := \sum_{k=0}^{M_{\mathrm{Lag},1}} \eta^{\mathrm{Lag}}_k\, G(\zeta^{\mathrm{Lag}}_{k,1}(\overline{y}^*_1), y_{-1}, z^{(1)}_{-1}, \dots, z^{(d)}_{-1}) + \sum_{k=0}^{M_{\mathrm{Lag},R}} \eta^{\mathrm{Lag}}_k\, G(\zeta^{\mathrm{Lag}}_{k,R}(\overline{y}^*_R), y_{-1}, z^{(1)}_{-1}, \dots, z^{(d)}_{-1}) + \sum_{i=1}^{R-1} \left( \sum_{k=0}^{M_{\mathrm{Leg},i}} \eta^{\mathrm{Leg}}_k\, G(\zeta^{\mathrm{Leg}}_{k,i}(\overline{y}^*_i, \overline{y}^*_{i+1}), y_{-1}, z^{(1)}_{-1}, \dots, z^{(d)}_{-1}) \right),$$
where $\{\overline{y}^*_i\}_{i=1}^{R}$ are the approximated discontinuity locations; $M_{\mathrm{Lag},1}$ and $M_{\mathrm{Lag},R}$ are the numbers of Laguerre quadrature points $\zeta^{\mathrm{Lag}}_{\cdot,\cdot} \in \mathbb{R}$ with corresponding weights $\eta^{\mathrm{Lag}}_{\cdot}$; $\{M_{\mathrm{Leg},i}\}_{i=1}^{R-1}$ are the numbers of Legendre quadrature points $\zeta^{\mathrm{Leg}}_{\cdot,\cdot}$ with corresponding weights $\eta^{\mathrm{Leg}}_{\cdot}$.
$\overline{I}$ can be approximated further depending on (i) the decay of $G \times \rho_1$ in the semi-infinite domains and (ii) how close the roots are to each other.
20. Extending Numerical Smoothing for
Density Estimation
Goal: Approximate the density ρX at u, for a stochastic process X
ρX(u) = E[δ(X − u)], δ is the Dirac delta function.
" Without any smoothing techniques (regularization, kernel
density,. . . ) MC and MLMC fail due to the infinite variance caused
by the singularity of the function δ.
Strategy in (Bayer, Hammouda, and Tempone 2022): Conditioning
with respect to the Brownian bridge
ρ_X(u) = (1/√(2π)) E[ exp(−(Y_1^*(u))²/2) |dY_1^*/du (u)| ]
≈ (1/√(2π)) E[ exp(−(Ȳ_1^*(u))²/2) |dȲ_1^*/du (u)| ],   (5)

Y_1^*(u; Z_{-1}): the exact discontinuity; Ȳ_1^*(u; Z_{-1}): the approximated discontinuity.
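For intuition, in a hedged one-dimensional sketch (a single-step geometric Brownian motion with assumed parameters, where the discontinuity Y_1^*(u) is known in closed form and no conditioning variables remain), formula (5) collapses to a deterministic expression:

```python
import math

def gbm_density(u, x0=1.0, r=0.05, sigma=0.2, T=1.0):
    """Density of X_T = x0*exp((r - sigma^2/2)T + sigma*sqrt(T)*y), y ~ N(0,1),
    via the conditioning formula rho_X(u) = exp(-y*(u)^2/2)/sqrt(2*pi) * |dy*/du|,
    where y*(u) is the closed-form root of X_T(y) = u."""
    ystar = (math.log(u / x0) - (r - 0.5 * sigma**2) * T) / (sigma * math.sqrt(T))
    dystar_du = 1.0 / (u * sigma * math.sqrt(T))          # |dy*/du|
    return math.exp(-0.5 * ystar**2) / math.sqrt(2.0 * math.pi) * dystar_du

# Sanity check: the recovered (lognormal) density integrates to ~1 over a wide grid
grid = [0.05 + 0.001 * i for i in range(5000)]            # u in (0.05, 5.05)
mass = sum(gbm_density(u) * 0.001 for u in grid)
```

In the multi-step setting, the same expression is averaged over the Brownian-bridge variables, and the root is found by Newton iteration instead of in closed form.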
21. Why not Kernel Density Techniques
in Multiple Dimensions?
Similar to approaches based on parametric regularization as in (Giles, Nagapetyan,
and Ritter 2015).
This class of approaches has a pointwise error that increases exponentially with
respect to the dimension of the state vector X.
For a d-dimensional problem, a kernel density estimator with bandwidth matrix H = diag(h, …, h) has

MSE ≈ c_1 M^{−1} h^{−d} + c_2 h^4,   (6)

where M is the number of samples, and c_1 and c_2 are constants.
Our approach in high dimensions: for u ∈ R^d,

ρ_X(u) = E[δ(X − u)] = E[ ρ_d(Y^*(u)) |det(J(u))| ]
≈ E[ ρ_d(Ȳ^*(u)) |det(J(u))| ],   (7)

where
▸ Y^*(u; ·): the exact discontinuity; Ȳ^*(u; ·): the approximated discontinuity;
▸ J is the Jacobian matrix, with J_ij = ∂y_i^*/∂u_j; ρ_d(·) is the multivariate Gaussian density.
Thanks to the exact conditional expectation with respect to the Brownian bridge ⇒ the smoothing error in our approach is insensitive to the dimension of the problem.
22. 1 Motivation and Framework
2 The Numerical Smoothing Idea
3 The Smoothness Theorem
4 Error Discussion for rQMC and MLMC combined with Numerical
Smoothing
5 Numerical Experiments and Results
6 Conclusions
23. Notations
The Haar basis functions ψ_{n,k} of L²([0,1]), with support [2^{−n}k, 2^{−n}(k+1)]:

ψ_{−1}(t) := 1_{[0,1]}(t);   ψ_{n,k}(t) := 2^{n/2} ψ(2^n t − k),   n ∈ N_0, k = 0, …, 2^n − 1,

where ψ(·) is the Haar mother wavelet:

ψ(t) := 1 for 0 ≤ t < 1/2;   −1 for 1/2 ≤ t < 1;   0 else.

Grid D^n := {t_ℓ^n | ℓ = 0, …, 2^{n+1}} with t_ℓ^n := (ℓ / 2^{n+1}) T.

For i.i.d. standard normal rdvs Z_{−1}, Z_{n,k}, n ∈ N_0, k = 0, …, 2^n − 1, we define the (truncated) standard Brownian motion

W_t^N := Z_{−1} Ψ_{−1}(t) + ∑_{n=0}^{N} ∑_{k=0}^{2^n−1} Z_{n,k} Ψ_{n,k}(t),

where Ψ_{−1}(·) and Ψ_{n,k}(·) are the antiderivatives of the Haar basis functions.
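This truncated Lévy-Ciesielski construction can be sketched directly (a minimal stdlib-Python illustration, assuming T = 1; each Ψ_{n,k} is the integrated Haar function, a hat of peak height 2^{−n/2−1}):

```python
def Psi(n, k, t):
    """Antiderivative of the Haar function psi_{n,k}: a hat supported on
    [k*2^-n, (k+1)*2^-n] with peak 2^(-n/2 - 1) at the interval midpoint."""
    left = k * 2.0**-n
    mid = left + 2.0**-(n + 1)
    right = left + 2.0**-n
    if t <= left or t >= right:
        return 0.0
    if t <= mid:
        return 2.0**(n / 2) * (t - left)
    return 2.0**(n / 2) * (right - t)

def W(t, Z_minus1, Z, N):
    """Truncated Brownian motion W_t^N for T = 1; Z[(n, k)] are N(0,1) draws."""
    s = Z_minus1 * t                      # Psi_{-1}(t) = t
    for n in range(N + 1):
        for k in range(2**n):
            s += Z[(n, k)] * Psi(n, k, t)
    return s
```

Since the coefficients are i.i.d. standard normal, Var[W_t^N] = t² + ∑_{n≤N,k} Ψ_{n,k}(t)², which tends to t as N → ∞; at t = 1 every hat vanishes, so W_1^N = Z_{−1}.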
24. Notations
We define the corresponding increments for any function or process F as follows:

∆_ℓ^N F := F(t_{ℓ+1}^N) − F(t_ℓ^N).

The Euler scheme for dX_t = b(X_t) dW_t, X_0 = x ∈ R, along D^N reads

X_{ℓ+1}^N := X_ℓ^N + b(X_ℓ^N) ∆_ℓ^N W,   ℓ = 0, …, 2^N − 1,   (8)

with X_0^N := X_0 = x; for convenience, we also define X_T^N := X_{2^N}^N.

We define the deterministic function H^N : R^{2^{N+1}−1} → R as

H^N(z^N) := E_{Z_{−1}}[ g(X_T^N(Z_{−1}, z^N)) ],   (9)

where Z^N := (Z_{n,k})_{n=0,…,N; k=0,…,2^n−1}.
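A minimal sketch of the Euler scheme (8) (stdlib Python, with an assumed diffusion b(x) = 0.2·x, i.e., driftless geometric Brownian motion, for which E[X_T^N] = X_0 exactly):

```python
import math
import random

def euler_path(x0, b, N, rng):
    """One Euler path of dX = b(X) dW on [0, 1] with 2^N uniform steps."""
    dt = 2.0**-N
    x = x0
    for _ in range(2**N):
        x += b(x) * rng.gauss(0.0, math.sqrt(dt))   # b(X_l) * Delta W
    return x

rng = random.Random(0)
b = lambda x: 0.2 * x                  # assumed diffusion, sigma = 0.2
M = 100_000
mean = sum(euler_path(1.0, b, 3, rng) for _ in range(M)) / M
```

The martingale property makes the sample mean a quick consistency check; in the slides' setting, the same path map X_T^N(Z_{−1}, z^N) is what enters H^N in (9).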
25. The Smoothness Theorem
Theorem 3.1 ((Bayer, Ben Hammouda, and Tempone 2022))
Assume that X_T^N, defined by (8), satisfies Assumptions 6.1 and 6.2 (a). Then, for any p ∈ N and indices n_1, …, n_p and k_1, …, k_p (satisfying 0 ≤ k_j < 2^{n_j}), the function H^N defined in (9) satisfies the following (with constants independent of n_j, k_j) (b):

∂^p H^N / (∂z_{n_1,k_1} ⋯ ∂z_{n_p,k_p}) (z^N) = Ô_P(2^{−∑_{j=1}^{p} n_j / 2}). (c)

In particular, H^N is of class C^∞.

(a) We showed in (Bayer, Ben Hammouda, and Tempone 2022) that Assumptions 6.1 and 6.2 are valid for many pricing models.
(b) The constants increase in p and N.
(c) For sequences of rdvs F_N, we write F_N = Ô_P(1) if there is a rdv C with finite moments of all orders such that, for all N, we have |F_N| ≤ C a.s.
26. Sketch of the Proof
1 We consider a mollified version g_δ of g and the corresponding function H_δ^N (defined by replacing g with g_δ in (9)).
2 We prove that we can interchange the integration and differentiation, which implies

∂H_δ^N(z^N)/∂z_{n,k} = E[ g_δ'(X_T^N(Z_{−1}, z^N)) ∂X_T^N(Z_{−1}, z^N)/∂z_{n,k} ].

3 Multiplying and dividing by ∂X_T^N(Z_{−1}, z^N)/∂y (where y := z_{−1}) and replacing the expectation by an integral w.r.t. the standard normal density, we obtain

∂H_δ^N(z^N)/∂z_{n,k} = ∫_R ∂g_δ(X_T^N(y, z^N))/∂y (∂X_T^N/∂y (y, z^N))^{−1} ∂X_T^N/∂z_{n,k}(y, z^N) (1/√(2π)) e^{−y²/2} dy.   (10)

4 We show that integration by parts is possible, and then we can discard the mollification to obtain the smoothness of H^N, because

∂H^N(z^N)/∂z_{n,k} = −∫_R g(X_T^N(y, z^N)) ∂/∂y [ (∂X_T^N/∂y (y, z^N))^{−1} ∂X_T^N/∂z_{n,k}(y, z^N) (1/√(2π)) e^{−y²/2} ] dy.

5 The proof relies on successively applying the technique above of dividing by ∂X_T^N/∂y and then integrating by parts.
28. Error Discussion for rQMC
Q̂_rQMC: the rQMC estimator.

E[g(X(T))] − Q̂_rQMC
= E[g(X(T))] − E[g(X^{∆t}(T))]   (Error I: bias or weak error, O(∆t))
+ E[I(Y_{−1}, Z_{−1}^{(1)}, …, Z_{−1}^{(d)})] − E[Ī(Y_{−1}, Z_{−1}^{(1)}, …, Z_{−1}^{(d)})]   (Error II: numerical smoothing error, O(M_{Lag}^{−s/2}) + O(TOL_Newton))
+ E[Ī(Y_{−1}, Z_{−1}^{(1)}, …, Z_{−1}^{(d)})] − Q̂_rQMC   (Error III: rQMC error, O(M_{rQMC}^{−1+δ}), δ > 0),   (11)

Notations:
M_rQMC: the number of rQMC points.
ȳ_1^*: the approximated location of the non-smoothness, obtained by Newton iteration ⇒ |y_1^* − ȳ_1^*| = TOL_Newton.
M_Lag: the number of points used by the Laguerre quadrature for the one-dimensional pre-integration step.
s > 0: on the parts of the domain separated by the discontinuity location, the derivatives of G with respect to y_1 are bounded up to order s.
29. Error Discussion for MLMC
Q̂_MLMC: the MLMC estimator.

E[g(X(T))] − Q̂_MLMC
= E[g(X(T))] − E[g(X^{∆t_L}(T))]   (Error I: bias or weak error, O(∆t_L))
+ E[I_L(Y_{−1}, Z_{−1}^{(1)}, …, Z_{−1}^{(d)})] − E[Ī_L(Y_{−1}, Z_{−1}^{(1)}, …, Z_{−1}^{(d)})]   (Error II: numerical smoothing error, O(M_{Lag,L}^{−s/2}) + O(TOL_{Newton,L}))
+ E[Ī_L(Y_{−1}, Z_{−1}^{(1)}, …, Z_{−1}^{(d)})] − Q̂_MLMC   (Error III: MLMC statistical error, O(√(∑_{ℓ=L_0}^{L} √(M_{Lag,ℓ} + log(TOL_{Newton,ℓ}^{−1}))))),   (12)

Note: The level ℓ approximation of Ī in Q̂_MLMC, Ī_ℓ := Ī_ℓ(y_{−1}^ℓ, z_{−1}^{(1),ℓ}, …, z_{−1}^{(d),ℓ}), is computed with: step size ∆t_ℓ; M_{Lag,ℓ} Laguerre quadrature points; and TOL_{Newton,ℓ} as the tolerance of the Newton method at level ℓ.
30. Multilevel Monte Carlo with Numerical Smoothing:
Variance Decay, Complexity and Robustness
Notation: Ī_ℓ := Ī_ℓ(y_{−1}^ℓ, z_{−1}^{(1),ℓ}, …, z_{−1}^{(d),ℓ}): the level ℓ Euler approximation of Ī, computed with: step size ∆t_ℓ; M_{Lag,ℓ} Laguerre quadrature points; TOL_{Newton,ℓ} as the tolerance of the Newton method at level ℓ.

Corollary 4.1 ((Bayer, Hammouda, and Tempone 2022))
Under Assumptions 6.1 and 6.2, V_ℓ := Var[Ī_ℓ − Ī_{ℓ−1}] = O(∆t_ℓ), compared with O(∆t_ℓ^{1/2}) for MLMC without smoothing.

Corollary 4.2 ((Bayer, Hammouda, and Tempone 2022))
Under Assumptions 6.1 and 6.2, the complexity of MLMC combined with numerical smoothing using Euler discretization is O(TOL^{−2} (log(TOL))²), compared with O(TOL^{−2.5}) for MLMC without smoothing.

Corollary 4.3 ((Bayer, Hammouda, and Tempone 2022))
Let κ_ℓ be the kurtosis of the random variable Y_ℓ := Ī_ℓ − Ī_{ℓ−1}; then, under Assumptions 6.1 and 6.2, we obtain κ_ℓ = O(1), compared with O(∆t_ℓ^{−1/2}) for MLMC without smoothing.
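The level-coupling behind Corollary 4.1 can be sketched as follows (a hedged stdlib-Python illustration: a smooth payoff g(x) = x and an assumed diffusion b(x) = 0.2·x stand in for the smoothed integrand Ī; the coarse path reuses the fine Brownian increments, and the sample variance of the level difference should decay roughly like ∆t_ℓ):

```python
import math
import random

def coupled_diff(x0, sigma, level, rng):
    """One sample of P_fine - P_coarse for Euler on dX = sigma*X dW over [0,1]:
    the coarse path (2^(level-1) steps) reuses the fine Brownian increments."""
    n_fine = 2**level
    dt = 1.0 / n_fine
    dW = [rng.gauss(0.0, math.sqrt(dt)) for _ in range(n_fine)]
    xf = x0
    for w in dW:                          # fine Euler path
        xf += sigma * xf * w
    xc = x0
    for i in range(0, n_fine, 2):         # coarse path: summed increments
        xc += sigma * xc * (dW[i] + dW[i + 1])
    return xf - xc                        # smooth payoff g(x) = x

def level_variance(level, M=20000, seed=0):
    """Sample variance of the coupled level difference."""
    rng = random.Random(seed)
    d = [coupled_diff(1.0, 0.2, level, rng) for _ in range(M)]
    m = sum(d) / M
    return sum((x - m)**2 for x in d) / (M - 1)
```

For instance, level_variance(4) comes out roughly a quarter of level_variance(2), mirroring the variance decay V_ℓ = O(∆t_ℓ) for a smooth integrand.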
31. 1 Motivation and Framework
2 The Numerical Smoothing Idea
3 The Smoothness Theorem
4 Error Discussion for rQMC and MLMC combined with Numerical
Smoothing
5 Numerical Experiments and Results
6 Conclusions
32. QMC Error Convergence
[Figure: 95% statistical error vs. number of rQMC samples. Panel (a): slope −0.52 without smoothing vs. −0.85 with numerical smoothing. Panel (b): slope −0.64 without smoothing vs. −0.92 with numerical smoothing.]
Figure 5.1: Comparison of the 95% statistical error convergence for rQMC with/without numerical smoothing. (a) Digital option under Heston, (b) digital option under GBM.
33. Errors in the Numerical Smoothing
[Figure: relative numerical smoothing error. Panel (a): vs. M_Lag, observed slope −4.02; panel (b): vs. TOL_Newton.]
Figure 5.2: Call option under GBM with N = 4: the relative numerical smoothing error plotted against (a) different values of M_Lag with a fixed Newton tolerance TOL_Newton = 10^{−10}, and (b) different values of TOL_Newton with a fixed number of Laguerre quadrature points M_Lag = 128.
34. Digital Option under the Heston Model:
MLMC Without Smoothing
[Figure: four MLMC convergence panels across levels; the kurtosis panel grows to ≈ 200 at the deepest levels.]
Figure 5.3: Digital option under Heston: convergence plots for MLMC without smoothing.
35. Digital Option under the Heston Model:
MLMC With Numerical Smoothing
[Figure: four MLMC convergence panels across levels; the kurtosis panel stays moderate, ≈ 8-12.]
Figure 5.4: Digital option under Heston: convergence plots for MLMC with numerical smoothing.
36. Digital Option under the Heston Model:
Numerical Complexity Comparison
[Figure: expected work E[W] vs. TOL, with reference rates TOL^{−2.5} and TOL^{−2} log(TOL)².]
Figure 5.5: Digital option under Heston: comparison of the numerical complexity of (i) standard MLMC and (ii) MLMC with numerical smoothing. The numerical computational cost improves from O(TOL^{−5/2}) without smoothing to O(TOL^{−2} log(TOL)²) with numerical smoothing.
37. Density estimation in the Heston model:
MLMC With Numerical Smoothing
[Figure: four MLMC convergence panels for the density estimation problem; the kurtosis panel stays ≈ 10-20 across levels.]
Figure 5.6: Density of Heston: convergence plots for MLMC with numerical smoothing for computing the density ρ_{X(T)}(u) at u = 1, where X follows the Heston dynamics.
39. Conclusions
1 We propose a numerical smoothing approach that can be combined with QMC and MLMC (also ASGQ) estimators for efficient option pricing and density estimation.
2 The numerical smoothing improves the regularity structure of the problem ⇒ optimal convergence rates for QMC.
3 Compared to the case without smoothing:
▸ We significantly reduce the kurtosis at the deep levels of MLMC, which improves the robustness of the estimator.
▸ We improve the variance decay rate ⇒ the MLMC complexity improves from O(TOL^{−2.5}) to O(TOL^{−2} log(TOL)²), without the need for higher-order schemes such as the Milstein scheme.
4 When estimating densities: compared to the smoothing strategy used in (Giles, Nagapetyan, and Ritter 2015), the pointwise error of our numerical smoothing approach does not increase exponentially with respect to the dimension of the state vector.
5 Our approach is simple and generic, and can be extended to:
▸ many model dynamics and payoff structures;
▸ computing financial Greeks;
▸ computing distribution functions;
▸ risk estimation.
40. Related References
[1] C. Bayer, C. Ben Hammouda, R. Tempone. Numerical Smoothing with
Hierarchical Adaptive Sparse Grids and Quasi-Monte Carlo Methods for
Efficient Option Pricing, arXiv:2111.01874 (2021), to appear in
Quantitative Finance.
[2] C. Bayer, C. Ben Hammouda, R. Tempone. Multilevel Monte Carlo
combined with numerical smoothing for robust and efficient option pricing
and density estimation, arXiv:2003.05708 (2022).
[3] C. Bayer, C. Ben Hammouda, A. Papapantoleon, M. Samet, R. Tempone.
Optimal Damping with Hierarchical Adaptive Quadrature for Efficient
Fourier Pricing of Multi-Asset Options in Lévy Models, arXiv:2203.08196
(2022).
[4] C. Bayer, C. Ben Hammouda, R. Tempone. Hierarchical adaptive sparse
grids and quasi-Monte Carlo for option pricing under the rough Bergomi
model, Quantitative Finance, 2020.
Thank you for your attention!
41. Assumptions
Assumption 6.1
There are positive rdvs C_p with finite moments of all orders such that

∀N ∈ N, ∀ℓ_1, …, ℓ_p ∈ {0, …, 2^N − 1}:   |∂^p X_T^N / (∂X_{ℓ_1}^N ⋯ ∂X_{ℓ_p}^N)| ≤ C_p a.s.;

this means that ∂^p X_T^N / (∂X_{ℓ_1}^N ⋯ ∂X_{ℓ_p}^N) = Ô_P(1).

Assumption 6.1 is natural because it is fulfilled if the diffusion coefficient b(·) is smooth; this situation is valid for many option pricing models.
42. Assumptions
Assumption 6.2
For any p ∈ N, we have (a)

(∂X_T^N/∂y (Z_{−1}, Z^N))^{−p} = Ô_P(1).

(a) y := z_{−1}

In (Bayer, Ben Hammouda, and Tempone 2022), we show sufficient conditions under which this assumption is valid. For instance, Assumption 6.2 is valid for
▸ one-dimensional models with a linear or constant diffusion;
▸ multivariate models with a linear drift and constant diffusion, including the multivariate lognormal model (see (Bayer, Siebenmorgen, and Tempone 2018)).
There may be some cases in which Assumption 6.2 is not fulfilled, e.g., X_T = W_T². However, our method works well in such cases, since we have g(X_T) = G(y_1²).
43. References I
[1] Martin Altmayer and Andreas Neuenkirch. "Multilevel Monte Carlo quadrature of discontinuous payoffs in the generalized Heston model using Malliavin integration by parts". In: SIAM Journal on Financial Mathematics 6.1 (2015), pp. 22–52. doi: 10.1137/130933629. url: https://madoc.bib.uni-mannheim.de/37052/.
[2] Christian Bayer, Chiheb Ben Hammouda, and Raúl Tempone.
“Hierarchical adaptive sparse grids and quasi-Monte Carlo for
option pricing under the rough Bergomi model”. In: Quantitative
Finance 20.9 (2020), pp. 1457–1473.
[3] Christian Bayer, Chiheb Ben Hammouda, and Raúl Tempone.
“Numerical Smoothing with Hierarchical Adaptive Sparse Grids
and Quasi-Monte Carlo Methods for Efficient Option Pricing”.
In: arXiv preprint arXiv:2111.01874 (2021), to appear in
Quantitative Finance Journal (2022).
44. References II
[4] Christian Bayer, Chiheb Ben Hammouda, and Raúl Tempone.
“Multilevel Monte Carlo combined with numerical smoothing for
robust and efficient option pricing and density estimation”. In:
arXiv preprint arXiv:2003.05708 (2022).
[5] Christian Bayer, Markus Siebenmorgen, and Rául Tempone.
“Smoothing the payoff for efficient computation of basket option
pricing.”. In: Quantitative Finance 18.3 (2018), pp. 491–505.
[6] Christian Bayer et al. "Optimal Damping with Hierarchical Adaptive Quadrature for Efficient Fourier Pricing of Multi-Asset Options in Lévy Models". In: arXiv preprint arXiv:2203.08196 (2022).
[7] Chiheb Ben Hammouda, Nadhir Ben Rached, and Raúl Tempone.
“Importance sampling for a robust and efficient multilevel Monte
Carlo estimator for stochastic reaction networks”. In: Statistics
and Computing 30.6 (2020), pp. 1665–1689.
45. References III
[8] Chiheb Ben Hammouda, Alvaro Moraes, and Raúl Tempone.
“Multilevel hybrid split-step implicit tau-leap”. In: Numerical
Algorithms 74.2 (2017), pp. 527–560.
[9] K Andrew Cliffe et al. “Multilevel Monte Carlo methods and
applications to elliptic PDEs with random coefficients”. In:
Computing and Visualization in Science 14.1 (2011), p. 3.
[10] Josef Dick, Frances Y Kuo, and Ian H Sloan. “High-dimensional
integration: the quasi-Monte Carlo way”. In: Acta Numerica 22
(2013), pp. 133–288.
[11] Alexander D Gilbert, Frances Y Kuo, and Ian H Sloan.
“Preintegration is not smoothing when monotonicity fails”. In:
arXiv preprint arXiv:2112.11621 (2021).
46. References IV
[12] Michael B Giles. “Improved multilevel Monte Carlo convergence
using the Milstein scheme”. In: Monte Carlo and Quasi-Monte
Carlo Methods 2006. Springer, 2008, pp. 343–358.
[13] Michael B Giles. “Multilevel Monte Carlo methods”. In: Acta
Numerica 24 (2015), pp. 259–328.
[14] Michael B Giles. “Multilevel Monte Carlo path simulation”. In:
Operations Research 56.3 (2008), pp. 607–617.
[15] Michael B Giles, Kristian Debrabant, and Andreas Rößler.
“Numerical analysis of multilevel Monte Carlo path simulation
using the Milstein discretisation”. In: arXiv preprint
arXiv:1302.4676 (2013).
47. References V
[16] Michael B Giles, Tigran Nagapetyan, and Klaus Ritter.
“Multilevel Monte Carlo approximation of distribution functions
and densities”. In: SIAM/ASA Journal on Uncertainty
Quantification 3.1 (2015), pp. 267–295.
[17] Andreas Griewank et al. “High dimensional integration of kinks
and jumps—smoothing by preintegration”. In: Journal of
Computational and Applied Mathematics 344 (2018), pp. 259–274.
[18] Abdul-Lateef Haji-Ali, Jonathan Spence, and Aretha Teckentrup.
“Adaptive Multilevel Monte Carlo for Probabilities”. In: arXiv
preprint arXiv:2107.09148 (2021).
[19] Stefan Heinrich. "Multilevel Monte Carlo methods". In: International Conference on Large-Scale Scientific Computing. Springer. 2001, pp. 58–67.
48. References VI
[20] Ahmed Kebaier. "Statistical Romberg extrapolation: a new variance reduction method and applications to option pricing". In: The Annals of Applied Probability 15.4 (2005), pp. 2681–2705.
[21] Harald Niederreiter. Random number generation and quasi-Monte Carlo methods. Vol. 63. SIAM, 1992.
[22] Dirk Nuyens. The construction of good lattice rules and
polynomial lattice rules. 2014.
[23] Dirk Nuyens and Yuya Suzuki. "Scaled lattice rules for integration on R^d achieving higher-order convergence with error analysis in terms of orthogonal projections onto periodic spaces". In: arXiv preprint arXiv:2108.12639 (2021).
[24] Ian H Sloan. “Lattice methods for multiple integration”. In:
Journal of Computational and Applied Mathematics 12 (1985),
pp. 131–143.