This document discusses Monte Carlo simulation techniques for pricing derivatives. It begins with an overview of Monte Carlo simulation and its use in derivative pricing models. It then covers different types of Monte Carlo path evolution (element-wise, path-wise, etc.). The document discusses implementing Monte Carlo simulation, including generating Wiener paths using methods like Euler discretization and Brownian bridges. It provides an example applying these techniques to price an option on the geometric average rate and shows convergence with different levels of path stratification. The key steps in Monte Carlo simulation for derivative pricing are outlined as simulating asset paths, computing payoffs, and estimating the option value and associated standard error.
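The simulate-paths, compute-payoffs, estimate-value-and-standard-error pipeline can be sketched for a plain European call under geometric Brownian motion (a minimal illustration with made-up parameters; the document's worked example prices a geometric average-rate option, which would require whole paths rather than terminal values):

```python
import numpy as np

def mc_european_call(s0, k, r, sigma, t, n_paths, seed=0):
    """Price a European call by simulating terminal GBM values.

    Returns (price, standard_error). Parameter values below are
    illustrative, not taken from the document.
    """
    rng = np.random.default_rng(seed)
    z = rng.standard_normal(n_paths)
    # Exact GBM terminal value under the risk-neutral measure
    st = s0 * np.exp((r - 0.5 * sigma**2) * t + sigma * np.sqrt(t) * z)
    payoff = np.exp(-r * t) * np.maximum(st - k, 0.0)
    price = payoff.mean()
    stderr = payoff.std(ddof=1) / np.sqrt(n_paths)
    return price, stderr

price, se = mc_european_call(s0=100, k=100, r=0.05, sigma=0.2,
                             t=1.0, n_paths=200_000)
```

With these inputs the estimate should land near the Black-Scholes value (about 10.45), and the reported standard error quantifies the Monte Carlo uncertainty, shrinking like one over the square root of the number of paths.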
This document discusses adiabatic gate teleportation and its applications. It begins with an overview of joint work done by Dave Bacon of the University of Washington along with Steve Flammia, Alice Neels, and Andrew Landahl on this topic. The rest of the document discusses the history of classical computing using unreliable components, ideas from Kitaev and Freedman on topological quantum computing using anyons, and an open controversy around whether topological quantum computing is truly fault-tolerant.
1) The document discusses session types in Abelian logic. It introduces primitives for synchronous communication and shows how to represent channels as session types using macros.
2) It proposes adding exchange laws to typed lambda calculus with session types in order to represent commutativity. This results in a system called Abelian logic that is sound and complete.
3) The document considers adding an "eval-subst" rule to allow evaluation of processes with nested channel pairs that would otherwise be deadlocked. This raises issues with preserving types during evaluation that require further formalization.
- Sliding mode control is a variable structure control method where the control signal is switched between two values based on the sign of a switching function (s). This causes the system trajectories to slide along the switching surface (s=0).
- In sliding mode, the system motion is governed by a reduced order sliding mode equation that depends only on the designer-selected parameter c, not the plant dynamics.
- For sliding mode to exist, the system trajectories must be oriented towards the switching surface. Adaptive sliding mode control can vary the parameter c depending on system parameters to increase the rate of decay in sliding mode.
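A minimal sketch of the switching law, assuming a double-integrator plant x'' = u and illustrative gains c and k (none of these values come from the document):

```python
import numpy as np

def simulate_smc(c=1.0, k=2.0, dt=1e-3, t_end=10.0, x0=1.0, v0=0.0):
    """Euler simulation of a double integrator x'' = u under the
    variable-structure law u = -k * sign(s) with s = v + c*x."""
    x, v = x0, v0
    for _ in range(int(t_end / dt)):
        s = v + c * x
        u = -k * np.sign(s)   # control switches on the sign of s
        x += v * dt
        v += u * dt
    return x, v

x_end, v_end = simulate_smc()
```

Once the trajectory reaches s = 0, it slides along the surface where v ≈ -c·x, so the state decays at the designer-chosen rate c independently of the plant details, illustrating the reduced-order sliding dynamics described above.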
The document discusses the phenomenon of interference of light. It explains the conditions required for interference, including coherent sources, monochromatic light, and a constant path difference. It describes several classic interference experiments, including Young's double slit experiment, Fresnel's bi-prism, Newton's rings, and Michelson's interferometer. It discusses how interference patterns are used to determine properties like wavelength and refractive index.
Lesson 19: The Mean Value Theorem (Section 021 slides), by Matthew Leingang
(a) E-ZPass cannot prove that the driver was speeding. E-ZPass records entry and exit times and locations, but does not continuously track speed. It cannot determine the driver's exact speed at any point during the trip, so it cannot prove a specific speeding violation occurred. The best it could show is an average speed that may or may not indicate speeding depending on the specific speed limit(s) along the route.
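The average-speed computation E-ZPass could make is simple division (the numbers below are hypothetical):

```python
def average_speed_mph(miles_between_tolls, entry_hour, exit_hour):
    """Average speed over a toll segment from timestamped entry/exit."""
    return miles_between_tolls / (exit_hour - entry_hour)

avg = average_speed_mph(70.0, entry_hour=0.0, exit_hour=1.0)
```

By the Mean Value Theorem, an average speed above the limit would imply the instantaneous speed equaled that average at some moment during the trip, though the record still cannot say when or where.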
Avoidance of Microstructural Heterogeneities by Hot Rolling Design in Thin Sl..., by Pello Uranga
The document describes a new microstructural model for predicting microstructural heterogeneity in thin slab direct rolled niobium microalloyed steels. The model uses grain size distributions measured from real thin slabs as inputs. It then outputs recrystallized and unrecrystallized grain size histograms and retained strain. Rolling simulations using the model showed that entry temperatures of 1060°C can result in a heterogeneous final austenite structure, while 1100°C promotes a homogeneous structure. Optimized rolling schedules were developed, with minimum entry temperatures of 1070-1090°C determined to avoid heterogeneities. Increasing the initial slab thickness was also found to provide higher retained strain without affecting homogeneity.
ANISOTROPIC SURFACES DETECTION USING INTENSITY MAPS ACQUIRED BY AN AIRBORNE L..., by grssieee
The document discusses methods for estimating the spatial anisotropy of surfaces using near-infrared LiDAR intensity maps over coastal environments. It presents two estimators - one based on 1D correlations of columns and lines in sliding windows, and another based on 2D correlations of windows and their transposes. The estimators are evaluated on synthetic data with varying anisotropy, relative anisotropy, and signal-to-noise ratio. The estimators are then applied to LiDAR intensity maps from coastal areas to characterize anisotropic surfaces independently of intensity variations. Future work involves combining these methods with multi-resolution wavelet approaches and comparing LiDAR intensity to DEM and dual-polarization SAR data.
Optimization of Rolling Conditions in Nb Microalloyed Steels Processed by Thi..., by Pello Uranga
The document describes a model for optimizing rolling conditions in Nb microalloyed steels processed by thin slab casting and direct rolling. The model predicts the evolution of austenite grain size distributions during rolling schedules. It was used to generate processing maps showing the effects of rolling temperature, thickness reduction, and schedule on the final austenite grain structure. The maps identify processing conditions that avoid microstructural heterogeneities for different steel thicknesses.
Role of Microalloying Elements during Thin Slab Direct Rolling, by Pello Uranga
The document discusses the role of microalloying elements during thin slab direct rolling of steels. Specifically:
- Thin slab casting and direct rolling leads to different metallurgical changes compared to traditional routes that affect microalloying behavior.
- Models have been developed to optimize microalloyed steel grades for thin slab direct rolling, focusing on avoiding heterogeneities and conditioning austenite structure.
- Industrial rolling simulations using the models optimize schedules to achieve thick final gauges with high microalloying levels. Redesigned schedules reduce austenite fraction and improve microstructural homogeneity.
These are the slides for my Master's course on Monte Carlo Statistical Methods, given in conjunction with the book Monte Carlo Statistical Methods written with George Casella.
This document provides an introduction and overview of Toeplitz and circulant matrices. It discusses how these matrices arise in applications involving time series, signal processing, and discrete time systems. Toeplitz matrices have constant diagonals, while circulant matrices are a special case where each row is a cyclic shift of the row above it. The document outlines the structure and key properties of these matrices and previews the major topics to be covered, including asymptotic behavior, eigenvalues, inverses, and applications to stochastic time series.
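The cyclic-shift structure mentioned above has a concrete eigenvalue consequence worth illustrating: the eigenvalues of a circulant matrix are the DFT of its first column. A small check (the helper name is ours):

```python
import numpy as np

def circulant(c):
    """Build the circulant matrix whose first column is c;
    each subsequent column is a cyclic shift of the previous one."""
    c = np.asarray(c)
    return np.column_stack([np.roll(c, k) for k in range(len(c))])

c = np.array([1.0, 2.0, 3.0, 4.0])
C = circulant(c)
eig = np.linalg.eigvals(C)
dft = np.fft.fft(c)   # the eigenvalues, up to ordering
```

This diagonalization by the Fourier basis is what makes circulants the tractable special case used to study the asymptotic behavior of general Toeplitz matrices.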
In this section we look at problems where changing quantities are related. For instance, a growing oil slick is changing in diameter and volume at the same time. How are the rates of change of these quantities related? The chain rule for derivatives is the key.
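The oil-slick example can be made concrete with the chain rule; as an illustration (the constant-thickness assumption is ours, not the document's), model the slick as a thin cylinder of fixed thickness $h$ and diameter $D$:

```latex
V = \frac{\pi}{4} D^2 h
\quad\Longrightarrow\quad
\frac{dV}{dt} = \frac{dV}{dD}\cdot\frac{dD}{dt}
             = \frac{\pi}{2} D h \,\frac{dD}{dt}.
```

Knowing how fast the diameter grows at a given instant immediately gives the rate of change of the volume at that instant.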
Lesson 13: Exponential and Logarithmic Functions (Section 041 handout)Matthew Leingang
This document summarizes sections 3.1-3.2 of a Calculus I course at New York University on exponential and logarithmic functions taught on October 20, 2010. It outlines definitions and properties of exponential functions, introduces the special number e and natural exponential function, and defines logarithmic functions. Announcements are made that the midterm exam is nearly graded and a WebAssign assignment is due the following week.
This document provides an overview of Markov chain Monte Carlo (MCMC) methods. It begins with motivations for using MCMC, such as computational difficulties that arise in models with latent variables like mixture models. It then discusses likelihood-based and Bayesian approaches, noting limitations of maximum likelihood methods. Conjugate priors are described that allow tractable Bayesian inference for some simple models. However, conjugate priors are not available for more complex models, motivating the use of MCMC methods which can approximate integrals and distributions of interest for more complex models.
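As a minimal concrete instance of the machinery this overview motivates, a random-walk Metropolis sampler targeting a standard normal density (all tuning choices here are illustrative, not from the document):

```python
import numpy as np

def metropolis_normal(n_samples=20_000, step=1.0, seed=0):
    """Random-walk Metropolis sampler for a standard normal target."""
    rng = np.random.default_rng(seed)

    def log_target(x):
        return -0.5 * x * x   # log-density up to a constant

    x = 0.0
    out = np.empty(n_samples)
    for i in range(n_samples):
        prop = x + step * rng.standard_normal()
        # Accept with probability min(1, target(prop)/target(x))
        if np.log(rng.uniform()) < log_target(prop) - log_target(x):
            x = prop
        out[i] = x
    return out

samples = metropolis_normal()
```

Only the unnormalized density enters the acceptance ratio, which is exactly why MCMC remains usable for the complex models where conjugate priors and closed-form normalizing constants are unavailable.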
This study numerically models the acceleration of anomalous cosmic rays (ACRs) at a non-spherical, blunt termination shock (TS). The model adapts an existing code for galactic cosmic rays to ACR acceleration and transport. Results show that ACR acceleration occurs at the flanks of the blunt TS, consistent with observations from Voyager 1 and 2. As Voyager 1 moves farther from the TS, its observed ACR intensities are decreasing, while Voyager 2's intensities are increasing as it moves toward the flanks, where intensities are higher according to the model.
Prof. Jim Bezdek: Every Picture Tells a Story — Visual Cluster Analysis, by ieee_cis_cyprus
The talk overviews the history of visual clustering, which began thousands of years ago; the first such image appeared in 1873. Three algorithms for visual assessment of clustering tendency are examined, namely VAT, iVAT and asiVAT, with applications to social network analysis. Three applications, one for each algorithm, are discussed: time series analysis with clusters of linguistic medoid prototypes in Eldercare data (iVAT); social network analysis with Sampson's Monastery data (asiVAT); and network access security (VAT), a commercial application developed by CA Technologies.
Computation of the gravity gradient tensor due to topographic masses using te..., by Leonardo Uieda
The GOCE satellite mission has the objective of measuring the Earth's gravitational field with unprecedented accuracy through measurement of the gravity gradient tensor (GGT). One of the several applications of this new gravity data set is the study of the geodynamics of the lithospheric plates, where the flat-Earth approximation may not be ideal and the Earth's curvature should be taken into account. In such a case, the Earth can be modeled using tesseroids, also called spherical prisms, instead of the conventional rectangular prisms. The GGT due to a tesseroid is calculated using numerical integration methods, such as the Gauss-Legendre Quadrature (GLQ), as already proposed by Asgharzadeh et al. (2007) and Wild-Pfeiffer (2008). We present a computer program for the direct computation of the GGT caused by a tesseroid using the GLQ. The accuracy of this implementation was evaluated by comparing its results with analytical formulas for the special case of a spherical cap with the computation point located at one of the poles. The GGT due to the topographic masses of the Paraná Basin (SE Brazil) was estimated at 260 km altitude in an attempt to quantify this effect on the GOCE gravity data. The digital elevation model ETOPO1 (Amante and Eakins, 2009) between 40° W and 65° W and 10° S and 35° S, which includes the Paraná Basin, was used to generate a tesseroid model of the topography with a grid spacing of 10' x 10' and a constant density of 2670 kg/m³. The largest amplitude observed was on the second vertical derivative component (-0.05 to 1.20 Eötvös) in regions of rough topography, such as along the eastern Brazilian continental margins. These results indicate that the GGT due to topographic masses may have amplitudes of the same order of magnitude as the GGT due to density anomalies within the crust and mantle.
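The Gauss-Legendre Quadrature named above can be illustrated in one dimension (the tesseroid computation applies it to a 3-D volume integral; this sketch and its names are ours):

```python
import numpy as np

def glq_integrate(f, a, b, n=8):
    """Integrate f on [a, b] with an n-point Gauss-Legendre rule,
    the 1-D building block of GLQ-based tesseroid integration."""
    nodes, weights = np.polynomial.legendre.leggauss(n)
    # Map the standard nodes from [-1, 1] onto [a, b]
    x = 0.5 * (b - a) * nodes + 0.5 * (b + a)
    return 0.5 * (b - a) * np.sum(weights * f(x))

val = glq_integrate(lambda x: x**2, 0.0, 1.0)  # exact value is 1/3
```

An n-point rule integrates polynomials up to degree 2n-1 exactly, which is why a handful of nodes per tesseroid dimension already gives high accuracy for the smooth GGT integrands.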
The document discusses calculating volumes of solids of revolution. It provides examples of finding the volume when revolving common shapes around different axes, such as:
(1) Revolving a cone around the x-axis gives volume (1/3)πr²h.
(2) Revolving a sphere around the x- or y-axis gives volume (4/3)πr³.
(3) Revolving the region between y = x² and the y-axis around the y-axis from 0 to 1 gives volume π/2 cubic units.
12 x1 t04 06 integrating functions of time (2012), by Nigel Simmons
The document discusses integrating functions of time to determine changes in displacement, distance, velocity, and speed. It explains that the integral of velocity over time equals displacement, while the integral of speed (the absolute value of velocity) over time equals distance. Similarly, the integral of acceleration over time equals the change in velocity. Graphs of functions and their derivatives are also presented, showing the relationships between integration and differentiation.
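A numerical check of the kinematic relations (integrating velocity gives displacement; integrating speed gives distance), using the illustrative velocity v(t) = cos t over one full period:

```python
import numpy as np

def displacement_and_distance(v, t0, t1, n=200_000):
    """Midpoint-rule integrals of velocity (displacement) and of
    |velocity| (distance traveled) on [t0, t1]."""
    t = np.linspace(t0, t1, n + 1)
    mid = 0.5 * (t[:-1] + t[1:])
    dt = (t1 - t0) / n
    vm = v(mid)
    return np.sum(vm) * dt, np.sum(np.abs(vm)) * dt

disp, dist = displacement_and_distance(np.cos, 0.0, 2 * np.pi)
```

Over a full period the particle returns to its start, so the displacement is 0 even though the distance traveled is 4, which is exactly the signed-versus-unsigned area distinction the graphs illustrate.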
12 x1 t04 06 integrating functions of time (2013), by Nigel Simmons
The document discusses integrating functions of time to calculate changes in displacement, distance, velocity, and speed based on position, velocity, and acceleration graphs over time. It provides examples of how integrating areas under curves relates to these physical quantities. Derivative graphs and their relationships are also summarized, along with how different function types integrate or differentiate into other graph types. An example problem calculating the times when a particle is at rest and its maximum velocity is also worked through.
1) The document derives both the continuous and discrete forms of hybrid Adams-Moulton methods for step numbers k=1 and k=2. These formulations incorporate off-grid interpolation and off-grid collocation schemes.
2) A matrix inversion technique is used to derive the continuous form. The continuous and discrete coefficients are obtained by solving a matrix equation where the identity matrix equals the product of two other matrices.
3) Error and zero-stability analyses are performed on the derived discrete schemes. The schemes are found to be of good order, with good error constants, implying they are consistent.
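For context, the classical (non-hybrid) one-step Adams-Moulton corrector is the trapezoidal rule. On the linear test equation y' = λy the implicit update can be solved in closed form, as sketched below; this is background for the family the paper extends, not the paper's hybrid off-grid scheme:

```python
import math

def adams_moulton_trapezoid(lam, y0, h, steps):
    """One-step Adams-Moulton (trapezoidal) corrector for y' = lam*y.
    The implicit update y_{n+1} = y_n + (h/2)(f_n + f_{n+1}) rearranges
    to a closed-form ratio for this linear problem."""
    y = y0
    for _ in range(steps):
        y = y * (1 + 0.5 * h * lam) / (1 - 0.5 * h * lam)
    return y

approx = adams_moulton_trapezoid(lam=-1.0, y0=1.0, h=0.01, steps=100)
```

With h = 0.01 over 100 steps the result approximates e^{-1} to second-order accuracy, the baseline that hybrid off-grid points are designed to improve on.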
Monte Carlo Simulations, Sampling and Markov Chain Monte Carlo, by Xin-She Yang
The document discusses pseudorandom number generation, Monte Carlo methods, and Markov chain Monte Carlo (MCMC). It provides examples of using Monte Carlo simulations to estimate pi and solve Buffon's needle problem. It also discusses random walks in Markov chains, the PageRank algorithm used by Google, and challenges with high-dimensional integrals and distributions that lack a closed-form inverse. MCMC methods are presented as a way to address these challenges.
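The pi-estimation example mentioned above can be sketched as follows (a standard textbook construction, using pseudorandom draws from numpy):

```python
import numpy as np

def estimate_pi(n=1_000_000, seed=42):
    """Estimate pi from the fraction of uniform points in [0,1]^2
    that fall inside the unit quarter-disk."""
    rng = np.random.default_rng(seed)
    x, y = rng.uniform(size=(2, n))
    inside = x * x + y * y <= 1.0
    return 4.0 * np.mean(inside)

pi_hat = estimate_pi()
```

The error of this estimator shrinks like one over the square root of the sample size, the same slow but dimension-independent convergence that motivates Monte Carlo for the high-dimensional integrals discussed in the document.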
Lesson 15: Exponential Growth and Decay (handout), by Matthew Leingang
Many problems in nature are expressible in terms of a certain differential equation that has a solution in terms of exponential functions. We look at the equation in general and some fun applications, including radioactivity, cooling, and interest.
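The differential equation in question is y' = ky, whose solution is y(t) = y(0)e^{kt}. A quick check for the radioactivity application (the half-life used is the standard carbon-14 figure, chosen here purely for illustration):

```python
import math

def decay(n0, half_life, t):
    """Amount remaining under N' = -lam*N, i.e. N(t) = N0*exp(-lam*t),
    with the decay constant lam = ln(2) / half_life."""
    lam = math.log(2.0) / half_life
    return n0 * math.exp(-lam * t)

remaining = decay(n0=100.0, half_life=5730.0, t=5730.0)
```

After exactly one half-life, half the original amount remains, which is precisely how the decay constant is defined.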
Talk given at the workshop "Multiphase turbulent flows in the atmosphere and ocean", National Center for Atmospheric Research, Boulder, CO, August 15, 2012
https://telecombcn-dl.github.io/2018-dlai/
Deep learning technologies are at the core of the current revolution in artificial intelligence for multimedia data analysis. The convergence of large-scale annotated datasets and affordable GPU hardware has allowed the training of neural networks for data analysis tasks which were previously addressed with hand-crafted features. Architectures such as convolutional neural networks, recurrent neural networks, or Q-nets for reinforcement learning have shaped a brand new scenario in signal processing. This course will cover the basic principles of deep learning from both algorithmic and computational perspectives.
The Inverse Smoluchowski Problem, Particles In Turbulence 2011, Potsdam, Marc..., by Colm Connaughton
This document summarizes Colm Connaughton's presentation on solving the inverse Smoluchowski problem to determine particle collision kernels from observed cluster size distributions. It describes how the forward problem maps kernels to distributions but the inverse problem is ill-posed. Tikhonov regularization is used to obtain approximate kernel reconstructions from numerical solutions with known test kernels, demonstrating partial success in reconstructing kernel features despite ill-posedness. Future work aims to address limitations and applicability to real problems.
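Tikhonov regularization as invoked here can be sketched on a toy ill-posed linear system (the matrix, noise level, and names below are ours, chosen so that the naive inverse blows up while the regularized solution stays tame):

```python
import numpy as np

def tikhonov_solve(A, b, alpha):
    """Minimize ||Ax - b||^2 + alpha*||x||^2 via the regularized
    normal equations (A^T A + alpha*I) x = A^T b."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + alpha * np.eye(n), A.T @ b)

# Severely ill-conditioned forward operator: singular values 1 and 1e-8
A = np.diag([1.0, 1e-8])
b = np.array([1.0, 1e-4])          # the tiny component is effectively noise
x_naive = np.linalg.solve(A, b)    # naive inverse amplifies the noise hugely
x_reg = tikhonov_solve(A, b, alpha=1e-6)
```

The penalty damps the components associated with tiny singular values, trading a small bias for stability; this is the same mechanism that makes the kernel reconstructions in the talk only partially successful, since features carried by the damped directions cannot be recovered.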
ANISOTROPIC SURFACES DETECTION USING INTENSITY MAPS ACQUIRED BY AN AIRBORNE L...grssieee
The document discusses methods for estimating the spatial anisotropy of surfaces using near-infrared LiDAR intensity maps over coastal environments. It presents two estimators - one based on 1D correlations of columns and lines in sliding windows, and another based on 2D correlations of windows and their transposes. The estimators are evaluated on synthetic data with varying anisotropy, relative anisotropy, and signal-to-noise ratio. The estimators are then applied to LiDAR intensity maps from coastal areas to characterize anisotropic surfaces independently of intensity variations. Future work involves combining these methods with multi-resolution wavelet approaches and comparing LiDAR intensity to DEM and dual-polarization SAR data.
Optimization of Rolling Conditions in Nb Microalloyed Steels Processed by Thi...Pello Uranga
The document describes a model for optimizing rolling conditions in Nb microalloyed steels processed by thin slab casting and direct rolling. The model predicts the evolution of austenite grain size distributions during rolling schedules. It was used to generate processing maps showing the effects of rolling temperature, thickness reduction, and schedule on the final austenite grain structure. The maps identify processing conditions that avoid microstructural heterogeneities for different steel thicknesses.
Role of Microalloying Elements during Thin Slab Direct RollingPello Uranga
The document discusses the role of microalloying elements during thin slab direct rolling of steels. Specifically:
- Thin slab casting and direct rolling leads to different metallurgical changes compared to traditional routes that affect microalloying behavior.
- Models have been developed to optimize microalloyed steel grades for thin slab direct rolling, focusing on avoiding heterogeneities and conditioning austenite structure.
- Industrial rolling simulations using the models optimize schedules to achieve thick final gauges with high microalloying levels. Redesigned schedules reduce austenite fraction and improve microstructural homogeneity.
Those are the slides for my Master course on Monte Carlo Statistical Methods given in conjunction with the Monte Carlo Statistical Methods book with George Casella.
This document provides an introduction and overview of Toeplitz and circulant matrices. It discusses how these matrices arise in applications involving time series, signal processing, and discrete time systems. Toeplitz matrices have constant diagonals, while circulant matrices are a special case where each row is a cyclic shift of the row above it. The document outlines the structure and key properties of these matrices and previews the major topics to be covered, including asymptotic behavior, eigenvalues, inverses, and applications to stochastic time series.
In this section we look at problems where changing quantities are related. For instance, a growing oil slick is changing in diameter and volume at the same time. How are the rates of change of these quantities related? The chain rule for derivatives is the key.
In this section we look at problems where changing quantities are related. For instance, a growing oil slick is changing in diameter and volume at the same time. How are the rates of change of these quantities related? The chain rule for derivatives is the key.
Lesson 13: Exponential and Logarithmic Functions (Section 041 handout)Matthew Leingang
This document summarizes sections 3.1-3.2 of a Calculus I course at New York University on exponential and logarithmic functions taught on October 20, 2010. It outlines definitions and properties of exponential functions, introduces the special number e and natural exponential function, and defines logarithmic functions. Announcements are made that the midterm exam is nearly graded and a WebAssign assignment is due the following week.
This document provides an overview of Markov chain Monte Carlo (MCMC) methods. It begins with motivations for using MCMC, such as computational difficulties that arise in models with latent variables like mixture models. It then discusses likelihood-based and Bayesian approaches, noting limitations of maximum likelihood methods. Conjugate priors are described that allow tractable Bayesian inference for some simple models. However, conjugate priors are not available for more complex models, motivating the use of MCMC methods which can approximate integrals and distributions of interest for more complex models.
This study models the acceleration of anomalous cosmic rays (ACRs) at a non-spherical, blunt termination shock (TS) using numerical modeling. The model adapts an existing code used for galactic cosmic rays to model ACR acceleration and transport. Results show that ACR acceleration occurs at the flanks of the blunt TS, consistent with observations from Voyager 1 and 2. As Voyager 1 moves farther from the TS, its observed ACR intensities are decreasing, while Voyager 2's intensities are increasing as it moves toward the flanks where intensities are higher according to the model.
Prof. Jim Bezdek: Every Picture Tells a Story — Visual Cluster Analysisieee_cis_cyprus
The talk overviews the history of Visual Clustering, which began thousands of years ago. The first image for this appeared in 1873. Three algorithms for visual assessment of clustering tendency examined, namely the VAT, iVAT and asiVAT, with applications to social network analysis. Particularly three applications, one for each algorithm will be discussed: time series analysis with clusters of linguistic medoid prototypes in Eldercare data (iVAT); social network analysis with Sampson's Monastery data (asiVAT); and network access security (VAT), a commercial application developed by CA technologies.
Computation of the gravity gradient tensor due to topographic masses using te...Leonardo Uieda
The GOCE satellite mission has the objective of measuring the Earth's gravitational field with an unprecedented accuracy through the measurement of the gravity gradient tensor (GGT). One of the several applications of this new gravity data set is to study the geodynamics of the lithospheric plates, where the flat Earth approximation may not be ideal and the Earth's curvature should be taken into account. In such a case, the Earth could be modeled using tesseroids, also called spherical prisms, instead of the conventional rectangular prisms. The GGT due to a tesseroid is calculated using numerical integration methods, such as the Gauss-Legendre Quadrature (GLQ), as already proposed by Asgharzadeh et al. (2007) and Wild-Pfeiffer (2008). We present a computer program for the direct computation of the GGT caused by a tesseroid using the GLQ. The accuracy of this implementation was evaluated by comparing its results with the result of analytical formulas for the special case of a spherical cap with computation point located at one of the poles. The GGT due to the topographic masses of the Parana basin (SE Brazil) was estimated at 260 km altitude in an attempt to quantify this effect on the GOCE gravity data. The digital elevation model ETOPO1 (Amante and Eakins, 2009) between 40º W and 65º W and 10º S and 35º S, which includes the Paraná Basin, was used to generate a tesseroid model of the topography with grid spacing of 10' x 10' and a constant density of 2670 kg/m3. The largest amplitude observed was on the second vertical derivative component (-0.05 to 1.20 Eötvos) in regions of rough topography, such as that along the eastern Brazilian continental margins. These results indicate that the GGT due to topographic masses may have amplitudes of the same order of magnitude as the GGT due to density anomalies within the crust and mantle.
The document discusses calculating volumes of solids of revolution. It provides examples of finding the volume when revolving common shapes around different axes, such as:
(1) Revolving a cone around the x-axis to find the volume is 1/3πr2h.
(2) Revolving a sphere around the x or y-axis finds the volume is 4/3πr3.
(3) Revolving the region between y=x2 and the y-axis around the y-axis from 0 to 1 finds the volume is π/2 units3.
12 x1 t04 06 integrating functions of time (2012)Nigel Simmons
The document discusses integrating functions of time to determine changes in displacement, distance, velocity, and speed. It explains that the integral of position over time equals displacement, while subtracting integrals of position over different time intervals equals distance. Similarly, the integral of velocity over time equals speed, while the integral of acceleration over time equals velocity. Graphs of functions and their derivatives are also presented, showing the relationships between integration and differentiation.
12 x1 t04 06 integrating functions of time (2013)Nigel Simmons
The document discusses integrating functions of time to calculate changes in displacement, distance, velocity, and speed based on position, velocity, and acceleration graphs over time. It provides examples of how integrating areas under curves relates to these physical quantities. Derivative graphs and their relationships are also summarized, along with how different function types integrate or differentiate into other graph types. An example problem calculating the times when a particle is at rest and its maximum velocity is also worked through.
1) The document derives both the continuous and discrete forms of hybrid Adams-Moulton methods for step numbers k=1 and k=2. These formulations incorporate off-grid interpolation and off-grid collocation schemes.
2) A matrix inversion technique is used to derive the continuous form. The continuous and discrete coefficients are obtained by solving a matrix equation where the identity matrix equals the product of two other matrices.
3) Error and zero-stability analyses are performed on the derived discrete schemes. The schemes are found to be of good order, with good error constants, implying they are consistent.
Monte Caro Simualtions, Sampling and Markov Chain Monte CarloXin-She Yang
Pseudorandom
Pseudorandom The document discusses Monte Carlo methods and Markov chain Monte Carlo (MCMC). It provides examples of using Monte Carlo simulations to estimate pi and solve Buffon's needle problem. It also discusses random walks in Markov chains, the PageRank algorithm used by Google, and challenges with high-dimensional integrals and distributions that do not have a closed-form inverse. MCMC methods are presented as a way to address these challenges.
Lesson 15: Exponential Growth and Decay (handout)Matthew Leingang
Many problems in nature are expressible in terms of a certain differential equation that has a solution in terms of exponential functions. We look at the equation in general and some fun applications, including radioactivity, cooling, and interest.
Talk given at the workshop "Multiphase turbulent flows in the atmosphere and ocean", National Centre for Atmospheric REsearch, Boulder CO, August 15 2012
https://telecombcn-dl.github.io/2018-dlai/
Deep learning technologies are at the core of the current revolution in artificial intelligence for multimedia data analysis. The convergence of large-scale annotated datasets and affordable GPU hardware has allowed the training of neural networks for data analysis tasks which were previously addressed with hand-crafted features. Architectures such as convolutional neural networks, recurrent neural networks or Q-nets for reinforcement learning have shaped a brand new scenario in signal processing. This course will cover the basic principles of deep learning from both an algorithmic and computational perspectives.
The Inverse Smoluchowski Problem, Particles In Turbulence 2011, Potsdam, Marc... by Colm Connaughton
This document summarizes Colm Connaughton's presentation on solving the inverse Smoluchowski problem to determine particle collision kernels from observed cluster size distributions. It describes how the forward problem maps kernels to distributions but the inverse problem is ill-posed. Tikhonov regularization is used to obtain approximate kernel reconstructions from numerical solutions with known test kernels, demonstrating partial success in reconstructing kernel features despite ill-posedness. Future work aims to address limitations and applicability to real problems.
The document discusses transportation problems and assignment problems in operations research. It provides:
1) An overview of transportation problems, including the mathematical formulation to minimize transportation costs while meeting supply and demand constraints.
2) Methods for obtaining initial basic feasible solutions to transportation problems, such as the North-West Corner Rule and Vogel's Approximation Method.
3) Techniques for moving towards an optimal solution, including determining net evaluations and selecting entering variables.
4) The formulation and algorithm for solving assignment problems to minimize assignment costs while ensuring each job is assigned to exactly one machine.
Catalogue of Models for Electricity Prices Part 2 by NicolasRR
This document provides an overview and examples of several stochastic models for electricity spot prices, including:
1) A one-factor affine jump diffusion model that adds a jump component to allow for strong price variations. The jumps follow a Gaussian distribution.
2) A jump diffusion model where jump amplitudes follow an Erlang distribution. Examples show how the parameters n and λ impact spike magnitude.
3) A Markov-chain model where the spot price depends on the percentage of online generators. Deterministic functions are used to influence spike amplitude and timing.
Randomness conductors are a general framework that unifies various combinatorial objects like expanders, extractors, condensers, and universal hash functions. They can transform a probability distribution X with a certain amount of "entropy" into another distribution X' with a specified amount of entropy. The document discusses how expanders, extractors, and other objects are special cases of randomness conductors. It also describes how zigzag graph products can be used to construct explicit constant-degree randomness conductors and discusses some open problems in further studying and constructing these objects.
On recent improvements in the conic optimizer in MOSEK by edadk
The software package MOSEK is capable of solving large-scale sparse
conic quadratic optimization problems using an interior-point method.
In this talk we will present our recent improvements in the implementation.
Moreover, we will present numerical results demonstrating the performance of the implementation.
Nonequilibrium Statistical Mechanics of Cluster-cluster Aggregation, Warwick ... by Colm Connaughton
This document summarizes a talk on nonequilibrium statistical mechanics of cluster-cluster aggregation. The talk focused on theoretical models of particle clustering, including simple models where particles perform random walks and merge upon contact, and more sophisticated models that track the distribution of cluster sizes over time using the Smoluchowski equation. It discussed self-similar solutions and stationary solutions of the Smoluchowski equation. It also described the gelation transition that can occur when clusters absorb smaller clusters rapidly, violating the assumption of mass conservation and leading to clusters of infinite size.
Instantaneous Gelation in Smoluchowski's Coagulation Equation Revisited, Confe... by Colm Connaughton
Invited talk given at "Boltzmann equation: mathematics, modeling and simulations. In memory of Carlo Cercignani", Institut Henri Poincaré, Paris, February 11, 2011.
Stability of adaptive random-walk Metropolis algorithms by BigMC
The document discusses adaptive MCMC algorithms and their stability. It introduces the stochastic approximation framework that is commonly used to construct adaptive MCMC algorithms. It then discusses issues with stability as the adaptive parameters are updated, and how enforced stability or adaptive reprojections can help address this. Finally, it provides examples of the adaptive Metropolis algorithm and adaptive scaling Metropolis algorithm, which aim to automatically tune the proposal distribution scale parameter.
This document discusses likelihood methods for continuous-time models in finance. It describes approximating the transition density function pX of a continuous-time process through a series of transformations to get closer to a normal distribution. This allows representing pX as a series expansion involving Hermite polynomials. Computing the expansion coefficients allows obtaining an explicit closed-form approximation to pX. Maximizing the approximate likelihood results in an estimator that converges to the true MLE as the number of terms increases.
Cluster aggregation with complete collisional fragmentation by Colm Connaughton
The document summarizes research on cluster-cluster aggregation (CCA) models where particles stick together upon contact. It discusses mean-field kinetic equations to model CCA with sources and sinks of particles. For the case of complete fragmentation, it presents an exact solution to the kinetic equations. It finds that nonlocal cascades where larger clusters interact mostly with smaller ones can be unstable, leading to oscillatory behavior over time rather than a stationary state. The document outlines approaches to model the nonlocal case using approximations to the Smoluchowski kinetic equation.
This document outlines key concepts in linear models and estimation that will be covered in the STA721 Linear Models course, including:
1) Linear regression models decompose observed data into fixed and random components.
2) Maximum likelihood estimation finds parameter values that maximize the likelihood function.
3) Linear restrictions on the mean vector μ define a subspace and equivalent parameterizations represent the same subspace.
4) Inference should be independent of the parameterization or coordinate system used to represent μ.
What happens when the Kolmogorov-Zakharov spectrum is nonlocal? by Colm Connaughton
This document summarizes research on the behavior of the Kolmogorov-Zakharov (KZ) spectrum when it is nonlocal. It examines a model of cluster-cluster aggregation described by the Smoluchowski equation, which can be viewed as a model of 3-wave turbulence without backscatter. The research finds that when the exponents in the interaction term satisfy certain conditions, the KZ spectrum is nonlocal. In this case, the stationary state has a novel functional form and can become unstable, leading to oscillatory behavior in the cascade dynamics at long times. Open questions remain about whether physical systems exhibit this behavior and how the results are affected by including backscatter terms.
1. Monte Carlo Simulation in Derivative Pricing Models
Kai Zhang
Numerical Algorithms Group
Warwick Business School
The Thalesians Quantitative Finance Seminar
Canary Wharf, London
December 15, 2009
1 / 35
2. Outline
Monte Carlo Overview
Monte Carlo Evolution Type
Monte Carlo for Generic SDE
Wiener Path Generator
Discretization Scheme
Exact Simulation
References on Monte Carlo
3. Derivative Pricing with Monte Carlo
Comparison with Lattice or PDE
1. Monte Carlo can deal with high-dimensional models.
2. Suitable for path-dependent options.
Issues
1. A different answer from each simulation (reported with a standard error).
2. Slower convergence than PDE in low-dimension (1-2) case.
3. Special treatment needed for options with embedded decision.
4. Principles of Monte Carlo
Estimation of Expectation
1. Simulate a sample distribution.
2. Estimate the expectation by sample mean.
3. Estimate error bound from standard deviation.
In Derivative Pricing Models
1. Simulate a path of asset values, X = (X0 , X1 , · · · , XN ).
2. Compute payoff from the path, V(X).
3. Compute numeraire from the path, N(X).
4. Compute option value as N_0 E[V/N].
Crucial step is path simulation.
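The four steps above can be sketched in Python. This is a minimal illustration (not NAG code), assuming GBM dynamics under the risk-neutral measure, the money-market account as numeraire (so N_0 = 1), and a European call payoff as a stand-in for V(X); all parameter values are illustrative.

```python
import numpy as np

def mc_price(payoff, n_paths=200_000, n_steps=16, S0=100.0, r=0.05,
             sigma=0.2, T=1.0, seed=42):
    """Steps 1-4: simulate paths X, compute V(X) and N(X), return
    N0 * E[V/N] together with its standard error (N0 = 1 here)."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    Z = rng.standard_normal((n_paths, n_steps))
    # exact GBM stepping on the time grid (no discretization bias)
    log_paths = np.log(S0) + np.cumsum(
        (r - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * Z, axis=1)
    X = np.exp(log_paths)                 # asset values at t_1, ..., t_N
    V = payoff(X)                         # payoff from each path
    N = np.exp(r * T)                     # money-market numeraire at T
    ratio = V / N
    return ratio.mean(), ratio.std(ddof=1) / np.sqrt(n_paths)

# European call, strike 100 (Black-Scholes value is about 10.45)
value, stderr = mc_price(lambda X: np.maximum(X[:, -1] - 100.0, 0.0))
```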
5. Monte Carlo Evolution Type I
Notations for Sample Paths
X = {X_i^j}, i = 0, 1, · · · , N, j = 1, 2, · · · , M
1. i is the index for time steps.
2. j is the index for sample paths.
Four Evolution Types
1. Element-wise
2. Path-wise
3. Slice-wise
4. Holistic
6. Monte Carlo Evolution Type II
Element-Wise
X_0^1 → X_1^1 → X_2^1 · · · X_N^1 → X_0^2 → X_1^2 → X_2^2 · · · X_N^2 → X_0^3 → X_1^3 · · ·
1. Low level evolution type.
2. A single value at a time.
3. Not necessary to evolve the entire path.
4. For knock out barrier option, can stop when barrier is hit.
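The knock-out case can be sketched as an element-wise loop that abandons a path the moment the barrier is hit. This is a hypothetical up-and-out call (barrier B = 130 and all parameters are illustrative, not from the slides):

```python
import numpy as np

def up_and_out_call(S0=100.0, K=100.0, B=130.0, r=0.05, sigma=0.2,
                    T=1.0, n_steps=32, n_paths=20_000, seed=7):
    """Element-wise evolution: advance one value at a time and stop a
    path as soon as the barrier is breached (its payoff is then zero)."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    drift = (r - 0.5 * sigma**2) * dt
    vol = sigma * np.sqrt(dt)
    total = 0.0
    for _ in range(n_paths):
        S = S0
        alive = True
        for _ in range(n_steps):
            S *= np.exp(drift + vol * rng.standard_normal())
            if S >= B:        # barrier hit: no need to evolve further
                alive = False
                break
        if alive:
            total += max(S - K, 0.0)
    return np.exp(-r * T) * total / n_paths

price = up_and_out_call()
```

Here the early `break` is exactly the saving that element-wise evolution offers over evolving every full path.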
7. Monte Carlo Evolution Type III
Path-Wise
(X̂_0^1, · · · , X̂_N^1)
↓
(X̂_0^j, · · · , X̂_N^j)
↓
(X̂_0^M, · · · , X̂_N^M)
1. One path at a time.
2. Suitable for path-dependent option.
3. Suitable for valuing and hedging a book of options.
4. An application is adjoint method (Giles and Glasserman (2006)).
8. Monte Carlo Evolution Type IV
Slice-Wise
(X̂_0^1, · · · , X̂_0^M)′ → (X̂_1^1, · · · , X̂_1^M)′ → · · · → (X̂_i^1, · · · , X̂_i^M)′ → · · · → (X̂_N^1, · · · , X̂_N^M)′
1. One slice at a time.
2. Do not always evolve forward in time.
3. Suitable for using variance reduction.
4. An example is Brownian bridge with stratified sampling.
5. Another application is Longstaff and Schwartz Monte Carlo.
9. Monte Carlo Evolution Type V
Holistic
X̂_0^1, · · · , X̂_N^1
X̂_0^2, · · · , X̂_N^2
...
X̂_0^M, · · · , X̂_N^M
1. The whole set of sample paths at one time.
2. Suitable when the whole set of paths is needed.
3. Variance reduction: moment matching methods.
4. Has to store everything.
10. Implementing a Monte Carlo
To Simulate a Path of Asset Value One Needs
1. A random number generator.
2. A Wiener path generator.
3. A scheme for discretizing SDE.
Simulation Procedure
1. A discrete Wiener path, W = (W0 , W1 , · · · , WN ).
2. Back out normal variables, Zi = Wi − Wi−1 , i = 1, · · · , N.
3. Choose a proper discretization scheme for SDE.
4. Use Zi to construct a path for SDE, S = (S0 , S1 , · · · , SN ).
I will talk about these in turn.
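The simulation procedure above can be sketched end to end. Illustrative Vasicek dynamics and made-up parameter values are used for the SDE step:

```python
import numpy as np

rng = np.random.default_rng(0)
n_steps, T = 16, 1.0
dt = T / n_steps

# 1. A discrete Wiener path W = (W_0, ..., W_N), here by Euler forward
#    evolution; a Brownian bridge generator could be swapped in.
W = np.concatenate(([0.0], np.cumsum(np.sqrt(dt) * rng.standard_normal(n_steps))))

# 2. Back out the normal variables Z_i = W_i - W_{i-1}.
Z = np.diff(W)

# 3./4. Choose a discretization (Euler here) and drive it with the Z_i.
#       Illustrative Vasicek dynamics: dX = alpha*(mu - X) dt + sigma dW.
alpha, mu, sigma, X0 = 1.0, 0.05, 0.01, 0.03
X = np.empty(n_steps + 1)
X[0] = X0
for i in range(n_steps):
    X[i + 1] = X[i] + alpha * (mu - X[i]) * dt + sigma * Z[i]
```

Separating steps 1-2 from steps 3-4 is what lets the Wiener path generator and the SDE scheme vary independently.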
11. Wiener Path Generator I
Definition of Wiener Path
1. Independent increment: ∆Wi = Wi − Wi−1 , i = 1, · · · , N.
2. Gaussian increment: Wi − Wi−1 ∼ N(0, ∆t).
3. Continuous in t a.s.
4. W0 = 0 a.s.
Ways of Simulating a Discrete Wiener Path
1. Euler discretization.
2. Brownian bridge.
3. Spectral decomposition (do not discuss).
12. Wiener Path Generator II
Euler Discretization
1. W_i = W_{i−1} + √∆t · ε_i, ε_i ∼ i.i.d. N(0, 1).
2. Forward evolution, W0 → W1 → W2 → · · · → WN .
3. N multiplications and N − 1 additions.
Brownian Bridge
1. Given Wi and Wk , simulate Wj , i < j < k, as
W_j = [(t_k − t_j)/(t_k − t_i)] W_i + [(t_j − t_i)/(t_k − t_i)] W_k + √[(t_j − t_i)(t_k − t_j)/(t_k − t_i)] ε
where ε ∼ N(0, 1).
2. Binary chop evolution
W0 → WN → WN/2 → WN/4 → W3N/4 → WN/8 → W3N/8 · · · .
3. 3N multiplications and 2N additions.
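A minimal Python sketch of the binary-chop construction (assuming N is a power of two; the recursion below visits midpoints depth-first, which differs from the listed schedule only in traversal order, not in distribution):

```python
import numpy as np

def brownian_bridge_path(n=16, T=1.0, rng=None):
    """Binary chop: draw the final point W_T first, then repeatedly fill
    midpoints conditional on the two bracketing values."""
    rng = np.random.default_rng(0) if rng is None else rng
    t = np.linspace(0.0, T, n + 1)
    W = np.empty(n + 1)
    W[0] = 0.0
    W[n] = np.sqrt(T) * rng.standard_normal()   # first draw: final point

    def fill(i, k):
        if k - i < 2:
            return
        j = (i + k) // 2
        # conditional mean and variance of W_j given W_i and W_k
        mean = ((t[k] - t[j]) * W[i] + (t[j] - t[i]) * W[k]) / (t[k] - t[i])
        var = (t[j] - t[i]) * (t[k] - t[j]) / (t[k] - t[i])
        W[j] = mean + np.sqrt(var) * rng.standard_normal()
        fill(i, j)
        fill(j, k)

    fill(0, n)
    return W

W = brownian_bridge_path()
```

The increments of the resulting path are again i.i.d. N(0, ∆t), so any SDE scheme can consume them unchanged; the point of the ordering is that the first draws carry most of the path's variance, which is what stratification and Sobol numbers exploit.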
13. Wiener Path Generator III
Sobol Brownian Bridge
1. First few draws determine the main skeleton of the Wiener path.
2. Fill the first draw (final point) with stratified samples.
3. Fill up the subsequent draws using Sobol numbers.
4. Can also use a mixture of Sobol and random numbers.
NAG Library Functions
1. Function G05YMF in Fortran Mark 22.
2. Generates Sobol numbers up to 50,000 dimensions.
3. Sufficient for any reasonable applications.
4. Digitally scrambled Sobol generators allow a standard error to be computed.
5. .DLL callable from Excel/VBA.
6. Brownian bridge is in the next release.
7. Will have GPU version of Sobol Brownian bridge.
14. Wiener Path Generator: Numerical Example
Geometric Average Rate Option
1. Option on geometric average of values at reset dates.
2. Closed-form formula (Kemna and Vorst (1990)).
Model Parameters (GBM)
1. Initial asset value, S0 = 100.
2. Constant interest rate, r = 0.05.
3. Asset volatility, σ = 0.2.
Option Parameters
1. Maturity T = 1yr.
2. Strike X = 100.
3. Reset dates T_i = i/16, i = 0, 1, · · · , 16.
15. Wiener Path Generator: Convergence Results
Sobol Brownian Bridge with Different Levels of Stratifications
[Figure: pricing bias versus replication (0-100). Legend: 0 stratifications (E = 3.6E−2), 1 stratification (E = 5.4E−4), 4 stratifications (E = 5.2E−5), 16 stratifications (E = 4.2E−8).]
Figure: Convergence with Different Levels of Stratifications, ∆t = 1/16, Efficiency E = SE² × CPU, Explicit Value 5.8417
16. Wiener Path Generator: Be Aware of the Deterministic Bias
[Figure: pricing bias (×10⁻³ scale) versus replication (0-100) for the fully stratified Sobol Brownian bridge.]
Figure: Deterministic Bias of Fully Stratified Sobol Brownian Bridge
17. Discretization Scheme I
General Asset SDE
dX = a(t, X)dt + b(t, X) · dW
1. X_t = (X_t^1, · · · , X_t^d)′ is a vector of factors.
2. W_t = (W_t^1, · · · , W_t^d)′ is a d-dimensional Brownian motion.
3. a = a_{d×1}, b = b_{d×d} and bb′ = D, the factor covariance matrix.
Discretization
1. Choose a mesh size δ = max_i ∆t_i.
2. Approximate X_t by its time-discrete version X_t^δ.
3. Need Lipschitz and linear growth bound conditions on a and b.
4. Measure the closeness of X_t^δ to X_t by weak and strong criteria.
18. Discretization Scheme II
Weak and Strong Convergence
1. Strong convergence:
E|X_T^δ − X_T| ≤ C δ^β,
where δ = max_i ∆t_i and C is independent of δ.
2. Weak convergence:
|E[f(X_T^δ)] − E[f(X_T)]| ≤ C δ^β,
where f ∈ C^{2(β+1)} with polynomial growth.
3. β is known as the order of convergence.
We look at Euler, Milstein and higher order Itô-Taylor schemes.
19. Discretization Scheme III
Euler Scheme
X̂_{i+1} = X̂_i + a(t, X̂_i) ∆t + b(t, X̂_i) · ∆W_i
1. a ≡ a(t, X̂_i) and b ≡ b(t, X̂_i), for t ∈ [t_i, t_{i+1}].
2. O(∆t) weak convergence, O(√∆t) strong convergence.
3. Simplest but crude.
Milstein Scheme
X̂_{i+1}^k = X̂_i^k + a^k ∆t + Σ_l b^{k,l} ∆W_i^l + Σ_{l,m,n} b^{n,m} (∂b^{k,l}/∂X^n) I_{m,l}
where I_{m,l} = ∫_{t_i}^{t_{i+1}} ∫_{t_i}^t dW_s^m dW_t^l.
1. Weak convergence O(∆t), strong convergence O(∆t).
2. Need evaluation of I_{m,l}.
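The difference in strong order between the two schemes is easy to see on GBM, where b = σX so the Milstein correction reduces to (1/2)σ²X(∆W² − ∆t). A sketch with illustrative parameters, measuring E|X_T^δ − X_T| against the exact GBM solution driven by the same Brownian increments:

```python
import numpy as np

def strong_error(scheme, n_steps, n_paths=20_000, S0=100.0, r=0.05,
                 sigma=0.2, T=1.0, seed=3):
    """E|X_T^delta - X_T| for a one-step scheme on GBM, pathwise
    against the exact solution built from the same increments."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    dW = np.sqrt(dt) * rng.standard_normal((n_paths, n_steps))
    X = np.full(n_paths, S0)
    for i in range(n_steps):
        X = scheme(X, dt, dW[:, i], r, sigma)
    exact = S0 * np.exp((r - 0.5 * sigma**2) * T + sigma * dW.sum(axis=1))
    return np.abs(X - exact).mean()

def euler(X, dt, dW, r, sigma):
    return X + r * X * dt + sigma * X * dW

def milstein(X, dt, dW, r, sigma):
    # 1-D correction: (1/2) b (db/dX) (dW^2 - dt) = (1/2) sigma^2 X (dW^2 - dt)
    return euler(X, dt, dW, r, sigma) + 0.5 * sigma**2 * X * (dW**2 - dt)

e_euler, e_milstein = strong_error(euler, 32), strong_error(milstein, 32)
```

With ∆t = 1/32 the Milstein error should sit well below the Euler error, reflecting strong order 1 versus 1/2.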
20. Discretization Scheme IV
Iterated Itô Integral Im,l
1. Can show
I_{m,m} = (1/2) ((∆W_i^m)² − ∆t),
I_{m,l} + I_{l,m} = ∆W_i^m ∆W_i^l.
2. If Σ_n b^{n,m} ∂b^{k,l}/∂X^n = Σ_n b^{n,l} ∂b^{k,m}/∂X^n for all k, l, m, can take I_{m,l} = (1/2) ∆W_i^m ∆W_i^l.
3. For weak approximations, can approximate by moment matching
I_{m,l} ≈ (1/2) (∆W_i^m ∆W_i^l + V_{m,l})
where P(V_{m,l} = ±∆t) = 1/2 for m < l, V_{l,m} = −V_{m,l}, and V_{m,m} = −∆t.
4. For strong approximations, see Kloeden and Platen (1999).
21. Discretization Scheme V
Strong O(∆t^{3/2}) Itô-Taylor
∆X^i = a^i ∆t + Σ_j b^{i,j} ∆W^j + Σ_{j1,j2,k} b^{k,j1} (∂b^{i,j2}/∂X^k) I_{j1,j2}
+ Σ_j [∂b^{i,j}/∂t + Σ_k a^k ∂b^{i,j}/∂X^k + (1/2) Σ_{j1,k,l} b^{k,j1} b^{l,j1} ∂²b^{i,j}/(∂X^k ∂X^l)] I_{0,j}
+ Σ_{j,k} b^{k,j} (∂a^i/∂X^k) I_{j,0}
+ (1/2) [∂a^i/∂t + Σ_k a^k ∂a^i/∂X^k + (1/2) Σ_{j,k,l} b^{k,j} b^{l,j} ∂²a^i/(∂X^k ∂X^l)] ∆t²
+ Σ_{j1,j2,j3,k,l} b^{l,j1} [(∂b^{k,j2}/∂X^l)(∂b^{i,j3}/∂X^k) + b^{k,j2} ∂²b^{i,j3}/(∂X^k ∂X^l)] I_{j1,j2,j3}
where I_{j1,j2,j3} = ∫_t^{t+∆t} ∫_t^{s2} ∫_t^{s1} dW_u^{j1} dW_{s1}^{j2} dW_{s2}^{j3}, I_{0,j} = ∫_t^{t+∆t} ∫_t^s du dW_s^j, and I_{j,0} = ∫_t^{t+∆t} ∫_t^s dW_u^j ds.
22. Discretization Scheme VI
Iterated Itô Integrals I_{j1,j2,j3}, I_{0,j} and I_{j,0}
1. Have properties
I_{j,j,j} = (1/2) ((1/3) (∆W^j)² − ∆t) ∆W^j,
I_{0,j} = ∆W^j ∆t − I_{j,0},
I_{j,0} ∼ (1/2) ∆t (∆W^j + ∆y/√3),
where ∆y ∼ N(0, ∆t) is independent of ∆W^j.
2. For approximation of I_{j1,j2,j3}, see Kloeden and Platen (1999).
3. For strong approximations, I_{j,0} must be simulated.
4. For weak approximations, can approximate I_{j,0} by
I_{j,0} ≈ E[I_{j,0} | ∆W^j] = (1/2) ∆t ∆W^j.
23. Discretization Scheme VII
Weak O(∆t²) Itô-Taylor
∆X^i = a^i ∆t + Σ_j b^{i,j} ∆W^j + Σ_{j1,j2,k} b^{k,j1} (∂b^{i,j2}/∂X^k) I_{j1,j2}
+ Σ_j [∂b^{i,j}/∂t + Σ_k a^k ∂b^{i,j}/∂X^k + (1/2) Σ_{j1,k,l} b^{k,j1} b^{l,j1} ∂²b^{i,j}/(∂X^k ∂X^l)] I_{0,j}
+ Σ_{j,k} b^{k,j} (∂a^i/∂X^k) I_{j,0}
+ (1/2) [∂a^i/∂t + Σ_k a^k ∂a^i/∂X^k + (1/2) Σ_{j,k,l} b^{k,j} b^{l,j} ∂²a^i/(∂X^k ∂X^l)] ∆t²
1. Do away with I_{j1,j2,j3}.
2. I_{0,j} ≈ I_{j,0} ≈ (1/2) ∆t ∆W^j.
3. I_{j1,j2} ≈ (1/2) (∆W^{j1} ∆W^{j2} + V_{j1,j2}) where P(V_{j1,j2} = ±∆t) = 1/2.
24. Discretization Scheme VIII
O(∆t) Predictor-Corrector
X̂_{i+1} = X̂_i + [p · ā(t, X̄_{i+1}) + (1 − p) · ā(t, X̂_i)] ∆t
+ [q · b(t, X̄_{i+1}) + (1 − q) · b(t, X̂_i)] · ∆W_i,
where the Euler predictor is
X̄_{i+1} = X̂_i + a(t, X̂_i) ∆t + b(t, X̂_i) · ∆W_i,
and the adjusted drift is
ā^i = a^i − q Σ_{j,k} b^{j,k} ∂b^{i,k}/∂X^j.
1. Easy to implement when ∂b^{i,k}/∂X^j = 0, ∀ i, j, k.
2. Important scheme for LIBOR market model.
We apply various schemes to GBM and Vasicek processes.
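A one-dimensional sketch of the predictor-corrector scheme for GBM with p = q = 1/2 (illustrative parameters; for GBM a = rX and b = σX, so the adjusted drift is ā = (r − qσ²)X):

```python
import numpy as np

def pc_step(X, dt, dW, r, sigma, p=0.5, q=0.5):
    """One predictor-corrector step for GBM (a = r*X, b = sigma*X),
    with adjusted drift abar = (r - q*sigma**2) * X."""
    abar = lambda x: (r - q * sigma**2) * x
    b = lambda x: sigma * x
    X_pred = X + r * X * dt + b(X) * dW                    # Euler predictor
    return (X + (p * abar(X_pred) + (1 - p) * abar(X)) * dt
              + (q * b(X_pred) + (1 - q) * b(X)) * dW)     # corrector

rng = np.random.default_rng(5)
n_paths, n_steps, T = 100_000, 16, 1.0
dt = T / n_steps
X = np.full(n_paths, 100.0)
for _ in range(n_steps):
    X = pc_step(X, dt, np.sqrt(dt) * rng.standard_normal(n_paths), 0.05, 0.2)
mean_XT = X.mean()
```

As a quick sanity check, the terminal mean should be close to S_0 e^{rT}, since the drift adjustment compensates for the implicitness in the diffusion term.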
33. Exact Simulation I
Analytical Distribution
1. For some SDE the analytical asset distribution is known.
2. In GBM the asset value X_T has a lognormal distribution:
X_T = X_0 exp((r − σ²/2) T + σ√T ε), ε ∼ N(0, 1).
3. For Vasicek:
X_T = X_0 e^{−αT} + µ(1 − e^{−αT}) + ε √(σ²(1 − e^{−2αT})/(2α)), ε ∼ N(0, 1).
4. For CIR, XT has non-central χ2 distribution.
5. For Heston, see Broadie and Kaya (2006), Andersen (2007), etc.
6. Can simulate directly from XT in one step.
Requires other random number generators.
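One-step exact simulation for the two distributions above can be sketched as follows (illustrative parameters; only Gaussian draws are needed in these two cases):

```python
import numpy as np

rng = np.random.default_rng(2)
n, T = 500_000, 1.0

# GBM: X_T is lognormal, drawn in a single step.
S0, r, sigma = 100.0, 0.05, 0.2
eps = rng.standard_normal(n)
ST = S0 * np.exp((r - 0.5 * sigma**2) * T + sigma * np.sqrt(T) * eps)

# Vasicek (Ornstein-Uhlenbeck): X_T is Gaussian with known mean/variance.
X0, alpha, mu, s = 0.03, 1.0, 0.05, 0.01
eps2 = rng.standard_normal(n)
XT = (X0 * np.exp(-alpha * T) + mu * (1.0 - np.exp(-alpha * T))
      + eps2 * np.sqrt(s**2 / (2.0 * alpha) * (1.0 - np.exp(-2.0 * alpha * T))))
```

Because there is no time stepping, these samples carry no discretization bias; the only error left is the Monte Carlo standard error.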
34. Exact Simulation II
Random Numbers
1. For GBM, needs lognormal.
2. For Vasicek, needs Gaussian.
3. For CIR, needs Gaussian, Poisson and Gamma.
4. For Heston, needs Gaussian, Poisson, Gamma and Bessel.
NAG Random Number Generators
1. Chapter G05 is on random number generators.
2. Available in NAG Fortran Mark 22 and C Mark 9.
3. NAG Toolbox for MATLAB.
35. References on Monte Carlo
Monte Carlo in Finance
1. Peter Jäckel (2002), a good introduction.
2. Paul Glasserman (2003), deeper in theory.
Discretization Schemes
Kloeden and Platen (1999), numerical solutions of SDE.
Computation
1. Mark Joshi (2008), a good introduction on C++ OOP.
2. Nick Webber (Wiley forthcoming), more on OOP C++ and VBA.
3. NAG library documentation on Monte Carlo components.
Benchmark with Closed-Form Formulae
1. NAG library Chapter S30 (Black-Scholes, Merton, Heston, etc).
2. Available in NAG Fortran Mark 22 and C Mark 9.