INTERVIEW PREPARATION FOR JUNIOR QUANTS
This document is intended to provide a preparation framework ahead of interviews for junior quantitative analyst (financial engineering) positions. It is not meant to be exhaustive, and more in-depth reading should be undertaken as part of the preparation process.
1) Introduction
For an overview guide of the responsibilities that a Financial Engineer can have, please ask us for a copy of
“Quantitative Research” (another Michael Page publication, 2003). In summary, Quants are typically expected to get
involved with the mathematics and implementation of models for pricing derivative instruments. The focus of what a Quant does really depends on the nature of his/her role, but invariably a deep, fundamental understanding of both maths and programming is key.
2) Basic Maths:
A number of our clients have mentioned that the failing of many junior candidates they interview is their inability to solve problems from the fundamentals. For example, whilst over 90% can apply Ito's lemma in the context of Black-Scholes to calculate the price of an option where a stochastic process is involved, over half of these people are not able to solve it as an ODE once the stochastic component has been removed. This suggests two things:
(i) they have merely learnt the solution by memory; and/or
(ii) they are only aware of standard methodologies;
in either case without understanding the fundamental mechanics of the model. What clients are looking for at the entry to two-year level is strong competency on the basics and strong (almost intuitive) mathematical and logical problem-solving skills that suggest you will in future be capable of learning the difficult and complex processes applied to harder problems. If you slip up on the basics it is difficult to judge your potential for coping with the more challenging products, or indeed your longer-term potential to bring innovative ideas to the forum. We would therefore suggest that prior to interview you spend at least 40-50% of your time consolidating your understanding of the basic maths applicable to quantitative finance (see Table 1). A further 30-40% of your time should be spent programming (C++), and the rest learning about the products and their structures.
Table 1: Areas of maths that are fundamental for a Quant interview
Calculus:
  Functions of a single variable:
    o Ordinary calculus
    o Ordinary differential equations
    o Solution methods
    o Basic numerical integration
    o Simple integral equations
  Functions of two or more variables:
    o Partial differential calculus
    o Partial differential equations
    o Classification
    o The diffusion equation
    o Solution methods
    o Basic numerical methods
  Matrices:
    o Matrix manipulation
    o Eigenvalues and eigenvectors
    o Exponentiation

Series/sequences:
  o Taylor series
  o Maclaurin series
  o Convergence tests

Probability/Statistics:
  Elementary probability theory:
    o Distributions, discrete and continuous
    o First and second moments (mean and variance)
    o Higher moments (skew and kurtosis)
    o Important distributions
    o Several variables
    o Correlation
    o Central Limit Theorem
  Elementary statistics:
    o Data representation
    o Regression
    o Confidence intervals
    o Hypothesis testing
  Random walks:
    o Trinomial
    o Transition probability density functions
    o Deterministic equations from random behaviour
3) Basic Financial Maths:
You should then make sure you have mastered the basic maths behind derivative pricing theory: e.g. you understand Black-Scholes and are able not only to solve the equation using at least two different methods but also to discuss its application in a real-world context. A complaint from clients is that some candidates are not "pragmatic" mathematicians, i.e. even when they have a good understanding of how an equation works, they can be too zealous about the equation itself rather than its application and practicality.
Example: When asked to price an option, the "purists" dive straight into finding the solution with the most precision; however, before attempting any question like this it is important to know the precision limits required, as this may fundamentally change the approach one takes (speed vs precision optimization).
In terms of specific maths, look at: Markov processes, Ito processes, Ito's lemma, Wiener/Brownian motion, stochastic calculus, PDEs, and Monte Carlo techniques. All of these areas have specific applications in derivative pricing.
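As a touchstone for how these topics connect, it is worth being able to write down, unprompted, how Ito's lemma applied to geometric Brownian motion leads to the Black-Scholes equation. The standard key step is

$$dS_t = \mu S_t\,dt + \sigma S_t\,dW_t \quad\Longrightarrow\quad dV = \left(\frac{\partial V}{\partial t} + \mu S\frac{\partial V}{\partial S} + \frac{1}{2}\sigma^2 S^2\frac{\partial^2 V}{\partial S^2}\right)dt + \sigma S\frac{\partial V}{\partial S}\,dW_t,$$

and eliminating the $dW_t$ term with a delta-hedged portfolio gives

$$\frac{\partial V}{\partial t} + \frac{1}{2}\sigma^2 S^2\frac{\partial^2 V}{\partial S^2} + rS\frac{\partial V}{\partial S} - rV = 0.$$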
If you want to appear credible in front of the interviewer, it goes without saying that you should understand the terminology and simple behaviour of vanilla derivatives, and be able to talk about this from a mathematical perspective.
Finally, as a basic rule, ensure you are able to talk in depth about anything you have put on your CV. Even if it is something you have not done for a few years, make sure you can give a thorough overview of the projects you have worked on and a good summary of the considerations behind the technical decisions you made during them. Be prepared to be questioned on these, and make sure you have re-familiarized yourself with the subject areas involved.
4) “Complex” Maths
"Complex" mathematical questions typically only contain complexity in the way that they are structured. Within an interview for an entry/junior-level position, if you are presented with a highly complex problem the interviewer will often talk you through the approach to solving it, giving you pointers where you get stuck. All you need to ensure is that you have mastered the basics and that you show you understand the approach being taught to you by the interviewer.
With regard to the more complex problems, interviewers are looking for creativity in finding solutions. It is difficult to be creative with mathematical rules unless
(i) you know them well and can therefore apply them accurately; and
(ii) you have been actively applying them to a multitude of problems and therefore understand the different approaches to finding solutions.
Sample Questions on the basics
• Evaluate the following integrals:

(a) $\int_0^{\infty} x^2 e^{-x^2/2}\,dx$   (b) $\int_0^{t} e^x \cos x\,dx$   (c) $\int \frac{dt}{t^2 + 5t + 6}$
• Solve the following ordinary differential equations for y(x):

(a) $y' + 6xy = 0, \quad y(0) = 1$
(b) $y'' + y' - 6y = 0, \quad y(0) = 1, \; y'(0) = 0$
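As a check on your own working: (a) is separable and (b) has constant coefficients, so

$$\text{(a)}\;\; \frac{dy}{y} = -6x\,dx \;\Rightarrow\; y(x) = e^{-3x^2}, \qquad \text{(b)}\;\; r^2 + r - 6 = 0 \;\Rightarrow\; r = 2,\,-3 \;\Rightarrow\; y(x) = \tfrac{1}{5}\left(3e^{2x} + 2e^{-3x}\right),$$

where the constants in (b) follow from the initial conditions $y(0) = 1$ and $y'(0) = 0$.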
• Solve the partial differential equation

$$\frac{\partial u}{\partial t} = \frac{\partial^2 u}{\partial x^2}$$
• Find all eigenvalues and all (normalised) eigenvectors of the matrix

$$\begin{pmatrix} 2 & 1 \\ 1 & 2 \end{pmatrix}$$
• Calculate $E[e^X]$, where $X$ follows a normal distribution $N(\mu, \sigma)$ with mean $\mu$ and standard deviation $\sigma$.
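For self-checking: completing the square inside the Gaussian integral gives the standard lognormal-mean result

$$E[e^X] = \int_{-\infty}^{\infty} e^x\,\frac{1}{\sigma\sqrt{2\pi}}\,e^{-(x-\mu)^2/2\sigma^2}\,dx = e^{\mu + \sigma^2/2}.$$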
• If

$$dX_t = a(X_\infty - X_t)\,dt + s\,dW_t$$

and $f$ is a function of $X$ and $t$, calculate $df$, where $a$, $X_\infty$ and $s$ are constants. What if the Brownian motion term above is zero (i.e. $s\,dW_t = 0$)?
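This is a mean-reverting (Ornstein-Uhlenbeck type) process, and the expected route is Ito's lemma:

$$df = \left(\frac{\partial f}{\partial t} + a(X_\infty - X_t)\frac{\partial f}{\partial x} + \frac{s^2}{2}\frac{\partial^2 f}{\partial x^2}\right)dt + s\frac{\partial f}{\partial x}\,dW_t.$$

With $s\,dW_t = 0$ the second-order term vanishes and $df$ reduces to the ordinary chain rule: exactly the "solve it as an ODE once the stochastic component has been removed" situation described in section 2.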
• Write down the probability density functions of the normal and log-normal distributions.
• You are dealt 13 cards randomly from a pack of 52. What is the probability your hand contains exactly 2 aces?
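For self-checking, the last question is a hypergeometric count:

$$P(\text{exactly 2 aces}) = \frac{\binom{4}{2}\binom{48}{11}}{\binom{52}{13}} \approx 0.213.$$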
4) Object Orientation and C++
Software development in C++ is one of the key technologies employed by global financial institutions, particularly because of its support for object-oriented (OO) programming. This has resulted in a minimum requirement for all quantitative professionals to have a solid core background in C++. Often entry-level or junior candidates will have used C++ sporadically in relation to one or two isolated problems (e.g. implementing a PDE solver), but lack a deeper understanding of the core programming principles (particularly OO) and of the language that would let them apply it easily to a wider range of problems. You should aim to learn C++ as a skill/subject in its own right, so that you are able to apply it with a high degree of fluency to a general range of problems and circumstances.
Areas of theory to cover:
Table 2: Areas of programming that are fundamental for a Quant interview
Variables, types and expressions:
  o Identifiers
  o Data types
  o Declarations
  o Constants and enumerations
  o Assignment and expressions

Branch and loop statements:
  o Boolean values
  o Expressions and functions
  o 'For', 'While' and 'Do...While' loops
  o Multiple selection and switch statements
  o Blocks and scoping

Functions and procedural abstraction:
  o User-defined functions
  o Value and reference parameters
  o Polymorphism and overloading
  o Procedural abstraction and good programming style
  o Splitting programs into different files

Files and streams:
  o Input and output using files and streams
  o Streams as arguments to functions
  o Input and output using '<<' and '>>'

Arrays and strings:
  o Declaring arrays and strings
  o Arrays as parameters
  o Sorting arrays
  o Two-dimensional arrays
  o String manipulation

Pointers:
  o Declaring pointers
  o The '*', '&', 'new' and 'delete' operators
  o Pointer arithmetic
  o Automatic and dynamic variables

Recursion:
  o Recursion and iteration
  o Mechanics of a recursive call
  o Recursive data structures
  o Quicksort

Classes:
  o The object-oriented paradigm
  o Encapsulation and inheritance in C++
  o Constructors, friends and overloaded operators
  o Static members

Numerical methods:
  o Approximating a PDF/CDF
  o Solutions of linear systems
  o Direct methods of solution and iterative techniques
  o Numerical integration
  o Power method
  o Explicit and implicit finite difference methods for parabolic PDEs
  o Monte Carlo method
Sample questions:
Basic:
• What is the difference between a pointer and a reference?
• When would you use a pointer/reference?
• What does it mean to declare a function or variable as static?
• What is a class?
• What is the difference between a struct and a class in C++?
• What is the purpose of a constructor/destructor?
• What is a constructor, destructor, default constructor, copy constructor?
• What does it mean to declare a member function as virtual/static?
• What is virtual inheritance?
• What is polymorphism?
• What is the most difficult program you have had to write?
Intermediate:
• What happens when you have a non-virtual method in a base class and a method of the same name in a derived
class?
• What about “overriding” a virtual method in a base class with one in a derived class? Why doesn’t this work the
same?
• Can you call a virtual function in a base class when you have overridden it?
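The intermediate questions above all probe the same mechanism: name lookup on the static (declared) type versus dispatch on the dynamic type. A minimal sketch of the behaviour (the class and member names are purely illustrative):

```cpp
#include <iostream>

struct Base {
    void info()          { std::cout << "Base::info\n"; }   // non-virtual: hidden, not overridden
    virtual void price() { std::cout << "Base::price\n"; }  // virtual: can be overridden
    virtual ~Base() = default;                              // virtual destructor for polymorphic use
};

struct Derived : Base {
    void info()           { std::cout << "Derived::info\n"; }  // hides Base::info
    void price() override {
        Base::price();                    // an overridden base virtual can still be called explicitly
        std::cout << "Derived::price\n";
    }
};

int main() {
    Derived d;
    Base& b = d;
    b.info();   // prints "Base::info": resolved at compile time on the static type
    b.price();  // prints "Base::price" then "Derived::price": resolved at run time
}
```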
Other:
• How could you determine if a linked list contains a cycle in it?
• How would you reverse a doubly linked list?
• Write a function to sum the numbers 1 to n.
• How would you traverse a binary tree?
• Write a program to produce the Fibonacci series.
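As an indication of the style of answer expected, the cycle question is normally answered with Floyd's two-pointer ("tortoise and hare") technique; the Node type below is just an illustrative singly linked list:

```cpp
struct Node {
    int   value;
    Node* next;
};

// Floyd's cycle detection: a slow pointer advances one node per step and a
// fast pointer two; if the list contains a cycle the fast pointer eventually
// laps the slow one and they meet. Runs in O(n) time and O(1) space.
bool hasCycle(const Node* head) {
    const Node* slow = head;
    const Node* fast = head;
    while (fast != nullptr && fast->next != nullptr) {
        slow = slow->next;
        fast = fast->next->next;
        if (slow == fast) return true;   // pointers met inside a cycle
    }
    return false;                        // fast reached the end: no cycle
}
```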
Useful online resource: http://www.parashift.com/c++-faq-lite
These are merely general questions and should be easily answerable in detail if a comprehensive study of C++ has been undertaken. It is likely that you will also face questions where you are given a sample of code and asked what is wrong with it; you may also be given a function and asked to determine what it will output. In addition to learning the theory, it is advisable to put it into practice by doing as much implementation as possible. You may find it a useful exercise to implement a framework in which to price a variety of options using the Black-Scholes formula and any appropriate extensions.
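A bare-bones starting point for that exercise might look like the sketch below (the function names and parameterisation are one possible choice, not a prescribed design); the exercise proper is then to wrap this in sensible classes (option, payoff, pricer) and add extensions such as puts via put-call parity, Greeks, and a Monte Carlo cross-check:

```cpp
#include <cmath>
#include <iostream>

// Standard normal cumulative distribution function via the complementary error function.
double normCdf(double x) {
    return 0.5 * std::erfc(-x / std::sqrt(2.0));
}

// Black-Scholes price of a European call on a non-dividend-paying stock.
// S: spot, K: strike, r: continuously compounded risk-free rate,
// sigma: volatility, T: time to expiry in years.
double callPrice(double S, double K, double r, double sigma, double T) {
    const double sqrtT = std::sqrt(T);
    const double d1 = (std::log(S / K) + (r + 0.5 * sigma * sigma) * T) / (sigma * sqrtT);
    const double d2 = d1 - sigma * sqrtT;
    return S * normCdf(d1) - K * std::exp(-r * T) * normCdf(d2);
}

int main() {
    // Example: at-the-money one-year call, 5% rate, 20% volatility; approx 10.45.
    std::cout << callPrice(100.0, 100.0, 0.05, 0.20, 1.0) << "\n";
}
```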
5) Problem Solving Questions/Brainteasers.
It is likely that during the course of interviews you will be asked some “brainteaser” type questions designed to test
your intuition for problem solving. The solutions are often mathematical but can also require simple logic or lateral
thinking. Often there can be several solutions, some more optimized than others. Interviewers are looking to see your
thought process in solving the problem and will usually require you to prove your answers.
Sample questions:
• What is the sum of all the numbers between 1 and 1000?
• How would you sum a series of 1 to n numbers? Demonstrate proof for this.
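The proof interviewers usually want is Gauss's pairing argument: write the sum forwards and backwards and add term by term,

$$2S = \sum_{k=1}^{n} k + \sum_{k=1}^{n} (n+1-k) = n(n+1) \;\Rightarrow\; S = \frac{n(n+1)}{2},$$

which for the first question (taking "between" inclusively) gives $1000 \times 1001 / 2 = 500{,}500$.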
• You are given a set of balance scales which you are to use to weigh eight balls. Seven of these balls have the same weight; the eighth ball is heavier than the rest. What is the minimum number of weighings you could perform to find the heavier of the eight balls?
• Same as above but with 12 balls?
• To qualify for a race, you need to average 60 mph driving two laps around a 1 mile long track. You have some sort
of engine difficulty the first lap so that you only average 30 mph during that lap; how fast do you have to drive the
second lap to average 60 for both of them?
• A river is flowing downstream at 15 mph relative to the shore. A rowing team is practicing rowing and at first they
row upstream (against the current). They can only go 1.5 mph relative to the shore at this rate. The guy at the back
end of the boat is wearing a hat when they begin, but after a while his hat falls into the water (and floats) and it is
15 minutes before they notice it. They then instantaneously reverse direction and row back to catch up with the hat,
rowing with the same strength or power they were rowing with before. How long will it take them to catch up with
the hat as it is pushed downstream by the current?
• There are 10 open boxes containing 100 coins each. In 9 of these boxes the coins are made of gold, and in the other
the coins are made of copper. You are given a large digital balance which can be used once only. Can you identify
the box containing copper coins knowing the weight of both gold and copper coins?
• A bag contains a total of N balls, each either blue or red. If five balls are randomly chosen from the bag, the probability is precisely 1/2 that all five balls are blue. What is the smallest value of N for which this is possible? (Hint: try different numbers of blue/red balls to get to the answer.)
• You are given 5 bags containing 100 coins each. The bags can contain coins of 3 different types that look identical.
The first type weighs 9 grams, the second type 10 and the third type 11 grams. Each bag contains coins of equal
weight but you do not know how many of the 5 bags are of the different types. (i.e. all 5 bags might well contain 9
gram coins as far as you are concerned). You are given a huge digital balance. How many times do you need to use
the balance to clearly determine the type of coin contained in each bag?
• You are playing Russian roulette with a six-chamber revolver; you load 2 bullets into the revolver in adjacent chambers. You spin the barrel, place the gun to your head and pull the trigger, and you don't shoot yourself. You now have the option of either spinning the barrel again or pulling the trigger straight away: which do you take?
• You are in a boat on a lake, and in the boat there is a suitcase. You throw the suitcase over the side of the boat. What happens to the level of the water in the lake? Does it rise, fall or stay the same?
• How many manhole covers are there in London?
• How many petrol stations are there in the UK?
• You are gambling on the roll of a fair six-sided die; in this game if you roll a 1 you get $1, if you roll a 2 you get $2, if you roll a 3 you get $3, and so on. What is the expected return after 100 rolls of the die?
(Note: there are a large number of variations on this game; you should spend some time looking at various dice games and probabilities.)
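For the base game above, the check is quick by linearity of expectation: one roll pays $(1+2+\cdots+6)/6 = 3.5$ on average, so 100 independent rolls return $100 \times 3.5 = 350$ in expectation. Variations (such as re-rolls or stopping rules) change this, which is why the note above suggests practising several.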
6) Reading List
Below are some of the more useful books that candidates have found helpful in their preparation.
• Stochastic Differential Equations: An Introduction with Applications, B. Oksendal (ISBN 3540047581)
• Financial Calculus: An Introduction to Derivative Pricing, Martin W. Baxter and Andrew J. O. Rennie (ISBN 0521552893)
• Efficient Methods for Valuing Interest Rate Derivatives (Springer Finance), Anton Pelsser (ISBN 1852333049)
• Pricing and Hedging of Derivative Securities, Lars Tyge Nielsen (ISBN 0198776195)
• A First Course in Probability (International Edition), Sheldon Ross (ISBN 0131218026). Don't be offended by the title! It is packed with good 'applied' problems of the type asked in interviews.
• Beginning C++: The Complete Language, Ivor Horton (ISBN 1590592271)
• Effective C++: 50 Specific Ways to Improve Your Programs and Designs (2nd Edition), Scott Meyers (ISBN 0201924889)
If you wish to discuss any of this material further, please contact:

Doug Ward, Consultant, Quantitative Analysis
Michael Page City, 50 Cannon Street, London EC4N 6JJ, England
Tel: +44 (0) 20 7269 1981 | Mob: +44 (0) 7876 565 803 | Fax: +44 (0) 20 7329 2986
Email: dougward@michaelpage.com

Dr Tony Ofori, Consultant, Quantitative Analysis
Michael Page City, 50 Cannon Street, London EC4N 6JJ, England
Tel: +44 (0) 20 7269 1979 | Mob: +44 (0) 781 517 4459 | Fax: +44 (0) 20 7329 2986
Email: anthonyofori@michaelpage.com

Florence Perdriel, Consultant, Quantitative Analysis
Michael Page City, 50 Cannon Street, London EC4N 6JJ, England
Tel: +44 (0) 20 7269 1848 | Fax: +44 (0) 20 7329 2986
Email: florenceperdriel@michaelpage.com