My talk at the International Conference on Computational Finance 2019 (ICCF2019). The talk is about designing new efficient methods for option pricing under the rough Bergomi model.
Hierarchical Deterministic Quadrature Methods for Option Pricing under the Ro...
Chiheb Ben Hammouda
Conference talk at the SIAM Conference on Financial Mathematics and Engineering, held in virtual format, June 1-4, 2021, about our recently published work "Hierarchical adaptive sparse grids and quasi-Monte Carlo for option pricing under the rough Bergomi model".
- Link of the paper: https://www.tandfonline.com/doi/abs/10.1080/14697688.2020.1744700
Numerical Smoothing and Hierarchical Approximations for Efficient Option Pricin...
Chiheb Ben Hammouda
My talk at the "Stochastic Numerics and Statistical Learning: Theory and Applications" Workshop at KAUST (King Abdullah University of Science and Technology), May 23, 2022, about my recent works "Numerical Smoothing with Hierarchical Adaptive Sparse Grids and Quasi-Monte Carlo Methods for Efficient Option Pricing" and "Multilevel Monte Carlo combined with numerical smoothing for robust and efficient option pricing and density estimation".
My talk entitled "Numerical Smoothing and Hierarchical Approximations for Efficient Option Pricing and Density Estimation", that I gave at the "International Conference on Computational Finance (ICCF)", Wuppertal June 6-10, 2022. The talk is related to our recent works "Numerical Smoothing with Hierarchical Adaptive Sparse Grids and Quasi-Monte Carlo Methods for Efficient Option Pricing" (link: https://arxiv.org/abs/2111.01874) and "Multilevel Monte Carlo combined with numerical smoothing for robust and efficient option pricing and density estimation" (link: https://arxiv.org/abs/2003.05708). In these two works, we introduce the numerical smoothing technique that improves the regularity of observables when approximating expectations (or the related integration problems). We provide a smoothness analysis and we show how this technique leads to better performance for the different methods that we used (i) adaptive sparse grids, (ii) Quasi-Monte Carlo, and (iii) multilevel Monte Carlo. Our applications are option pricing and density estimation. Our approach is generic and can be applied to solve a broad class of problems, particularly for approximating distribution functions, financial Greeks computation, and risk estimation.
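Since the abstract above centers on quasi-Monte Carlo for option pricing, here is a minimal, self-contained one-dimensional sketch of the QMC ingredient: price a European call under plain Black-Scholes dynamics by pushing a low-discrepancy (van der Corput) sequence through the inverse normal CDF. This is only an illustration; it implements none of the numerical smoothing, adaptive sparse grids, or MLMC machinery of the papers, and the model parameters are invented for the example.

```python
import math
from statistics import NormalDist

def bs_call(S0, K, r, sigma, T):
    """Black-Scholes closed-form call price, used as the reference value."""
    d1 = (math.log(S0 / K) + (r + 0.5 * sigma**2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    N = NormalDist().cdf
    return S0 * N(d1) - K * math.exp(-r * T) * N(d2)

def van_der_corput(n, base=2):
    """First n points of the base-2 van der Corput low-discrepancy sequence."""
    pts = []
    for i in range(1, n + 1):
        q, denom, x = i, 1, 0.0
        while q:
            q, rem = divmod(q, base)
            denom *= base
            x += rem / denom
        pts.append(x)
    return pts

def qmc_call(S0, K, r, sigma, T, n=4096):
    """QMC call price: map low-discrepancy points through the inverse normal
    CDF, then through the discounted terminal payoff."""
    inv = NormalDist().inv_cdf
    total = 0.0
    for u in van_der_corput(n):
        z = inv(u)
        ST = S0 * math.exp((r - 0.5 * sigma**2) * T + sigma * math.sqrt(T) * z)
        total += max(ST - K, 0.0)
    return math.exp(-r * T) * total / n
```

With a few thousand points the QMC estimate lands close to the closed-form price; the papers' contribution is precisely about retaining this kind of efficiency when the payoff is not smooth and the dimension is high.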
To describe the dynamics taking place in networks whose structure changes over time, we propose an approach that searches for attributes whose value changes impact the topology of the graph. In several applications, the variations of a group of attributes are often followed by structural changes in the graph that they may plausibly have triggered. We formalize the triggering pattern discovery problem as a method jointly rooted in sequence mining and graph analysis. We apply our approach to three real-world dynamic graphs of different natures - a co-authoring network, an airline network, and a social bookmarking system - assessing the relevance of the triggering pattern mining approach.
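The abstract leaves the notion of a "triggering" attribute informal. As a toy, hypothetical illustration (not the authors' actual formalization), one could score each attribute by the fraction of its change events that are followed within a small time window by some structural change of the graph:

```python
import bisect

def triggering_scores(attr_events, struct_events, delta=1):
    """Toy 'triggering' score: for each attribute, the fraction of its change
    events that are followed within `delta` time steps by at least one
    structural (edge) change. attr_events maps attribute -> sorted event
    times; struct_events is a sorted list of structural-change times."""
    scores = {}
    for attr, times in attr_events.items():
        hits = sum(
            1
            for t in times
            # any structural event s with t < s <= t + delta ?
            if bisect.bisect_right(struct_events, t + delta)
            > bisect.bisect_right(struct_events, t)
        )
        scores[attr] = hits / len(times) if times else 0.0
    return scores
```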
My talk at the Workshop on Numerical Analysis of Stochastic Partial Differential Equations (NASPDE), held at Eurandom, Eindhoven University of Technology, May 16, 2023, about my recent works (i) "Numerical Smoothing with Hierarchical Adaptive Sparse Grids and Quasi-Monte Carlo Methods for Efficient Option Pricing" (link: https://doi.org/10.1080/14697688.2022.2135455), and (ii) "Multilevel Monte Carlo with Numerical Smoothing for Robust and Efficient Computation of Probabilities and Densities" (link: https://arxiv.org/abs/2003.05708).
In this talk we describe a methodology for handling causality when making inference on common-cause failures (CCF) in a situation of missing data. The data are collected in the form of a contingency table, but the available information consists only of the numbers of CCF of different orders and the numbers of failures due to a given cause. Therefore only the margins of the contingency table are observed; the frequencies in each cell are unknown. Assuming a Poisson model for the counts, we suggest a Bayesian approach and use the inverse Bayes formula (IBF) combined with a Metropolis-Hastings algorithm to make inference on the rate of occurrence for each (cause, order) combination. The performance of the resulting algorithm is evaluated through simulations. A comparison is made with results obtained from the _-composition approach to causality suggested by Zheng et al. (2013).
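To make the sampling step concrete, the sketch below runs a random-walk Metropolis-Hastings chain on a deliberately simple conjugate model, a single Poisson rate with a Gamma prior, where the exact (Gamma) posterior is available to check against. It illustrates only the MH ingredient of the talk, not the inverse Bayes formula or the missing-margins structure, and the prior and data are invented for the example.

```python
import math
import random

def mh_poisson_rate(data, a=1.0, b=1.0, n_iter=20000, step=0.5, seed=0):
    """Random-walk Metropolis-Hastings for the rate `lam` of i.i.d. Poisson
    counts `data` under a Gamma(a, b) prior. The log-posterior is used up to
    an additive constant, which is all MH needs."""
    rng = random.Random(seed)

    def log_post(lam):
        if lam <= 0:
            return -math.inf  # rate must be positive
        return (a - 1 + sum(data)) * math.log(lam) - (b + len(data)) * lam

    lam, samples = 1.0, []
    for _ in range(n_iter):
        prop = lam + rng.gauss(0.0, step)  # symmetric proposal
        if math.log(rng.random()) < log_post(prop) - log_post(lam):
            lam = prop  # accept
        samples.append(lam)
    return samples[n_iter // 2:]  # drop the first half as burn-in
```

For data [3, 5, 4, 6, 2] with a = b = 1 the exact posterior is Gamma(21, 6), with mean 3.5, which the chain's sample mean should approach.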
My talk at the Mathematical Finance Seminar at Humboldt-Universität zu Berlin, October 27, 2022, about my recent works (i) "Numerical Smoothing with Hierarchical Adaptive Sparse Grids and Quasi-Monte Carlo Methods for Efficient Option Pricing" (link: https://arxiv.org/abs/2111.01874), (ii) "Multilevel Monte Carlo combined with numerical smoothing for robust and efficient option pricing and density estimation" (link: https://arxiv.org/abs/2003.05708), and (iii) "Optimal Damping with Hierarchical Adaptive Quadrature for Efficient Fourier Pricing of Multi-Asset Options in Lévy Models" (link: https://arxiv.org/abs/2203.08196).
We present recent results on the numerical analysis of quasi-Monte Carlo quadrature methods, applied to forward and inverse uncertainty quantification for elliptic and parabolic PDEs. Particular attention will be placed on higher-order QMC, the stable and efficient generation of interlaced polynomial lattice rules, and the numerical analysis of multilevel QMC finite element discretizations with applications to computational uncertainty quantification.
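As a much simpler relative of the interlaced polynomial lattice rules mentioned in the abstract, here is a sketch of a plain rank-1 lattice rule in two dimensions with the classical Fibonacci generating vector (n = 144, z = (1, 89)); the generating vector and test integrand are chosen only for illustration.

```python
def rank1_lattice(n, z):
    """Points of a rank-1 lattice rule: x_i = frac(i * z / n), i = 0..n-1."""
    return [[(i * zj / n) % 1.0 for zj in z] for i in range(n)]

def lattice_integrate(f, n, z):
    """Equal-weight quadrature rule over the lattice points."""
    pts = rank1_lattice(n, z)
    return sum(f(x) for x in pts) / n
```

For the smooth integrand f(x, y) = x·y over the unit square (exact value 1/4), the 144-point Fibonacci lattice already gives a small error; higher-order and interlaced constructions push the convergence rate further for sufficiently smooth integrands.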
The word optimal is used in different ways in mesh generation. It could mean that the output is, in some sense, "the best mesh", or that the algorithm is, by some measure, "the best algorithm". One might hope that the best algorithm also produces the best mesh, but perhaps some tradeoffs are necessary. In this talk, I will survey several different notions of optimality in mesh generation and explore the tradeoffs between them. The bias will be towards Delaunay/Voronoi methods.
ZK Study Club: Sumcheck Arguments and Their Applications
Alex Pruden
Talk given at the ZK Study Club by Jonathan Bootle and Katerina Sotiraki about the universality of sumcheck arguments and their importance in zero-knowledge cryptography.
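As a hedged sketch of the protocol's mechanics (a textbook honest-prover version, not the generalized "sumcheck arguments" of the talk): to verify H = Σ over x in {0,1}^n of g(x) for a multilinear g over a field, the prover sends one degree-1 round polynomial per variable, and the verifier checks each against the running claim at a random point.

```python
import random

P = 2**61 - 1  # a prime modulus; the protocol works over any large field

def sumcheck(g, n, rng=None):
    """Honest-prover run of the sumcheck protocol for a multilinear
    polynomial g in n variables over F_P: verifies the claim
    H = sum of g(x) over x in {0,1}^n, one variable per round."""
    rng = rng or random.Random(0)

    def partial_sum(prefix, x_i):
        # Sum g with the first variables fixed to prefix + [x_i] and the
        # remaining variables ranging over {0,1}.
        rest = n - len(prefix) - 1
        total = 0
        for bits in range(2 ** rest):
            point = prefix + [x_i] + [(bits >> k) & 1 for k in range(rest)]
            total = (total + g(point)) % P
        return total

    claim = H = (partial_sum([], 0) + partial_sum([], 1)) % P
    prefix = []
    for _ in range(n):
        s0, s1 = partial_sum(prefix, 0), partial_sum(prefix, 1)
        # Round polynomial s(X) = s0 + (s1 - s0) * X (degree 1 since g is
        # multilinear); the verifier checks s(0) + s(1) against the claim.
        if (s0 + s1) % P != claim:
            return H, False
        r = rng.randrange(P)
        claim = (s0 + (s1 - s0) * r) % P
        prefix.append(r)
    # Final check: evaluate g at the random point chosen over the rounds.
    return H, (g(prefix) % P == claim)
```

Soundness comes from the Schwartz-Zippel lemma: a cheating prover survives each round with probability at most deg/|F|.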
A generalized class of normalized distance functions called Q-Metrics is described in this presentation. The Q-Metrics approach relies on a unique functional, using a single bounded parameter (Lambda), which characterizes the conventional distance functions in a normalized per-unit metric space. In addition to this coverage property, a distinguishing and extremely attractive characteristic of the Q-Metric function is its low computational complexity. Q-Metrics satisfy the standard metric axioms. Novel networks for classification and regression tasks are defined and constructed using Q-Metrics. These new networks are shown to outperform conventional feed forward back propagation networks with the same size when tested on real data sets.
Practical and Worst-Case Efficient Apportionment
Raphael Reitzig
Proportional apportionment is the problem of assigning seats to parties according to their relative share of votes. Divisor methods are the de facto standard solution, used in many countries.
In recent literature, there are two algorithms that implement divisor methods: one by Cheng and Eppstein (ISAAC, 2014) has worst-case optimal running time but is complex, while the other (Pukelsheim, 2014) is relatively simple and fast in practice but does not offer worst-case guarantees.
This talk presents the ideas behind a novel algorithm that avoids the shortcomings of both. We investigate the three contenders in order to determine which is most useful in practice.
Read more over here: http://reitzig.github.io/publications/RW2015b
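For context, the simplest way to run a divisor method is the iterative highest-averages loop below, shown for the D'Hondt divisor sequence 1, 2, 3, …; the talk's algorithms are about doing better than this naive O(k log n) baseline. The vote counts in the usage example are invented.

```python
import heapq

def dhondt(votes, seats):
    """Naive highest-averages loop for the D'Hondt divisor method: repeatedly
    award the next seat to the party with the largest quotient
    votes / (seats_won + 1), using a max-heap of quotients."""
    alloc = {party: 0 for party in votes}
    heap = [(-v, party) for party, v in votes.items()]  # negate for max-heap
    heapq.heapify(heap)
    for _ in range(seats):
        _, party = heapq.heappop(heap)
        alloc[party] += 1
        heapq.heappush(heap, (-votes[party] / (alloc[party] + 1), party))
    return alloc
```

For example, dhondt({"A": 340, "B": 280, "C": 160, "D": 60}, 7) awards 3, 3, 1, and 0 seats respectively.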
This presentation provides an overview of decision-making in organisations and introduces a new language called ESL that uses actors to create an executable model that can be analysed. A number of small examples of ESL are shown. The presentation concludes with a larger case study that addresses the recent demonetisation event in India.
Observation of Io’s Resurfacing via Plume Deposition Using Ground-based Adapt...
Sérgio Sacani
Since volcanic activity was first discovered on Io from Voyager images in 1979, changes on Io’s surface have been monitored from both spacecraft and ground-based telescopes. Here, we present the highest spatial resolution images of Io ever obtained from a ground-based telescope. These images, acquired by the SHARK-VIS instrument on the Large Binocular Telescope, show evidence of a major resurfacing event on Io’s trailing hemisphere. When compared to the most recent spacecraft images, the SHARK-VIS images show that a plume deposit from a powerful eruption at Pillan Patera has covered part of the long-lived Pele plume deposit. Although this type of resurfacing event may be common on Io, few have been detected due to the rarity of spacecraft visits and the previously low spatial resolution available from Earth-based telescopes. The SHARK-VIS instrument ushers in a new era of high-resolution imaging of Io’s surface using adaptive optics at visible wavelengths.
Seminar of U.V. Spectroscopy
SAMIR PANDA
Spectroscopy is a branch of science dealing with the study of the interaction of electromagnetic radiation with matter.
Ultraviolet-visible spectroscopy refers to absorption or reflectance spectroscopy in the UV-VIS spectral region.
Ultraviolet-visible spectroscopy is an analytical method that measures the amount of light absorbed by the analyte.
This presentation explores a brief idea about the structural and functional attributes of nucleotides, the structure and function of genetic materials along with the impact of UV rays and pH upon them.
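The quantitative core of UV-Vis absorption measurements is the Beer-Lambert law, A = ε·c·l: absorbance is proportional to the molar absorptivity ε, the concentration c, and the path length l. A minimal sketch, with example values invented for illustration:

```python
def absorbance(epsilon, conc, path_cm=1.0):
    """Beer-Lambert law: A = epsilon * c * l, with epsilon in L/(mol*cm),
    conc in mol/L, and path length in cm."""
    return epsilon * conc * path_cm

def transmittance(absorbance_value):
    """Fraction of incident light transmitted: T = 10**(-A)."""
    return 10.0 ** (-absorbance_value)
```

For example, a chromophore with ε = 6000 L/(mol·cm) at c = 5×10⁻⁵ mol/L in a 1 cm cuvette gives A = 0.3, i.e. roughly 50% transmittance.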
Deep Behavioral Phenotyping in Systems Neuroscience for Functional Atlasing a...
Ana Luísa Pinho
Functional Magnetic Resonance Imaging (fMRI) provides means to characterize brain activations in response to behavior. However, cognitive neuroscience has been limited to group-level effects referring to the performance of specific tasks. To obtain the functional profile of elementary cognitive mechanisms, the combination of brain responses to many tasks is required. Yet, to date, both structural atlases and parcellation-based activations do not fully account for cognitive function and still present several limitations. Further, they do not adapt overall to individual characteristics. In this talk, I will give an account of deep-behavioral phenotyping strategies, namely data-driven methods in large task-fMRI datasets, to optimize functional brain-data collection and improve inference of effects-of-interest related to mental processes. Key to this approach is the employment of fast multi-functional paradigms rich in features that can be well parametrized and, consequently, facilitate the creation of psycho-physiological constructs to be modelled with imaging data. Particular emphasis will be given to music stimuli when studying high-order cognitive mechanisms, due to their ecological nature and quality to enable complex behavior compounded by discrete entities. I will also discuss how deep-behavioral phenotyping and individualized models applied to neuroimaging data can better account for the subject-specific organization of domain-general cognitive systems in the human brain. Finally, the accumulation of functional brain signatures brings the possibility to clarify relationships among tasks and create a univocal link between brain systems and mental functions through: (1) the development of ontologies proposing an organization of cognitive processes; and (2) brain-network taxonomies describing functional specialization.
To this end, tools to improve commensurability in cognitive science are necessary, such as public repositories, ontology-based platforms and automated meta-analysis tools. I will thus discuss some brain-atlasing resources currently under development, and their applicability in cognitive as well as clinical neuroscience.
THE IMPORTANCE OF MARTIAN ATMOSPHERE SAMPLE RETURN.
Sérgio Sacani
The return of a sample of near-surface atmosphere from Mars would facilitate answers to several first-order science questions surrounding the formation and evolution of the planet. One of the important aspects of terrestrial planet formation in general is the role that primary atmospheres played in influencing the chemistry and structure of the planets and their antecedents. Studies of the martian atmosphere can be used to investigate the role of a primary atmosphere in its history. Atmosphere samples would also inform our understanding of the near-surface chemistry of the planet, and ultimately the prospects for life. High-precision isotopic analyses of constituent gases are needed to address these questions, requiring that the analyses are made on returned samples rather than in situ.
Earliest Galaxies in the JADES Origins Field: Luminosity Function and Cosmic ...
Sérgio Sacani
We characterize the earliest galaxy population in the JADES Origins Field (JOF), the deepest imaging field observed with JWST. We make use of the ancillary Hubble optical images (5 filters spanning 0.4–0.9 µm) and novel JWST images with 14 filters spanning 0.8–5 µm, including 7 medium-band filters, and reaching total exposure times of up to 46 hours per filter. We combine all our data at > 2.3 µm to construct an ultradeep image, reaching as deep as ≈ 31.4 AB mag in the stack and 30.3–31.0 AB mag (5σ, r = 0.1″ circular aperture) in individual filters. We measure photometric redshifts and use robust selection criteria to identify a sample of eight galaxy candidates at redshifts z = 11.5–15. These objects show compact half-light radii of R_1/2 ∼ 50–200 pc, stellar masses of M⋆ ∼ 10^7–10^8 M⊙, and star-formation rates of SFR ∼ 0.1–1 M⊙ yr^−1. Our search finds no candidates at 15 < z < 20, placing upper limits at these redshifts. We develop a forward modeling approach to infer the properties of the evolving luminosity function without binning in redshift or luminosity that marginalizes over the photometric redshift uncertainty of our candidate galaxies and incorporates the impact of non-detections. We find a z = 12 luminosity function in good agreement with prior results, and that the luminosity function normalization and UV luminosity density decline by a factor of ∼ 2.5 from z = 12 to z = 14. We discuss the possible implications of our results in the context of theoretical models for evolution of the dark matter halo mass function.
Nutraceutical market, scope and growth: Herbal drug technology
Lokesh Patil
As consumer awareness of health and wellness rises, the nutraceutical market (which includes goods like functional meals, drinks, and dietary supplements that provide health advantages beyond basic nutrition) is growing significantly. As healthcare expenses rise, the population ages, and people increasingly want natural and preventative health solutions, this industry is expanding quickly. Product formulation innovations and the use of cutting-edge technology for customized nutrition further drive market expansion. With its worldwide reach, the nutraceutical industry is expected to keep growing and to provide significant opportunities for research and investment in a number of categories, including vitamins, minerals, probiotics, and herbal supplements.
DERIVATION OF MODIFIED BERNOULLI EQUATION WITH VISCOUS EFFECTS AND TERMINAL V...
Wasswaderrick3
In this book, we use conservation of energy techniques on a fluid element to derive the Modified Bernoulli equation of flow with viscous or friction effects. We derive the general equation of flow/velocity, and from this we derive the Poiseuille flow equation, the transition flow equation, and the turbulent flow equation. In situations where there are no viscous effects, the equation reduces to the Bernoulli equation. From experimental results, we are able to include other terms in the Bernoulli equation. We also look at cases where pressure gradients exist. We use the Modified Bernoulli equation to derive equations of flow rate for pipes of different cross-sectional areas connected together. We also extend our energy conservation techniques to a sphere falling in a viscous medium under the effect of gravity. We demonstrate Stokes' equation of terminal velocity and the turbulent flow equation. We look at a way of calculating the time taken for a body to fall in a viscous medium, and at the general equation of terminal velocity.
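As one concrete formula from this material, Stokes' law gives the terminal velocity of a small sphere settling in a viscous fluid, v_t = 2(rho_p - rho_f)·g·r² / (9μ), valid in the low-Reynolds-number (laminar) regime. A minimal sketch, with example values chosen only for illustration:

```python
def stokes_terminal_velocity(radius_m, rho_particle, rho_fluid, mu, g=9.81):
    """Stokes' law terminal velocity (laminar, low Reynolds number):
    v_t = 2 * (rho_p - rho_f) * g * r**2 / (9 * mu), in m/s."""
    return 2.0 * (rho_particle - rho_fluid) * g * radius_m**2 / (9.0 * mu)
```

For instance, a 1 mm-radius steel sphere (rho_p = 7800 kg/m³) in glycerine (rho_f ≈ 1260 kg/m³, μ ≈ 1.2 Pa·s) settles at roughly 1.2 cm/s.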
Is Sized Typing for Coq Practical?
1. Journal of Functional Programming, 33(e1), January 2023
Jonathan Chan (University of British Columbia, University of Pennsylvania)
Yufeng (Michael) Li (University of Waterloo, University of Cambridge)
William J. Bowman (University of British Columbia)
4-5. Termination Checking: Guardedness

Fixpoint div n m : nat :=
  match n with
  | S n′ => S (div (minus n′ m) m)
  | O => O
  end.
❌ (rejected: the guardedness checker cannot see that minus n′ m is structurally smaller than n)

Fixpoint minus n m : nat :=
  match n, m with
  | S n′, S m′ => minus n′ m′
  | _, _ => O
  end.
✅ (accepted: the recursive call is on the direct subterm n′)
6-7. Termination Checking: Sized Typing

Size expressions: s ⩴ v | s+1 | ∞

                        Γ ⊢ n : nat^s
─────────────────      ───────────────────
Γ ⊢ O : nat^(s+1)      Γ ⊢ S n : nat^(s+1)

Γ ⊢ n : nat^(s+1)    Γ ⊢ e1 : P O    Γ, m : nat^s ⊢ e2 : P (S m)
─────────────────────────────────────────────────────────────────
Γ ⊢ match n with | O => e1 | S m => e2 end : P n
8. Termination Checking: Sized Typing

Fixpoint minus : nat^v -> nat -> nat^v.

Fixpoint div (n : nat^(v+1)) (m : nat) : nat^(v+1) :=
  match n with
  | S n′ => S (div (minus n′ m) m)
  | O => O
  end.

Since minus is size-preserving, n′ : nat^v gives minus n′ m : nat^v, so the recursive call div (minus n′ m) m is made at the smaller size v.
33. Real Example: MSets/MSetList.v (mean over five trials)

Unsized compilation (s)   15.122 ± 0.073
Sized compilation (s)     83.660 ± 0.286
Slowdown                  5.5×
SAT ops only (s)          64.600 ± 0.437
SAT ops only (%)          77.2%

34. Real Example: MSets/MSetList.v
[Figure: log distribution of |𝒱| × |𝒞| during SAT operations, roughly ~4K variables × ~250 constraints.]
39. Concrete sized naturals

O : nat^(v+1)          O : nat^(v+2)
S O : nat^(v+2)        S O : nat^(v+3)
S (S O) : nat^(v+3)    S (S O) : nat^(v+4)
40. Real Example: setoid_ring/Field_theory.v (mean over two trials, August 2023)

Unsized compilation (s)   17.815 ± 0.545
Sized compilation (s)     106.87 ± 1.94
Slowdown                  6.0×
SAT ops only (s)          84.70
SAT ops only (%)          79.3%

17755 variables × 14057 constraints ≈ 250M
41. Artificial Example: Universe Polymorphism

Set Printing Universes.
Set Universe Polymorphism.
Time Definition T1 : Type :=
  Type -> Type -> Type -> Type -> Type -> Type. Print T1.
Time Definition T2 : Type :=
  T1 -> T1 -> T1 -> T1 -> T1 -> T1. Print T2.
Time Definition T3 : Type :=
  T2 -> T2 -> T2 -> T2 -> T2 -> T2. Print T3.
Time Definition T4 : Type :=
  T3 -> T3 -> T3 -> T3 -> T3 -> T3. Print T4.
Time Definition T5 : Type :=
  T4 -> T4 -> T4 -> T4 -> T4 -> T4. Print T5.
Time Definition T6 : Type :=
  T5 -> T5 -> T5 -> T5 -> T5 -> T5. Print T6.
Time Definition T7 : Type :=
  T6 -> T6 -> T6 -> T6 -> T6 -> T6. Print T7.

Definition   #u       Time (s)
T1           7        ~0
T2           43       0.002
T3           259      0.026
T4           1555     0.057
T5           9331     0.374
T6           55987    3.300
T7           335921   18.170