Program on Quasi-Monte Carlo and High-Dimensional Sampling Methods for Applied Mathematics Opening Workshop, Numerical Integration in Hermite Spaces - Peter Kritzer, Aug 31, 2017
In this talk, we give an overview of results on numerical integration in Hermite spaces. These spaces contain functions defined on $\mathbb{R}^d$, and can be characterized by the decay of their Hermite coefficients. We consider the case of exponentially as well as polynomially decaying Hermite coefficients. For numerical integration, we either use Gauss-Hermite quadrature rules or algorithms based on quasi-Monte Carlo rules. We present upper and lower error bounds for these algorithms, and discuss their dependence on the dimension $d$. Furthermore, we comment on open problems for future research.
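As a one-dimensional sketch of the Gauss-Hermite rules mentioned in the abstract (the talk's actual setting is the $d$-dimensional one), the following uses numpy's nodes and weights to integrate against the weight $e^{-x^2}$; the test function and rule size are illustrative.

```python
import numpy as np

# Gauss-Hermite rule: approximates the integral of f(x) * exp(-x^2) over R.
# An n-point rule is exact for polynomials f of degree <= 2n - 1.
def gauss_hermite(f, n):
    nodes, weights = np.polynomial.hermite.hermgauss(n)
    return np.sum(weights * f(nodes))

# Example: integral of x^2 * exp(-x^2) dx equals sqrt(pi)/2.
approx = gauss_hermite(lambda x: x**2, 5)
```

Since the integrand is a degree-2 polynomial, the 5-point rule reproduces the exact value $\sqrt{\pi}/2$ up to rounding.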
QMC algorithms usually rely on a choice of $N$ evenly distributed integration nodes in $[0,1)^d$. A common means to assess such an equidistribution property for a point set or sequence is the so-called discrepancy function, which compares the actual number of points to the expected number of points (assuming uniform distribution on $[0,1)^{d}$) that lie within an arbitrary axis-parallel rectangle anchored at the origin. The dependence of the integration error of QMC rules on various norms of the discrepancy function is made precise by the well-known Koksma--Hlawka inequality and its variations. In many cases, such as the $L^{p}$ spaces with $1<p<\infty$, the best growth rate in terms of the number of points $N$, as well as corresponding explicit constructions, are known. In the classical setting $p=\infty$, sharp results are absent already for $d\geq3$ and appear to be intriguingly hard to obtain. This talk serves as a survey of discrepancy theory with special emphasis on the $L^{\infty}$ setting. Furthermore, it highlights the evolution of recent techniques and presents the latest results.
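The discrepancy function described above can be sketched directly; the grid scan below is a crude stand-in for the (hard) exact $L^\infty$ computation, and the function names are illustrative.

```python
import numpy as np

# Local discrepancy of a point set P in [0,1)^d at the anchored box [0, x):
# (fraction of points inside the box) minus (volume of the box).
def local_discrepancy(P, x):
    inside = np.all(P < x, axis=1)
    return inside.mean() - np.prod(x)

# Crude lower estimate of the L^inf (star) discrepancy in d = 2 by scanning
# a grid of anchor points; the true sup is over all anchors in [0,1]^2.
def star_discrepancy_estimate(P, grid=20):
    ticks = np.linspace(0.05, 1.0, grid)
    anchors = np.stack(np.meshgrid(ticks, ticks), -1).reshape(-1, 2)
    return max(abs(local_discrepancy(P, a)) for a in anchors)
```

For instance, the single point $(0.5, 0.5)$ has local discrepancy $1 - 0.36 = 0.64$ at the anchor $(0.6, 0.6)$.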
We study QPT (quasi-polynomial tractability) in the worst case setting of linear tensor product problems defined over Hilbert spaces. We prove QPT for algorithms that use only function values under three assumptions:
1. the minimal errors for the univariate case decay polynomially fast to zero,
2. the largest singular value for the univariate case is simple,
3. the eigenfunction corresponding to the largest singular value is a multiple of the function value at some point.
The first two assumptions are necessary for QPT. The third assumption is necessary for QPT for some Hilbert spaces.
Joint work with Erich Novak
The generation of Gaussian random fields over a physical domain is a challenging problem in computational mathematics, especially when the correlation length is short and the field is rough. The traditional approach is to make use of a truncated Karhunen-Loève (KL) expansion, but the generation of even a single realisation of the field may then be effectively beyond reach (especially for 3-dimensional domains) if the need is to obtain an expected $L^2$ error of, say, 5%, because of the potentially very slow convergence of the KL expansion. In this talk, based on joint work with Ivan Graham, Frances Kuo, Dirk Nuyens, and Rob Scheichl, a completely different approach is used, in which the field is initially generated at a regular grid on a 2- or 3-dimensional rectangle that contains the physical domain, and then possibly interpolated to obtain the field at other points. In that case there is no need for any truncation. Rather, the main problem becomes the factorisation of a large dense matrix. For this we use circulant embedding and FFT ideas. Quasi-Monte Carlo integration is then used to evaluate the expected value of some functional of the finite-element solution of an elliptic PDE with a random field as input.
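A minimal one-dimensional sketch of the circulant embedding idea, assuming an exponential covariance (the work described above concerns 2- and 3-dimensional rectangles): the Toeplitz covariance matrix on a regular grid is embedded in a circulant matrix, which the FFT diagonalizes exactly, so a realisation costs one FFT instead of a dense factorisation.

```python
import numpy as np

# 1D sketch of circulant embedding; the exponential covariance is an
# assumed, illustrative choice of kernel.
def sample_field(m, corr_len=0.1, rng=np.random.default_rng(0)):
    h = np.arange(m) / (m - 1)
    row = np.exp(-h / corr_len)                  # first row of the covariance
    circ = np.concatenate([row, row[-2:0:-1]])   # circulant embedding, size 2(m-1)
    lam = np.fft.fft(circ).real                  # circulant eigenvalues
    assert lam.min() > -1e-10, "embedding not positive semidefinite"
    n = len(circ)
    z = rng.standard_normal(n) + 1j * rng.standard_normal(n)
    field = np.fft.fft(np.sqrt(np.maximum(lam, 0) / n) * z)
    return field.real[:m]                        # one realisation on the grid
```

The real and imaginary parts of `field` give two independent realisations; only the real part is returned here for simplicity.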
Classification with mixtures of curved Mahalanobis metrics - Frank Nielsen
Presentation at ICIP 2016.
On slide 4 there is a typo: replace the absolute value by parentheses. The cross-ratio can be negative, and we use the principal complex logarithm.
On Twisted Paraproducts and some other Multilinear Singular Integrals - Vjekoslav Kovač
Presentation.
9th International Conference on Harmonic Analysis and Partial Differential Equations, El Escorial, June 12, 2012.
The 24th International Conference on Operator Theory, Timisoara, July 3, 2012.
Computational Information Geometry: A quick review (ICMS) - Frank Nielsen
From the workshop
Computational information geometry for image and signal processing
Sep 21, 2015 - Sep 25, 2015
ICMS, 15 South College Street, Edinburgh
http://www.icms.org.uk/workshop.php?id=343
The (fast) component-by-component construction of lattice point sets and polynomial lattice point sets is a powerful method to obtain quadrature rules for approximating integrals over the $d$-dimensional unit cube.
In this talk, we present modifications of the component-by-component algorithm and of the more recent successive coordinate search algorithm, which yield savings in the construction cost for lattice rules and polynomial lattice rules in weighted function spaces. The idea is to reduce the size of the search space for coordinates which are associated with small weights and are therefore of less importance to the overall error than coordinates associated with large weights. We analyze tractability conditions of the resulting quasi-Monte Carlo rules, and show some numerical results.
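A rank-1 lattice rule of the kind these constructions produce can be sketched as follows; the generating vector here is the classical two-dimensional Fibonacci choice, not an actual CBC or successive-coordinate-search output.

```python
import numpy as np

# Rank-1 lattice rule: nodes { (i * z / N) mod 1 : i = 0..N-1 } for a
# generating vector z; the QMC estimate is the equal-weight average of f.
def lattice_rule(f, z, N):
    i = np.arange(N)[:, None]
    x = (i * np.asarray(z)[None, :] / N) % 1.0
    return np.mean(f(x))

# Example: f(x) = (1 + x1 - 0.5)(1 + x2 - 0.5) integrates to 1 over [0,1)^2.
f = lambda x: np.prod(1.0 + (x - 0.5), axis=1)
est = lattice_rule(f, z=[1, 34], N=55)  # Fibonacci lattice: N=55, z=(1,34)
```

The CBC algorithm would instead pick the components of `z` one at a time, minimizing a worst-case error in the weighted space at each step.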
In the study of probabilistic integrators for deterministic ordinary differential equations, one goal is to establish the convergence (in an appropriate topology) of the random solutions to the true deterministic solution of an initial value problem defined by some operator. The challenge is to identify the right conditions on the additive noise with which one constructs the probabilistic integrator, so that the convergence of the random solutions has the same order as the underlying deterministic integrator. In the context of ordinary differential equations, Conrad et al. (Stat. Comput., 2017) established the mean-square convergence of the solutions for globally Lipschitz vector fields, under the assumptions of i.i.d., state-independent, mean-zero Gaussian noise. We extend their analysis by considering vector fields that need not be globally Lipschitz, and by considering non-Gaussian, non-i.i.d. noise that can depend on the state and that can have nonzero mean. A key assumption is a uniform moment bound condition on the noise. We obtain convergence in the stronger topology of the uniform norm, and establish results that connect this topology to the regularity of the additive noise. Joint work with A. M. Stuart (Caltech) and T. J. Sullivan (Free University of Berlin).
1. Motivation: why do we need low-rank tensors?
2. Tensors of the second order (matrices)
3. CP, Tucker and tensor train tensor formats
4. Many classical kernels have (or can be approximated in) a low-rank tensor format
5. Post-processing: computation of mean, variance, level sets, frequency
Image sciences, image processing, image restoration, photo manipulation. Image and video representation. Digital versus analog imagery. Quantization and sampling. Sources and models of noise in digital CCD imagery: photon, thermal and readout noise. Sources and models of blurs. Convolutions and point spread functions. Overview of other standard models, problems and tasks: salt-and-pepper and impulse noise, halftoning, inpainting, super-resolution, compressed sensing, high-dynamic-range imagery, demosaicing. Short introduction to other types of imagery: SAR, sonar, ultrasound, CT and MRI. Linear and ill-posed restoration problems.
Double Robustness: Theory and Applications with Missing Data - Lu Mao
When data are missing at random (MAR), complete-case analysis with the full-data estimating equation is in general not valid. To correct the bias, we can employ the inverse probability weighting (IPW) technique on the complete cases. This requires modeling the missing pattern on the observed data (call it the $\pi$ model). The resulting IPW estimator, however, ignores information contained in cases with missing components, and is thus statistically inefficient. Efficiency can be improved by modifying the estimating equation along the lines of the semiparametric efficiency theory of Bickel et al. (1993). This modification usually requires modeling the distribution of the missing component on the observed ones (call it the $\mu$ model). Hence, when both the $\pi$ and the $\mu$ models are correct, the modified estimator is valid and is more efficient than the IPW one. In addition, the modified estimator is "doubly robust" in the sense that it is valid when either the $\pi$ model or the $\mu$ model is correct.
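A minimal simulation sketch of the IPW and doubly robust (AIPW) estimators described above, with the $\pi$ model taken as the truth and the $\mu$ model fit by least squares; the data-generating process and all names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 20000
X = rng.standard_normal(n)
Y = 1.0 + 2.0 * X + rng.standard_normal(n)   # full-data outcome, E[Y] = 1
pi = 1 / (1 + np.exp(-(0.5 + X)))            # pi model: P(R = 1 | X), MAR
R = rng.random(n) < pi                       # R = 1 means Y is observed

# mu model: E[Y | X], fit by least squares on the complete cases only
A = np.column_stack([np.ones(R.sum()), X[R]])
beta, *_ = np.linalg.lstsq(A, Y[R], rcond=None)
mu = beta[0] + beta[1] * X

Yobs = np.where(R, Y, 0.0)                   # missing Y enters only via R = 0
ipw = np.mean(Yobs / pi)                     # inverse probability weighting
aipw = np.mean(Yobs / pi - (R / pi - 1) * mu)  # doubly robust augmentation
```

The augmentation term has mean zero when either model is correct, which is the source of the double robustness; here both are correctly specified, so both estimates are close to 1.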
Essential materials of the slides are extracted from the book "Semiparametric Theory and Missing Data" (Tsiatis, 2006). The slides were originally presented in the class BIOS 773 Statistical Analysis with Missing Data in Spring 2013 at UNC Chapel Hill as a final project.
In this talk, I address two new ideas in sampling geometric objects. The first is a new take on adaptive sampling with respect to the local feature size, i.e., the distance to the medial axis. We recently proved that such samples can be viewed as uniform samples with respect to an alternative metric on the Euclidean space. The second is a generalization of Voronoi refinement sampling. There, one also achieves an adaptive sample while simultaneously "discovering" the underlying sizing function. We show how to construct such samples that are spaced uniformly with respect to the $k$th nearest-neighbor distance function.
Optimal interval clustering: Application to Bregman clustering and statistica... - Frank Nielsen
We present a generic dynamic programming method to compute the optimal clustering of n scalar elements into k pairwise disjoint intervals. This case includes 1D Euclidean k-means, k-medoids, k-medians, k-centers, etc. We extend the method to incorporate cluster size constraints and show how to choose the appropriate k by model selection. Finally, we illustrate and refine the method on two case studies: Bregman clustering and statistical mixture learning maximizing the complete likelihood.
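The dynamic program can be sketched for the 1D $k$-means cost; this $O(kn^2)$ version omits the cluster size constraints, the model selection of $k$, and the Bregman generality discussed in the abstract.

```python
import numpy as np

# Optimal clustering of n sorted scalars into k intervals, 1D k-means cost.
# D[c, j] = best cost of splitting the first j points into c intervals.
def interval_kmeans_cost(x, k):
    x = np.sort(np.asarray(x, float))
    n = len(x)
    s1 = np.concatenate([[0.0], np.cumsum(x)])        # prefix sums
    s2 = np.concatenate([[0.0], np.cumsum(x * x)])    # prefix sums of squares

    def sse(i, j):  # sum of squared deviations of x[i:j] from its mean
        m = j - i
        return s2[j] - s2[i] - (s1[j] - s1[i]) ** 2 / m

    D = np.full((k + 1, n + 1), np.inf)
    D[0, 0] = 0.0
    for c in range(1, k + 1):
        for j in range(c, n + 1):
            D[c, j] = min(D[c - 1, i] + sse(i, j) for i in range(c - 1, j))
    return D[k, n]
```

For example, splitting `[1, 2, 10, 11]` into two intervals costs `0.5 + 0.5 = 1.0`, while one interval costs the total sum of squared deviations, 82.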
http://arxiv.org/abs/1403.2485
Seminar at IEEE Computational Intelligence Society, Singapore Chapter at School of Electrical and Electronic Engineering, NTU, Singapore, 20 February 2019
Recently, the machine learning community has expressed strong interest in applying latent variable modeling strategies to causal inference problems with unobserved confounding. Here, I discuss one of the big debates that occurred over the past year, and how we can move forward. I will focus specifically on the failure of point identification in this setting, and discuss how this can be used to design flexible sensitivity analyses that cleanly separate identified and unidentified components of the causal model.
I will discuss paradigmatic statistical models of inference and learning from high dimensional data, such as sparse PCA and the perceptron neural network, in the sub-linear sparsity regime. In this limit the underlying hidden signal, i.e., the low-rank matrix in PCA or the neural network weights, has a number of non-zero components that scales sub-linearly with the total dimension of the vector. I will provide explicit low-dimensional variational formulas for the asymptotic mutual information between the signal and the data in suitable sparse limits. In the setting of support recovery these formulas imply sharp 0-1 phase transitions for the asymptotic minimum mean-square-error (or generalization error in the neural network setting). A similar phase transition was analyzed recently in the context of sparse high-dimensional linear regression by Reeves et al.
Many different measurement techniques are used to record neural activity in the brains of different organisms, including fMRI, EEG, MEG, lightsheet microscopy and direct recordings with electrodes. Each of these measurement modes has its advantages and disadvantages concerning the resolution of the data in space and time, the directness of the measurement of neural activity, and the organisms to which it can be applied. For some of these modes and for some organisms, significant amounts of data are now available in large standardized open-source datasets. I will report on our efforts to apply causal discovery algorithms to, among others, fMRI data from the Human Connectome Project, and to lightsheet microscopy data from zebrafish larvae. In particular, I will focus on the challenges we have faced both in terms of the nature of the data and the computational features of the discovery algorithms, as well as the modeling of experimental interventions.
Bayesian Additive Regression Trees (BART) has been shown to be an effective framework for modeling nonlinear regression functions, with strong predictive performance in a variety of contexts. The BART prior over a regression function is defined by independent prior distributions on tree structure and leaf or end-node parameters. In observational data settings, Bayesian Causal Forests (BCF) has successfully adapted BART for estimating heterogeneous treatment effects, particularly in cases where standard methods yield biased estimates due to strong confounding.
We introduce BART with Targeted Smoothing, an extension which induces smoothness over a single covariate by replacing independent Gaussian leaf priors with smooth functions. We then introduce a new version of the Bayesian Causal Forest prior, which incorporates targeted smoothing for modeling heterogeneous treatment effects which vary smoothly over a target covariate. We demonstrate the utility of this approach by applying our model to a timely women's health and policy problem: comparing two dosing regimens for an early medical abortion protocol, where the outcome of interest is the probability of a successful early medical abortion procedure at varying gestational ages, conditional on patient covariates. We discuss the benefits of this approach in other women’s health and obstetrics modeling problems where gestational age is a typical covariate.
Difference-in-differences is a widely used evaluation strategy that draws causal inference from observational panel data. Its causal identification relies on the assumption of parallel trends, which is scale-dependent and may be questionable in some applications. A common alternative is a regression model that adjusts for the lagged dependent variable, which rests on the assumption of ignorability conditional on past outcomes. In the context of linear models, Angrist and Pischke (2009) show that the difference-in-differences and lagged-dependent-variable regression estimates have a bracketing relationship. Namely, for a true positive effect, if ignorability is correct, then mistakenly assuming parallel trends will overestimate the effect; in contrast, if the parallel trends assumption is correct, then mistakenly assuming ignorability will underestimate the effect. We show that the same bracketing relationship holds in general nonparametric (model-free) settings. We also extend the result to semiparametric estimation based on inverse probability weighting.
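The two estimators in the bracketing relationship can be sketched for a two-period panel; the data layout and names are illustrative, and the abstract's nonparametric and weighted extensions are not shown.

```python
import numpy as np

# Difference-in-differences: differential change in treated vs control means.
def did_estimate(y_pre, y_post, treated):
    treated = np.asarray(treated, bool)
    change = np.asarray(y_post, float) - np.asarray(y_pre, float)
    return change[treated].mean() - change[~treated].mean()

# Lagged-dependent-variable regression: coefficient on treatment in
# y_post ~ 1 + y_pre + treated, estimated by ordinary least squares.
def ldv_estimate(y_pre, y_post, treated):
    A = np.column_stack([np.ones(len(y_pre)), y_pre,
                         np.asarray(treated, float)])
    coef, *_ = np.linalg.lstsq(A, y_post, rcond=None)
    return coef[2]
```

The bracketing result says that, for a true positive effect, these two estimates sit on opposite sides of the truth depending on which identifying assumption actually holds.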
We develop sensitivity analyses for weak nulls in matched observational studies while allowing unit-level treatment effects to vary. In contrast to randomized experiments and paired observational studies, we show for general matched designs that over a large class of test statistics, any valid sensitivity analysis for the weak null must be unnecessarily conservative if Fisher's sharp null of no treatment effect for any individual also holds. We present a sensitivity analysis valid for the weak null, and illustrate why it is conservative if the sharp null holds through connections to inverse probability weighted estimators. An alternative procedure is presented that is asymptotically sharp if treatment effects are constant, and is valid for the weak null under additional assumptions which may be deemed reasonable by practitioners. The methods may be applied to matched observational studies constructed using any optimal without-replacement matching algorithm, allowing practitioners to assess robustness to hidden bias while allowing for treatment effect heterogeneity.
The world of health care is full of policy interventions: a state expands eligibility rules for its Medicaid program, a medical society changes its recommendations for screening frequency, a hospital implements a new care coordination program. After a policy change, we often want to know, “Did it work?” This is a causal question; we want to know whether the policy CAUSED outcomes to change. One popular way of estimating causal effects of policy interventions is a difference-in-differences study. In this controlled pre-post design, we measure the change in outcomes of people who are exposed to the new policy, comparing average outcomes before and after the policy is implemented. We contrast that change to the change over the same time period in people who were not exposed to the new policy. The differential change in the treated group’s outcomes, compared to the change in the comparison group’s outcomes, may be interpreted as the causal effect of the policy. To do so, we must assume that the comparison group’s outcome change is a good proxy for the treated group’s (counterfactual) outcome change in the absence of the policy. This conceptual simplicity and wide applicability in policy settings makes difference-in-differences an appealing study design. However, the apparent simplicity belies a thicket of conceptual, causal, and statistical complexity. In this talk, I will introduce the fundamentals of difference-in-differences studies and discuss recent innovations including key assumptions and ways to assess their plausibility, estimation, inference, and robustness checks.
We present recent advances and statistical developments for evaluating Dynamic Treatment Regimes (DTR), which allow the treatment to be dynamically tailored according to evolving subject-level data. Identification of an optimal DTR is a key component for precision medicine and personalized health care. Specific topics covered in this talk include several recent projects with robust and flexible methods developed for the above research area. We will first introduce a dynamic statistical learning method, adaptive contrast weighted learning (ACWL), which combines doubly robust semiparametric regression estimators with flexible machine learning methods. We will further develop a tree-based reinforcement learning (T-RL) method, which builds an unsupervised decision tree that maintains the nature of batch-mode reinforcement learning. Unlike ACWL, T-RL handles the optimization problem with multiple treatment comparisons directly through a purity measure constructed with augmented inverse probability weighted estimators. T-RL is robust, efficient and easy to interpret for the identification of optimal DTRs. However, ACWL seems more robust against tree-type misspecification than T-RL when the true optimal DTR is non-tree-type. At the end of this talk, we will also present a new Stochastic-Tree Search method called ST-RL for evaluating optimal DTRs.
A fundamental feature of evaluating causal health effects of air quality regulations is that air pollution moves through space, rendering health outcomes at a particular population location dependent upon regulatory actions taken at multiple, possibly distant, pollution sources. Motivated by studies of the public-health impacts of power plant regulations in the U.S., this talk introduces the novel setting of bipartite causal inference with interference, which arises when 1) treatments are defined on observational units that are distinct from those at which outcomes are measured and 2) there is interference between units in the sense that outcomes for some units depend on the treatments assigned to many other units. Interference in this setting arises due to complex exposure patterns dictated by physical-chemical atmospheric processes of pollution transport, with intervention effects framed as propagating across a bipartite network of power plants and residential zip codes. New causal estimands are introduced for the bipartite setting, along with an estimation approach based on generalized propensity scores for treatments on a network. The new methods are deployed to estimate how emission-reduction technologies implemented at coal-fired power plants causally affect health outcomes among Medicare beneficiaries in the U.S.
Laine Thomas presented information about how causal inference is being used to determine the cost/benefit of the two most common surgical treatments for women - hysterectomy and myomectomy.
We provide an overview of some recent developments in machine learning tools for dynamic treatment regime discovery in precision medicine. The first development is a new off-policy reinforcement learning tool for continual learning in mobile health to enable patients with type 1 diabetes to exercise safely. The second development is a new inverse reinforcement learning tool that enables use of observational data to learn how clinicians balance competing priorities for treating depression and mania in patients with bipolar disorder. Both practical and technical challenges are discussed.
The method of differences-in-differences (DID) is widely used to estimate causal effects. The primary advantage of DID is that it can account for time-invariant bias from unobserved confounders. However, the standard DID estimator will be biased if there is an interaction between history in the after period and the groups. That is, bias will be present if an event besides the treatment occurs at the same time and affects the treated group in a differential fashion. We present a method of bounds based on DID that accounts for an unmeasured confounder that has a differential effect in the post-treatment time period. These DID bracketing bounds are simple to implement and only require partitioning the controls into two separate groups. We also develop two key extensions for DID bracketing bounds. First, we develop a new falsification test to probe the key assumption that is necessary for the bounds estimator to provide consistent estimates of the treatment effect. Next, we develop a method of sensitivity analysis that adjusts the bounds for possible bias based on differences between the treated and control units from the pretreatment period. We apply these DID bracketing bounds and the new methods we develop to an application on the effect of voter identification laws on turnout. Specifically, we focus on estimating whether the enactment of voter identification laws in Georgia and Indiana had an effect on voter turnout.
We study experimental design in large-scale stochastic systems with substantial uncertainty and structured cross-unit interference. We consider the problem of a platform that seeks to optimize supply-side payments p in a centralized marketplace where different suppliers interact via their effects on the overall supply-demand equilibrium, and propose a class of local experimentation schemes that can be used to optimize these payments without perturbing the overall market equilibrium. We show that, as the system size grows, our scheme can estimate the gradient of the platform’s utility with respect to p while perturbing the overall market equilibrium by only a vanishingly small amount. We can then use these gradient estimates to optimize p via any stochastic first-order optimization method. These results stem from the insight that, while the system involves a large number of interacting units, any interference can only be channeled through a small number of key statistics, and this structure allows us to accurately predict feedback effects that arise from global system changes using only information collected while remaining in equilibrium.
We discuss a general roadmap for generating causal inference based on observational studies used to generate real-world evidence. We review targeted minimum loss estimation (TMLE), which provides a general template for the construction of asymptotically efficient plug-in estimators of a target estimand for realistic (i.e., infinite-dimensional) statistical models. TMLE is a two-stage procedure that first involves using ensemble machine learning, termed super-learning, to estimate the relevant stochastic relations between the treatment, censoring, covariates and outcome of interest. The super-learner allows one to fully utilize all the advances in machine learning (in addition to more conventional parametric-model-based estimators) to build a single most powerful ensemble machine learning algorithm. We present the Highly Adaptive Lasso as an important machine learning algorithm to include.
In the second step, TMLE involves maximizing a parametric likelihood along a so-called least favorable parametric model through the super-learner fit of the relevant stochastic relations in the observed data. This second step bridges the state of the art in machine learning to estimators of target estimands for which statistical inference is available (i.e., confidence intervals, p-values, etc.). We also review recent advances in collaborative TMLE, in which the fit of the treatment and censoring mechanism is tailored w.r.t. the performance of the TMLE. We also discuss asymptotically valid bootstrap-based inference. Simulations and data analyses are provided as demonstrations.
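A minimal numerical sketch of the two TMLE steps for the average treatment effect, with plain least squares and the true propensity standing in for the super-learner fits, and a linear fluctuation in place of the logistic one commonly used for bounded outcomes; everything here is illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 20000
X = rng.standard_normal(n)
g = 1 / (1 + np.exp(-X))                    # propensity P(A = 1 | X), known here
A = rng.random(n) < g
Y = 2.0 * A + X + rng.standard_normal(n)    # true average treatment effect = 2

# Step 1: initial outcome regression Q(A, X), fit by least squares
M = np.column_stack([np.ones(n), A, X])
b, *_ = np.linalg.lstsq(M, Y, rcond=None)
Q = M @ b
Q1 = np.column_stack([np.ones(n), np.ones(n), X]) @ b   # Q(1, X)
Q0 = np.column_stack([np.ones(n), np.zeros(n), X]) @ b  # Q(0, X)

# Step 2: targeting -- fluctuate Q along the "clever covariate" H
H = A / g - (1 - A) / (1 - g)
eps = np.sum(H * (Y - Q)) / np.sum(H * H)   # linear-fluctuation solution
ate = np.mean((Q1 + eps / g) - (Q0 - eps / (1 - g)))
```

The targeting step solves the efficient influence function estimating equation, so the plug-in estimate inherits the usual confidence intervals and p-values.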
We describe different approaches for specifying models and prior distributions for estimating heterogeneous treatment effects using Bayesian nonparametric models. We make an affirmative case for direct, informative (or partially informative) prior distributions on heterogeneous treatment effects, especially when treatment effect size and treatment effect variation are small relative to other sources of variability. We also consider how to provide scientifically meaningful summaries of complicated, high-dimensional posterior distributions over heterogeneous treatment effects with appropriate measures of uncertainty.
Climate change mitigation has traditionally been analyzed as some version of a public goods game (PGG) in which a group is most successful if everybody contributes, but players are best off individually by not contributing anything (i.e., “free-riding”)—thereby creating a social dilemma. Analysis of climate change using the PGG and its variants has helped explain why global cooperation on GHG reductions is so difficult, as nations have an incentive to free-ride on the reductions of others. Rather than inspire collective action, it seems that the lack of progress in addressing the climate crisis is driving the search for a “quick fix” technological solution that circumvents the need for cooperation.
This seminar discussed ways in which to produce professional academic writing, from academic papers to research proposals or technical writing in general.
Machine learning (including deep and reinforcement learning) and blockchain are two of the most noticeable technologies in recent years. The first is the foundation of artificial intelligence and big data, and the second has significantly disrupted the financial industry. Both technologies are data-driven, and thus there is rapidly growing interest in integrating them for more secure and efficient data sharing and analysis. In this paper, we review the research on combining blockchain and machine learning technologies and demonstrate that they can collaborate efficiently and effectively. In the end, we point out some future directions and expect more research on deeper integration of the two promising technologies.
In this talk, we discuss QuTrack, a Blockchain-based approach to track experiment and model changes primarily for AI and ML models. In addition, we discuss how change analytics can be used for process improvement and to enhance the model development and deployment processes.
Program on Quasi-Monte Carlo and High-Dimensional Sampling Methods for Applied Mathematics Opening Workshop, Numerical Integration in Hermite Spaces - Peter Kritzer, Aug 31, 2017
1. Numerical Integration in Hermite Spaces
Peter Kritzer
Johann Radon Institute for Computational and Applied Mathematics (RICAM)
Austrian Academy of Sciences
Linz, Austria
Joint work with C. Irrgeher (RICAM, Linz),
G. Leobacher (KFU Graz)
and F. Pillichshammer (JKU Linz)
SAMSI QMC Workshop 2017, August 2017
Research supported by the Austrian Science Fund, Project F5506-N26
Peter Kritzer Numerical Integration in Hermite Spaces 1
2. Introduction
3. Approximate the $d$-dimensional integral
$$I_d(f) := \int_{\mathbb{R}^d} f(x)\,\varphi_d(x)\,\mathrm{d}x$$
for $f$ in a normed function space $(H_d, \|\cdot\|_{H_d})$ by a linear algorithm
$$A_{N,d} = \sum_{k=1}^{N} w_k\, f(x_k),$$
where
$$\varphi_d(x) = \frac{1}{(2\pi)^{d/2}} \exp\left(-\frac{x \cdot x}{2}\right).$$
4. Integration error of $A_{N,d}$ for $f \in H_d$:
$$\mathrm{err}(A_{N,d}, f) = I_d(f) - A_{N,d}(f).$$
Worst case integration error of $A_{N,d}$:
$$e^{\mathrm{wor}}(A_{N,d}, H_d) := \sup_{f \in H_d,\, \|f\|_{H_d} \le 1} |\mathrm{err}(A_{N,d}, f)|.$$
$N$-th minimal worst case error:
$$e^{\mathrm{wor}}(N, H_d) := \inf_{A_{N,d}} e^{\mathrm{wor}}(A_{N,d}, H_d).$$
5. Hermite spaces
6. The Hermite polynomials $H_k$, $k \in \mathbb{N}_0^d$, form an ONB of $L_2(\mathbb{R}^d, \varphi_d)$.
The $k$-th ($k \in \mathbb{N}_0$) normalized univariate (probabilists') Hermite polynomial is
$$H_k(x) = \frac{(-1)^k}{\sqrt{k!}} \exp(x^2/2)\, \frac{\mathrm{d}^k}{\mathrm{d}x^k} \exp(-x^2/2).$$
Multivariate version: for $k = (k_1, \ldots, k_d) \in \mathbb{N}_0^d$ and $x = (x_1, \ldots, x_d) \in \mathbb{R}^d$,
$$H_k(x) = \prod_{j=1}^{d} H_{k_j}(x_j).$$
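As a numerical aside (not part of the talk), the normalized polynomials can be generated by the stable three-term recurrence $H_{n+1}(x) = (x\,H_n(x) - \sqrt{n}\,H_{n-1}(x))/\sqrt{n+1}$, and their orthonormality in $L_2(\mathbb{R}, \varphi)$ checked with NumPy's probabilists' Gauss-Hermite nodes; the helper name below is mine:

```python
import numpy as np

def hermite_normalized(n_max, x):
    """Normalized probabilists' Hermite polynomials H_0..H_{n_max} at points x,
    via the recurrence H_{n+1}(x) = (x H_n(x) - sqrt(n) H_{n-1}(x)) / sqrt(n+1)."""
    x = np.asarray(x, dtype=float)
    H = np.zeros((n_max + 1,) + x.shape)
    H[0] = 1.0
    if n_max >= 1:
        H[1] = x
    for n in range(1, n_max):
        H[n + 1] = (x * H[n] - np.sqrt(n) * H[n - 1]) / np.sqrt(n + 1)
    return H

# Orthonormality check with a Gauss-Hermite rule that is exact for all
# polynomial products involved (degree < 2*deg).
deg, n_max = 20, 8
nodes, w = np.polynomial.hermite_e.hermegauss(deg)  # weight exp(-x^2/2)
H = hermite_normalized(n_max, nodes)
gram = (H * w) @ H.T / np.sqrt(2 * np.pi)           # renormalize weight to the density phi
assert np.allclose(gram, np.eye(n_max + 1), atol=1e-10)
```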
7. $k$-th Hermite coefficient of $f \in L_2(\mathbb{R}^d, \varphi_d)$:
$$\hat{f}(k) := \int_{\mathbb{R}^d} f(x)\, H_k(x)\, \varphi_d(x)\,\mathrm{d}x.$$
Define a positive function $r_d: \mathbb{N}_0^d \to \mathbb{R}$,
$$r_d(k) = \prod_{j=1}^{d} r(k_j),$$
where $r: \mathbb{N}_0 \to \mathbb{R}$ is positive and $r(0) = 1$.
If $r$ depends on $j \in \{1, \ldots, d\}$: indicated by writing $r_j(k_j)$.
Two choices for $r_d$ in this talk:
$r_d^{\mathrm{pol}}$: polynomial decay of $r$, e.g., $r(k) \asymp k^{-\alpha}$ for $\alpha \in \mathbb{N}$.
$r_d^{\mathrm{exp}}$: exponential decay of $r$, e.g., $r(k) \asymp \omega^k$ for $\omega \in (0, 1)$.
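For illustration (my sketch, not from the talk), univariate Hermite coefficients $\hat{f}(k)$ can be approximated by Gauss-Hermite quadrature; for $f(x) = x^2$ one has exactly $x^2 = H_0(x) + \sqrt{2}\,H_2(x)$ in the normalized basis, which the computed coefficients reproduce:

```python
import numpy as np

def hermite_coeffs_1d(f, n_max, deg=40):
    """Approximate fhat(k) = int f(x) H_k(x) phi(x) dx, k = 0..n_max, by
    Gauss-Hermite quadrature (exact when f*H_k has polynomial degree < 2*deg)."""
    x, w = np.polynomial.hermite_e.hermegauss(deg)  # weight exp(-x^2/2)
    alpha = w / np.sqrt(2 * np.pi)                  # weights w.r.t. the normal density phi
    # normalized probabilists' Hermite polynomials via the three-term recurrence
    H = np.zeros((n_max + 1, deg))
    H[0] = 1.0
    H[1] = x
    for n in range(1, n_max):
        H[n + 1] = (x * H[n] - np.sqrt(n) * H[n - 1]) / np.sqrt(n + 1)
    return H @ (alpha * f(x))

c = hermite_coeffs_1d(lambda x: x**2, n_max=4)
# x^2 = H_0(x) + sqrt(2) H_2(x) in the normalized basis
assert np.allclose(c, [1.0, 0.0, np.sqrt(2.0), 0.0, 0.0], atol=1e-12)
```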
11. Hermite space (depending on $r_d$):
$$H_{d,r_d} := \left\{ f: \mathbb{R}^d \to \mathbb{R} \,:\, f \text{ continuous},\ \int_{\mathbb{R}^d} (f(x))^2 \varphi_d(x)\,\mathrm{d}x < \infty,\ \|f\|_{d,r_d} < \infty \right\}.$$
Norm:
$$\|f\|_{d,r_d} := \left( \sum_{k \in \mathbb{N}_0^d} \frac{1}{r_d(k)}\, (\hat{f}(k))^2 \right)^{1/2}.$$
Inner product:
$$\langle f, g \rangle_{d,r_d} := \sum_{k \in \mathbb{N}_0^d} \frac{1}{r_d(k)}\, \hat{f}(k)\,\hat{g}(k).$$
12. The Hermite space $H_{d,r_d}$ is a reproducing kernel Hilbert space with kernel
$$K_{d,r_d}(x, y) := \sum_{k \in \mathbb{N}_0^d} r_d(k)\, H_k(x)\, H_k(y).$$
Introduced in:
C. Irrgeher, G. Leobacher. High-dimensional integration on $\mathbb{R}^d$, weighted Hermite spaces, and orthogonal transforms. J. Complexity 31, 174–205, 2015.
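For the univariate exponential weight $r(k) = \omega^k$, the kernel series even has a closed form by Mehler's formula, $\sum_k \omega^k H_k(x)H_k(y) = (1-\omega^2)^{-1/2}\exp\big((\omega xy - \omega^2(x^2+y^2)/2)/(1-\omega^2)\big)$; the following sketch (mine, not from the talk) checks the truncated series against it:

```python
import numpy as np

def kernel_truncated(x, y, omega, n_terms=80):
    """Truncated sum_k omega^k H_k(x) H_k(y) with normalized probabilists'
    Hermite polynomials (univariate case, r(k) = omega^k)."""
    hx_prev, hx = 1.0, x
    hy_prev, hy = 1.0, y
    total = 1.0 + omega * hx * hy                   # k = 0 and k = 1 terms
    for n in range(1, n_terms):
        hx, hx_prev = (x * hx - np.sqrt(n) * hx_prev) / np.sqrt(n + 1), hx
        hy, hy_prev = (y * hy - np.sqrt(n) * hy_prev) / np.sqrt(n + 1), hy
        total += omega ** (n + 1) * hx * hy
    return total

def kernel_mehler(x, y, omega):
    """Closed form of the same series via Mehler's formula."""
    num = omega * x * y - omega**2 * (x**2 + y**2) / 2
    return np.exp(num / (1 - omega**2)) / np.sqrt(1 - omega**2)

assert abs(kernel_truncated(0.7, -1.2, 0.5) - kernel_mehler(0.7, -1.2, 0.5)) < 1e-10
```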
13. The case $r_d^{\mathrm{exp}}$
14. Studied in the paper:
C. Irrgeher, P. Kritzer, G. Leobacher, F. Pillichshammer. Integration in Hermite spaces of analytic functions. J. Complexity 31, 380–404, 2015.
Choose $\omega \in (0, 1)$ and two real sequences $a = \{a_j\}$, $b = \{b_j\}$ with
$$1 \le a_1 \le a_2 \le \cdots \quad \text{and} \quad \inf_{j \ge 1} b_j \ge 1,$$
and set
$$r_d^{\mathrm{exp}}(k) = \prod_{j=1}^{d} r_j(k_j), \quad \text{with } r_j(k) = \omega^{a_j k^{b_j}}.$$
16. Study integration in $H_{d,r_d^{\mathrm{exp}}}$.
Hermite coefficients of functions in $H_{d,r_d^{\mathrm{exp}}}$ decrease exponentially fast.
$H_{d,r_d^{\mathrm{exp}}}$ contains analytic functions.
17. First goal: show exponential error convergence.
Exponential convergence (EXP) if there exist $q \in (0, 1)$ and $C_d, M_d, p_d > 0$ for all $d$ such that
$$e^{\mathrm{wor}}(N, H_{d,r_d^{\mathrm{exp}}}) \le C_d\, q^{(N/M_d)^{p_d}} \quad \text{for all } N \in \mathbb{N}. \tag{1}$$
Uniform exponential convergence (UEXP) if $p_d = p > 0$ for all $d \in \mathbb{N}$ in (1).
Interest in the largest possible rate $p_d$ (or $p$).
Second goal: study the dependence of $e^{\mathrm{wor}}(N, H_{d,r_d^{\mathrm{exp}}})$ on $d$.
22. Use Gauss-Hermite rules.
Univariate case: Gauss-Hermite rule of order $N$:
$$G_N(f) = \sum_{i=1}^{N} \alpha_i\, f(x_i).$$
Nodes $x_1, \ldots, x_N \in \mathbb{R}$ are the zeros of $H_N$. Weights:
$$\alpha_i = \frac{1}{N\, H_{N-1}^2(x_i)}.$$
The rule is exact for all polynomials of degree less than $2N$.
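A small check (not from the talk) that NumPy's probabilists' Gauss-Hermite rule, renormalized to the Gaussian density, matches the weight formula above (with the *normalized* polynomial $H_{N-1}$) and the stated exactness:

```python
import numpy as np

N = 12
x, w = np.polynomial.hermite_e.hermegauss(N)   # nodes = zeros of He_N, weight exp(-t^2/2)
alpha = w / np.sqrt(2 * np.pi)                 # weights w.r.t. the standard normal density

# Slide formula: alpha_i = 1 / (N * H_{N-1}(x_i)^2), H_{N-1} normalized.
h_prev, h = np.ones_like(x), x.copy()
for n in range(1, N - 1):                      # three-term recurrence up to h = H_{N-1}(x)
    h, h_prev = (x * h - np.sqrt(n) * h_prev) / np.sqrt(n + 1), h
assert np.allclose(alpha, 1.0 / (N * h**2))

# Exact for polynomials of degree < 2N: E[t^4] = 3, E[t^10] = 945 for t ~ N(0,1)
assert np.isclose(alpha @ x**4, 3.0)
assert np.isclose(alpha @ x**10, 945.0)
```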
25. $d$-variate case: use the product rule
$$G_N = G_{N_1} \otimes \cdots \otimes G_{N_d}, \quad \text{with } N = N_1 \cdots N_d.$$
Proposition 1
Let $G_N$ be a $d$-variate Gauss-Hermite product rule as above. Then
$$\left(e^{\mathrm{wor}}(G_N, H_{d,r_d^{\mathrm{exp}}})\right)^2 \le -1 + \prod_{j=1}^{d} \left( 1 + \omega^{a_j (2N_j)^{b_j}} \sqrt{\frac{8\pi}{1-\omega^2}} \right).$$
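A naive sketch (mine, not from the talk) of the product rule $G_{N_1} \otimes \cdots \otimes G_{N_d}$, applied to the analytic test function $f(x) = \exp(x_1 + x_2)$ with known value $I_2(f) = e$:

```python
import numpy as np
from itertools import product

def gauss_hermite_product(f, N_list):
    """Tensor-product Gauss-Hermite rule G_{N_1} x ... x G_{N_d} for
    int_{R^d} f(x) phi_d(x) dx; uses N = N_1 * ... * N_d function values."""
    rules = []
    for Nj in N_list:
        x, w = np.polynomial.hermite_e.hermegauss(Nj)
        rules.append((x, w / np.sqrt(2 * np.pi)))   # weights w.r.t. the normal density
    total = 0.0
    for idx in product(*(range(Nj) for Nj in N_list)):
        node = np.array([rules[j][0][i] for j, i in enumerate(idx)])
        weight = np.prod([rules[j][1][i] for j, i in enumerate(idx)])
        total += weight * f(node)
    return total

# E[exp(x_1 + x_2)] = e for independent standard normals x_1, x_2
val = gauss_hermite_product(lambda x: np.exp(x.sum()), [10, 10])
assert np.isclose(val, np.e, atol=1e-8)
```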
27. Use the previous proposition and choose $N_j = N_j(a_j, b_j)$.
Then $G_N$ can yield EXP, and UEXP if $B := \sum_{j=1}^{\infty} b_j^{-1} < \infty$. E.g.,
Theorem 1 (Irrgeher, K., Leobacher, Pillichshammer, 2015)
Let $B < \infty$. Let $\varepsilon > 0$ be given, and choose
$$N_j := \left\lceil \left( \frac{\log\left( \sqrt{\frac{8\pi}{1-\omega^2}}\; \frac{\pi^2 j^2}{6 \log(1+\varepsilon^2)} \right)}{a_j\, 2^{b_j} \log \omega^{-1}} \right)^{1/b_j} \right\rceil.$$
Then
$$e^{\mathrm{wor}}(G_N, H_{d,r_d^{\mathrm{exp}}}) \le \varepsilon,$$
and for any $\delta > 0$ there exists $C_\delta > 0$ such that
$$N \le C_\delta \log^{B+\delta}(1 + \varepsilon^{-1}).$$
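Assuming the formula for $N_j$ is read correctly above, one can verify numerically that this choice pushes the Proposition 1 bound below $\varepsilon^2$ (sketch mine; the sequences $a_j$, $b_j$ are arbitrary admissible examples, not from the talk):

```python
import numpy as np

def choose_N(eps, omega, a, b):
    """N_j from Theorem 1 (as reconstructed above); a, b list a_j and b_j."""
    c = np.sqrt(8 * np.pi / (1 - omega**2))
    N = []
    for j, (aj, bj) in enumerate(zip(a, b), start=1):
        num = np.log(c * np.pi**2 * j**2 / (6 * np.log(1 + eps**2)))
        N.append(max(1, int(np.ceil((num / (aj * 2**bj * np.log(1 / omega)))**(1 / bj)))))
    return N

eps, omega, d = 0.01, 0.5, 6
a = [1 + j for j in range(d)]   # nondecreasing, a_1 >= 1
b = [1.0] * d                   # inf_j b_j >= 1
N = choose_N(eps, omega, a, b)

# Proposition 1: (e^wor)^2 <= -1 + prod_j (1 + omega^(a_j (2 N_j)^{b_j}) * c)
c = np.sqrt(8 * np.pi / (1 - omega**2))
bound_sq = -1 + np.prod([1 + omega**(aj * (2 * Nj)**bj) * c
                         for aj, bj, Nj in zip(a, b, N)])
assert bound_sq <= eps**2       # the worst case error is at most eps
```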
31. Further results on EXP/UEXP and tractability in the paper.
Related results: $L_2$-approximation in $H_{d,r_d^{\mathrm{exp}}}$:
C. Irrgeher, P. Kritzer, F. Pillichshammer, H. Woźniakowski. Approximation in Hermite spaces of smooth functions. J. Approx. Theory 207, 98–126, 2016.
C. Irrgeher, P. Kritzer, F. Pillichshammer, H. Woźniakowski. Tractability of multivariate approximation defined over Hilbert spaces with exponential weights. J. Approx. Theory 207, 301–338, 2016.
32. The case $r_d^{\mathrm{pol}}$
33. J. Dick, C. Irrgeher, G. Leobacher, F. Pillichshammer. On the optimal order of integration in Hermite spaces with finite smoothness. Submitted, 2017. https://arxiv.org/abs/1608.06061
Let $\alpha \in \{1, 2, \ldots\}$ and set
$$r_d^{\mathrm{pol}}(k) = \prod_{j=1}^{d} r_j(k_j), \quad \text{with } r_j(k) \asymp \frac{1}{k^\alpha}$$
(precise definition of $r_j$ in the paper).
Functions in $H_{d,r_d^{\mathrm{pol}}}$ are $\alpha$ times (weakly) differentiable; the norm can be re-written as a Sobolev-type norm using derivatives.
36. Theorem 2 (Dick, Irrgeher, Leobacher, Pillichshammer, 2017)
Let $d, \alpha \in \mathbb{N}$. Then for all $N \in \mathbb{N}$ it is true that
$$e^{\mathrm{wor}}(N, H_{d,r_d^{\mathrm{pol}}}) \ge C_{d,\alpha}\, \frac{(\log N)^{\frac{d-1}{2}}}{N^\alpha},$$
where $C_{d,\alpha}$ depends on $d$ and $\alpha$.
Proof: fooling function argument.
37. Upper bound:
Truncate the domain $\mathbb{R}^d$ to $[-b, b]$ with $b = (b, b, \ldots, b)$ and $b = 2\sqrt{\alpha \log N}$.
Linear transformation $T: [0,1]^d \to [-b, b]$.
Integration nodes: $x_k = T(z_k)$, $k = 1, \ldots, N$, where the $z_k$ stem from a digital higher-order net in $[0,1]^d$.
Integration weights $w_k = (2b)^d\, \varphi_d(T(z_k))/N$, $k = 1, \ldots, N$, in
$$A_{N,d} = \sum_{k=1}^{N} w_k\, f(x_k).$$
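A toy version of this construction (mine, not from the paper): in $d = 2$, a Fibonacci lattice stands in for the higher-order digital net (an oversimplification), mapped to $[-b, b]^2$ with the weights $w_k = (2b)^d \varphi_d(T(z_k))/N$ from the slides:

```python
import numpy as np

def truncated_qmc(f, m, alpha=2):
    """Sketch of the truncation construction in d = 2: QMC points in [0,1)^2
    (a Fibonacci lattice, standing in for a higher-order digital net) are
    mapped linearly to [-b,b]^2 and weighted by (2b)^2 * phi_2(x_k) / N."""
    F = [1, 1]
    for _ in range(m):
        F.append(F[-1] + F[-2])
    N = F[-1]                                   # number of lattice points
    k = np.arange(N)
    z = np.stack([k / N, (k * F[-2] / N) % 1.0], axis=1)  # points in [0,1)^2
    b = 2 * np.sqrt(alpha * np.log(N))          # truncation parameter from the slides
    x = 2 * b * z - b                           # linear map T: [0,1]^2 -> [-b,b]^2
    phi = np.exp(-0.5 * (x**2).sum(axis=1)) / (2 * np.pi)
    w = (2 * b)**2 * phi / N                    # weights w_k = (2b)^d phi_d(x_k) / N
    return w @ f(x)

# E[x_1^2 + x_2^2] = 2 for a standard bivariate normal
val = truncated_qmc(lambda x: (x**2).sum(axis=1), m=18)
assert abs(val - 2.0) < 0.01
```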
41. Theorem 3 (Dick, Irrgeher, Leobacher, Pillichshammer, 2017)
Let $d, \alpha \in \mathbb{N}$. Then
$$e^{\mathrm{wor}}(A_{N,d}, H_{d,r_d^{\mathrm{pol}}}) \le C_{d,\alpha}\, \frac{(\log N)^{\frac{d(2\alpha+3)}{4} - \frac{1}{2}}}{N^\alpha},$$
where $C_{d,\alpha}$ depends on $d$ and $\alpha$.
Optimal main convergence order $N^{-\alpha}$, but presumably a non-optimal power of the $(\log N)$-term.
$C_{d,\alpha}$ depends on $d$.
43. Open problems
44. Main open problem:
Can we vanquish the curse of dimensionality for the case $r_d^{\mathrm{pol}}$?
The upper bound in Theorem 3 was obtained by analyzing
(1) the error of approximating the integral outside of $[-b, b]$ by 0,
(2) the error of approximating the integral within $[-b, b]$ by $A_{N,d}$.
Common way to reduce the curse of dimensionality in QMC: use weights in the sense of Sloan/Woźniakowski.
Natural approach: use weights and adjust the bounds of the box $[-b, b]$ coordinate-wise.
Problem: choosing the weights to reduce (1) increases (2), and vice versa.
49. Solutions to this problem?
Use a different way of estimating (1) and (2) to obtain better bounds. Is this possible at all?
Approximate the integral outside of $[-b, b]$ by something more sophisticated than 0. What is "something more sophisticated"?
Use a different approach altogether. Which approach would work?
53. Conclusion
54. We studied numerical integration in Hermite spaces.
Case of exponentially decaying Hermite coefficients: many results (even for function approximation).
Case of polynomially decaying Hermite coefficients: higher-order digital nets can be used.
The curse of dimensionality remains an unresolved problem; it is not clear which strategy will work.
58. Thank you very much for your attention.