The document describes curve registration through nonparametric testing of Fourier coefficients. It presents a model where curves are represented by their Fourier coefficients with added noise. The hypotheses are that the curves either match up to a shift, or are minimally distant overall after any shift. A generalized likelihood ratio test statistic is developed that minimizes the squared differences between the Fourier coefficients of the two curves over all possible shift values.
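The minimization at the heart of that statistic can be sketched in a few lines. The grid search, coefficient values, and function name below are illustrative assumptions, not the paper's implementation; shifting a periodic curve by θ multiplies its k-th Fourier coefficient by e^{-ikθ}, so the statistic is the smallest coefficient-space distance over all shifts:

```python
import cmath

def shift_distance(a, b, n_grid=2000):
    """Grid-minimize sum_k |a_k - e^{i k theta} b_k|^2 over the shift theta.

    a, b: complex Fourier coefficients of the two curves, frequencies k = 0, 1, ...
    Returns (argmin theta, minimum value); the minimum is the test statistic.
    """
    best_theta, best_val = 0.0, float("inf")
    for j in range(n_grid):
        theta = 2 * cmath.pi * j / n_grid
        val = sum(abs(a_k - cmath.exp(1j * k * theta) * b_k) ** 2
                  for k, (a_k, b_k) in enumerate(zip(a, b)))
        if val < best_val:
            best_theta, best_val = theta, val
    return best_theta, best_val

# If curve B is curve A shifted by pi/3, its coefficients are b_k = e^{-ik pi/3} a_k,
# so the statistic should be ~0 and the recovered shift ~pi/3.
a = [1 + 0j, 0.5 - 0.2j, 0.1 + 0.3j]
b = [cmath.exp(-1j * k * cmath.pi / 3) * a_k for k, a_k in enumerate(a)]
theta, stat = shift_distance(a, b)
```

Under the null hypothesis (curves match up to a shift) the statistic stays small; under the alternative it is bounded away from zero for every θ.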
Further discriminatory signature of inflation (Laila A)
These are the slides of the talk I gave on discriminating between models of inflation using space-based gravitational wave detectors, at KEK in Tsukuba, Japan.
This document summarizes research on modeling the resting brain using multi-subject models. It discusses using spatial independent component analysis (ICA) to decompose brain activity into spatial maps that are consistent across subjects. It also discusses estimating functional connectivity networks by imposing sparsity on inverse covariance matrices estimated across subjects. Multi-subject dictionary learning approaches that estimate shared spatial patterns across subjects while modeling subject variability are presented. These approaches aim to overcome challenges from small sample sizes by leveraging information across subjects.
Learning and comparing multi-subject models of brain functional connectivity (Gael Varoquaux)
High-level brain function arises through functional interactions. These can be mapped via co-fluctuations in activity observed in functional imaging.
First, I show how spatial maps characteristic of on-going activity in a population of subjects can be learned using multi-subject decomposition models extending the popular Independent Component Analysis. These methods single out spatial atoms of brain activity: functional networks or brain regions. With a probabilistic model of inter-subject variability, they open the door to building data-driven atlases of on-going activity.
Subsequently, I discuss graphical modeling of the interactions between brain regions. To learn highly-resolved, large-scale individual graphical models, we use sparsity-inducing penalizations that introduce a population prior mitigating the data scarcity at the subject level. The corresponding graphs capture the community structure of brain activity better than single-subject models or group averages.
Finally, I address the detection of connectivity differences between subjects. Explicit group variability models of the covariance structure can be used to build optimal edge-level test statistics. On resting-state data from stroke patients, these models detect patient-specific functional connectivity perturbations.
The document summarizes research on charge density waves in rare-earth nickelates using a mean-field approach. It first provides background on rare-earth nickelates and proposes a low-energy model Hamiltonian. It then describes the application of mean-field theory to analyze charge ordering, deriving a self-consistency equation. Numerical results from the mean-field theory show a phase transition from a uniform to a charge ordered state as a function of model parameters. The document concludes by discussing open questions regarding parameter values and the validity of the mean-field approach for describing nickelates.
The document discusses quantization in analog-to-digital conversion. It describes the three processes of A/D conversion as sampling, quantization, and binary encoding. Quantization involves mapping amplitude values into a set of discrete values using a quantization interval or step size. The document discusses uniform quantization and how the range is divided into equal intervals. It also discusses non-uniform quantization which has smaller intervals near zero to better match real audio signals. Examples and MATLAB code demonstrations are provided to illustrate quantization of audio signals at different bit rates.
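A minimal uniform (midrise) quantizer of the kind described above can be sketched as follows. The document's own demonstrations are in MATLAB; this Python version, with its range and bit depth, is purely illustrative:

```python
def uniform_quantize(x, n_bits, x_min=-1.0, x_max=1.0):
    """Uniform (midrise) quantizer: 2**n_bits equal intervals over
    [x_min, x_max]; each sample maps to the midpoint of its interval."""
    levels = 2 ** n_bits
    step = (x_max - x_min) / levels              # quantization step size
    x = min(max(x, x_min), x_max)                # clip to the full-scale range
    idx = min(int((x - x_min) / step), levels - 1)
    return x_min + (idx + 0.5) * step

q = uniform_quantize(0.3, n_bits=3)   # 8 levels on [-1, 1], step 0.25 -> 0.375
```

For in-range samples the reconstruction error is at most step/2 = (x_max - x_min) / 2^(n_bits+1), which is why each extra bit roughly halves the quantization error.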
The document discusses quiescent steady state (DC) analysis using the Newton-Raphson method. It begins by introducing DC analysis and defining the goal as solving the system's differential algebraic equations (DAEs) under the assumption of no time variation. It then describes the Newton-Raphson method as an iterative numerical technique for solving nonlinear systems of equations. The method computes the Jacobian matrix at each iteration to determine the update to the state vector that will converge to a solution.
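The iteration can be illustrated on a scalar DC problem, where the Jacobian reduces to an ordinary derivative. The diode-plus-resistor equation and all parameter values here are hypothetical stand-ins for the DAE system the document actually treats:

```python
import math

def newton(f, dfdx, x0, tol=1e-10, max_iter=50):
    """Newton-Raphson: repeatedly solve the linearized equation,
    x <- x - f(x)/f'(x), until the update falls below tol."""
    x = x0
    for _ in range(max_iter):
        dx = -f(x) / dfdx(x)
        x += dx
        if abs(dx) < tol:
            return x
    raise RuntimeError("Newton-Raphson did not converge")

# Hypothetical scalar DC problem: a diode (Shockley equation) in parallel
# with a resistor, driven by a 1 mA current source. All values illustrative.
Is, Vt, R, I0 = 1e-12, 0.025, 1e3, 1e-3
f = lambda v: Is * (math.exp(v / Vt) - 1) + v / R - I0
df = lambda v: (Is / Vt) * math.exp(v / Vt) + 1 / R
v_dc = newton(f, df, 0.5)   # DC operating point, roughly 0.5 V
```

In the full multi-dimensional setting, `df` becomes the Jacobian matrix and the update step solves a linear system at each iteration.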
The document discusses scalar quantization and the Lloyd-Max algorithm. It provides examples of using the Lloyd-Max algorithm to design scalar quantizers for Gaussian and Laplacian distributed signals. The algorithm works by iteratively calculating decision thresholds and representative levels to minimize mean squared error. At high rates, the distortion-rate function of a Lloyd-Max quantizer is approximated. The document also discusses entropy-constrained scalar quantization and an iterative algorithm to design those quantizers.
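The two alternating steps (thresholds at midpoints of adjacent levels, levels at the centroids of their decision cells) can be sketched on a discretized density; the grid, range, and iteration count below are illustrative assumptions:

```python
import math

def lloyd_max(levels, pdf, lo=-5.0, hi=5.0, n=4000, iters=100):
    """Lloyd-Max design on a density grid: alternate (i) thresholds at
    midpoints of adjacent levels, (ii) each level at the centroid of its
    decision cell, which minimizes mean squared error."""
    xs = [lo + (hi - lo) * (i + 0.5) / n for i in range(n)]
    w = [pdf(x) for x in xs]
    y = [lo + (hi - lo) * (k + 0.5) / levels for k in range(levels)]  # initial levels
    for _ in range(iters):
        t = [(y[k] + y[k + 1]) / 2 for k in range(levels - 1)]  # thresholds
        new_y, j = [], 0
        for k in range(levels):
            upper = t[k] if k < levels - 1 else hi
            num = den = 0.0
            while j < n and xs[j] <= upper:
                num += xs[j] * w[j]
                den += w[j]
                j += 1
            new_y.append(num / den if den else y[k])
        y = new_y
    return y

std_normal = lambda x: math.exp(-x * x / 2)   # unnormalized N(0,1) density
y = lloyd_max(2, std_normal)
```

For a 2-level quantizer on a standard Gaussian the optimal levels are ±√(2/π) ≈ ±0.798, which the grid version reproduces closely.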
This document summarizes Hill's method for numerically approximating the eigenvalues and eigenfunctions of differential operators. Hill's method has two main steps:
1. Perform a Floquet-Bloch decomposition to reduce the problem from the real line to the interval [0,L] with periodic boundary conditions, parameterized by the Floquet exponent μ. This gives an operator with a compact resolvent.
2. Approximate the solutions by Fourier series, reducing the problem to a matrix eigenvalue problem that can be solved numerically.
The method is straightforward to implement and effective for various problems involving differential operators on the real line or with periodic boundary conditions. Convergence rates and error bounds for Hill's method are also presented.
T. Popov - Drinfeld-Jimbo and Cremmer-Gervais Quantum Lie Algebras (SEENET-MTP)
This document summarizes work on Drinfeld-Jimbo and Cremmer-Gervais quantum Lie algebras. It describes how quantum spaces arise from braided deformations of commutative spaces, and how bicovariant differential calculi on quantum groups lead to quantum Lie algebras. It presents the Drinfeld-Jimbo and Cremmer-Gervais R-matrices, and shows how they give rise to quantum Lie algebra structures through their associated braidings. It also establishes relationships between Drinfeld-Jimbo, Cremmer-Gervais, and "strict RIME" quantum Lie algebras through changes of basis.
The document discusses targeted Bayesian network learning (TBNL) and its application to predicting criminal suspects. It compares TBNL to traditional Bayesian network learning approaches, noting that TBNL aims to maximize the amount of information learned about a specific target variable rather than the entire distribution. The document provides examples of TBNL outperforming naive Bayes and tree-augmented networks on several datasets by exploiting correlations between attributes and the target more effectively for prediction tasks. It also analyzes the differential complexity of TBNL versus traditional explanatory models.
The document discusses the equivalence between context-free grammars and pushdown automata. It shows that:
1) For any context-free grammar, a pushdown automaton can be constructed that accepts the same language as the grammar.
2) For any pushdown automaton, an equivalent context-free grammar can be constructed that generates the same language as the automaton.
3) This establishes that the class of languages defined by context-free grammars and those accepted by pushdown automata are equivalent.
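Direction (1) can be illustrated by directly simulating the PDA the standard top-down construction produces: the stack holds a sentential form, variables are expanded nondeterministically, and terminals are matched against the input. The stack-length cap that bounds the search suffices for the toy grammar below but is not a general-purpose bound:

```python
def pda_accepts(word, rules, start="S"):
    """Simulate the PDA given by the standard top-down CFG construction:
    a variable on top of the stack is replaced by a rule body; a terminal
    on top is popped when it matches the next input symbol; accept on
    empty stack with all input consumed. The stack-length cap below is
    adequate for this toy grammar, not for arbitrary grammars."""
    seen = set()
    frontier = [(0, start)]                # (input position, stack contents)
    while frontier:
        pos, stk = frontier.pop()
        if (pos, stk) in seen:
            continue
        seen.add((pos, stk))
        if not stk:
            if pos == len(word):
                return True
            continue
        top, rest = stk[0], stk[1:]
        if top in rules:                   # expand the variable on top
            for body in rules[top]:
                if len(body) + len(rest) <= len(word) + 2:
                    frontier.append((pos, body + rest))
        elif pos < len(word) and word[pos] == top:
            frontier.append((pos + 1, rest))   # match and consume a terminal
    return False

anbn = {"S": ["aSb", ""]}    # S -> a S b | epsilon, generating {a^n b^n}
```

The reverse direction (2) is the triple construction, whose variables [p, X, q] track stack behavior between automaton states.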
Catalogue of Models for Electricity Prices Part 2 (NicolasRR)
This document provides an overview and examples of several stochastic models for electricity spot prices, including:
1) A one-factor affine jump diffusion model that adds a jump component to allow for strong price variations. The jumps follow a Gaussian distribution.
2) A jump diffusion model where jump amplitudes follow an Erlang distribution. Examples show how the parameters n and λ impact spike magnitude.
3) A Markov-chain model where the spot price depends on the percentage of online generators. Deterministic functions are used to influence spike amplitude and timing.
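A mean-reverting jump diffusion of the first kind listed can be simulated with a simple Euler scheme; the SDE form, parameter names, and values below are illustrative assumptions rather than the catalogue's calibrated model:

```python
import random

def simulate_jump_diffusion(x0, kappa, mu, sigma, lam, jump_mu, jump_sd,
                            T=1.0, n=1000, seed=1):
    """Euler scheme for dX = kappa*(mu - X) dt + sigma dW + J dN,
    with Gaussian jumps J ~ N(jump_mu, jump_sd) arriving at Poisson
    rate lam (approximated per step by a Bernoulli draw of prob. lam*dt)."""
    rng = random.Random(seed)
    dt = T / n
    path = [x0]
    for _ in range(n):
        x = path[-1]
        dx = kappa * (mu - x) * dt + sigma * (dt ** 0.5) * rng.gauss(0, 1)
        if rng.random() < lam * dt:        # a jump arrives in this step
            dx += rng.gauss(jump_mu, jump_sd)
        path.append(x + dx)
    return path

spot = simulate_jump_diffusion(x0=3.0, kappa=5.0, mu=3.0, sigma=0.5,
                               lam=4.0, jump_mu=1.0, jump_sd=0.3)
```

The jump term is what produces the occasional spikes; mean reversion (kappa) then pulls the price back toward its long-run level, as typical of electricity spot prices.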
Security of continuous variable quantum key distribution against general attacks (wtyru1989)
This document summarizes a paper on the security of continuous-variable quantum key distribution (CVQKD) against general attacks. It describes several CVQKD protocols that use homodyne or heterodyne detection to encode and measure information on the quadratures of electromagnetic fields. The paper outlines a proof that CVQKD protocols are secure against general attacks by first making the protocols permutation invariant through a test that restricts the state to a finite-dimensional subspace, then applying the postselection technique from finite-dimensional systems. It proposes a test using heterodyne detection on some modes to bound the photon number and ensure the state is close to a finite-dimensional projection. It also discusses symmetries in phase space arising from phase-shift transformations.
Framework for the analysis and design of encryption strategies based on d... (darg0001)
[Figure: MRET2 parameter estimate, λ = 3.8123 vs. variable λ; vertical axis 0 to 0.2, horizontal axis 1000 to 9000]
The document discusses the analysis and design of encryption strategies based on discrete-time chaotic dynamical systems. It covers 3 key topics: 1) Why chaos-based encryption is used and how it works, 2) Important design rules for chaos-based cryptosystems, and 3) Methods for analyzing the security of chaos-based encryption, such as estimating cryptosystem parameters from ciphertext or measuring the entropy of the underlying chaotic map.
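The logistic map x ↦ λx(1 − x) is a standard example of the discrete-time chaotic maps such cryptosystems iterate, and a parameter like λ is exactly what the parameter-estimation attacks mentioned above try to recover. This sketch (parameter values illustrative) shows the sensitive dependence on initial conditions that the design rules rely on:

```python
def logistic_orbit(x0, lam, n):
    """Iterate the logistic map x -> lam * x * (1 - x); for lam near 4
    the map is chaotic, the property chaos-based ciphers exploit."""
    xs = [x0]
    for _ in range(n):
        xs.append(lam * xs[-1] * (1 - xs[-1]))
    return xs

# Sensitive dependence: two seeds differing by 1e-9 decorrelate within
# a few dozen iterations at lam = 3.9 (values illustrative).
a = logistic_orbit(0.2, 3.9, 60)
b = logistic_orbit(0.2 + 1e-9, 3.9, 60)
```

The rapid divergence of nearby orbits is what makes the keystream unpredictable, but a poorly chosen map or parameter range can leak enough structure for the estimation attacks the document analyzes.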
Quantum key distribution with continuous variables at telecom wavelength (wtyru1989)
This document outlines a study on quantum key distribution using continuous variables at telecom wavelengths. It discusses implementing quantum cryptography by encoding secret keys onto continuously varying properties like amplitude and phase of coherent laser pulses. The receiving party uses homodyne detection to measure the signals. While this allows higher key rates than single-photon methods, it requires precise optics and signal processing. The document covers analyzing the security and performance of such a system, including modeling noise and information leakage, and using error correction codes to reconcile the correlated data between parties. The goal is to establish a secret key that is secure against potential eavesdropping attacks.
Quantifying the call blending balance in two-way communication retrial queues... (wrogiest)
This document describes research into distinguishing between two models (A and B) of call sequences in a call center using a single server. Model A uses a classical retrial rate, while Model B uses a constant retrial rate. The key difference studied is the short-term correlation between incoming and outgoing call types, quantified by the correlation coefficient γ. Numerical examples calculating γ under Model B demonstrate it can be positive when outgoing call activity is limited, positive when the time share of incoming/outgoing calls is matched, and strictly negative when call durations are strongly mismatched. The goal is to compare γ between Models A and B to distinguish the two models.
IJCER (www.ijceronline.com) International Journal of computational Engineerin... (ijceronline)
This document summarizes a study on the Lp-convergence of the Rees-Stanojevic modified cosine sum for 0 < p < 1. It presents a theorem showing that if the sequence {ak} satisfies conditions ak → 0 and Σ|Δak| < ∞, then the limit of the integral of |f(x) - hn(x)|p dx from -π to π is 0 as n → ∞. It also includes a corollary deducing an earlier theorem by Ul'yanov as a special case where hn(x) is replaced with the partial sum Sn(x).
This document discusses a stochastic wave propagation model in heterogeneous media. It presents a general operator theory framework that allows modeling of linear PDEs with random coefficients. For elliptic PDEs like diffusion equations, the framework guarantees well-posedness if the sum of operator norms is less than 2. For wave equations modeled by the Helmholtz equation, well-posedness requires restricting the wavenumber k due to dependencies of operator norms on k. Establishing explicit bounds on the norms remains an open problem, particularly for wave-trapping media.
This document provides an introduction to global sensitivity analysis. It discusses how sensitivity analysis can quantify the sensitivity of a model output to variations in its input parameters. It introduces Sobol' sensitivity indices, which measure the contribution of each input parameter to the variance of the model output. The document outlines how Sobol' indices are defined based on decomposing the model output variance into terms related to individual input parameters and their interactions. It notes that Sobol' indices are generally estimated using Monte Carlo-type sampling approaches due to the high-dimensional integrals involved in their exact calculation.
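A first-order Sobol' index can be estimated with a pick-freeze Monte Carlo scheme of the kind alluded to above; the toy model, estimator variant, and sample size here are illustrative assumptions:

```python
import random

def first_order_sobol(f, d, i, n=100_000, seed=0):
    """Pick-freeze Monte Carlo estimator of the first-order index
    S_i = Cov(f(A), f(B_i)) / Var(f), where B_i is an independent
    sample whose coordinate i is copied ("frozen") from A."""
    rng = random.Random(seed)
    ya, yab = [], []
    for _ in range(n):
        a = [rng.random() for _ in range(d)]
        b = [rng.random() for _ in range(d)]
        b[i] = a[i]                         # freeze input i
        ya.append(f(a))
        yab.append(f(b))
    m = sum(ya) / n
    var = sum((y - m) ** 2 for y in ya) / n
    cov = sum((p - m) * (q - m) for p, q in zip(ya, yab)) / n
    return cov / var

model = lambda x: 3 * x[0] + x[1]    # additive toy model, inputs uniform on [0, 1]
s1 = first_order_sobol(model, d=2, i=0)
```

For this additive model the exact value is S1 = 9·Var(x1) / (9·Var(x1) + Var(x2)) = 9/10, so the Monte Carlo estimate should land near 0.9.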
This document discusses quantifying measurement uncertainty. There are two main sources of uncertainty: a repeatable component and a random component. The random component incorporates all factors affecting measurement precision and leads to uncertainty in measured and calculated values. There are two approaches to quantifying standard uncertainty: Type A uses statistical analysis of replicates, while Type B uses best estimates from other factors like instrument specifications. Standard uncertainty is reported with measured values to indicate the precision of the measurement.
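The Type A route can be sketched directly: the standard uncertainty of the mean of n replicate measurements is the sample standard deviation divided by √n (the replicate values below are made up for illustration):

```python
import math

def type_a_uncertainty(replicates):
    """Type A evaluation: standard uncertainty of the mean of n replicate
    measurements is s / sqrt(n), with s the sample standard deviation."""
    n = len(replicates)
    mean = sum(replicates) / n
    s = math.sqrt(sum((x - mean) ** 2 for x in replicates) / (n - 1))
    return mean, s / math.sqrt(n)

# made-up replicate readings of the same quantity
mean, u = type_a_uncertainty([10.1, 10.3, 10.2, 10.2, 10.4])
```

The result would be reported as mean ± u, here roughly 10.24 ± 0.05, signalling the precision of the measurement.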
The document summarizes a lesson on diagonalization of matrices. It defines eigenvalues and eigenvectors, provides an example to illustrate the geometric effect of a non-diagonal linear transformation, and outlines the procedure for diagonalizing a matrix by finding its eigenvalues and eigenvectors and arranging them into diagonal and invertible matrices. It also gives a worked example of diagonalizing a specific 2x2 matrix.
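A 2x2 diagonalization of the kind such a lesson works through can be sketched as follows; the example matrix here is illustrative, not necessarily the one from the lesson, and the code assumes two distinct real eigenvalues:

```python
import math

def diagonalize_2x2(a, b, c, d):
    """Eigenvalues/eigenvectors of [[a, b], [c, d]], assuming two distinct
    real eigenvalues; then A = P D P^{-1} with the eigenvectors as the
    columns of P and the eigenvalues on the diagonal of D."""
    tr, det = a + d, a * d - b * c
    disc = math.sqrt(tr * tr - 4 * det)          # real by assumption
    lams = [(tr + disc) / 2, (tr - disc) / 2]
    vecs = []
    for lam in lams:
        if b != 0:
            vecs.append((b, lam - a))            # from row (a - lam, b) of A - lam*I
        elif c != 0:
            vecs.append((lam - d, c))
        else:                                    # matrix is already diagonal
            vecs.append((1.0, 0.0) if abs(lam - a) <= abs(lam - d) else (0.0, 1.0))
    return lams, vecs

lams, vecs = diagonalize_2x2(4, 1, 2, 3)   # eigenvalues 5 and 2
```

For this matrix the eigenvalues are 5 and 2 with eigenvectors (1, 1) and (1, -2), so P = [[1, 1], [1, -2]] and D = diag(5, 2).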
This document contains notes from a lesson on eigenvalues and eigenvectors. It includes examples of drawing the geometric effect of linear transformations represented by diagonal and non-diagonal matrices. Key points covered are the effect of multiplying vectors by matrices, which stretches and rotates the vectors. Practice problems and office hours are also announced.
This document summarizes various tests that can be used to determine if an infinite series converges or diverges, including:
1) The divergence test, integral test, p-series test, comparison test, limit comparison test, alternating series test, and ratio test.
2) It also discusses power series, including determining the radius of convergence and using Taylor series approximations with Taylor's inequality to estimate the remainder term.
3) Key concepts are that convergence tests check if partial sums approach a limit, while divergence tests examine the behavior of individual terms, and that power series have a radius of convergence determining the interval on which they converge.
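The ratio test's role in finding the radius of convergence can be sketched numerically: R = lim |a_n / a_{n+1}| when the limit exists. The coefficient functions and the evaluation index below are illustrative (for the exponential series the ratio grows without bound, signalling R = ∞):

```python
import math

def ratio_test_radius(coef, n=50):
    """Approximate the radius of convergence R = lim |a_n / a_{n+1}|
    by evaluating the coefficient ratio at a single large index n."""
    return abs(coef(n) / coef(n + 1))

# geometric-style series sum (2x)^n: exact radius 1/2
R_geo = ratio_test_radius(lambda n: 2.0 ** n)
# exponential series sum x^n/n!: the ratio at index n is n + 1, so R = infinity
R_exp = ratio_test_radius(lambda n: 1 / math.factorial(n))
```

Inside |x| < R the partial sums converge absolutely; Taylor's inequality then bounds the remainder when the series is truncated for approximation.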
ANISOTROPIC SURFACES DETECTION USING INTENSITY MAPS ACQUIRED BY AN AIRBORNE L... (grssieee)
The document discusses methods for estimating the spatial anisotropy of surfaces using near-infrared LiDAR intensity maps over coastal environments. It presents two estimators - one based on 1D correlations of columns and lines in sliding windows, and another based on 2D correlations of windows and their transposes. The estimators are evaluated on synthetic data with varying anisotropy, relative anisotropy, and signal-to-noise ratio. The estimators are then applied to LiDAR intensity maps from coastal areas to characterize anisotropic surfaces independently of intensity variations. Future work involves combining these methods with multi-resolution wavelet approaches and comparing LiDAR intensity to DEM and dual-polarization SAR data.
Spacetime Meshing for Discontinuous Galerkin Methods (shripadthite)
This document summarizes Shripad Thite's Ph.D. defense on spacetime meshing for discontinuous Galerkin methods. The defense presented new algorithms for adaptively generating spacetime meshes that can handle changing wavespeeds, refine and coarsen resolution, and track moving boundaries. The algorithms were proven to guarantee progress of the mesh while satisfying causality and other constraints. Several publications resulted that presented these spacetime meshing techniques and their application to problems in elastodynamics.
This document discusses Bayesian variable selection methods for regression models. It begins by reviewing traditional ANOVA tables and their limitations for modern applications with many variables, such as GWAS studies. It then introduces Bayesian approaches using priors to perform variable selection by building it into the regression model. Several variable selection methods are described that use different prior distributions, such as slab and spike priors, the stochastic search variable selection (SSVS) method, and the normal-exponential-gamma (NEG) distribution. The document discusses how these methods can be implemented using MCMC sampling and compares their performance. It also discusses some extensions like using random effects and polynomial terms.
Parameter Estimation for Semiparametric Models with CMARS and Its Applications (SSA KPI)
AACIMP 2010 Summer School lecture by Gerhard Wilhelm Weber. "Applied Mathematics" stream. "Modern Operational Research and Its Mathematical Methods with a Focus on Financial Mathematics" course. Part 11.
More info at http://summerschool.ssa.org.ua
This document discusses Bayesian variable selection methods for regression models. It begins by reviewing traditional ANOVA tables and their limitations for modern applications with many variables, such as GWAS studies. It then introduces Bayesian approaches using priors to perform variable selection by building it into the regression model. Several variable selection methods are described that use different prior distributions, such as slab and spike priors, the stochastic search variable selection (SSVS) method, and the normal-exponential-gamma (NEG) distribution. The document discusses how these methods can be implemented using MCMC sampling and compares their performance. It also discusses some extensions like using random effects and polynomial terms.
Parameter Estimation for Semiparametric Models with CMARS and Its ApplicationsSSA KPI
AACIMP 2010 Summer School lecture by Gerhard Wilhelm Weber. "Applied Mathematics" stream. "Modern Operational Research and Its Mathematical Methods with a Focus on Financial Mathematics" course. Part 11.
More info at http://summerschool.ssa.org.ua
We study an elliptic eigenvalue problem, with a random coefficient that can be parametrised by infinitely-many stochastic parameters. The physical motivation is the criticality problem for a nuclear reactor: in steady state the fission reaction can be modeled by an elliptic eigenvalue
problem, and the smallest eigenvalue provides a measure of how close the reaction is to equilibrium -- in terms of production/absorption of neutrons. The coefficients are allowed to be random to model the uncertainty of the composition of materials inside the reactor, e.g., the
control rods, reactor structure, fuel rods etc.
The randomness in the coefficient also results in randomness in the eigenvalues and corresponding eigenfunctions. As such, our quantity of interest is the expected value, with
respect to the stochastic parameters, of the smallest eigenvalue, which we formulate as an integral over the infinite-dimensional parameter domain. Our approximation involves three steps: truncating the stochastic dimension, discretizing the spatial domain using finite elements and approximating the now finite but still high-dimensional integral.
To approximate the high-dimensional integral we use quasi-Monte Carlo (QMC) methods. These are deterministic or quasi-random quadrature rules that can be proven to be very efficient for the numerical integration of certain classes of high-dimensional functions. QMC methods have previously been applied to linear functionals of the solution of a similar elliptic source problem; however, because of the nonlinearity of eigenvalues the existing analysis of the integration error
does not hold in our case.
We show that the minimal eigenvalue belongs to the spaces required for QMC theory, outline the approximation algorithm and provide numerical results.
The document discusses developing a wireless sensor network system for structural health monitoring using non-destructive evaluation techniques like acoustic emission testing and ultrasound testing. It outlines objectives like sensor node development, network control, and damage detection algorithms. The project status updates sensor node development and a finite element model for lamb wave propagation. Future plans include more signal processing algorithms and investigating additional non-destructive methods.
This document summarizes a research paper about using hierarchical deterministic quadrature methods for option pricing under the rough Bergomi model. It discusses the rough Bergomi model and challenges in pricing options under this model numerically. It then describes the methodology used, which involves analytic smoothing, adaptive sparse grids quadrature, quasi Monte Carlo, and coupling these with hierarchical representations and Richardson extrapolation. Several figures are included to illustrate the adaptive construction of sparse grids and simulation of the rough Bergomi dynamics.
1) The document proposes a cardinality-constrained k-means clustering approach to address practical challenges with standard k-means, such as skewed clustering and sensitivity to outliers.
2) It formulates the problem as a mixed integer nonlinear program (MINLP) and provides a convex relaxation to the problem using semidefinite programming (SDP).
3) The approach provides optimality guarantees and a rounding algorithm to recover an integer feasible solution. Numerical experiments demonstrate competitive performance versus heuristics.
This document discusses heteroskedasticity in econometric models. It defines heteroskedasticity as non-constant variance of the error term, in contrast to the homoskedasticity assumption of constant variance. It explains that while OLS estimates remain unbiased with heteroskedasticity, the standard errors are biased. Robust standard errors can provide consistent standard errors even with heteroskedasticity. The Breusch-Pagan and White tests are presented as methods to test for the presence of heteroskedasticity based on the residuals. Weighted least squares is also introduced as a method to obtain more efficient estimates than OLS when the form of heteroskedasticity is known.
This presentation on Pseudo Random Number Generator enlists the different generators, their mechanisms and the various applications of random numbers and pseudo random numbers in different arenas.
Slides of a report on Machine Learning Seminar Series'11 at Kazan (Volga Region) Federal University. See http://cll.niimm.ksu.ru/cms/main/seminars/mlseminar
Importance sampling has been widely used to improve the efficiency of deterministic computer simulations where the simulation output is uniquely determined, given a fixed input. To represent complex system behavior more realistically, however, stochastic computer models are gaining popularity. Unlike deterministic computer simulations, stochastic simulations produce different outputs even at the same input. This extra degree of stochasticity presents a challenge for reliability assessment in engineering system designs. Our study tackles this challenge by providing a computationally efficient method to estimate a system's reliability. Specifically, we derive the optimal importance sampling density and allocation procedure that minimize the variance of a reliability estimator. The application of our method to a computationally intensive, aeroelastic wind turbine simulator demonstrates the benefits of the proposed approaches.
The document discusses various proof techniques used in mathematics including:
1) Inductive reasoning, deductive reasoning, proof by exhaustion, and direct proof are discussed as informal proof methods.
2) Formal proofs involve using axioms, definitions, and theorems to logically deduce conclusions.
3) Proof by contradiction involves assuming a statement and its negation to deduce a contradiction and thereby prove the statement.
This document summarizes an optimization of the TINKER classical molecular dynamics code to improve performance while maintaining readability. It discusses using compiler flags, reducing cache misses, and lookup tables. Compiler optimizations like -O2 improved performance by up to 20%. Summing intermediate values into temporary scalars reduced cache misses and provided an 8% speedup. Pre-computing common mathematical functions like sqrt and exp into lookup tables improved performance further.
Control of Uncertain Hybrid Nonlinear Systems Using Particle FiltersLeo Asselborn
This paper proposes an optimization-based algorithm for the control of uncertain hybrid nonlinear systems. The considered system class combines the nondeterministic evolution of a discrete-time Markov process with the deterministic switching of continuous dynamics which itself contains uncertain elements. A weighted particle filter approach is used to approximate the uncertain evolution of the system by a set of deterministic runs. The desired control performance for a finite time horizon is encoded by a suitable cost function and a chance-constraint, which restricts the maximum probability for entering unsafe state sets. The optimization considers input and state constraints in addition. It is demonstrated that the resulting optimization problem can be solved by techniques of conventional mixed-integer nonlinear programming (MINLP). As an illustrative example, a path planning scenario of a ground vehicle with switching nonlinear dynamics is presented.
The document discusses uncertainty quantification (UQ) using quasi-Monte Carlo (QMC) integration methods. It introduces parametric operator equations for modeling input uncertainty in partial differential equations. Both forward and inverse UQ problems are considered. QMC methods like interlaced polynomial lattice rules are discussed for approximating high-dimensional integrals arising in UQ, with convergence rates superior to standard Monte Carlo. Algorithms for single-level and multilevel QMC are presented for solving forward and inverse UQ problems.
Nonconvex Compressed Sensing with the Sum-of-Squares MethodTasuku Soma
This document presents a method for nonconvex compressed sensing using the sum-of-squares (SoS) method. It formulates q-minimization, which requires fewer samples than l1-minimization but is nonconvex, as a polynomial optimization problem. The SoS method is then applied to obtain a pseudoexpectation operator satisfying a pseudo robust null space property, guaranteeing stable signal recovery. Specifically, it shows that for a Rademacher measurement matrix, with the number of measurements scaling quadratically in the sparsity s, the SoS method finds a solution x^ satisfying ||x^-x||_q ≤ O(σs(x)q) + ε, providing nearly q-stable recovery.
Bayesian inversion of deterministic dynamic causal modelskhbrodersen
1. The document discusses various methods for Bayesian inference and model comparison in dynamic causal models, including variational Laplace approximation, sampling methods, and computing model evidence.
2. Variational Laplace approximation involves factorizing the posterior distribution and iteratively optimizing a lower bound on the model evidence called the negative free energy.
3. Sampling methods like Markov chain Monte Carlo generate stochastic approximations to the posterior by constructing a Markov chain with the target distribution as its equilibrium distribution.
- Regression analysis is used to study the relationship between variables and predict how the value of one variable changes with the other. It is one of the most commonly used tools for business analysis.
- Simple linear regression analyzes the relationship between one independent variable and one dependent variable. The regression equation estimates the dependent variable as a linear function of the independent variable.
- Least squares regression fits a line to the data by minimizing the sum of the squared residuals, providing estimates of the slope and y-intercept coefficients in the regression equation.
This document discusses Bayesian nonparametric posterior concentration rates under different loss functions.
1. It provides an overview of posterior concentration, how it gives insights into priors and inference, and how minimax rates can characterize concentration classes.
2. The proof technique involves constructing tests and relating distances like KL divergence to the loss function. Examples where nice results exist include density estimation, regression, and white noise models.
3. For the white noise model with a random truncation prior, it shows L2 concentration and pointwise concentration rates match minimax. But for sup-norm loss, existing results only achieve a suboptimal rate. The document explores how to potentially obtain better adaptation for sup-norm loss.
This document describes a clustering procedure and nonparametric mixture estimation. It introduces a mixture density model where the goal is to efficiently estimate the mixture weights (αi) and component densities (fi). A two-stage clustering algorithm is proposed: 1) perform clustering on covariates (X) to estimate labels (Ik), and 2) estimate component densities (fi) using kernel density estimation within each cluster. The performance of this approach depends on the clustering method's misclassification error. A toy example with two components having disjoint support densities for X is provided to illustrate the model.
The document discusses measuring the influence of units in two-phase sampling designs. It begins by defining influential units as those with large design weights or values. While a good sampling design can minimize their impact, influential units may still be selected. The double expansion estimator is an unbiased estimator for estimating population totals, but influential units can increase its variance. The document explores measuring a unit's influence through its conditional bias and constructing robust estimators to reduce the impact of influential units. It considers the influence of units that are sampled in one or both phases.
This document proposes using a mobile sensor network to detect nuclear material in large cities. Sensors would be installed in many vehicles like taxis and police cars. A control center would receive signals from the sensors in real time and analyze the data using a dynamic model approach to estimate the location of any nuclear sources. Particle filtering is used to update estimates after each time period based on new sensor data. Simulations show this mobile sensor network can effectively detect nuclear material.
The document discusses the Wang-Landau algorithm, which is used to estimate the density of states of a physical system. It summarizes the original Wang-Landau algorithm and introduces an improved version that guarantees achieving a "flat histogram" - where each bin in the histogram has approximately equal frequency - in a finite number of steps. This is achieved by only decreasing the schedule parameter γ when the flat histogram criterion is met, rather than at every step. The document proves that this algorithm will converge to the flat histogram given some assumptions about the state space and proposal distribution. It also shows that an alternative update rule does not guarantee convergence in finite time.
The document discusses approximate regeneration schemes for Markov chains. It introduces the concept of regeneration blocks between visits to an atom set. For chains without an atom, the Nummelin splitting technique extends the chain to be atomic. An approximate regeneration scheme is proposed using an estimated transition density over a small set to split the chain. This allows treating blocks of data as approximately i.i.d.
1) The document discusses bias amplification that can occur when using instrumental variable calibration estimators with missing survey data. It presents models where a variable of interest (y) and instrumental variables (z) are related, and response propensity depends on the instrumental variables.
2) When an imperfect proxy for the instrumental variables (x) is used in calibration instead of the true variables, it can lead to bias amplification if the proxy is also related to response propensity. This violates the assumption that the proxy is independent of response given the instrumental variables.
3) A simulation study is presented to illustrate how using an imperfect proxy in calibration can amplify bias compared to the naive estimator that ignores nonresponse. The degree of bias
The document outlines the main points of a paper on partial identification with missing data:
1. It introduces the problem of partial identification in missing data problems and surveys related literature.
2. It formalizes the general framework as estimating a parameter θ0 that depends on an unobserved variable U based on an observed variable O that is related to U.
3. The main result shows that for a large class of missing data problems, bounds on the identified set Θ0 can be obtained by optimizing over the extreme parts of the restriction set Rθ rather than the full set, making the optimization problem tractable.
Curve registration by minimax nonparametric testing

Introduction

Solution: keypoint ⇒ descriptor.
We say two points match if they have the same descriptor.
Example: descriptors of the main orientation of the local gradient.
The descriptors should be:
discriminating enough,
invariant under some basic transformations.
Transformations to consider:
translation,
rotation,
scale change...
Famous example: SIFT
Compute the histogram of the local gradient with respect to the angle.
Invariance by translation: the position of the keypoint is not taken into account.
Invariance by rotation: the histogram is centered on the main gradient orientation.
New approach:
Use the non-centered histogram, to avoid the computation of the main orientation.
But a rotation of the image yields a translation of the non-centered histogram.
⇒ New matching criterion: two keypoints match if their descriptors are shifted from each other.
Consequence: we want to detect whether there exists τ ∈ [0, 2π] such that f(t) = g(t + τ).
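This shift-detection question can be checked numerically: on a sampled curve, a circular shift of g corresponds exactly to multiplying its Fourier coefficients by phase factors e^{ijτ}. A minimal sketch (not from the talk; the test curve and shift are arbitrary choices):

```python
import numpy as np

n = 256
t = 2 * np.pi * np.arange(n) / n
tau = 2 * np.pi * 17 / n            # shift by 17 grid points
f = np.cos(3 * t) + 0.5 * np.sin(7 * t)
g = np.roll(f, -17)                 # g(t) = f(t + tau) on the grid

cf = np.fft.fft(f) / n              # Fourier coefficients of f
cg = np.fft.fft(g) / n              # Fourier coefficients of g
j = np.fft.fftfreq(n, d=1.0 / n)    # integer frequencies

# g(t) = f(t + tau)  =>  cg_j = e^{i j tau} * cf_j
assert np.allclose(cg, np.exp(1j * j * tau) * cf)
```

This is the identity the hypotheses below are built on: a time shift of the curve becomes a phase modulation of its coefficients.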
Model

Outline:
1 Introduction
2 Model
3 Generalized likelihood ratio
4 Wilks’ phenomenon
5 Minimax considerations
6 Upper bound
7 Lower bound
8 Adaptation
We state the model using Fourier coefficients:

X_j = c_j + σξ_j,
X_j# = c_j# + σξ_j#.

H0: ∃ τ* ∈ [0, 2π], ∀ j ≥ 1, c_j# = e^{ijτ*} c_j,
H1: d(c, c#) := min_{τ∈[0,2π]} Σ_{j=1}^{+∞} |c_j − e^{−ijτ} c_j#|² ≥ ρ.
Hypotheses:
The noise level σ is known.
c, c# ∈ F_{k,L} := {u : Σ_{j=1}^{+∞} j^{2k} |u_j|² ≤ L}.
The regularity parameters k, L are known.
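The model above can be simulated directly (a sketch; the choice c_j = j^{-(k+1)}, the noise level, and the truncation are my illustrative choices, not from the talk):

```python
import numpy as np

rng = np.random.default_rng(0)
jmax, sigma, k = 200, 0.05, 1            # truncation, noise level, regularity
j = np.arange(1, jmax + 1)
c = 1.0 / j ** (k + 1)                   # sum_j j^{2k} |c_j|^2 = sum_j j^{-2} < inf
tau_star = 0.7
c_sharp = np.exp(1j * j * tau_star) * c  # H0 holds with shift tau_star

def observe(coef):
    # X_j = c_j + sigma * xi_j with independent complex Gaussian noise
    xi = rng.standard_normal(coef.size) + 1j * rng.standard_normal(coef.size)
    return coef + sigma * xi

X, X_sharp = observe(c), observe(c_sharp)
```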
Generalized likelihood ratio
Standard likelihood ratio:
The minimization of the negative log-likelihood in the general set-up leads to

min_u (1/(2σ²)) ‖X − u‖² + Σ_{j≥1} λ_j j^{2k} |u_j|² + ...,

where the λ_j are the Lagrange multipliers.
Generalized likelihood ratio:
We replace

min_u (1/(2σ²)) ‖X − u‖² + Σ_{j≥1} λ_j j^{2k} |u_j|²

by

min_u (1/(2σ²)) ‖X − u‖² + Σ_{j≥1} ω_j |u_j|².
So we get the test statistic:

T_σ = (1/σ²) min_{τ∈[0,2π]} Σ_{j≥1} ν_j |X_j − e^{−ijτ} X_j#|²,

with ν_j = 1/(1 + ω_j).
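The statistic can be sketched as follows (the grid minimization over τ and the toy data are my implementation choices; only the formula is from the slide):

```python
import numpy as np

def T_stat(X, X_sharp, nu, sigma, n_grid=2000):
    """T_sigma = (1/sigma^2) min_tau sum_j nu_j |X_j - e^{-ij tau} X_j^#|^2."""
    j = np.arange(1, len(X) + 1)
    taus = np.linspace(0.0, 2 * np.pi, n_grid, endpoint=False)
    vals = [np.sum(nu * np.abs(X - np.exp(-1j * j * t) * X_sharp) ** 2)
            for t in taus]
    return min(vals) / sigma ** 2

# Toy data where H0 holds with tau* = 0.7 (placeholder coefficients)
rng = np.random.default_rng(1)
jmax, sigma = 100, 0.05
j = np.arange(1, jmax + 1)
c = 1.0 / j ** 2
noise = lambda: rng.standard_normal(jmax) + 1j * rng.standard_normal(jmax)
X = c + sigma * noise()
X_sharp = np.exp(1j * j * 0.7) * c + sigma * noise()
nu = (j <= 30).astype(float)        # projection weights with N = 30
T = T_stat(X, X_sharp, nu, sigma)
```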
Possible weights:
ν_j = 1_{j≤N} (projection weights),
ν_j = (1 + j/N)^{−1} 1_{j≤N} (Tikhonov weights),
ν_j = (1 − j/N)_+ (Pinsker weights).
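The three weight families above, vectorized (a sketch; the function names are mine):

```python
import numpy as np

def projection(j, N):
    # nu_j = 1_{j <= N}
    return (j <= N).astype(float)

def tikhonov(j, N):
    # nu_j = (1 + j/N)^{-1} 1_{j <= N}
    return (j <= N) / (1.0 + j / N)

def pinsker(j, N):
    # nu_j = (1 - j/N)_+
    return np.maximum(1.0 - j / N, 0.0)
```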
Wilks’ phenomenon

Theorem
We assume that
|c_1| > 0 and c ∈ F_{1,L},
ν satisfies reasonable assumptions, i.e., ‖ν‖₂² ≈ N_σ with σ² N_σ^{5/2} log(N_σ) → 0.
Then, under H0,

(T_σ − 4‖ν‖₁) / (4‖ν‖₂) → N(0, 1) in law.
Theorem (Wilks’ phenomenon)
The generalized likelihood ratio test
1{T_σ ≥ 4‖ν‖₁ + 4‖ν‖₂ q_α}
is asymptotically of level α and does not depend on the nuisance parameters.
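In practice this rejection rule can be applied as follows (a sketch; using scipy for the standard normal quantile is my choice):

```python
import numpy as np
from scipy.stats import norm

def glr_reject(T_sigma, nu, alpha=0.05):
    # Reject H0 (no matching shift) when
    # T_sigma >= 4*||nu||_1 + 4*||nu||_2 * q_alpha
    q_alpha = norm.ppf(1 - alpha)
    threshold = 4 * np.sum(np.abs(nu)) + 4 * np.sqrt(np.sum(nu ** 2)) * q_alpha
    return T_sigma >= threshold
```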
Theorem
We assume that ν satisfies reasonable conditions and σ⁴ N_σ → 0.
Then, under H1,
T_σ → +∞ in probability.
Minimax considerations

We consider the sets
Θ0 = {(c, c#) ∈ F_{s,L} | d(c, c#) = 0},
Θ1 = {(c, c#) ∈ F_{s,L} | d(c, c#) ≥ C ρ_σ},
and the errors (for a test ψ)
α(ψ, Θ0) = sup_{Θ0} P_{(c,c#)}(ψ = 1),
β(ψ, Θ1) = sup_{Θ1} P_{(c,c#)}(ψ = 0).
Problem:
What is the smallest rate ρ_σ that allows consistently deciding between Θ0 and Θ1?
Upper bound

We consider the tests
ψ(N, q) = 1{λ_σ(N) > q},
where
λ_σ(N) = (1/(4σ²√N)) min_τ Σ_{j=1}^{N} |X_j − e^{−ijτ} X_j#|² − √N.
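A sketch of λ_σ(N) (grid search over τ is my implementation choice):

```python
import numpy as np

def lambda_stat(X, X_sharp, N, sigma, n_grid=2000):
    """lambda_sigma(N): centered, scaled projection statistic from the slide."""
    j = np.arange(1, N + 1)
    taus = np.linspace(0.0, 2 * np.pi, n_grid, endpoint=False)
    vals = [np.sum(np.abs(X[:N] - np.exp(-1j * j * t) * X_sharp[:N]) ** 2)
            for t in taus]
    return min(vals) / (4 * sigma ** 2 * np.sqrt(N)) - np.sqrt(N)
```

When X = X# exactly (perfect match at τ = 0), the minimum is 0 and the statistic sits at its most negative value −√N, so large positive values signal a mismatch.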
Theorem (Upper bound)
Let α be in (0, 1). Define ψ_{σ,α} = ψ(N_σ, q_α) with
N_σ = [c_{s,L} ρ_σ^{−1/s}],
c_{s,L} = (4sL² √(4s+1))^{2/(4s+1)},
q_α the quantile of order 1 − α of the N(0, 1) distribution.
If C² > 4L² c_{s,L}^{−2s} + 256 c_{s,L}/(4s+1) and ρ_σ = (σ² √(log σ^{−1}))^{2s/(4s+1)}, then
lim sup_{σ→0} α(ψ_{σ,α}, Θ0) ≤ α,
lim_{σ→0} β(ψ_{σ,α}, Θ1) = 0.
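For concreteness, the calibration quantities in the theorem can be computed as follows (a sketch based on my reconstruction of the garbled slide formulas; treat the exact constants with caution):

```python
import numpy as np

def calibration(s, L, sigma):
    # rho_sigma = (sigma^2 sqrt(log(1/sigma)))^{2s/(4s+1)}  (as reconstructed)
    rho = (sigma ** 2 * np.sqrt(np.log(1.0 / sigma))) ** (2 * s / (4 * s + 1))
    # c_{s,L} = (4 s L^2 sqrt(4s+1))^{2/(4s+1)}             (as reconstructed)
    c_sL = (4 * s * L ** 2 * np.sqrt(4 * s + 1)) ** (2 / (4 * s + 1))
    N = int(c_sL * rho ** (-1.0 / s))   # N_sigma = [c_{s,L} rho_sigma^{-1/s}]
    return rho, N

rho, N = calibration(s=2, L=1.0, sigma=0.01)
```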
Lower bound

Consider the general model
X_j = c_j + σξ_j,
X_j# = c_j# + σξ_j#,
where we want to test (c, c#) ∈ Θ0 against (c, c#) ∈ Θ1.
If c# = 0, we get the simpler model
X_j = c_j + σξ_j,
where we want to test
c ∈ Θ0^{class} = {0}
against c ∈ Θ1^{class} = {c ∈ F_{s,L} : ‖c‖₂ ≥ C ρ_σ}.
Theorem (Lower bound)
If ρ_σ ≪ σ^{4s/(4s+1)}, then consistent testing of H0 against H1 is impossible.
Adaptation
Problem:
How can one obtain similar performance when the regularity parameter s and the radius L are unknown?