The document discusses using wavelet representations for density estimation and shape analysis. It proposes using a constrained maximum likelihood objective to estimate density coefficients in a multi-resolution wavelet basis. Model selection criteria like MDL, AIC and BIC are compared for selecting the number of resolution levels in the wavelet expansion, with MDL shown to be invariant to the multi-resolution analysis used. The criteria are tested on 1D densities with different shapes, with MDL and MSE performing best in distinguishing the densities.
T. Popov - Drinfeld-Jimbo and Cremmer-Gervais Quantum Lie Algebras (SEENET-MTP)
This document summarizes work on Drinfeld-Jimbo and Cremmer-Gervais quantum Lie algebras. It describes how quantum spaces arise from braided deformations of commutative spaces, and how bicovariant differential calculi on quantum groups lead to quantum Lie algebras. It presents the Drinfeld-Jimbo and Cremmer-Gervais R-matrices, and shows how they give rise to quantum Lie algebra structures through their associated braidings. It also establishes relationships between Drinfeld-Jimbo, Cremmer-Gervais, and "strict RIME" quantum Lie algebras through changes of basis.
This document introduces graph C*-algebras and related concepts:
- A graph C*-algebra is generated by partial isometries and projections representing a directed graph.
- A Cuntz-Krieger E-family satisfies relations involving these operators to represent a graph as bounded operators on a Hilbert space.
- The graph C*-algebra is then the closed algebra generated by this family of operators.
IJCER (www.ijceronline.com) International Journal of Computational Engineerin... (ijceronline)
This document summarizes a study on the Lp-convergence of the Rees-Stanojevic modified cosine sum for 0 < p < 1. It presents a theorem showing that if the sequence {ak} satisfies conditions ak → 0 and Σ|Δak| < ∞, then the limit of the integral of |f(x) - hn(x)|p dx from -π to π is 0 as n → ∞. It also includes a corollary deducing an earlier theorem by Ul'yanov as a special case where hn(x) is replaced with the partial sum Sn(x).
The document provides three questions from a past exam on Engineering Mathematics IV. Question 1a asks to find the third order Taylor approximation of the differential equation dy/dx = y + 1 with the initial condition y(0) = 0. Question 1b asks to solve a differential equation using the modified Euler's method at two points. Question 1c asks to find the value of y(0.4) using Milne's predictor-corrector method for a given differential equation.
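As an illustration of Question 1a, the third-order Taylor polynomial can be obtained by repeated differentiation of y' = y + 1 at x = 0. A minimal sketch (the function names are my own, not from the exam paper):

```python
import math

def taylor3(x):
    """Third-order Taylor approximation of y' = y + 1, y(0) = 0.

    Differentiating repeatedly: y'(0) = y(0) + 1 = 1, y'' = y' so
    y''(0) = 1, and y''' = y'' so y'''(0) = 1, giving
    y(x) ~ x + x**2/2 + x**3/6.
    """
    return x + x**2 / 2 + x**3 / 6

def exact(x):
    # The exact solution is y = e**x - 1 (check: y' = e**x = y + 1).
    return math.exp(x) - 1
```

Near x = 0 the truncation error is of order x⁴/24, so at x = 0.1 the approximation and the exact solution agree to about five decimal places.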
Recent developments in control, power electronics and renewable energy by Dr ... (Qing-Chang Zhong)
The document provides an overview of Qing-Chang Zhong's research activities in control theory and engineering. It outlines his work in areas such as robust control, time-delay systems, process control, and the application of infinite-dimensional systems theory to time-delay systems. It also lists publications, completed research projects, teaching responsibilities, funding sources, and future research plans.
This document discusses Bayesian variable selection methods for regression models. It begins by reviewing traditional ANOVA tables and their limitations for modern applications with many variables, such as GWAS studies. It then introduces Bayesian approaches using priors to perform variable selection by building it into the regression model. Several variable selection methods are described that use different prior distributions, such as slab and spike priors, the stochastic search variable selection (SSVS) method, and the normal-exponential-gamma (NEG) distribution. The document discusses how these methods can be implemented using MCMC sampling and compares their performance. It also discusses some extensions like using random effects and polynomial terms.
The document discusses targeted Bayesian network learning (TBNL) and its application to predicting criminal suspects. It compares TBNL to traditional Bayesian network learning approaches, noting that TBNL aims to maximize the amount of information learned about a specific target variable rather than the entire distribution. The document provides examples of TBNL outperforming naive Bayes and tree-augmented networks on several datasets by exploiting correlations between attributes and the target more effectively for prediction tasks. It also analyzes the differential complexity of TBNL versus traditional explanatory models.
Spectral Learning Methods for Finite State Machines with Applications to Na... (LARCA UPC)
The document summarizes a spectral learning method for probabilistic finite-state machines (FSMs). It introduces observable operator models that represent probabilistic transducers using conditional probabilities between inputs, outputs, and hidden states. A key contribution is a spectral algorithm that learns the parameters of these models from data in linear time, with theoretical PAC-style guarantees. Experimental results on synthetic data show the method outperforms baselines like HMMs and k-HMMs on learning tasks.
C. Guyon, T. Bouwmans, E. Zahzah, “Foreground Detection via Robust Low Rank Matrix Decomposition including Spatio-Temporal Constraint”, International Workshop on Background Model Challenges, ACCV 2012, Daejeon, Korea, November 2012.
IJERA (International Journal of Engineering Research and Applications) is an international online, ... peer-reviewed journal. For more details, or to submit your article, please visit www.ijera.com
1. The document discusses fuzzy topological spaces, where open sets have fuzzy boundaries defined by a complete lattice L.
2. It defines cl∞-monoids which are used to measure the degree of membership of points in sets, and L- or fuzzy sets. Collections of fuzzy sets form L-topological spaces.
3. The main result is a version of the Tychonoff theorem for L-topological spaces, which gives necessary and sufficient conditions on L for all collections of a given cardinality of compact L-spaces to have a compact product.
4. It is shown that every product of α-compact L-spaces is compact if and only if 1 is α-isolated in L.
The document describes a Hamiltonian with terms of the form J_{i,j} |ω_i⟩⟨ω_j| and E_i |ω_i⟩⟨ω_i| that depends on the parameters ∆/J and ω. It studies the behavior of the system as ∆/J increases from 0 to beyond 6, including plots of the momentum distribution |P(k)|² that show it spreading out over more values of k/k₁. The dependence of the system on other parameters, such as α, s₁, and s₂, is also examined through additional plots.
This document contains questions from a fourth semester engineering examination on design and analysis of algorithms. It asks students to:
1) Define asymptotic notations and analyze the time complexity of a sample algorithm.
2) Solve recurrence relations for different algorithms.
3) Explain how bubble sort and quicksort work, including tracing quicksort on a sample data set and deriving its worst case complexity.
4) Write the recursive algorithm for merge sort.
The document contains questions assessing students' understanding of algorithm analysis, asymptotic notations, solving recurrence relations, and sorting algorithms like bubble sort, quicksort, and merge sort.
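For Question 4, a recursive merge sort along the lines the exam expects can be sketched in Python (the exam itself presumably wants pseudocode; this is an illustrative translation):

```python
def merge_sort(a):
    """Recursive merge sort: split, sort each half, merge (O(n log n))."""
    if len(a) <= 1:
        return a
    mid = len(a) // 2
    return merge(merge_sort(a[:mid]), merge_sort(a[mid:]))

def merge(left, right):
    """Merge two sorted lists into one sorted list."""
    out, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            out.append(left[i]); i += 1
        else:
            out.append(right[j]); j += 1
    return out + left[i:] + right[j:]
```

The recurrence T(n) = 2T(n/2) + O(n) from Question 2 resolves to the O(n log n) bound for this algorithm.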
In topological inference, the goal is to extract information about a shape given only a sample of points from it. There are many approaches to this problem, but the one we focus on is persistent homology. We get a view of the data at different scales by imagining the points as balls and considering different radii. The shape information we want comes in the form of a persistence diagram, which describes the components, cycles, bubbles, etc. in the space that persist over a range of different scales.
To actually compute a persistence diagram in the geometric setting, previous work required complexes of size n^O(d). We reduce this complexity to O(n) (hiding some large constants depending on d) by using ideas from mesh generation.
This talk will not assume any knowledge of topology. This is joint work with Gary Miller, Benoit Hudson, and Steve Oudot.
This document summarizes research on using deformable models for object recognition. It discusses using deformable part models to detect objects by optimizing part locations. Efficient algorithms like dynamic programming and min-convolutions are used for matching. Non-rigid objects are modeled using triangulated polygons that can deform individual triangles. Hierarchical shape models capture shape variations. The document applies these techniques to the PASCAL visual object recognition challenge, achieving state-of-the-art results on 10 of 20 object categories through discriminatively trained, multiscale deformable part models.
Object Detection with Discriminatively Trained Part-Based Models (zukun)
The document describes an object detection method using deformable part-based models that are discriminatively trained. The models consist of root filters and deformable part filters at multiple resolutions. Latent SVM training is used to learn the filters and deformation costs from weakly labeled images. The method achieved state-of-the-art results on the PASCAL object detection challenge, outperforming other methods in accuracy and speed.
Realizations, Differential Equations, Canonical Quantum Commutators And Infin... (vcuesta)
1) The document discusses finding different realizations of quantum operators q and p that obey the canonical commutator [q,p]=iħ. It considers cases where p is defined as -iħf(q)∂/∂q and solves for the corresponding q operator.
2) This leads to an infinite number of possible representations, as f(q) can be any function of q. Three specific cases are analyzed.
3) For each case, the Schrödinger equation is derived and solutions are found for a free particle and an infinite square well potential. However, some cases cannot yield normalizable wavefunctions or satisfy the boundary conditions.
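The commutator computation behind points 1 and 2 can be made explicit (my reconstruction from the summary, not the paper's own notation). Acting on a test function ψ(q) with p = -iħ f(q) ∂/∂q:

```latex
[Q(q),\,p]\,\psi
  \;=\; -i\hbar f\,Q\,\psi' \;+\; i\hbar f\,(Q\psi)'
  \;=\; i\hbar\, f(q)\,Q'(q)\,\psi ,
```

so the canonical relation [Q, p] = iħ holds exactly when f Q' = 1, i.e. Q(q) = ∫ dq / f(q). Since f can be any function of q, this is why the family of realizations is infinite.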
An introduction to quantum stochastic calculus (Springer)
The document discusses tensor products of Hilbert spaces. It defines positive definite kernels on sets and shows how they can be used to define tensor products. Given Hilbert spaces H1, ..., Hn, it constructs a kernel on the Cartesian product of the spaces and shows that its Gelfand pair (H, φ) gives a tensor product of the Hilbert spaces. The map φ from the product space into H is multilinear, and H is the completion of the algebraic tensor product of the vector spaces H1, ..., Hn.
This document summarizes Hill's method for numerically approximating the eigenvalues and eigenfunctions of differential operators. Hill's method has two main steps:
1. Perform a Floquet-Bloch decomposition to reduce the problem from the real line to the interval [0,L] with periodic boundary conditions, parameterized by the Floquet exponent μ. This gives an operator with a compact resolvent.
2. Approximate the solutions by Fourier series, reducing the problem to a matrix eigenvalue problem that can be solved numerically.
The method is straightforward to implement and effective for various problems involving differential operators on the real line or with periodic boundary conditions. Convergence rates and error bounds for Hill's method are also presented.
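Step 2 amounts to building a bi-infinite matrix in the Fourier basis and truncating it. A minimal numpy sketch for an operator of the form -y'' + q(x) y on [0, 2π] (the function name and the dict-of-coefficients interface are my own, not from the text):

```python
import numpy as np

def hill_matrix(qhat, mu, N):
    """Truncated Fourier-space matrix of -y'' + q(x) y on [0, 2*pi] with
    Floquet exponent mu. qhat maps an integer k to the k-th Fourier
    coefficient of q; modes n = -N..N are kept."""
    n = np.arange(-N, N + 1)
    A = np.diag((n + mu) ** 2).astype(complex)  # the -y'' part
    for i, m in enumerate(n):
        for j, k in enumerate(n):
            A[i, j] += qhat.get(m - k, 0.0)     # convolution with q
    return A

# Sanity check: with q = 0 the eigenvalues are exactly (n + mu)**2.
lam = np.sort(np.linalg.eigvalsh(hill_matrix({}, 0.0, 8)))
```

A real problem supplies the Fourier coefficients of the actual potential, e.g. qhat = {1: 1.0, -1: 1.0} for q(x) = 2 cos x; the exponential accuracy mentioned in the convergence analysis comes from the decay of these coefficients.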
1. Geodesic sampling and meshing techniques can be used to generate adaptive triangulations and meshes on Riemannian manifolds based on a metric tensor.
2. Anisotropic metrics can be defined to generate meshes adapted to features like edges in images or curvature on surfaces. Triangles will be elongated along strong features to better approximate functions.
3. Farthest point sampling can be used to generate well-spaced point distributions over manifolds according to a metric, which can then be triangulated using geodesic Delaunay refinement.
The document derives the normal probability density function from basic assumptions. It assumes that errors in perpendicular directions are independent, large errors are less likely than small errors, and the distribution is not dependent on orientation. This leads to a differential equation that can only be satisfied by an exponential function, giving the normal distribution. The values of the coefficients are determined by requiring the total area under the curve to be 1 and that the variance equals 1/k. This fully specifies the normal probability density function.
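In symbols, the argument sketched above runs as follows (a reconstruction under the stated assumptions, with k the constant fixing the variance):

```latex
\frac{f'(x)}{f(x)} = -k x
  \;\Longrightarrow\;
  f(x) = A\,e^{-k x^{2}/2},
\qquad
\int_{-\infty}^{\infty} f(x)\,dx = 1
  \;\Longrightarrow\;
  A = \sqrt{k/(2\pi)} .
```

The variance of this density is 1/k, so writing k = 1/σ² recovers the familiar form f(x) = (1/(σ√(2π))) e^{-x²/(2σ²)}.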
This document provides an overview of Bayesian methods for machine learning. It introduces some foundational Bayesian concepts including representing beliefs with probabilities, the Dutch book theorem, asymptotic certainty, and model comparison using Occam's razor. It discusses challenges like intractable integrals and presents approximation tools like Laplace's approximation, variational inference, and MCMC. It also covers choosing priors, including objective priors like noninformative, Jeffreys, and reference priors as well as subjective and hierarchical priors.
D-Branes and The Disformal Dark Sector - Danielle Wills and Tomi Koivisto (CosmoAIMS Bassett)
The document discusses disformal relations between the physical and gravitational geometry. It begins by introducing the most general form such a relation could take, with two arbitrary functions C and D of a scalar field and its derivative.
It then discusses how this type of relation naturally arises in many modified gravity and scalar-tensor theories. Specific examples mentioned include f(R) gravity and the Dirac-Born-Infeld (DBI) string scenario.
The document outlines how a disformal coupling could have interesting phenomenological implications and be detectable through effects on cosmology and structure formation. It concludes by stating the disformal relation is an important generalization worth further study.
CVPR2010: Advanced ITinCVPR in a Nutshell: part 2: Interest Points (zukun)
1) The document discusses methods for visual localization and texture categorization using interest point detection and entropy saliency. It focuses on using scale-space analysis and learning distributions to filter out less salient regions for computational efficiency.
2) An entropy saliency detector is proposed that uses local entropy calculations at multiple scales to identify salient regions. Scale-space analysis allows detection of salient regions without prior knowledge of scale.
3) Techniques including Chernoff information and Kullback-Leibler divergence are discussed for learning distributions of image categories and defining thresholds to filter regions, reducing computational costs of interest point detection and description.
CVPR2010: Advanced ITinCVPR in a Nutshell: part 3: Feature Selection (zukun)
This document discusses high-dimensional feature selection for images, genes, and graphs. It covers several key topics:
1) Feature selection aims to reduce dimensionality for improving classifier performance and identify important patterns. This is challenging with thousands of features.
2) Mutual information is proposed as an optimal criterion for evaluating feature subsets, as it relates to the Bayesian error rate.
3) The mRMR criterion is introduced to maximize feature relevance while minimizing redundancy between features.
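A toy version of the greedy mRMR selection for discrete features might look like this (plug-in entropy estimates; the function names and the relevance-minus-mean-redundancy score are a common variant, assumed rather than taken from the slides):

```python
import numpy as np

def mutual_info(x, y):
    """Plug-in estimate of mutual information (in nats) for discrete arrays."""
    mi = 0.0
    for a in np.unique(x):
        for b in np.unique(y):
            pxy = np.mean((x == a) & (y == b))
            if pxy > 0:
                mi += pxy * np.log(pxy / (np.mean(x == a) * np.mean(y == b)))
    return mi

def mrmr(features, target, k):
    """Greedy mRMR: repeatedly add the column with the best
    relevance-minus-mean-redundancy score."""
    selected, candidates = [], list(range(features.shape[1]))
    while len(selected) < k:
        def score(j):
            rel = mutual_info(features[:, j], target)
            red = (np.mean([mutual_info(features[:, j], features[:, s])
                            for s in selected]) if selected else 0.0)
            return rel - red
        best = max(candidates, key=score)
        selected.append(best)
        candidates.remove(best)
    return selected

y = np.arange(400) % 4                         # target with two independent bits
X = np.stack([y // 2, y // 2, y % 2], axis=1)  # column 1 duplicates column 0
sel = mrmr(X, y, 2)
```

The duplicate column scores zero once its twin is selected, so the complementary bit is chosen instead, which is exactly the redundancy penalty at work.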
CVPR2010: Advanced ITinCVPR in a Nutshell: part 4: Isocontours, Registration (zukun)
This document discusses using isocontours for image registration. It proposes estimating densities and entropies from isocontour areas rather than from samples, allowing image-based density estimation without binning issues. The joint probability of two images can be estimated from the overlapping areas of their isocontours. Mutual information, computed as the individual entropies minus the joint entropy estimated from the isocontour densities, can then be maximized for registration. This approach is compared to standard histogramming.
The document discusses object recognition and categorization. It outlines the challenges of object recognition including viewpoint variation, illumination changes, occlusion, scale differences, deformations, and background clutter. It also discusses representations, learning methods, and recognition approaches for object categorization including generative vs. discriminative models and different levels of supervision.
The document discusses multiclass object detection and how knowledge can be transferred between object categories. It notes that objects share parts and properties, and these commonalities can be leveraged for multitask learning. Models like convolutional neural networks are naturally able to share representations due to translation invariance built into the network. Contextual information from surrounding objects and scenes can also improve detection of difficult objects. Approaches like conditional random fields can model long-range relationships to capture these contextual cues.
- The document contains a table of contents listing applications of image segmentation, including medical image analysis.
- It then discusses using game theory to integrate region-based and boundary-based image segmentation approaches. Pixels and boundaries are modeled as players in a game, with the goal of maximizing both region and boundary posteriors through limited interaction.
- Dominant sets, a graph-based clustering technique, is also discussed for applications like intensity, color, texture segmentation of images and video. Hierarchical segmentation is achieved by regularizing dominant sets with boundary information.
The document discusses the Lucas-Kanade template tracking method for video object tracking. It begins with a review of the Lucas-Kanade optical flow method and how it can be applied to template tracking by imposing the constraint that neighboring pixels within the template have the same flow. It notes the limitation that a constant flow assumption is unreasonable over long periods, but describes how the Lucas-Kanade approach can be generalized to other parametric motion models. It then provides a step-by-step derivation of the Lucas-Kanade tracking algorithm and shows an example using an affine warp model. Finally, it provides an overview of the tracking algorithm and discusses state-of-the-art applications to facial mesh tracking.
This document discusses using Gaussian process models for change point detection in atmospheric dispersion problems. It proposes using multiple kernels in a Gaussian process to model different regimes indicated by change points. A two-stage process is used to first estimate the change point (release time) and then estimate the source location. Simulation results show the approach outperforms existing techniques in estimating change points and source locations from concentration sensor measurements. The approach is applied to model real concentration data to estimate a CBRN release scenario.
1. The document summarizes analysis of Gaussian belief propagation (GaBP) on graphical models, including walk-sum analysis of means and variances and orbit-product analysis of determinants.
2. GaBP provides an approximate inference algorithm that computes marginal distributions by passing messages between nodes. In tree models it is exact, but in loopy graphs it can underestimate variances.
3. The analysis shows that GaBP computes a complete walk-sum for the means but an incomplete walk-sum for the variances, accounting for its inexactness on loopy graphs. It also shows that the GaBP estimate of the partition function is equal to the totally backtracking orbit-product.
Change of variables in double integrals (Tarun Gehlot)
1. The document discusses change of variables for double integrals, introducing the Jacobian determinant which relates the differentials of the original and transformed variables.
2. It provides an example of using a change of variables (u=x-y, v=x+y) to evaluate an integral over a parallelogram region.
3. Polar coordinates are also discussed as a common change of variables technique for double integrals, with an example evaluating an integral over a circular region in polar coordinates.
This document provides an overview of compressive sensing and discusses several key aspects:
- Compressive sensing acquisition uses fewer measurements than traditional sampling to reconstruct sparse signals.
- Theoretical guarantees like the restricted isometry property ensure accurate reconstruction is possible from few random measurements.
- Fourier domain measurements and structured sensing matrices like partial Fourier matrices can enable fast acquisition.
- Parameters like the regularization parameter λ in reconstruction algorithms must be selected appropriately, such as through risk minimization and prediction risk estimation techniques.
The document summarizes key concepts in social network analysis including metrics like degree distribution, path lengths, transitivity, and clustering coefficients. It also discusses models of network growth and structure like random graphs, small-world networks, and preferential attachment. Computational aspects of analyzing large networks like calculating shortest paths and the diameter are also covered.
This document is the final exam for ENGR 371 - Probability and Statistics given on April 29, 2010 at Concordia University. It contains 6 questions testing concepts like probability, confidence intervals, hypothesis testing, and distributions. Formulas relevant to the exam questions are also provided.
1) Laplace's equation describes situations where the electric potential (V) or other scalar field satisfies ∇^2V = 0. It can be solved in one, two, or three dimensions using separation of variables.
2) In three dimensions, the general solution is a sum of multipole terms involving associated Legendre polynomials. The leading terms are the monopole and dipole contributions.
3) For a dipole potential, the electric field is proportional to p/r^3 where p is the dipole moment. The field points radially away from a head-to-tail dipole and has no φ dependence.
This document discusses lazy sparse stochastic gradient descent for regularized multinomial logistic regression. It introduces multinomial logistic regression and maximum likelihood estimation. It then discusses adding regularization through Gaussian, Laplace, Cauchy, and uniform priors over the model parameters. The error function and gradient are defined to optimize the log likelihood of the data while also incorporating the log prior. Stochastic gradient descent is used to efficiently optimize this regularized objective.
This document discusses theta functions with spherical coefficients and their behavior under transformations of the modular group. It begins by defining spherical polynomials and proving that a polynomial is spherical of degree r if and only if it is a linear combination of terms of the form (ξ·x)^r, where ξ has zero norm if r is greater than or equal to 2. It then defines theta functions associated with a lattice Γ, a point z, and a spherical polynomial P. The behavior of these theta functions under substitutions of the modular group is studied by applying the Poisson summation formula. A table of relevant Fourier transforms is also provided.
This document contains exam questions related to Engineering Mathematics and Microcontrollers.
Part A of Engineering Mathematics asks students to: 1) Find an approximate value of y at x=0.1 and 0.2 using Taylor's series, 2) Solve a differential equation using Euler's modified method and carry out three modifications, 3) Determine the value of y(1.4) using Adams-Bashforth method given values of y at other points.
Part B asks students to: 1) Fit a least squares line to given data, 2) Prove and explain a trigonometric identity, 3) Find the probability of solving a problem given individual student probabilities, 4) Define terms related to probability distributions,
Cosmological Perturbations and Numerical Simulations (Ian Huston)
Talk given at Queen Mary, University of London in March 2010. Cosmological perturbation theory is well established as a tool for probing the inhomogeneities of the early universe. In this talk I will motivate the use of perturbation theory and outline the mathematical formalism. Perturbations beyond linear order are especially interesting as non-Gaussian effects can be used to constrain inflationary models. I will show how the Klein-Gordon equation at second order, written in terms of scalar field variations only, can be numerically solved. The slow-roll version of the second order source term is used and the method is shown to be extendable to the full equation. This procedure allows the evolution of second order perturbations in general and the calculation of the non-Gaussianity parameter in cases where there is no analytical solution available.
Classification with mixtures of curved Mahalanobis metrics (Frank Nielsen)
This document discusses curved Mahalanobis distances in Cayley-Klein geometries and their application to classification. Specifically:
1. It introduces Mahalanobis distances and generalizes them to curved distances in Cayley-Klein geometries, which can model both elliptic and hyperbolic geometries.
2. It describes how to learn these curved Mahalanobis metrics using an adaptation of Large Margin Nearest Neighbors (LMNN) to the elliptic and hyperbolic cases.
3. Experimental results on several datasets show that curved Mahalanobis distances can achieve comparable or better classification accuracy than standard Mahalanobis distances.
The document discusses using the Fast Fourier Transform (FFT) algorithm to multiply polynomials in faster than quadratic time. It explains that the FFT represents polynomials in a point-value representation using complex roots of unity, which allows multiplication to be performed pointwise in linear time. The FFT algorithm recursively decomposes the polynomial multiplication problem into smaller subproblems of half the size, using divide and conquer, to compute the discrete Fourier transform in O(n log n) time rather than the naive O(n^2) time. Interpolation can also be performed in similar time to convert back from the point-value representation to coefficients. Overall the FFT provides a faster algorithm for polynomial multiplication and convolution.
Parameter Estimation in Stochastic Differential Equations by Continuous Optim... (SSA KPI)
AACIMP 2010 Summer School lecture by Gerhard Wilhelm Weber. "Applied Mathematics" stream. "Modern Operational Research and Its Mathematical Methods with a Focus on Financial Mathematics" course. Part 8.
More info at http://summerschool.ssa.org.ua
This document introduces tensors through examples. It defines a vector as a rank 1 tensor and a matrix as a rank 2 tensor. It then provides an example of a rank 3 tensor. The document discusses how to define an inner product between tensors and provides examples using vectors and matrices. It also gives an example of how derivatives of a function can produce tensors of different ranks. Finally, it introduces the concept of decomposing matrices into their symmetric and antisymmetric parts.
This document discusses quantum modes and the correspondence between classical and quantum mechanics. It provides three key principles of quantum mechanics: (1) quantum states are represented by ket vectors, (2) quantum observables are hermitian operators, and (3) the Schrodinger equation governs the causal evolution of quantum systems. It also outlines how classical quantities like position and momentum correspond to quantum operators and how they form Lie algebras through commutation relations. Representations of quantum mechanics are discussed through examples like the energy basis of the harmonic oscillator.
The document presents the cooperative-Lasso, a regularization method for variable selection in regression that assumes sign-coherent group structure. It begins by introducing generalized linear models and the group Lasso estimator. It then notes two limitations of the group Lasso: it does not allow for single zeros within groups, and it does not enforce sign coherence within groups. The cooperative-Lasso is introduced as a penalty that assumes groups will have either all non-positive, non-negative, or null parameters. Examples of applications that could benefit from sign coherence between variables within groups are given.
This document discusses divergent series and integrals and proposes methods to assign them finite values. It begins by defining a generalized Borel transform that can be applied to divergent series. It then shows how this can be used to solve integral equations and derive asymptotic formulas like the prime number theorem. Finally, it demonstrates how divergent series and integrals over positive powers can be regularized and assigned finite values using tools like the Euler-Maclaurin summation formula and Laurent series expansions.
CVPR2010: Advanced ITinCVPR in a Nutshell: part 7: Future Trend
1. SQUARE-ROOT WAVELET DENSITIES AND SHAPE ANALYSIS
Anand Rangarajan, Center for Vision, Graphics and Medical Imaging (CVGMI), University of Florida, Gainesville
3. Square-root densities
Wavelet expansion of the square root of the density:
$$\sqrt{p}(x) = \sum_k \alpha_{j_0,k}\,\phi_{j_0,k}(x) + \sum_{j \ge j_0}\sum_k \beta_{j,k}\,\psi_{j,k}(x)$$
Shape is a point on a hypersphere due to the Fisher-Rao geometry.
4. Wavelet Representations
Father (scaling) function φ and mother wavelet ψ. Wavelets can approximate any f ∈ ℒ², i.e.
$$f(x) = \sum_k \alpha_{j_0,k}\,\phi_{j_0,k}(x) + \sum_{j \ge j_0}\sum_k \beta_{j,k}\,\psi_{j,k}(x)$$
with translation index k and resolution level j.
Only work with compactly supported, orthogonal basis families: Haar, Daubechies, Symlets, Coiflets.
5. Expand √p, Not p!
Expand in a multi-resolution basis:
$$\sqrt{p}(x) = \sum_k \alpha_{j_0,k}\,\phi_{j_0,k}(x) + \sum_{j \ge j_0}\sum_k \beta_{j,k}\,\psi_{j,k}(x)$$
Integrability constraint:
$$h(\alpha_{j_0,k}, \beta_{j,k}) = \sum_k \alpha_{j_0,k}^2 + \sum_{j \ge j_0}\sum_k \beta_{j,k}^2 = 1$$
Estimate the coefficients using a constrained maximum likelihood objective:
$$\mathcal{L}(\Theta) = -\log \prod_{i=1}^{N} p(x_i \mid \Theta) + \lambda \Big( \sum_k \alpha_{j_0,k}^2 + \sum_{j \ge j_0}\sum_k \beta_{j,k}^2 - 1 \Big)$$
The asymptotic Hessian of the negative log likelihood is E{H_L} = 4I, where Θ = {α_{j0,k}, β_{j,k}}, so the objective is convex.
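To make the constraint concrete, here is a minimal numerical sketch (not the paper's estimator): it builds √p from Haar scaling functions at an assumed level j0 with made-up unit-norm coefficients, and checks that squaring the expansion yields a density integrating to one.

```python
import numpy as np

def haar_scaling(x, j0, k):
    # phi_{j0,k}(x) = 2^{j0/2} phi(2^{j0} x - k), with phi the indicator of [0, 1)
    t = 2.0**j0 * x - k
    return 2.0**(j0 / 2.0) * ((t >= 0) & (t < 1)).astype(float)

rng = np.random.default_rng(0)
j0 = 2                                   # 2^{j0} = 4 scaling functions tile [0, 1)
ks = np.arange(2**j0)
alpha = rng.normal(size=ks.size)
alpha /= np.linalg.norm(alpha)           # enforce sum_k alpha_k^2 = 1

x = np.linspace(0.0, 1.0, 100_001)
sqrt_p = sum(a * haar_scaling(x, j0, k) for a, k in zip(alpha, ks))
p = sqrt_p**2                            # squaring guarantees p(x) >= 0

dx = x[1] - x[0]
print(np.sum(p) * dx)                    # approximately 1 by orthonormality
```

Because the basis is orthonormal, the unit-norm constraint on the coefficients is exactly the integrability constraint on p, so no separate normalization of p is needed.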
6. 2D Density Estimation

Density        WDE Basis   WDE ISE     KDE Fixed BW ISE   KDE Variable BW ISE
Bimodal        SYM7        6.773E-03   1.752E-02          8.114E-03
Trimodal       COIF2       6.439E-03   6.621E-03          1.037E-02
Kurtotic       COIF4       6.739E-03   8.050E-03          7.470E-03
Quadrimodal    COIF5       3.977E-04   1.516E-03          3.098E-03
Skewed         SYM10       4.561E-03   8.166E-03          5.102E-03

Peter and Rangarajan, IEEE T-IP, 2008
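The ISE (integrated squared error) used in the table can be approximated on a grid as ∫(p̂ − p)² dx. The sketch below does this for a fixed-bandwidth Gaussian KDE on a bimodal target; the mixture, sample size, and bandwidth are illustrative stand-ins, not the experimental settings behind the table.

```python
import numpy as np

rng = np.random.default_rng(1)

def gauss(x, m, s):
    return np.exp(-(x - m)**2 / (2 * s**2)) / (s * np.sqrt(2 * np.pi))

def true_p(x):
    # illustrative bimodal target density
    return 0.5 * gauss(x, -2.0, 0.7) + 0.5 * gauss(x, 2.0, 0.7)

# draw samples and form a fixed-bandwidth Gaussian KDE
n, bw = 500, 0.3
comp = rng.random(n) < 0.5
samples = np.where(comp, rng.normal(-2.0, 0.7, n), rng.normal(2.0, 0.7, n))

x = np.linspace(-6.0, 6.0, 2001)
kde = gauss(x[:, None], samples[None, :], bw).mean(axis=1)

dx = x[1] - x[0]
ise = np.sum((kde - true_p(x))**2) * dx   # grid approximation of the ISE
print(f"ISE = {ise:.3e}")
```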
9. How Do We Select the Number of Levels?
In the wavelet expansion of √p we need to set j0 (the starting level) and j1 (the ending level):
$$\sqrt{p}(x) = \sum_k \alpha_{j_0,k}\,\phi_{j_0,k}(x) + \sum_{j_0 < j \le j_1}\sum_k \beta_{j,k}\,\psi_{j,k}(x)$$
Balasubramanian [32] proposed a geometric approach by analyzing the posterior of a model class:
$$p(\mathcal{M} \mid E) = \frac{p(\mathcal{M}) \int p(\Theta)\, p(E \mid \Theta)\, d\Theta}{p(E)}$$
The model selection criterion (razor) is
$$R(\mathcal{M}) = -\ln p(E \mid \hat\Theta) + \frac{k}{2}\ln\frac{N}{2\pi} + \ln \int \sqrt{\det g_{ij}(\Theta)}\, d\Theta + \frac{1}{2}\ln\frac{\det \tilde g_{ij}(\hat\Theta)}{\det g_{ij}(\hat\Theta)}$$
Term by term: the first scales with the ML fit, the number of parameters k, and the number of samples N; the integral is the total volume of the model class manifold, to be compared with the volume of distinguishable distributions around the ML estimate; the last term is the ratio of the empirical Fisher information g̃ to the expected Fisher information g.
10. Connections to MDL

The volume around the MLE is

V_Θ̂(M) = (2π/N)^{k/2} √(det g_ij(Θ̂)) / √(det g̃_ij(Θ̂))

The last term of the razor disappears since

G(Θ) = det g_ij(Θ̂) / det g̃_ij(Θ̂) → 1 as N → ∞

This simplification leads to

R(M) = MDL = − ln p(E | Θ̂) + (k/2) ln(N/2π) + ln ∫ √(det g_ij(Θ)) dΘ
11. Geometric Intuition

[Figure: space of distributions and counting volumes; annotation: "The razor prefers these."]
12. MDL for Wavelet Densities on the Hypersphere

[Figure: space of distributions on the hypersphere.]
13. Intuition Behind Shrinking Surface Area

Volume gets pushed into corners as dimensions increase.

d | Vs/Vc
1 | 1
2 | .785
3 | .524
4 | .308
5 | .164
6 | .08

In 100 dimensions, the sphere's unit-length radius reaches only 10% of the way along the cube's diagonal.
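The ratios in the table follow from the closed-form volume of a d-ball; a quick sketch to reproduce them (the formula below is the standard inscribed-ball-to-unit-cube volume ratio, not code from the talk):

```python
import math

def sphere_cube_ratio(d):
    # Volume of the ball inscribed in the unit cube (radius 1/2) divided by
    # the cube's volume: pi^(d/2) / (2^d * Gamma(d/2 + 1))
    return math.pi ** (d / 2) / (2 ** d * math.gamma(d / 2 + 1))

for d in range(1, 7):
    print(d, round(sphere_cube_ratio(d), 3))   # 1 1.0, 2 0.785, 3 0.524, ...

# In d = 100 the inscribed ball's radius (1/2) covers only
# 0.5 / (sqrt(100)/2) = 10% of the cube's half-diagonal.
print(0.5 / (math.sqrt(100) / 2))              # → 0.1
```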
14. Nested Subspaces Lead to Simpler Model Selection

The hypersphere dimensionality remains the same under any MRA:

k = k/2 + k/2 = k/2 + k/4 + k/4 = ⋯

It is sufficient to search over j0, using only scaling functions for density estimation.

MDL is invariant to the MRA; however, sparsity is not considered.
15. Other Model Selection Criteria

Two-term MDL (MDL2) (Rissanen 1978):

MDL2 = − ln p(E | Θ̂) + (k/2) ln(N/2π)

Akaike Information Criterion (AIC) (Akaike 1973):

AIC = − 2 ln p(E | Θ̂) + 2k

Bayesian Information Criterion (BIC) (Schwarz 1978):

BIC = − 2 ln p(E | Θ̂) + k ln(N)

Also compared to other distance measures:
Hellinger divergence (HELL)
Mean Squared Error (MSE)
L1
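As a sketch, the three penalized-likelihood criteria above differ only in their complexity terms; the helpers below score a model from its negative log-likelihood, parameter count k, and sample size N (the toy numbers are illustrative, not results from the talk):

```python
import math

def mdl2(nll, k, N):
    # Two-term MDL: -ln p(E|theta_hat) + (k/2) ln(N / 2*pi)
    return nll + 0.5 * k * math.log(N / (2 * math.pi))

def aic(nll, k):
    # AIC: -2 ln p(E|theta_hat) + 2k
    return 2 * nll + 2 * k

def bic(nll, k, N):
    # BIC: -2 ln p(E|theta_hat) + k ln N
    return 2 * nll + k * math.log(N)

# Toy comparison: a richer model (k=32) must lower the NLL enough
# to beat a coarser one (k=8) under each criterion.
for k, nll in [(8, 520.0), (32, 505.0)]:
    print(k, round(mdl2(nll, k, 1000), 2),
          round(aic(nll, k), 2), round(bic(nll, k, 1000), 2))
```

Note the different scaling conventions: MDL2 is stated on the ln-likelihood scale, while AIC and BIC carry the conventional factor of 2.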
17. MDL3 vs. BIC and MSE

[Figure panels: MDL3 with j0=2 and j0=1; MSE with j0=4; BIC with j0=0.]
18. Part III Summary
The simplified geometry of √p allows us to compute the model volume term of MDL in closed form.
Misspecified models can be avoided by ensuring we have enough samples relative to the number of coefficients in the wavelet density expansion.
We leveraged the nested property of the hypersphere to restrict the parameter search space to only scaling-function start levels.
MDL for WDE provides a geometrically motivated
way to select the decomposition levels for wavelet
densities.
21. Geometry of Shape Matching

Shape is a point on a hypersphere.

Point set representation → Wavelet density estimation

Fast shape similarity using the Hellinger divergence:

D(p1 ‖ p2) = ∫ ( √p(x | Θ1) − √p(x | Θ2) )² dx = 2 − 2 Θ1ᵀΘ2

Or the geodesic distance:

D(p1, p2) = cos⁻¹(Θ1ᵀΘ2)
22. Localized Alignment Via Sliding

[Figure: two coefficient sequences, e.g. (0 0 3 0 0 0 3 0 0 0 3 0 0 0 0 0) vs. (0 0 0 0 0 3 0 0 0 3 0 0 0 3 0 0).]

Local shape differences will cause coefficients to shift.
Permutations ⇒ Translations
Slide coefficients back into alignment.
23. Penalize Excessive Sliding

The location operator r(j, k) gives the centroid of each (j, k) basis function.
The sliding cost equals the squared Euclidean distance between basis locations.
24. Sliding Objective

The objective minimizes over penalized permutation assignments:

E(π) = − [ Σ_{j0,k} α⁽¹⁾_{j0,k} α⁽²⁾_{j0,π(k)} + Σ_{j>j0,k} β⁽¹⁾_{j,k} β⁽²⁾_{j,π(k)} ]
     + λ [ Σ_{j0,k} ‖r(j0,k) − r(π(j0,k))‖² + Σ_{j,k} ‖r(j,k) − r(π(j,k))‖² ]

The first bracket is the coefficient similarity under the permutation operator π; the second is the location penalty weighted by λ.

Solve via linear assignment using the cost matrix

C = Θ1Θ2ᵀ + λD

where Θi is the vectorized list of the ith shape's coefficients and D is the matrix of squared distances between basis locations.
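The penalized assignment can be solved with an off-the-shelf Hungarian solver; below is a sketch using scipy's `linear_sum_assignment` (which minimizes, so the similarity term is negated in the cost). The 1-D basis locations, λ value, and function name are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def slide_coefficients(theta1, theta2, locations, lam=0.1):
    # Cost matrix: negated coefficient similarity Theta1 Theta2^T plus the
    # lambda-weighted squared distance each coefficient would have to slide.
    sim = np.outer(theta1, theta2)
    d2 = (locations[:, None] - locations[None, :]) ** 2
    rows, cols = linear_sum_assignment(-sim + lam * d2)
    return cols  # permutation pi aligning shape 2's coefficients to shape 1's

theta = np.full(4, 0.5)              # unit-norm coefficient vector (toy)
locs = np.arange(4, dtype=float)
print(slide_coefficients(theta, theta, locs))   # → [0 1 2 3], identity permutation
```

For identical shapes the penalty makes the identity permutation optimal; a locally shifted coefficient is matched to its slid position only when the similarity gain outweighs the λ-weighted sliding cost.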
26. Recognition Results on MPEG-7 DB
All recognition rates are based on the MPEG-7 bulls-eye criterion.
D2 shape distributions (Osada et al.) only at
27. Summary
The geometry associated with the √p wavelet representation allows us to represent densities as points on a unit hypersphere.
For the first time, non-rigid alignment can be addressed using a linear assignment framework.
Advantages of our method: no topological restrictions, very little pre-processing, closed-form metric.
Sliding wavelets provide a fast and accurate method of shape matching.