1) Biased Normalized Cuts presents a modification of Normalized Cuts that incorporates priors to allow for constrained image segmentation.
2) It seeks solutions that are sufficiently "correlated" with noisy top-down priors, such as the output of an object detector, and that can be computed quickly given the unconstrained solution.
3) The algorithm constructs a "biased normalized cut vector" that linearly combines eigenvectors such that those correlated with a user-specified seed vector are upweighted while inversely correlated ones have their sign flipped.
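A minimal NumPy sketch of the reweighting idea described above, assuming a symmetric affinity matrix W with positive degrees and a seed indicator vector; this is an illustrative simplification, not the paper's exact construction:

```python
import numpy as np

def biased_ncut_vector(W, seed, k=10):
    """Combine the k smallest generalized eigenvectors of
    (D - W) x = lam * D x, each weighted by its D-correlation with
    the seed vector: correlated eigenvectors are upweighted and
    anti-correlated ones receive negative weights (a sign flip)."""
    d = W.sum(axis=1)                        # assumes d > 0 everywhere
    D = np.diag(d)
    Dinv_sqrt = np.diag(1.0 / np.sqrt(d))
    Lsym = Dinv_sqrt @ (D - W) @ Dinv_sqrt   # normalized Laplacian
    vals, vecs = np.linalg.eigh(Lsym)
    U = Dinv_sqrt @ vecs[:, 1:k + 1]         # skip the trivial eigenvector
    w = U.T @ (D @ seed)                     # correlation with the seed
    return U @ w
```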
Central Problem of Pattern Recognition: Supervised and Unsupervised Learning addresses challenging inference problems in pattern recognition including:
1. Representing objects through data representation.
2. Defining and modeling patterns and structures within the data.
3. Optimizing structures through searching for preferred structures within the data.
4. Validating structures by determining if the identified structures are actually present in the data or are due to random fluctuations.
CVPR2010: Advanced ITinCVPR in a Nutshell: part 5: Shape, Matching and Diverg... (zukun)
The document discusses using divergence measures like the Jensen-Shannon divergence to align multiple point sets represented as probability density functions. It motivates using the JS divergence by modeling point sets as mixtures of density functions, and shows how the likelihood ratio between models leads to the JS divergence. It then formulates the problem of group-wise point set registration as minimizing the JS divergence between density functions, combined with a regularization term. Experimental results on aligning multiple 3D hippocampus point sets are also presented.
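For intuition, a small sketch of the JS divergence on discretized densities; the paper works with continuous mixture models, so the grid discretization here is an assumption for illustration only:

```python
import numpy as np

def js_divergence(p, q, eps=1e-12):
    """Jensen-Shannon divergence between two discretized densities
    p and q (nonnegative arrays; normalized here to sum to 1)."""
    p = p / p.sum()
    q = q / q.sum()
    m = 0.5 * (p + q)
    kl = lambda a, b: np.sum(a * np.log((a + eps) / (b + eps)))
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)
```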
An efficient approach to wavelet image Denoising (ijcsit)
This document proposes an efficient approach to wavelet image denoising based on minimizing mean squared error. It uses Stein's unbiased risk estimate (SURE), which provides an accurate estimate of mean squared error without needing the original noiseless image. The key idea is to express the thresholding function as a linear combination of thresholds, allowing the minimization problem to be solved via a simple linear system rather than a nonlinear optimization. Experimental results show the proposed method achieves superior image quality compared to other techniques like BayesShrink and VisuShrink.
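For context, a sketch of SURE-based threshold selection in the classical SureShrink style: the risk is evaluated at candidate thresholds and the minimizer is applied as a soft threshold. The paper's actual contribution, expressing the thresholding function as a linear combination solved via a linear system, is not reproduced here.

```python
import numpy as np

def sure_soft_threshold(x, sigma=1.0):
    """Soft-threshold noisy coefficients x ~ N(theta, sigma^2) at the
    level minimizing Stein's unbiased risk estimate:
    SURE(t) = n*sigma^2 - 2*sigma^2*#{|x_i| <= t} + sum(min(|x_i|, t)^2)."""
    n = x.size
    candidates = np.sort(np.abs(x))
    risks = [n * sigma**2
             - 2 * sigma**2 * np.sum(np.abs(x) <= t)
             + np.sum(np.minimum(np.abs(x), t)**2)
             for t in candidates]
    t_best = candidates[int(np.argmin(risks))]
    return np.sign(x) * np.maximum(np.abs(x) - t_best, 0.0)
```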
International Journal of Computational Engineering Research (IJCER) (ijceronline)
The document presents some fixed point theorems for expansion mappings in complete metric spaces. It begins with definitions of terms like metric spaces, complete metric spaces, Cauchy sequences, and expansion mappings. It then summarizes several existing fixed point theorems for expansion mappings established by other mathematicians. The main result proved in this document is Theorem 3.1, which establishes a new fixed point theorem for expansion mappings under certain conditions on the metric space and mapping. It shows that if the mapping satisfies the given inequality, then it has a fixed point. The proof of this theorem constructs a sequence to show that it converges to a fixed point.
The International Journal of Engineering and Science (IJES) (theijes)
The International Journal of Engineering & Science is aimed at providing a platform for researchers, engineers, scientists, or educators to publish their original research results, to exchange new ideas, to disseminate information in innovative designs, engineering experiences and technological skills. It is also the Journal's objective to promote engineering and technology education. All papers submitted to the Journal will be blind peer-reviewed. Only original articles will be published.
Generalization of Tensor Factorization and Applications (Kohei Hayashi)
This document presents two tensor factorization methods: Exponential Family Tensor Factorization (ETF) and Full-Rank Tensor Completion (FTC). ETF generalizes Tucker decomposition by allowing for different noise distributions in the tensor and handles mixed discrete and continuous values. FTC completes missing tensor values without reducing dimensionality by kernelizing Tucker decomposition. The document outlines these methods and their motivations, discusses Tucker decomposition, and provides an example applying ETF to anomaly detection in time series sensor data.
This course is intended to give an overview of the nature of hash functions, their cryptographic and security properties and time-stamping as a practical usage for hash functions.
The first part of the lecture series gives an overview of what hash functions are. In the second part, we take a look at the cryptographic security requirements for hash functions. The third part of the series deals with the matter of security properties of hash functions.
In the fourth part, we explore hash functions which are provably secure but inefficient and hash functions which can be used practically. The fifth part in the series shows a practical application of hash functions on the example of time-stamping.
In the sixth and seventh part of the lectures, we look at security requirements for hash functions used in time-stamping and ways of proving how specific hash functions meet requirements for hash functions.
Lecturer Ahto Buldas is a professor at Tallinn University of Technology and the University of Tartu, specializing in complexity theory, combinatorics, cryptography, and data security. He has worked extensively with hash functions and their use in time-stamping, and together with Margus Niitsoo he showed in 2010 how global-scale time-stamping can be used with 256-bit hash functions.
We make use of the conformal compactification of Minkowski spacetime M# to explore a way of describing general, nonlinear Maxwell fields with conformal symmetry. We distinguish the inverse Minkowski spacetime [M#]−1 obtained via conformal inversion, so as to discuss a doubled compactified spacetime on which Maxwell fields may be defined. Identifying M# with the projective light cone in (4+2)-dimensional spacetime, we write two independent conformal-invariant functionals of the 6-dimensional Maxwellian field strength tensors, one bilinear and the other trilinear in the field strengths, which are to enter general nonlinear constitutive equations. We also make some remarks regarding the dimensional reduction procedure as we consider its generalization from linear to general nonlinear theories.
In this note we generalize the regularity concept for semigroups in two ways simultaneously: to higher regularity and to higher arity. We show that the one-relational and multi-relational formulations of higher regularity do not coincide, and each element has several inverses. The higher idempotents are introduced, and their commutation leads to unique inverses in the multi-relational formulation, and then further to the higher inverse semigroups. For polyadic semigroups we introduce several types of higher regularity which satisfy the arity invariance principle as introduced: the expressions should not depend on the numerical arity values, which allows us to provide natural and correct binary limits. In the first definition no idempotents can be defined, analogously to the binary semigroups, and therefore the uniqueness of inverses can be governed by shifts. In the second definition, called sandwich higher regularity, we are able to introduce the higher polyadic idempotents, but their commutation does not provide uniqueness of inverses, because of the middle terms in the higher polyadic regularity conditions.
11. Final paper - 0047, www.iiste.org call-for_paper-58 (Alexander Decker)
This document discusses generating new Julia sets and Mandelbrot sets using the tangent function. It introduces functions of the form tan(z^n) + c, where n ≥ 2, and applies Ishikawa iteration to generate new Relative Superior Mandelbrot sets and Relative Superior Julia sets. The results are entirely different from the existing literature on transcendental functions. It describes using escape criteria for polynomials to generate the fractals and discusses the geometry of the Relative Superior Mandelbrot and Julia sets generated, which possess symmetry along the real axis.
This document describes a method called "k-means" for partitioning multivariate data into k groups based on minimizing within-group variance. It can be used for applications like classification, prediction, and testing independence among variables. The k-means process works by starting each group with a single data point, then iteratively assigning new points to the nearest group and updating group means. The document proves that the k-means values converge almost surely to a set of distinct points that minimize within-group variance. It also describes some applications and extensions of the k-means method.
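A minimal sketch of the sequential (MacQueen-style) update described above, assuming points arrive as rows of a NumPy array:

```python
import numpy as np

def sequential_kmeans(points, k):
    """Seed each group with one point, then assign each remaining
    point to the nearest group mean and update that mean
    incrementally."""
    means = [points[i].astype(float) for i in range(k)]
    counts = [1] * k
    for x in points[k:]:
        j = int(np.argmin([np.linalg.norm(x - m) for m in means]))
        counts[j] += 1
        means[j] += (x - means[j]) / counts[j]   # running-mean update
    return np.array(means)
```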
This document discusses generalizing the concept of regularity for semigroups in two ways: higher regularity and higher arity (polyadic semigroups).
For binary semigroups, higher n-regularity is defined such that each element has multiple inverse elements rather than a single inverse; however, for binary semigroups this reduces to ordinary regularity. For polyadic semigroups, several definitions of regularity and higher regularity are introduced to account for the higher arity operations. Idempotents and identities are also generalized for polyadic semigroups. It is shown that the definitions of regularity for polyadic semigroups cannot be reduced in the same way.
This document discusses higher dimensional image analysis using concepts from convex geometry and mathematical morphology. It begins by defining types of images and morphological operators like dilation and erosion. It then links these concepts to convex analysis, discussing topics like morphological covers and the Brunn-Minkowski theorem. It proposes that convex structures can be extended to higher dimensional image analysis and defines the concept of a morphological space. The document concludes that convex sets play an important role in higher dimensional analysis and this work may help reduce difficulties in higher dimensional image processing.
Irreducible core and equal remaining obligations rule for mcst games (vinnief)
This document presents an algorithm for solving minimum cost spanning extension (MCSE) problems, which generalize minimum cost spanning tree problems. The algorithm finds a minimum cost extension of an existing network to connect all users to a source, and generates an associated set of cost allocations. It works by sequentially adding edges in order of non-decreasing cost, such that each added edge does not introduce new cycles. The algorithm's output is proved to be contained within the core of the associated MCSE game and is independent of the specific extension constructed. The paper also generalizes the definition of the "irreducible core" to MCSE problems and shows the algorithm's output coincides with this.
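The edge-adding step is Kruskal-like; here is a union-find sketch in which the pre-existing network is merged in first and the cost-allocation step is omitted (the edge format and names are assumptions for illustration):

```python
def mcse_extension(n, edges, existing=()):
    """Add candidate edges (cost, u, v) in nondecreasing cost order,
    skipping any edge that would close a cycle; 'existing' edges of
    the network to be extended are merged in first."""
    parent = list(range(n))

    def find(u):
        while parent[u] != u:
            parent[u] = parent[parent[u]]   # path halving
            u = parent[u]
        return u

    def union(u, v):
        ru, rv = find(u), find(v)
        if ru == rv:
            return False                    # would create a cycle
        parent[ru] = rv
        return True

    for u, v in existing:
        union(u, v)
    return [(c, u, v) for c, u, v in sorted(edges) if union(u, v)]
```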
We generalize the regularity concept for semigroups in two ways simultaneously: to higher regularity and to higher arity. We show that the one-relational and multi-relational formulations of higher regularity do not coincide, and each element has several inverses. The higher idempotents are introduced, and their commutation leads to unique inverses in the multi-relational formulation, and then further to the higher inverse semigroups. For polyadic semigroups we introduce several types of higher regularity which satisfy the arity invariance principle as introduced: the expressions should not depend on the numerical arity values, which allows us to provide natural and correct binary limits. In the first definition no idempotents can be defined, analogously to the binary semigroups, and therefore the uniqueness of inverses can be governed by shifts. In the second definition, called sandwich higher regularity, we are able to introduce the higher polyadic idempotents, but their commutation does not provide uniqueness of inverses, because of the middle terms in the higher polyadic regularity conditions. Finally, we introduce the sandwich higher polyadic regularity with generalized idempotents.
IJERA (International Journal of Engineering Research and Applications) is an international, online, ... peer-reviewed journal. For more details or to submit your article, please visit www.ijera.com
Common fixed point theorems in Polish space for nonself mapping (Alexander Decker)
This document presents a theorem and proof about the existence and uniqueness of common fixed points for random operators in Polish spaces. The theorem states that if there exist measurable mappings satisfying certain contractive conditions for two continuous random multivalued operators, then the operators have a common random fixed point. The proof constructs a Cauchy sequence that converges to a measurable mapping, which is shown to be the unique common fixed point of the operators.
Proportional and decentralized rule mcst games (vinnief)
This document presents two cost allocation rules - the proportional rule and the decentralized rule - for minimum cost spanning extension problems. The proportional rule allocates costs proportionally based on an algorithm that constructs a minimum cost spanning extension while simultaneously allocating link costs. The decentralized rule is based on an algorithm that constructs the extension in a decentralized manner, with links being added one by one and their costs immediately allocated. Both rules are shown to be refinements of the irreducible core defined for these problems. The proportional rule is then characterized axiomatically.
3D gravity inversion by planting anomalous densities (Leonardo Uieda)
Paper presented at the 2011 SBGf International Congress in Rio de Janeiro, Brazil.
Abstract:
This paper presents a novel gravity inversion method for estimating a 3D density-contrast distribution defined on a grid of prisms. Our method consists of an iterative algorithm that does not require the solution of a large equation system. Instead, the solution grows systematically around user-specified prismatic elements called "seeds". Each seed can have a different density contrast, allowing the interpretation of multiple bodies with different density contrasts and interfering gravitational effects. The compactness of the solution around the seeds is imposed by means of a regularizing function. The solution grows by the accretion of prisms neighboring the current solution. The prisms for accretion are chosen by systematically searching the set of current neighboring prisms. This approach therefore allows the columns of the Jacobian matrix to be calculated on demand, a technique known in computer science as "lazy evaluation", which greatly reduces the demands on computer memory and processing time. Tests on synthetic data and on real data collected over the ultramafic Cana Brava complex, central Brazil, confirmed the ability of our method to detect sharp and compact bodies.
This document summarizes the use of the Ritz method to approximate the critical frequencies of a tapered hollow beam. It begins by introducing the governing equations and describing the uniform beam solution. It then outlines the Ritz method, which uses the uniform beam eigenfunctions as a basis to approximate the tapered beam solution. The method is applied numerically to predict the first three critical frequencies of the tapered beam, which are found to match well with finite element analysis results. The Ritz method is concluded to be an effective way to approximate critical frequencies for more complex beam geometries.
This document provides instructions for Homework 1 for the course 6.867. It is due on September 28, with a 10% penalty for each day late. The homework explores bias-variance tradeoffs in estimating the mean of different distributions from sample data. It provides questions to answer about maximum likelihood estimators for the mean of uniform distributions on intervals of different lengths. It also covers Bayesian estimation of probabilities for a "thick coin" that can land on heads, tails, or its edge. Finally, it includes questions on Gaussian distributions, analyzing a presidential debate poll, and decision theory concepts.
EM Theory Term Presentation Keynotes Version (shelling ford)
1) The document discusses Poynting's theorem and reciprocity relations for discontinuous electromagnetic fields across three cases: (1) a dielectric-PEC interface sustaining only electric surface current, (2) a dielectric-dielectric interface sustaining no surface current, and (3) an arbitrary media-media interface sustaining both electric and magnetic surface currents.
2) It introduces the concept of a characteristic function and sense of distributions to describe fields on both sides of an interface, and defines a surface impedance Z tensor to relate the electric and magnetic surface currents.
3) For Case 3 with Z = Z^T, the interface satisfies reciprocity and can be treated as a reciprocal element.
This document discusses rectangular coordinate systems in three dimensions. It begins by introducing the 3D coordinate system with three perpendicular axes (x, y, z) that intersect at the origin. Points in 3D space are located using ordered triples (x, y, z). It then discusses visualizing points and objects in 3D, finding distances between points, and graphing simple objects like lines and planes. Examples are provided to illustrate key concepts like plotting points, finding distances, and graphing lines and planes defined by equations.
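As a small example of the distance computation mentioned above:

```python
import math

def distance(p, q):
    """Euclidean distance between two 3D points given as
    (x, y, z) triples."""
    return math.sqrt(sum((a - b)**2 for a, b in zip(p, q)))

print(distance((1, 2, 3), (4, 6, 3)))   # 5.0
```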
This document discusses various types of soft compatible maps and common fixed point theorems in soft metric spaces. It begins by generalizing concepts like (α)-soft compatible maps, (β)-soft compatible maps, soft compatible maps of type I and II. It then compares these different types of soft compatible maps. Finally, it proves a common fixed point theorem for four soft continuous self-maps on a complete soft metric space to demonstrate the utility of these new concepts.
This paper presents a new algorithm based on the Baum-Welch algorithm for estimating the parameters of a hidden Markov model that allows each state to be observed using a different set of features rather than relying on a common feature set. The algorithm uses likelihood ratios calculated from low-dimensional sufficient statistics for each state rather than high-dimensional probability density functions. This allows more accurate and robust parameter estimation compared to standard feature-based HMMs using a common feature set. The algorithm is mathematically equivalent to the standard Baum-Welch algorithm but avoids estimating high-dimensional densities.
1) The document summarizes research into generating conjectures for upper bounds on the domination number of bipartite graphs. 14 conjectures were produced, of which 3 were previously known to be true.
2) For the 11 false conjectures, smallest counterexamples were found to disprove the conjectures. An example of finding a smallest counterexample is shown.
3) The goal of the research was to determine a collection of upper bounds that could accurately predict the domination number of any bipartite graph. Such a collection could help compute domination numbers more efficiently.
The polyadic integer numbers, which form a polyadic ring, are representatives of a fixed congruence class. The basics of polyadic arithmetic are presented: prime polyadic numbers, the polyadic Euler totient function, polyadic division with a remainder, etc. are defined. Secondary congruence classes of polyadic integer numbers, which become ordinary residue classes in the binary limit, and the corresponding finite polyadic rings are introduced. Further, polyadic versions of (prime) finite fields are considered. These can be zeroless, zeroless and nonunital, or have several units; it is even possible for all of their elements to be units. There exist non-isomorphic finite polyadic fields of the same arity shape and order. None of the above situations is possible in the binary case. It is conjectured that any finite polyadic field should contain a certain canonical prime polyadic field as a smallest finite subfield, which can be considered a polyadic analogue of GF(p).
This document presents a new iterative method called the Parametric Method of Iteration for solving nonlinear systems of equations. The method rewrites each equation using a set of positive parameters and iterates the solutions until convergence within a desired accuracy. The method converges faster than traditional iteration and Newton-Raphson methods, as shown through examples. The parametric method generalizes the existing methods and allows tuning of parameters to accelerate convergence for different types of equations.
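The summary does not give the exact rewriting, so as a stand-in, here is a generic damped fixed-point iteration with one positive tuning parameter per equation; it only illustrates how such parameters can steer convergence and is not the paper's method:

```python
import numpy as np

def damped_iteration(f, x0, alpha, tol=1e-10, max_iter=1000):
    """Iterate x <- x - alpha * f(x) until the step is small;
    alpha is a scalar or per-equation vector of positive parameters."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        step = alpha * np.asarray(f(x))
        x = x - step
        if np.linalg.norm(step) < tol:
            break
    return x
```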
Interactive graph cuts for optimal boundary and region segmentation of objects (yzxvvv)
This paper proposes a new technique for interactive segmentation of multi-dimensional (N-D) images. The technique allows a user to mark pixels as "object" or "background" to provide hard constraints. A cost function is defined that balances boundary and region properties to find a globally optimal segmentation satisfying the constraints. Graph cuts are used to efficiently compute the optimal segmentation. The technique handles isolated object and background regions and can segment N-D volumes.
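A sketch of the graph construction using networkx, with boundary-only costs and hard seeds encoded as infinite-capacity terminal links; the region term and the paper's exact capacity formulas are omitted, and the 'weights' input format (pixel pairs mapped to boundary costs) is an assumption:

```python
import networkx as nx

def graph_cut_segmentation(weights, obj_seeds, bkg_seeds, lam=1.0):
    """Pixels are nodes; n-link capacities encode boundary costs and
    seed pixels are tied to the source/sink with infinite capacity,
    so the minimum cut respects the hard constraints."""
    G = nx.DiGraph()
    for (p, q), w in weights.items():
        G.add_edge(p, q, capacity=lam * w)
        G.add_edge(q, p, capacity=lam * w)
    for p in obj_seeds:
        G.add_edge('S', p, capacity=float('inf'))
    for p in bkg_seeds:
        G.add_edge(p, 'T', capacity=float('inf'))
    cut_value, (obj, bkg) = nx.minimum_cut(G, 'S', 'T')
    return obj - {'S'}, bkg - {'T'}
```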
This document provides lecture notes on spanning trees. It begins with an introduction to graphs and defines a spanning tree as a tree containing all the vertices of a connected graph with no cycles. It then discusses two algorithms for computing a spanning tree - a vertex-centric algorithm that marks vertices as being in the tree, and an edge-centric algorithm that connects singleton trees with edges. The document also covers using spanning trees to generate random mazes and finding minimum weight spanning trees using properties of cycles and Kruskal's algorithm.
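A sketch of the vertex-centric algorithm from the notes, assuming an adjacency-list input:

```python
from collections import deque

def spanning_tree(adj, root=0):
    """Mark vertices as they join the tree, recording the edge that
    discovered each new vertex; 'adj' maps a vertex to its neighbors."""
    in_tree = {root}
    tree_edges = []
    frontier = deque([root])
    while frontier:
        u = frontier.popleft()
        for v in adj[u]:
            if v not in in_tree:
                in_tree.add(v)
                tree_edges.append((u, v))
                frontier.append(v)
    return tree_edges
```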
This document provides an overview of topological quantum computing using anyons. It first discusses motivations for alternative computing approaches due to limitations of traditional computing. It then introduces braid theory, including definitions of braids and their equivalence. Braids form a braid group that can represent any braid as a word. Anyons are quasiparticles that emerge in 2D systems and can represent quantum bits in a way that is robust to environmental perturbations. Braids representing the trajectories of anyons over time can encode quantum information and computations in a topological manner.
This document discusses various topics in computer vision, including affinity measures for image segmentation, normalized cuts, human stereopsis, epipolar geometry, and trinocular stereo. It also discusses tracking applications such as motion capture, recognition from motion, surveillance, and targeting. Vehicle tracking is discussed in detail for predicting traffic flow: video from fixed cameras is used to initiate tracks automatically by constructing regions of interest that span each lane.
The document discusses pseudospectra as an alternative to eigenvalues for analyzing non-normal matrices and operators. It gives three equivalent definitions of the ε-pseudospectrum: (1) the set of points z where the norm of the resolvent exceeds ε^{-1}, (2) the set of points that are eigenvalues of some perturbed matrix A + E with a perturbation smaller than ε, and (3) the set of points z for which some unit vector v satisfies ||(zI − A)v|| < ε. It also shows that pseudospectra are nested sets whose intersection is the spectrum. The definitions extend to operators on Hilbert spaces using singular values.
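Definition (1) is equivalent to the smallest singular value of zI − A dropping below ε, which gives a direct way to evaluate pseudospectra on a grid; a minimal sketch:

```python
import numpy as np

def smin_grid(A, re, im):
    """Smallest singular value of (z*I - A) over a grid of complex
    points z = x + iy; z lies in the epsilon-pseudospectrum exactly
    when this value is below epsilon."""
    n = A.shape[0]
    I = np.eye(n)
    out = np.empty((len(im), len(re)))
    for i, y in enumerate(im):
        for j, x in enumerate(re):
            out[i, j] = np.linalg.svd((x + 1j * y) * I - A,
                                      compute_uv=False)[-1]
    return out
```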
Illustration Clamor Echelon Evaluation via Prime Piece Psychotherapy (IJMER)
International Journal of Modern Engineering Research (IJMER) is Peer reviewed, online Journal. It serves as an international archival forum of scholarly research related to engineering and science education.
International Journal of Modern Engineering Research (IJMER) covers all the fields of engineering and science: Electrical Engineering, Mechanical Engineering, Civil Engineering, Chemical Engineering, Computer Engineering, Agricultural Engineering, Aerospace Engineering, Thermodynamics, Structural Engineering, Control Engineering, Robotics, Mechatronics, Fluid Mechanics, Nanotechnology, Simulators, Web-based Learning, Remote Laboratories, Engineering Design Methods, Education Research, Students' Satisfaction and Motivation, Global Projects, and Assessment, and many more.
Gravitational Based Hierarchical Clustering Algorithm (IDES Editor)
We propose a new gravitational based hierarchical clustering algorithm using a kd-tree. The kd-tree generates densely populated packets, and the algorithm finds clusters using the gravitational force between the packets. The resulting clusters are of high quality, and the method is effective and robust. Our proposed algorithm is tested on a synthetic dataset and the results are presented.
Explanation of the paper "Felzenszwalb, Pedro F., and Daniel P. Huttenlocher. "Efficient graph-based image segmentation." International Journal of Computer Vision 59.2 (2004): 167-181."
This is one of the most cited papers in computer vision. It describes an O(N log N) algorithm for image segmentation.
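A compact union-find sketch of the merge predicate at the heart of the algorithm; the edge format and the constant k are assumptions, and the paper's graph construction and post-processing steps are not shown:

```python
def felzenszwalb_segment(n, edges, k=300.0):
    """Sort edges by weight and merge two components when the edge
    weight is no larger than either component's internal difference
    plus k / |component| (the adaptive threshold)."""
    parent = list(range(n))
    size = [1] * n
    internal = [0.0] * n          # max edge weight inside a component

    def find(u):
        while parent[u] != u:
            parent[u] = parent[parent[u]]
            u = parent[u]
        return u

    for w, a, b in sorted(edges):
        ra, rb = find(a), find(b)
        if ra != rb and w <= min(internal[ra] + k / size[ra],
                                 internal[rb] + k / size[rb]):
            parent[ra] = rb
            size[rb] += size[ra]
            internal[rb] = max(internal[rb], internal[ra], w)
    return [find(u) for u in range(n)]   # component label per node
```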
International Journal of Engineering Research and Development (IJERD) (IJERD Editor)
1) The document discusses wavelet transforms as a recent algorithm for image compression. Wavelet transforms can capture variations at different scales in an image, making them well-suited for reducing spatial redundancy.
2) A typical lossy image compression system uses four main components - source encoding, thresholding, quantization, and entropy encoding - to achieve compression by removing different types of redundancy in images.
3) Experimental results on the Lena test image showed that soft thresholding followed by quantization achieved higher peak signal-to-noise ratios than hard thresholding and quantization, demonstrating the effectiveness of wavelet transforms for image compression.
Biao Hou -- SAR IMAGE DESPECKLING BASED ON IMPROVED DIRECTIONLET DOMAIN GAUSSIA... (grssieee)
1) The document proposes a SAR image despeckling method based on cartoon-texture decomposition, an improved directionlet transform, and Gaussian mixture modeling of coefficients.
2) It decomposes SAR images into cartoon and texture parts, applies an improved directionlet transform to the texture part, and estimates noise-free coefficients using a Gaussian mixture model.
3) Experimental results on a SAR field image show the proposed method achieves better subjective visual quality and detail preservation compared to other methods like the G-MAP, wavelet, and nonsubsampled contourlet transforms.
Kakuro: Solving the Constraint Satisfaction Problem (Varad Meru)
This work was done as a part of the project for the course CS 271: Introduction to Artificial Intelligence (http://www.ics.uci.edu/~kkask/Fall-2014%20CS271/index.html), taught in Fall 2014.
The document discusses various techniques for clustering and dimensionality reduction of web documents. It introduces machine learning clustering methods like k-means clustering and discusses challenges like handling different cluster sizes and shapes. It also covers dimensionality reduction methods like principal component analysis (PCA) and locality-sensitive hashing that can be used to cluster high dimensional web document datasets by reducing their dimensionality.
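A minimal sketch of the PCA step on a document-term matrix, via SVD of the centered data; this is a generic implementation, not tied to any particular system in the document:

```python
import numpy as np

def pca_reduce(X, k):
    """Project the rows of X onto the top-k principal components,
    i.e. the directions of maximal variance."""
    Xc = X - X.mean(axis=0)
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T
```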
Spectral clustering is a technique for clustering data points into groups using the spectrum (eigenvalues and eigenvectors) of the similarity matrix of the points. It works by constructing a graph from the pairwise similarities of points, calculating the Laplacian of the graph, and using the k eigenvectors of the Laplacian corresponding to the smallest eigenvalues to embed the points into a k-dimensional space. K-means clustering is then applied to the embedded points to obtain the final clustering. The document discusses two basic spectral clustering algorithms that differ in whether they use the normalized or unnormalized Laplacian.
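A sketch of the unnormalized variant, assuming a Gaussian similarity with bandwidth sigma and scikit-learn's KMeans for the final step:

```python
import numpy as np
from sklearn.cluster import KMeans

def spectral_clustering(X, k, sigma=1.0):
    """Build a Gaussian similarity matrix, form the unnormalized
    Laplacian L = D - W, embed points with the eigenvectors of the
    k smallest eigenvalues, and run k-means in the embedding."""
    sq = np.sum((X[:, None, :] - X[None, :, :])**2, axis=-1)
    W = np.exp(-sq / (2 * sigma**2))
    np.fill_diagonal(W, 0.0)
    L = np.diag(W.sum(axis=1)) - W
    vals, vecs = np.linalg.eigh(L)
    return KMeans(n_clusters=k, n_init=10).fit_predict(vecs[:, :k])
```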
This document compares three image restoration techniques - Iterated Geometric Harmonics, Markov Random Fields, and Wavelet Decomposition - for removing noise from images. It describes each technique and the process used to test them. Noise was artificially added to images using different noise generation functions. Wavelet Decomposition and Markov Random Fields were then used to detect the noise locations. These noise locations were then used to create versions of the noisy images suitable for reconstruction via Iterated Geometric Harmonics. The reconstructed images were then compared to the original to evaluate the performance of each technique.
This document summarizes a student paper about using the shortest path algorithm to interpolate contours in images. It discusses how the human visual system perceives 3D representations from 2D images and how extracting meaningful contours is challenging due to noise and discontinuities. The paper proposes using a modified Dijkstra's algorithm to find the shortest path in log-polar space, which maps circles in images to straight lines. This approach aims to identify simple, closed curves representing object contours while ignoring irrelevant edges.
Paper introduction: Learning With Neighbor Consistency for Noisy Labels (Toru Tamaki)
Ahmet Iscen, Jack Valmadre, Anurag Arnab, Cordelia Schmid, "Learning With Neighbor Consistency for Noisy Labels" CVPR2022
https://openaccess.thecvf.com/content/CVPR2022/html/Iscen_Learning_With_Neighbor_Consistency_for_Noisy_Labels_CVPR_2022_paper.html
This document provides an introduction to graph neural networks (GNNs). It discusses that GNNs are designed to perform inference on graph data using neural networks. GNNs can analyze graph data by performing node classification, graph classification, graph visualization, link prediction, and graph clustering. The document explains that GNNs apply the concepts of node embedding and message passing to learn node representations that encode structural information from a graph. It describes how GNNs use neural network layers and aggregation functions to learn representations by propagating and transforming node feature information across a graph.
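A single mean-aggregation message-passing layer in NumPy, as a sketch of the propagate-and-transform step described above (the projection matrix W would be learned in practice):

```python
import numpy as np

def message_passing_layer(A, H, W):
    """Each node averages its neighbors' feature vectors (A is the
    adjacency matrix, H the node-feature matrix), applies a learned
    projection W, then a ReLU nonlinearity."""
    deg = A.sum(axis=1, keepdims=True).clip(min=1)
    aggregated = (A @ H) / deg               # mean over neighbors
    return np.maximum(0.0, aggregated @ W)   # transform + ReLU
```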
Lec11: Active Contour and Level Set for Medical Image Segmentation (Ulaş Bağcı)
Topics: Active Contour (Snake) • Level Set • Applications. Related lecture topics: enhancement, noise reduction, and signal processing • medical image registration • medical image segmentation • medical image visualization • machine learning in medical imaging • shape modeling/analysis of medical images • deep learning in radiology • fuzzy connectivity (FC) and affinity functions: absolute FC, relative FC (and iterative relative FC), with example applications such as segmentation of airways and airway walls using an RFC-based method • energy functionals with data and smoothness terms • graph cut (min-cut/max-flow) • applications in radiology images.
Object segmentation by alignment of poselet activations to image contours (irisshicat)
This document proposes techniques to segment objects in images using a combination of bottom-up (image edges) and top-down (object detector) cues. The key techniques are:
1. Extending an existing poselet detector to 19 additional categories beyond people. Poselets can predict masks for parts of objects.
2. Non-rigidly aligning the predicted poselet masks to image contours to increase segmentation precision and remove false positives.
3. Spatially smoothing the aligned masks while ensuring non-overlapping object regions using a variational technique.
4. Refining the segmentation further based on self-similarity of small image patches. The approach achieves competitive results on the PASCAL VOC benchmark.
A probabilistic model for recursive factorized image features ppt (irisshicat)
The document describes a probabilistic model called recursive latent Dirichlet allocation (rLDA) for hierarchical image modeling. rLDA is based on latent Dirichlet allocation and has multiple layers of representations with increasing spatial support, where each layer learns representations jointly across layers through joint inference. This allows for distributed coding of local image features in a hierarchical manner while performing full Bayesian inference. The model is evaluated for its ability to learn hierarchical representations from images.
A probabilistic model for recursive factorized image features (irisshicat)
This document proposes a probabilistic model for learning hierarchical visual representations in a recursive manner. The model, based on Latent Dirichlet Allocation, learns image features at multiple layers of abstraction jointly rather than in a strictly feedforward way. The model represents local image patches as distributions over visual words at the lowest layer, and higher layers learn distributions over the representations of lower layers. Evaluating the model on a standard recognition dataset, it outperforms existing hierarchical models and achieves performance on par with state-of-the-art single-feature models, demonstrating the benefits of joint learning and inference in hierarchical visual processing.
The document discusses the mean shift algorithm, a non-parametric technique for analyzing complex multimodal feature spaces and estimating the stationary points (modes) of the underlying probability density function without explicitly estimating it. It provides an intuitive description of mean shift using a distribution of billiard balls, and outlines how mean shift uses kernel density estimation to perform gradient ascent and converge at the densest regions, allowing it to be used for tasks like mode detection, clustering, and image segmentation.
The mean shift procedure is a general nonparametric technique for analyzing complex multimodal feature spaces and delineating arbitrarily shaped clusters. It works by recursively finding the nearest stationary point of the underlying density function, which corresponds to the mode of the density. The mean shift procedure relates to kernel density estimation and robust M-estimators of location. It provides a versatile tool for feature space analysis that can solve many low-level computer vision tasks with few parameters.
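A small Gaussian-kernel sketch of the procedure both summaries describe: every point is repeatedly shifted to the kernel-weighted mean of the data until it settles at a mode (the bandwidth and iteration count here are illustrative choices):

```python
import numpy as np

def mean_shift(points, bandwidth=1.0, n_iter=50):
    """Move each point to the Gaussian-kernel-weighted mean of all
    points, repeatedly; points converge toward density modes."""
    modes = points.astype(float)
    for _ in range(n_iter):
        for i, x in enumerate(modes):
            d2 = np.sum((points - x)**2, axis=1)
            w = np.exp(-d2 / (2 * bandwidth**2))
            modes[i] = (w[:, None] * points).sum(axis=0) / w.sum()
    return modes
```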
Shape matching and object recognition using shape context, Belongie PAMI02 (irisshicat)
1) The document presents a novel approach for measuring shape similarity and using it for object recognition. It involves finding point correspondences between shapes, estimating an aligning transformation, and computing distance as a sum of matching errors and transformation magnitude.
2) At the core is using a "shape context" descriptor at sample points to solve the correspondence problem as a graph matching problem. This provides correspondences to estimate an aligning transformation.
3) Shape similarity is then a measure of matching errors between corresponding points after alignment, allowing nearest neighbor classification for recognition. Results are shown for various datasets.
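For concreteness, here is a rough sketch (our own simplification, not the authors' code; the bin counts and radial limits are assumptions) of the log-polar "shape context" histogram computed at one sample point:

```python
# Log-polar histogram of the positions of all other sample points
# relative to a reference point, normalized for scale.
import numpy as np

def shape_context(points, ref, n_r=5, n_theta=12, r_min=0.125, r_max=2.0):
    rel = np.delete(points, ref, axis=0) - points[ref]
    r = np.linalg.norm(rel, axis=1)
    r = r / r.mean()                              # scale invariance
    theta = np.arctan2(rel[:, 1], rel[:, 0]) % (2 * np.pi)
    r_edges = np.logspace(np.log10(r_min), np.log10(r_max), n_r + 1)
    r_bin = np.clip(np.digitize(r, r_edges) - 1, 0, n_r - 1)
    t_bin = (theta / (2 * np.pi) * n_theta).astype(int) % n_theta
    hist = np.zeros((n_r, n_theta))
    np.add.at(hist, (r_bin, t_bin), 1)            # count points per log-polar bin
    return hist / hist.sum()

pts = np.random.default_rng(0).random((60, 2))    # toy sample points
print(shape_context(pts, ref=0).shape)            # (5, 12) descriptor
```

Correspondences between two shapes can then be found by bipartite matching over histogram distances (e.g., chi-squared costs), as the summary above describes.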
Color and texture-based image segmentation using the expectation-maximizat...irisshicat
This document proposes a new image representation called "blobworld" for content-based image retrieval. It uses EM segmentation on combined color and texture features to segment images into coherent "blobs". The system allows users to view an image's internal blobworld representation to better understand query results. It aims to improve on existing systems by recognizing images as combinations of objects rather than just "stuff".
Shape matching and object recognition using shape contextsirisshicat
The document discusses a seminar on shape matching and object recognition using shape contexts. Shape contexts are shape descriptors that can be used to match shapes. They involve creating log-polar histograms of sampled shape points and finding correspondences between points on different shapes using bipartite graph matching. The approach was used for applications like digit recognition, achieving an error rate of 0.63% with 20,000 training examples, and 3D object detection using 72 views per object.
Biased normalized cuts

Biased Normalized Cuts
Subhransu Maji^1, Nisheeth K. Vishnoi^2, and Jitendra Malik^1
^1 University of California, Berkeley
^2 Microsoft Research India, Bangalore
smaji@eecs.berkeley.edu, nisheeth.vishnoi@gmail.com, malik@eecs.berkeley.edu
Abstract

We present a modification of "Normalized Cuts" to incorporate priors which can be used for constrained image segmentation. Compared to previous generalizations of "Normalized Cuts" which incorporate constraints, our technique has two advantages. First, we seek solutions which are sufficiently "correlated" with priors, which allows us to use noisy top-down information, for example from an object detector. Second, given the spectral solution of the unconstrained problem, the solution of the constrained one can be computed in small additional time, which allows us to run the algorithm in an interactive mode. We compare our algorithm to other graph cut based algorithms and highlight the advantages.

Figure 1. Segmentation using bottom-up & top-down information. Panels: (a) Image; (b) Bottom-up; (c) Top-down; (d) Biased Ncut.

1. Introduction

Consider Figure 1. Suppose that we want to segment the cat from its background. We can approach this:

1. Bottom-up: we might detect contours corresponding to significant changes in brightness, color or texture. The output of one such detector is shown in Figure 1(b). Note that there will be internal contours, e.g. corresponding to the colored spots, that are of higher contrast than the external contour separating the cat from its background. This makes it difficult for any bottom-up segmentation process to separate the cat from its background in the absence of high-level knowledge of what cats look like.

2. Top-down: we might look for strong activation of, say, a sliding window cat detector. The output of one such part-based detector is shown in Figure 1(c). But note that this too is inadequate for our purpose. The outputs of these detectors are typically not sharply localized; indeed that is almost a consequence of the desire to make them invariant to small deformations.

In this paper we present an approach, "Biased Normalized Cuts", where we try to get the best of both worlds, with results as shown in Figure 1(d). The formalism is that of graph partitioning approached using tools and techniques from spectral graph theory. As in Normalized Cuts [18], we begin by computing eigenvectors of the Normalized Laplacian of a graph where the edge weights $w_{ij}$ correspond to the "affinity" between two pixels based on low-level similarity cues. This is illustrated in Figure 2 for the cat image, where the top row shows the second to fifth eigenvectors, $u_i$. Note that none of them is a particularly good indicator vector from which the cat can be extracted by thresholding.

We are interested in cuts which not only minimize the normalized cut value but where, at the same time, one of the sets in the partition $(S, \bar S)$ is required to have a sufficient overlap with a "bias" or a "prior guess". In Figure 1, this was the output of a top-down detector. In Figure 2, bottom row, the user has specified the bias by indicating various point sets $T$ in the different panels. We will define a seed vector $s_T$ associated to $T$. Let $D_G$ be the diagonal degree matrix of the graph, and $u_i$, $\lambda_i$ be the eigenvectors and eigenvalues of the Normalized Laplacian. Theorem 3.1 (originally due to [15]) shows that we can get cuts that meet the twin goals of having a small normalized cut and being sufficiently correlated with $s_T$ by a remarkably simple procedure: construct the biased normalized cut vector

$$x^\star = c \sum_{i=2}^{n} \frac{1}{\lambda_i - \gamma}\, u_i u_i^T D_G s_T.$$

Intuitively, the eigenvectors are linearly combined such that the ones that are well correlated with the seed vector are upweighted, while those that are inversely correlated have their sign flipped. In Figure 2, bottom row, these vectors are shown as images, and the bias point sets are overlaid. This vector $x^\star$ can be thresholded to find the desired "biased normalized cut". One can run the algorithm in an interactive mode by computing the eigenvectors once and generating the biased normalized cut for any bias.
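To make the construction concrete, here is a minimal sketch (our own helper, not the authors' released code; the function name and the dense eigensolver are assumptions) that forms the biased normalized cut vector from an affinity matrix `W`, a boolean seed mask, and a correlation parameter `gamma`:

```python
import numpy as np
from scipy.linalg import eigh

def biased_ncut_vector(W, seed, gamma):
    """Biased normalized cut vector for a symmetric nonnegative affinity
    matrix W, boolean seed mask `seed`, and gamma < lambda_2."""
    d = W.sum(axis=1)
    inv_sqrt_d = 1.0 / np.sqrt(d)
    # Normalized Laplacian: I - D^{-1/2} W D^{-1/2}
    L_norm = np.eye(len(d)) - (W * inv_sqrt_d[:, None]) * inv_sqrt_d[None, :]
    lam, U = eigh(L_norm)                   # ascending; lam[0] is ~0
    vol_G, vol_T = d.sum(), d[seed].sum()
    vol_Tbar = vol_G - vol_T
    s_T = np.sqrt(vol_T * vol_Tbar / vol_G) * np.where(seed, 1 / vol_T, -1 / vol_Tbar)
    w = (U.T @ (d * s_T)) / (lam - gamma)   # u_i^T D_G s_T / (lambda_i - gamma)
    w[0] = 0.0                              # drop the constant eigenvector
    return U @ w                            # x* up to the constant c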
2. Previous Work

Spectral graph theory originated in the 1970s with work such as Donath and Hoffman [10] and Fiedler [13]. By the 1990s there was a well developed literature in mathematics and theoretical computer science, and Chung [9] provides a monograph-level treatment. In computer vision and machine learning, the work of Shi and Malik on normalized cuts [18, 19] was very influential, and since then a significant body of work has emerged in these fields to which we cannot possibly do justice here. The best current approaches to finding contours and regions bottom-up are still based on variations of this theme [16].

Given our interest in biasing the solution to satisfy additional constraints, two papers need particular mention. The problem in the constrained setting for image segmentation has been studied by Yu and Shi [21], who find the solution to the normalized cuts problem subject to a set of linear constraints of the form $U^T x = 0$. This problem can be reduced to an eigenvalue problem which can be solved using spectral techniques as well. Eriksson et al. [11] generalize the set of constraints to $U^T x = b$. We will see in Section 5 that enforcing constraints in this manner is not robust when the constraints are noisy. The computational complexity of these approaches is also significantly higher than that of solving the basic normalized cuts problem.

In the last decade the most popular approach to interactive image segmentation in computer vision and graphics has followed in the steps of the work of Boykov and Jolly [7], who find an image segmentation by computing a min-cut/max-flow on a graph which encodes both the user constraints and pairwise pixel similarity. This line of work has been further investigated by Blake and Rother [3], who experiment with ways to model the foreground and background regions. In the GrabCut framework [17], the process of segmentation and foreground/background modeling is repeated till convergence. With advances in min-cut/max-flow algorithms like [6], these methods have become computationally attractive and can often be used in an interactive mode. However, these methods fail when the constraints are sparse, making it difficult to construct good foreground/background models, and they tend to produce isolated cuts.

From a theoretical perspective, there has been significant interest in the cut improvement problem: given as input a graph and a cut, find a subset that is a better cut. This started with Gallo, Grigoriadis and Tarjan [14] and has attained more recent attention in the work of Andersen and Lang [2], who gave a general algorithm that uses a small number of single-commodity maximum-flows to find low-conductance cuts not only inside the input subset $T$, but among all cuts which are well-correlated with $(T, \bar T)$. Among spectral methods, local graph partitioning was introduced by Spielman and Teng [20], who were interested in finding a low-conductance cut in a graph in time nearly-linear in the volume of the output cut. They used random walk based methods to do this; subsequently this result was improved by Andersen, Chung and Lang [1] using certain Personalized PageRank based random walks.

3. Biased Graph Partitioning

Image as a Weighted Graph and Normalized Cuts. The image is represented as a weighted undirected graph $G = (V, E)$ where the nodes of the graph are the points in the feature space, and an edge is formed between every pair of nodes. The weight function on the edges $w : E \to \mathbb{R}_{\geq 0}$ is a function of the similarity between the end points of the edge. The volume of a set of vertices $S$ is the total weight of the edges incident to it, $\mathrm{vol}(S) := \sum_{i \in S, j \in V} w(i,j)$, and $\mathrm{vol}(G) := \sum_{i,j \in V} w(i,j)$ is the total weight of the edges in the graph. The normalized cut measure, defined next, is a standard way to measure the degree of dissimilarity between two pieces of an image. For $S \subseteq V$, let $\bar S$ denote $V \setminus S$, and let $\mathrm{cut}(S, \bar S) := \sum_{i \in S, j \in \bar S} w(i,j)$ be the weight of the edges crossing the cut $(S, \bar S)$. Then the normalized cut value corresponding to $(S, \bar S)$ is defined as

$$\mathrm{Ncut}(S) = \frac{\mathrm{cut}(S, \bar S)}{\mathrm{vol}(S)} + \frac{\mathrm{cut}(S, \bar S)}{\mathrm{vol}(\bar S)}.$$

Noting that $\mathrm{vol}(S) + \mathrm{vol}(\bar S) = \mathrm{vol}(G)$, it can be seen that this notion of a normalized cut is the same as that of the traditional graph conductance, defined for a set $S$ as

$$\phi(S) = \mathrm{vol}(G) \cdot \frac{\mathrm{cut}(S, \bar S)}{\mathrm{vol}(S) \cdot \mathrm{vol}(\bar S)}.$$

The conductance of the graph $G$ is $\phi(G) := \min_{S \subseteq V} \phi(S)$ and is an extremely well studied quantity in graph theory, computer science and machine learning. It is NP-hard to compute exactly, and one of the earliest and most popular approaches to approximating it is to write down a relaxation which boils down to computing the second eigenvalue of the Laplacian matrix associated to the graph. We discuss this next.
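As a quick sanity check of the identity between Ncut and conductance, here is a small sketch (the helper name and example graph are ours) that evaluates both quantities for a vertex subset of a weighted graph:

```python
import numpy as np

def ncut_and_phi(W, S):
    """Ncut and conductance of the partition (S, S-bar) of affinity matrix W."""
    S = np.asarray(S, dtype=bool)
    cut = W[S][:, ~S].sum()                   # weight crossing the cut
    d = W.sum(axis=1)
    vol_S, vol_Sbar = d[S].sum(), d[~S].sum()
    ncut = cut / vol_S + cut / vol_Sbar
    phi = (vol_S + vol_Sbar) * cut / (vol_S * vol_Sbar)  # vol(G) cut / (vol(S) vol(S-bar))
    return ncut, phi                          # equal, by the identity above

W = np.array([[0, 2, 1], [2, 0, 1], [1, 1, 0]], dtype=float)
print(ncut_and_phi(W, [True, True, False]))   # both values coincide
```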
Figure 2. Top row: input image and the top 4 eigenvectors computed using the intervening contour cue [16]. Bottom row: biased normalized cuts for various seed sets $T$; the marked points in each image are the set of points in $T$.

Graphs and Laplacians. For a graph $G = (V, E)$ with edge weight function $w$, let $A_G \in \mathbb{R}^{V \times V}$ denote its adjacency matrix, with $A_G(i,j) = w(i,j)$; $D_G$ denotes the diagonal degree matrix of $G$, i.e., $D_G(i,i) = \sum_{j \in V} w(i,j)$ and $D_G(i,j) = 0$ for all $i \neq j$; $L_G := D_G - A_G$ denotes the (combinatorial) Laplacian of $G$; and $\mathcal{L}_G := D_G^{-1/2} L_G D_G^{-1/2}$ denotes the normalized Laplacian of $G$. We will assume $G$ is connected, in which case the eigenvalues of $\mathcal{L}_G$ are $0 = \lambda_1 < \lambda_2 \leq \cdots \leq \lambda_n$. We denote by $\lambda_2(G)$ this second eigenvalue of the normalized Laplacian of $G$. If $u_1, \ldots, u_n$ are the corresponding eigenvectors of $\mathcal{L}_G$, then we define $v_i := D_G^{-1/2} u_i$ and think of them as the associated eigenvectors of $L_G$; $L_G v_i = \lambda_i D_G v_i$. The spectral decomposition of $\mathcal{L}_G$ can be written as $\sum_{i=2}^{n} \lambda_i u_i u_i^T$, and we can define the Moore-Penrose pseudo-inverse of $\mathcal{L}_G$ as $\mathcal{L}_G^+ := \sum_{i=2}^{n} \frac{1}{\lambda_i} u_i u_i^T$.

The Spectral Relaxation to Computing Normalized Cuts. Spectral methods approximate the solution to normalized cuts by trying to find an $x \in \mathbb{R}^V$ which minimizes $x^T L_G x$ subject to $\sum_{i,j \in V} d_i d_j (x_i - x_j)^2 = \mathrm{vol}(G)$. To see why this is a relaxation, first, for a subset $S$ of vertices, let $\mathbf{1}_S$ be the indicator vector of $S$ in $\mathbb{R}^V$. Then $\mathbf{1}_S^T L_G \mathbf{1}_S = \sum_{ij \in E} w_{i,j} (\mathbf{1}_S(i) - \mathbf{1}_S(j))^2 = \mathrm{cut}(S, \bar S)$ and $\sum_{i,j \in V} d_i d_j (\mathbf{1}_S(i) - \mathbf{1}_S(j))^2 = \mathrm{vol}(S)\,\mathrm{vol}(\bar S)$. Hence, if we let $x = \sqrt{\frac{\mathrm{vol}(G)}{\mathrm{vol}(S) \cdot \mathrm{vol}(\bar S)}} \cdot \mathbf{1}_S$, then $x$ satisfies the above constraints and has objective value exactly $\phi(S)$. Hence, the minimum of the quadratic program above is at most the normalized cut value of $G$.

In fact, it is easy to see that the optimal value of this optimization problem, if $G$ is connected, is $\lambda_2(G)$, and the optimal vector is $D_G^{-1/2} u_2$, where $u_2$ is the eigenvector corresponding to the smallest non-zero eigenvalue $\lambda_2(G)$ of $\mathcal{L}_G$. In particular, the optimal vector $x^\star$ has the property that $\sum_{i \in V} x_i^\star d_i = 0$.

Biased Normalized Cuts. Now we move on to incorporating the prior information given to us about the image to define the notion of biased normalized cuts. Recall our problem: we are given a region of interest in the image and we would like to segment the image so that the segment is biased towards the specified region. A region is modeled as a subset $T \subseteq V$ of the vertices of the image. We are interested in cuts $(S, \bar S)$ which not only minimize the normalized cut value but, at the same time, have sufficient correlation with the region specified by $T$. To model this, we first associate a vector $s_T$ to the set $T$: $s_T(i) = \sqrt{\frac{\mathrm{vol}(T)\mathrm{vol}(\bar T)}{\mathrm{vol}(G)}} \cdot \frac{1}{\mathrm{vol}(T)}$ if $i \in T$, and $s_T(i) = -\sqrt{\frac{\mathrm{vol}(T)\mathrm{vol}(\bar T)}{\mathrm{vol}(G)}} \cdot \frac{1}{\mathrm{vol}(\bar T)}$ if $i \in \bar T$; or equivalently,

$$s_T = \sqrt{\frac{\mathrm{vol}(T)\,\mathrm{vol}(\bar T)}{\mathrm{vol}(G)}} \left( \frac{\mathbf{1}_T}{\mathrm{vol}(T)} - \frac{\mathbf{1}_{\bar T}}{\mathrm{vol}(\bar T)} \right).$$

We have defined it in a way such that $\sum_{i \in V} s_T(i)\, d_i = 0$ and $\sum_{i \in V} s_T(i)^2\, d_i = 1$.

This notion of biased normalized cuts is quite natural and is motivated by the theory of local graph partitioning, where the goal is to find a low-conductance cut well correlated with a specified input set. The correlation is specified by a parameter $\kappa \in (0, 1)$. This allows us to explore the possibility of image segments which are well-correlated with the prior information yet may have much smaller normalized cut value than $T$ itself and, hence, refine the initial guess. In particular, we consider the spectral relaxation in Figure 3 to a $\kappa$-biased normalized cut around $T$.

Note that $x = s_T$ is a feasible solution to this spectral relaxation. Also note that if $v_2$ satisfies the correlation constraint with $s_T$, then it will be the optimal solution to this program.
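A short numerical sketch (the random graph is our own illustration) verifying the two normalization identities of $s_T$:

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.random((8, 8)); W = (W + W.T) / 2; np.fill_diagonal(W, 0.0)
d = W.sum(axis=1); volG = d.sum()

T = np.zeros(8, dtype=bool); T[:3] = True        # an arbitrary region T
volT, volTbar = d[T].sum(), d[~T].sum()
sT = np.sqrt(volT * volTbar / volG) * np.where(T, 1 / volT, -1 / volTbar)

print(np.isclose((sT * d).sum(), 0.0))           # True: sum_i s_T(i) d_i = 0
print(np.isclose((sT ** 2 * d).sum(), 1.0))      # True: sum_i s_T(i)^2 d_i = 1
```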
$$\mathrm{BiasedNcut}(G, T, \kappa):\quad \begin{aligned} &\text{minimize } && x^T L_G x \\ &\text{s.t. } && \sum_{i,j \in V} d_i d_j (x_i - x_j)^2 = \mathrm{vol}(G),\\ &&& \Big( \sum_{i \in V} x_i\, s_T(i)\, d_i \Big)^2 \geq \kappa \end{aligned}$$

Figure 3. BiasedNcut(G, T, κ): spectral relaxation to compute κ-biased normalized cuts around T.

Algorithm 1 Biased Normalized Cuts (G, w, s_T, γ)
Require: graph $G = (V, E)$, edge weight function $w$, seed $s_T$ and a correlation parameter $\gamma \in (-\infty, \lambda_2(G))$
1: $A_G(i,j) \leftarrow w(i,j)$, $D_G(i,i) \leftarrow \sum_j w(i,j)$
2: $L_G \leftarrow D_G - A_G$, $\mathcal{L}_G \leftarrow D_G^{-1/2} L_G D_G^{-1/2}$
3: Compute $u_1, u_2, \ldots, u_K$, the eigenvectors of $\mathcal{L}_G$ corresponding to the $K$ smallest eigenvalues $\lambda_1, \lambda_2, \ldots, \lambda_K$
4: $w_i \leftarrow \frac{u_i^T D_G s_T}{\lambda_i - \gamma}$, for $i = 2, \ldots, K$
5: Obtain the biased normalized cut, $x^\ast \propto \sum_{i=2}^{K} w_i u_i$

What is quite interesting is that one can characterize the optimal solution to this spectral program under mild conditions on the graph $G$ and the set $T$; as it turns out, very little extra effort is needed to compute the optimal solution if one has already computed the spectrum of $\mathcal{L}_G$. This is captured by the following theorem, which is due to [15]. We include a proof in the appendix for completeness.

Theorem 3.1. Let $G$ be a connected graph and $T$ be such that $\sum_{i \in V} s_T(i)\, v_2(i)\, d_i \neq 0$. Further, let $1 \geq \kappa \geq 0$ be a correlation parameter. Then there is an optimal solution, $x^\star$, to the spectral relaxation to the $\kappa$-biased normalized cuts around $T$ such that $x^\star = c \sum_{i=2}^{n} \frac{1}{\lambda_i - \gamma} u_i u_i^T D_G s_T$ for some $\gamma \in (-\infty, \lambda_2(G))$ and a constant $c$.

4. Algorithm

Theorem 3.1 shows that the final solution is a weighted combination of the eigenvectors, where the weight of each eigenvector is proportional to its "correlation" with the seed, $u_i^T D_G s_T$, and inversely proportional to $\lambda_i - \gamma$. Intuitively, the eigenvectors that are well correlated with the seed vector are upweighted, while those that are inversely correlated have their sign flipped.

Often for images $\lambda_i - \gamma$ grows quickly with $i$, and one can obtain a good approximation by considering only the eigenvectors for the $K$ smallest eigenvalues. In our experiments with natural images we set $K = 26$, i.e. we use the top 25 eigenvectors, ignoring the all-ones vector. We also set the $\gamma$ parameter, which implicitly controls the amount of correlation, to $\gamma = -\tau \times \lambda_{avg}$, where $\tau = 1$ and $\lambda_{avg}$ is the average of the top $K$ eigenvalues. This could also be a user-defined parameter in an interactive setting. Algorithm 1 describes our method for computing biased normalized cuts. Steps 1, 2, 3 are also the steps for computing segmentations using normalized cuts; they involve computing the $K$ smallest eigenvectors of the normalized graph Laplacian $\mathcal{L}_G$. The biased normalized cut for any seed vector $s_T$ is the weighted combination of eigenvectors whose weights are computed in step 4. In the interactive setting only steps 4 and 5 need to be redone when the seed vector changes, which is very quick.

The time taken by the algorithm is dominated by the time taken to compute the eigenvectors. In an interactive setting one can use special-purpose hardware-accelerated methods to compute the eigenvectors of typical images in a fraction of a second [8]. Our method can be faster than the min-cut/max-flow based approaches in an interactive setting as these eigenvectors need to be computed just once. In addition, a real-valued solution like the one shown in Figure 2 might provide the user better guidance than a hard segmentation produced by a min-cut algorithm.

5. Constrained Normalized Cuts

We compare our approach to the constrained normalized cut formulations of Yu and Shi [21] and Eriksson et al. [11]. The latter generalized the linear constraints to $U^T P x = b$, where $b$ can be an arbitrary non-zero vector. We consider a toy example similar to the one used by Yu and Shi. The authors observe that when the set of constraints is small it is better to use "conditioned constraints", $U^T P x = 0$, where $P = D^{-1} W$, instead of the original constraints, $U^T x = 0$. The conditioned constraints propagate the constraints to the neighbors of the constrained points, avoiding solutions that are too close to the unconstrained one.

To illustrate our approach, consider the points $p_1, p_2, \ldots, p_n$ grouped into three sets $S_1$, $S_2$ and $S_3$ from left to right, as shown in Figure 4. We construct a graph with edge weights between points $p_i$ and $p_j$

$$w_{ij} = \exp\!\left( -\frac{d(p_i, p_j)^2}{2\sigma^2} \right) \qquad (1)$$

with $\sigma = 3$, where $d(x, y)$ is the Euclidean distance between the points $x$ and $y$. The unconstrained solution to normalized cuts correctly groups the points, as shown in Figure 4.

Now suppose we want to group $S_1$ and $S_3$ together. This can be done by adding a constraint that the circled points belong to the same group, which can be encoded as a linear constraint of the form $x_i - x_j = 0$, where $i$ and $j$ are the indices of the constrained points. The constrained cut formulation is able to correctly separate $S_1$ and $S_3$ from $S_2$, as shown in Figure 5(a). For our approach we construct the vector $s_T$ with $i, j \in T$ as defined earlier and use the top 16 generalized eigenvectors. The biased cut solution separates $S_1$ and $S_3$ from $S_2$ as well.
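The toy experiment is easy to reproduce. The sketch below (cluster layout, sizes, and seed indices are our assumptions; the text only fixes σ = 3 and K = 16) builds the affinities of Eq. (1), runs Algorithm 1 with one seed point each from S1 and S3, and thresholds the result:

```python
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(1)
pts = np.vstack([rng.normal(c, 1.0, size=(30, 2))
                 for c in ([0, 0], [10, 0], [20, 0])])   # S1, S2, S3
n = len(pts)

# Eq. (1): w_ij = exp(-d(p_i, p_j)^2 / (2 sigma^2)) with sigma = 3
sq = ((pts[:, None, :] - pts[None, :, :]) ** 2).sum(-1)
W = np.exp(-sq / (2 * 3.0 ** 2)); np.fill_diagonal(W, 0.0)

d = W.sum(axis=1); volG = d.sum()
isd = 1.0 / np.sqrt(d)
L_norm = np.eye(n) - (W * isd[:, None]) * isd[None, :]
lam, U = eigh(L_norm, subset_by_index=[0, 15])    # 16 smallest, as in the text

seed = np.zeros(n, dtype=bool); seed[[0, 60]] = True  # one point in S1 and S3
volT = d[seed].sum(); volTbar = volG - volT
sT = np.sqrt(volT * volTbar / volG) * np.where(seed, 1 / volT, -1 / volTbar)

gamma = -lam[1:].mean()                  # gamma = -tau * lambda_avg, tau = 1
w = (U.T @ (d * sT)) / (lam - gamma); w[0] = 0.0
x = U @ w                                # biased normalized cut vector
side = x > 0                             # threshold at zero
print("fraction on the seed side per cluster:",
      side[:30].mean(), side[30:60].mean(), side[60:].mean())
```

Making γ more negative flattens the weights 1/(λ_i − γ), pulling the solution toward the seed vector itself, which is the "tighter cuts for smaller γ" behavior discussed in Section 6.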
Figure 4. Input points (left) clustered using normalized cuts (right).

Next we increase the number of such constraints by randomly sampling points from $S_1 \cup S_3$ and adding constraints that they belong together. Given a set of $n$ points, we generate $n - 1$ constraints by ordering the points and adding a constraint for each consecutive pair, similar to the approach of [21]. Instead of improving the solution, the solution of the constrained cuts deteriorates when the number of constraints is large, as the solution that separates $S_1$ and $S_3$ from $S_2$ is no longer feasible. Our method, on the other hand, gets better with more constraints, as seen in Figure 5(b).

We also test the robustness of the algorithm to outliers by adding some points from $S_2$ into the constraints. The constrained cut solution deteriorates even when there is a single outlier, as seen in Figure 5(c). Our method, on the other hand, remains fairly robust even when there are two outliers, Figure 5(d).

6. Qualitative Evaluation

We present qualitative results on images taken from the PASCAL VOC 2010 dataset [12]. For all our experiments we use the intervening contour cue [16] for computing the weight matrix and 25 eigenvectors to compute the biased normalized cut. We refrain from doing quantitative comparisons on segmentation benchmarks as the goal of this paper is to introduce a new algorithm for computing biased normalized cuts, which is only one of the components of a good segmentation engine.

Effect of γ. The correlation parameter $\gamma$, when set to smaller values, increases the correlation $\kappa$ with the seed vector. Figure 6 shows the effect of $\gamma$ on the result of the biased normalized cut: one can obtain tighter cuts around the seed by setting $\gamma$ to smaller values. In an interactive setup this could be adjusted by the user.

Bias from user interaction. We first test the algorithm in the interactive setting by automatically sampling a set of points inside the figure mask. Figure 7 shows several example outputs. All these images are taken from the animal categories of the PASCAL VOC 2010 detection dataset.

Bias from object detectors. Although in our formulation the seed vector $s_T$ was discrete, this is not necessary. In particular, we can use the probability estimates produced by an object detector as a bias. Figure 8 shows some examples of a top-down segmentation mask generated by the detector of [4], used as a seed vector directly after normalization.

Figure 8. Biased normalized cuts using top-down priors from an object detector.

7. Conclusion

We present a simple and effective method for computing biased normalized cuts of images to incorporate top-down priors or user input. The formulation is attractive as it allows one to incorporate these priors without additional overhead. The linear combination of eigenvectors naturally "regularizes" the combinatorial space of segmentations. Code for computing biased normalized cuts interactively on images can be downloaded at the authors' website.

Acknowledgements: Thanks to Pablo Arbelaez. Subhransu Maji is supported by a fellowship from Google Inc. and ONR MURI N00014-06-1-0734. The work was done when the authors were at Microsoft Research India.

A. Appendix

Proof. [of Theorem 3.1] To start off, note that BiasedNcut(G, T, κ) is a non-convex program. One can relax it to $\mathrm{SDP}_p(G, T, \kappa)$ of Figure 9; the dual of this SDP, $\mathrm{SDP}_d(G, T, \kappa)$, appears in the same figure. Here, for matrices $A, B \in \mathbb{R}^{n \times n}$, $A \circ B := \sum_{i,j} A_{i,j} B_{i,j}$, and $L_n := D_G - \frac{1}{\mathrm{vol}(G)} D_G J D_G$, where $J$ is the all-ones matrix, $J_{i,j} = 1$ for all $i, j$.
Figure 5. Comparison of our biased normalized cut approach to the constrained cut approach of [21, 11] for various constraints, shown by red circled points: (a) one point each sampled from $S_1$ and $S_3$; (b) 10 points each sampled from $S_1$ and $S_3$; (c) 5 points each from $S_1$ and $S_3$ with 1 outlier from $S_2$; (d) 5 points each from $S_1$ and $S_3$ with 2 outliers from $S_2$. When the number of constraints is large (b) or there are outliers (c, d), our method produces better solutions than the solution of constrained cuts.

Figure 6. Effect of γ. Input image and biased cuts for decreasing values of $\gamma$ from left to right. The local cuts are more and more correlated with the seed set (shown as dots) as $\gamma$ decreases.

$$\mathrm{SDP}_p(G, T, \kappa):\quad \begin{aligned} &\text{minimize } && L_G \circ X \\ &\text{subject to } && L_n \circ X = 1,\\ &&& (D_G s_T)(D_G s_T)^T \circ X \geq \kappa,\\ &&& X \succeq 0 \end{aligned}$$

$$\mathrm{SDP}_d(G, T, \kappa):\quad \begin{aligned} &\text{maximize } && \alpha + \kappa\beta \\ &\text{subject to } && L_G \succeq \alpha L_n + \beta (D_G s_T)(D_G s_T)^T,\\ &&& \alpha \in \mathbb{R},\ \beta \geq 0 \end{aligned}$$

Figure 9. Top: $\mathrm{SDP}_p(G, T, \kappa)$, the primal SDP relaxation. Bottom: $\mathrm{SDP}_d(G, T, \kappa)$, the dual SDP.

Verifying Slater's condition, one can observe that strong duality holds for this SDP relaxation. Then, using strong duality and the complementary slackness conditions implied by it, we argue that $\mathrm{SDP}_p(G, T, \kappa)$ has a rank-one optimal solution under the conditions of the theorem. This implies that the optimal solution of $\mathrm{SDP}_p(G, T, \kappa)$ is the same as the optimal solution of BiasedNcut. Combining this with the complementary slackness conditions obtained from the dual $\mathrm{SDP}_d(G, T, \kappa)$, one can derive that the optimal rank-one solution has, up to a constant, the desired form promised by the theorem. We now expand the above steps in claims.

Claim A.1. The primal $\mathrm{SDP}_p(G, T, \kappa)$ is a relaxation of the vector program BiasedNcut(G, T, κ).

Proof. Consider a vector $x$ that is a feasible solution to BiasedNcut(G, T, κ), and note that $X = xx^T$ is a feasible solution to $\mathrm{SDP}_p(G, T, \kappa)$.

Claim A.2. Strong duality holds between $\mathrm{SDP}_p(G, T, \kappa)$ and $\mathrm{SDP}_d(G, T, \kappa)$.

Proof. Since $\mathrm{SDP}_p(G, T, \kappa)$ is convex, it suffices to verify that Slater's constraint qualification condition holds for this primal SDP. Consider $X = s_T s_T^T$. Then $(D_G s_T)(D_G s_T)^T \circ s_T s_T^T = (s_T^T D_G s_T)^2 = 1 > \kappa$.

Claim A.3. The feasibility and complementary slackness conditions for a primal-dual pair $X^\star, \alpha^\star, \beta^\star$ listed in Figure 10 are sufficient for them to be an optimal solution.

Proof. This follows from the convexity of $\mathrm{SDP}_p(G, T, \kappa)$ and Slater's condition [5].
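The primal SDP is small enough to check numerically. Below is a sketch of our own (using CVXPY; the tiny graph, the region T, and κ are illustrative assumptions, not from the paper) that solves SDP_p and inspects the rank of the optimizer, which Claim A.4 predicts to be one:

```python
import cvxpy as cp
import numpy as np

# Two triangles joined by a weak edge; T is the left triangle.
n = 6
W = np.zeros((n, n))
for i, j in [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5)]:
    W[i, j] = W[j, i] = 1.0
W[2, 3] = W[3, 2] = 0.1

d = W.sum(axis=1); D = np.diag(d); volG = d.sum()
L = D - W
Ln = D - (D @ np.ones((n, n)) @ D) / volG        # L_n = D_G - D_G J D_G / vol(G)

T = np.array([True, True, True, False, False, False])
volT, volTbar = d[T].sum(), d[~T].sum()
sT = np.sqrt(volT * volTbar / volG) * np.where(T, 1 / volT, -1 / volTbar)
ds = d * sT                                      # D_G s_T

kappa = 0.5
X = cp.Variable((n, n), PSD=True)
prob = cp.Problem(cp.Minimize(cp.trace(L @ X)),
                  [cp.trace(Ln @ X) == 1,
                   cp.trace(np.outer(ds, ds) @ X) >= kappa])
prob.solve()

# Project out the all-ones direction (degenerate for this SDP, cf. Fact A.6)
# before inspecting the spectrum of the optimizer.
P = np.eye(n) - np.ones((n, n)) / n
eigs = np.linalg.eigvalsh(P @ X.value @ P)
print("optimal value:", round(prob.value, 4))
print("spectrum of X* (expect one dominant eigenvalue):", np.round(eigs, 4))
```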
Figure 7. Example biased cuts. Images shown with the "probability of boundary" map (Pb) computed using [16] and the biased normalized cut for various seed sets (marked as red dots).
Feasibility conditions:

$$L_n \circ X^\star = 1$$
$$(D_G s_T)(D_G s_T)^T \circ X^\star \geq \kappa$$
$$L_G - \alpha^\star L_n - \beta^\star (D_G s_T)(D_G s_T)^T \succeq 0$$
$$\beta^\star \geq 0$$

Complementary slackness conditions:

$$\alpha^\star (L_n \circ X^\star - 1) = 0$$
$$\beta^\star \big( (D_G s_T)(D_G s_T)^T \circ X^\star - \kappa \big) = 0$$
$$X^\star \circ \big( L_G - \alpha^\star L_n - \beta^\star (D_G s_T)(D_G s_T)^T \big) = 0$$

Figure 10. Top: feasibility conditions. Bottom: complementary slackness conditions.

Claim A.4. These feasibility and complementary slackness conditions, coupled with the assumptions of the theorem, imply that $X^\star$ must be rank 1 and $\beta^\star > 0$.

Now we complete the proof of the theorem. From Claim A.4 it follows that $X^\star = x^\star x^{\star T}$, where $x^\star$ satisfies the equation $(L_G - \alpha^\star L_n - \beta^\star (D_G s_T)(D_G s_T)^T)\, x^\star = 0$. From the second complementary slackness condition in Figure 10, and the fact that $\beta^\star > 0$, we obtain that $\sum_i x_i^\star s_T(i)\, d_i = \pm\sqrt{\kappa}$. Thus, $x^\star = \pm \beta^\star \sqrt{\kappa}\, (L_G - \alpha^\star L_n)^+ D_G s_T$. This proves the theorem.

A.1. Proof of Claim A.4

Proof. We start by stating two facts; the second is trivial.

Fact A.5. $\alpha^\star \leq \lambda_2(G)$. Moreover, if $\lambda_2 = \alpha^\star$ then $\sum_i v_2(i)\, s_T(i)\, d_i = 0$.

Proof. Recall that $v_2 = D_G^{-1/2} u_2$, where $u_2$ is the unit-length eigenvector corresponding to $\lambda_2(G)$ of $\mathcal{L}_G$. Plugging $v_2$ into the third feasibility condition (and using $v_2^T L_n v_2 = 1$, which holds since $\sum_i v_2(i)\, d_i = 0$), we obtain $v_2^T L_G v_2 - \alpha^\star - \beta^\star \big( \sum_i v_2(i)\, s_T(i)\, d_i \big)^2 \geq 0$. But $v_2^T L_G v_2 = \lambda_2(G)$ and $\beta^\star \geq 0$. Hence $\lambda_2(G) \geq \alpha^\star$. It follows that if $\lambda_2 = \alpha^\star$ then $\sum_i v_2(i)\, s_T(i)\, d_i = 0$.

Fact A.6. We may assume that the optimal $X^\star$ satisfies $\mathbf{1}^T D_G^{1/2} X^\star D_G^{1/2} \mathbf{1} = 0$, where $\mathbf{1}$ is the all-ones vector.

Now we return to the proof of Claim A.4. If we assume $\sum_i v_2(i)\, s_T(i)\, d_i \neq 0$, then we know that $\alpha^\star < \lambda_2(G)$ from Fact A.5. Note that since $G$ is connected and $\alpha^\star < \lambda_2(G)$, $L_G - \alpha^\star L_n$ has rank exactly $n - 1$. From the last complementary slackness condition we can deduce that the image of $X^\star$ is in the kernel of $L_G - \alpha^\star L_n - \beta^\star (D_G s_T)(D_G s_T)^T$. But $\beta^\star (D_G s_T)(D_G s_T)^T$ is a rank-one matrix and, since $\sum_i s_T(i)\, d_i = 0$, it reduces the rank of $L_G - \alpha^\star L_n$ by one precisely when $\beta^\star > 0$. If $\beta^\star = 0$ then $X^\star$ must be 0, which is not possible if $\mathrm{SDP}_p(G, T, \kappa)$ is feasible. Hence the rank of $L_G - \alpha^\star L_n - \beta^\star (D_G s_T)(D_G s_T)^T$ must be exactly $n - 2$, and since $X^\star$ cannot have any component along the all-ones vector, $X^\star$ must be rank one. This proves the claim.

References

[1] R. Andersen, F. Chung, and K. Lang. Local graph partitioning using PageRank vectors. In FOCS, pages 475–486, 2006.
[2] R. Andersen and K. Lang. An algorithm for improving graph partitions. In SODA, pages 651–660, 2008.
[3] A. Blake, C. Rother, M. Brown, P. Perez, and P. Torr. Interactive image segmentation using an adaptive GMMRF model. In ECCV, pages 428–441, 2004.
[4] L. Bourdev, S. Maji, T. Brox, and J. Malik. Detecting people using mutually consistent poselet activations. In ECCV, Sep. 2010.
[5] S. Boyd and L. Vandenberghe. Convex Optimization. Cambridge University Press, Cambridge, UK, 2004.
[6] Y. Boykov and V. Kolmogorov. An experimental comparison of min-cut/max-flow algorithms for energy minimization in vision. TPAMI, 26:1124–1137, September 2004.
[7] Y. Y. Boykov and M. P. Jolly. Interactive graph cuts for optimal boundary & region segmentation of objects in N-D images. In ICCV, 2001.
[8] B. Catanzaro, B.-Y. Su, N. Sundaram, Y. Lee, M. Murphy, and K. Keutzer. Efficient, high-quality image contour detection. In ICCV, pages 2381–2388, 2009.
[9] F. Chung. Spectral Graph Theory. American Mathematical Society, 1997.
[10] W. E. Donath and A. J. Hoffman. Lower bounds for the partitioning of graphs. IBM J. Res. Dev., 17:420–425, September 1973.
[11] A. Eriksson, C. Olsson, and F. Kahl. Normalized cuts revisited: A reformulation for segmentation with linear grouping constraints. In ICCV, Oct. 2007.
[12] M. Everingham, L. Van Gool, C. K. I. Williams, J. Winn, and A. Zisserman. The PASCAL visual object classes (VOC) challenge. IJCV, 88(2):303–338, June 2010.
[13] M. Fiedler. Algebraic connectivity of graphs. Czechoslovak Mathematical Journal, 23(98):298–305, 1973.
[14] G. Gallo, M. D. Grigoriadis, and R. E. Tarjan. A fast parametric maximum flow algorithm and applications. SIAM J. Comput., 18(1):30–55, 1989.
[15] M. W. Mahoney, L. Orecchia, and N. K. Vishnoi. A spectral algorithm for improving graph partitions. CoRR, abs/0912.0681, 2009.
[16] M. Maire, P. Arbelaez, C. Fowlkes, and J. Malik. Using contours to detect and localize junctions in natural images. In CVPR, pages 1–8, 2008.
[17] C. Rother, V. Kolmogorov, and A. Blake. GrabCut: Interactive foreground extraction using iterated graph cuts. ACM Transactions on Graphics, 23:309–314, 2004.
[18] J. Shi and J. Malik. Normalized cuts and image segmentation. In CVPR, Jun. 1997.
[19] J. Shi and J. Malik. Normalized cuts and image segmentation. TPAMI, 22(8):888–905, Aug. 2000.
[20] D. Spielman and S.-H. Teng. Nearly-linear time algorithms for graph partitioning, graph sparsification, and solving linear systems. In STOC, pages 81–90, 2004.
[21] S. X. Yu and J. Shi. Grouping with bias. In NIPS. MIT Press, 2001.