We study the problem of estimating the volume of convex polytopes, focusing on H- and V-polytopes as well as zonotopes. Although much effort has been devoted to practical algorithms for H-polytopes, no such method exists for the latter two representations. We propose a new, practical algorithm for all representations that is faster than existing methods. It relies on Hit-and-Run sampling and combines a new simulated annealing method with the Multiphase Monte Carlo (MMC) approach. Our method introduces the following key features to make it adaptive: (a) it defines the sequence of convex bodies in MMC through a new annealing schedule, which with high probability is shorter than in previous methods and removes the need to compute an enclosing and an inscribed ball; (b) it exploits statistical properties of rejection sampling and proposes a better empirical convergence criterion for each step; (c) for zonotopes, it may use a sequence of MMC bodies other than balls, where the chosen body adapts to the input. We offer an open-source, optimized C++ implementation and analyze its performance, showing that it outperforms state-of-the-art software for H-polytopes by Cousins-Vempala (2016) and Emiris-Fisikopoulos (2018), while undertaking volume computations that were intractable until now: it is the first polynomial-time, practical method for V-polytopes and zonotopes that scales to high dimensions (currently 100). We further focus on zonotopes and characterize them by their order (number of generators over dimension), since this largely determines sampling complexity. We also analyze a related application, evaluating methods of zonotope approximation in engineering.
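The Hit-and-Run primitive underlying the sampler can be sketched in a few lines; this is an illustrative toy for an H-polytope {y : Ay ≤ b}, not the paper's optimized C++ implementation, and the cube and iteration count below are made up:

```python
import numpy as np

def hit_and_run_step(x, A, b, rng):
    """One Hit-and-Run step inside the H-polytope {y : A y <= b}.
    Pick a uniform random direction, intersect the line through x with
    every facet to get the chord, and return a uniform point on it."""
    d = rng.standard_normal(A.shape[1])
    d /= np.linalg.norm(d)
    Ad = A @ d
    slack = b - A @ x            # nonnegative while x is inside
    # a_i . (x + t d) <= b_i  gives  t <= slack_i/Ad_i (Ad_i > 0)
    # and                          t >= slack_i/Ad_i (Ad_i < 0)
    upper = np.min(np.where(Ad > 1e-12, slack / Ad, np.inf))
    lower = np.max(np.where(Ad < -1e-12, slack / Ad, -np.inf))
    return x + rng.uniform(lower, upper) * d

# Toy run: the unit cube [0,1]^3 written as x <= 1, -x <= 0.
rng = np.random.default_rng(0)
A = np.vstack([np.eye(3), -np.eye(3)])
b = np.concatenate([np.ones(3), np.zeros(3)])
x = np.full(3, 0.5)
for _ in range(1000):
    x = hit_and_run_step(x, A, b, rng)
```

Every iterate stays inside the polytope, and under mild conditions the chain converges to the uniform distribution on it.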
This document summarizes two algorithms for computing properties of high-dimensional polytopes given access to certain oracle functions:
1. An algorithm for computing the edge-skeleton of a polytope in oracle polynomial-time using an oracle that returns the vertex maximizing a linear function.
2. A randomized algorithm for approximating the volume of a polytope by generating random points within it using a hit-and-run process, and estimating the volume from these points. The algorithm runs in oracle polynomial-time and provides an approximation with high probability.
Experimental results show the volume algorithm can approximate volumes of polytopes up to 100 dimensions within 1% error in under 2 hours, outperforming exact computation methods.
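The Multiphase Monte Carlo idea behind such estimators telescopes the volume through a sequence of concentric balls. The following is a toy version under stated simplifications: exact ball sampling with rejection stands in for hit-and-run, and the 2^(1/d) radius schedule, sample counts, and the cube test body are all illustrative choices, not the algorithm's actual parameters:

```python
import numpy as np
from math import pi, gamma

def ball_sample(d, r, n, rng):
    """Uniform samples in the d-ball of radius r (random direction
    times a radius drawn from the radial CDF)."""
    x = rng.standard_normal((n, d))
    x /= np.linalg.norm(x, axis=1, keepdims=True)
    return r * (rng.random(n) ** (1.0 / d))[:, None] * x

def mmc_volume(inside, d, r_min, r_max, n=40000, seed=0):
    """Multiphase Monte Carlo:
    vol(P) = vol(B_min) * prod_i vol(P ∩ B_{i+1}) / vol(P ∩ B_i),
    with B_min inscribed in P, B_max enclosing P, and radii growing
    by a factor 2^(1/d) so each ratio stays bounded."""
    rng = np.random.default_rng(seed)
    radii = [r_min]
    while radii[-1] < r_max:
        radii.append(min(radii[-1] * 2 ** (1.0 / d), r_max))
    vol = pi ** (d / 2) / gamma(d / 2 + 1) * r_min ** d   # vol(B_min)
    for r_lo, r_hi in zip(radii, radii[1:]):
        pts = ball_sample(d, r_hi, n, rng)
        in_p = pts[inside(pts)]                     # uniform in P ∩ B_hi
        n_lo = (np.linalg.norm(in_p, axis=1) <= r_lo).sum()
        vol *= len(in_p) / n_lo                     # estimated ratio
    return vol

# Cube [-1,1]^3: inscribed radius 1, enclosing radius sqrt(3); volume 8.
cube = lambda p: np.all(np.abs(p) <= 1.0, axis=1)
est = mmc_volume(cube, d=3, r_min=1.0, r_max=3 ** 0.5)
```

The product telescopes to vol(P ∩ B_max) = vol(P), so the estimate should land close to the true value 8; in high dimension the rejection step becomes hopeless, which is exactly why geometric random walks replace it.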
The document discusses harmonic maps from the Riemann surface M = S^1 × R or CP^1 \ {0, 1, ∞} into the complex projective space CP^n. It presents the DPW method for constructing harmonic maps using loop groups. Specifically, it constructs equivariant harmonic maps into CP^n from degree-one potentials in the loop algebra Λg_σ, relating these to whether the maps are isotropic, weakly conformal, or non-conformal. It then considers the system of ODEs and the scalar ODE that must be solved to generate the harmonic maps using this method.
Slides: The Centroids of Symmetrized Bregman Divergences, by Frank Nielsen
This document discusses centroids and barycenters in the context of Bregman divergences. It begins by reviewing centroids and barycenters in Euclidean geometry, then introduces Bregman divergences as a generalization. Bregman divergences are defined based on a strictly convex generator function. Right-sided, left-sided, and symmetrized centroids/barycenters are defined for Bregman divergences based on minimizing average divergence. The right-sided centroid is shown to be the center of mass, while the left-sided centroid depends on the gradient of the generator function. The symmetrized centroid is characterized as the point minimizing the sum of divergences to the left and right centroids.
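As an illustration of these definitions (not the slides' own code), take the generator F(x) = Σ x_i log x_i, which yields the extended Kullback-Leibler divergence; the point set and the comparison point are made up:

```python
import numpy as np

# Bregman divergence B_F(p, q) = F(p) - F(q) - <∇F(q), p - q>.
# For F(x) = Σ x_i log x_i this is the (extended) KL divergence.
def bregman_kl(p, q):
    return float(np.sum(p * np.log(p / q) - p + q))

pts = np.array([[0.2, 0.8], [0.5, 0.5], [0.7, 0.3]])

# Right-sided centroid: argmin_c Σ_i B_F(p_i, c).
# It is the center of mass regardless of the generator F.
right = pts.mean(axis=0)

# Left-sided centroid: argmin_c Σ_i B_F(c, p_i) = ∇F^{-1}(mean ∇F(p_i));
# for the KL generator this is the coordinate-wise geometric mean.
left = np.exp(np.log(pts).mean(axis=0))
```

Perturbing either centroid away from its closed form strictly increases the corresponding average divergence, which is a quick sanity check on the two characterizations.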
This document presents a method for finding the best polynomial approximation of degree one for a function over a given interval using the method of least parallelograms. It introduces the problem, outlines the main results which include theorems on the existence and properties of the least parallelogram and its relationship to the minimax polynomial. Examples are also provided to illustrate the method.
The dual geometry of Shannon information, by Frank Nielsen
The document discusses the dual geometry of Shannon information. It covers:
1. Shannon entropy and related concepts like maximum entropy principle and exponential families.
2. The properties of Kullback-Leibler divergence including its interpretation as a statistical distance and relation to maximum entropy.
3. How maximum likelihood estimation for exponential families can be viewed as minimizing Kullback-Leibler divergence between the empirical distribution and model distribution.
Classification with mixtures of curved Mahalanobis metrics, by Frank Nielsen
This document discusses curved Mahalanobis distances in Cayley-Klein geometries and their application to classification. Specifically:
1. It introduces Mahalanobis distances and generalizes them to curved distances in Cayley-Klein geometries, which can model both elliptic and hyperbolic geometries.
2. It describes how to learn these curved Mahalanobis metrics using an adaptation of Large Margin Nearest Neighbors (LMNN) to the elliptic and hyperbolic cases.
3. Experimental results on several datasets show that curved Mahalanobis distances can achieve comparable or better classification accuracy than standard Mahalanobis distances.
The Mean Value Theorem is the most important theorem in calculus. It is the first theorem which allows us to infer information about a function from information about its derivative. From the MVT we can derive tests for the monotonicity (increase or decrease) and concavity of a function.
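Since the summary leans on the theorem's role in deriving monotonicity tests, its statement and the one-line derivation of the increasing test are worth recording (standard material, stated for reference):

```latex
\textbf{Mean Value Theorem.} If $f$ is continuous on $[a,b]$ and
differentiable on $(a,b)$, then there exists $c \in (a,b)$ with
\[
  f'(c) = \frac{f(b) - f(a)}{b - a}.
\]
% Monotonicity test: if f' > 0 on (a,b), then for any x_1 < x_2 in [a,b],
%   f(x_2) - f(x_1) = f'(c)\,(x_2 - x_1) > 0,
% so f is increasing; applying the same argument to f' gives the concavity test.
```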
This document contains past paper questions from the IITJEE mathematics exam in 2010. It includes 4 sections with different types of questions - multiple choice, integer answer, paragraph and matrix matching. The questions cover topics in mathematics like planes, functions, trigonometry, complex numbers and matrices. Some questions provide contexts involving triangles, signals, parallelograms and other geometric shapes.
This document discusses different geometric structures and distances that can be used for clustering probability distributions that live on the probability simplex. It reviews four main geometries: Fisher-Rao Riemannian geometry based on the Fisher information metric, information geometry based on Kullback-Leibler divergence, total variation distance and l1-norm geometry, and Hilbert projective geometry based on the Hilbert metric. It compares how k-means clustering performs using distances derived from these different geometries on the probability simplex.
The document introduces the concept of generalized quasi-nonexpansive (GQN) maps. Some key results are:
1) GQN maps generalize quasi-nonexpansive maps but the fixed point set may not always be closed or convex.
2) If a subset satisfies certain conditions, it is a GQN-retract of the space.
3) Under these conditions, the class of GQN-retracts is closed under intersection and the common fixed point set of an increasing sequence of GQN maps is a GQN-retract.
A Commutative Alternative to Fractional Calculus on k-Differentiable Functions, by Matt Parker
This document presents a method for creating a commutative operator that acts parallel to fractional calculus operators on continuous functions. It defines spaces Ck that contain images of continuous functions and combines these into a space Cdiff that contains a subset isomorphic to the space of continuous functions C(R). An operator Dk is defined on Cdiff that commutes with itself and acts equivalently to fractional derivatives on C(R) up to the differentiability of the function. This provides a commutative alternative to fractional calculus on continuous functions.
This document discusses canonical-Laplace transforms and various testing function spaces. It begins by defining the canonical-Laplace transform and constructs testing function spaces using the Gelfand-Shilov technique, including CLa,b,γ, CLab,β, CLγa,b,β, CLa,b,β,n, and CLγa,,m,β,n. It then presents results on countable unions of s-type spaces, proving that various spaces can be expressed as countable unions and discussing their topological properties. It concludes that the canonical-Laplace transform generalizes in a distributional sense, and discusses the topological structure of these spaces.
Bregman divergences from comparative convexity, by Frank Nielsen
This document discusses generalized divergences and comparative convexity. It introduces Jensen divergences, Bregman divergences, and their generalizations to quasi-arithmetic and weighted means. Quasi-arithmetic Bregman divergences are defined for strictly (ρ,τ)-convex functions using two strictly monotone functions ρ and τ. Power mean Bregman divergences are obtained as a subfamily when ρ(x) = x^δ1 and τ(x) = x^δ2. A criterion is given to check (ρ,τ)-convexity by testing the ordinary convexity of the transformed function G = F_{ρ,τ}.
This document presents algorithms for maintaining a topological ordering of vertices in a directed graph as edges are incrementally added. It begins by describing a simple "limited search" approach that takes O(m^2) time over all edge additions, where m is the number of edges. It then introduces several improved algorithms. A "two-way limited search" performs searches in both directions and takes O(m^{3/2} log n) time. A "semi-ordered search" relaxes constraints on the search order while still taking O(m^{3/2}) time. For dense graphs with m = Ω(n^2), a "topological search" approach balances vertices instead of edges and reorders vertices to maintain the ordering.
Clustering in Hilbert geometry for machine learning, by Frank Nielsen
- The document discusses different geometric approaches for clustering multinomial distributions, including total variation distance, Fisher-Rao distance, Kullback-Leibler divergence, and Hilbert cross-ratio metric.
- It benchmarks k-means clustering using these four geometries on the probability simplex, finding that Hilbert geometry clustering yields good performance with theoretical guarantees.
- The Hilbert cross-ratio metric defines a non-Riemannian Hilbert geometry on the simplex with polytopal balls, and satisfies information monotonicity properties desirable for clustering distributions.
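Part of what makes this geometry practical is that on the open probability simplex the Hilbert metric has a closed form in terms of coordinate ratios, with no boundary-point computation needed. A small sketch (not the talk's code; the test distributions are made up):

```python
import numpy as np

def hilbert_dist(p, q):
    """Hilbert metric on the open probability simplex, in ratio form:
    d(p, q) = log( max_i p_i/q_i ) - log( min_i p_i/q_i )."""
    r = p / q
    return float(np.log(r.max() / r.min()))

p = np.array([0.2, 0.3, 0.5])
q = np.array([0.5, 0.3, 0.2])
d = hilbert_dist(p, q)   # ratios are [0.4, 1.0, 2.5], so d = log(2.5/0.4)
```

The formula makes symmetry and the identity of indiscernibles immediate, and each distance evaluation is a single linear pass over the coordinates.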
Polyhedral computations in computational algebraic geometry and optimization, by Vissarion Fisikopoulos
The document summarizes a talk on polyhedral computations in computational algebraic geometry and optimization. It discusses algorithms for enumerating vertices of resultant polytopes and 2-level polytopes. Applications include support computation for implicit equations and computing resultants and discriminants. Open problems include finding the maximum number of faces of 4-dimensional resultant polytopes and explaining symmetries in their maximal f-vectors.
This document discusses canonical forms for representing Boolean functions. It defines sum of products (SOP) and product of sums (POS) forms, which are standard representations. Minterms and maxterms are also defined as product and sum terms involving all variables. The canonical SOP form is defined as the logical sum of minterms where the function value is 1. Canonical POS form is the logical product of maxterms where the function value is 0. Procedures to convert between SOP, POS and canonical forms are presented. Canonical forms provide a unique representation and can be used to determine equivalency between functions.
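The minterm construction described above is mechanical enough to script; a small sketch (function and variable names are illustrative):

```python
from itertools import product

def canonical_sop(f, names):
    """Canonical sum-of-products: OR together the minterms (product
    terms over ALL variables) at which f evaluates to 1; a primed
    literal marks a variable that is 0 in that row."""
    terms = []
    for bits in product([0, 1], repeat=len(names)):
        if f(*bits):
            terms.append("".join(n if b else n + "'" for n, b in zip(names, bits)))
    return " + ".join(terms)

# XOR has exactly two 1-rows, giving the familiar canonical form a'b + ab'.
expr = canonical_sop(lambda a, b: a ^ b, ["a", "b"])
```

The POS form is the dual construction: AND together the maxterms at the rows where f is 0.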
This document discusses finding the minimum spanning tree of a fuzzy weighted rough graph. Key points:
- A rough graph model is used where edges have fuzzy weights represented by triangular fuzzy numbers.
- The Boruvka algorithm is used to find the minimum spanning tree on this fuzzy weighted rough graph.
- Signed distance ranking is used to rank fuzzy numbers and determine the lightest edge weights for the algorithm.
- An example application of the algorithms for finding the lower and upper approximations of the rough graph and determining the minimum spanning tree is provided.
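The crisp core of the procedure can be sketched as follows; ordinary numeric weights stand in here for the signed-distance ranks of the triangular fuzzy numbers, and the graph, weights, and function names are all illustrative:

```python
def boruvka_mst(n, edges):
    """Boruvka's algorithm on a connected graph: each round, every
    component selects its lightest outgoing edge, then components merge.
    Assumes distinct weights (e.g. distinct signed-distance ranks)."""
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    mst = []
    while len(mst) < n - 1:
        best = {}  # component root -> lightest edge leaving it
        for w, u, v in edges:
            ru, rv = find(u), find(v)
            if ru != rv:
                for r in (ru, rv):
                    if r not in best or w < best[r][0]:
                        best[r] = (w, u, v)
        for w, u, v in set(best.values()):
            ru, rv = find(u), find(v)
            if ru != rv:  # re-check: an earlier merge may have joined them
                parent[ru] = rv
                mst.append((w, u, v))
    return mst

# Edges as (weight, u, v); weights play the role of defuzzified ranks.
tree = boruvka_mst(4, [(1, 0, 1), (2, 1, 2), (3, 2, 3), (4, 0, 3), (5, 0, 2)])
```

In the fuzzy setting, the only change is that the comparison `w < best[r][0]` uses the signed-distance ranking of the triangular fuzzy weights instead of numeric `<`.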
This document discusses logical effort and how it can be used to analyze multistage logic networks. It explains that logical effort can be generalized to account for branching in multistage paths. When branching is considered, the path effort is equal to the product of logical effort, electrical effort, and branching effort. Logical effort analysis can also be used to determine the optimal size of each stage and number of stages to minimize delay in a multistage network.
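The product rule and the stage-sizing claim can be written out in standard logical-effort notation (N is the number of stages, g, b, p the per-stage logical effort, branching effort, and parasitic delay):

```latex
% Path effort as the product of its three components:
\[
  F = G \cdot B \cdot H, \qquad
  G = \prod_i g_i, \quad
  B = \prod_i b_i, \quad
  H = \frac{C_{\text{out}}}{C_{\text{in}}}.
\]
% Delay is minimized when every stage bears equal effort \hat{f} = F^{1/N},
% giving the minimum path delay
\[
  \hat{D} = N\,F^{1/N} + \sum_i p_i .
\]
```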
1. The junction tree algorithm is used to perform inference in graphical models. It involves constructing a junction tree from a graph, where cliques in the graph become nodes in the tree.
2. Message passing is then performed on the junction tree to calculate marginal and joint probabilities. Messages are passed from parent to child nodes involving updating beliefs based on the intersection of variable sets.
3. Common message passing algorithms include Shafer-Shenoy and Lauritzen-Spiegelhalter, which differ in how messages are computed and passed between nodes.
The document discusses using the Fast Fourier Transform (FFT) algorithm to multiply polynomials in faster than quadratic time. It explains that the FFT represents polynomials in a point-value representation using complex roots of unity, which allows multiplication to be performed pointwise in linear time. The FFT algorithm recursively decomposes the polynomial multiplication problem into smaller subproblems of half the size, using divide and conquer, to compute the discrete Fourier transform in O(n log n) time rather than the naive O(n^2) time. Interpolation can also be performed in similar time to convert back from the point-value representation to coefficients. Overall the FFT provides a faster algorithm for polynomial multiplication and convolution.
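The evaluate-multiply-interpolate pipeline described above fits in a few lines using a library FFT (a sketch; `poly_mul` and the sample polynomials are illustrative):

```python
import numpy as np

def poly_mul(a, b):
    """Multiply two integer coefficient vectors via the FFT: evaluate
    both at roots of unity, multiply pointwise, and interpolate back
    with the inverse FFT -- O(n log n) instead of the naive O(n^2)."""
    n = len(a) + len(b) - 1
    m = 1 << (n - 1).bit_length()   # pad to a power of two >= n
    fa, fb = np.fft.fft(a, m), np.fft.fft(b, m)
    return np.fft.ifft(fa * fb).real[:n].round().astype(int)

# (1 + 2x)(3 + 4x) = 3 + 10x + 8x^2
c = poly_mul([1, 2], [3, 4])
```

Padding to at least the length of the linear convolution is what prevents circular wrap-around; the rounding step recovers exact integer coefficients from the floating-point transform.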
This document contains a summary of 8 mathematics questions on the topic of sets. Each question contains 1-4 parts asking students to shade regions on Venn diagrams, list set elements, or calculate set properties like union and intersection. The document also provides the answers to each question in point form for easy reference.
Slide set presented for the Wireless Communication module at Jacobs University Bremen, Fall 2015.
Teacher: Dr. Stefano Severi, assistant: Andrei Stoica
This document discusses temporal graphs, which are graphs where nodes and edges are active for specific time instances. It provides examples of temporal graphs and compares them to non-temporal graphs. It then covers temporal graph traversal methods like depth-first search and breadth-first search, accounting for temporal constraints. Various path types in temporal graphs are defined, such as foremost paths, latest-departure paths, and fastest paths. Algorithms for finding these paths using a stream representation or graph transformation approach are outlined.
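For the foremost-path case, the stream-style computation reduces to one pass over the time-sorted edges; this sketch assumes unit traversal time per edge (an illustrative modeling choice, not fixed by the document):

```python
def foremost_arrival(n, edges, src, t0=0):
    """Earliest-arrival ("foremost") times from src in a temporal graph.
    Edges are (u, v, t): usable only at time t, with unit traversal time
    assumed. Scanning edges in nondecreasing t suffices, because any
    foremost journey uses edges with strictly increasing departure times."""
    arr = [float("inf")] * n
    arr[src] = t0
    for u, v, t in sorted(edges, key=lambda e: e[2]):
        if arr[u] <= t and t + 1 < arr[v]:   # can depart u at time t
            arr[v] = t + 1
    return arr

# The relayed journey 0->1 (t=1), 1->2 (t=2) beats the direct edge at t=5.
arr = foremost_arrival(3, [(0, 1, 1), (1, 2, 2), (0, 2, 5)], src=0)
```

Latest-departure and fastest paths need different scans (reverse time order, or a sweep over candidate start times), since a single forward relaxation does not capture them.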
This document discusses the use of approximate Bayesian computation (ABC) to perform Bayesian inference when the likelihood function is intractable. It presents François and Pudlo's (F&P) ABC method, which uses a summary statistic S(·), an importance proposal g(·), a kernel K(·), and a bandwidth h. The document outlines the approximations and errors involved in F&P's ABC and discusses calibrating and optimizing the choice of summary statistic and bandwidth. It notes that while the optimal summary statistic is the expectation E(θ|y), this is usually unavailable, so F&P suggest estimating it using simulated data from a pilot run.
SUSY Q-Balls and Boson Stars in Anti-de Sitter space-time - Cancun talk 2012, by Jurgen Riedel
This document presents an overview of Q-balls and boson stars. It begins with introductions to solitons, topological solitons like vortices and domain walls, and non-topological solitons like Q-balls. Q-balls are described as localized rotating solutions stabilized by a conserved Noether charge. Boson stars are discussed as gravitating solutions composed of scalar fields minimally coupled to gravity. Different models of boson stars are presented, including those with self-interactions and gauge couplings. Rotating versions of both Q-balls and boson stars are also summarized.
Practical volume estimation of polytopes by billiard trajectories and a new a..., by Apostolos Chalkis
A new randomized method to approximate the volume of a convex polytope based on simulated annealing for cooling convex bodies and MCMC sampling with geometric random walks
14th Athens Colloquium on Algorithms and Complexity (ACAC19), by Apostolos Chalkis
This document presents a new method for estimating the volume of convex polytopes, practical volume estimation via a new annealing schedule. It uses a Multiphase Monte Carlo approach with a sequence of concentric convex bodies to approximate the volume, and a new simulated annealing method constructs a sparser sequence of bodies. Billiard walk sampling is used for V-polytopes and zonotopes. The method scales to dimension 100 within an hour for random V-polytopes and zonotopes, outperforming previous methods, with theoretical complexity O*(d^3).
This document discusses different geometric structures and distances that can be used for clustering probability distributions that live on the probability simplex. It reviews four main geometries: Fisher-Rao Riemannian geometry based on the Fisher information metric, information geometry based on Kullback-Leibler divergence, total variation distance and l1-norm geometry, and Hilbert projective geometry based on the Hilbert metric. It compares how k-means clustering performs using distances derived from these different geometries on the probability simplex.
The document introduces the concept of generalized quasi-nonexpansive (GQN) maps. Some key results are:
1) GQN maps generalize quasi-nonexpansive maps but the fixed point set may not always be closed or convex.
2) If a subset satisfies certain conditions, it is a GQN-retract of the space.
3) Under these conditions, the class of GQN-retracts is closed under intersection and the common fixed point set of an increasing sequence of GQN maps is a GQN-retract.
A Commutative Alternative to Fractional Calculus on k-Differentiable FunctionsMatt Parker
This document presents a method for creating a commutative operator that acts parallel to fractional calculus operators on continuous functions. It defines spaces Ck that contain images of continuous functions and combines these into a space Cdiff that contains a subset isomorphic to the space of continuous functions C(R). An operator Dk is defined on Cdiff that commutes with itself and acts equivalently to fractional derivatives on C(R) up to the differentiability of the function. This provides a commutative alternative to fractional calculus on continuous functions.
This document discusses canonical-Laplace transforms and various testing function spaces. It begins by defining the canonical-Laplace transform and establishes some testing function spaces using Gelfand-Shilov technique, including CLa,b,γ, CLab,β, CLγa,b,β, CLa,b,β,n, and CLγa,,m,β,n. It then presents results on countable unions of s-type spaces, proving that various spaces can be expressed as countable unions and discussing topological properties. The document concludes by stating that canonical-Laplace transforms are generalized in a distributional sense and results on countable unions of s-type spaces are discussed, along with the topological structure
Bregman divergences from comparative convexityFrank Nielsen
This document discusses generalized divergences and comparative convexity. It introduces Jensen divergences, Bregman divergences, and their generalizations to quasi-arithmetic and weighted means. Quasi-arithmetic Bregman divergences are defined for strictly (ρ,τ)-convex functions using two strictly monotone functions ρ and τ. Power mean Bregman divergences are obtained as a subfamily when ρ(x)=xδ1 and τ(x)=xδ2. A criterion is given to check (ρ,τ)-convexity by testing the ordinary convexity of the transformed function G=Fρ,τ.
This document presents algorithms for maintaining a topological ordering of vertices in a directed graph as edges are incrementally added. It begins by describing a simple "limited search" approach that takes O(m^2) time over all edge additions, where m is the number of edges. It then introduces several improved algorithms. A "two-way limited search" performs searches in both directions and takes O(m^{3/2} log n) time. A "semi-ordered search" relaxes constraints on the search order while still taking O(m^{3/2}) time. For dense graphs with m=Ω(n^2), a "topological search" approach balances vertices instead of edges and reorders vertices
Clustering in Hilbert geometry for machine learningFrank Nielsen
- The document discusses different geometric approaches for clustering multinomial distributions, including total variation distance, Fisher-Rao distance, Kullback-Leibler divergence, and Hilbert cross-ratio metric.
- It benchmarks k-means clustering using these four geometries on the probability simplex, finding that Hilbert geometry clustering yields good performance with theoretical guarantees.
- The Hilbert cross-ratio metric defines a non-Riemannian Hilbert geometry on the simplex with polytopal balls, and satisfies information monotonicity properties desirable for clustering distributions.
Polyhedral computations in computational algebraic geometry and optimizationVissarion Fisikopoulos
The document summarizes a talk on polyhedral computations in computational algebraic geometry and optimization. It discusses algorithms for enumerating vertices of resultant polytopes and 2-level polytopes. Applications include support computation for implicit equations and computing resultants and discriminants. Open problems include finding the maximum number of faces of 4-dimensional resultant polytopes and explaining symmetries in their maximal f-vectors.
This document discusses canonical forms for representing Boolean functions. It defines sum of products (SOP) and product of sums (POS) forms, which are standard representations. Minterms and maxterms are also defined as product and sum terms involving all variables. The canonical SOP form is defined as the logical sum of minterms where the function value is 1. Canonical POS form is the logical product of maxterms where the function value is 0. Procedures to convert between SOP, POS and canonical forms are presented. Canonical forms provide a unique representation and can be used to determine equivalency between functions.
This document discusses finding the minimum spanning tree of a fuzzy weighted rough graph. Key points:
- A rough graph model is used where edges have fuzzy weights represented by triangular fuzzy numbers.
- The Boruvka algorithm is used to find the minimum spanning tree on this fuzzy weighted rough graph.
- Signed distance ranking is used to rank fuzzy numbers and determine the lightest edge weights for the algorithm.
- An example application of the algorithms for finding the lower and upper approximations of the rough graph and determining the minimum spanning tree is provided.
This document discusses logical effort and how it can be used to analyze multistage logic networks. It explains that logical effort can be generalized to account for branching in multistage paths. When branching is considered, the path effort is equal to the product of logical effort, electrical effort, and branching effort. Logical effort analysis can also be used to determine the optimal size of each stage and number of stages to minimize delay in a multistage network.
1. The junction tree algorithm is used to perform inference in graphical models. It involves constructing a junction tree from a graph, where cliques in the graph become nodes in the tree.
2. Message passing is then performed on the junction tree to calculate marginal and joint probabilities. Messages are passed from parent to child nodes involving updating beliefs based on the intersection of variable sets.
3. Common message passing algorithms include Shafer-Shenoy and Lauritzen-Spiegelhalter, which differ in how messages are computed and passed between nodes.
The document discusses using the Fast Fourier Transform (FFT) algorithm to multiply polynomials in faster than quadratic time. It explains that the FFT represents polynomials in a point-value representation using complex roots of unity, which allows multiplication to be performed pointwise in linear time. The FFT algorithm recursively decomposes the polynomial multiplication problem into smaller subproblems of half the size, using divide and conquer, to compute the discrete Fourier transform in O(n log n) time rather than the naive O(n^2) time. Interpolation can also be performed in similar time to convert back from the point-value representation to coefficients. Overall the FFT provides a faster algorithm for polynomial multiplication and convolution.
This document contains a summary of 8 mathematics questions on the topic of sets. Each question contains 1-4 parts asking students to shade regions on Venn diagrams, list set elements, or calculate set properties like union and intersection. The document also provides the answers to each question in point form for easy reference.
Slide set presented for the Wireless Communication module at Jacobs University Bremen, Fall 2015.
Teacher: Dr. Stefano Severi, assistant: Andrei Stoica
This document discusses temporal graphs, which are graphs where nodes and edges are active for specific time instances. It provides examples of temporal graphs and compares them to non-temporal graphs. It then covers temporal graph traversal methods like depth-first search and breadth-first search, accounting for temporal constraints. Various path types in temporal graphs are defined, such as foremost paths, latest-departure paths, and fastest paths. Algorithms for finding these paths using a stream representation or graph transformation approach are outlined.
This document discusses the use of approximate Bayesian computation (ABC) to perform Bayesian inference when the likelihood function is intractable. It presents François and Pudlo's (F&P) ABC method, which uses a summary statistic S(·), an importance proposal g(·), a kernel K(·), and a bandwidth h. The document outlines the approximations and errors involved in F&P's ABC and discusses calibrating and optimizing the choice of summary statistic and bandwidth. It notes that while the optimal summary statistic is the expectation E(θ|y), this is usually unavailable, so F&P suggest estimating it using simulated data from a pilot run.
SUSY Q-Balls and Boson Stars in Anti-de Sitter space-time - Cancun talk 2012Jurgen Riedel
This document presents an overview of Q-balls and boson stars. It begins with introductions to solitons, topological solitons like vortices and domain walls, and non-topological solitons like Q-balls. Q-balls are described as localized rotating solutions stabilized by a conserved Noether charge. Boson stars are discussed as gravitating solutions composed of scalar fields minimally coupled to gravity. Different models of boson stars are presented, including those with self-interactions and gauge couplings. Rotating versions of both Q-balls and boson stars are also summarized.
Practical volume estimation of polytopes by billiard trajectories and a new a...Apostolos Chalkis
A new randomized method to approximate the volume of a convex polytope based on simulated annealing for cooling convex bodies and MCMC sampling with geometric random walks
14th Athens Colloquium on Algorithms and Complexity (ACAC19) — Apostolos Chalkis
This document presents a new method for estimating the volume of convex polytopes based on a new annealing schedule. It uses a multiphase Monte Carlo approach with a sequence of concentric convex bodies to approximate the volume, and a new simulated annealing method constructs a sparser sequence of bodies. Billiard-walk sampling is used for V-polytopes and zonotopes. The method scales to dimension 100 within an hour for random V-polytopes and zonotopes, outperforming previous methods; the underlying walks have theoretical mixing complexity O*(d^3).
Practical Volume Estimation of Zonotopes by a new Annealing Schedule for Cool... — Apostolos Chalkis
This document describes a new algorithm for approximating the volume of zonotopes using a Multiphase Monte Carlo method. The algorithm uses a sequence of scaled copies of a convex body to bound the zonotope, allowing efficient volume ratio estimation. It was shown to perform volume computations for zonotopes in up to 100 dimensions within an hour, providing the first practical algorithm for high-dimensional zonotope volumes. The algorithm and its open source implementation enable applications in fields like robotics that require zonotope representations.
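For context on why approximation is needed here: a zonotope with n generators in R^d admits an exact volume formula, vol(Z) = Σ over d-subsets S of the generators of |det(G_S)|, but the sum has C(n, d) terms, exponential in the order. A minimal Python sketch of this exact formula (for illustration only; this is not the Multiphase Monte Carlo algorithm of the talk):

```python
from itertools import combinations

def det(M):
    """Determinant by Laplace expansion; fine for the tiny matrices used here."""
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(len(M)))

def zonotope_volume(gens, d):
    """Exact volume: sum of |det| over all d-subsets of the generators."""
    return sum(abs(det([list(g) for g in S])) for S in combinations(gens, d))

# Three generators in the plane give a hexagon of area 20 + 3 + 4 = 27.
print(zonotope_volume([(2, 5), (4, 0), (1, 1)], 2))  # 27
```

The cost of enumerating all C(n, d) subsets is exactly what makes sampling-based methods attractive for high-order zonotopes.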
Computing the volume of a convex body is a fundamental problem in computational geometry and optimization. In this talk we discuss the computational complexity of this problem from a theoretical as well as a practical point of view. We show examples of how volume computation appears in applications ranging from combinatorics to algebraic geometry.
Next, we design the first practical algorithm for polytope volume approximation in high dimensions (few hundreds).
The algorithm utilizes uniform sampling from a convex region and efficient boundary polytope oracles.
Interestingly, our software provides a framework for exploring theoretical advances since it is believed, and our experiments provide evidence for this belief, that the current asymptotic bounds are unrealistically high.
This document summarizes the Ph.D. work of Apostolos Chalkis on efficient geometric random walks for high-dimensional sampling from convex bodies. The goals are to develop algorithms and implementations of random walks like reflective Hamiltonian Monte Carlo and the billiard walk to sample from log-concave distributions restricted to polytopes, zonotopes, and spectrahedra. The work has applications in areas like computational geometry, computational finance, and systems biology. Open-source software packages in C++, R, and Python have been created to implement the sampling algorithms.
High-dimensional polytopes defined by oracles: algorithms, computations and a... — Vissarion Fisikopoulos
The document discusses algorithms for computing volumes of polytopes. It notes that computing volumes exactly is hard, but randomized polynomial-time algorithms can approximate volumes with high probability. It describes two building blocks: Random Directions Hit-and-Run (RDHR), which generates approximately uniform points within a polytope via random walks; and Multiphase Monte Carlo, which approximates a polytope's volume by sampling points within a sequence of enclosing balls. RDHR mixes in O*(d^3) steps, and these algorithms can compute volumes of high-dimensional polytopes that exact algorithms cannot handle.
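One RDHR step in an H-polytope {x : Ax ≤ b} can be sketched in a few lines (illustrative Python, not the optimized C++ of the paper): pick a uniform random direction, compute where the line through the current point exits the polytope, and jump to a uniform point on that chord.

```python
import numpy as np

def rdhr_step(A, b, x, rng):
    """One Random Directions Hit-and-Run step inside {x : Ax <= b}."""
    d = rng.standard_normal(A.shape[1])
    d /= np.linalg.norm(d)                 # uniform random direction
    Ad = A @ d
    slack = b - A @ x                      # positive for an interior point
    pos, neg = Ad > 1e-12, Ad < -1e-12
    t_max = np.min(slack[pos] / Ad[pos])   # chord endpoints along +/- d
    t_min = np.max(slack[neg] / Ad[neg])
    return x + rng.uniform(t_min, t_max) * d

# Usage: walk inside the unit cube [0,1]^3 written as Ax <= b.
A = np.vstack([np.eye(3), -np.eye(3)])
b = np.concatenate([np.ones(3), np.zeros(3)])
rng = np.random.default_rng(0)
x = np.full(3, 0.5)
for _ in range(1000):
    x = rdhr_step(A, b, x, rng)
assert np.all(A @ x <= b + 1e-9)           # the walk never leaves the polytope
```

Each step costs one matrix-vector product per constraint check, which is why boundary oracles dominate the running time in practice.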
We experimentally study the fundamental problem of computing the volume of a convex polytope given as an intersection of linear inequalities. We implement and evaluate practical randomized algorithms for accurately approximating the polytope's volume in high dimensions (e.g. one hundred). To carry this out efficiently we experimentally correlate the effect of parameters, such as random walk length and number of sample points, on accuracy and runtime. Moreover, we exploit the problem's geometry by implementing an iterative rounding procedure, computing partial generations of random points and designing fast polytope boundary oracles. Our publicly available code is significantly faster than exact computation and more accurate than existing approximation methods. We provide volume approximations for the Birkhoff polytopes B11, ..., B15, whereas exact methods have only computed that of B10.
This document discusses algorithms for computing the volume of high-dimensional convex bodies. It begins by introducing the problem and some challenges in high dimensions. It then describes various randomized algorithms that use sampling techniques like Markov chain Monte Carlo (MCMC) to estimate volumes. Specific algorithms discussed include multiphase Monte Carlo, hit-and-run sampling, and billiard walks. The document reviews the theoretical and practical complexity of these algorithms. It also presents applications of volume computation in fields like engineering, biology, and machine learning.
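The multiphase idea mentioned above can be sketched directly: fix balls B_0 ⊆ … ⊆ B_m with B_0 inside the body K and B_m enclosing it, write vol(K) as vol(B_0) times a telescoping product of ratios vol(K ∩ B_i)/vol(K ∩ B_{i-1}), and estimate each ratio by sampling. The toy version below uses plain rejection sampling on a cube where real implementations use Hit-and-Run or billiard walks; it is a sketch of the principle, not of any particular package.

```python
import math
import numpy as np

def ball_point(r, d, rng):
    """Uniform random point in the d-dimensional ball of radius r."""
    v = rng.standard_normal(d)
    v /= np.linalg.norm(v)
    return r * v * rng.uniform() ** (1.0 / d)

def mmc_volume_cube(d=3, n=20000, seed=1):
    """Estimate vol([-1,1]^d) by a telescoping product over concentric balls."""
    rng = np.random.default_rng(seed)
    inside = lambda x: np.max(np.abs(x)) <= 1.0        # membership oracle
    radii = [1.0]                                      # inscribed ball
    while radii[-1] < math.sqrt(d):                    # grow by 2^(1/d) until enclosing
        radii.append(radii[-1] * 2 ** (1.0 / d))
    vol = math.pi ** (d / 2) / math.gamma(d / 2 + 1)   # vol(K ∩ B_0) = vol(B_0)
    for i in range(1, len(radii)):
        hits = small = 0
        while hits < n:
            x = ball_point(radii[i], d, rng)
            if inside(x):                              # uniform point in K ∩ B_i
                hits += 1
                if np.linalg.norm(x) <= radii[i - 1]:
                    small += 1                         # also lies in K ∩ B_{i-1}
        vol /= small / hits                            # telescoping update
    return vol

v = mmc_volume_cube()   # true volume of [-1,1]^3 is 8
```

Rejection sampling only works in low dimension because the acceptance probability decays exponentially; the geometric random walks discussed in the text replace it in high dimension.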
The Newton polytope of the resultant, or resultant polytope, characterizes the resultant polynomial more precisely than total degree. The combinatorics of resultant polytopes are known in the Sylvester case [Gelfand et al.90] and up to dimension 3 [Sturmfels 94]. We extend this work by studying the combinatorial characterization of 4-dimensional resultant polytopes, which show a greater diversity and involve computational and combinatorial challenges. In particular, our experiments, based on software respol for computing resultant polytopes, establish lower bounds on the maximal number of faces. By studying mixed subdivisions, we obtain tight upper bounds on the maximal number of facets and ridges, thus arriving at the following maximal f-vector: (22,66,66,22), i.e. vector of face cardinalities. Certain general features emerge, such as the symmetry of the maximal f-vector, which are intriguing but still under investigation. We establish a result of independent interest, namely that the f-vector is maximized when the input supports are sufficiently generic, namely full dimensional and without parallel edges. Lastly, we offer a classification result of all possible 4-dimensional resultant polytopes.
Volesti is an open source C++ library with an R package for high-dimensional sampling and volume computation. It provides three volume estimation algorithms and four sampling algorithms that use geometric random walks. These algorithms allow for volume computation and uniform sampling in thousands of dimensions where other approaches fail due to the curse of dimensionality. Volesti has applications in multivariate integration, crisis detection in financial markets, and was involved in Google Summer of Code 2020 for several related projects.
The document discusses meshing periodic surfaces in CGAL. It describes adapting the CGAL surface meshing algorithm to work with periodic triangulations by modifying point insertion and refinement criteria. Examples meshing various periodic minimal surfaces like the gyroid and schwarz P surface are shown for different criteria values. Future work includes improving the refinement criteria to handle all cases and proving algorithm correctness and termination.
This document discusses regular triangulations and resultant polytopes. It introduces the concepts of mixed subdivisions, the Cayley trick relating triangulations to mixed subdivisions, and how mixed subdivisions relate to the Newton polytope of the resultant. It defines i-mixed cells and how they correspond to vertices of the resultant polytope. The document outlines an algorithm to enumerate the vertices of the resultant polytope based on enumerating regular triangulations using i-mixed cell configurations and cubical flips between mixed subdivisions. It provides examples and complexity analysis and discusses future work on fully enumerating resultant polytopes.
The document describes an incremental algorithm for computing the resultant polytope Π of a system of n+1 polynomials in n variables. The algorithm takes as input the supports of the polynomials and incrementally constructs an inner approximation Q of Π by calling an oracle to extend illegal facets. At each step Q is refined until all facets are legal, at which point Q = Π. The algorithm outputs the H-representation and V-representation of Π.
"An output-sensitive algorithm for computing projections of resultant polytop..." — Vissarion Fisikopoulos
This document summarizes an incremental algorithm for computing projections of resultant polytopes. The algorithm uses an oracle that, given a direction vector, computes a vertex of the resultant polytope that is extremal in that direction. It starts with an initial inner approximation of the resultant polytope and incrementally extends facets by calling the oracle until all facets are legal, meaning they support facets of the true resultant polytope. This provides an output-sensitive algorithm that computes only as many triangulations as needed to represent the resultant polytope.
"Faster Geometric Algorithms via Dynamic Determinant Computation." — Vissarion Fisikopoulos
The document proposes faster algorithms for geometric problems by using dynamic determinant computation. Many geometric algorithms involve computing determinants of matrices to evaluate geometric predicates. Computing determinants directly is expensive, especially for high-dimensional problems. The document presents an algorithm for dynamically updating determinants when a column of the matrix is changed in O(d^2) time, faster than recomputing from scratch. This dynamic determinant computation can speed up algorithms that require repeated predicate evaluations, such as the incremental convex hull algorithm, by updating determinants instead of recomputing them.
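The O(d^2) column update can be sketched with the matrix determinant lemma plus a Sherman–Morrison-style update of the inverse. The exact scheme in the paper may differ; this is a standard sketch of the idea, with a check against recomputation from scratch.

```python
import numpy as np

def update_column(B, detA, j, c):
    """Given B = inv(A) and det(A), return (inv(A'), det(A')) where A' is A
    with column j replaced by c.  Costs O(d^2) instead of O(d^3)."""
    w = B @ c                      # O(d^2)
    detA_new = detA * w[j]         # matrix determinant lemma for a column swap
    B_new = B.copy()
    B_new[j] = B[j] / w[j]         # new pivot row of the inverse
    for i in range(len(B)):
        if i != j:
            B_new[i] = B[i] - w[i] * B_new[j]
    return B_new, detA_new

# Usage: compare the O(d^2) update against a full recomputation.
rng = np.random.default_rng(2)
A = rng.standard_normal((5, 5))
B, detA = np.linalg.inv(A), np.linalg.det(A)
c = rng.standard_normal(5)
B2, det2 = update_column(B, detA, 3, c)
A2 = A.copy(); A2[:, 3] = c
assert np.allclose(det2, np.linalg.det(A2))
assert np.allclose(B2, np.linalg.inv(A2))
```

In an incremental convex hull, each orientation predicate swaps one point (one column) in and out, which is exactly the access pattern this update exploits.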
Efficient Volume and Edge-Skeleton Computation for Polytopes Given by Oracles — Vissarion Fisikopoulos
The document discusses efficient algorithms for computing volumes and edge skeletons of polytopes defined implicitly by optimization oracles. It presents an algorithm to compute the edge skeleton of a polytope in polynomially many oracle calls and arithmetic operations. It also describes using geometric random walks and optimization oracles to approximate polytope volume, which is more efficient than exact computation in high dimensions. Experimental results show the approach computes the volume within minutes for polytopes up to dimension 12 with less than 2% error.
The document discusses resultant polytopes, which are the Newton polytopes of the resultant polynomial for a system of polynomials. It focuses on analyzing the face vectors of 4-dimensional resultant polytopes. The main results characterize the possible face vectors in three cases: (1) when all supports have size 2 except one of size 5, (2) when all supports have size 2 except two of sizes 3 and 4, and (3) when all supports have size 2 except three of size 3. Lower bounds are proved for the number of faces in the third case. Computation of resultant polytopes uses software that implements algorithms utilizing polytope computations and tropical geometry.
Efficient Edge-Skeleton Computation for Polytopes Defined by Oracles — Vissarion Fisikopoulos
This document summarizes algorithms for computing the edge skeleton of a polytope defined by oracle functions. It first describes an existing algorithm for vertex enumeration in the oracle model that works by computing an initial simplex and recursively querying the oracle. It then presents a new algorithm for computing the edge skeleton that takes as input the oracle functions and a superset of edge directions, and works by generating candidate edge segments and validating them with the oracle. The runtime of this edge skeleton algorithm is polynomial in parameters of the polytope representation.
High-dimensional polytopes defined by oracles: algorithms, computations and a... — Vissarion Fisikopoulos
This document summarizes a PhD thesis defense about algorithms and computations involving high-dimensional polytopes defined by oracles. It introduces polytope representations, oracle definitions, and discusses resultant polytopes arising in algebraic geometry. It outlines an output-sensitive algorithm for computing projections of resultant polytopes using mixed subdivisions. It also describes work on edge-skeleton computations, a volume algorithm, 4D resultant polytope combinatorics, and high-dimensional predicate software.
Recently, there has been growing interest in geospatial trajectory computing. We call sequences of time-stamped locations trajectories. As the technology for tracking moving objects becomes cheaper and more accurate, massive amounts of spatial trajectories are generated nowadays by smartphones, infrastructure, computer games, natural phenomena, and many other sources.
In this talk we will present the set of tools available in Boost Geometry to work with trajectories highlighting latest as well as older library developments. Starting with more basic operations like length, distance and closest points computations between trajectories we move forward to more advanced operations like compression or simplification as well as the conceptually opposite operation of densify by interpolating or generating random points on a given trajectory. We conclude with the important topic of similarity measurements between trajectories.
All implemented algorithms are parameterized via Boost Geometry's strategy mechanism, which controls the accuracy-efficiency trade-off and works for three different coordinate systems (namely cartesian, spherical, and ellipsoidal), each of which comes with its own advantages and limitations.
This document summarizes a presentation on computing polytopes via a vertex oracle. It discusses:
1. Using a vertex oracle that takes a direction vector as input and outputs a vertex of the resultant polytope that is extremal in that direction.
2. An incremental algorithm that starts with an inner approximation of the resultant polytope and iteratively calls the oracle to extend illegal facets until the approximation equals the resultant polytope.
3. The oracle works by lifting the point set to construct a regular subdivision, then refining to a triangulation to extract the vertex of the resultant polytope extremal in the given direction.
The document describes an algorithm for enumerating 2-level polytopes in fixed dimensions. A 2-level polytope has vertices that are contained in two parallel hyperplanes. The algorithm takes as input a list of (d-1)-dimensional 2-level polytopes and extends each one to d dimensions, computing the closed sets of vertices to obtain new d-dimensional 2-level polytopes. Experimental results show the numbers of 2-level polytopes enumerated for dimensions up to 6. Open questions ask for a more output-sensitive enumeration algorithm and whether the number of d-dimensional 2-level polytopes is exponential in d.
The figure of the Earth can be modelled either by a cartesian plane, a sphere or an (oblate) ellipsoid, in increasing order with respect to approximation quality. The shortest path between two points on such a surface is called a geodesic. The study of geodesic problems on ellipsoids dates back to Newton. However, the majority of open-source GIS systems today use methods on the cartesian plane. The main advantages of those approaches are simplicity of implementation and performance. On the other hand, they come with a handicap: accuracy.
We experimentally study the accuracy-performance trade-offs of various methods for distance computation (as well as similar geodesic problems such as azimuth and area computation). We test projections paired with cartesian computations, spherical-trigonometric computations and a number of ellipsoidal methods such as [Andoyer'65] and [Thomas'70] formulas, [Vincenty'75] iterative method, great elliptic arc's method, and [Karney'15] series approximation. We also show that some methods from the bibliography (e.g. [Tseng'15]) are neither faster nor more accurate compared to the above list of methods and thus become redundant. For our experiments we use the open source libraries Boost Geometry and GeographicLib.
Our results are of independent interest since we are not aware of a similar experimental study. More interestingly, they can be used as a reference for practitioners that want to use the most efficient method with respect to some given accuracy.
Geodesic computations (such as distance computations) apart from being a fundamental problem in computational geometry and geography/geodesy are also building blocks for many higher level algorithms such as k-nearest neighbour problems, line interpolation, densification of geometries, area and buffer, to name a few.
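As a concrete baseline for the spherical-trigonometric branch of this comparison, here is the standard haversine great-circle distance (a self-contained sketch; the study itself uses the Boost Geometry and GeographicLib implementations):

```python
from math import radians, sin, cos, asin, sqrt

def haversine(lat1, lon1, lat2, lon2, R=6371008.8):
    """Great-circle distance (meters) on a sphere of mean Earth radius R."""
    p1, p2 = radians(lat1), radians(lat2)
    dphi = p2 - p1
    dlmb = radians(lon2 - lon1)
    a = sin(dphi / 2) ** 2 + cos(p1) * cos(p2) * sin(dlmb / 2) ** 2
    return 2 * R * asin(sqrt(a))

# Athens to Brussels: roughly 2.09e6 m on the sphere; the ellipsoidal
# methods listed above refine this at the sub-percent level.
d = haversine(37.98, 23.73, 50.85, 4.35)
```

The spherical model is cheap and closed-form, which is exactly the accuracy-performance trade-off the experiments quantify.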
# References
* Some experimental results can be found here: https://github.com/vissarion/geometry/wiki/Accuracy-and-performance-of-geographic-algorithms
* A related talk (with some graphs on performance and accuracy) can be found here https://fosdem.org/2019/schedule/event/geo_boostgeometry
* The source code of most of the algorithms in the study is in Boost Geometry: https://github.com/boostorg/geometry ; our study also includes GeographicLib: https://geographiclib.sourceforge.io
The Earth is not flat; but it's not round either (Geography on Boost.Geometry) — Vissarion Fisikopoulos
What is a great circle, a loxodrome or a geodesic? What are the differences between them and which one is more suitable for each GIS application? This talk addresses this kind of questions and how geography is implemented in Boost.Geometry. The library that is currently being used to provide GIS support to MySQL.
Following up the introductory talk on Boost.Geometry, we discuss the algorithmic, implementation, and user perspectives of the development of geography in Boost.Geometry. We define basic geometric objects such as geodesics, and the modeling of the Earth as a sphere or ellipsoid. We try to understand the effect of different Earth models on the accuracy and speed of fundamental geometric algorithms (such as length, area, and intersection) through particular examples. Finally, we take a look at the future of geography in Boost.Geometry.
Presentation of the paper "An output-sensitive algorithm for computing (projections of) resultant polytopes" in the Annual Symposium on Computational Geometry (SoCG 2012)
Presentation of the paper "Combinatorics of 4-dimensional Resultant Polytopes" in International Symposium on Symbolic and Algebraic Computation (ISSAC) 2013
A new practical algorithm for volume estimation using annealing of convex bodies
1. Practical volume estimation by a new annealing
schedule for cooling convex bodies
Vissarion Fisikopoulos
Joint work with A.Chalkis, I.Z.Emiris
Dept. of Informatics & Telecommunications, University of Athens
ULB, 2019
3. Our problem
Given P a convex polytope in Rd compute the volume of P.
Some elementary polytopes (simplex, cube) have simple determinantal formulas.
Example: the triangle with vertices (1, 2), (3, 6), (6, 1) has area |det[(1, 2, 1), (3, 6, 1), (6, 1, 1)]| / 2! = 22/2 = 11.
Example: the parallelogram spanned by (2, 5) and (4, 0) has area |det[(2, 5), (4, 0)]| = 20.
9. Birkhoff polytopes
Given the complete bipartite graph Kn,n = (V, E), a perfect matching is M ⊆ E s.t. every vertex meets exactly one member of M.
For S ⊆ E define χ^S ∈ {0, 1}^E by χ^S_e = 1 if e ∈ S and 0 otherwise.
Bn = conv{χ^M | M is a perfect matching of Kn,n}
# faces of B3: 6, 15, 18, 9; vol(B3) = 9/8
There exist formulas for the volume [deLoera et al ’07], but values are known only for n ≤ 10, after 1 yr of parallel computing [Beck et al ’03].
12. Volumes and counting
Given n elements and a partial order on them, the order polytope PO ⊆ [0, 1]^n consists of the points whose coordinates satisfy the partial order.
Example: elements a, b, c with the single relation a < b; its 3 linear extensions are abc, acb, cab.
# linear extensions = volume of order polytope · n! [Stanley’86]
Counting linear extensions is #P-hard [Brightwell’91]
14. Mixed volume
Let P1, P2, . . . , Pd be polytopes in Rd; then the mixed volume is
M(P1, . . . , Pd) = Σ_{I ⊆ {1,2,...,d}} (−1)^{d−|I|} · Vol(Σ_{i∈I} Pi)
where the inner sum Σ_{i∈I} Pi is the Minkowski sum.
Example. For d = 2: M(P1, P2) = Vol(P1 + P2) − Vol(P1) − Vol(P2).
15. Volumes and algebraic geometry
Toric geometry: if P is an integral d-dimensional polytope, then d! times the volume of P is the degree of the toric variety associated to P.
BKK bound (Bernshtein, Khovanskii, Kushnirenko): let f1, . . . , fn be polynomials in C[x1, . . . , xn], and let N(fj) denote the Newton polytope of fj, i.e. the convex hull of its exponent vectors. If f1, . . . , fn are generic, then the number of solutions of the polynomial system
f1 = 0, . . . , fn = 0
with no xi = 0 equals the normalized mixed volume n! M(N(f1), . . . , N(fn)).
16. Applications
The volume of zonotopes is used to test methods for order reduction, which is important in several areas: autonomous driving, human-robot collaboration and smart grids [Althoff et al.]
Volumes of intersections of polytopes are used in bio-geography to compute biodiversity and related measures, e.g. [Barnagaud, Kissling, Tsirogiannis, Fisikopoulos, Villeger, Sekercioglu’17]
18. First thoughts for volume computation
Triangulation (or sign decomposition) methods: exponential size in d.
Sampling/rejection techniques (sample from a bounding box) fail in high dimensions:
volume(unit cube) = 1, volume(unit ball) ∼ (c/d)^{d/2},
so the acceptance probability drops exponentially with d.
22. Our setting
Given P a convex polytope in Rd, compute the volume of P.
H-polytope: P = {x | Ax ≤ b, A ∈ Rq×d, b ∈ Rq}
V-polytope: P is the convex hull of a set of points in Rd
zonotope: Minkowski sum of segments (equivalently, a linear projection of a hypercube)
Complexity
#P-hard for both H- and V-representations [DyerFrieze’88]
open if both representations are available
no deterministic poly-time algorithm can compute the volume with less than exponential relative error (oracle model) [Elekes’86]
26. State-of-the-art
Authors-Year | Complexity (oracle steps) | Algorithm
[Dyer, Frieze, Kannan’91] | O∗(d^23) | Sequence of balls + grid walk
[Kannan, Lovasz, Simonovits’97] | O∗(d^5) | Sequence of balls + ball walk + isotropy
[Lovasz, Vempala’03] | O∗(d^4) | Annealing + hit-and-run
[Cousins, Vempala’15] | O∗(d^3) | Gaussian cooling (* well-rounded)
[Lee, Vempala’18] | O∗(F·d^{2/3}) | Hamiltonian walk (** H-polytopes, F facets)
Software:
[Emiris, F’14] Sequence of balls + coordinate hit-and-run
[Cousins, Vempala’16] Gaussian cooling + hit-and-run
Limitation: efficient only for H-polytopes (scale up to a few hundred dims in hrs); for V-polytopes or zonotopes:
oracles are expensive (solution of an LP)
[Cousins, Vempala’16] needs a bound on the number of facets
Goal: an efficient algorithm for V-polytopes and zonotopes
28. Methodology
Volume algorithm parts:
1. Multiphase Monte Carlo (MMC): e.g. sequence of balls, annealing of functions
2. Sampling: grid walk, ball walk, hit-and-run, Hamiltonian walk
Typically MMC (1) at each phase solves a sampling problem (2).
Our approach:
MMC using convex bodies (balls or H-polytopes that "fit well" and are "easy" to sample from)
A new annealing method to construct the sequence of bodies
A new convergence criterion
29. Multiphase Monte Carlo
Construct a sequence of convex bodies C1 ⊇ · · · ⊇ Cm intersecting the given polytope P; then
vol(P) = vol(Cm) · (vol(Pm)/vol(Cm)) · (vol(P0)/vol(P1)) · (vol(P1)/vol(P2)) · · · (vol(Pm−1)/vol(Pm)),
where P0 = P and Pi = Ci ∩ P for i = 1, . . . , m (the product telescopes to vol(P0)).
30. Annealing Schedule
How we compute the sequence of bodies.
Inputs: polytope P, error ε, cooling parameters r, δ > 0 and a significance level (s.l.) α > 0 s.t. 0 < r + δ < 1.
Output: a sequence of convex bodies C1 ⊇ · · · ⊇ Cm s.t.
vol(Pi+1)/vol(Pi) ∈ [r, r + δ] with high probability,
where Pi = Ci ∩ P, i = 1, . . . , m and P0 = P.
31. t-test (one-tailed)
Let ν observations of a r.v. X ∼ N(µ, σ²) with unknown variance σ².
The t-test checks a null hypothesis about the population mean against a specified value µ0, using the statistic
t = (x̄ − µ0) / (s/√ν) ∼ t_{ν−1},
where x̄ is the sample mean, s the sample st.d., and t_{ν−1} the Student's t-distribution with ν − 1 degrees of freedom.
Given a s.l. α > 0 we test the null hypothesis H0 : µ ≤ µ0 against H1 : µ > µ0.
We reject H0 if
t ≥ t_{ν−1,α} ⇒ x̄ ≥ µ0 + t_{ν−1,α}·s/√ν,
where t_{ν−1,α} is the critical value; this gives Pr(reject H0 | H0 true) = α. Otherwise we fail to reject H0.
32. t-test for the ratio of volumes of convex bodies
Given convex bodies Pi ⊇ Pi+1, a s.l. α, and cooling parameters r, δ > 0 with 0 < r + δ < 1, we define two t-tests:
testL(Pi, Pi+1, r, δ, α, ν, N): H0 : vol(Pi+1)/vol(Pi) ≤ r + δ vs. H1 : vol(Pi+1)/vol(Pi) > r + δ; successful if we fail to reject H0.
testR(Pi, Pi+1, r, α, ν, N): H0 : vol(Pi+1)/vol(Pi) ≤ r vs. H1 : vol(Pi+1)/vol(Pi) > r; successful if we reject H0.
testL and testR are used by the annealing schedule to restrict each ratio ri = vol(Pi+1)/vol(Pi) to the interval [r, r + δ].
33. t-test: computing testL, testR for Pi ⊇ Pi+1
1. Sample νN points from Pi and split the sample into ν sublists of length N.
2. The ν mean values are experimental values that follow N(ri, ri(1 − ri)/N).
3. Use those values to perform two t-tests with null hypotheses: testL H0 : ri ≤ r + δ, testR H0 : ri ≤ r.
Assuming the following holds for the mean µ̂ of the ν mean values,
r + δ + t_{ν−1,α}·s/√ν ≥ µ̂ ≥ r + t_{ν−1,α}·s/√ν,
then ri is restricted to [r, r + δ] with high probability.
34. Initialization of Annealing Schedule
Given: polytope P, a convex body C s.t. C ∩ P ≠ ∅, an interval [qmin, qmax], and parameters r, δ, α, ν, N.
Binary search for q ∈ [qmin, qmax] s.t. both testL(qC, qC ∩ P) and testR(qC, qC ∩ P) are successful.
Sample from qC, with rejection to P.
We denote the successful qC by C.
35. 1st iteration of the initialization step
[qmin, qmax] = [0.14, 0.38], q = (qmin + qmax)/2 = 0.26
r = 0.8, δ = 0.05, α = 0.1, ν = 10, νN = 1200 ⇒ N = 120
µ̂ = (r^(1) + r^(2) + · · · + r^(10))/ν = 684/1200 = 0.57, s = 0.06
testL → H0 : vol(qC ∩ P)/vol(qC) ≤ r + δ ⇒ succeeds (H0 not rejected).
testR → H0 : vol(qC ∩ P)/vol(qC) ≤ r ⇒ fails (H0 not rejected).
The qC of the 1st iteration is too big (the ratio is too small).
36. 2nd iteration of the initialization step
[qmin, qmax] = [0.14, 0.26], q = (qmin + qmax)/2 = 0.20
r = 0.8, δ = 0.05, α = 0.1, ν = 10, νN = 1200 ⇒ N = 120
µ̂ = (r^(1) + r^(2) + · · · + r^(10))/ν = 914/1200 = 0.76, s = 0.04
testL → H0 : vol(qC ∩ P)/vol(qC) ≤ r + δ ⇒ succeeds (H0 not rejected).
testR → H0 : vol(qC ∩ P)/vol(qC) ≤ r ⇒ fails (H0 not rejected).
The qC of the 2nd iteration is too big (the ratio is too small).
37. 3rd iteration of the initialization step
[qmin, qmax] = [0.14, 0.20], q = (qmin + qmax)/2 = 0.17
r = 0.8, δ = 0.05, α = 0.1, ν = 10, νN = 1200 ⇒ N = 120
µ̂ = (r^(1) + r^(2) + · · · + r^(10))/ν = 1076/1200 = 0.90, s = 0.02
testL → H0 : vol(qC ∩ P)/vol(qC) ≤ r + δ ⇒ fails (H0 rejected).
testR → H0 : vol(qC ∩ P)/vol(qC) ≤ r ⇒ succeeds (H0 rejected).
The qC of the 3rd iteration is too small (the ratio is too big).
38. 4th iteration of the initialization step
[qmin, qmax] = [0.17, 0.20], q = (qmin + qmax)/2 = 0.185
r = 0.8, δ = 0.05, α = 0.1, ν = 10, νN = 1200 ⇒ N = 120
µ̂ = (r^(1) + r^(2) + · · · + r^(10))/ν = 993/1200 = 0.83, s = 0.04
testL → H0 : vol(qC ∩ P)/vol(qC) ≤ r + δ ⇒ succeeds (H0 not rejected).
testR → H0 : vol(qC ∩ P)/vol(qC) ≤ r ⇒ succeeds (H0 rejected).
Set C = qC.
39. Annealing Schedule: iteration & stopping criterion
In step i of the annealing schedule, sample νN points from Pi to perform testR(Pi, C ∩ P).
The schedule stops if the volume ratio vol(C ∩ P)/vol(Pi) is large enough according to testR:
Stop in step i if testR(Pi, C ∩ P) succeeds; set m = i + 1 and Pm = C ∩ P, Cm = C.
Otherwise, construct Pi+1: binary search for qC s.t. testL(Pi, Pi+1) and testR(Pi, Pi+1) succeed, using the same sample in Pi.
40. Check stopping criterion
i = 0, P0 = P; sample νN points from P0.
r = 0.8, δ = 0.05, α = 0.1, ν = 10, νN = 1200 ⇒ N = 120
µ̂ = (r0^(1) + r0^(2) + · · · + r0^(10))/ν = 535/1200 = 0.45, s = 0.08
testR → H0 : r0 ≤ r ⇒ fails (H0 not rejected).
Define P1.
41. Define next convex body in MMC
i = 0, P0 = P; binary search for q ∈ [0.185, 0.38]
r = 0.8, δ = 0.05, α = 0.1, ν = 10, νN = 1200 ⇒ N = 120
q = 0.36, µ̂ = (r0^(1) + r0^(2) + · · · + r0^(10))/ν = 998/1200 = 0.83, s = 0.07
testL → H0 : r0 ≤ r + δ ⇒ succeeds (H0 not rejected).
testR → H0 : r0 ≤ r ⇒ succeeds (H0 rejected).
Set C1 = qC.
42. Check stopping criterion
i = 1, P1 = C1 ∩ P; sample νN points from P1.
r = 0.8, δ = 0.05, α = 0.1, ν = 10, νN = 1200 ⇒ N = 120
µ̂ = (r1^(1) + r1^(2) + · · · + r1^(10))/ν = 0.57, s = 0.07
testR → H0 : r1 ≤ r ⇒ fails (H0 not rejected).
Define P2.
43. Define next convex body in MMC
i = 1, P1 = C1 ∩ P; binary search for q ∈ [0.185, 0.36]
r = 0.8, δ = 0.05, α = 0.1, ν = 10, νN = 1200 ⇒ N = 120
q = 0.28, µ̂ = (r1^(1) + r1^(2) + · · · + r1^(10))/ν = 0.82, s = 0.06
testL → H0 : r1 ≤ r + δ ⇒ succeeds (H0 not rejected).
testR → H0 : r1 ≤ r ⇒ succeeds (H0 rejected).
Set C2 = qC.
44. Check stopping criterion
i = 2, P2 = C2 ∩ P; sample νN points from P2.
r = 0.8, δ = 0.05, α = 0.1, ν = 10, νN = 1200 ⇒ N = 120
µ̂ = (r2^(1) + r2^(2) + · · · + r2^(10))/ν = 833/1200 = 0.69, s = 0.08
testR → H0 : r2 ≤ r ⇒ fails (H0 not rejected).
Define P3.
45. Define next convex body in MMC
i = 2, P2 = C2 ∩ P; binary search for q ∈ [0.185, 0.28]
r = 0.8, δ = 0.05, α = 0.1, ν = 10, νN = 1200 ⇒ N = 120
q = 0.21, µ̂ = (r2^(1) + r2^(2) + · · · + r2^(10))/ν = 976/1200 = 0.81, s = 0.05
testL → H0 : r2 ≤ r + δ ⇒ succeeds (H0 not rejected).
testR → H0 : r2 ≤ r ⇒ succeeds (H0 rejected).
Set C3 = qC.
46. Check stopping criterion
i = 3, P3 = C3 ∩ P; sample νN points from P3.
r = 0.8, δ = 0.05, α = 0.1, ν = 10, νN = 1200 ⇒ N = 120
µ̂ = (r3^(1) + r3^(2) + · · · + r3^(10))/ν = 1026/1200 = 0.86, s = 0.04
testR → H0 : r3 ≤ r ⇒ succeeds (H0 rejected).
Set m = 4, C4 = C.
47. Estimate Ratio: sliding window
For i = 1, . . . , m (m computed by the schedule) estimate ri = vol(Pi+1)/vol(Pi).
Generate points in Pi; for each new point keep the value of ri.
Store the last n values in a queue called the sliding window W.
Update W for each new sample point, by inserting the new value of ri and popping out the oldest ratio value:
W = (ri^(j), . . . , ri^(j+n−1))
49. Convergence criterion
W is a set of n values following a Gaussian distribution.
Consider the mean value µ̂ and the st.d. s of W, and let Pr = (3/4)^{1/(m+1)}. To each ri assign a value εi (the maximum allowed error for ri) s.t. Σi εi² = ε².
Using p = (1 + Pr)/2 and the interval [µ̂ − zp·s, µ̂ + zp·s]: if
((µ̂ + zp·s) − (µ̂ − zp·s)) / (µ̂ + zp·s) = 2·zp·s / (µ̂ + zp·s) ≤ εi/2,
then declare convergence.
Remark: if we assume (perfect) uniform sampling from Pi:
1. There is n = O(1) s.t. ri ∈ [µ̂ − zp·s, µ̂ + zp·s] with Pr = (3/4)^{1/(m+1)}.
2. The method estimates vol(P) with probability 3/4 and error ≤ ε.
In practice it is open how to bound n.
50. The volume algorithm
Algorithm 1 VolumeAlgorithm(P, ε, r, δ, α, ν, N, n)
Construct C ⊆ Rd s.t. C ∩ P ≠ ∅ and set [qmin, qmax]
{P0, . . . , Pm, Cm} = AnnealingSchedule(P, C, r, δ, α, ν, N, qmin, qmax)
εm = ε / (2√(m+1)), ε̃ = ε·√(4(m+1) − 1) / (2√(m+1))
Set εi = ε̃/√m, i = 0, . . . , m − 1
for i = 0, . . . , m do
  if i < m then
    ri = EstimateRatio(Pi, Pi+1, εi, m, n)
  else
    rm = EstimateRatio(Cm, Pm, εm, m, n)
  end if
end for
return vol(Cm) · rm / (r0 · r1 · · · rm−1)
51. Bounds on #phases
Let the power of the i-th t-test be powi := Pr(reject H0 | H0 false) = 1 − βi.
Proposition. Given P ⊂ Rd, cooling parameters r, δ s.t. r + δ < 1/2, and a s.l. α, let m be the number of balls in MMC. Then m < d lg(d) with probability p ≥ 1 − max{βi} = min{powi}, i = 0, . . . , m.
Proposition. Given P, the body C that minimizes the expected number of phases in MMC, for given cooling parameters r, δ and s.l. α, is the one that maximizes vol(C ∩ P) in the annealing schedule initialization.
52. Implementation
VolEsti: sampling and volume estimation
C++, R-interface, python bindings (in review)
open source
since 2014, CGAL (not any more), Eigen, LPSolve
design: mix of obj oriented + templates, C++03
main developers: VF, Tolis Chalkis
https://github.com/GeomScale/volume_approximation
Contributions welcome!
53. C++ VolEsti example
template <typename NT, class RNGType, class Polytope>
void test_volume(Polytope &HP)
{
    // Setup the parameters
    int n = HP.dimension();
    int walk_len = 10 + n/10;
    int nexp = 1, n_threads = 1;
    NT e = 1, err = 0.0000000001;
    int rnum = std::pow(e, -2) * 400 * n * std::log(n);
    unsigned seed = std::chrono::system_clock::now()
                        .time_since_epoch().count();
    RNGType rng(seed);
    boost::normal_distribution<> rdist(0, 1);
    boost::random::uniform_real_distribution<> urdist;
    boost::random::uniform_real_distribution<> urdist1(-1, 1);
    vars<NT, RNGType> var(rnum, n, walk_len, n_threads, err, e, 0, 0, 0, 0,
                          rng, urdist, urdist1, -1.0, false, false, false,
                          false, false, false);
    // Compute Chebychev ball and volume
    typedef typename Polytope::PolytopePoint Point;
    std::pair<Point, NT> CheBall = HP.ComputeInnerBall();
    std::cout << volume(HP, var, var, CheBall) << std::endl;
}
54. Experiments
Experimentally determined parameters:
Annealing schedule: set r = 0.1 and δ = 0.05 in order to make vol(Pi+1) about 10% of vol(Pi).
Set νN = 1200 + 2d² and ν = 10.
Sliding window: set window length n = 2d² + 250.
Hit-and-Run: step equal to 1.
55. Zonotopes: number of phases
Two types of bodies: balls (left), symmetric H-polytopes (right).
For zonotopes of low order (#generators/d ≤ 4), the number of bodies with H-polytopes is smaller than with balls.
For balls, the number of phases decreases as the order increases for a fixed dimension.
56. Zonotopes
Performance
Experimental results for zonotopes.
z-d-k Body order Vol m steps time(sec)
z-5-500 Ball 100 4.63e+13 1 0.1250e+04 22.26
z-20-2000 Ball 100 2.79e+62 1 0.2000e+04 1428
z-50-65 Hpoly 1.3 1.42e+62 1 1.487e+04 173.9
z-100-130 Hpoly 1.3 1.37e+138 3 17.19e+04 6073
z-50-75 Hpoly 1.5 2.96e+66 2 1.615e+04 253.6
z-100-150 Hpoly 1.5 2.32e+149 3 15.43e+04 10060
z-70-140 Hpoly 2 8.71e+111 2 5.059e+04 2695
z-100-200 Hpoly 2 5.27e+167 3 15.25e+04 34110
z-d-k: random zonotope in dimension d with k generators;
Body: the type of body used in MMC; m: number of bodies in
MMC
Used to evaluate zonotope approximation methods in
engineering [Kopetzki’17]
57. Comparison with state-of-the-art
Zonotopes
The number of steps for random zonotopes of order 2:
Our method is asymptotically better.
CoolingGaussian and SeqOfBalls take > 2 hr for d > 15.
For higher orders the difference is larger.
58. V-polytopes
P Vol m steps error time ex. Vol ex. time
cross-60 1.27e-64 1 20.6e+03 0.08 60.1 −− −−
cross-100 1.51e-128 2 94.2e+03 0.11 406 −− −−
∆-60 1.08e-82 10 77.4e+04 0.1 1442 1.20e-82 0.02
∆-80 1.30e-119 13 187e+04 0.07 6731 1.39e-119 0.07
cube-10 1052.4 1 1851 0.03 54.39 −− −−
cube-13 7538.2 1 2127 0.08 2937 −− −−
rv-10-80 3.74e-03 1 0.185e+04 0.08 3.35 3.46e-03 6.8
rv-10-160 1.59e-02 1 0.140e+04 0.06 5.91 1.50e-02 59
rv-15-30 2.73e-10 1 0.235e+04 0.02 3.46 2.79e-10 2.1
rv-15-60 4.41e-08 1 0.235e+04 −− 6.46 −− −−
rv-20-2000 2.89e-07 1 0.305e+04 −− 457 −− −−
rv-80-160 5.84e-106 3 11.3e+04 −− 3250 −− −−
rv-100-200 1.08e-141 4 24.5e+04 −− 13300 −− −−
time: the average time in seconds; ex. Vol: the exact volume; ex. time:
the time in seconds for the exact volume computation i.e. qhull in R
(package geometry); m is the number of phases. −− implies that the
execution failed due to memory issues or exceeded 1 hr.
59. H-polytopes
The number of steps for unit cubes in H-representation
for d = 100: > 2x faster than CoolingGaussian and ∼ 100x faster than SeqOfBalls
64. Future directions
Better (practical) sampling methods (candidate: Hamiltonian
walk) + implementation
Study convex body families for MMC
Applications:
zonotope approximation in engineering
finance [Cales, Chalkis, Emiris, F, SoCG’18]
Google Summer of Code ’19 call for internships! (contact me :)
65. GSOC19: Project 1. Sampling scalability
Empirical study of random walks for convex polytopes (mainly given by a set of linear inequalities).
Currently variations of hit-and-run random walks are used, but there are methods with better mixing time, e.g. the Hamiltonian walk: https://arxiv.org/pdf/1710.06261.pdf.
Expectation: a dramatic effect on the scaling of the underlying algorithms.
Ultimate goal: “scale from a few hundred dimensions to a few thousand”.
We can divide the coding project into the following steps:
Understand the code structure and design of VolEsti.
Create prototypes for new sampling algorithms (the main focus will be on the Hamiltonian walk, but we may investigate others too).
Implement the best representatives from the previous step in VolEsti and create R interfaces.
Write tests and documentation.
66. GSOC19: Project 2. Sampling and volume of spectahedra
A non-linear extension for VolEsti.
Spectahedra are feasible regions of semidefinite programs and are known to be convex.
They play an important role in optimization: the “next most understandable” convex objects after polytopes.
Algorithms for sampling and volume computation will shed more light on their study.
We can divide the coding project into the following steps:
Understand the code structure and design of VolEsti.
Understand the basics of spectahedra from the bibliography.
Implement a new convex body type and boundary oracle for spectahedra.
Work on extensions of the problem, such as replacing spectahedra by a spectahedral shadow.
Write tests and documentation.
67. GSOC19: Project 3. Convex optimization with randomized algorithms
Design and implementation of optimization algorithms (available in the relevant bibliography) in VolEsti that utilize sampling (already available in the library) as a main subroutine.
We can divide the coding project into the following steps:
Understand the code structure and design of VolEsti.
Implement optimization algorithms. A good place to start is Dabbene, “A Randomized Cutting Plane Method with Probabilistic Geometric Convergence”, SIAM J. Optimization, 2010.
Test implementations with the various random walks available in VolEsti.
Write tests and documentation.