Voronoi diagrams and their duals, Delaunay triangulations, are used in many areas of computing and the sciences. In three and higher dimensions, there is a substantial (i.e. polynomial) difference between the best-case and worst-case complexity of these objects when starting with n points. This motivates the search for algorithms that are output-sensitive rather than relying only on worst-case guarantees. In this talk, I will describe a simple, new algorithm for computing Voronoi diagrams in d dimensions that runs in O(f log n log Δ) time, where f is the output size and Δ, the spread of the input points, is the ratio of the diameter to the closest-pair distance. For a wide range of inputs, this is the best known algorithm. The algorithm is novel in that it turns the classic Delaunay refinement algorithm for mesh generation on its head, working backwards from a quality mesh to the Delaunay triangulation of the input. Along the way, we will see instances of several other classic problems for which no higher-dimensional results are known, including kinetic convex hulls and splitting Delaunay triangulations.
A New Approach to Output-Sensitive Voronoi Diagrams and Delaunay Triangulations, by Don Sheehy
We describe a new algorithm for computing the Voronoi diagram of a set of $n$ points in constant-dimensional Euclidean space. The running time of our algorithm is $O(f \log n \log \Delta)$, where $f$ is the output complexity of the Voronoi diagram and $\Delta$ is the spread of the input, the ratio of the largest to the smallest pairwise distance. Despite the simplicity of the algorithm and its analysis, it improves on the state of the art for all inputs with polynomial spread and near-linear output size. The key idea is to first build the Voronoi diagram of a superset of the input points using ideas from Voronoi refinement mesh generation. Then, the extra points are removed in a straightforward way that allows the total work to be bounded in terms of the output complexity, yielding the output-sensitive bound. The removal only involves local flips and is inspired by kinetic data structures.
On the higher order Voronoi diagram of line-segments (ISAAC 2012), by Maksym Zavershynskyi
We analyze structural properties of the order-k Voronoi diagram of line segments, which surprisingly has not received any attention in the computational geometry literature. We show that order-k Voronoi regions of line segments may be disconnected; in fact, a single order-k Voronoi region may consist of Ω(n) disjoint faces. Nevertheless, the structural complexity of the order-k Voronoi diagram of non-intersecting segments remains O(k(n − k)), as in the case of points. For intersecting line segments the structural complexity remains O(k(n − k)) for k ≥ n/2.
This document presents algorithms for maintaining a topological ordering of the vertices of a directed graph as edges are incrementally added. It begins by describing a simple "limited search" approach that takes O(m^2) time over all edge additions, where m is the number of edges. It then introduces several improved algorithms: a "two-way limited search" that performs searches in both directions and takes O(m^{3/2} log n) time; a "semi-ordered search" that relaxes constraints on the search order while still taking O(m^{3/2}) time; and, for dense graphs with m = Ω(n^2), a "topological search" approach that balances vertices instead of edges as it reorders them.
1) The document discusses various linear transformations including reflection, rotation, and shear applied to geometric shapes like triangles and rectangles.
2) Reflection is illustrated for a triangle across the x-axis and y=-x line. Rotation is shown counterclockwise for different triangles around the origin.
3) Clockwise rotation and shear transformations in the x-direction are also demonstrated through examples using matrices applied to triangle and rectangle vertices. Plots of the transformed shapes are provided.
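A minimal sketch of the kinds of examples described above: applying 2x2 matrices for reflection, rotation, and shear to the vertices of a triangle (the triangle coordinates are hypothetical, not taken from the document).

```python
def apply_matrix(m, points):
    """Apply a 2x2 matrix m = ((a, b), (c, d)) to a list of (x, y) vertices."""
    (a, b), (c, d) = m
    return [(a * x + b * y, c * x + d * y) for x, y in points]

triangle = [(1, 0), (0, 2), (3, 1)]

# Reflection across the x-axis: (x, y) -> (x, -y)
reflect_x = ((1, 0), (0, -1))

# Counterclockwise rotation by 90 degrees about the origin
rot90 = ((0, -1), (1, 0))

# Shear in the x-direction with factor 2: (x, y) -> (x + 2y, y)
shear_x = ((1, 2), (0, 1))

print(apply_matrix(reflect_x, triangle))  # [(1, 0), (0, -2), (3, -1)]
print(apply_matrix(rot90, triangle))      # [(0, 1), (-2, 0), (-1, 3)]
print(apply_matrix(shear_x, triangle))    # [(1, 0), (4, 2), (5, 1)]
```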
This document discusses isomorphism and one-to-one and onto linear transformations, with examples. Two vector spaces are said to be isomorphic if there exists a linear transformation between them that is both one-to-one and onto. The document provides theorems relating the dimensions of isomorphic spaces, and characterizations of one-to-one and onto linear transformations in terms of the kernel, rank, and nullity. Examples are given to determine whether linear transformations are one-to-one and/or onto based on their rank and nullity. Finally, several vector spaces are stated to be isomorphic to each other.
The document discusses algorithms for finding the convex hull of a set of points in two dimensions. It describes the Jarvis march (also called the gift wrapping algorithm) and the Graham scan. The Jarvis march builds the hull one vertex at a time, choosing as the next hull vertex the point that all remaining points lie to one side of, and runs in O(nh) time, where h is the number of hull vertices. The Graham scan sorts the points by angle around an anchor point and then sweeps through them, discarding any point that would form a clockwise (non-left) turn with the two points beneath it on the stack; it runs in O(n log n) time.
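A hedged sketch of the convex hull idea, using Andrew's monotone chain, a variant of the Graham scan that sorts by coordinates rather than by angle but discards points on clockwise turns in the same way (the point set is a made-up example, not from the document).

```python
def cross(o, a, b):
    """Cross product OA x OB; > 0 means a counterclockwise (left) turn."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def convex_hull(points):
    """Return hull vertices in counterclockwise order (monotone chain)."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    lower, upper = [], []
    for p in pts:
        # Pop points that would make a non-left turn, as in Graham scan
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

# Square with one interior point; the interior point is discarded
print(convex_hull([(0, 0), (2, 0), (1, 1), (2, 2), (0, 2)]))
# [(0, 0), (2, 0), (2, 2), (0, 2)]
```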
This document provides an introduction to Fourier analysis and Fourier series. It begins with an overview of sinusoidal waves and Fourier series representations. It then gives examples of using Fourier series to represent non-sinusoidal waves like sawtooth waves and square waves. The document discusses how signals can be broken down into their component frequencies using Fourier analysis. It also explores shifting between the time domain and frequency domain. Finally, applications of Fourier analysis in fields like image and audio processing are mentioned.
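The square-wave example mentioned above can be sketched numerically: a square wave of period 2π has the Fourier sine series (4/π) Σ sin(kx)/k over odd k, and partial sums approach the true value as terms are added (the function names here are hypothetical).

```python
import math

def square_wave_partial(x, n_terms):
    """Partial sum of the Fourier series of f(x) = sign(sin x)."""
    total = 0.0
    for i in range(n_terms):
        k = 2 * i + 1          # odd harmonics only
        total += math.sin(k * x) / k
    return 4.0 / math.pi * total

# More terms bring the partial sum closer to the true value f(pi/2) = 1
for n in (1, 10, 100):
    print(n, square_wave_partial(math.pi / 2, n))
```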
The document discusses the z-transform, which is useful for analyzing discrete-time signals and systems. It defines the z-transform as the sum of a discrete-time signal multiplied by z^{-n}, where z is a complex variable. The region of convergence consists of the values of z for which this sum converges. Key topics covered include the definition of the z-transform, the region of convergence and poles/zeros, and properties such as linearity, time shifting, and the convolution theorem, which allow discrete-time signals and systems to be analyzed in the complex z-domain.
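For a finite-length signal the defining sum X(z) = Σ x[n] z^{-n} can be evaluated directly; a minimal sketch (the signal and evaluation point are made-up examples):

```python
def z_transform(x, z):
    """Evaluate X(z) = sum over n of x[n] * z^(-n) for a finite signal x."""
    return sum(xn * z ** (-n) for n, xn in enumerate(x))

# For x = [1, 1, 1] at z = 2: 1 + 1/2 + 1/4 = 1.75
print(z_transform([1, 1, 1], 2))  # 1.75
```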
The document discusses half range Fourier series representations of functions defined on an interval (0, L). It explains that a periodic extension F(x) of period 2L can be constructed from the function f(x) defined on (0, L). This extended function F(x) is then expanded into either a Fourier sine series or cosine series. The coefficients of these series represent the half range Fourier sine or cosine series for the original function f(x) defined on the interval (0, L).
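The half-range expansions summarized above have a standard textbook form; a sketch in standard notation (the notation is assumed, not taken from the document):

```latex
% Half-range sine series (odd periodic extension of f on (0, L)):
f(x) \sim \sum_{n=1}^{\infty} b_n \sin\frac{n\pi x}{L},
\qquad b_n = \frac{2}{L}\int_0^L f(x)\sin\frac{n\pi x}{L}\,dx

% Half-range cosine series (even periodic extension):
f(x) \sim \frac{a_0}{2} + \sum_{n=1}^{\infty} a_n \cos\frac{n\pi x}{L},
\qquad a_n = \frac{2}{L}\int_0^L f(x)\cos\frac{n\pi x}{L}\,dx
```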
This document provides an overview of polar coordinates and complex numbers. It defines the polar coordinate system with a fixed point called the pole (O) and a fixed ray called the polar axis (OA). It explains that a point P has polar coordinates (r, θ) where r is the length of OP and θ is the angle measured from the polar axis. It shows examples of graphing points in the rθ plane and representing the same point in different ways based on the values of r and θ. Finally, it states that there are two basic types of polar equations: θ = k which represents a line and r = c which represents a circle.
Algorithm Design and Complexity - Course 12, by Traian Rebedea
This document provides an overview of algorithm design and complexity, specifically focusing on flow networks. It defines key concepts related to flow networks including positive flow, net flow, flow properties, maximum flow problems, and minimum cut. It also describes the Ford-Fulkerson method and Edmonds-Karp algorithm for solving maximum flow problems using the concept of a residual network given an initial flow.
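The Edmonds-Karp algorithm mentioned above is Ford-Fulkerson with BFS choosing shortest augmenting paths in the residual network; a hedged sketch (the dict-of-dicts capacity format and example network are assumptions, not from the document):

```python
from collections import deque

def edmonds_karp(capacity, source, sink):
    """Max flow via BFS augmenting paths. capacity[u][v] = edge capacity."""
    # Build the residual network, adding zero-capacity reverse edges
    residual = {u: dict(nbrs) for u, nbrs in capacity.items()}
    for u, nbrs in capacity.items():
        for v in nbrs:
            residual.setdefault(v, {}).setdefault(u, 0)
    max_flow = 0
    while True:
        # BFS for a shortest augmenting path from source to sink
        parent = {source: None}
        queue = deque([source])
        while queue and sink not in parent:
            u = queue.popleft()
            for v, cap in residual[u].items():
                if cap > 0 and v not in parent:
                    parent[v] = u
                    queue.append(v)
        if sink not in parent:
            return max_flow      # no augmenting path left
        # Collect the path edges and find the bottleneck capacity
        path, v = [], sink
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        bottleneck = min(residual[u][v] for u, v in path)
        # Augment along the path in the residual network
        for u, v in path:
            residual[u][v] -= bottleneck
            residual[v][u] += bottleneck
        max_flow += bottleneck

capacity = {'s': {'a': 3, 'b': 2}, 'a': {'b': 1, 't': 2}, 'b': {'t': 3}, 't': {}}
print(edmonds_karp(capacity, 's', 't'))  # 5
```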
Transformation matrices can be used to describe, for example, the angles that a robot arm's servos must take to reach a desired position in space, or how an autonomous underwater vehicle should align itself relative to obstacles in the water.
The document discusses solids of revolution and calculating their volumes. It explains that for a solid obtained by rotating a region bounded by a function y = f(x) around the x-axis from x = a to x = b, the cross-sections are circular discs with radius f(x). Therefore, the cross-sectional area is π[f(x)]² and the volume can be calculated as the integral from a to b of π[f(x)]² dx.
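The disc-method integral can be checked numerically with a midpoint rule; a minimal sketch (the example curve y = x on [0, 1], which sweeps out a cone of volume π/3, is a made-up test case):

```python
import math

def volume_of_revolution(f, a, b, n=10000):
    """Midpoint-rule approximation of the integral of pi * f(x)^2 dx."""
    dx = (b - a) / n
    return sum(math.pi * f(a + (i + 0.5) * dx) ** 2 * dx for i in range(n))

vol = volume_of_revolution(lambda x: x, 0.0, 1.0)
print(vol, math.pi / 3)  # both approximately 1.0472
```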
OpenGL uses model-view and projection matrices to apply transformations like translation, rotation, and scaling. The document discusses constructing transformation matrices for different types of transformations, including translation, rotation around fixed points and arbitrary axes, scaling, and shearing. It also covers combining multiple transformations using matrix multiplication and storing transformations in the current transformation matrix.
This document discusses 2D geometric transformations including translation, rotation, and scaling. It provides the mathematical definitions and matrix representations for each transformation. Translation moves an object along a straight path, rotation moves it along a circular path, and scaling changes its size. All transformations can be represented by 3x3 matrices using homogeneous coordinates to allow combinations of multiple transformations. The inverse of each transformation matrix is also defined.
This document discusses Fourier series and Parseval's theorem, which relates a function to its Fourier coefficients. Specifically, it states that if a Fourier series converges uniformly, the integral of the square of the original function over its domain equals the corresponding sum of the squares of the Fourier coefficients. The document also provides an example of using Parseval's theorem to find the total square error of a Fourier approximation and to prove an identity.
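Parseval's theorem as described can be written in a standard form (for a series on (−L, L) with coefficients a_n, b_n; the notation is assumed, not taken from the document):

```latex
\frac{1}{L}\int_{-L}^{L} [f(x)]^2 \, dx
  = \frac{a_0^2}{2} + \sum_{n=1}^{\infty} \left( a_n^2 + b_n^2 \right)
```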
Matrices can be used to represent 2D geometric transformations such as translation, scaling, and rotation. Translations are represented by adding a translation vector to coordinates. Scaling is represented by multiplying coordinates by scaling factors. Rotations are represented by premultiplying coordinates with a rotation matrix. Multiple transformations can be combined by multiplying their respective matrices. Using homogeneous coordinates allows all transformations to be uniformly represented as matrix multiplications.
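In homogeneous coordinates even translation becomes a matrix multiplication, so transformations compose by multiplying their 3x3 matrices (rightmost applied first); a minimal sketch (function names hypothetical):

```python
import math

def mat_mul(a, b):
    """Multiply two 3x3 matrices given as lists of rows."""
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def apply(m, point):
    """Apply a 3x3 homogeneous matrix to a 2D point."""
    x, y = point
    v = (x, y, 1.0)
    r = [sum(m[i][k] * v[k] for k in range(3)) for i in range(3)]
    return (r[0], r[1])

def translation(tx, ty):
    return [[1, 0, tx], [0, 1, ty], [0, 0, 1]]

def rotation(theta):
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s, 0], [s, c, 0], [0, 0, 1]]

# Rotate 90 degrees about the origin, then translate by (1, 0):
m = mat_mul(translation(1, 0), rotation(math.pi / 2))
x, y = apply(m, (1, 0))
print(round(x, 6), round(y, 6))  # 1.0 1.0
```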
The document defines key concepts in vector spaces including vector space, subspace, span of a set of vectors, and basis. It provides examples to illustrate these concepts. Specifically:
- A vector space is a set of objects called vectors that can be added together and multiplied by scalars, satisfying certain properties.
- A subspace is a subset of a vector space that is itself a vector space under the operations of the original space.
- The span of a set of vectors S is the set of all possible linear combinations of the vectors in S.
- A basis is a set of vectors that spans a vector space and is linearly independent. It provides a standard representation for vectors in the space.
The document discusses graphs and graph algorithms. It defines graphs and their representations using adjacency lists and matrices. It then describes the Breadth-First Search (BFS) and Depth-First Search (DFS) algorithms for traversing graphs. BFS uses a queue to explore the neighbors of each vertex level-by-level, while DFS uses a stack to explore as deep as possible first before backtracking. Pseudocode and examples are provided to illustrate how the algorithms work.
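The queue-vs-stack contrast described above can be sketched directly on an adjacency-list graph (the example graph is hypothetical):

```python
from collections import deque

def bfs(graph, start):
    """Visit vertices level by level using a FIFO queue."""
    order, seen, queue = [], {start}, deque([start])
    while queue:
        u = queue.popleft()
        order.append(u)
        for v in graph[u]:
            if v not in seen:
                seen.add(v)
                queue.append(v)
    return order

def dfs(graph, start):
    """Go as deep as possible first, using an explicit LIFO stack."""
    order, seen, stack = [], set(), [start]
    while stack:
        u = stack.pop()
        if u in seen:
            continue
        seen.add(u)
        order.append(u)
        for v in reversed(graph[u]):  # reversed so neighbors pop in list order
            stack.append(v)
    return order

graph = {'a': ['b', 'c'], 'b': ['d'], 'c': ['d'], 'd': []}
print(bfs(graph, 'a'))  # ['a', 'b', 'c', 'd']
print(dfs(graph, 'a'))  # ['a', 'b', 'd', 'c']
```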
This document discusses 2D geometric transformations including translation, rotation, scaling, and composite transformations. It provides definitions and formulas for each type of transformation. Translation moves objects by adding offsets to coordinates without deformation. Rotation rotates objects around an origin by a certain angle. Scaling enlarges or shrinks objects by multiplying coordinates by scaling factors. Composite transformations apply multiple transformations sequentially by multiplying their matrices. Homogeneous coordinates are also introduced to represent transformations in matrix form.
This document discusses vector spaces and subspaces. It begins by defining a vector space as a set V with two operations, vector addition and scalar multiplication, that satisfy certain properties. Examples of vector spaces include R² and the space of real polynomials of degree n or less.
It then defines a subspace as a subset of a vector space that is itself a vector space under the inherited operations. For a subset to be a subspace, it must be closed under vector addition and scalar multiplication, and contain the zero vector. Examples given include lines and planes through the origin in R³.
The span of a set S of vectors is defined as the set of all linear combinations of the vectors in S, and it is itself a subspace of the containing vector space.
This document introduces vector fields and defines scalar and vector fields. It defines a vector field F over a planar region D as a function that assigns a vector to each point, given by the components P and Q. Similarly, a vector field over a spatial region E is given by components P, Q, and R. Examples of vector fields include velocity fields and gravitational and electric force fields. It also defines the gradient of a scalar field, divergence and curl of a vector field, and discusses integral calculations for scalar and vector fields along curves.
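The gradient, divergence, and curl mentioned above have standard component formulas; a sketch with F = (P, Q, R) as in the summary (notation assumed, not taken from the document):

```latex
\nabla f = \left( \frac{\partial f}{\partial x},\;
  \frac{\partial f}{\partial y},\; \frac{\partial f}{\partial z} \right),
\qquad
\nabla \cdot \mathbf{F} = \frac{\partial P}{\partial x}
  + \frac{\partial Q}{\partial y} + \frac{\partial R}{\partial z}

\nabla \times \mathbf{F} = \left(
  \frac{\partial R}{\partial y} - \frac{\partial Q}{\partial z},\;
  \frac{\partial P}{\partial z} - \frac{\partial R}{\partial x},\;
  \frac{\partial Q}{\partial x} - \frac{\partial P}{\partial y} \right)
```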
The document discusses OTTER, a theorem prover that uses resolution-style proofs. It provides background on OTTER's clause representation and its main strategies, including restriction strategies, direction strategies, and look-ahead strategies. Examples are given of using OTTER to solve problems like the 15 puzzle, along with an analysis of its complexity.
This document defines key graph concepts like paths, cycles, degrees of vertices, and different types of graphs like trees, forests, and directed acyclic graphs. It also describes common graph representations like adjacency matrices and lists. Finally, it covers graph traversal algorithms like breadth-first search and depth-first search, outlining their time complexities and providing examples of their process.
This document discusses the maximum flow and minimum cut problems in networks. It begins by defining key terms like flow, cut, source and sink nodes. It then presents the max-flow min-cut theorem, which states that the maximum flow in a network equals the minimum cut. The document provides examples and proofs of this theorem. It also discusses algorithms for finding the maximum flow in polynomial time and analyzes their running time complexity. The applications of these network flow problems in various domains are also briefly mentioned.
This document introduces network flows and provides examples of networks where flows occur, such as liquids in pipes and goods transported on roads. It defines key concepts like sources, sinks, flows, and cuts. It describes how to find a maximum flow and minimum cut in a network and presents theorems relating feasible flows and cut capacities. The maximum flow problem is to determine the largest amount of flow that can pass from sources to sinks in a network respecting edge capacities.
This document discusses the Laplace transform of periodic functions. It begins by defining a periodic function f(t) and derives the Laplace transform of f(t) as an infinite sum. Several examples are then worked through to demonstrate finding the Laplace transform of square waves, triangular waves, and other periodic functions. Transform properties and techniques like using gate functions are employed in the examples. Reference books on circuit analysis and Laplace transforms are also listed.
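The infinite-sum derivation described above is presumably the standard periodic-function formula, obtained by summing the transforms of successive periods as a geometric series (standard notation; T is the period, not a symbol from the document):

```latex
\mathcal{L}\{f(t)\} = \frac{1}{1 - e^{-sT}} \int_0^T e^{-st} f(t)\, dt,
\qquad f(t + T) = f(t),\; s > 0
```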
The document discusses the time complexity analysis of common algorithms and data structures. It provides the worst case, average case, and best case time complexities for operations like searching, inserting, deleting, etc. for arrays, strings, stacks, queues, linked lists, trees, graphs, maps, and heaps. It also includes space complexity analysis for these algorithms and data structures.
Mesh Generation and Topological Data Analysis, by Don Sheehy
The document discusses mesh generation as a preprocessing step for topological data analysis (TDA). It describes how mesh generation can be used to decompose a domain into simple elements to approximate functions and compute persistence diagrams. Specifically, generating a quality Voronoi mesh allows the Voronoi filtration to approximate the sublevel filtration of the function and provide a good approximation of the persistence diagram. While meshing may not seem like an obvious approach for TDA, the document argues it can provide the necessary geometric and topological guarantees to make it a valid preprocessing step.
The document discusses part-of-speech tagging through supervised natural language processing. It involves collecting labeled text examples, extracting features from the text, training statistical models on the labeled examples and features, and evaluating the trained models. The document provides details on the feature extraction process, which involves identifying features from neighboring words, filtering low frequency features, assigning unique IDs to labels and features, and representing the data in a sparse column format for training softmax regression models to predict part-of-speech tags.
The document discusses how computer scientists measure the cost of algorithms using asymptotic analysis. It introduces the big-O, Omega, and Theta notations to classify algorithms based on how their running time grows relative to the size of the input. Big-O represents an upper bound, Omega a lower bound, and Theta a tight bound of an algorithm's growth rate. The analysis allows abstracting away machine-specific details to understand an algorithm's fundamental performance properties.
This document discusses big-O notation and asymptotic analysis of algorithms. It defines big-O notation as describing an upper bound on the running time of an algorithm. The key properties of big-O notation are explained, including that the fastest growing term dominates and that hierarchies of functions exist based on their growth rates. Examples are provided to demonstrate calculating the big-O of expressions and classifying algorithms based on whether their growth is constant, logarithmic, polynomial, exponential, or factorial.
The document discusses the first results from experiments using Pol-InSAR techniques with TanDEM-X data at X-band. It explores different TanDEM-X acquisition modes and presents initial coherence and interferometric results over various forest sites. The findings demonstrate the potential of Pol-InSAR at X-band to characterize forest structure by analyzing volume decorrelation effects and phase center location sensitivity. Ongoing work involves understanding and modeling high-resolution X-band Pol-InSAR data to extract physical parameters and generate forest information products.
Interference Management for Multi-hop NetworksMohamed Seif
The document summarizes research on interference management techniques for multi-hop networks. It describes a two-user two-hop network model and discusses degrees of freedom (DoF) as a way to characterize the network capacity at high signal-to-noise ratios. An aligned interference neutralization scheme is presented that achieves the maximum DoF of 2 for this network using signal alignment across multiple channel extensions. Variations of this scheme for untrusted relays, confidential messages, and imperfect channel state information are also summarized.
The document discusses computational aspects of stochastic phase-field models. It begins by motivating the inclusion of thermal noise in phase-field simulations through examples of dendrite formation. It then provides background on the deterministic phase-field and Allen-Cahn models before introducing the stochastic Allen-Cahn equation with additive white noise. The remainder of the document discusses the importance of studying this problem both theoretically and computationally, as well as outlining the topics to be covered in more depth.
Cosmological Perturbations and Numerical SimulationsIan Huston
Talk given at Queen Mary, University of London in March 2010.
Cosmological perturbation theory is well established as a tool for
probing the inhomogeneities of the early universe.
In this talk I will motivate the use of perturbation theory and
outline the mathematical formalism. Perturbations beyond linear order
are especially interesting as non-Gaussian effects can be used to
constrain inflationary models.
I will show how the Klein-Gordon equation at second order, written in
terms of scalar field variations only, can be numerically solved.
The slow roll version of the second order source term is used and the
method is shown to be extendable to the full equation. This procedure
allows the evolution of second order perturbations in general and the
calculation of the non-Gaussianity parameter in cases where there is
no analytical solution available.
M. Dimitrijević, Noncommutative models of gauge and gravity theoriesSEENET-MTP
- The document describes a talk on noncommutative geometry and gravity theories given at a workshop in Serbia.
- Noncommutative geometry arises in string theory and could help address problems in quantum gravity and the standard model. The talk presents an approach using a star product to represent noncommutative algebras.
- Actions for noncommutative gauge theory and gravity are discussed. For gravity, a deformation of the MacDowell-Mansouri action is proposed based on the Seiberg-Witten map. This leads to modified field equations and corrections to the Einstein-Hilbert and cosmological constant terms.
The document summarizes research on magnetic monopoles in noncommutative spacetime. It begins by motivating noncommutative spacetime as a way to incorporate quantum gravitational effects. It then shows that attempting to quantize spacetime by imposing noncommutativity of coordinates leads to inconsistencies when trying to define a Wu-Yang magnetic monopole in this framework. Specifically, the potentials describing the monopole fail to simultaneously satisfy Maxwell's equations and transform correctly under gauge transformations when expanded to second order in the noncommutativity parameter. This suggests the Dirac quantization condition cannot be satisfied in noncommutative spacetime. Possible reasons for this failure and directions for future work are discussed.
In topological inference, the goal is to extract information about a shape, given only a sample of points from it. There are many approaches to this problem, but the one we focus on is persistent homology. We get a view of the data at different scales by imagining the points are balls and consider different radii. The shape information we want comes in the form of a persistence diagram, which describes the components, cycles, bubbles, etc in the space that persist over a range of different scales.
To actually compute a persistence diagram in the geometric setting, previous work required complexes of size n^O(d). We reduce this complexity to O(n) (hiding some large constants depending on d) by using ideas from mesh generation.
This talk will not assume any knowledge of topology. This is joint work with Gary Miller, Benoit Hudson, and Steve Oudot.
1. This document contains the text and work for exercises from Chapter 12 on three-dimensional space and vectors. It includes problems finding points and lines in 3D space, calculating distances, properties of spheres, triangles, and equations of spheres and ellipsoids.
2. Several questions involve finding distances between points in 3D space using the Pythagorean theorem, identifying right triangles, and calculating areas of triangles. Properties calculated include radii, diameters, centers and equations of spheres and ellipsoids.
3. Methods demonstrated include identifying perpendicular and parallel lines; locating points and lines in planes; using vectors; and applying equations, distances, areas and Pythagorean theorem to analyze three-dimensional geometric shapes and problems.
Geometric Separators and the Parabolic LiftDon Sheehy
We present a simplification of the geometric separator algorithm of Miller and Thurston that uses parabolic lifting rather than stereographic projection. The result entirely eliminates the middle phase of that algorithm, which finds a conformal transformation to arrange the points nicely on the sphere.
- An NFA can be converted to an equivalent DFA using the subset construction. This involves constructing states of the DFA that correspond to subsets of states of the NFA.
- The subset construction results in an exponential blow-up in the number of states between the NFA and DFA. There is an NFA with n+1 states that requires a DFA with at least 2^n states.
- An epsilon-NFA (ε-NFA) allows epsilon transitions and can be converted to an equivalent DFA using a similar subset construction approach.
Fuqin Li_A physics-based atmospheric and BRDF correction for Landsat data ove...TERN Australia
1) A physics-based model was developed to correct Landsat data for atmospheric effects and bidirectional reflectance distribution function (BRDF) effects over mountainous terrain.
2) The model removes topographic illumination effects like brighter slopes facing the sun and darker slopes facing away. It also detects deep shadows.
3) Validation shows the corrections reduce decorrelation with incident angle and improve separability for land cover classification of slopes with different aspects.
4) The quality of correction depends on the quality, resolution, and co-registration of the digital surface/elevation model used.
Hidden gates in universe: Wormholes UCEN 2017 by Dr. Ali Ovgun
Gravity at UCEN 2017: Black holes and Cosmology, November 22, 23 and 24, 2017
The meeting take place at Universidad Central de Chile.
http://www2.udec.cl/~juoliva/gravatucen2017.html
This document provides an overview of geometrical optics and ray optics. It discusses key concepts like light rays, ray vectors, ray matrices, and matrix representations of optical components including lenses, mirrors, and interfaces. Specific components covered include thin lenses, curved interfaces, and laser cavities. The lens maker's formula and properties of lenses like focal length are also summarized. Geometrical optics provides a simplified approximation of light propagation using rays, while neglecting diffraction and interference effects.
Session02 review-laplace and partial fractionlAhmad Essam
This document provides an introduction to automatic control and Laplace transforms. It begins by reviewing elementary functions such as exponential, logarithmic, trigonometric, and complex functions. It then introduces the Laplace transform, including its definition and properties. Examples are provided to demonstrate how to take the Laplace transform of different functions and signals. The document also discusses inverse Laplace transforms and using partial fraction expansions to solve Laplace transform problems. The goal is to help understand transfer functions, which are important for analyzing control systems.
5. Voronoi Diagrams
The Voronoi cell of p is the set of reverse nearest neighbors of p.
The Voronoi diagram is dual to the Delaunay triangulation.
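The "reverse nearest neighbor" characterization above can be checked directly: a point x lies in the Voronoi cell of site p exactly when p is the nearest site to x. A minimal sketch, assuming numpy; the function name is mine, not from the slides:

```python
import numpy as np

def in_voronoi_cell(sites, i, x):
    """True iff x lies in the Voronoi cell of sites[i], i.e. x is a
    reverse nearest neighbor of sites[i]: no other site is closer to x."""
    d = np.linalg.norm(np.asarray(sites, float) - np.asarray(x, float), axis=1)
    return bool(d.argmin() == i)

sites = [[0.0, 0.0], [2.0, 0.0]]
print(in_voronoi_cell(sites, 0, [0.9, 0.0]))  # distance 0.9 vs 1.1, so True
```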
A Voronoi k-face is dual to a Delaunay (d−k)-face.
10. Voronoi Diagrams and Delaunay triangulations are used everywhere.
Descartes 1644
Dirichlet 1850
Snow 1854 (probably a myth, but a fun one)
Voronoi 1908
Thiessen 1911 (meteorology)
Geographical Information Systems
Graphics
Topological Data Analysis
Mesh Generation
24. Delaunay triangulations and Voronoi diagrams are projections of polyhedra.
Parabolic lifting into d+1 dimensions.
The Delaunay triangulation is the projection of the lower hull.
Other liftings yield weighted Delaunay triangulations.
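The lifting construction on this slide can be carried out directly: lift each point p to (p, ‖p‖²), take the convex hull one dimension up, and keep the downward-facing facets. A sketch assuming SciPy's hull routines; the function name is mine:

```python
import numpy as np
from scipy.spatial import ConvexHull

def delaunay_via_lifting(points):
    """Delaunay triangulation via the parabolic lift: lift each point p
    to (p, |p|^2), then project the lower convex hull back down."""
    pts = np.asarray(points, dtype=float)
    lifted = np.hstack([pts, (pts ** 2).sum(axis=1, keepdims=True)])
    hull = ConvexHull(lifted)
    # A facet is on the lower hull iff its outward normal points downward,
    # i.e. the last coordinate of the normal is negative.
    lower = hull.equations[:, -2] < 0
    return {frozenset(s) for s in hull.simplices[lower]}

rng = np.random.default_rng(0)
tri = delaunay_via_lifting(rng.random((10, 2)))  # set of frozensets of vertex indices
```

For points in general position this agrees with `scipy.spatial.Delaunay`, which uses the same lifting internally.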
35. (Convex Hull)
A brief history of Voronoi Diagram algorithms.
Chazelle: $O(n \log n + n^{\lceil d/2 \rceil})$
Avis: $O(nf)$
Seidel: $O(n^2 + f \log n)$
Matoušek and Schwarzkopf: $O(n^{2 - 2/(\lceil d/2 \rceil + 1)} \log^{O(1)} n + f \log n)$
Chan: $O(n \log f + (nf)^{1 - 1/(\lceil d/2 \rceil + 1)} \log^{O(1)} n)$
Chan, Snoeyink, Yap: $O((n + (nf)^{1 - 1/\lceil d/2 \rceil} + f n^{1 - 2/\lceil d/2 \rceil}) \log^{O(1)} n)$
Miller and Sheehy (today’s talk): $O(f \log n \log \Delta)$
47. Mesh Generation
Decompose a domain into simple elements.
Two goals: Mesh Quality and Conforming to Input.
Quality for simplices: Radius/Edge < const.
Quality for a Voronoi diagram: OutRadius/InRadius < const.
55. Meshing Points
Input: P ⊂ R^d
Output: M ⊃ P with a “nice” Voronoi diagram
n = |P|, m = |M|
Counterintuitive Fact about Meshing:
It’s sometimes easier to build the Voronoi/Delaunay
of a superset of the input than of the input alone.
60. Meshing Guarantees
Aspect Ratio (quality): R_v / r_v ≤ τ for every cell v,
where R_v and r_v are the out-radius and in-radius of the cell.
Cell Sizing:
lfs(x) := d(x, P ∖ {NN(x)})
(1/K) lfs(v) ≤ R_v ≤ K lfs(v)
Constant Local Complexity:
Each cell has at most a constant number of faces.
Optimality and Running time:
|M| = Θ(|Optimal|)
Running time: O(n log n + |M|)
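The sizing function lfs above is simply the distance from x to its second nearest point of P, i.e. the nearest point of P once NN(x) is excluded. A minimal sketch, assuming numpy; the helper name is mine:

```python
import numpy as np

def lfs(x, P):
    """d(x, P \\ {NN(x)}): distance from x to its second nearest point of P.
    For a vertex v in P itself, this is the distance to the nearest
    *other* point (the self-distance 0 occupies the first slot)."""
    d = np.linalg.norm(np.asarray(P, float) - np.asarray(x, float), axis=1)
    return float(np.sort(d)[1])

P = [[0.0, 0.0], [1.0, 0.0], [3.0, 0.0]]
print(lfs([0.0, 0.0], P))  # nearest is the point itself, so lfs = 1.0
```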
61. Mesh Generation in reverse?
Build a quality mesh.
Increase the weights of the input points.
Update the structure for each local change.
79. Local changes are flips.
Flips correspond to intersections of the Voronoi diagram of the
mesh and the Voronoi diagram of the input points.
80. Computing the flip times only requires a
single determinant computation.
$d+2$ points are cospherical iff
$\det \begin{pmatrix} p_1 & \cdots & p_{d+2} \\ 1 & \cdots & 1 \\ \|p_1\|^2 & \cdots & \|p_{d+2}\|^2 \end{pmatrix} = 0$
$d+2$ weighted points lie on a common orthosphere iff
$\det \begin{pmatrix} p_1 & \cdots & p_{d+2} \\ 1 & \cdots & 1 \\ \|p_1\|^2 - w_1 & \cdots & \|p_{d+2}\|^2 - w_{d+2} \end{pmatrix} = 0$
Define $w_i = t$ if $p_i \in P$, and $w_i = 0$ otherwise.
So $\det \begin{pmatrix} p_1 & \cdots & p_{d+2} \\ 1 & \cdots & 1 \\ \|p_1\|^2 - w_1 & \cdots & \|p_{d+2}\|^2 - w_{d+2} \end{pmatrix}$ is linear in $t$.
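Since the determinant is linear in t, a flip time can be found by evaluating it at t = 0 and t = 1 and solving the resulting linear equation. The sketch below illustrates this; the function names are hypothetical, and exact rational arithmetic via fractions stands in for the robust predicates a real implementation would need.

```python
from fractions import Fraction

def det(m):
    # Laplace expansion along the first row; fine for the small
    # (d+2) x (d+2) matrices that arise here.
    n = len(m)
    if n == 1:
        return m[0][0]
    total = 0
    for j in range(n):
        minor = [row[:j] + row[j + 1:] for row in m[1:]]
        total += (-1) ** j * m[0][j] * det(minor)
    return total

def orthosphere_det(points, weights):
    # Column for p_i: its coordinates, then 1, then |p_i|^2 - w_i,
    # matching the orthosphere determinant above.
    cols = [list(p) + [1, sum(c * c for c in p) - w]
            for p, w in zip(points, weights)]
    return det([list(row) for row in zip(*cols)])

def flip_time(points, is_input):
    # With w_i(t) = t for input points and 0 otherwise, the determinant
    # is linear in t: D(t) = D(0) + t * (D(1) - D(0)). Solve D(t) = 0.
    d0 = orthosphere_det(points, [0] * len(points))
    d1 = orthosphere_det(points, [1 if b else 0 for b in is_input])
    if d1 == d0:
        return None  # determinant constant in t: no flip for this tuple
    return Fraction(d0, d0 - d1)
```

For instance, (0,0), (4,0), (0,4) determine the circle of squared radius 8 centered at (2,2); the point (5,3) has power 10 − 8 = 2 with respect to it, so giving (5,3) weight t makes the four points share an orthosphere exactly at t = 2, which is what flip_time returns.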
87. The Algorithm
Add a bounding box around the points.
Build a quality mesh of the points.
Keep potential flips on a heap ordered by flip time.
(flip time is the weight of the input points when the flip happens)
Repeatedly pop a flip, attempt to do it, and update.
(at most O(1) new potential flips are added to the heap)
When the heap is empty, remove the bounding box and
all incident Delaunay faces.
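The flip-scheduling loop above can be sketched with a standard binary heap. This is a toy abstraction, not the talk's code: the mesh update is replaced by a caller-supplied apply_flip callback that returns whatever new potential flips the update creates (at most O(1) of them in the real algorithm).

```python
import heapq
from itertools import count

def process_flips(initial_flips, apply_flip):
    """Pop (time, flip) pairs in time order; apply_flip(flip) performs the
    local update and returns any newly created (time, flip) pairs."""
    counter = count()  # tie-breaker so the heap never compares flip objects
    heap = [(t, next(counter), f) for t, f in initial_flips]
    heapq.heapify(heap)
    order = []
    while heap:
        t, _, flip = heapq.heappop(heap)   # O(log n) per heap operation
        order.append(flip)
        for t2, f2 in apply_flip(flip):    # O(1) new flips per flip
            heapq.heappush(heap, (t2, next(counter), f2))
    return order
```

A real implementation would also discard stale heap entries whose flip is no longer present in the mesh before applying them; that check is omitted here.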
98. A summary of the analysis.
Full-dimensional mesh cells intersect output faces at
most log(spread) times.
Since each mesh cell has only a constant number of
faces, we get only O(f log(spread)) total flips.
Each flip generates at most O(1) new flips on the heap.
The heap operations require O(log n) time each.
Total running time is O(f log n log(spread)).
104. Summary
A new output-sensitive algorithm for Voronoi diagrams and
Delaunay triangulations in d dimensions.
Start with a quality mesh and then remove the Steiner points.
Use geometry to bound the combinatorial changes.
Running time: O(f log n log ∆), where ∆ is the spread.
Thank you.