New Bounds on the Size of Optimal Meshes

Don Sheehy
Geometrica, INRIA
Mesh Generation

1  Decompose a volume into simplices.
2  Simplices should be quality.
3  Output should conform to input.
Mesh Generation

Uses:
   PDEs via FEM
   Data Analysis

Good Codes:
   Triangle
   CGAL
   TetGen

Theoretical Guarantees:
   Sliver Removal
   Surface Reconstruction
Local Refinement Algorithms

Pros:
   Easy to implement
   Often Parallel

Cons:
   Termination?   Yes.
   Accumulations? No.
   How many points? This is what we'll answer.
The size of an optimal mesh is given by the feature size measure.

   lfs_P(x) := distance to the second nearest neighbor of x in P.

   Optimal Mesh Size (number of vertices) = Θ( ∫_Ω dx / lfs(x)^d )

   (the Θ hides a simple exponential in d)

   The Feature Size Measure:  µ_P(Ω) = ∫_Ω dx / lfs_P(x)^d

   When is µ_P(Ω) = O(n)?
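The feature size measure above can be made concrete with a small numerical sketch. The code below (my illustration, not from the talk; the function names and the unit-square domain are assumptions) estimates µ_P(Ω) for d = 2 by Monte Carlo integration, and shows that two nearly coincident points score much worse than two well-separated ones:

```python
import math
import random

def lfs(x, P):
    # Feature size: distance from x to its second nearest neighbor in P.
    return sorted(math.dist(x, p) for p in P)[1]

def feature_size_measure(P, n_samples=20000, seed=0):
    """Monte Carlo estimate of mu_P(Omega) = integral over Omega of
    dx / lfs_P(x)^d, with Omega taken to be the unit square (d = 2)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_samples):
        x = (rng.random(), rng.random())
        total += 1.0 / lfs(x, P) ** 2
    return total / n_samples  # Omega has unit area

# Two nearly coincident points in a big empty square (the canonical bad
# case on the next slide) give a much larger measure than the same two
# points placed far apart.
P_bad = [(0.5, 0.5), (0.5, 0.5001)]
P_ok = [(0.1, 0.1), (0.9, 0.9)]
print(feature_size_measure(P_bad) > feature_size_measure(P_ok))  # True
```

The integrand 1/lfs_P(x)^2 blows up near a tight pair of points, which is exactly why such inputs force large meshes.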
A canonical bad case for meshing is two points in a big empty space.
The feature size measure can be bounded in terms of the pacing.

   Order the points. For the ith point p_i:

      a = ‖p_i − NN(p_i)‖     (distance to its nearest neighbor)
      b = ‖p_i − 2NN(p_i)‖    (distance to its second nearest neighbor)

   The pacing of the ith point is φ_i = b/a.

   Let φ be the geometric mean, so Σ log φ_i = n log φ.

   φ is the pacing of the ordering.
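The definition above is easy to compute directly. In this sketch (mine, not the talk's; I assume NN and 2NN are taken among the previously ordered points, which is one natural reading of the slide) we compute each φ_i and the geometric-mean pacing φ:

```python
import math

def pacing(points):
    """Per-point pacing phi_i = b/a, where a and b are the distances
    from p_i to its nearest and second-nearest neighbors -- here taken
    among the earlier points p_1, ..., p_{i-1} (an assumption about the
    slide's NN / 2NN).  Returns the phi_i list and the geometric mean
    phi, using the slide's normalization sum(log phi_i) = n log phi."""
    phis = []
    for i in range(2, len(points)):  # need at least two earlier points
        a, b = sorted(math.dist(points[i], q) for q in points[:i])[:2]
        phis.append(b / a)
    n = len(points)
    phi = math.exp(sum(math.log(f) for f in phis) / n)
    return phis, phi

# Points in geometric progression along a line: every insertion sees
# the same ratio b/a = 1.5, so the per-point pacing is constant.
pts = [(2.0 ** k, 0.0) for k in range(10)]
phis, phi = pacing(pts)
print(phis[0])  # 1.5
```

Constant per-point pacing is the case that, by the takeaway at the end of the talk, characterizes point sets with linear-size meshes.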
The trick is to write the feature size measure as a telescoping sum.

   P_i = {p_1, . . . , p_i}

   µ_P = µ_{P_2} + Σ_{i=3}^{n} (µ_{P_i} − µ_{P_{i−1}})

         (each summand is the effect of adding the ith point)

   µ_{P_i}(Ω) − µ_{P_{i−1}}(Ω) = Θ(1 + log φ_i)

   Σ_{i=3}^{n} log φ_i = n log φ   ⟹   µ_P(Ω) = Θ(n + n log φ)

   Previous bound: O(n + φ^d n).
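Chaining the facts on this slide gives the bound in one derivation (a transcription of the slide's argument, not an addition to it):

```latex
\mu_P(\Omega)
  = \mu_{P_2}(\Omega) + \sum_{i=3}^{n}\bigl(\mu_{P_i}(\Omega) - \mu_{P_{i-1}}(\Omega)\bigr)
  = \mu_{P_2}(\Omega) + \sum_{i=3}^{n}\Theta(1 + \log\varphi_i)
  = \Theta\Bigl(n + \sum_{i=3}^{n}\log\varphi_i\Bigr)
  = \Theta(n + n\log\varphi).
```

Note the improvement over the previous O(n + φ^d n) bound: the dependence on the pacing drops from linear in φ^d to logarithmic in φ.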
Pacing analysis has already led to new results.

The Scaffold Theorem (SODA 2009)
   Given n points well-spaced on a surface,
   the volume mesh has size O(n).

Time-Optimal Point Meshing (SoCG 2011)
   Build a mesh in O(n log n + m) time.
   The algorithm explicitly computes the pacing for each insertion.
Some takeaway messages:

   1  The amortized change in the number of vertices in a mesh
      as a result of adding one new point is determined by the
      pacing of that point.

   2  Point sets that admit linear-size meshes are exactly those
      with constant pacing.

                       Thank you.
Mesh Generation

Decompose a domain into simple elements.

   Mesh Quality:
      Simplices: Radius/Edge < const
      Voronoi Diagram: OutRadius/InRadius < const

   Conforming to Input

   [figure: example elements marked ✗ / ✓ / ✗ against each quality criterion]
Optimal meshing adds the fewest points to make all Voronoi cells fat.*

   * Equivalent to the radius-edge condition on Delaunay simplices.
Meshing Points

   Input: P ⊂ R^d
   Output: M ⊃ P with a “nice” Voronoi diagram
   n = |P|, m = |M|
How to prove a meshing algorithm is optimal.

   The Ruppert Feature Size: f_P(x) := distance to the 2nd nearest neighbor of x in P

   For all v ∈ M, f_M(v) ≥ K·f_P(v)      ⟹      m = Θ( ∫_Ω dx / f_P(x)^d )

   “No 2 points too close together”             “Optimal Size Output”
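The spacing condition on the left-hand side is easy to check mechanically. Below is a toy sketch (my own illustration; the function names and the one-Steiner-point example are assumptions, not the talk's) of the condition f_M(v) ≥ K·f_P(v) for every mesh vertex v:

```python
import math

def f(x, pts):
    # Ruppert feature size: distance from x to its 2nd nearest neighbor
    # in pts.  For a vertex v of pts the 1st neighbor is v itself, so
    # this is v's distance to the nearest *other* point.
    return sorted(math.dist(x, p) for p in pts)[1]

def well_spaced(P, M, K):
    # The spacing condition: every mesh vertex v satisfies
    # f_M(v) >= K * f_P(v), i.e. refinement never packs points closer
    # together than a constant fraction of the input feature size.
    return all(f(v, M) >= K * f(v, P) for v in M)

P = [(0.0, 0.0), (1.0, 0.0)]   # input points
M = P + [(0.5, 0.0)]           # mesh = input plus one Steiner point
print(well_spaced(P, M, 0.5))  # True:  the ratios f_M/f_P are 0.5, 1.0, 0.5
print(well_spaced(P, M, 0.6))  # False: the endpoints have ratio 0.5
```

Since M ⊇ P implies f_M(v) ≤ f_P(v), the constant K is necessarily at most 1; an algorithm proves its output size is optimal by exhibiting some fixed K > 0.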
