new bounds on the size of optimal meshes

Posted on 11-May-2015

DESCRIPTION

The theory of optimal size meshes gives a method for analyzing the output size (number of simplices) of a Delaunay refinement mesh in terms of the integral of a sizing function over the input domain. The input points define a maximal such sizing function called the feature size. This paper presents a way to bound the feature size integral in terms of an easy-to-compute property of a suitable ordering of the point set. The key idea is to consider the pacing of an ordered point set, a measure of the rate of change in the feature size as points are added one at a time. In previous work, Miller et al. showed that if an ordered point set has pacing $\phi$, then the number of vertices in an optimal mesh will be $O(\phi^d n)$, where $d$ is the input dimension. We give a new analysis of this integral showing that the output size is only $\Theta(n + n\log \phi)$. The new analysis tightens bounds from several previous results and provides matching lower bounds. Moreover, it precisely characterizes inputs that yield outputs of size $O(n)$.

TRANSCRIPT

New Bounds on the Size of Optimal Meshes

Don Sheehy

Geometrica, INRIA

Mesh Generation

1 Decompose a volume into simplices.

2 Simplices should be quality.

3 Output should conform to input.

Mesh Generation

Uses: PDEs via FEM, Data Analysis

Good Codes: Triangle, CGAL, TetGen

Theoretical Guarantees: Sliver Removal, Surface Reconstruction

Local Refinement Algorithms

Pros: Easy to implement; Often Parallel

Cons: Termination? Yes. Accumulations? No. How many points? This is what we’ll answer.

The size of an optimal mesh is given by the feature size measure.

$\mathrm{lfs}_P(x) :=$ distance to the second nearest neighbor of $x$ in $P$.

Optimal Mesh Size $= \Theta\!\left(\int_\Omega \frac{dx}{\mathrm{lfs}_P(x)^d}\right)$ (the number of vertices; the $\Theta$ hides a simple exponential in $d$).

The Feature Size Measure: $\mu_P(\Omega) = \int_\Omega \frac{dx}{\mathrm{lfs}_P(x)^d}$

When is $\mu_P(\Omega) = O(n)$?
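The two definitions above can be sketched in plain Python: a brute-force $\mathrm{lfs}_P$ (distance to the second nearest neighbor) and a Monte Carlo estimate of the feature size measure $\mu_P(\Omega)$. This is an illustrative sketch, not code from the paper; the unit-square domain, the sample count, and the helper names are assumptions.

```python
import math
import random

def lfs(x, P):
    """Feature size at x: distance to the second-nearest point of P."""
    dists = sorted(math.dist(x, p) for p in P)
    return dists[1]  # second-smallest distance

def mu_estimate(P, dim=2, samples=20000, seed=0):
    """Monte Carlo estimate of mu_P(Omega) = integral of dx / lfs_P(x)^dim,
    with Omega taken to be the unit box (an assumption for this sketch)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(samples):
        x = tuple(rng.random() for _ in range(dim))
        total += 1.0 / lfs(x, P) ** dim
    return total / samples  # the unit box has volume 1

P = [(0.2, 0.2), (0.8, 0.8), (0.5, 0.1)]
print(mu_estimate(P))
```

The brute-force scan is $O(|P|)$ per query; a k-d tree would be the idiomatic choice for large point sets.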

A canonical bad case for meshing is two points in a big empty space.

The feature size measure can be bounded in terms of the pacing.

Order the points. For the $i$th point $p_i$, let $a = \|p_i - \mathrm{NN}(p_i)\|$ and $b = \|p_i - 2\mathrm{NN}(p_i)\|$, the distances to its nearest and second nearest neighbors.

The pacing of the $i$th point is $\phi_i = b/a$.

Let $\phi$ be the geometric mean, so $\sum \log \phi_i = n \log \phi$.

$\phi$ is the pacing of the ordering.
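The pacing of an ordered point set can be sketched as follows. This assumes, as one natural reading of the slide, that $\mathrm{NN}(p_i)$ and $2\mathrm{NN}(p_i)$ are taken among the previously inserted points $p_1, \ldots, p_{i-1}$, and it follows the slide's normalization $\sum \log \phi_i = n \log \phi$.

```python
import math

def pacing(points):
    """Return (phis, phi): phi_i = b/a for each point p_i, where a and b
    are the distances from p_i to its nearest and second-nearest neighbors
    among the earlier points p_1..p_{i-1} (an assumption of this sketch),
    and phi satisfies sum(log phi_i) = n * log phi, as on the slide."""
    phis = []
    for i in range(2, len(points)):  # the first two points have no pacing
        dists = sorted(math.dist(points[i], q) for q in points[:i])
        a, b = dists[0], dists[1]
        phis.append(b / a)
    n = len(points)
    phi = math.exp(sum(math.log(f) for f in phis) / n)
    return phis, phi
```

For example, inserting points along a line at 0, 1, and 3 gives the single pacing value $\phi_3 = 3/2$ for the last point.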

The trick is to write the feature size measure as a telescoping sum.

$P_i = \{p_1, \ldots, p_i\}$

$\mu_P = \mu_{P_2} + \sum_{i=3}^{n} \left(\mu_{P_i} - \mu_{P_{i-1}}\right)$ (each term is the effect of adding the $i$th point).

$\mu_{P_i}(\Omega) - \mu_{P_{i-1}}(\Omega) = \Theta(1 + \log \phi_i)$

Since $\sum_{i=3}^{n} \log \phi_i = n \log \phi$, this gives $\mu_P = \Theta(n + n \log \phi)$.

Previous bound: $O(n + \phi^d n)$.
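A quick numeric comparison illustrates why $\Theta(n + n\log\phi)$ is a large improvement over $O(n + \phi^d n)$ once the pacing grows. The functions below drop all hidden constants and are purely illustrative.

```python
import math

def new_bound(n, phi):
    """Theta(n + n log phi), with hidden constants dropped."""
    return n + n * math.log(phi)

def old_bound(n, phi, d):
    """O(n + phi^d n), with hidden constants dropped."""
    return n + (phi ** d) * n

n, d = 1000, 3
for phi in (2.0, 10.0, 100.0):
    print(phi, new_bound(n, phi), old_bound(n, phi, d))
```

Already at $\phi = 10$ in dimension 3, the old bound is about $10^3 n$ while the new one is only about $3.3\,n$.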

Pacing analysis has already led to new results.

The Scaffold Theorem (SODA 2009): Given n points well-spaced on a surface, the volume mesh has size O(n).

Time-Optimal Point Meshing (SoCG 2011): Build a mesh in O(n log n + m) time. The algorithm explicitly computes the pacing for each insertion.

Some takeaway messages:

1 The amortized change in the number of vertices in a mesh as a result of adding one new point is determined by the pacing of that point.

2 Point sets that admit linear-size meshes are exactly those with constant pacing.

Thank you.

Mesh Generation

Decompose a domain into simple elements.

Mesh Quality: Radius/Edge < const (Delaunay simplices); OutRadius/InRadius < const (Voronoi cells)

Conforming to Input

Voronoi Diagram

Optimal meshing adds the fewest points to make all Voronoi cells fat.*

* Equivalent to a radius-edge condition on the Delaunay simplices.

Meshing Points

Input: $P \subset \mathbb{R}^d$

Output: $M \supseteq P$ with a “nice” Voronoi diagram

$n = |P|$, $m = |M|$

How to prove a meshing algorithm is optimal.

The Ruppert Feature Size: $f_P(x) :=$ distance to the 2nd nearest neighbor of $x$ in $P$.

If for all $v \in M$, $f_M(v) \geq K f_P(v)$ (“no 2 points too close together”), then $m = \Theta\!\left(\int_\Omega \frac{dx}{f_P(x)^d}\right)$ (“optimal size output”).
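The spacing condition on the output can be checked directly. In this sketch, $f$ is the brute-force Ruppert feature size; the constant $K$ and the point sets in the example are illustrative, not values from the paper.

```python
import math

def f(x, P):
    """Ruppert feature size: distance from x to its 2nd nearest neighbor in P."""
    return sorted(math.dist(x, p) for p in P)[1]

def no_two_too_close(M, P, K):
    """Check the slide's condition: f_M(v) >= K * f_P(v) for every v in M.
    For v in M, the smallest distance is 0 (v itself), so the second-smallest
    is the distance to the nearest *other* mesh vertex, as intended."""
    return all(f(v, M) >= K * f(v, P) for v in M)
```

For instance, refining the two-point input {(0,0), (10,0)} by adding the midpoint satisfies the condition with $K = 1/2$ but not with $K = 1$.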
