
Page 1: D7013E Lecture 4 Linear Programming


D7013E Lecture 4: Linear Programming

Half-Plane Intersection
Incremental Linear Programming
Plane Sweep
Casting
Randomized Linear Programming
Unbounded Programs
Smallest Enclosing Discs

Page 2: D7013E Lecture 4 Linear Programming

4.1 Casting

•  The Castability Problem:
  –  How do we determine whether a piece can be cast and then removed without breaking the mold?


Page 3: D7013E Lecture 4 Linear Programming

4.1 Casting

•  Only molds of one piece.
•  Only translational movements.
  –  We can't remove a screw, for instance.


Page 4: D7013E Lecture 4 Linear Programming

A somewhat easier problem

•  Assume that the orientation of the piece (and hence the mold) is given as part of the input.
  –  We assume there is a free and horizontal top facet.
  –  (This is where the liquid material is (usually) poured.)

•  We also assume that no other facet is horizontal.


Page 5: D7013E Lecture 4 Linear Programming

Angles between 3D vectors

•  To get the angle between two vectors u and v, consider the plane they span and pick the smaller of the two angles between them in this plane.

(Figure: two vectors u and v spanning a plane.)

Page 6: D7013E Lecture 4 Linear Programming

A removability criterion

•  Let P be a cast piece and d a direction.
•  (Lemma 4.1) P can be removed by a translation in direction d iff d makes an angle of at least 90° with every facet of P.
  –  d is compared with the outward-pointing normals of the facets.
  –  Exception: the top facet of P.
•  How do we find such a direction?


Page 7: D7013E Lecture 4 Linear Programming

Representing directions

•  We assume translations in the positive z-direction only.

•  The direction of a vector (x, y, 1) is represented by the point (x, y, 1) in the plane z = 1.
  –  So, every point in the plane z = 1 now represents a unique direction.


Page 8: D7013E Lecture 4 Linear Programming


Finding a valid direction

•  A direction d = (dx, dy, 1) makes an angle of at least 90° with an outward normal n = (nx, ny, nz) of P iff the dot product n · d ≤ 0.
  –  The dot product is computed as nx·dx + ny·dy + nz.
•  Notice that the inequality defines a half-plane on the plane z = 1.
  –  Given n, the line nx·dx + ny·dy + nz = 0 splits the plane into two parts, one of which contains all locally valid directions (dx, dy, 1) in which P can be translated.
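As a small concrete check, here is a Python sketch of this dot-product test (the function name and the list-of-normals input are my own choices, not from the lecture):

```python
def is_valid_direction(normals, dx, dy, eps=1e-12):
    """True iff d = (dx, dy, 1) makes an angle of at least 90 degrees with
    every outward facet normal n = (nx, ny, nz), i.e. n . d <= 0 (up to a
    small tolerance) for all normals; the top facet is assumed excluded."""
    return all(nx * dx + ny * dy + nz <= eps for (nx, ny, nz) in normals)

# A single slanted facet with outward normal (1, 0, 1):
print(is_valid_direction([(1.0, 0.0, 1.0)], -2.0, 0.0))  # True:  -2 + 1 <= 0
print(is_valid_direction([(1.0, 0.0, 1.0)],  2.0, 0.0))  # False:  2 + 1 >  0
```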

Page 9: D7013E Lecture 4 Linear Programming


Finding a valid direction

•  Conclusion:
  –  Each facet of P contributes one half-plane of locally valid directions.
  –  A globally valid direction d exists iff the intersection of all these half-planes is non-empty!
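In code, this conclusion becomes a 2-variable feasibility problem. A minimal sketch using SciPy's LP solver (my own helper, not from the lecture; a zero objective makes linprog just search for a feasible point):

```python
import numpy as np
from scipy.optimize import linprog

def find_removal_direction(normals):
    """Find (dx, dy) with nx*dx + ny*dy + nz <= 0 for every outward facet
    normal (nx, ny, nz), the top facet excluded.  Returns a globally valid
    removal direction (dx, dy, 1), or None if the piece is not removable."""
    A = np.array([[nx, ny] for (nx, ny, _) in normals])
    b = np.array([-nz for (_, _, nz) in normals])
    res = linprog(c=[0.0, 0.0], A_ub=A, b_ub=b,
                  bounds=[(None, None), (None, None)])
    return (res.x[0], res.x[1], 1.0) if res.success else None
```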

Page 10: D7013E Lecture 4 Linear Programming


4.2 Half-Plane Intersection

•  So, our practical problem has turned into the geometric problem of computing the intersection of half-planes.

•  A half-plane in the plane (Euclidean, 2D) is defined by a linear constraint in two variables.
  –  ai·x + bi·y ≤ ci
•  Given a bunch of half-planes, we consider the problem of finding all points (x, y) that satisfy all constraints.
  –  The intersection of n half-planes is a convex polygonal region bounded by at most n edges.

Page 11: D7013E Lecture 4 Linear Programming

4.2 Half-Plane Intersection


(Figure: examples of half-plane intersections: bounded, unbounded, a single point, and empty.)

Page 12: D7013E Lecture 4 Linear Programming


Computing the intersection

•  A Divide-and-Conquer algorithm is used:

The side of the line on which a half-plane lies is indicated by dark shading in the figure; the common intersection is shaded lightly. As you can see in Figures 4.2 (ii) and (iii), the intersection does not have to be bounded. The intersection can also degenerate to a line segment or a point, as in (iv), or it can be empty, as in (v).

(Figure 4.2: Examples of the intersection of half-planes, cases (i)–(v).)

We give a rather straightforward divide-and-conquer algorithm to compute the intersection of a set of n half-planes. It is based on a routine INTERSECTCONVEXREGIONS to compute the intersection of two convex polygonal regions. We first give the overall algorithm.

Algorithm INTERSECTHALFPLANES(H)
Input. A set H of n half-planes in the plane.
Output. The convex polygonal region C := ⋂_{h∈H} h.
1. if card(H) = 1
2.   then C ← the unique half-plane h ∈ H
3.   else Split H into sets H1 and H2 of size ⌈n/2⌉ and ⌊n/2⌋.
4.     C1 ← INTERSECTHALFPLANES(H1)
5.     C2 ← INTERSECTHALFPLANES(H2)
6.     C ← INTERSECTCONVEXREGIONS(C1, C2)

What remains is to describe the procedure INTERSECTCONVEXREGIONS. But wait: didn't we see this problem before, in Chapter 2? Indeed, Corollary 2.7 states that we can compute the intersection of two polygons in O(n log n + k log n) time, where n is the total number of vertices in the two polygons. We must be a bit careful in applying this result to our problem, because the regions we have can be unbounded, or degenerate to a segment or a point. Hence, the regions are not necessarily polygons. But it is not difficult to modify the algorithm from Chapter 2 so that it still works.

Let's analyze this approach. Assume we have already computed the two regions C1 and C2 by recursion. Since they are both defined by at most n/2 + 1 half-planes, they both have at most n/2 + 1 edges. The algorithm from Chapter 2 computes their overlay in time O((n + k) log n), where k is the number of intersection points between edges of C1 and edges of C2. What is k?

Page 13: D7013E Lecture 4 Linear Programming


Computing the intersection

•  One version of IntersectConvexRegions was actually presented in Chapter 2!
  –  Corollary 2.7: The intersection of two polygons with n vertices in total can be computed in O(n log n + k log n) time.
  –  Some adjustments are needed, since we must compute with unbounded regions (not simple polygons).
  –  Also, in our case, k ≤ n, since the regions are convex.
•  Is this the best we can do?

Look at an intersection point v between an edge e1 of C1 and an edge e2 of C2. No matter how e1 and e2 intersect, v must be a vertex of C1 ∩ C2. But C1 ∩ C2 is the intersection of n half-planes, and therefore has at most n edges and vertices. It follows that k ≤ n, so the computation of the intersection of C1 and C2 takes O(n log n) time.

(Figure: edges e1 and e2 meeting in the vertex v.)

This gives the following recurrence for the total running time:

T(n) = O(1), if n = 1,
T(n) = O(n log n) + 2T(n/2), if n > 1.

This recurrence solves to T(n) = O(n log² n).
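Unrolled, the recursion tree makes the bound explicit: level j consists of 2^j subproblems of size n/2^j, so each of the O(log n) levels costs O(n log n) in total:

```latex
T(n) \;=\; \sum_{j=0}^{\log_2 n} 2^{j}\cdot O\!\left(\frac{n}{2^{j}}\log\frac{n}{2^{j}}\right)
     \;\le\; \sum_{j=0}^{\log_2 n} O(n\log n)
     \;=\; O(n\log^{2} n).
```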

To obtain this result we used a subroutine for computing the intersection of two arbitrary polygons. The polygonal regions we deal with in INTERSECTHALFPLANES are always convex. Can we use this to develop a more efficient algorithm? The answer is yes, as we show next. We will assume that the regions we want to intersect are 2-dimensional; the case where one or both of them is a segment or a point is easier and left as an exercise.

First, let's specify more precisely how we represent a convex polygonal region C. We will store the left and the right boundary of C separately, as sorted lists of half-planes. The lists are sorted in the order in which the bounding lines of the half-planes occur when the (left or right) boundary is traversed from top to bottom. We denote the left boundary list by Lleft(C), and the right boundary list by Lright(C). Vertices are not stored explicitly; they can be computed by intersecting consecutive bounding lines.

(Figure: a region C bounded by h1, …, h5, with Lleft(C) = h3, h4, h5 and Lright(C) = h2, h1.)

To simplify the description of the algorithm, we shall assume that there are no horizontal edges. (To adapt the algorithm to deal with horizontal edges, one can define such edges to belong to the left boundary if they bound C from above, and to the right boundary if they bound C from below. With this convention only a few adaptations are needed to the algorithm stated below.)

The new algorithm is a plane sweep algorithm, like the one in Chapter 2: we move a sweep line downward over the plane, and we maintain the edges of C1 and C2 intersecting the sweep line. Since C1 and C2 are convex, there are at most four such edges. Hence, there is no need to store these edges in a complicated data structure; instead we simply have pointers left edge C1, right edge C1, left edge C2, and right edge C2 to them. If the sweep line does not intersect the right or left boundary of a region, then the corresponding pointer is nil. Figure 4.3 illustrates the definitions.

How are these pointers initialized? Let y1 be the y-coordinate of the topmost vertex of C1; if C1 has an unbounded edge extending upward to infinity then we define y1 = ∞. Define y2 similarly for C2, and let ystart = min(y1, y2). To compute the intersection of C1 and C2 we can restrict our attention to the part of the plane with y-coordinate less than or equal to ystart. Hence, we let the sweep line start at ystart, and we initialize the edges left edge C1, right edge C1, left edge C2, and right edge C2 as the ones intersecting the line y = ystart.


Page 14: D7013E Lecture 4 Linear Programming

Number of intersections between two convex polygons

•  Let A and B be two convex polygons with a total of n vertices.
  –  Their upper and lower hulls are x-monotone.
  –  Claim: Two x-monotone chains (P and Q in the sketch) with n vertices in total intersect properly at most n−1 times.
    •  (Intuition: two line segments cross at most once, so between two consecutive crossings at least one of the chains must turn at a vertex.)
•  From the claim it follows that A and B intersect at most O(n) times.
  –  In fact, at most n, if we are careful when we count them.


(Figure: proof sketch for the claim, showing two x-monotone chains P and Q.)

Page 15: D7013E Lecture 4 Linear Programming


Computing the intersection

•  There is, in fact, a faster version of IntersectConvexRegions that runs in O(n) time.
  –  This lowers the overall complexity from O(n log² n) to O(n log n).
•  This version is based on plane sweep.
•  While the sweep line is moved downwards over the regions, using vertices as event points, we keep track of its intersections with the boundaries of the two regions.
  –  The status.

Page 16: D7013E Lecture 4 Linear Programming


Computing the intersection

•  Now, since the regions are convex, the sweep line intersects at most 4 edges at any time.
  –  So, changing the status takes constant time!
•  Moreover, the boundaries are already ordered, so there is no need for the initial Ω(n log n)-time sorting step.
  –  This, together with the constant-time status updates, means the actual sweep takes just O(n) time!

(Figure 4.3: The edges maintained by the sweep-line algorithm: left edge C1, right edge C1 (here nil), left edge C2, and right edge C2.)

In a plane sweep algorithm one normally also needs a queue to store the events. In our case the events are the points where edges of C1 or of C2 start or stop to intersect the sweep line. This implies that the next event point, which determines the next edge to be handled, is the highest of the lower endpoints of the edges intersecting the sweep line. (Endpoints with the same y-coordinate are handled from left to right. If two endpoints coincide then the leftmost edge is treated first.) Hence, we don't need an event queue; the next event can be found in constant time using the pointers left edge C1, right edge C1, left edge C2, and right edge C2.

At each event point some new edge e appears on the boundary. To handle the edge e we first check whether e belongs to C1 or to C2, and whether it is on the left or the right boundary, and then call the appropriate procedure. We shall only describe the procedure that is called when e is on the left boundary of C1. The other procedures are similar.

Let p be the upper endpoint of e. The procedure that handles e will discover three possible edges that C might have: the edge with p as upper endpoint, the edge with e ∩ left edge C2 as upper endpoint, and the edge with e ∩ right edge C2 as upper endpoint. It performs the following actions.

First we test whether p lies in between left edge C2 and right edge C2. If this is the case, then e contributes an edge to C starting at p. We then add the half-plane whose bounding line contains e to the list Lleft(C).

Next we test whether e intersects right edge C2. If this is the case, then the intersection point is a vertex of C. Either both edges contribute an edge to C starting at the intersection point (this happens when p lies to the right of right edge C2, as in Figure 4.4(i)), or both edges contribute an edge ending there (this happens when p lies to the left of right edge C2, as in Figure 4.4(ii)). If both edges contribute an edge starting at the intersection point, then we have to add the half-plane defining e to Lleft(C) and the half-plane defining right edge C2 to Lright(C). If they contribute an edge ending at the intersection point we do nothing; these edges have already been discovered in some other way.

Finally we test whether e intersects left edge C2.

Page 17: D7013E Lecture 4 Linear Programming


Conclusions

•  Theorem 4.3:
  –  The intersection of two convex polygonal regions in the plane can be computed in O(n) time, where n is the number of vertices in their boundaries.
•  Theorem 4.4:
  –  The common intersection of a set of n half-planes in the plane can be computed in O(n log n) time and using linear storage.
•  This gives us a solution to the casting problem where we have fixed the orientation and the top facet.
•  Also, by looping through all n facets, we can solve the Castability Problem in O(n² log n) time.
  –  "Can the mold be made so that the cast piece can be taken out by translation only?"
  –  We turn the object around and try to use each facet as the top facet.
•  Now, is this the best…?

Page 18: D7013E Lecture 4 Linear Programming

Lower bound for half-plane intersection

•  (From my PhD thesis, 2000):
  –  Computing the intersection of n half-planes requires Ω(n log n) time.
•  Reduce sorting to the half-plane intersection problem:
  –  For each number xi we are to sort, construct the line through (0, xi) that is tangent to the circle from above.
    •  Each line bounds a half-plane.
  –  Compute the intersection of the half-planes below the lines.
    •  Its boundary will be a polygonal chain.
  –  Walk along the chain and read off the y-coordinates where the lines containing its edges intersect the y-axis.
•  All of the steps above take O(n) time, except the computation of the intersection, whose time complexity is unknown to us.
•  Since sorting requires Ω(n log n) time, we conclude that the computation of the intersection must also take Ω(n log n) time.
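To make the construction concrete, one consistent way to fill in the details (my own choice of parameters, not necessarily the thesis's: the unit circle, and every xi > 1) is the following. The line y = xi + m·x through (0, xi) is tangent to the unit circle iff xi/√(1 + m²) = 1, and taking the tangent from above gives

```latex
y \;=\; x_i \;-\; \sqrt{x_i^{2}-1}\;x .
```

Since this slope decreases strictly as xi grows, the bounding lines appear along the chain in sorted order of the xi, so reading off their y-intercepts recovers the sorted sequence.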

(Figure: the values x1, …, x5 placed on the y-axis, each with its tangent line to the circle.)


Page 19: D7013E Lecture 4 Linear Programming


4.3 Incremental Linear Programming

•  The intersection of half-planes gives us all valid directions, but we just need one.
  –  This means we (probably) waste a lot of time.
•  Finding just one direction can be done using Linear Programming.

Corollary 4.4 The common intersection of a set of n half-planes in the plane can be computed in O(n log n) time and linear storage.

The problem of computing the intersection of half-planes is intimately related to the computation of convex hulls, and an alternative algorithm can be given that is almost identical to algorithm CONVEXHULL from Chapter 1. The relationship between convex hulls and intersections of half-planes is discussed in detail in Sections 8.2 and 11.4. Those sections are independent of the rest of their chapters, so if you are curious you can already have a look.

4.3 Incremental Linear Programming

In the previous section we showed how to compute the intersection of a set of n half-planes. In other words, we computed all solutions to a set of n linear constraints. The running time of our algorithm was O(n log n). One can prove that this is optimal: as for the sorting problem, any algorithm that solves the half-plane intersection problem must take Ω(n log n) time in the worst case. In our application to the casting problem, however, we don't need to know all solutions to the set of linear constraints; just one solution will do fine. It turns out that this allows for a faster algorithm.

Finding a solution to a set of linear constraints is closely related to a well-known problem in operations research, called linear optimization or linear programming. (This term was coined before "programming" came to mean "giving instructions to a computer".) The only difference is that linear programming involves finding one specific solution to the set of constraints, namely the one that maximizes a given linear function of the variables. More precisely, a linear optimization problem is described as follows:

Maximize   c1x1 + c2x2 + ··· + cdxd
Subject to a1,1x1 + ··· + a1,dxd ≤ b1
           a2,1x1 + ··· + a2,dxd ≤ b2
           ⋮
           an,1x1 + ··· + an,dxd ≤ bn

where the ci, ai,j, and bi are real numbers, which form the input to the problem. The function to be maximized is called the objective function, and the set of constraints together with the objective function is a linear program. The number of variables, d, is the dimension of the linear program. We already saw that linear constraints can be viewed as half-spaces in Rd. The intersection of these half-spaces, which is the set of points satisfying all constraints, is called the feasible region of the linear program. Points (solutions) in this region are called feasible, points outside are infeasible. Recall from Figure 4.2 that the feasible region can be unbounded, and that it can be empty. In the latter case, the linear program is called infeasible. The objective function can be viewed as a direction in Rd; maximizing c1x1 + c2x2 + ··· + cdxd means finding a point (x1, …, xd) that is extreme in the direction c = (c1, …, cd).
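For intuition only (this is not part of the lecture or the book): such a low-dimensional program can be handed to an off-the-shelf solver. SciPy's linprog minimizes, so we negate the objective to maximize; the numbers below are made up.

```python
import numpy as np
from scipy.optimize import linprog

# Maximize x + y subject to x + 2y <= 4 and 3x - y <= 3, with x and y free.
c = np.array([1.0, 1.0])
A_ub = np.array([[1.0, 2.0], [3.0, -1.0]])
b_ub = np.array([4.0, 3.0])

res = linprog(-c, A_ub=A_ub, b_ub=b_ub, bounds=[(None, None), (None, None)])
if res.success:
    print("optimal vertex:", res.x, "objective:", c @ res.x)  # ~ (1.43, 1.29)
else:
    print("infeasible or unbounded:", res.message)
```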

Hence, the solution to the linear program is a point in the feasible region that is extreme in direction c. We let fc denote the objective function defined by a direction vector c.

Many problems in operations research can be described by linear programs, and a lot of work has been dedicated to linear optimization. This has resulted in many different linear programming algorithms, several of which (the famous simplex algorithm, for instance) perform well in practice.

(Figure: a feasible region, the direction c, and the solution extreme in that direction.)

Let's go back to our problem. We have n linear constraints in two variables and we want to find one solution to the set of constraints. We can do this by taking an arbitrary objective function, and then solving the linear program defined by the objective function and the linear constraints. For the latter step we can use the simplex algorithm, or any other linear programming algorithm developed in operations research. However, this particular linear program is quite different from the ones usually studied: in operations research both the number of constraints and the number of variables are large, but in our case the number of variables is only two. The traditional linear programming methods are not very efficient in such low-dimensional linear programming problems; methods developed in computational geometry, like the one described below, do better.

We denote the set of n linear constraints in our 2-dimensional linear programming problem by H. The vector defining the objective function is c = (cx, cy); thus the objective function is fc(p) = cx·px + cy·py. Our goal is to find a point p ∈ R² such that p ∈ ⋂H and fc(p) is maximized. We denote the linear program by (H, c), and we use C to denote its feasible region. We can distinguish four cases for the solution of a linear program (H, c). The four cases are illustrated in Figure 4.5; the vector defining the objective function is vertically downward in the examples.

(Figure 4.5: Different types of solutions to a linear program, cases (i)–(iv).)

(i) The linear program is infeasible, that is, there is no solution to the set of constraints.
(ii) The feasible region is unbounded in direction c. In this case there is a ray ρ completely contained in the feasible region C, such that the function fc takes arbitrarily large values along ρ. The solution we require in this case is the description of such a ray.
(iii) The feasible region has an edge e whose outward normal points in the direction c. In this case, there is a solution to the linear program, but it is not unique: any point on e is a feasible point that maximizes fc(p).
(iv) If none of the preceding three cases applies, then there is a unique solution, which is the vertex v of C that is extreme in the direction c.

Page 20: D7013E Lecture 4 Linear Programming


Types of solutions

•  A linear program can have different kinds of solutions:


  –  (i) infeasible, (ii) unbounded in direction c, (iii) a whole edge e optimal, (iv) a unique optimal vertex v (see Figure 4.5 in the excerpt above).
  –  (In case (iii), where any point on e is optimal, we pick the leftmost.)
  –  (In case (i) there is, well, no solution!)

Page 21: D7013E Lecture 4 Linear Programming


Adding half-planes

•  We will compute the solution using an incremental algorithm in which the constraints are considered one at a time.
•  Lemma 4.5:
  –  When another constraint (half-plane) is considered, the optimal solution point can only be affected in two ways:

By our choice of C0, each feasible region Ci has a unique optimal vertex, denoted vi. Clearly, we have

C0 ⊇ C1 ⊇ C2 ⊇ ··· ⊇ Cn = C.

This implies that if Ci = ∅ for some i, then Cj = ∅ for all j ≥ i, and the linear program is infeasible. So our algorithm can stop once the linear program becomes infeasible.

The next lemma investigates how the optimal vertex changes when we add a half-plane hi. It is the basis of our algorithm.

Lemma 4.5 Let 1 ≤ i ≤ n, and let Ci and vi be defined as above. Then we have
(i) If vi−1 ∈ hi, then vi = vi−1.
(ii) If vi−1 ∉ hi, then either Ci = ∅ or vi ∈ ℓi, where ℓi is the line bounding hi.

Proof. (i) Let vi−1 ∈ hi. Because Ci = Ci−1 ∩ hi and vi−1 ∈ Ci−1, this means that vi−1 ∈ Ci. Furthermore, the optimal point in Ci cannot be better than the optimal point in Ci−1, since Ci ⊆ Ci−1. Hence, vi−1 is the optimal vertex in Ci as well.

(ii) Let vi−1 ∉ hi. Suppose for a contradiction that Ci is not empty and that vi does not lie on ℓi. Consider the line segment vi−1vi. We have vi−1 ∈ Ci−1 and, since Ci ⊂ Ci−1, also vi ∈ Ci−1. Together with the convexity of Ci−1, this implies that the segment vi−1vi is contained in Ci−1. Since vi−1 is the optimal point in Ci−1 and the objective function fc is linear, it follows that fc(p) increases monotonically along vi−1vi as p moves from vi to vi−1. Now consider the intersection point q of vi−1vi and ℓi. This intersection point exists, because vi−1 ∉ hi and vi ∈ Ci. Since vi−1vi is contained in Ci−1, the point q must be in Ci. But the value of the objective function increases along vi−1vi, so fc(q) > fc(vi). This contradicts the definition of vi.

(Figure: the segment vi−1vi crosses ℓi in the point q inside Ci−1.)

Figure 4.6 illustrates the two cases that arise when adding a half-plane. In Figure 4.6(i), the optimal vertex v4 that we have after adding the first four half-planes is contained in h5, the next half-plane that we add. Therefore the optimal vertex remains the same. The optimal vertex is not contained in h6, however, so when we add h6 we must find a new optimal vertex. According to Lemma 4.5, this vertex v6 is contained in the line bounding h6, as is shown in Figure 4.6(ii). But Lemma 4.5 does not tell us how to find the new optimal vertex. Fortunately, this is not so difficult, as we show next.

(Figure 4.6: Adding a half-plane: (i) the optimum stays at v5 = v4; (ii) the new optimum v6 lies on the line bounding h6.)

Page 22: D7013E Lecture 4 Linear Programming

Adding half-planes

•  Lemma 4.6:
  –  The change in Case (ii) can be computed in O(i) time when we consider constraint (half-plane) hi.
    •  We just loop through all previous half-planes and locate the point on ℓi that is best w.r.t. the objective and satisfies the previous constraints (a 1-dimensional linear program).
•  Lemma 4.7:
  –  A solution can be computed in O(n²) time.
    •  ∑ O(i), for i = 1, …, n.


Algorithm 2DBOUNDEDLP(H, c):
1. Let v0 be the corner of C0.
2. Let h1, …, hn be the half-planes of H.
3. for i ← 1 to n
4.   do if vi−1 ∈ hi
5.     then vi ← vi−1
6.     else vi ← the point p on ℓi that maximizes fc(p), subject to the constraints in Hi−1.
7.       if p does not exist
8.         then Report that the linear program is infeasible and quit.
9. return vn

We now analyze the performance of our algorithm.

Lemma 4.7 Algorithm 2DBOUNDEDLP computes the solution to a bounded linear program with n constraints and two variables in O(n²) time and linear storage.

Proof. To prove that the algorithm correctly finds the solution, we have to show that after every stage (whenever we have added a new half-plane hi) the point vi is still the optimum point for Ci. This follows immediately from Lemma 4.5. If the 1-dimensional linear program on ℓi is infeasible, then Ci is empty, and consequently C = Cn ⊆ Ci is empty, which means that the linear program is infeasible.

It is easy to see that the algorithm requires only linear storage. We add the half-planes one by one in n stages. The time spent in stage i is dominated by the time to solve a 1-dimensional linear program in line 6, which is O(i). Hence, the total time needed is bounded by

∑_{i=1}^{n} O(i) = O(n²).

Although our linear programming algorithm is nice and simple, its running time is disappointing: the algorithm is much slower than the previous algorithm, which computed the whole feasible region. Is our analysis too crude? We bounded the cost of every stage i by O(i). This is not always a tight bound: stage i takes Θ(i) time only when vi−1 ∉ hi; when vi−1 ∈ hi, stage i takes constant time. So if we could bound the number of times the optimal vertex changes, we might be able to prove a better running time. Unfortunately the optimum vertex can change n times: there are orders for some configurations where every new half-plane makes the previous optimum illegal. The figure in the margin shows such an example. This means that the algorithm will really spend Θ(n²) time. How can we avoid this nasty situation?

(Margin figure: half-planes h1, …, hn inserted in an order where the optimum moves through v2, v3, v4, v5, …, vn.)

4.4 Randomized Linear Programming

If we have a second look at the example where the optimum changes n times, we see that the problem is not so much that the set of half-planes is bad.

Page 23: D7013E Lecture 4 Linear Programming


4.4 Randomized Linear Programming

•  There is a surprisingly simple way to reduce the time complexity from O(n²) to O(n) expected:
  –  Start by permuting the input randomly(!) (A runnable sketch follows below.)
•  This gets rid of the worst case(!)
  –  In an expected, asymptotic sense, that is: no particular input is bad anymore, only unlucky permutations, and the bound kicks in as n grows large.
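Below is a hedged Python sketch that combines 2DBOUNDEDLP with the initial random shuffle. Everything about the encoding is my own choice, not the book's: a half-plane is a triple (a, b, r) meaning ax + by ≤ r, a large bounding box plays the role of the initial region C0 that keeps the program bounded, and small epsilons guard the floating-point comparisons.

```python
import random

def incremental_lp(halfplanes, c, big=1e9, eps=1e-9):
    """Maximize c[0]*x + c[1]*y subject to a*x + b*y <= r for every
    (a, b, r) in halfplanes.  Returns the optimal vertex, or None if
    the program is infeasible.  Expected O(n) time after the shuffle."""
    halfplanes = list(halfplanes)
    random.shuffle(halfplanes)              # the whole randomization step
    # Bounding box constraints stand in for C0; start at its extreme corner.
    seen = [(1.0, 0.0, big), (-1.0, 0.0, big),
            (0.0, 1.0, big), (0.0, -1.0, big)]
    v = (big if c[0] > 0 else -big, big if c[1] > 0 else -big)
    for (a, b, r) in halfplanes:
        if abs(a) < 1e-12 and abs(b) < 1e-12:
            if r < -eps:
                return None                 # degenerate, unsatisfiable constraint
            continue
        if a * v[0] + b * v[1] <= r + eps:
            seen.append((a, b, r))          # case (i): optimum unchanged
            continue
        # Case (ii): the new optimum lies on the line a*x + b*y = r.
        # Parametrize the line as p(t) = p0 + t*d and solve a 1D LP in t.
        if abs(b) > abs(a):
            p0, d = (0.0, r / b), (1.0, -a / b)
        else:
            p0, d = (r / a, 0.0), (-b / a, 1.0)
        lo, hi = -float("inf"), float("inf")
        for (a2, b2, r2) in seen:           # O(i) work, as in Lemma 4.6
            coef = a2 * d[0] + b2 * d[1]
            rest = r2 - (a2 * p0[0] + b2 * p0[1])
            if abs(coef) < 1e-12:
                if rest < -eps:
                    return None             # line misses the feasible region
            elif coef > 0:
                hi = min(hi, rest / coef)
            else:
                lo = max(lo, rest / coef)
        if lo > hi:
            return None                     # 1D LP infeasible: C_i is empty
        t = hi if c[0] * d[0] + c[1] * d[1] > 0 else lo
        v = (p0[0] + t * d[0], p0[1] + t * d[1])
        seen.append((a, b, r))
    return v

# Example: maximize x + y subject to x + 2y <= 4 and 3x - y <= 3.
print(incremental_lp([(1, 2, 4), (3, -1, 3)], (1, 1)))  # about (1.43, 1.29)
```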

Page 24: D7013E Lecture 4 Linear Programming


4.4 Randomized Linear Programming

•  Lemma 4.8:
  –  Our 2-dimensional linear programming problem can be solved in O(n) randomized expected time (using O(n) storage in the worst case).
•  Backwards analysis:
  –  Let Xi be a random variable that equals 1 in Case (ii) and 0 in Case (i).
    •  The total time consumed is E[∑ O(i)·Xi] = ∑ E[O(i)·Xi] = ∑ O(i)·E[Xi].
    •  Here, E[Xi] is the probability that hi does not contain the most recently computed optimal solution vi−1.
  –  Crucial step: they show that E[Xi] ≤ 2/i.
    •  Think backwards, from Ci to Ci−1: this is the probability that hi happens to be one of the two half-planes (among the i considered so far) at whose intersection vi lies.
•  So, the total expected time is ∑ O(i)·(2/i) = O(n).
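Written out, the expected-time calculation on this slide is:

```latex
\mathbf{E}\!\left[\sum_{i=1}^{n} O(i)\,X_i\right]
  \;=\; \sum_{i=1}^{n} O(i)\,\mathbf{E}[X_i]
  \;\le\; \sum_{i=1}^{n} O(i)\cdot\frac{2}{i}
  \;=\; \sum_{i=1}^{n} O(1)
  \;=\; O(n).
```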

Page 25: D7013E Lecture 4 Linear Programming


Issues and generalizations

•  Section 4.5: Unbounded linear programming.
•  Section 4.6: Higher dimensions.
  –  Same strategy as in 2D.
•  Section 4.7: Application problem: computing smallest enclosing discs.

Page 26: D7013E Lecture 4 Linear Programming


4.7 Smallest Enclosing Discs

•  Given a set P of n points pi, compute the smallest disc D that contains P.

4.7* Smallest Enclosing Discs

The simple randomized technique we used above turns out to be surprisingly powerful. It can be applied not only to linear programming but to a variety of other optimization problems as well. In this section we shall look at one such problem.

Consider a robot arm whose base is fixed to the work floor. The arm has to pick up items at various points and place them at other points. What would be a good position for the base of the arm? This would be somewhere "in the middle" of the points it must be able to reach. More precisely, a good position is at the center of the smallest disc that encloses all the points. This point minimizes the maximum distance between the base of the arm and any point it has to reach. We arrive at the following problem: given a set P of n points in the plane (the points on the work floor that the arm must be able to reach), find the smallest enclosing disc for P, that is, the smallest disc that contains all the points of P. This smallest enclosing disc is unique; see Lemma 4.14(i) below, which is a generalization of this statement.

As in the previous sections, we will give a randomized incremental algorithm for the problem: first we generate a random permutation p1, …, pn of the points in P. Let Pi := {p1, …, pi}. We add the points one by one while we maintain Di, the smallest enclosing disc of Pi.

In the case of linear programming, there was a nice fact that helped us to maintain the optimal vertex: when the current optimal vertex is contained in the next half-plane then it does not change, and otherwise the new optimal vertex lies on the boundary of the half-plane. Is a similar statement true for smallest enclosing discs? The answer is yes:

Lemma 4.13 Let 2 < i < n, and let Pi and Di be defined as above. Then we have
(i) If pi ∈ Di−1, then Di = Di−1.
(ii) If pi ∉ Di−1, then pi lies on the boundary of Di.

(Figure: the discs Di−1 = Di, and Di+1 with pi+1 on its boundary.)

We shall prove this lemma later, after we have seen how we can use it to design a randomized incremental algorithm that is quite similar to the linear programming algorithm.

Algorithm MINIDISC(P)
Input. A set P of n points in the plane.
Output. The smallest enclosing disc for P.
1. Compute a random permutation p1, …, pn of P.
2. Let D2 be the smallest enclosing disc for {p1, p2}.
3. for i ← 3 to n
4.   do if pi ∈ Di−1
5.     then Di ← Di−1
6.     else Di ← MINIDISCWITHPOINT({p1, …, pi−1}, pi)
7. return Dn

Page 27: D7013E Lecture 4 Linear Programming

Solution 1

•  Observation:
  –  A smallest enclosing disc has 2 or 3 points on its boundary that "span" it.
•  The brute-force solution would be to check all triplets of points; for each of these, we compute the circle C defined by them and then determine whether all other n−3 points fall within C or not.
  –  Also all pairs, and the smallest circles defined by them.
•  One of these candidate circles will be the smallest enclosing disc.
•  This takes O(n⁴) time: there are O(n³) candidate circles, and verifying one against all points takes O(n) time. (A sketch follows below.)
•  Can we do better..?
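Here is a minimal Python sketch of this brute-force approach; the helper names are my own, and (near-)collinear triples are simply skipped since they define no circle.

```python
import math
from itertools import combinations

def disc_through(pts):
    """Disc with 2 points as diameter, or the circumscribed circle of
    3 points; returns None for (near-)collinear triples."""
    if len(pts) == 2:
        (ax, ay), (bx, by) = pts
        c = ((ax + bx) / 2, (ay + by) / 2)
    else:
        (ax, ay), (bx, by), (cx, cy) = pts
        d = 2 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
        if abs(d) < 1e-12:
            return None
        c = (((ax*ax + ay*ay) * (by - cy) + (bx*bx + by*by) * (cy - ay)
              + (cx*cx + cy*cy) * (ay - by)) / d,
             ((ax*ax + ay*ay) * (cx - bx) + (bx*bx + by*by) * (ax - cx)
              + (cx*cx + cy*cy) * (bx - ax)) / d)
    return c, max(math.dist(c, p) for p in pts)

def smallest_disc_brute(points, eps=1e-9):
    """O(n^4): try every pair and triple, keep the smallest valid disc."""
    best = None
    for k in (2, 3):
        for combo in combinations(points, k):
            disc = disc_through(combo)
            if disc and all(math.dist(disc[0], p) <= disc[1] + eps
                            for p in points):
                if best is None or disc[1] < best[1]:
                    best = disc
    return best  # (center, radius), or None for fewer than 2 points
```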


Page 28: D7013E Lecture 4 Linear Programming


4.7 Smallest Enclosing Discs

•  Yes, this can be computed in O(n) randomized expected time using an incremental algorithm very similar to the one we have just seen(!)
•  Let Pi = {p1, p2, …, pi} and let Di be the smallest disc containing Pi.
•  Algorithm:
  –  Consider the points one at a time and compute the discs Di.
  –  This is done in three separate procedures (sketched in code below).
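A hedged Python sketch of the three procedures. Following the book's improvement note, it shuffles only once at the top level; the helper names are mine, and points are assumed to be in general position (no collinear triple ends up defining a disc).

```python
import math
import random

def _disc2(p, q):
    """Smallest disc with p and q on its boundary (diameter disc)."""
    c = ((p[0] + q[0]) / 2, (p[1] + q[1]) / 2)
    return c, math.dist(p, q) / 2

def _disc3(p, q, r):
    """The unique disc with three (non-collinear) points on its boundary."""
    (ax, ay), (bx, by), (cx, cy) = p, q, r
    d = 2 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
    c = (((ax*ax + ay*ay) * (by - cy) + (bx*bx + by*by) * (cy - ay)
          + (cx*cx + cy*cy) * (ay - by)) / d,
         ((ax*ax + ay*ay) * (cx - bx) + (bx*bx + by*by) * (ax - cx)
          + (cx*cx + cy*cy) * (bx - ax)) / d)
    return c, math.dist(c, p)

def _inside(disc, p, eps=1e-9):
    return math.dist(disc[0], p) <= disc[1] + eps

def mini_disc(points):
    """MINIDISC: expected O(n) smallest enclosing disc of >= 2 points."""
    pts = list(points)
    random.shuffle(pts)                      # one permutation suffices
    disc = _disc2(pts[0], pts[1])
    for i in range(2, len(pts)):
        if not _inside(disc, pts[i]):        # Lemma 4.13: pts[i] on boundary
            disc = _mini_disc_with_point(pts[:i], pts[i])
    return disc                              # (center, radius)

def _mini_disc_with_point(pts, q):
    """MINIDISCWITHPOINT: smallest enclosing disc with q on its boundary."""
    disc = _disc2(pts[0], q)
    for j in range(1, len(pts)):
        if not _inside(disc, pts[j]):        # now pts[j] and q on boundary
            disc = _mini_disc_with_2_points(pts[:j], pts[j], q)
    return disc

def _mini_disc_with_2_points(pts, q1, q2):
    """MINIDISCWITH2POINTS: q1 and q2 fixed on the boundary."""
    disc = _disc2(q1, q2)
    for p in pts:
        if not _inside(disc, p):             # unique disc through q1, q2, p
            disc = _disc3(q1, q2, p)
    return disc

print(mini_disc([(0, 0), (4, 0), (2, 3), (1, 1)]))  # ((2.0, 0.833...), 2.166...)
```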

Page 29: D7013E Lecture 4 Linear Programming

4.7 Smallest Enclosing Discs



Page 30: D7013E Lecture 4 Linear Programming

4.7 Smallest Enclosing Discs


The critical step occurs when pi ∉ Di−1. We need a subroutine that finds the smallest disc enclosing Pi, using the knowledge that pi must lie on the boundary of that disc. How do we implement this routine? Let q := pi. We use the same framework once more: we add the points of Pi−1 in random order, and maintain the smallest enclosing disc of Pi−1 ∪ {q} under the extra constraint that it should have q on its boundary. The addition of a point pj will be facilitated by the following fact: when pj is contained in the currently smallest enclosing disc then this disc remains the same, and otherwise it must have pj on its boundary. So in the latter case, the disc has both q and pj on its boundary. We get the following subroutine.

MINIDISCWITHPOINT(P, q)
Input. A set P of n points in the plane, and a point q such that there exists an enclosing disc for P with q on its boundary.
Output. The smallest enclosing disc for P with q on its boundary.
1. Compute a random permutation p1, …, pn of P.
2. Let D1 be the smallest disc with q and p1 on its boundary.
3. for j ← 2 to n
4.   do if pj ∈ Dj−1
5.     then Dj ← Dj−1
6.     else Dj ← MINIDISCWITH2POINTS({p1, …, pj−1}, pj, q)
7. return Dn

How do we find the smallest enclosing disc for a set under the restriction that two given points q1 and q2 are on its boundary? We simply apply the same approach one more time. Thus we add the points in random order and maintain the optimal disc; when the point pk we add is inside the current disc we don't have to do anything, and when pk is not inside the current disc it must be on the boundary of the new disc. In the latter case we have three points on the disc boundary: q1, q2, and pk. This means there is only one disc left: the unique disc with q1, q2, and pk on its boundary. The following routine describes this in more detail.

MINIDISCWITH2POINTS(P, q1, q2)
Input. A set P of n points in the plane, and two points q1 and q2 such that there exists an enclosing disc for P with q1 and q2 on its boundary.
Output. The smallest enclosing disc for P with q1 and q2 on its boundary.
1. Let D0 be the smallest disc with q1 and q2 on its boundary.
2. for k ← 1 to n
3.   do if pk ∈ Dk−1
4.     then Dk ← Dk−1
5.     else Dk ← the disc with q1, q2, and pk on its boundary
6. return Dn

This finally completes the algorithm for computing the smallest enclosing disc of a set of points. Before we analyze it, we must validate its correctness by proving some facts that we used in the algorithms.

[…] and MINIDISC also need linear storage, so what remains is to analyze their expected running time.

(Figure: the disc Di, the point q, and the points that together with q define Di.)

The running time of MINIDISCWITHPOINT is O(n) as long as we don't count the time spent in calls to MINIDISCWITH2POINTS. What is the probability of having to make such a call? Again we use backwards analysis to bound this probability: fix a subset {p1, …, pi}, and let Di be the smallest disc enclosing {p1, …, pi} and having q on its boundary. Imagine that we remove one of the points p1, …, pi. When does the smallest enclosing circle change? That happens only when we remove one of the three points on the boundary. One of the points on the boundary is q, so there are at most two points that cause the smallest enclosing circle to shrink. The probability that pi is one of those points is 2/i. (When there are more than three points on the boundary, then the probability that the smallest enclosing circle changes can only get smaller.) So we can bound the total expected running time of MINIDISCWITHPOINT by

O(n) + ∑_{i=2}^{n} O(i) · (2/i) = O(n).

Applying the same argument once more, we find that the expected running time of MINIDISC is O(n) as well.

Algorithm MINIDISC can be improved in various ways. First of all, it is not necessary to use a fresh random permutation in every instance of subroutine MINIDISCWITHPOINT. Instead, one can compute a permutation once, at the start of MINIDISC, and pass the permutation to MINIDISCWITHPOINT. Furthermore, instead of writing three different routines, one could write a single algorithm MINIDISCWITHPOINTS(P, R) that computes md(P, R) as defined in Lemma 4.14.

4.8 Notes and Comments

In this chapter we have studied an algorithmic problem that arises when one wants to manufacture an object using casting. Other manufacturing processes lead to challenging algorithmic problems as well, and a number of such problems have been studied within computational geometry over the past years; see for example the book by Dutta et al. [152] or the surveys by Janardan and Woo [220] and Bose and Toussaint [72].

The computation of the common intersection of half-planes is an old and well-studied problem. As we will explain in Chapter 11, the problem is dual to the computation of the convex hull of points in the plane. Both problems have a long history in the field, and Preparata and Shamos [323] already list a number of solutions. More information on the computation of 2-dimensional convex hulls can be found in the notes and comments of Chapter 1.

Computing the common intersection of half-spaces, which can be done in O(n log n) time in the plane and in 3-dimensional space, becomes a more computationally demanding problem when the dimension increases.

Page 31: D7013E Lecture 4 Linear Programming

4.7 Smallest Enclosing Discs


Page 32: D7013E Lecture 4 Linear Programming

Result

•  Using reasoning similar to that in Lemma 4.8, we can establish that this solution indeed takes (just) O(n) randomized expected time.
•  Often randomization can be used to achieve faster algorithms.
  –  However, only in an expected sense, and there is always that fear of the worst case popping up when we expect it the least...
