
Page 1:

Linear Programming and Smoothed Complexity

Richard Kelley

Page 2:

The Nevanlinna Prize

Has anyone heard of the Nevanlinna Prize?

What about the Fields Medal?

The Gödel Prize?

Page 3:

About the prize

Awarded at the International Congress of Mathematicians, once every four years.

You have to be younger than 40.

Awarded for outstanding contributions in the “information sciences”:

Mathematical aspects of computer science.

Scientific computing, optimization, and computer algebra.

Page 4:

2010 Winner

Daniel Spielman, professor of computer science at Yale.

Spielman showed how to combine worst-case complexity analysis with average-case complexity analysis. The result is called “smoothed analysis.”

Spielman (together with Shang-Hua Teng) used smoothed analysis to show that the simplex algorithm for linear programming is “basically” a polynomial-time algorithm.

Page 5:

Worst-Case Thinking

Think of this as a two-player game.

One player presents an algorithm.

The other player, the adversary, selects an input to the algorithm. The adversary wants the algorithm to run slowly.

The worst-case complexity is the running time of the algorithm on an input chosen by the best possible adversary.

Page 6:

Average-Case Thinking

Also a game, but against a cold and indifferent Nature.

Player one presents an algorithm.

Nature chooses an input. At random. What should “random” mean?

Because the input is random, the running time becomes a random variable.

The expected value of this random variable is the average-case complexity.
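In symbols (notation added here; it is not in the original slides), writing \(T_A(x)\) for the running time of algorithm A on input x of size n:

\[
\mathrm{worst}_A(n) \;=\; \max_{|x| = n} T_A(x),
\qquad
\mathrm{avg}_A(n) \;=\; \mathbb{E}_{\,x \sim \mathcal{D}_n}\big[\, T_A(x) \,\big],
\]

where \(\mathcal{D}_n\) is whatever distribution “random” is taken to mean. These are the two measures that the smoothed definition later interpolates between.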

Page 7:

Can we do better?

Yes!

Combine the two!

Let’s pick up some background motivation first…

Page 8:

Linear Programming

A form of constrained optimization. Common in the real world.

You want to maximize a linear function.

You have a set of linear constraints.

Your variables are required to be nonnegative.

Important in the history of computer science. Developed during WWII for Army planning. State secret until 1947.

Page 9:

For Example…

max 3x + 5y

Subject to 5x + 7.2y <= 4

x + y <= 8

x, y >= 0
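As a quick sanity check on this example, here is a minimal sketch that solves the exact LP above with SciPy’s linprog (SciPy is an assumption of this sketch; the slides don’t name a solver). linprog minimizes, so the objective is negated:

```python
from scipy.optimize import linprog

# Maximize 3x + 5y  ==  minimize -3x - 5y.
c = [-3, -5]
A_ub = [[5, 7.2],   # 5x + 7.2y <= 4
        [1, 1]]     # x  +   y  <= 8
b_ub = [4, 8]

# linprog's default variable bounds are (0, None), i.e. x, y >= 0.
res = linprog(c, A_ub=A_ub, b_ub=b_ub, method="highs")
print(res.x, -res.fun)   # roughly [0, 0.556] and 2.78 (= 25/9)
```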

Page 10:

And in general
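The transcript drops the formula that was on this slide; the standard general form it presumably shows is:

\[
\max_{x \in \mathbb{R}^n} \; c^{\mathsf{T}} x
\quad \text{subject to} \quad
A x \le b, \quad x \ge 0,
\]

where \(c \in \mathbb{R}^n\) collects the objective coefficients, \(A \in \mathbb{R}^{m \times n}\) the constraint coefficients, and \(b \in \mathbb{R}^m\) the constraint bounds.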

Page 11:

Linear Programming Geometry

Page 12:

Problems that reduce to linear programming

Economics: utility maximization, cost minimization, profit maximization, game theory (Nash equilibria).

Graph theory: matching, connectivity, graph coloring, maximum flows, minimum cuts, spanning trees.

Scheduling and allocation.

Geometry (Convex Polytopes)

Sorting!!

Linear programming is “P-complete”

Page 13:

Example: Sorting

How is this an optimization problem? The biggest element in the array should have the biggest index. The second biggest element should have the second biggest index. Etc.

How is this a constrained optimization problem? No element should be duplicated (assuming unique elements). No index should contain more than one element.

Page 14:

Sorting: The Linear Program
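The program on this slide is not in the transcript. One standard way to write sorting as a linear program (an assignment-style reconstruction, not necessarily the author’s exact formulation) is: given distinct values \(v_1, \dots, v_n\), let \(x_{ij}\) indicate that element \(i\) is placed at index \(j\), and solve

\[
\max \; \sum_{i,j} j \, v_i \, x_{ij}
\quad \text{subject to} \quad
\sum_{j} x_{ij} = 1 \;\; \forall i, \qquad
\sum_{i} x_{ij} = 1 \;\; \forall j, \qquad
x_{ij} \ge 0 .
\]

The constraints are those of the assignment problem, whose polytope has only permutation matrices as vertices (Birkhoff–von Neumann), so an optimal vertex is a genuine permutation, and the objective pushes larger values to larger indices: the sorted order.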

Page 15:

Solving Linear Programs

Even though we’re working in a continuous space, this is a discrete problem. Why? Because an optimal solution, when one exists, can always be found at a vertex of the feasible region, and there are only finitely many vertices.

The basic idea is to walk along the vertices of the feasible region, climbing to vertices that have better and better values for the objective function.

Page 16:

The Simplex Algorithm

Start at some point in the feasible region. The vertex consisting of all 0’s usually works.

Look at the neighboring vertices and find one with a higher objective function value. This involves keeping track of a set of “basis variables”; variables go in and out of the basis depending on whether or not they make the objective function bigger.

Repeat until you can’t improve the objective value any more. It is “easy” to show that at that point you’re done.
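To make the pivoting story concrete, here is a minimal tableau sketch of the simplex method for problems of the form max c·x subject to Ax <= b, x >= 0 with b >= 0, so that the all-slack basis (the “all 0’s” vertex) is feasible to start from. It uses the classic most-negative-reduced-cost pivot rule and has no anti-cycling safeguard; it is an illustration, not the author’s implementation:

```python
import numpy as np

def simplex(c, A, b, tol=1e-12):
    """Maximize c @ x subject to A @ x <= b, x >= 0, assuming b >= 0."""
    A, b, c = np.asarray(A, float), np.asarray(b, float), np.asarray(c, float)
    m, n = A.shape
    # Tableau: constraint rows [A | I | b] on top, objective row [-c | 0 | 0] at the bottom.
    T = np.zeros((m + 1, n + m + 1))
    T[:m, :n] = A
    T[:m, n:n + m] = np.eye(m)
    T[:m, -1] = b
    T[-1, :n] = -c
    basis = list(range(n, n + m))              # start at the all-slack vertex (x = 0)
    while True:
        j = int(np.argmin(T[-1, :-1]))         # entering variable: most negative reduced cost
        if T[-1, j] >= -tol:
            break                              # no neighboring vertex improves the objective
        col = T[:m, j]
        ratios = np.full(m, np.inf)
        ratios[col > tol] = T[:m, -1][col > tol] / col[col > tol]
        i = int(np.argmin(ratios))             # leaving variable: minimum ratio test
        if not np.isfinite(ratios[i]):
            raise ValueError("LP is unbounded")
        T[i] /= T[i, j]                        # pivot so column j becomes a unit column
        for r in range(m + 1):
            if r != i:
                T[r] -= T[r, j] * T[i]
        basis[i] = j                           # variable j enters the basis
    x = np.zeros(n + m)
    x[basis] = T[:m, -1]
    return x[:n], T[-1, -1]                    # optimal point and objective value

# The toy LP from the earlier slide: max 3x + 5y with 5x + 7.2y <= 4 and x + y <= 8.
print(simplex([3, 5], [[5, 7.2], [1, 1]], [4, 8]))   # ~ (array([0., 0.556]), 2.778)
```

Each pass through the loop is one step along an edge of the feasible polytope to an adjacent vertex with a better objective value, exactly the walk described above.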

Page 17:

More Geometry

Page 18:

Complexity?

How should we talk about the “size” of an instance of the linear programming problem?

Any guesses how long it should take to run the simplex algorithm?

Usually, it’s pretty quick. With n variables and m constraints, it typically takes about 2m iterations in practice.

In theory, though, this is an O(2^n) algorithm. Not a typo.

This is a perfect example of the difference between theory and practice!

Page 19:

Worst Case

We want to force the simplex algorithm to look at exponentially many vertices before finding the best one.

The trick is to use a (hyper-)cube.

Page 20:

Worst Case: Klee-Minty
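The construction itself didn’t survive the transcript. One common presentation of the Klee–Minty cube (a reconstruction; the slide’s exact scaling may differ) is, for n variables:

\[
\begin{aligned}
\max \;\; & 2^{\,n-1} x_1 + 2^{\,n-2} x_2 + \cdots + 2 x_{n-1} + x_n \\
\text{s.t.} \;\; & x_1 \le 5 \\
& 4 x_1 + x_2 \le 25 \\
& 8 x_1 + 4 x_2 + x_3 \le 125 \\
& \;\;\vdots \\
& 2^{\,n} x_1 + 2^{\,n-1} x_2 + \cdots + 4 x_{n-1} + x_n \le 5^{\,n} \\
& x \ge 0 .
\end{aligned}
\]

The feasible region is a slightly squashed n-dimensional cube, and starting from the origin, the simplex method with the greedy largest-coefficient pivot rule can be led through exponentially many of its 2^n vertices before reaching the optimum.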

Page 21:

The “Real World”

Huge LPs are solved all the time, and the simplex algorithm usually takes a number of iterations that is roughly linear in the size of the problem.

This is one of the few algorithms with exponential worst-case running time that is used widely in practice.

Page 22:

Longstanding Open Problem

For a long time, nobody had a clue.

The solution was pretty much worked out by 2005.

Page 23:

The Solution

Smoothed analysis: start with an arbitrarily chosen input, which could even come from the adversary.

“Shake it” a little bit by adding a small amount of random noise.

The analysis then depends on the size of the input and the magnitude of the shaking.

The idea is that an algorithm with a low smoothed complexity is “almost always” well-behaved.

Page 24:

In Symbols
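The formula on this slide is not in the transcript; the usual way to write smoothed complexity (a reconstruction in the spirit of Spielman and Teng, with inputs \(\bar{x}\) of size n and Gaussian noise \(g\) of standard deviation \(\sigma\)) is

\[
\mathrm{smoothed}_A(n, \sigma)
\;=\;
\max_{|\bar{x}| = n}\;
\mathbb{E}_{\,g \sim \mathcal{N}(0,\,\sigma^2 I)}
\big[\, T_A(\bar{x} + g) \,\big].
\]

As \(\sigma \to 0\) this recovers worst-case complexity, and for large \(\sigma\) the adversary’s choice \(\bar{x}\) stops mattering, so it behaves like an average-case measure. Spielman and Teng showed that the simplex method with the shadow-vertex pivot rule has smoothed complexity polynomial in the dimensions of the program and in \(1/\sigma\).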

Page 25:

Graphically: Worst-Case Complexity

Page 26:

Smoothed Analysis

Page 27:

Questions?