
Classical mathematics and new challenges

László Lovász Microsoft Research

One Microsoft Way, Redmond, WA 98052 [email protected]

Theorems and Algorithms

Algorithmic vs. structural mathematics

Geometric constructions

Euclidean algorithm

Newton’s method

Gaussian elimination

ancient and classical algorithms
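Several of these classical procedures are short enough to state as code; as an illustration (not part of the original slides), a minimal sketch of the Euclidean algorithm:

```python
def gcd(a: int, b: int) -> int:
    """Euclid's algorithm: repeatedly replace (a, b) with (b, a mod b)."""
    while b:
        a, b = b, a % b
    return a

print(gcd(252, 198))  # → 18
```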

An example: Diophantine approximation and continued fractions

Given a real number α and ε > 0, find a rational approximation p/q such that

|α − p/q| ≤ ε/q and q ≤ 1/ε.

Solution: the continued fraction expansion of α,

α = a₀ + 1/(a₁ + 1/(a₂ + …)), where a₀ = ⌊α⌋, a₁ = ⌊1/(α − a₀)⌋, …

Truncating this expansion gives convergents p/q with |α − p/q| < 1/q².
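A sketch of the procedure (my own illustration of the classical recurrence, not code from the slides): the convergents p/q are built from the partial quotients a₀, a₁, …, and each one satisfies the quality bound |α − p/q| < 1/q².

```python
import math

def convergents(alpha: float, n_terms: int):
    """Return continued-fraction convergents (p, q) of alpha.

    Uses the standard recurrence p_k = a_k p_{k-1} + p_{k-2},
    q_k = a_k q_{k-1} + q_{k-2}.
    """
    p0, q0 = 0, 1          # p_{-2}, q_{-2}
    p1, q1 = 1, 0          # p_{-1}, q_{-1}
    x = alpha
    out = []
    for _ in range(n_terms):
        a = math.floor(x)
        p0, p1 = p1, a * p1 + p0
        q0, q1 = q1, a * q1 + q0
        out.append((p1, q1))
        if x == a:
            break
        x = 1 / (x - a)
    return out

cs = convergents(math.sqrt(2), 8)
# for sqrt(2) the convergents are 1/1, 3/2, 7/5, 17/12, ...
# and each satisfies |sqrt(2) - p/q| < 1/q^2
```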

A mini-history of algorithms

30's: Mathematical notion of algorithms

Church, Turing, Post: recursive functions, λ-calculus, Turing machines

Church, Gödel: algorithmic and logical undecidability

50's, 60's: Computers; the significance of running time

simple and complex problems

simple: sorting, searching, arithmetic, …

complex: Travelling Salesman, matching, network flows, factoring, …

late 60’s-80’s: Complexity theory

P=NP?

Time, space, information complexity

Polynomial hierarchy

Nondeterminism, good characterization, completeness

Randomization, parallelism

Classification of many real-life problems into P vs. NP-complete

90's: Increasing sophistication; upper and lower bounds on complexity

algorithms: factoring, volume computation, semidefinite optimization

negative results: topology, algebraic geometry, coding theory

Highlights of the 90's: Approximation algorithms

positive and negative results

Probabilistic algorithms

Markov chains, high concentration, nibble methods, phase transitions

Pseudorandom number generators

from art to science: theory and constructions

Approximation algorithms: The Max Cut Problem

Given a graph, partition its nodes into two classes so as to maximize the number of edges between the classes.

NP-hard

…Approximations?

Easy with 50% error Erdős ~’65

Polynomial with 12% error Goemans-Williamson ’93

???

Arora-Lund-Motwani-Sudan-Szegedy '92, Håstad

NP-hard with 6% error

(Interactive proof systems, PCP)

(semidefinite optimization)
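The 50% guarantee is elementary to realize in code; a minimal sketch (my own illustration, not from the slides) of the classical local-improvement argument: keep moving any vertex that has more neighbors on its own side, and the resulting local optimum cuts at least half of all edges.

```python
def local_max_cut(n, edges):
    """Greedy local search for Max Cut.

    Repeatedly move a vertex to the other side if that increases the cut.
    At a local optimum every vertex has at least half its incident edges
    crossing, so the cut contains >= |E|/2 edges (the "easy 50%" bound).
    """
    side = [0] * n
    improved = True
    while improved:
        improved = False
        for v in range(n):
            same = sum(1 for a, b in edges
                       if (a == v or b == v) and side[a] == side[b])
            diff = sum(1 for a, b in edges
                       if (a == v or b == v) and side[a] != side[b])
            if same > diff:
                side[v] = 1 - side[v]
                improved = True
    cut = sum(1 for a, b in edges if side[a] != side[b])
    return cut, side

# triangle: the best cut has 2 of the 3 edges
cut, _ = local_max_cut(3, [(0, 1), (1, 2), (0, 2)])
print(cut)  # → 2
```

The Goemans-Williamson improvement replaces this combinatorial argument by semidefinite optimization and random-hyperplane rounding.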

Algorithms and probability

Randomized algorithms (making coin flips): difficult to analyze; important applications (primality testing, integration, optimization, volume computation, simulation)

Algorithms with stochastic input: even more difficult to analyze; even more important applications

Difficulty: after a few iterations, complicated functions of the original random variables arise.

Strong concentration (Talagrand)

Laws of Large Numbers: sums of independent random variables are strongly concentrated

General strong concentration: very general "smooth" functions of independent random variables are strongly concentrated

Nibble, martingales, rapidly mixing Markov chains,…

New methods in probability:
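The Law-of-Large-Numbers case is easy to see numerically (a quick illustration of my own, not from the slides): the sum of n independent ±1 coin flips has standard deviation √n, and deviations of many standard deviations essentially never occur.

```python
import random

random.seed(1)
n, trials = 2_000, 200

# count trials in which the sum of n coin flips strays more than
# 6 standard deviations (6 * sqrt(n)) from its mean 0
big_dev = 0
for _ in range(trials):
    s = sum(random.choice((-1, 1)) for _ in range(n))
    if abs(s) > 6 * n ** 0.5:
        big_dev += 1
print(big_dev)  # → 0: strong concentration around the mean
```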

Example

Want: vectors a₁, a₂, a₃, … ∈ GF(q)³ such that:

- any 3 are linearly independent

- every vector is a linear combination of 2 of them

Few vectors: O(√q)? (was open for 30 years)

First idea: use algebraic constructions (conics, …); gives only about q^(3/4)

Second idea: choose a₁, a₂, a₃, … at random; ?????

Solution: Rödl nibble + strong concentration results

Every finite projective plane of order q has a complete arc of size √q·polylog(q). (Kim-Vu)

Driving forces for the next decade

New areas of applications

The study of very large structures

More tools from classical areas in mathematics

New areas of application: interaction between discrete and continuous

Biology: genetic code, population dynamics, protein folding

Physics: elementary particles, quarks, etc. (Feynman graphs); statistical mechanics (graph theory, discrete probability)

Economics: indivisibilities (integer programming, game theory)

Computing: algorithms, complexity, databases, networks, VLSI, ...

Very large structures

-genetic code

-brain

-animal

-ecosystem

-economy

-society

How to model them?

non-constant but stable; partly random

-internet

-VLSI

-databases

Very large structures: how to model them?

Graph minors Robertson, Seymour, Thomas

If a graph does not contain a given minor,then it is essentially a 1-dimensional structure of essentially 2-dimensional pieces.

up to a bounded number of additional nodes

tree-decomposition

embeddable in a fixed surface

except for “fringes” of bounded depth

Very large structures: how to model them? Regularity Lemma (Szemerédi '74)

The nodes of any graph can be partitioned into a bounded number of essentially equal parts so that the bipartite graph between almost all pairs of parts is essentially random (with different densities).

given ε > 0 and k > 1, the number of parts is between k and f(k, ε)

part sizes differ by at most 1

with at most εk² exceptional pairs

"essentially random": for all subsets X, Y of the two parts, the number of edges between X and Y differs from p|X||Y| by at most εn²
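For a genuinely random bipartite graph this regularity property is easy to check empirically; a small simulation of my own (not from the slides) samples subset pairs and verifies that the edge count stays within εn² of p|X||Y|:

```python
import random

random.seed(7)
n, p, eps = 400, 0.3, 0.05

# random bipartite graph between two parts of size n, edge probability p
adj = [[random.random() < p for _ in range(n)] for _ in range(n)]

def edges_between(X, Y):
    """Number of edges between subsets X and Y of the two parts."""
    return sum(adj[x][y] for x in X for y in Y)

# sample random subset pairs and record the worst deviation, scaled by n^2
worst = 0.0
for _ in range(20):
    X = random.sample(range(n), n // 2)
    Y = random.sample(range(n), n // 2)
    dev = abs(edges_between(X, Y) - p * len(X) * len(Y))
    worst = max(worst, dev / n**2)
print(worst < eps)  # → True: all sampled pairs are eps-regular
```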

How to model them?

How to handle them algorithmically?

heuristics/approximation algorithms

-internet

-VLSI

-databases

-genetic code

-brain

-animal

-ecosystem

-economy

-society

A complexity theory of linear time?

Very large structures

linear time algorithms

sublinear time algorithms (sampling)

Example: Volume computation

Given: a convex body K ⊆ ℝⁿ, presented by a membership oracle, with B(0,1) ⊆ K ⊆ B(0,n)

Want: the volume of K, with relative error ε

Not possible in time polynomial in n, even if ε = n^(cn). (Elekes; Bárány-Füredi)

Possible in randomized polynomial time, for arbitrarily small ε. (Dyer-Frieze-Kannan)

More and more tools from classical math

Complexity: For self-reducible problems, counting ↔ sampling (Jerrum-Valiant-Vazirani)

Enough to sample from convex bodies

Naive sampling: take a random set S of points from the enclosing ball B; then

vol(K) / vol(B) ≈ |K ∩ S| / |S|.

But vol(B)/vol(K) can be exponential in n, so |S| must be exponential in n.
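The obstruction is quantitative. Taking the cube [-1,1]^n as the enclosing set for concreteness (my own illustration), the ratio vol(B(0,1))/vol([-1,1]^n) = π^(n/2) / (Γ(n/2+1)·2^n) decays super-exponentially, so a uniform sample essentially never lands in the ball:

```python
import math

def ball_to_cube_ratio(n: int) -> float:
    """vol(unit ball in R^n) / vol(cube [-1,1]^n)."""
    ball = math.pi ** (n / 2) / math.gamma(n / 2 + 1)
    return ball / 2 ** n

for n in (2, 10, 50):
    print(n, ball_to_cube_ratio(n))
# the ratio drops from ~0.785 at n=2 to below 1e-20 by n=50
```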


Multiphase Monte Carlo: let

K₀ = B, K₁ = 2^(1/n) B ∩ K, K₂ = 2^(2/n) B ∩ K, …

Then vol(K₁)/vol(K₀) is estimated by sampling, vol(K₂)/vol(K₁) by sampling, and so on; the product of the ratios, times vol(B), gives vol(K).
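The telescoping scheme can be sketched end to end in low dimension, where uniform samples from each K_i can be drawn by rejection from the enclosing ball (an illustrative stand-in of my own for the Markov-chain samplers the slides go on to discuss). Taking K = [-1,1]^3:

```python
import math
import random

random.seed(0)
n = 3

def in_K(x):
    """K = [-1,1]^3, so B(0,1) ⊆ K ⊆ B(0,n) as the oracle model requires."""
    return all(abs(c) <= 1 for c in x)

def uniform_in_ball(r):
    """Uniform point in the ball of radius r (Gaussian direction trick)."""
    g = [random.gauss(0, 1) for _ in range(n)]
    norm = math.sqrt(sum(c * c for c in g))
    rad = r * random.random() ** (1 / n)
    return [rad * c / norm for c in g]

def sample_K_i(i, m):
    """m uniform samples from K_i = B(0, 2^(i/n)) ∩ K, by rejection."""
    r = 2 ** (i / n)
    out = []
    while len(out) < m:
        x = uniform_in_ball(r)
        if in_K(x):
            out.append(x)
    return out

# vol(K) = vol(B) * prod_i vol(K_i)/vol(K_{i-1}), each ratio by sampling;
# after ~n*log2(n) phases the radius 2^(i/n) exceeds n, so K_i = K.
phases = math.ceil(n * math.log2(n))
estimate = math.pi ** (n / 2) / math.gamma(n / 2 + 1)   # vol(K_0) = vol(B)
for i in range(1, phases + 1):
    pts = sample_K_i(i, 10_000)
    r_prev = 2 ** ((i - 1) / n)
    frac = sum(sum(c * c for c in x) <= r_prev ** 2 for x in pts) / len(pts)
    estimate /= frac        # vol(K_i)/vol(K_{i-1}) = 1/frac
print(estimate)             # close to the true vol([-1,1]^3) = 8
```

Each ratio is bounded (K_i ⊆ 2^(1/n) K_{i-1}), which is exactly why the product can be estimated phase by phase without the exponential blow-up of naive sampling.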


Algorithmic results: Use rapidly mixing Markov chains (Broder; Jerrum-Sinclair)

Enough to estimate the mixing rate of random walk on lattice in K


Graph theory (expanders): use conductance to estimate the eigenvalue gap (Alon; Jerrum-Sinclair)

Probability:use eigenvalue gap

Isoperimetric inequality: if a surface F cuts K into two parts K′ and K″, then

vol_(n−1)(F) ≥ (c / diam K) · vol(K′) · vol(K″) / vol(K).


Enough to prove an isoperimetric inequality for subsets of K

Differential geometry: isoperimetric inequality
Dyer-Frieze-Kannan 1989: O*(n^27)

Differential equations: bounds on the Poincaré constant (Payne-Weinberger)
bisection method, improved isoperimetric inequality
LL-Simonovits 1990: O*(n^16)

Log-concave functions: reduction to integration
Applegate-Kannan 1992: O*(n^10)

Convex geometry: ball walk
LL 1992: O*(n^10)

Statistics: better error handling
Dyer-Frieze 1993: O*(n^8)

Optimization: better preprocessing
LL-Simonovits 1995: O*(n^7)

Functional analysis: isotropic position of convex bodies
achieving isotropic position
Kannan-LL-Simonovits 1998: O*(n^5)

Geometry: projective (Hilbert) distance
affine invariant isoperimetric inequality, analysis of hit-and-run walk
LL 1999: O*(n^5)

Differential equations: log-Sobolev inequality
elimination of "start penalty" for lattice walk
Frieze-Kannan 1999

log-Cheeger inequality, elimination of "start penalty" for ball walk
Kannan-LL 1999: O*(n^5)

Scientific computing: non-reversible chains mix better; lifting
Diaconis-Holmes-Neal, Feng-LL-Pak
walk with inertia: Aspnes-Kannan-LL
O*(n^3)??

More and more tools from classical math

Linear algebra: eigenvalues, semidefinite optimization, higher incidence matrices, homology theory

Geometry: geometric representations, convexity

Analysis: generating functions, Fourier analysis, quantum computing

Number theory: cryptography

Topology, group theory, algebraic geometry, special functions, differential equations, …