TRANSCRIPT

Lecture 14 (last, but not least)
Coping with NP-Completeness/Intractability
Approximation algorithms for hard optimization problems.
Randomized (coin-flipping) algorithms.
Fixed-parameter algorithms.
Slides modified by Benny Chor, based on original slides by Maurice Herlihy, Brown University.
Approximation Algorithms
In this course, we deal with three kinds of problems:
Decision problems: is there a solution (yes/no answer)?
Search problems: if there is a solution, find one.
Optimization problems: find a solution that optimizes some objective function.
Optimization comes in two flavours: maximization and minimization.
Approximation Problems
A maximization (minimization) problem consists of:
A set of feasible solutions.
Each feasible solution A has a cost c(A).
Suppose the solution with max (min) cost, OPT, is optimal.
Definition: An ε-approximation algorithm A is one that satisfies
c(A)/OPT ≥ 1 − ε (maximization)
OPT/c(A) ≥ 1 − ε (minimization)
Note that 0 ≤ ε ≤ 1.
Approximation
Question: What is the smallest ε for which a given NP-complete problem has a polynomial-time ε-approximation?
Not all NP-complete problems are created equal. NP-complete problems may have
no ε-approximation, for any ε.
an ε-approximation, for some ε.
an ε-approximation, for every ε.
Remark: Polynomial reductions do not necessarily preserve approximations.
Example: Vertex Cover
Given a graph (V, E), find the smallest set of vertices C such that for each edge in the graph, C contains at least one endpoint.
(figure from www.cc.ioc.ee/jus/gtglossary/assets/vertex_cover.gif)

Vertex Cover
The decision version of this problem is NP-complete, by a reduction from IS (a fact you should be able to prove easily).
(figure from http://wwwbrauer.in.tum.de/gruppen/theorie/hard/vc1.png)
The Greedy Heuristic
Remark: A node with high degree looks promising for inclusion in the cover. This intuition leads to the following greedy algorithm:
C := ∅
while there are edges in G
    choose a node v ∈ G with highest degree
    add it to C
    remove it and all edges incident to it from G
Question: How are we doing?
Answer: Poorly. This greedy algorithm is not an ε-approximation algorithm for any ε: there are instances where c(A)/OPT = Ω(log |V|), so OPT/c(A) ≱ 1 − ε for any fixed ε < 1.
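As a sketch, the highest-degree heuristic above can be written in Python (the graph is given as an edge list; function and variable names are illustrative):

```python
def greedy_vertex_cover(edges):
    """Highest-degree greedy heuristic for vertex cover.
    Not an epsilon-approximation for any epsilon."""
    remaining = set(frozenset(e) for e in edges)
    cover = set()
    while remaining:
        # Count degrees in the remaining graph.
        degree = {}
        for e in remaining:
            for v in e:
                degree[v] = degree.get(v, 0) + 1
        # Pick a node of highest remaining degree.
        v = max(degree, key=degree.get)
        cover.add(v)
        # Remove v and all edges incident to it.
        remaining = {e for e in remaining if v not in e}
    return cover
```

On a star graph it returns the optimal single-center cover, but the known bad instances (built from bipartite gadgets) force a cover Ω(log |V|) times larger than the optimum, matching the bound above.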
Another Greedy Algorithm (Gavril ’74)
C := ∅
while there are edges in G
    choose any edge (u, v) in G
    add u and v to C
    remove them from G
Claim: This algorithm is a 1/2-approximation algorithm for vertex cover, meaning C is at most twice as large as a minimum vertex cover.
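Gavril's algorithm amounts to greedily building a maximal matching and taking both endpoints of each matched edge. A minimal Python sketch (names illustrative):

```python
def gavril_vertex_cover(edges):
    """Gavril's maximal-matching heuristic: repeatedly pick an
    uncovered edge and take both of its endpoints.
    Guarantees |C| <= 2 * OPT."""
    cover = set()
    for (u, v) in edges:
        # An edge survives removal iff neither endpoint is covered yet.
        if u not in cover and v not in cover:
            cover.add(u)
            cover.add(v)
    return cover
```

The picked edges form a matching, so any cover (the optimum included) needs at least one endpoint per picked edge, giving OPT ≥ |C|/2.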
Gavril’s Approximation Algorithm
Claim: This is a 1/2-approximation algorithm.
C is constructed from |C|/2 edges of G.
No two of these edges share a vertex.
Any vertex cover, including the optimum, contains at least one node from each of these edges (otherwise an edge would not be covered).
It follows that OPT(G) ≥ (1/2)|C| (so OPT/c(A) ≥ 1 − 1/2).
Remark: This is the best approximation ratio for vertex cover known to date.
Cuts in Graphs
Definition: Let G = (V, E) be an undirected graph. For any partition of the nodes of V into two sets, S and V − S, the set of edges between S and V − S is called a cut.
(pictures from http://www.cs.sunysb.edu/∼algorith/files/edge-vertex-connectivity.shtml)
Cuts in Graphs
For cuts, both optimization problems make sense (in different contexts):
1. Min Cut: Find a partition that minimizes the number of edges between S and V − S.
2. Max Cut: Find a partition that maximizes the number of edges between S and V − S.
The two optimization problems have very different complexities:
1. Min Cut is tightly related to network flow, and has polynomial-time algorithms.
2. Max Cut is NP-complete.
Max Cut Algorithm
Consider the following local improvement strategy:
Pick any partition S and V − S.
If the cut can be improved by moving any vertex from V − S to S, or vice versa, do so.
Quit when no improvement is possible (a local maximum is reached).
Running time: Any cut has at most |E| edges, thus at most |E| improvements are possible, so the algorithm runs in polynomial time.
Claim: We will show this is a 1/2-approximation algorithm.
Max Cut Algorithm
(figure: the heuristic cut and the optimal cut on four node groups V1, V2, V3, V4)
Heuristic yields V1 ∪ V2 versus V3 ∪ V4.
Optimal yields V1 ∪ V3 versus V2 ∪ V4.
Max Cut Algorithm
(figure: the heuristic cut, with edge groups e12, e13, e24, e34 between the node groups)
Every cut partitions the edges into cut edges, EC, and non-cut edges, EN.
Let cv be the number of cut edges incident to v.
Let nv be the number of non-cut edges incident to v.
Max Cut Algorithm
For every node v, the number of cut edges is greater than or equal to the number of non-cut edges: cv ≥ nv.
Otherwise, switching the node v would increase the size of the cut produced by the algorithm.
Summing over all nodes in V: ∑_v cv ≥ ∑_v nv.
Max Cut Algorithm
Summing over all nodes in V: ∑_v cv ≥ ∑_v nv.
But ∑_v cv = 2|EC| and ∑_v nv = 2|EN| (each edge is counted twice).
Thus |EC| ≥ |EN|.
=⇒ 2|EC| ≥ |EN| + |EC| = |E|
So |EC| ≥ |E|/2.
Clearly |E| ≥ OPT (any cut is a set of edges).
Thus c(A) ≥ OPT/2, i.e. the algorithm is a 1/2-approximation for Max Cut. ♣
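The local-improvement strategy above can be sketched as follows, assuming vertices 0..n−1 and an edge list (names illustrative). Each flip strictly increases the cut, so there are at most |E| flips:

```python
def local_max_cut(n, edges):
    """Local-improvement heuristic for Max Cut: move a vertex across
    the partition while that increases the cut.  At a local maximum,
    cut size >= |E|/2 >= OPT/2."""
    side = {v: 0 for v in range(n)}   # start with every vertex in S

    def gain(v):
        # cut edges minus non-cut edges at v; negative => flipping v helps
        c = sum(1 for (a, b) in edges
                if v in (a, b) and side[a] != side[b])
        nc = sum(1 for (a, b) in edges
                 if v in (a, b) and side[a] == side[b])
        return c - nc

    improved = True
    while improved:
        improved = False
        for v in range(n):
            if gain(v) < 0:           # switching v increases the cut
                side[v] = 1 - side[v]
                improved = True
    cut = [(a, b) for (a, b) in edges if side[a] != side[b]]
    return side, cut
```

At termination every vertex satisfies cv ≥ nv, which is exactly the condition the analysis above sums over.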
Fixed-Parameter Algorithms
Many optimization problems have a natural parameter.
Vertex cover: Given a graph with n vertices and m edges, find a vertex cover of size k (or report that none exists).
Clique: Given a graph with n vertices and m edges, find a clique of size k (or report that none exists).
Both problems are solvable in time n^k.
Fixed-Parameter Algorithms
Both k-VC and k-clique are solvable in time n^k.
Is this the best we can do?
The class fixed-parameter tractable (FPT) (Downey and Fellows, 1992) contains all parameterized optimization problems with f(k)·n^c time algorithms.
Fixed-Parameter Algorithms
The class FPT (Downey and Fellows, 1992) contains all parameterized optimization problems with f(k)·n^c time algorithms.
The inherent "combinatorial explosion" of such problems is limited to the function f(k).
What is this good for? Even though f(k) will be exponential (or worse), for small values of the parameter, f(k)·n^c may be feasible, whereas n^k may be infeasible.
Hey, this is all very nice, but isn’t this fictitious class FPT actually empty. . .
VC ∈ FPT
VC Reminder: Given a graph (V, E), find the smallest set of vertices C such that for each edge in the graph, C contains at least one endpoint.
We describe a branching algorithm:
A problem instance is repeatedly split into smaller problem instances,
leading to a tree of subproblems in which the problems at the leaves supply solutions.
Our tree will be binary, of depth k.
=⇒ complexity will be 2^k·n^c.
VC ∈ FPT
Branching algorithm:
Initialization: C = ∅, H = E.
Pick an edge (u, v) ∈ H. Split the problem into two subproblems:
(Hu, C ∪ {u}), where Hu equals H with all edges incident to u removed and isolated nodes thrown away.
(Hv, C ∪ {v}), where Hv equals H with all edges incident to v removed and isolated nodes thrown away.
Stop when C is of size k. Check if it is a cover.
VC ∈ FPT
Branching algorithm correctness and time analysis:
For every edge (u, v) ∈ E, each minimum cover must contain at least one of u, v.
This property also holds for the graphs in the subproblems.
Therefore, if there is a cover C of size k, the algorithm finds it.
At each node of the branching tree, O(|E|) steps are required.
So overall, the runtime is O(2^k |E|),
=⇒ VC ∈ FPT. ♣
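The branching (bounded search tree) algorithm can be sketched in Python, representing the shrinking graph H simply as an edge list (names illustrative):

```python
def vc_branching(edges, k):
    """Bounded-search-tree algorithm for parameterized vertex cover:
    return a cover of size <= k, or None if none exists.
    Runtime O(2^k * |E|): binary tree of depth k, O(|E|) work per node."""
    if not edges:
        return set()            # nothing left to cover
    if k == 0:
        return None             # edges remain but no budget left
    (u, v) = edges[0]           # any remaining edge forces u or v
    # Branch 1: put u in the cover, drop edges incident to u.
    rest = [(a, b) for (a, b) in edges if u not in (a, b)]
    sol = vc_branching(rest, k - 1)
    if sol is not None:
        return sol | {u}
    # Branch 2: put v in the cover, drop edges incident to v.
    rest = [(a, b) for (a, b) in edges if v not in (a, b)]
    sol = vc_branching(rest, k - 1)
    if sol is not None:
        return sol | {v}
    return None
```

Correctness follows the argument above: any cover of the remaining edges must contain u or v, so one of the two branches preserves an optimal solution.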
FPT Odds and Ends
The best parameterized VC algorithm known to date runs in O(1.2852^k + kn) (Chen, Kanj and Jia, 2001).
It enables finding covers of size up to k = 100.
What about k-clique?
Using parameterized reductions, one can show that clique is W[1]-hard (for an appropriate class W[1]), so clique is unlikely to be in FPT unless W[1] = FPT.
For further details, see Downey & Fellows’ book, or relevant courses in Wellington (NZ), Newcastle (OZ), Tübingen (GE), Victoria (CA), . . .
Coin Flipping TMs
(figure: a Turing machine that can flip coins)
Randomized Computation
We can sometimes use randomization to solve problems that are difficult to solve deterministically.
Examples:
determining if a polynomial is identically zero
primality testing
Randomized Computation
We are given a multivariate polynomial π(x1, . . . , xm). We wish to know if π(x1, . . . , xm) is identically zero.
One strategy: fully expand π(x1, . . . , xm) and check if all the resulting coefficients are zero.
This strategy may not be very efficient. For example, consider π(x1, . . . , xm) = (x1 − y1)(x2 − y2) · · · (xn − yn) (here m = 2n): its expansion has 2^n terms.
A second example deals with the determinant of a symbolic matrix (a matrix containing both constants and variables' symbols).
Randomized Computation
Recall the determinant of a matrix:

    det ( a  b )  =  ad − bc
        ( c  d )

    det A = Σ_π σ(π) ∏_{i=1}^{n} a_{i,π(i)}

where
π ranges over all permutations of {1, . . . , n}
σ(π) is 1 if π is a product of an even number of transpositions
σ(π) is −1 otherwise
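The permutation formula can be evaluated directly as a sanity check (it is exponential in n, so only for tiny matrices). This sketch is an illustration with helper names of my own choosing; it reproduces the 2 × 2 rule ad − bc.

```python
# Brute-force determinant straight from the permutation formula:
# det A = sum over permutations pi of sign(pi) * prod_i a[i][pi[i]].

from itertools import permutations

def sign(pi):
    """Parity of a permutation: +1 if even, -1 if odd (via cycle lengths)."""
    s, seen = 1, set()
    for i in range(len(pi)):
        if i in seen:
            continue
        j = i
        length = 0
        while j not in seen:          # walk one cycle
            seen.add(j)
            j = pi[j]
            length += 1
        if length % 2 == 0:           # a cycle of even length is odd
            s = -s
    return s

def det_permutation(a):
    n = len(a)
    total = 0
    for pi in permutations(range(n)):
        term = sign(pi)
        for i in range(n):
            term *= a[i][pi[i]]
        total += term
    return total
```

On the 2 × 2 matrix ((1, 2), (3, 4)) this gives 1·4 − 2·3 = −2.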
Randomized Computation
Can compute determinants by Gaussian Elimination:

     1  3   2  5            1  3   2   5
     1  7  −2  4     ⇒     0  4  −4  −1
    −1 −3  −2  2            0  0   0   7
     0  1   6  2            0  1   6   2
Gaussian Elimination
Subtract multiples of the first row from rows 2, . . . , n to make each entry of the first column zero; continue with subsequent rows . . .

     1  3   2   5            1  3   2    5
     0  4  −4  −1     ⇒     0  4  −4   −1
     0  0   0   7            0  0   0    7
     0  1   6   2            0  0   7  9/4
Randomized Computation
If the pivot element is zero:

     1  3   2    5
     0  4  −4   −1
     0  0   0    7
     0  0   7  9/4
Randomized Computation
Then
transpose (swap) rows
multiply the determinant by −1
If this is impossible, the determinant is zero.

     1  3   2    5
     0  4  −4   −1
     0  0   7  9/4
     0  0   0    7
Randomized Computation
At the end, the matrix has upper triangular form:

     1  3   2    5
     0  4  −4   −1
     0  0   7  9/4
     0  0   0    7

The determinant is
the product of the diagonal elements: 1 · 4 · 7 · 7 = 196
adjusted for row interchanges (one swap): −196
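The procedure on the last few slides can be sketched directly. This illustration (the helper name `det_gauss` is my own) uses exact rational arithmetic, swaps rows when a pivot is zero while flipping the sign, and multiplies the diagonal at the end; it recovers −196 on the example matrix.

```python
# Gaussian elimination for determinants, following the slides:
# clear the entries below each pivot, swap rows on a zero pivot
# (flipping the sign), and take the product of the diagonal.

from fractions import Fraction

def det_gauss(matrix):
    a = [[Fraction(x) for x in row] for row in matrix]
    n = len(a)
    sign = 1
    for col in range(n):
        # find a nonzero pivot at or below the diagonal
        pivot = next((r for r in range(col, n) if a[r][col] != 0), None)
        if pivot is None:
            return Fraction(0)        # whole column is zero => det = 0
        if pivot != col:
            a[col], a[pivot] = a[pivot], a[col]
            sign = -sign              # row interchange flips the sign
        for r in range(col + 1, n):   # clear entries below the pivot
            factor = a[r][col] / a[col][col]
            for c in range(col, n):
                a[r][c] -= factor * a[col][c]
    result = Fraction(sign)
    for i in range(n):
        result *= a[i][i]             # product of the diagonal
    return result
```

Exact fractions keep the intermediate values of polynomial size here, matching the claim on the next slide; floating point would also work but only approximately.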
Randomized Computation
Claim: We can calculate the determinant of an n × n matrix in polynomial time.
Must check: if the entries are integers of b bits, the intermediate results do not become exponentially long in b.
This isn't difficult.
Randomized Computation
What about determinants of symbolic matrices?

    det ( x  x  x )  where the matrix is
        ( x  w  z )
        ( z  x  w )
        ( y  z  w )

    ⇒  det ( x   w             z           )
           ( 0   (x^2−zw)/x    (wx−z^2)/x  )
           ( 0   (zx−wy)/x     (wx−zy)/x   )

    ⇒  det ( x   w             z           )
           ( 0   (x^2−zw)/x    (wx−z^2)/x  )
           ( 0   0             ((wx−zy)(x^2−zw) − (zx−wy)(wx−z^2)) / (x(x^2−zw)) )

Is there a problem here?
Randomized Computation
We do have a problem.
Applying Gaussian elimination to symbolic matrices works, but
the intermediate terms are rational functions
it can be shown that their size becomes exponentially large!
Oh, oh.
Randomization to the Rescue
Schwartz' Lemma. Let π(x1, . . . , xm) be a polynomial
not identically zero
in m variables
of degree at most d in each variable
and let M > 0 be an integer.
Then the number of m-tuples (x1, . . . , xm) ∈ {0, 1, . . . , M}^m satisfying π(x1, . . . , xm) = 0 is at most m·d·M^(m−1).
Randomization to the Rescue
Using Schwartz' Lemma, choose M > 0 large enough so that m·d·M^(m−1)/M^m = md/M < ε.
Pick (x1, . . . , xm) ∈ {0, 1, . . . , M}^m at random.
Then, if π is not identically zero, Pr(π(x1, . . . , xm) = 0) < ε.
Bipartite Matching
Surprisingly enough, this approach can be applied to find a perfect matching in bipartite graphs.
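The slides leave the details out; the standard route (Lovász's randomized test, sketched here under that assumption, with helper names of my own) builds the symbolic "Edmonds matrix" of the bipartite graph: entry (i, j) is a variable x_ij if edge (i, j) exists, and 0 otherwise. Every nonzero term of the permutation expansion selects one entry per row and per column, i.e. a perfect matching, and distinct terms involve distinct variables, so the determinant is identically zero iff no perfect matching exists. That reduces matching to exactly the identity test above:

```python
# Randomized perfect-matching test for a bipartite graph with n
# vertices on each side: substitute random values for the variables
# of the Edmonds matrix and test whether the numeric determinant is 0.

import random
from fractions import Fraction

def numeric_det(a):
    """Exact determinant by Gaussian elimination with row swaps."""
    a = [[Fraction(x) for x in row] for row in a]
    n, sign = len(a), 1
    for col in range(n):
        pivot = next((r for r in range(col, n) if a[r][col] != 0), None)
        if pivot is None:
            return Fraction(0)
        if pivot != col:
            a[col], a[pivot] = a[pivot], a[col]
            sign = -sign
        for r in range(col + 1, n):
            f = a[r][col] / a[col][col]
            for c in range(col, n):
                a[r][c] -= f * a[col][c]
    result = Fraction(sign)
    for i in range(n):
        result *= a[i][i]
    return result

def probably_has_perfect_matching(n, edges, M=10**6):
    """edges: pairs (i, j) with 0 <= i, j < n, left vertex i to right vertex j."""
    a = [[0] * n for _ in range(n)]
    for i, j in edges:
        a[i][j] = random.randint(1, M)   # random value for variable x_ij
    return numeric_det(a) != 0
```

A "no" answer is always correct; a "yes" graph is misclassified only when the random point happens to be a root, which Schwartz' Lemma makes unlikely for large M.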
Heuristics
Heuristics
Main Entry: heuristic
Pronunciation: hyu-'ris-tik
Function: adjective
Etymology: German heuristisch, from New Latin heuristicus, from Greek heuriskein – to discover
involving or serving as an aid to learning, discovery, or problem-solving by experimental and especially trial-and-error methods (heuristic techniques; a heuristic assumption); also: of or relating to exploratory problem-solving techniques that utilize self-educating techniques (as the evaluation of feedback) to improve performance (a heuristic computer program)
Heuristics
Heuristics are widely used in almost every area with hard optimization problems. They are typically based on solid intuition, but their run-time analysis "in practice" is beyond current knowledge.
Examples: simulated annealing, genetic (evolutionary) algorithms, neural networks.
When all else fails, a smart heuristic may do wonders.
Famous Last Words
You have brains in your head.
You have feet in your shoes.
You can steer yourself any direction you choose.
You're on your own. And you know what you know.
And YOU are the guy who'll decide where to go.

http://www.apple.com/hardware/ads/1984/1984_480.html