TRANSCRIPT
Pareto Optimal Solutions for Smoothed Analysts
Ankur Moitra, IAS
joint work with Ryan O'Donnell, CMU
September 26, 2011
Source: people.csail.mit.edu/moitra/docs/paretod.pdf
[Figure sequence: a knapsack instance with item values v_1, ..., v_i, v_{i+1}, ..., v_n, weights w_1, ..., w_i, w_{i+1}, ..., w_n, and capacity W; solutions are plotted by value against remaining capacity W − weight. A solution dominated by another is "Not Pareto Optimal"; an undominated one is "Pareto Optimal!". The builds culminate in the sets PO(i) and PO(i+1) of Pareto optimal solutions over the first i and i+1 items.]
Dynamic Programming with Lists (Nemhauser, Ullmann):
PO(i + 1) can be computed from PO(i)...
... in linear (in |PO(i)|) time
an optimal solution can be computed from PO(n)
Bottleneck in many algorithms: enumerate the set of Pareto optimal solutions
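The list DP above can be sketched in a few lines; the item values, weights, and capacity below are illustrative, not from the talk:

```python
# Nemhauser-Ullmann dynamic programming with lists: PO(i+1) is obtained
# by merging PO(i) with a copy of itself shifted by item i+1, then
# pruning dominated (weight, value) pairs. The merge is linear in |PO(i)|.

def pareto_knapsack(items, capacity):
    """items: list of (value, weight) pairs; returns the Pareto optimal
    (weight, value) pairs among feasible 0/1 solutions."""
    po = [(0, 0)]  # PO(0): only the empty solution
    for value, weight in items:
        shifted = [(w + weight, v + value) for (w, v) in po
                   if w + weight <= capacity]
        # Sort by weight (ties: higher value first) and keep a point only
        # if it strictly improves on every lighter point's value.
        merged = sorted(po + shifted, key=lambda p: (p[0], -p[1]))
        po, best = [], -1
        for w, v in merged:
            if v > best:
                po.append((w, v))
                best = v
    return po  # PO(n); an optimal solution is its maximum-value entry

frontier = pareto_knapsack([(60, 10), (100, 20), (120, 30)], capacity=50)
print(max(v for _, v in frontier))  # prints 220
```

Note that the running time is linear in the list sizes, so the whole algorithm is fast exactly when the Pareto curves PO(i) are small.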
Results of Beier and Vöcking (STOC 2003, STOC 2004)
Smoothed Analysis of Pareto Curves:
Polynomial bound on |PO| (in two dimensions) in the framework of Smoothed Analysis (Spielman, Teng)
Knapsack has polynomial smoothed complexity:
first NP-hard problem that is (smoothed) easy
generalizes a long line of results on random instances
Capturing Tradeoffs
Question
What if the precise objective function (of a decision maker) is unknown?
e.g. travel planning: prefer lower fare, shorter trip, fewer transfers
Question
Can we algorithmically help a decision maker?
Pareto curves capture tradeoffs among competing objectives
Proposition
Only useful approach if Pareto curves are small
Confirmed empirically, e.g. Müller-Hannemann, Weihe: German train system
Question
Why should we expect Pareto curves to be small?
Caveat: Smoothed Analysis is not a complete explanation
The Model
Adversary chooses:
Z ⊆ {0, 1}^n (e.g. spanning trees, Hamiltonian cycles)
an adversarial objective function OBJ_1 : Z → R_+
d − 1 linear objective functions
OBJ_i(x) = [w^i_1, w^i_2, ..., w^i_n] · x
... each w^i_j is a random variable on [−1, +1] whose density is bounded by φ
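A brute-force sketch of this model for d = 2 (the feasible set, the stand-in adversarial objective, the seed, and n are all assumptions for illustration): the adversary fixes Z and OBJ_1, the second objective is linear with random coefficients from [−1, +1], and we count Pareto optimal solutions directly (minimizing both objectives).

```python
import itertools
import random

def pareto_indices(points):
    """Indices of points not dominated (minimizing both coordinates)."""
    return [i for i, p in enumerate(points)
            if not any(q[0] <= p[0] and q[1] <= p[1] and q != p
                       for q in points)]

random.seed(0)
n = 8
Z = list(itertools.product([0, 1], repeat=n))   # adversarial feasible set
obj1 = {x: random.random() for x in Z}          # stand-in adversarial OBJ_1
w = [random.uniform(-1, 1) for _ in range(n)]   # smoothed coefficients of OBJ_2
points = [(obj1[x], sum(wi * xi for wi, xi in zip(w, x))) for x in Z]
po = pareto_indices(points)
print(len(po))  # typically far smaller than |Z| = 2^n
```

The point of the smoothed bounds that follow is that |PO| stays polynomial in n and φ in expectation, even though the adversary picks Z and OBJ_1.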
History
Let PO be the set of Pareto optimal solutions...
[Beier, Vöcking STOC 2003] (d = 2): E[|PO|] = O(n^5 φ)
[Beier, Vöcking STOC 2004] (d = 2): E[|PO|] = O(n^4 φ)
[Beier, Röglin, Vöcking IPCO 2007] (d = 2): E[|PO|] = O(n^2 φ) (tight)
[Röglin, Teng FOCS 2009]: E[|PO|] = O((n√φ)^{f(d)})
... where f(d) = 2^{d−1} (d + 1)!
[Dughmi, Roughgarden FOCS 2010]: any FPTAS can be transformed into a truthful-in-expectation FPTAS
Our Results
Theorem
E[|PO|] ≤ 2 · (4φd)^{d(d−1)/2} · n^{2d−2}
... answers a conjecture of Teng
[Bentley et al, JACM 1978]: for 2^n points sampled from a d-dimensional Gaussian, E[|PO|] = Θ(n^{d−1})
the square-factor difference is necessary for d = 2
On Smoothed Analysis
Bounds are notoriously pessimistic (e.g. Simplex)
Method of Analysis:
Define a "bad" event
... that you can blame if your algorithm runs slowly
Prove this event is rare
Proposition
Randomness is your friend!
Our Approach
Goal
"Define" bad events using as little randomness as possible
... "conserve" randomness, saving it for the analysis
Challenge
Events become convoluted (to say the least!)
We give an algorithm to define these events
An Alternative Condition
[Figure sequence: solutions plotted in the (OBJ_1, OBJ_2) plane, traced over time.]
Method of Proof
Goal
Count the (expected) number of Pareto optimal solutions
Define a complete family of events... for each Pareto optimal solution, at least one (unique) event occurs
Then bound the expected number of events
The key is in the definition
The Common Case (Beier, Röglin, Vöcking)
[Figure sequence: in the (OBJ_1, OBJ_2) plane, a Pareto optimal x and a witness y; the region between them is EMPTY, and the two differ on some index i with x_i = 1 and y_i = 0.]
Complete Family of Events
Let I = (a, b) be an interval of [−n, +n] of width ε
Let E be the event that there is an x ∈ PO such that:
OBJ_2(x) ∈ I
y has the smallest OBJ_1(y) among all points for which OBJ_2(y) > b
y_i = 0, x_i = 1
Claim
Pr[E] ≤ εφ
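A back-of-the-envelope union bound (a sketch with assumed constants, in the spirit of the Beier, Röglin, Vöcking argument): partition [−n, +n] into 2n/ε intervals; for each interval and each of the n indices i there is one event E with Pr[E] ≤ εφ, and the family is complete, so

```latex
\mathbb{E}[|PO|] \;\le\; \frac{2n}{\varepsilon} \cdot n \cdot \varepsilon\phi \;=\; 2n^2\phi,
```

independent of ε, which recovers an O(n^2 φ) bound for d = 2.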
Analysis
[Figure sequence: OBJ_1 is arbitrary and OBJ_2 = [w_1, w_2, ..., w_n] · x; the analysis fixes the interval I and an index i, splits Z into Z_i (solutions with x_i = 1) and Z_{−i} (solutions with x_i = 0), and identifies solutions x, y, z with OBJ_1(x) > OBJ_1(y), x_i = 1 and y_i = 0, while the coefficient w_i remains hidden (marked "??").]
A Re-interpretation
Implicitly defined a Transcription Algorithm:
Question
Given a Pareto optimal solution x, which event should we blame?
Blame event E: interval I, index i, x_i and y_i
Transcription Algorithm
Input: values of r.v.s (and Pareto optimal x)
Output: interval I, index i, x_i and y_i
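For d = 2 the Transcription Algorithm can be sketched directly (the interfaces and the tiny instance below are assumptions for illustration): snap OBJ_2(x) to a grid interval I of width ε, take y with smallest OBJ_1 among solutions beyond I, and record an index on which x and y differ. The talk's event additionally requires the pattern x_i = 1, y_i = 0; here we simply record the bits on the first differing index.

```python
import math

def transcribe(Z, obj1, obj2, x, eps):
    """Given a Pareto optimal x, name the event to blame:
    the interval I, an index i, and the pattern (x_i, y_i)."""
    a = math.floor(obj2(x) / eps) * eps
    I = (a, a + eps)                  # width-eps grid interval containing OBJ2(x)
    above = [z for z in Z if obj2(z) > I[1]]
    if not above:                     # boundary case: no witness y exists
        return I, None, None
    y = min(above, key=obj1)          # smallest OBJ1 among OBJ2(y) > b
    i = next(j for j in range(len(x)) if x[j] != y[j])
    return I, i, (x[i], y[i])

Z = [(0, 0), (1, 0), (0, 1), (1, 1)]
obj1 = lambda z: 3 - 2 * z[0] - z[1]
obj2 = lambda z: z[0] + 2 * z[1]
print(transcribe(Z, obj1, obj2, (1, 0), 0.5))  # ((1.0, 1.5), 1, (0, 1))
```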
A Re-interpretation
Suppose the output is event E: interval I, index i, x_i and y_i
Question
Which solution x caused this event to occur?
We do not need to know w_i to determine the identity of x!
Reverse Algorithm
Find y
Then find x
A Re-interpretation
Reverse Algorithm: Find x (without looking at w_i)
Question
Does x fall into the interval I?
The hidden random variable w_i must fall into some small range
Proposition
We can deduce a missing input to the Transcription Algorithm from just some of the inputs and outputs
Hence each output is unlikely
Transcription for d = 3
[Figure sequence: OBJ_1 is adversarial; OBJ_2 and OBJ_3 are linear. In the (OBJ_2, OBJ_3) plane the transcription locates solutions x, y, z, splits Z into Z_i (x_i = 1) and Z_{−i} (x_i = 0), and outputs a pair of indices (i, j) together with the bit patterns of x, y, z on those indices (e.g. 1 1, 1 0, 0 1).]
Future Directions?
Open Question
Are there additional applications of randomness "conservation" in Smoothed Analysis?
e.g. simplex algorithm
Open Question
Are there perturbation models that make sense for non-linear objectives?
e.g. submodular functions
Questions?
Thanks!