Spectrally Thin Trees
Nick Harvey University of British Columbia
Joint work with Neil Olver (MIT / Vrije Universiteit)
Approximating Dense Objects by Sparse Objects
Floor joists
Wood Joists Engineered Joists
Approximating Dense Objects by Sparse Objects
Bridges
Masonry Arch Truss Arch
Approximating Dense Objects by Sparse Objects
Bones
Human Femur Robin Bone
Approximating Dense Objects by Sparse Objects
Graphs
Dense Graph Sparse Graph
How well can any graph be approximated by a sparse graph?
First way to compare graphs
Do the graphs have nearly the same weight on corresponding cuts?
Second way to compare graphs
Do their Laplacian matrices have nearly the same eigensystem?
(Figure: the Laplacian matrices of the dense graph and the sparse graph, shown side by side.)
First way, more formally
Weight of cut: u(δ(S)) vs. w(δ(S))
(Figure: the same cut S in the graph with edge weights u and in the graph with edge weights w.)
α-cut sparsifier: u(δ(S)) ≤ w(δ(S)) ≤ α·u(δ(S)) ∀S
Cut: δ(S) = { edge st : s∈S, t∉S }
Second way, more formally
Graph with weights u: nodes a, b, c, d; edges ab (weight 2), ac (weight 5), bc (weight 1), cd (weight 10).

Laplacian Matrix: L_u = D − A

              a    b    c    d
         a [  7   -2   -5    0 ]
  L_u =  b [ -2    3   -1    0 ]
         c [ -5   -1   16  -10 ]
         d [  0    0  -10   10 ]

Each diagonal entry is the weighted degree of a node (e.g. 16 for c); each off-diagonal entry is the negative of an edge weight (e.g. −5 = −u(ac)).
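As a quick check, the slide's example Laplacian can be built in code. The node names and edge weights (ab = 2, ac = 5, bc = 1, cd = 10) are read off the matrix on the slide; the snippet itself is not part of the talk:

```python
import numpy as np

# Example graph from the slide: nodes a,b,c,d with edge weights
# u(ab)=2, u(ac)=5, u(bc)=1, u(cd)=10.
nodes = ["a", "b", "c", "d"]
edges = {("a", "b"): 2.0, ("a", "c"): 5.0, ("b", "c"): 1.0, ("c", "d"): 10.0}

idx = {v: i for i, v in enumerate(nodes)}
L = np.zeros((len(nodes), len(nodes)))
for (s, t), w in edges.items():
    i, j = idx[s], idx[t]
    L[i, i] += w; L[j, j] += w    # D: weighted degree on the diagonal
    L[i, j] -= w; L[j, i] -= w    # -A: minus the edge weight off-diagonal

print(L)  # row sums are zero; L[c,c] = 16, the weighted degree of c
```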
Second way, more formally
Def: A ⪯ B ⟺ B − A is PSD ⟺ x^T A x ≤ x^T B x ∀x∈R^n
α-spectral sparsifier: L_u ⪯ L_w ⪯ α·L_u
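The ordering A ⪯ B is easy to test numerically: check that the smallest eigenvalue of B − A is nonnegative. A minimal sketch (the path/triangle example is illustrative and not from the slides):

```python
import numpy as np

def psd_leq(A, B, tol=1e-9):
    # A ⪯ B  ⟺  B - A is PSD  ⟺  all eigenvalues of B - A are ≥ 0.
    return np.linalg.eigvalsh(B - A).min() >= -tol

# A subgraph's Laplacian is dominated by the Laplacian of the full graph:
L_path = np.array([[1., -1, 0], [-1, 2, -1], [0, -1, 1]])    # path a-b-c
L_tri  = np.array([[2., -1, -1], [-1, 2, -1], [-1, -1, 2]])  # triangle
print(psd_leq(L_path, L_tri))   # True: the path is a subgraph of the triangle
```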
(Figure: the Laplacian matrices L_u and L_w of the two example graphs, with edge weights u and w.)
Thin trees
Let w be supported on a spanning tree.
α-thin tree: w(δ(S)) ≤ α·u(δ(S)) ∀S
α-spectrally thin tree: L_w ⪯ α·L_u
(Figure: a cut S in the graph with edge weights u and in the spanning tree with edge weights w.)
Connectivity and Conductance
Connectivity: k_st = min { u(δ(S)) : s∈S, t∉S }
Global connectivity: K = min { k_e : e∈E }
Effective resistance from s to t: the voltage difference when a 1-amp current source is placed between s and t
Effective conductance: c_st = 1 / (effective resistance from s to t)
Global conductance: C = min { c_e : e∈E }
Fact: c_st ≤ k_st ∀s,t. Example: on a long path, c_st = 1/n but k_st = 1. Long paths affect conductance but not connectivity.
(Figure: a long path from s to t.)
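The definitions above can be sketched with a Laplacian pseudoinverse. This toy snippet (graph choice mine) reproduces the path example, where conductance is small even though connectivity is 1:

```python
import numpy as np

def laplacian(n, edges):
    # Weighted graph Laplacian L = D - A.
    L = np.zeros((n, n))
    for u, v, w in edges:
        L[u, u] += w; L[v, v] += w
        L[u, v] -= w; L[v, u] -= w
    return L

def effective_conductance(L, s, t):
    # c_st = 1 / R_eff(s, t) with R_eff = (e_s - e_t)^T L^+ (e_s - e_t).
    x = np.zeros(L.shape[0])
    x[s], x[t] = 1.0, -1.0
    return 1.0 / (x @ np.linalg.pinv(L) @ x)

# On a path, c_st = 1/(number of edges) while k_st = 1.
n = 6
L = laplacian(n, [(i, i + 1, 1.0) for i in range(n - 1)])
print(effective_conductance(L, 0, n - 1))   # ≈ 0.2 for a 5-edge path
```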
Motivation for thin trees
Goddyn’s Conjecture: every graph has an O(1/K)-thin tree.
Consequences:
- O(1)-approximation for asymmetric TSP
- Jaeger’s conjecture on nowhere-zero 3-flows [solved]
- Goddyn-Seymour conjecture on nowhere-zero 2+ε flows
Spectrally thin trees may be a useful step towards thin trees
(Figure: a graph with edge weights u and an unweighted spanning tree.)
Intriguing Phenomenon
A cut-sparsifier result involving connectivities holds
seemingly if and only if
the analogous spectral-sparsifier result involving conductances holds.
Uniform sampling
Recall K = min { k_e : e∈E } and C = min { c_e : e∈E }.
Karger’s skeletons:
- Define p = O( ε^{-2}·log(n) / K )
- Sample every edge e with probability p
- Give every sampled edge e weight 1/p
Resulting graph is a (1+ε)-cut sparsifier, and the number of edges shrinks by a factor of O(p), whp.
Spectral version [unpublished]: replace K by C and “cut” by “spectral”.
Assume the graph is unweighted.

                          Cut sparsifier          Spectral sparsifier
                          (connectivity weights)  (conductance weights)
  Uniform sampling        Karger                  [unpublished]
Non-uniform sampling
Let k̄_e be the “strong connectivity” of edge e.
Benczur-Karger:
- Define p_e = O( ε^{-2}·log(n) / k̄_e )
- Sample every edge e with probability p_e
- Give every sampled edge e weight 1/p_e
Resulting graph is a (1+ε)-cut sparsifier and the number of sampled edges is O(n·log(n)·ε^{-2}), whp.
Fung-Hariharan-Harvey-Panigrahi: replace strong connectivity k̄_e by edge connectivity k_e, and log(n) by log²(n).
Open question: improve log²(n) to log(n).
Non-uniform sampling
Let k̄_e be the “strong connectivity” of edge e.
Benczur-Karger:
- Define p_e = O( ε^{-2}·log(n) / k̄_e )
- Sample every edge e with probability p_e
- Give every sampled edge e weight 1/p_e
Resulting graph is a (1+ε)-cut sparsifier and the number of sampled edges is O(n·log(n)·ε^{-2}), whp.
Spielman-Srivastava: replace k̄_e by c_e and “cut” by “spectral”.
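The Spielman-Srivastava recipe above can be sketched as conductance-based importance sampling. This is an illustrative toy: the constant in p_e is not the tuned one from the paper, and no concentration guarantee is checked here.

```python
import numpy as np

def spectral_sparsify(n, edges, eps, rng):
    # Keep edge e with probability p_e = min(1, log(n) / (eps^2 * c_e)),
    # where c_e is the effective conductance across e, and reweight kept
    # edges by 1/p_e so the sparsifier is unbiased.
    L = np.zeros((n, n))
    for u, v in edges:
        L[u, u] += 1; L[v, v] += 1; L[u, v] -= 1; L[v, u] -= 1
    Lp = np.linalg.pinv(L)
    kept = []
    for u, v in edges:
        r_e = Lp[u, u] + Lp[v, v] - 2 * Lp[u, v]  # 1/c_e (eff. resistance)
        p = min(1.0, r_e * np.log(n) / eps ** 2)
        if rng.random() < p:
            kept.append((u, v, 1.0 / p))          # reweight by 1/p_e
    return kept

rng = np.random.default_rng(0)
n = 30
edges = [(i, j) for i in range(n) for j in range(i + 1, n)]  # complete graph
kept = spectral_sparsify(n, edges, eps=1.0, rng=rng)
print(len(kept), "of", len(edges), "edges kept")
```

On a complete graph every edge has high conductance, so most edges are dropped.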
                          Cut sparsifier          Spectral sparsifier
                          (connectivity weights)  (conductance weights)
  Uniform sampling        Karger                  [unpublished]
  Non-uniform sampling    Benczur-Karger;         Spielman-Srivastava
                          Fung-Hariharan-Harvey-Panigrahi
Thin trees
Asadpour et al.:
- Pick a special distribution on spanning trees such that every edge e has Pr[ e in tree ] = Θ( 1/K )
- Give every edge e in the tree weight K
Resulting tree is an O(log n / log log n)-cut thin tree. The maximum entropy distribution works.
Chekuri et al.: pipage rounding also works.
Harvey-Olver: replace K by c_e and “cut thin” by “spectrally thin”.
                          Cut sparsifier          Spectral sparsifier
                          (connectivity weights)  (conductance weights)
  Uniform sampling        Karger                  [unpublished]
  Non-uniform sampling    Benczur-Karger;         Spielman-Srivastava
                          Fung-Hariharan-Harvey-Panigrahi
  O(log n / log log n)    Asadpour et al.;        Harvey-Olver
  thin trees              Chekuri-Vondrak-Zenklusen
Linear-size sparsifiers
Batson-Spielman-Srivastava: can efficiently construct a (1+ε)-spectral sparsifier with O( n·ε^{-2} ) edges such that, “on average”, the weight of each edge e is Θ( ε²·c_e ).
Marcus-Spielman-Srivastava: remove “on average”, but not efficient.
Open question: replace c_e by k_e and “spectral” by “cut”?
                          Cut sparsifier          Spectral sparsifier
                          (connectivity weights)  (conductance weights)
  Uniform sampling        Karger                  [unpublished]
  Non-uniform sampling    Benczur-Karger;         Spielman-Srivastava
                          Fung-Hariharan-Harvey-Panigrahi
  O(log n / log log n)    Asadpour et al.;        Harvey-Olver
  thin trees              Chekuri-Vondrak-Zenklusen
  Linear-size sparsifiers ?                       Batson-Spielman-Srivastava;
                                                  Marcus-Spielman-Srivastava
Optimal thin trees
Suppose we have a (1+ε)-spectral sparsifier such that the weight of every edge is w_e = Θ( ε²·c_e ).
- Any spanning tree T (with weights w) is (1+ε)-spectrally thin.
- Or, for constant ε, the unweighted tree T is O(1/C)-spectrally thin.
The same argument works if we replace c_e by k_e, C by K, and “spectrally thin” by “cut thin”.
(Figure: graph with weights u; tree T with weights w.)
                          Cut sparsifier          Spectral sparsifier
                          (connectivity weights)  (conductance weights)
  Uniform sampling        Karger                  [unpublished]
  Non-uniform sampling    Benczur-Karger;         Spielman-Srivastava
                          Fung-Hariharan-Harvey-Panigrahi
  O(log n / log log n)    Asadpour et al.;        Harvey-Olver
  thin trees              Chekuri-Vondrak-Zenklusen
  Linear-size sparsifiers ?                       Batson-Spielman-Srivastava;
                                                  Marcus-Spielman-Srivastava
  O(1) thin trees         ?                       Corollary of MSS
Given a graph G with eff. conductances ≥ C, find an unweighted spanning subtree T with L_T ⪯ (α/C)·L_G.
Easy lower bound: α ≥ 1.5. Easy upper bound: α = O(log n), algorithmic (even deterministic).
Main Theorem: α = O(log n / log log n), algorithmic (even deterministic).
Theorem [MSS]: α = O(1), existential result only.
Spectrally Thin Trees
Given an (unweighted) graph G with eff. conductances ≥ C, we can find an unweighted tree T with L_T ⪯ O(log n / log log n)·(1/C)·L_G.
Spectrally Thin Trees
Proof overview:
1. Show independent sampling gives spectral thinness, but not a tree.
   ► Sample every edge e independently with prob. x_e = 1/c_e
2. Show dependent sampling gives a tree, and spectral thinness still works.
Matrix Concentration
Given any random n×n symmetric matrices Y_1,…,Y_m: is there an analog of the Chernoff bound showing that Σ_i Y_i is probably “close” to E[Σ_i Y_i]?
Theorem [Tropp ’12]: Let Y_1,…,Y_m be independent PSD matrices of size n×n. Let Y = Σ_i Y_i and Z = E[Y]. Suppose Y_i ⪯ R·Z a.s. Then Y ⪯ α·Z holds with probability ≥ 1 − n·(e^{α−1}/α^α)^{1/R}.
Define sampling probabilities x_e = 1/c_e. It is known that Σ_e x_e = n−1.
Claim: Independent sampling gives T ⊆ E with E[|T|] = n−1 and Σ_{e∈T} c_e·L_e ⪯ α·L_G whp, for α = O(log n / log log n).
Theorem [Tropp ’12]: Let M_1,…,M_m be n×n PSD matrices. Let D(x) be a product distribution on {0,1}^m with marginals x. Let Z = Σ_i x_i·M_i. Suppose M_i ⪯ Z. Then Σ_{i∈T} M_i ⪯ α·Z holds with probability ≥ 1 − n·(e^{α−1}/α^α) for T ∼ D(x).
Define M_e = c_e·L_e. Then Z = L_G and M_e ⪯ Z holds. Setting α = 6·log n / log log n, we get Σ_{e∈T} c_e·L_e ⪯ α·L_G whp. But T is not a tree!
Independent sampling
(Here L_e is the Laplacian of the single edge e. The facts Σ_e 1/c_e = n−1 and c_e·L_e ⪯ L_G are the only properties of conductances used.)
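The two facts about conductances used above (Σ_e x_e = n − 1, i.e. Foster’s theorem, and c_e·L_e ⪯ L_G) can be verified numerically; the small graph here is my own example, not from the deck:

```python
import numpy as np

n = 8
# An 8-cycle plus three chords.
edges = [(i, (i + 1) % n) for i in range(n)] + [(0, 3), (1, 5), (2, 6)]

L = np.zeros((n, n))
for u, v in edges:
    L[u, u] += 1; L[v, v] += 1; L[u, v] -= 1; L[v, u] -= 1
Lp = np.linalg.pinv(L)

Z = np.zeros((n, n))
total_x = 0.0
for u, v in edges:
    c_e = 1.0 / (Lp[u, u] + Lp[v, v] - 2 * Lp[u, v])  # effective conductance
    Le = np.zeros((n, n))
    Le[u, u] = Le[v, v] = 1.0; Le[u, v] = Le[v, u] = -1.0
    Me = c_e * Le
    # Fact 1: M_e = c_e * L_e ⪯ L_G (no negative eigenvalues, up to roundoff).
    assert np.linalg.eigvalsh(L - Me).min() > -1e-8
    total_x += 1.0 / c_e                 # x_e = 1/c_e
    Z += (1.0 / c_e) * Me                # Z = Σ_e x_e * M_e

# Fact 2: Z = L_G exactly, and Σ_e x_e = n - 1 (Foster's theorem).
assert np.allclose(Z, L) and abs(total_x - (n - 1)) < 1e-8
print("both facts verified")
```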
Given an (unweighted) graph G with eff. conductances ≥ C, we can find an unweighted tree T with L_T ⪯ O(log n / log log n)·(1/C)·L_G.
Spectrally Thin Trees
Proof overview:
1. Show independent sampling gives spectral thinness, but not a tree.
   ► Sample every edge e independently with prob. x_e = 1/c_e
2. Show dependent sampling gives a tree, and spectral thinness still works.
   ► Run pipage rounding to get tree T with Pr[ e∈T ] = x_e = 1/c_e
Pipage rounding [Ageev-Sviridenko ’04, Srinivasan ’01, Calinescu et al. ’07, Chekuri et al. ’09]
Let P be any matroid polytope, e.g., the convex hull of characteristic vectors of spanning trees. Given a fractional x ∈ P:
- Find coordinates a and b s.t. the line z ↦ x + z·(e_a − e_b) stays in the current face
- Find the two points where the line leaves P
- Randomly choose one of those points s.t. the expectation is x
- Repeat until x = χ_T is integral
x is a martingale: the expectation of the final χ_T is the original fractional x.
(Figure: the matroid polytope P, the fractional point x, and vertices χ_T1,…,χ_T6.)
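The swap step above can be sketched on the simplest matroid polytope, the uniform matroid { x ∈ [0,1]^m : Σ_i x_i = k }. This is a toy stand-in for the spanning-tree polytope; the coordinate choices and tolerances are mine:

```python
import random

def pipage_round(x, rng=random.random):
    # Repeatedly pick two fractional coordinates a, b and move along
    # e_a - e_b to a boundary point, choosing the endpoint randomly so
    # that the expectation stays x (the martingale property).
    x = list(x)
    eps = 1e-12
    while True:
        frac = [i for i, v in enumerate(x) if eps < v < 1 - eps]
        if len(frac) < 2:               # done: sum(x) integral => no lone frac
            return [round(v) for v in x]
        a, b = frac[0], frac[1]
        z_up = min(1 - x[a], x[b])      # furthest move with z > 0
        z_dn = -min(x[a], 1 - x[b])     # furthest move with z < 0
        p_up = -z_dn / (z_up - z_dn)    # chosen so that E[z] = 0
        z = z_up if rng() < p_up else z_dn
        x[a] += z
        x[b] -= z

random.seed(0)
x = [0.5, 0.5, 0.75, 0.25]              # sum = 2, so we round to two ones
print(pipage_round(x))                  # a 0/1 vector whose entries sum to 2
```

Each swap makes at least one coordinate integral, so the loop terminates after at most m iterations.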
Say f : R^m → R is concave under swaps if z ↦ f( x + z·(e_a − e_b) ) is concave ∀x∈P, ∀a,b∈[m]. Let X_0 be the initial point and χ_T the final point visited by pipage rounding.
Claim: If f is concave under swaps then E[f(χ_T)] ≤ f(X_0). [Jensen]
Let E ⊆ {0,1}^m be an event. Let g : [0,1]^m → R be a pessimistic estimator for E, i.e., 1_E(x) ≤ g(x) for every x ∈ {0,1}^m.
Claim: Suppose g is concave under swaps. Then Pr[ χ_T ∈ E ] ≤ g(X_0).
Pipage rounding and concavity
(e.g. f is multilinear extension of a supermodular function)
Chernoff Bound
Fix any w, x ∈ [0,1]^m and let μ = w^T x. Define g_{t,θ}(x) = e^{−tθ}·Π_i ( 1 + x_i·(e^{t·w_i} − 1) ). Then, for X ∼ D(x), Pr[ w^T X ≥ θ ] ≤ g_{t,θ}(x).
Claim: g_{t,θ} is concave under swaps. [Elementary calculus]
Let X_0 be the initial point and χ_T the final point visited by pipage rounding. Let μ = w^T X_0. Then Pr[ w^T χ_T ≥ θ ] ≤ g_{t,θ}(X_0): the bound achieved by independent sampling is also achieved by pipage rounding.
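The claim that g_{t,θ} dominates the tail can be checked by brute force on a tiny instance. Here g_{t,θ}(x) = e^{−tθ}·Π_i(1 + x_i(e^{t·w_i} − 1)) is the standard Chernoff pessimistic estimator; the exact formula on the slide was lost in extraction, so this form is a reconstruction:

```python
import itertools
import math

def g(t, theta, w, x):
    # Chernoff pessimistic estimator: E[e^{t w·X}] * e^{-t*theta}
    # for independent X_i ~ Bernoulli(x_i).
    val = math.exp(-t * theta)
    for wi, xi in zip(w, x):
        val *= 1 + xi * (math.exp(t * wi) - 1)
    return val

w = [1.0, 1.0, 1.0, 1.0]
x = [0.5, 0.5, 0.5, 0.5]
theta = 3.0
# Exact tail Pr[w^T X >= theta] by enumerating {0,1}^4.
tail = sum(
    math.prod(xi if b else 1 - xi for b, xi in zip(bits, x))
    for bits in itertools.product([0, 1], repeat=4)
    if sum(b * wi for b, wi in zip(bits, w)) >= theta
)
g_val = g(1.0, theta, w, x)
print(tail, g_val)   # tail = Pr[Bin(4, 1/2) >= 3] = 5/16, and tail <= g_val
```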
Matrix Pessimistic Estimators
Main Theorem: g_{t,θ} is concave under swaps.
Theorem [Tropp ’12]: Let M_1,…,M_m be n×n PSD matrices. Let D(x) be a product distribution on {0,1}^m with marginals x. Let Z = Σ_i x_i·M_i and suppose M_i ⪯ Z. Let g_{t,θ}(x) = e^{−tθ}·tr exp( Σ_i log( I + x_i·(e^{t·M_i} − I) ) ), the matrix pessimistic estimator. Then Pr_{T∼D(x)}[ λ_max(Σ_{i∈T} M_i) ≥ θ ] ≤ g_{t,θ}(x), and g_{t,θ}(x) ≤ n·(e^{α−1}/α^α) for suitable t and θ = α.
The bound achieved by independent sampling is also achieved by pipage rounding.
Given an (unweighted) graph G with eff. conductances ≥ C, we can find an unweighted tree T with L_T ⪯ O(log n / log log n)·(1/C)·L_G.
Spectrally Thin Trees
Proof overview:
1. Show independent sampling gives spectral thinness, but not a tree.
   ► Sample every edge e independently with prob. x_e = 1/c_e
2. Show dependent sampling gives a tree, and spectral thinness still works.
   ► Run pipage rounding to get tree T with Pr[ e∈T ] = x_e = 1/c_e
Matrix Analysis
Matrix concentration inequalities are usually proven via sophisticated inequalities in matrix analysis:
- Rudelson: non-commutative Khinchine inequality
- Ahlswede-Winter: Golden-Thompson inequality (if A, B are symmetric, then tr(e^{A+B}) ≤ tr(e^A·e^B))
- Tropp: Lieb’s concavity inequality [1973] (if A, B are symmetric and C is PD, then z ↦ tr exp( A + log(C+zB) ) is concave)
Key technical result: a new variant of Lieb’s theorem: if A is symmetric, B_1, B_2 are PSD, and C_1, C_2 are PD, then z ↦ tr exp( A + log(C_1+z·B_1) + log(C_2−z·B_2) ) is concave.
Questions
- O(1/C)-spectrally thin trees exist. Is there an algorithm?
- Does sampling by edge connectivities give a cut sparsifier with O(n·log n) edges?
- Do O(1/K)-cut thin trees exist? What if we consider only the min cuts?
- Do cut sparsifiers with O(n·ε^{-2}) edges exist for which every edge e has weight Θ(ε²·k_e)?