TRANSCRIPT
-
Introduction Main Theorem Proof Discussion
The Prokofiev-Svistunov-Ising process is rapidly mixing
Tim Garoni
School of Mathematical Sciences, Monash University
-
Collaborators
◮ Andrea Collevecchio (Monash University)
◮ Tim Hyndman (Monash University)
◮ Daniel Tokarev (Monash University)
-
Ising model - Motivation
◮ Introduced in 1920 as a model for ferromagnetism
◮ Hope was to explain the Curie transition:
◮ Place iron in a magnetic field
◮ Increase the field to a high value
◮ Slowly reduce the field to zero
◮ There exists a critical temperature Tc below which the iron remains magnetized
◮ During the mid 1930s it was realized that the Ising model also describes phase transitions in other physical systems
◮ Gas-liquid critical phenomena (lattice gas)
◮ Binary alloys
◮ Now a paradigm for order/disorder transitions with applications to
◮ Economics (opinion formation)
◮ Finance (stock price dynamics)
◮ Biology (hemoglobin, DNA, . . . )
◮ Image processing (archetypal Markov random field)
◮ Continues to play a fundamental role in theoretical/mathematical studies of phase transitions and critical phenomena
-
The Ising model
◮ Finite graph G = (V,E)
◮ Configuration space ΣG = {−1,+1}V
◮ Measure
P(σ) = (1/Z) exp( β ∑_{ij∈E} σi σj + h ∑_{i∈V} σi )
◮ Inverse temperature β
◮ External field h
◮ Z is the partition function
◮ Main physical interest is in certain expectations such as:
◮ Two-point correlation cov(σu, σv) = E(σu σv) − E(σu) E(σv)
◮ Susceptibility χ = (1/|V|) ∑_{u,v∈V} cov(σu, σv)
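On a small graph these quantities can be computed by brute-force enumeration over all 2^|V| spin configurations. A minimal Python sketch; the 4-cycle and the values of β and h are illustrative choices, not from the talk:

```python
import itertools
import math

def ising_probs(n, E, beta, h):
    """P(sigma) = exp(beta * sum_{ij in E} s_i s_j + h * sum_i s_i) / Z."""
    weights = {}
    for sigma in itertools.product([-1, +1], repeat=n):
        energy = beta * sum(sigma[i] * sigma[j] for i, j in E) + h * sum(sigma)
        weights[sigma] = math.exp(energy)
    Z = sum(weights.values())  # the partition function
    return {s: w / Z for s, w in weights.items()}

def susceptibility(n, E, beta, h):
    """chi = (1/|V|) * sum_{u,v in V} cov(sigma_u, sigma_v)."""
    P = ising_probs(n, E, beta, h)
    mean = [sum(p * s[u] for s, p in P.items()) for u in range(n)]
    cov = lambda u, v: sum(p * s[u] * s[v] for s, p in P.items()) - mean[u] * mean[v]
    return sum(cov(u, v) for u in range(n) for v in range(n)) / n

# Illustrative example: the 4-cycle at beta = 0.4, h = 0
E = [(0, 1), (1, 2), (2, 3), (3, 0)]
P = ising_probs(4, E, beta=0.4, h=0.0)
```

At h = 0 the measure is invariant under the global spin flip σ → −σ, which the enumeration reproduces exactly.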
-
Phase transition
Let Λn = {−n, . . . , n}^d ⊂ Z^d, and consider
Σ+_Λn = {σ ∈ {−1, 1}^{Z^d} : σi = +1 for all i ∉ Λn}
The sequence of Gibbs measures on Λn converges: P+_{Λn,β,h} ⇒ P+_{β,h}
Analogous construction for “minus” boundary conditions
Theorem (Aizenman, Duminil-Copin, Sidoravicius (2014))
1. If d = 1, then for any (β, h) ∈ [0,∞) × R there is a unique infinite-volume Gibbs measure
2. If d ≥ 2 and h ≠ 0, then for any β ∈ [0,∞) there is a unique infinite-volume Gibbs measure
3. If d ≥ 2 and h = 0, there exists βc(d) ∈ (0,∞) such that:
a. If β ≤ βc, there is a unique infinite-volume Gibbs measure
b. If β > βc, then P+_{β,0} ≠ P−_{β,0}
-
Exact solutions
◮ Gn = Zn solved by Ising (1925)
◮ Gn = Kn solved by Husimi (1953) and Temperley (1954)
◮ Gn = Zn^2 solved by Peierls (1936), Kramers & Wannier (1941), Onsager (1944), Yang (1952), Kasteleyn (1963), Fisher (1966)
Smirnov (2006) proved critical interfaces between +/− components have a conformally invariant limit
◮ SLE(3) - Schramm-Löwner Evolution
Kasteleyn and Fisher (K-F) related the Ising partition function to perfect matchings
◮ Elegant solution on planar graphs in terms of Pfaffians
◮ Only tractable on graphs of bounded (small) genus
◮ Method not tractable for Gn = Zn^d with d ≥ 3
-
Computational Complexity
◮ PARTITION:
◮ Input: Finite graph G = (V,E), and parameters β, h
◮ Output: Ising partition function
◮ CORRELATION:
◮ Input: Finite graph G = (V,E), a pair u, v ∈ V, and parameters β, h
◮ Output: Ising two-point correlation function
◮ SUSCEPTIBILITY:
◮ Input: Finite graph G = (V,E), and parameters β, h
◮ Output: Ising susceptibility
Proposition (Jerrum-Sinclair 1993; Sinclair-Srivastava 2014)
PARTITION, SUSCEPTIBILITY and CORRELATION are #P-hard.
-
Markov-chain Monte Carlo
◮ Construct a transition matrix P on Ω which:
◮ Is ergodic
◮ Has stationary distribution π(·)
◮ Generate random samples with (approximate) distribution π
◮ Estimate π expectations using sample means
d(t) := max_{x∈Ω} ‖P^t(x, ·) − π(·)‖ ≤ Cα^t, for some α ∈ (0, 1)
◮ The mixing time quantifies the rate of convergence
tmix(δ) := min{t : d(t) ≤ δ}
◮ How does tmix depend on the size of Ω?
◮ For Ising the size of a problem instance is n = |V|
◮ If tmix = O(poly(n)) we have rapid mixing
◮ |Ω| = 2^n, so rapid mixing implies only poly(log |Ω|)-many states need be visited to reach approximate stationarity
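These definitions can be made concrete on a toy chain. The sketch below computes d(t) and tmix(δ) by exact matrix powering; the lazy random walk on a 3-cycle is an illustrative example, not from the talk:

```python
def tv_distance(p, q):
    """Total variation distance: (1/2) * sum_i |p_i - q_i|."""
    return 0.5 * sum(abs(a - b) for a, b in zip(p, q))

def mat_mul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def mixing_time(P, pi, delta=0.25):
    """Smallest t with d(t) = max_x ||P^t(x, .) - pi||_TV <= delta."""
    n = len(pi)
    Pt = [[float(i == j) for j in range(n)] for i in range(n)]  # P^0 = identity
    t = 0
    while max(tv_distance(row, pi) for row in Pt) > delta:
        Pt = mat_mul(Pt, P)
        t += 1
    return t

# Lazy random walk on a 3-cycle; its stationary distribution is uniform
P = [[0.5, 0.25, 0.25], [0.25, 0.5, 0.25], [0.25, 0.25, 0.5]]
pi = [1 / 3] * 3
```

For this tiny chain a single step already brings every row of P within total variation 1/4 of π.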
-
Markov chains for the Ising model
◮ Glauber process (arbitrary field) 1963
◮ Lubetzky & Sly (2012): Rapidly mixing on boxes in Z^2 at h = 0 iff β ≤ βc
◮ Levin, Luczak & Peres (2010): Precise asymptotics on Kn for h = 0
◮ Not used by computational physicists
◮ Swendsen-Wang process (zero field) 1987
◮ Simulates coupling of Ising and Fortuin-Kasteleyn models
◮ Long, Nachmias, Ding & Peres (2014): Precise asymptotics on Kn
◮ Ullrich (2014): Rapidly mixing on boxes in Z^2 at all β ≠ βc
◮ Empirically fast. State of the art from 1987 until recently
◮ Jerrum-Sinclair process (positive field) 1993
◮ Simulates high-temperature graphs for h > 0
◮ Proved rapidly mixing on all graphs at all temperatures for all h > 0
◮ No empirical results - not used by computational physicists
◮ Prokofiev-Svistunov worm process (zero field) 2001
◮ No rigorous results currently known
◮ Empirically, best method known for susceptibility (Deng, G., Sokal)
◮ Widely used by computational physicists
-
Mixing time bound for PS process
Theorem (Collevecchio, G., Hyndman, Tokarev 2014+)
For any temperature, the mixing time of the PS process on a graph G = (V,E) satisfies
tmix(δ) = O(∆(G) m²n⁵)
with n = |V|, m = |E| and ∆(G) the maximum degree.
This is the only Markov chain for the Ising model currently known to be rapidly mixing at the critical point for boxes in Z^d
-
High-temperature expansions and the PS measure
◮ Let Ck = {A ⊆ E : (V,A) has k odd vertices}
◮ Let CW = {A ⊆ E : W is the set of odd vertices in (V,A)}
◮ PS measure defined on the configuration space C0 ∪ C2:
π(A) ∝ x^|A| · { n, A ∈ C0; 2, A ∈ C2 }
◮ If x = tanh β then:
◮ Ising susceptibility χ = 1/π(C0)
◮ Ising two-point correlation function E(σu σv) = (n/2) · π(Cuv)/π(C0)
◮ PS measure is the stationary distribution of the PS process
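On a small graph the classes Ck and the PS measure can be enumerated exhaustively. A sketch; the 4-cycle and the value of x are illustrative choices:

```python
import itertools

def odd_vertices(n, A):
    """Set of odd-degree vertices of the subgraph ([n], A)."""
    deg = [0] * n
    for u, v in A:
        deg[u] += 1
        deg[v] += 1
    return frozenset(v for v in range(n) if deg[v] % 2 == 1)

def ps_measure(n, E, x):
    """pi(A) proportional to x^|A| times (n if A in C0, 2 if A in C2)."""
    weights = {}
    for r in range(len(E) + 1):
        for A in itertools.combinations(E, r):
            k = len(odd_vertices(n, A))
            if k == 0:
                weights[A] = n * x ** len(A)
            elif k == 2:
                weights[A] = 2 * x ** len(A)
    Z = sum(weights.values())
    return {A: w / Z for A, w in weights.items()}

# Illustrative example: the 4-cycle with x = 0.5
E = [(0, 1), (1, 2), (2, 3), (3, 0)]
pi = ps_measure(4, E, 0.5)
```

Subsets with four or more odd vertices are simply excluded from the configuration space.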
-
Fully-polynomial randomized approximation schemes
Definition
An fpras for an Ising property f is a randomized algorithm such that for given G, T, and ξ, η ∈ (0, 1) the output Y satisfies
P[(1 − ξ)f ≤ Y ≤ (1 + ξ)f ] ≥ 1 − η
and the running time is bounded by a polynomial in n, ξ⁻¹, η⁻¹.
Combine rapid mixing of the PS process with the general fpras construction of Jerrum-Sinclair (1993):
◮ Let A ⊆ C0 ∪ C2 with π(A) ≥ 1/S(n) for some polynomial S(n)
◮ The following defines an fpras for π(A):
◮ Let R(G) be our upper bound for tmix(δ) with δ = ξ/[8S(n)]
◮ Let Y = 1_A
◮ Run the PS process T = R(G) time steps and record Y_T
◮ Independently generate 72 ξ⁻² S(n) such samples and take the sample mean
◮ Repeat 6 lg⌈1/η⌉ + 1 such experiments and take the median
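The mean-then-median construction can be sketched end-to-end. In place of samples from the PS process, the stand-in below draws i.i.d. Bernoulli indicators with success probability p, a deliberate simplification; the sample-size and repetition counts follow the recipe above:

```python
import math
import random
import statistics

def fpras_estimate(draw_indicator, xi, eta, S):
    """Mean of ceil(72 * xi^-2 * S) indicator samples, then the median
    over 6*ceil(lg(1/eta)) + 1 independent repetitions."""
    n_samples = math.ceil(72 * S / xi ** 2)
    n_experiments = 6 * math.ceil(math.log2(1 / eta)) + 1
    means = [sum(draw_indicator() for _ in range(n_samples)) / n_samples
             for _ in range(n_experiments)]
    return statistics.median(means)

random.seed(0)
p = 0.3  # stands in for pi(A); in the talk the samples come from the PS process
est = fpras_estimate(lambda: random.random() < p, xi=0.1, eta=0.1, S=1 / p)
```

Each sample mean lands within the relative-error window with constant probability; the median over the repetitions boosts this to probability at least 1 − η.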
-
Prokofiev-Svistunov process
PS proposals:
◮ If A ∈ C0:
◮ Pick uniformly random u ∈ V
◮ Pick uniformly random v ∼ u
◮ Propose A → A△uv
◮ If A ∈ C2:
◮ Pick random odd u ∈ V
◮ Pick uniformly random v ∼ u
◮ Propose A → A△uv
Metropolize the proposals with respect to the PS measure π(·)
-
Proof of rapid mixing
◮ We use the path method
◮ Consider the transition graph G = (V, E) of the PS process
◮ V = C0 ∪ C2
◮ E = {AA′ : P(A,A′) > 0}
◮ Specify paths γI,F in G between pairs of states I, F
◮ If no transition is used too often, the process is rapidly mixing
◮ In the most general case, one specifies paths between each pair I, F ∈ C0 ∪ C2
◮ For the PS process, it is convenient to only specify paths from C2 to C0
[Figure: a canonical path from a state I ∈ C2 to a state F ∈ C0]
-
Lemma (Jerrum-Sinclair-Vigoda (2004))
Consider a MC with state space Ω and stationary distribution π. Let S ⊂ Ω, and specify paths Γ = {γI,F : I ∈ S, F ∈ S^c}. Then
tmix(δ) ≤ log( 1 / [δ min_{A∈Ω} π(A)] ) · [2 + 4( π(S)/π(S^c) + π(S^c)/π(S) )] · ϕ(Γ)
where
ϕ(Γ) := ( max_{(I,F)∈S×S^c} |γI,F| ) · max_{AA′∈E} ∑_{(I,F)∈S×S^c : γI,F ∋ AA′} π(I)π(F) / [π(A)P(A,A′)]
◮ We choose S = C2. Elementary to show:
(2/n) · mx/(mx + 1) ≤ π(C2)/π(C0) ≤ n − 1, and π(A) ≥ (x/8)^m for all A
◮ Therefore:
tmix(δ) ≤ ( log(8/x) − (log δ)/m ) · ( 3 + 1/(mx) ) · 2mn · ϕ(Γ)
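The two elementary bounds can be checked by exhaustive enumeration on a small graph; K4 and x = 0.5 are illustrative choices:

```python
import itertools

def odd_count(n, A):
    """Number of odd-degree vertices of ([n], A)."""
    deg = [0] * n
    for u, v in A:
        deg[u] += 1
        deg[v] += 1
    return sum(d % 2 for d in deg)

def ps_class_probs(n, E, x):
    """Return (pi(C0), pi(C2), min_A pi(A)) under the PS measure."""
    w0 = w2 = 0.0
    single = []
    for r in range(len(E) + 1):
        for A in itertools.combinations(E, r):
            k = odd_count(n, A)
            if k == 0:
                w = n * x ** len(A)
                w0 += w
                single.append(w)
            elif k == 2:
                w = 2 * x ** len(A)
                w2 += w
                single.append(w)
    Z = w0 + w2
    return w0 / Z, w2 / Z, min(single) / Z

n, x = 4, 0.5
E = list(itertools.combinations(range(n), 2))  # complete graph K4
m = len(E)
p0, p2, pmin = ps_class_probs(n, E, x)
```
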
-
Choice of Canonical Paths
◮ To transition from I to F◮ Flip each e ∈ I△F
◮ If (I, F ) ∈ C2 × C0 thenI△F ∈ C2
◮ I△F = A0 ∪(
⋃
i≥1 Ai
)
◮ A0 is a path◮ Ai disjoint cycles
I F
◮ γI,F defined by:◮ Traverse A0◮ . . . then A1◮ . . . then A2◮ . . .
-
Introduction Main Theorem Proof Discussion
Choice of Canonical Paths
◮ To transition from I to F◮ Flip each e ∈ I△F
◮ If (I, F ) ∈ C2 × C0 thenI△F ∈ C2
◮ I△F = A0 ∪(
⋃
i≥1 Ai
)
◮ A0 is a path◮ Ai disjoint cycles
I F
◮ γI,F defined by:◮ Traverse A0◮ . . . then A1◮ . . . then A2◮ . . .
-
Introduction Main Theorem Proof Discussion
Choice of Canonical Paths
◮ To transition from I to F◮ Flip each e ∈ I△F
◮ If (I, F ) ∈ C2 × C0 thenI△F ∈ C2
◮ I△F = A0 ∪(
⋃
i≥1 Ai
)
◮ A0 is a path◮ Ai disjoint cycles
I F
◮ γI,F defined by:◮ Traverse A0◮ . . . then A1◮ . . . then A2◮ . . .
-
Introduction Main Theorem Proof Discussion
Choice of Canonical Paths
◮ To transition from I to F◮ Flip each e ∈ I△F
◮ If (I, F ) ∈ C2 × C0 thenI△F ∈ C2
◮ I△F = A0 ∪(
⋃
i≥1 Ai
)
◮ A0 is a path◮ Ai disjoint cycles
I F
◮ γI,F defined by:◮ Traverse A0◮ . . . then A1◮ . . . then A2◮ . . .
-
Introduction Main Theorem Proof Discussion
Choice of Canonical Paths
◮ To transition from I to F◮ Flip each e ∈ I△F
◮ If (I, F ) ∈ C2 × C0 thenI△F ∈ C2
◮ I△F = A0 ∪(
⋃
i≥1 Ai
)
◮ A0 is a path◮ Ai disjoint cycles
I F
◮ γI,F defined by:◮ Traverse A0◮ . . . then A1◮ . . . then A2◮ . . .
-
Introduction Main Theorem Proof Discussion
Choice of Canonical Paths
◮ To transition from I to F◮ Flip each e ∈ I△F
◮ If (I, F ) ∈ C2 × C0 thenI△F ∈ C2
◮ I△F = A0 ∪(
⋃
i≥1 Ai
)
◮ A0 is a path◮ Ai disjoint cycles
I F
◮ γI,F defined by:◮ Traverse A0◮ . . . then A1◮ . . . then A2◮ . . .
-
Introduction Main Theorem Proof Discussion
Choice of Canonical Paths
◮ To transition from I to F◮ Flip each e ∈ I△F
◮ If (I, F ) ∈ C2 × C0 thenI△F ∈ C2
◮ I△F = A0 ∪(
⋃
i≥1 Ai
)
◮ A0 is a path◮ Ai disjoint cycles
I F
◮ γI,F defined by:◮ Traverse A0◮ . . . then A1◮ . . . then A2◮ . . .
-
Introduction Main Theorem Proof Discussion
Choice of Canonical Paths
◮ To transition from I to F◮ Flip each e ∈ I△F
◮ If (I, F ) ∈ C2 × C0 thenI△F ∈ C2
◮ I△F = A0 ∪(
⋃
i≥1 Ai
)
◮ A0 is a path◮ Ai disjoint cycles
I F
◮ γI,F defined by:◮ Traverse A0◮ . . . then A1◮ . . . then A2◮ . . .
-
Introduction Main Theorem Proof Discussion
Choice of Canonical Paths
◮ To transition from I to F◮ Flip each e ∈ I△F
◮ If (I, F ) ∈ C2 × C0 thenI△F ∈ C2
◮ I△F = A0 ∪(
⋃
i≥1 Ai
)
◮ A0 is a path◮ Ai disjoint cycles
I F
◮ γI,F defined by:◮ Traverse A0◮ . . . then A1◮ . . . then A2◮ . . .
-
Introduction Main Theorem Proof Discussion
Choice of Canonical Paths
◮ To transition from I to F◮ Flip each e ∈ I△F
◮ If (I, F ) ∈ C2 × C0 thenI△F ∈ C2
◮ I△F = A0 ∪(
⋃
i≥1 Ai
)
◮ A0 is a path◮ Ai disjoint cycles
I F
◮ γI,F defined by:◮ Traverse A0◮ . . . then A1◮ . . . then A2◮ . . .
-
Introduction Main Theorem Proof Discussion
Choice of Canonical Paths
◮ To transition from I to F◮ Flip each e ∈ I△F
◮ If (I, F ) ∈ C2 × C0 thenI△F ∈ C2
◮ I△F = A0 ∪(
⋃
i≥1 Ai
)
◮ A0 is a path◮ Ai disjoint cycles
I F
◮ γI,F defined by:◮ Traverse A0◮ . . . then A1◮ . . . then A2◮ . . .
-
Introduction Main Theorem Proof Discussion
Choice of Canonical Paths
◮ To transition from I to F◮ Flip each e ∈ I△F
◮ If (I, F ) ∈ C2 × C0 thenI△F ∈ C2
◮ I△F = A0 ∪(
⋃
i≥1 Ai
)
◮ A0 is a path◮ Ai disjoint cycles
I F
◮ γI,F defined by:◮ Traverse A0◮ . . . then A1◮ . . . then A2◮ . . .
-
Introduction Main Theorem Proof Discussion
Choice of Canonical Paths
◮ To transition from I to F◮ Flip each e ∈ I△F
◮ If (I, F ) ∈ C2 × C0 thenI△F ∈ C2
◮ I△F = A0 ∪(
⋃
i≥1 Ai
)
◮ A0 is a path◮ Ai disjoint cycles
I F
◮ γI,F defined by:◮ Traverse A0◮ . . . then A1◮ . . . then A2◮ . . .
-
Introduction Main Theorem Proof Discussion
Choice of Canonical Paths
◮ To transition from I to F◮ Flip each e ∈ I△F
◮ If (I, F ) ∈ C2 × C0 thenI△F ∈ C2
◮ I△F = A0 ∪(
⋃
i≥1 Ai
)
◮ A0 is a path◮ Ai disjoint cycles
I F
◮ γI,F defined by:◮ Traverse A0◮ . . . then A1◮ . . . then A2◮ . . .
-
Introduction Main Theorem Proof Discussion
Choice of Canonical Paths
◮ To transition from I to F◮ Flip each e ∈ I△F
◮ If (I, F ) ∈ C2 × C0 thenI△F ∈ C2
◮ I△F = A0 ∪(
⋃
i≥1 Ai
)
◮ A0 is a path◮ Ai disjoint cycles
I F
◮ γI,F defined by:◮ Traverse A0◮ . . . then A1◮ . . . then A2◮ . . .
-
Introduction Main Theorem Proof Discussion
Choice of Canonical Paths
◮ To transition from I to F◮ Flip each e ∈ I△F
◮ If (I, F ) ∈ C2 × C0 thenI△F ∈ C2
◮ I△F = A0 ∪(
⋃
i≥1 Ai
)
◮ A0 is a path◮ Ai disjoint cycles
I F
◮ γI,F defined by:◮ Traverse A0◮ . . . then A1◮ . . . then A2◮ . . .
-
Introduction Main Theorem Proof Discussion
Choice of Canonical Paths
◮ To transition from I to F◮ Flip each e ∈ I△F
◮ If (I, F ) ∈ C2 × C0 thenI△F ∈ C2
◮ I△F = A0 ∪(
⋃
i≥1 Ai
)
◮ A0 is a path◮ Ai disjoint cycles
I F
◮ γI,F defined by:◮ Traverse A0◮ . . . then A1◮ . . . then A2◮ . . .
-
Introduction Main Theorem Proof Discussion
Choice of Canonical Paths
◮ To transition from I to F◮ Flip each e ∈ I△F
◮ If (I, F ) ∈ C2 × C0 thenI△F ∈ C2
◮ I△F = A0 ∪(
⋃
i≥1 Ai
)
◮ A0 is a path◮ Ai disjoint cycles
I F
◮ γI,F defined by:◮ Traverse A0◮ . . . then A1◮ . . . then A2◮ . . .
-
Introduction Main Theorem Proof Discussion
Choice of Canonical Paths
◮ To transition from I to F◮ Flip each e ∈ I△F
◮ If (I, F ) ∈ C2 × C0 thenI△F ∈ C2
◮ I△F = A0 ∪(
⋃
i≥1 Ai
)
◮ A0 is a path◮ Ai disjoint cycles
I F
◮ γI,F defined by:◮ Traverse A0◮ . . . then A1◮ . . . then A2◮ . . .
-
Introduction Main Theorem Proof Discussion
Choice of Canonical Paths
◮ To transition from I to F◮ Flip each e ∈ I△F
◮ If (I, F ) ∈ C2 × C0 thenI△F ∈ C2
◮ I△F = A0 ∪(
⋃
i≥1 Ai
)
◮ A0 is a path◮ Ai disjoint cycles
I F
◮ γI,F defined by:◮ Traverse A0◮ . . . then A1◮ . . . then A2◮ . . .
-
Introduction Main Theorem Proof Discussion
Choice of Canonical Paths
◮ To transition from I to F◮ Flip each e ∈ I△F
◮ If (I, F ) ∈ C2 × C0 thenI△F ∈ C2
◮ I△F = A0 ∪(
⋃
i≥1 Ai
)
◮ A0 is a path◮ Ai disjoint cycles
I F
◮ γI,F defined by:◮ Traverse A0◮ . . . then A1◮ . . . then A2◮ . . .
F
-
Introduction Main Theorem Proof Discussion
Choice of Canonical Paths
◮ To transition from I to F◮ Flip each e ∈ I△F
◮ If (I, F ) ∈ C2 × C0 thenI△F ∈ C2
◮ I△F = A0 ∪(
⋃
i≥1 Ai
)
◮ A0 is a path◮ Ai disjoint cycles
I F
◮ γI,F defined by:◮ Traverse A0◮ . . . then A1◮ . . . then A2◮ . . .
◮ Γ = {γI,F }
F
-
Introduction Main Theorem Proof Discussion
Choice of Canonical Paths
◮ To transition from I to F: flip each e ∈ I△F
◮ If (I, F) ∈ C2 × C0 then I△F ∈ C2
◮ I△F = A0 ∪ (⋃_{i≥1} A_i)
◮ A0 is a path
◮ A_i disjoint cycles
◮ γ_{I,F} defined by: traverse A0, then A1, then A2, . . .
◮ Γ = {γ_{I,F}}
[Figure: configurations I and F, and the decomposition of I△F into A0, A1, A2, . . .]
One can show that ϕ(Γ) ≤ ∆(G)n^4 m/2
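The decomposition step above can be sketched in code: since I ∈ C2 and F ∈ C0 force I△F to have exactly two odd-degree vertices, greedy Hierholzer-style trail peeling recovers one open trail A0 between them plus edge-disjoint closed trails A1, A2, . . . (A0 may in general be a trail rather than a simple path). The helper name `decompose` and the frozenset edge encoding are illustrative choices, not from the talk.

```python
from collections import defaultdict

def decompose(sym_diff):
    """Split an edge set whose odd-degree vertex set has size 0 or 2
    into one open trail A0 (if odd vertices exist) plus edge-disjoint
    closed trails A1, A2, ...  Edges are frozensets {u, v}."""
    adj = defaultdict(set)
    for e in sym_diff:
        u, v = tuple(e)
        adj[u].add(e)
        adj[v].add(e)
    remaining = set(sym_diff)
    odd = [v for v in adj if len(adj[v]) % 2 == 1]

    def walk(start):
        # Greedily follow unused edges until stuck; by parity the walk
        # ends at the other odd vertex (for A0) or back at its start.
        trail, v = [], start
        while adj[v]:
            e = adj[v].pop()
            u, w = tuple(e)
            v_next = w if v == u else u
            adj[v_next].discard(e)
            remaining.discard(e)
            trail.append(e)
            v = v_next
        return trail

    pieces = []
    if odd:                          # A0: open trail between odd vertices
        pieces.append(walk(odd[0]))
    while remaining:                 # A1, A2, ...: closed trails
        u, _ = tuple(next(iter(remaining)))
        pieces.append(walk(u))
    return pieces
```

For instance, a 2-edge path together with a disjoint triangle decomposes into exactly two pieces: the open trail and the 3-cycle.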
-
Introduction Main Theorem Proof Discussion
Bounding the congestion
◮ Define measure Λ:
Λ(A) = x^{|A|} · { n, A ∈ C0; 2, A ∈ C2; 1, A ∈ C4 }
◮ π(A) = Λ(A)/Λ(C0 ∪ C2)
◮ Define for each T = AA′ ∈ E:
◮ η_T : C2 × C0 → C0 ∪ C2 ∪ C4
◮ η_T(I, F) = I△F△(A ∪ A′)
◮ Λ(I)Λ(F)/(Λ(A)P(A,A′)) ≤ 4∆(G)n · Λ(η_T(I, F))
◮ η_T is an injection
If T = AA′ is a maximally congested transition
ϕ(Γ) ≤ (m/Λ(C0 ∪ C2)) · Σ_{(I,F)∈P(T)} Λ(I)Λ(F)/(Λ(A)P(A,A′))
≤ 4∆(G)nm · (1/Λ(C0 ∪ C2)) · Σ_{(I,F)∈P(T)} Λ(η_T(I, F))
≤ 4∆(G)nm · Λ(C0 ∪ C2 ∪ C4)/Λ(C0 ∪ C2)
= 4∆(G)nm · (1 + Λ(C4)/(Λ(C0) + Λ(C2)))
-
Introduction Main Theorem Proof Discussion
Bounding the congestion cont. . .
The final step is to note that the high-temperature expansion implies
Λ(C4)/Λ(C0) ≤ (1/n) · \binom{n}{4}
so that
ϕ(Γ) ≤ 4∆(G)nm · (1 + Λ(C4)/Λ(C0))
≤ 4∆(G)nm · (1 + (1/n) · \binom{n}{4})
≤ 4∆(G)nm · n^3/8
= ∆(G)n^4 m/2
□
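The expansion inequality used above can be checked by brute-force enumeration on a small instance. This is only a sketch: the graph K4, the value x = 0.3, and the helper names `boundary_size` and `lam_by_odd_count` are arbitrary illustrative choices, not from the talk.

```python
from itertools import combinations
from math import comb

def boundary_size(A):
    """|∂A|: number of odd-degree vertices in the subgraph (V, A)."""
    deg = {}
    for u, v in A:
        deg[u] = deg.get(u, 0) + 1
        deg[v] = deg.get(v, 0) + 1
    return sum(1 for d in deg.values() if d % 2 == 1)

def lam_by_odd_count(edges, x):
    """lambda(C_k) = sum of x^|A| over all A ⊆ E with |∂A| = k."""
    lam = {}
    for r in range(len(edges) + 1):
        for A in combinations(edges, r):
            k = boundary_size(A)
            lam[k] = lam.get(k, 0.0) + x ** len(A)
    return lam

n, x = 4, 0.3
edges = list(combinations(range(n), 2))   # complete graph K4
lam = lam_by_odd_count(edges, x)
# lambda(C4)/lambda(C0) <= binom(n, 4), hence
# Lambda(C4)/Lambda(C0) = lambda(C4)/(n * lambda(C0)) <= binom(n, 4)/n
assert lam[4] / (n * lam[0]) <= comb(n, 4) / n
```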
-
Introduction Main Theorem Proof Discussion
Discussion
◮ Can we obtain sharper results if we focus on special families of graphs, such as G = Z_L^d?
◮ The PS process is closely related to a modification of the "lamplighter walk" in which lamps are always switched when visited. Can we use this similarity to say something more precise when G = Z_L^d?
◮ Study related spin models using similar methods?
-
Mixing time bound for PS process
Theorem (Collevecchio, G., Hyndman, Tokarev 2014+)
The mixing time of the PS process on graph G = (V,E) with parameter x ∈ (0, 1) satisfies
t_mix(δ) ≤ (log(8/x) − (log δ)/m) · (3 + 1/(mx)) · ∆(G) m^2 n^5,
with n = |V|, m = |E| and ∆(G) the maximum degree.
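For concreteness, the theorem's polynomial bound can be evaluated numerically. This is only a sketch: the instance below (a 10×10 torus with x = 0.3 and δ = 1/4) and the function name `ps_tmix_bound` are arbitrary illustrative choices.

```python
from math import log

def ps_tmix_bound(n, m, max_deg, x, delta):
    """Theorem bound: (log(8/x) - log(delta)/m)(3 + 1/(mx)) * Δ(G) m^2 n^5."""
    return ((log(8 / x) - log(delta) / m)
            * (3 + 1 / (m * x))
            * max_deg * m ** 2 * n ** 5)

# 10x10 torus Z_L^2 (L = 10): n = 100 vertices, m = 200 edges, Δ(G) = 4
bound = ps_tmix_bound(n=100, m=200, max_deg=4, x=0.3, delta=0.25)
assert bound > 0   # finite, polynomial in n and m: rapid mixing
```

The point is only that the bound is polynomial in n and m, uniformly in x ∈ (0, 1) apart from the explicit log(8/x) and 1/(mx) factors.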
-
High-temperature expansions and the PS measure
◮ Let ∂A = {v ∈ V : v has odd degree in (V,A)}
◮ Let C_W := {A ⊆ E : ∂A = W} for W ⊆ V
◮ Let C_k := ⋃_{W⊆V, |W|=k} C_W for integer 0 ≤ k ≤ |V|
◮ λ(·) defined by λ(S) = Σ_{A∈S} x^{|A|} for S ⊆ {A ⊆ E}
If x = tanh β then
E^{(Ising)}_{G,β}( ∏_{v∈W} σ_v ) = λ(C_W)/λ(C_0)
◮ PS measure defined on the configuration space C0 ∪ C2:
π(A) = λ(A)/(nλ(C0) + 2λ(C2)) · { n, A ∈ C0; 2, A ∈ C2 }
◮ Ising susceptibility χ = 1/π(C0)
◮ Ising two-point correlation function E(σ_u σ_v) = (n/2) · π(C_{uv})/π(C_0)
◮ PS measure is stationary distribution of PS process
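The high-temperature identity above can be verified by brute-force enumeration on a small graph. This is only a sketch: the triangle graph, β = 0.4, and W = {0, 1} are arbitrary choices.

```python
from itertools import combinations, product
from math import exp, tanh, isclose

edges = [(0, 1), (1, 2), (0, 2)]     # triangle graph, n = 3
beta, W = 0.4, frozenset({0, 1})     # check E(σ_0 σ_1)

# Left side: direct Ising expectation over all 2^3 spin configurations
num = den = 0.0
for sigma in product([-1, 1], repeat=3):
    weight = exp(beta * sum(sigma[u] * sigma[v] for u, v in edges))
    num += sigma[0] * sigma[1] * weight
    den += weight
lhs = num / den

# Right side: lambda(C_W)/lambda(C_∅) with x = tanh(beta)
def boundary(A):
    deg = {}
    for u, v in A:
        deg[u] = deg.get(u, 0) + 1
        deg[v] = deg.get(v, 0) + 1
    return frozenset(v for v, d in deg.items() if d % 2 == 1)

x = tanh(beta)
lam = {frozenset(): 0.0, W: 0.0}
for r in range(len(edges) + 1):
    for A in combinations(edges, r):
        b = boundary(A)
        if b in lam:
            lam[b] += x ** len(A)
rhs = lam[W] / lam[frozenset()]

assert isclose(lhs, rhs)   # E(σ_0 σ_1) = λ(C_{01})/λ(C_0)
```

On the triangle the two C_{01} configurations are the single edge {0,1} and the pair {0,2},{1,2}, so the right side is (x + x²)/(1 + x³).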
Proof