

Lifting with Sunflowers

Shachar Lovett∗ (UC San Diego), [email protected]
Raghu Meka (UC Los Angeles), [email protected]
Ian Mertz† (University of Toronto), [email protected]
Toniann Pitassi‡ (University of Toronto and IAS), [email protected]
Jiapeng Zhang (University of Southern California), [email protected]

November 16, 2020

Abstract

Query-to-communication lifting theorems translate lower bounds on query complexity to lower bounds for the corresponding communication model. In this paper, we give a simplified proof of deterministic lifting (in both the tree-like and dag-like settings). Whereas previous proofs used sophisticated Fourier analytic techniques, our proof uses elementary counting together with a novel connection to the sunflower lemma.

In addition to a simplified proof, our approach also gives quantitative improvements in terms of gadget size. Focusing on one of the most widely used gadgets—the index gadget—existing lifting techniques are known to require at least a quadratic gadget size. Our new approach combined with robust sunflower lemmas allows us to reduce the gadget size to near linear. We conjecture that it can be further improved to poly logarithmic, similar to the known bounds for the corresponding robust sunflower lemmas.

1 Introduction

A query-to-communication lifting theorem is a reductive lower bound technique that translates lower bounds on query complexity (such as decision tree complexity) to lower bounds for the corresponding communication complexity model. For a function f : {0,1}^n → R, and a function g : X × Y → {0,1} (called the gadget), their composition f ∘ g^n : X^n × Y^n → R is defined by

(f ∘ g^n)(x, y) := f(g(x_1, y_1), ..., g(x_n, y_n)).

Here, Alice holds x ∈ X^n and Bob holds y ∈ Y^n. Typically g is the popular index gadget Ind_m : [m] × {0,1}^m → {0,1} mapping (x, y) to the x-th bit of y.

There is a substantial body of work proving lifting theorems for a variety of flavors of query-to-communication, including: deterministic [RM99, GPW15, dRNV16, WYY17, CKLM17], nondeterministic [GLM+16, Goo15], randomized [GPW17, CFK+19], degree-to-rank [She11, PR17, PR18, RPRC16], and nonnegative degree to nonnegative rank [CLRS16, KMR17]. In these papers and others, lifting

∗Research supported by NSF Award DMS-1953928.
†Research supported by NSERC.
‡Research supported by NSF Award CCF-1900460 and NSERC.


theorems have been applied to simplify and resolve some longstanding open problems, including new separations in communication complexity [GP18, GPW15, GPW17, CKLM17, CFK+19], proof complexity [GLM+16, HN12, GP18, dRNV16, dRMN+19, GKMP20], monotone circuit complexity [GGKS18], monotone span programs and linear secret sharing schemes [RPRC16, PR17, PR18], and lower bounds on the extension complexity of linear and semi-definite programs [CLRS16, KMR17, LRS15]. Furthermore, within communication complexity most functions of interest—e.g. equality, set-disjointness, inner product, gap-hamming (c.f. [Kus97, Juk12])—are lifted functions.

At the heart of these proofs is a simulation theorem. A communication protocol for the lifted function can "mimic" a decision tree for the original function by taking log m steps to calculate each variable queried by the decision tree in turn. For m = n^{O(1)} and for every f, the deterministic simulation theorem [RM99, GPW15] shows that this simulation goes the other way as well:

P^cc(f ∘ Ind^n_m) = P^dt(f) · Θ(log m)

The proof of this theorem has evolved considerably since [RM99], applying to a wider range of gadgets [CFK+19], and with sharpened results giving somewhat improved parameters and simulation theorems for the more difficult settings of randomized and dag-like lifting. However, nearly all proofs of even the basic deterministic simulation theorem use tools from the Fourier analysis of Boolean functions, together with somewhat intricate counting arguments.

Lifting using the sunflower lemma. The primary purpose of this paper is to give a readable, self-contained and simplified proof of the deterministic query-to-communication lifting theorem. Our proof uses the same basic setup as in previous arguments, but our proof of the main invariant – showing that any large rectangle can be decomposed into a part that has structure and a part that is pseudo-random – is proven by a direct reduction to the famous sunflower lemma.

The sunflower lemma is one of the most important examples of a structure-versus-randomness theorem in combinatorics. A sunflower with r petals is a collection of r sets such that the intersection of each pair is equal to the intersection of all of them. The sunflower lemma of Erdős and Rado [ER60] roughly states that any sufficiently large w-uniform set system (of size about w^w) must contain a sunflower. A recent breakthrough result due to Alweiss et al. [ALWZ20] proves the sunflower lemma with significantly improved parameters, making a huge step towards resolving the longstanding open problem of obtaining optimal parameters.

Both the original Sunflower Lemma as well as Rossman's robust version [Ros10] have played an important role in recent advances in theoretical computer science. Most famously, Razborov proved the first superpolynomial lower bounds for monotone circuits computing the Clique function, using the Sunflower Lemma. It has also been a fundamental tool used to obtain a wide variety of other hardness results including: hardness of approximation, matrix multiplication, cryptography, and data structure lower bounds. (See [ALWZ20] for a nice survey of the many applications to Computer Science.)

In all of these lower bounds, the central idea is to use the sunflower lemma in order to "tame" a protocol or algorithm, in order to show that at each step of the computation, the underlying set of inputs consistent so far can be partitioned into a structured part and a random part. This allows the algorithm to be massaged into a simpler form, where the lower bound is easier to prove. Since lifting theorems are attempting to do precisely the same thing, it is natural to expect that there should be a connection between the two lines of research. Indeed, [LLZ18] made an explicit connection between sunflowers and randomness extractors, where the latter is again a primary tool used in many if not all of the proofs of lifting.

Additionally these same tools can be used to prove a lifting theorem for dag-like communication protocols, as originally proven in [GGKS18]. Again we follow the same proof as before but for the main invariant we rely on a connection to the sunflower lemma. In fact while the central lemma needed for the invariant in the dag-like case is stronger than in the case of the basic lifting theorem, it actually follows more directly from the connection to sunflowers. We note that our results extend straightforwardly to the real communication setting as well.¹

Gadget size. In spite of the tremendous progress in lifting theorems, most generic lifting theorems require gadget sizes that are polynomial in n.² Define the gadget size of g : X × Y → {0,1} as min(|X|, |Y|). For example, we now have lifting theorems for the three classical models of communication: deterministic, non-deterministic, and randomized. In each of these results and others that use Fourier-analytic arguments, the gadget size has to be at least Ω(n^2). Can we circumvent this quadratic barrier?

Gadget size is a fundamental parameter in lifting theorems and their applications. This is because often in applications, one loses factors that depend polynomially on the gadget size (as defined above). An ideal lifting theorem - one with constant gadget size - would give a unified way to prove tight lower bounds in several models of computation. For example, the best known size lower bounds for extension complexity as well as monotone circuit size are 2^{Ω(√n)} [GJW18, HR00]. Improving the gadget size from poly(n) to O(1) (or even poly log n) would improve the best known lower bounds for extended formulations and monotone circuit size to 2^{Ω(n)}.

Luckily the quadratic bottleneck comes from the core lemma that we reprove in this paper, and as a nice side effect our simplified proof immediately gives us a gadget of size n^{1+ε}. Furthermore by inspecting the parameters of the argument we can prove a "sliding" lifting theorem which allows us to make a tradeoff between the strength of our lower bound and the size of the gadget, up to a gadget of size O(n log n). Our approach does not seem to have the same bottleneck as previous approaches, and we conjecture that it can be pushed to poly-logarithmic gadget sizes (similar to the improvements made for the sunflower lemma in [ALWZ20]).

We also use our sunflower method to give a new proof (and with tightened parameters) of [GKMP20], who prove deterministic lifting with the gadget size bounded by a polynomial in the query complexity of the outer function. This applies to situations such as fixed parameter complexity, where the query complexity is modest, allowing us to lift such problems with gadgets of comparable size. Here we use a more involved argument using an older iteration of the sunflower lemma. Again our approach does not seem to suffer from a bottleneck, and improvements to this theorem would yield e.g. stronger lower bounds on the automatizability of Cutting Planes [GKMP20].

Organization for the rest of the paper. After setting up the preliminaries in Section 2, in Section 3 we give an overview of our proof. In Section 4 we present our main contribution: a simplified proof of lifting via the sunflower lemma. Then for the remainder of the paper we investigate various extensions of this basic lifting theorem. In Section 5 we show that the gadget size m can be improved to n^{1+ε}, and by sacrificing in the strength of the lifting theorem we can even push it down to O(n log n). In Section 6 we show that we can lift dag-like query complexity to dag-like communication complexity. In Section 7 we show that m can be made poly(P^dt(f)) with no other dependence on n. For these extensions we make extensive reference to the basic lifting theorem in order to highlight how the proofs differ, and where necessary how our results fit into the context of their original proofs. We conclude with directions for further research in Section 8.

¹In most query-to-communication settings it is fairly simple to extend results for communication complexity to the real communication setting [Kra98]; we refer readers to e.g. [dRNV16, GGKS18] for examples of these techniques and applications of lifting to real communication complexity.

²Some notable exceptions for models of communication with better gadget size are [She11, She14, GP18, PR17].


2 Preliminaries

We will use n to denote the length of the input, N ≤ n to denote an arbitrary number, m to denote an external parameter, and for this preliminaries section we will use U to denote an arbitrary set. We will mostly focus on two types of universes, U^N and (U^m)^N. In the case of U^N we often refer to i ∈ [N] as being a coordinate, while in the case of (U^m)^N we often refer to i ∈ [N] as being a block. We will be primarily using terminology from previous lifting papers and computational complexity; for a connection to the language more commonly used in sunflower papers and combinatorics, see Appendix A.

Basic notation. For a set S ⊆ U we write S̄ := U ∖ S. For a set U and a set I ⊆ [N] we say a string x is in U^I if each value in x is an element of U indexed by a unique element of I. For a string x ∈ U^N and I ⊆ [N] we define x[I] ∈ U^I to be the values of x at the locations in I, and for a string y ∈ (U^m)^N and I ⊆ [N], α ∈ [m]^I we define y[I, α] ∈ U^I to be the values of y at the locations α_i for each i ∈ I. For a set X ⊆ U^N we define X_I ⊆ U^I to be the set that is the projection of X onto coordinates I, and for a set Y ⊆ (U^m)^N we define Y_I ⊆ (U^m)^I likewise. For a set system F over U and a set S ⊆ U, we define F_S := {γ ∖ S : γ ∈ F, S ⊆ γ}.

Definition 2.1. Let γ ⊆ [mN]. Treating each element in γ as being a pair (i, a) where i ∈ [N] and a ∈ [m], we say γ is over (U^m)^N, meaning that for s ∈ (U^m)^N each (i, a) ∈ γ indicates an element s[i, a] ∈ U. We sometimes say (i, a) is a pointer.

For γ over (U^m)^N, γ is a block-respecting subset of [mN] if γ contains at most one element per block, or in other words if i ≠ i′ for all distinct (i, a), (i′, a′) ∈ γ. We can represent γ by a pair (I, α), where I ⊆ [N] and α ∈ [m]^I; here γ chooses one element (indicated by α_i) from each block i ∈ I. A set system F over (U^m)^N is block-respecting if all elements γ ∈ F are block-respecting.

We say that a string ρ ∈ {0, 1, ∗}^N is a restriction, or sometimes a partial assignment. We denote by free(ρ) ⊆ [N] the variables assigned a star, and define fix(ρ) := [N] ∖ free(ρ). If we have two restrictions ρ, ρ′ such that fix(ρ) ∩ fix(ρ′) = ∅, then we define ρ ∪ ρ′ to be the restriction which assigns fix(ρ) to ρ[fix(ρ)] and fix(ρ′) to ρ′[fix(ρ′)], with all other coordinates being assigned ∗.

In general in this paper we will use bold letters to denote random variables. For a set S we denote by S ∈ S the random variable that is uniform over S. For S ⊆ U^N and I ⊆ [N] we denote by S_I the marginal distribution over coordinates I of the uniform distribution over S; note that the random draw is taken over the original set S before marginalizing to the coordinates I, rather than being the uniform distribution over S_I.

Definition 2.2. Let S be a set. For a random variable s ∈ S we define its min-entropy by H_∞(s) := min_{a∈S} log(1/Pr[s = a]). We also define the deficiency of s by D_∞(s) := log|S| − H_∞(s) ≥ 0. When s is chosen from a set S ⊆ U^N, we define its blockwise min-entropy by min_{∅≠I⊆[N]} (1/|I|)·H_∞(s_I), or in other words the least (normalized) marginal min-entropy over all subsets I of the coordinates [N].
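As a concrete illustration, the following Python sketch (not from the paper; toy sizes, brute-force enumeration) computes min-entropy, deficiency, and blockwise min-entropy for a small set X ⊆ [m]^N viewed as a uniform random variable:

    import math
    from itertools import combinations
    from collections import Counter

    def min_entropy(samples):
        # H_inf = min over outcomes of log2(1/Pr[outcome]), under the uniform distribution on `samples`
        p_max = max(Counter(samples).values()) / len(samples)
        return -math.log2(p_max)

    def blockwise_min_entropy(S, N):
        # min over nonempty I of (1/|I|) * H_inf of the marginal of uniform-over-S on coordinates I
        best = float("inf")
        for k in range(1, N + 1):
            for I in combinations(range(N), k):
                marginal = [tuple(x[i] for i in I) for x in S]
                best = min(best, min_entropy(marginal) / k)
        return best

    # toy example with m = 4, N = 2: the first coordinate is heavily biased towards 0
    m, N = 4, 2
    X = [(0, 0), (0, 1), (0, 2), (0, 3), (1, 0), (2, 1), (3, 2)]
    print(blockwise_min_entropy(X, N))              # low (about 0.81), witnessed by I = {0}
    print(math.log2(m ** N) - math.log2(len(X)))    # deficiency D_inf(X) = log|[m]^N| - log|X|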

Search problems. A search problem is a relation f ⊆ Z × O such that for every z ∈ Z there exists some o ∈ O such that (z, o) ∈ f. Let f(z) ≠ ∅ denote the set of all o ∈ O such that (z, o) ∈ f. Likewise a bipartite search problem is a relation F ⊆ X × Y × O such that F(x, y) ≠ ∅, where F(x, y) is defined analogously to f(z). We say that f is on Z and F is on X × Y.

Definition 2.3. Let m ∈ N. The index gadget, denoted Ind_m, is a Boolean function which takes two inputs x ∈ [m] and y ∈ {0,1}^m, and outputs y[x]. We will often have N separate instances of the index gadget, which we denote by Ind^N_m and which is a function which takes two inputs x ∈ [m]^N and y ∈ ({0,1}^m)^N and outputs the Boolean string (y[i, x_i])_{i∈[N]}. For a search problem f with Z = {0,1}^n, the lifted search problem f ∘ Ind^n_m is a bipartite search problem defined by X := [m]^n, Y := ({0,1}^m)^n, and f ∘ Ind^n_m(x, y) = {o ∈ O : o ∈ f(Ind^n_m(x, y))}.

Intuitively, each x ∈ X can be viewed as a block-respecting subset over the universe [mn] where n elements are chosen, one from each block of size m. For each i ∈ [n], to determine the value of the variable z_i in the original problem f, we restrict ourselves to the i-th block of y and take the bit indexed by the i-th coordinate of x.
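The following Python sketch (not from the paper; the search problem f and all sizes are toy choices) makes the gadget and the lifted problem concrete:

    def index_gadget(m, x, y):
        # Ind_m(x, y): x in [m] selects the x-th bit of y in {0,1}^m (0-indexed here)
        assert 0 <= x < m and len(y) == m
        return y[x]

    def gadget_blockwise(m, xs, ys):
        # Ind_m^N(x, y): apply the gadget to each block to recover z in {0,1}^N
        return [index_gadget(m, xi, yi) for xi, yi in zip(xs, ys)]

    def f(z):
        # toy search problem on {0,1}^n: name a zero coordinate, or report "all-ones"
        return {f"z_{i}=0" for i, zi in enumerate(z) if zi == 0} or {"all-ones"}

    def lifted_f(m, xs, ys):
        # (f o Ind_m^n)(x, y) = f(Ind_m^n(x, y))
        return f(gadget_blockwise(m, xs, ys))

    m = 4
    xs = [2, 0, 3]                                    # Alice's input in [m]^n
    ys = [[0, 1, 1, 0], [1, 1, 0, 0], [0, 0, 0, 1]]   # Bob's input in ({0,1}^m)^n
    print(gadget_blockwise(m, xs, ys))                # z = [1, 1, 1]
    print(lifted_f(m, xs, ys))                        # {'all-ones'}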

Consider a search problem f ⊆ {0,1}^n × O. A decision tree T is a binary tree such that each non-leaf node v is labeled with an input variable z_i, and each leaf v is labeled with a solution o_v ∈ O. The tree T solves f if, for any input z ∈ {0,1}^n, the unique root-to-leaf path, generated by walking left at node v if the variable z_i that v is labeled with is 0 (and right otherwise), terminates at a leaf u with o_u ∈ f(z). We define

P^dt(f) := least depth of a decision tree solving f.
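A minimal Python sketch (not from the paper) of this definition, with a decision tree represented as nested tuples:

    # a leaf is an output label o in O; an internal node is (i, left, right), querying z_i
    # (walk left when z_i = 0 and right when z_i = 1)
    def evaluate(tree, z):
        while isinstance(tree, tuple):
            i, left, right = tree
            tree = left if z[i] == 0 else right
        return tree

    def depth(tree):
        if not isinstance(tree, tuple):
            return 0
        return 1 + max(depth(tree[1]), depth(tree[2]))

    # a depth-2 tree for the toy problem "name a zero coordinate or report all-ones" on n = 2 bits;
    # P^dt(f) is the minimum of depth(T) over all trees T solving f
    T = (0, "z_0=0", (1, "z_1=0", "all-ones"))
    print(evaluate(T, (1, 0)), depth(T))   # z_1=0 2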

Consider a bipartite search problem F. A communication protocol Π is a binary tree where now each non-leaf node v is labeled with a binary function g_v which takes its input either from X or Y. This is informally viewed as two players Alice and Bob jointly computing a function, where Alice receives x ∈ X and Bob receives y ∈ Y, and where at each node in the protocol either Alice or Bob computes g_v(x) or g_v(y), respectively, and "speaks" as to which child to go to, depending on whose turn it is. The protocol Π solves F if, for any input (x, y) ∈ X × Y, the unique root-to-leaf path, generated by walking left at node v if g_v(x, y) = 0 (and right otherwise), terminates at a leaf u with o_u ∈ F(x, y). We define

P^cc(F) := least depth of a communication protocol solving F.

An alternative characterization of communication protocols, which will be useful for proving our main theorem, is as follows. Each non-leaf node v is labeled with a (combinatorial) rectangle R_v = X_v × Y_v ⊆ X × Y, such that if v_ℓ and v_r are the children of v, then R_{v_ℓ} and R_{v_r} partition R_v. Furthermore this partition is either of the form X_{v_ℓ} × Y_v ⊔ X_{v_r} × Y_v or X_v × Y_{v_ℓ} ⊔ X_v × Y_{v_r}. The unique root-to-leaf path on input (x, y) is generated by walking to whichever child v of the current node satisfies (x, y) ∈ R_v.
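A toy check of this rectangle view (not from the paper): a single bit sent by Alice splits the current rectangle along its X side only.

    # root rectangle over X = Y = [4]; Alice announces whether x < 2
    X, Y = range(4), range(4)
    root = {(x, y) for x in X for y in Y}
    left = {(x, y) for (x, y) in root if x < 2}           # X_left x Y
    right = {(x, y) for (x, y) in root if x >= 2}         # X_right x Y
    assert left | right == root and not (left & right)    # the children partition R_v
    print(len(root), len(left), len(right))               # 16 8 8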

Sunflowers. Let F be a set system over some universe U. We say that F is (p, ε)-satisfying if

Pr_{y ⊆_p U}(∀γ ∈ F : γ ⊈ y) ≤ ε

where ⊆_p means that each element is added to y independently with probability p. We say that F is a (p, ε)-robust sunflower (sometimes called an approximate sunflower or a quasi-sunflower) if it satisfies the following. Let S = ∩_{T∈F} T be the common intersection of all sets in F. We require that F_S is (p, ε)-satisfying. In other words,

Pr_{y ⊆_p U∖S}(∀γ ∈ F : γ ∖ S ⊈ y) ≤ ε.

In this paper we will always be using p = 1/2, and so for convenience we simply write y ⊆ U ∖ S instead of ⊆_{1/2} and call F an ε-robust sunflower instead of a (1/2, ε)-robust sunflower. An analogue of the famed sunflower lemma of Erdős was proved for robust sunflowers by Rossman [Ros10]:

Lemma 2.1 (Robust Sunflower Lemma). Let s ∈ N and let ε > 0. Let F be a set system over U such that a) |γ| ≤ s for all γ ∈ F; and b) |F| ≥ (Cs log(1/ε))^s for some absolute constant C > 0. Then F contains an ε-robust sunflower.
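The (p, ε)-satisfying condition is easy to estimate empirically. The following Monte Carlo sketch (not from the paper; the family F, universe, and trial count are toy choices) estimates the probability that a p-random subset of U contains no member of F:

    import itertools
    import random

    def estimate_miss_prob(F, U, p=0.5, trials=20000, seed=0):
        # estimate Pr_{y subset_p U}( for all gamma in F, gamma is not a subset of y );
        # F is (p, eps)-satisfying whenever this probability is at most eps
        rng = random.Random(seed)
        misses = 0
        for _ in range(trials):
            y = {u for u in U if rng.random() < p}
            if not any(gamma <= y for gamma in F):
                misses += 1
        return misses / trials

    U = range(12)
    # a toy 2-uniform family: all pairs from a 6-element core (its common intersection is empty)
    F = [frozenset(pair) for pair in itertools.combinations(range(6), 2)]
    # no pair lands inside y exactly when y keeps at most one core element: 7/64, about 0.109
    print(estimate_miss_prob(F, U))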


A recent breakthrough result (proved in [ALWZ20] and simplified in [Rao19]) proves the sunflower lemma with significantly improved parameters. As a stepping stone they also prove an improvement on Robust Sunflower Lemma assuming a condition called spreadness, but which we will state in the following way.

Lemma 2.2 (Blockwise Robust Sunflower Lemma). Let s ∈ N and let ε > 0. Let F be a set system over U such that a) |γ| ≤ s for all γ ∈ F; and b) F has blockwise min-entropy at least log(K log(s/ε)) for some absolute constant K > 0. Then F is ε-satisfying.

In our main argument we will use a simple and general statement about the satisfiability of monotone CNFs in order to connect sunflowers to restrictions.

Claim 2.3. Let C = C_1 ∧ ... ∧ C_m be a CNF on the variables x_1 ... x_n such that no clause contains both the literals x_i and ¬x_i for any i. Let C^mon be the result of replacing, for every i, every occurrence of ¬x_i in C with x_i. Then

|{x ∈ {0,1}^n : C(x) = 1}| ≤ |{x ∈ {0,1}^n : C^mon(x) = 1}|

Proof. Let C_i be the result of replacing every occurrence of ¬x_i in C with x_i. It is enough to show that for any i, C_i(x) is satisfied by at least as many assignments β ∈ {0,1}^n to x as C(x) is, as we can then apply the argument inductively for i = 1 ... n. Let β_{−i} ∈ {0,1}^{[n]∖{i}} be an assignment to every variable except x_i. We claim that for every β_{−i}, C_i(β_{−i}, x_i) is satisfied by at least as many assignments β_i ∈ {0,1} to x_i as C(β_{−i}, x_i).

Since there are no clauses with both x_i and ¬x_i, each clause in C is of the form x_i ∨ A, ¬x_i ∨ B, or C, where A, B, and C don't depend on x_i; the corresponding clauses in C_i are x_i ∨ A, x_i ∨ B, and C. If C_i(β_{−i}, 0) = 1, then A(β_{−i}) = B(β_{−i}) = C(β_{−i}) = 1 for all A, B, and C, and so C_i(β_{−i}, x_i) is always satisfied. If C_i(β_{−i}, 1) = 0, then it must be that C(β_{−i}) = 0 for some C, and so C(β_{−i}, x_i) has no satisfying assignments. Finally assume neither of these cases hold, so that C_i(β_{−i}, 0) = 0 and C_i(β_{−i}, 1) = 1. Then it must be that either A(β_{−i}) = 0 for some A, in which case C(β_{−i}, 0) = 0, or B(β_{−i}) = 0 for some B, in which case C(β_{−i}, 1) = 0. Therefore C(β_{−i}, x_i) has at least one falsifying assignment, while C_i(β_{−i}, x_i) has exactly one.
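Claim 2.3 is easy to sanity-check by brute force. The sketch below (not from the paper) samples small random CNFs in which no clause contains a variable together with its negation and verifies the inequality:

    import itertools
    import random

    def count_sat(clauses, n):
        # clauses: list of sets of literals; +i means x_i and -i means NOT x_i (variables 1..n)
        total = 0
        for x in itertools.product([0, 1], repeat=n):
            if all(any((lit > 0) == bool(x[abs(lit) - 1]) for lit in cl) for cl in clauses):
                total += 1
        return total

    def monotonize(clauses):
        # C^mon: replace every negated literal -i by the positive literal +i
        return [{abs(lit) for lit in cl} for cl in clauses]

    rng = random.Random(1)
    n = 4
    for _ in range(200):
        clauses = []
        for _ in range(rng.randint(1, 5)):
            variables = rng.sample(range(1, n + 1), rng.randint(1, n))
            clauses.append({v if rng.random() < 0.5 else -v for v in variables})
        assert count_sat(clauses, n) <= count_sat(monotonize(clauses), n)
    print("Claim 2.3 verified on all sampled CNFs")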

3 The basic lifting theorem: proof overview

The following is our basic deterministic lifting theorem. An earlier version was originally proven in [RM99] with m = n^{20}, and more recently in [GPW15] with m = n^2. We improve this to a near-linear dependence on n.

Theorem 3.1 (Basic Lifting Theorem). Let f be a search problem over {0,1}^n, and let m = n^{1.1}. Then

P^cc(f ∘ Ind^n_m) = P^dt(f) · Θ(log m)

In this section we will sketch out the technical ideas that go into proving Theorem 3.1, along with some of the innovations that have helped simplify the proof since [RM99]. We prove that a) a decision tree of depth d for f can be simulated by a communication protocol of depth O(d log m) for the composed problem f ∘ Ind^n_m, and b) a communication protocol of depth d log m for the composed problem f ∘ Ind^n_m can be simulated by a decision tree of depth O(d) for f. Let {z_i}_i be the variables of f and let {x_i}_i, {y_i}_i be the variables of f ∘ Ind^n_m; recall that each z_i takes values in {0,1}, x_i takes values in [m], and y_i takes values in {0,1}^m. The forward direction of the theorem is obvious: given a decision tree T for f, Alice and Bob can simply trace down T and compute the appropriate variable z_i at each node v ∈ T visited, spending log m bits to compute Ind_m(x_i, y_i) to do so. Thus we focus on simulating a communication protocol Π of depth d log m.


High level idea: Tracing the "important" coordinates. What does it mean to "simulate" a communication protocol for f ∘ Ind^n_m by a decision tree for f? When we look at the communication matrix for f ∘ Ind^n_m, we label the (x, y) entry with the solutions o ∈ O satisfying (x, y) ∈ (f ∘ Ind^n_m)^{−1}(o). However we have no control over f, and so in some sense what we really care about is the z variables. So instead we will think of the (x, y) entry as storing z = Ind^n_m(x, y), and then instead of having to reason about f we can instead ask "what does the set of z values that make it to any given leaf of Π look like?"

For each leaf we want to split the coordinates into two categories: the "important" coordinates where the coordinates are (jointly) nearly fixed, and the rest where every possibility is still open. Hopefully this means that knowing the important coordinates is enough to declare the answer. Applying the same logic to the internal nodes, we can query variables as they cross the threshold from unfixed to important, which leads us down to the leaves in a natural way. To do this efficiently, we have to define "importance" in a way that satisfies all these conditions while also ensuring that no leaf contains more than O(d) important variables.

Blockwise min-entropy. In order to prove this formally, we will trace down the communication protocol node by node, at each step looking for the z variables that are fairly "well determined" by the current rectangle. We focus exclusively on the X side of the current rectangle, since Y is so large that it would take more than d log m rounds just to fix a single y_i. Our measure of coordinate i being well-determined will be the min-entropy of the uniform distribution on X marginalized to the coordinate i. At the start of the protocol, every coordinate will have min-entropy log m, while each round can drop the min-entropy of a coordinate by at most 1. Once a coordinate i falls below a certain min-entropy threshold, say 0.95 log m, we can consider the coordinate important enough to query in the decision tree. Also of note is that we can think of Π as having "paid" for the coordinate i; since min-entropy can only drop by 1 each round, it took 0.05 log m rounds to reduce the entropy of X_i to below the threshold, and since we ultimately want to shave an Ω(log m) factor off the height of the communication protocol in our decision tree, we can feel satisfied giving up the rest of the information about X_i and Y_i for free.

In fact we will use the generalization of min-entropy to blockwise min-entropy, and so instead of tracking individual coordinates we stop whenever a set of coordinates I has a joint assignment x[I] = α which violates 0.95 log m blockwise min-entropy. In addition we will use an entropy-restoring procedure called the rectangle partition. Whenever we find an assignment x[I_1] = α_1 that violates 0.95 log m blockwise min-entropy, we split X into two pieces: X^1 = {x : x[I_1] = α_1} and X − X^1 = {x : x[I_1] ≠ α_1}. Next we repeat for X − X^1; if there is an assignment x[I_2] = α_2 that violates 0.95 log m blockwise min-entropy, then we split X − X^1 into X^2 and (X − X^1) − X^2. We repeat until there are no more assignments, and then we can make a decision to toss out all X^j sets or to pick one and query z[I_j].³

We now describe our high level procedure using this partitioning subroutine. In addition to the rectangles R_v at each node v of Π, we maintain a subrectangle R = X × Y—initially full—which will be our guide for how to proceed down Π. Starting at the root, we go down to the child v with the larger intersection with R (in order to avoid stumbling into a situation where many coordinates get fixed for free) until we find that a set of coordinates has blockwise min-entropy less than 0.95 log m in R. After running the rectangle partition, we will need to decide which assignment to query; ultimately once we've chosen the assignment x[I_j] = α_j, we will query z[I_j] and restrict R to be consistent with the result. Our first key lemma states that if we run the rectangle partition on X such that X has blockwise min-entropy at least 0.95 log m on I_j, and Y has size at least 2^{mn−poly(n)}, there is always some choice of j such that for every possible result z[I_j] = β_j, the resulting rectangle R is large on the Y side.

³As described in Section 4.1, unlike in [GPW17], in our proof we truncate this procedure before X is empty, but the same basic principle applies.

Figure 1: Rectangle Partition procedure (figure from [GPW17]).

As mentioned before, our choice of min-entropy will be enough to guarantee that at every step, our rectangle R will have every assignment to z consistent with the current path in the decision tree available. When we reach a leaf ℓ and have queried some coordinates I, we need to ensure that we know enough to output the same answer as Π. Our second key lemma states that if X and Y are fixed on the coordinates I ⊆ [n], X has min-entropy at least 0.95 log m on Ī, and Y has size at least 2^{m·|Ī|−poly(n)}, then Ind^Ī_m(X, Y) = {0,1}^Ī; thus R ⊆ R_ℓ has every option left for Ī, and so it cannot be that one of those assignments gives a different answer than the one covering R_ℓ.

Key lemmas through sunflowers. Up until this point, everything we've stated is as it appears in [GPW17]. For our new proof we unify our two key lemmas with a more challenging but ultimately more straightforward lemma: given X and Y such that X_I has high blockwise min-entropy and Y is large, there is a single row x* ∈ X such that Ind^I_m(x*, Y) = {0,1}^I. Given this statement both claims are easy to see. In the rectangle partition, for every I_j, α_j such that some value β_j has few ys consistent with it, we can simply throw out those ys; then since some row x* still has the full range of values available, whichever X^j part it appears in must not have had any ys thrown out on its account, and so y[I_j, α_j] should be fairly uniform. At the leaves, if a single x* gives the full range, then so does X ∋ x*.

Despite seeming more challenging, this unifying lemma follows almost immediately from Blockwise Robust Sunflower Lemma. To illustrate this with a simple (but ultimately completely general) case, assume the all-ones vector is missing from Ind^N_m(x, Y) for all x, or in other words there is no (x, y) such that y[x] = 1^N. Consider the universe [mN], and let S_x be the set of size N defined by the values x points to. Since X has high blockwise min-entropy, by Blockwise Robust Sunflower Lemma a random set S_y ⊆ [mN] will contain some S_x with high probability. If we look at the incidence vector of our random S_y, it is a string y ∈ {0,1}^{mN} = ({0,1}^m)^N, and for S_y to not contain S_x is equivalent to saying that y[x] ≠ 1^N. Thus Pr_y[∀x : y[x] ≠ 1^N] is very low, or in other words a sufficiently large set Y ⊆ ({0,1}^m)^N must contain some y such that y[x] = 1^N for some x. This gives us our contradiction since we assumed Y was large.
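A tiny illustration (not from the paper; toy sizes) of the correspondence used above: x ∈ [m]^N maps to the pointer set S_x = {(i, x_i)} ⊆ [N] × [m], y maps to the set S_y of positions it sets to 1, and S_x ⊆ S_y exactly when y[x] = 1^N.

    m, N = 3, 2

    def S_x(x):
        # the N pointers selected by x inside the universe [N] x [m]
        return {(i, x[i]) for i in range(N)}

    def S_y(y):
        # the positions of y in ({0,1}^m)^N that are set to 1
        return {(i, a) for i in range(N) for a in range(m) if y[i][a] == 1}

    def gadget(x, y):
        return [y[i][x[i]] for i in range(N)]

    x = (1, 2)
    y = ((0, 1, 1), (0, 0, 1))
    print(gadget(x, y))        # [1, 1], i.e. y[x] = 1^N
    print(S_x(x) <= S_y(y))    # True, equivalent to y[x] = 1^N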

One key aspect is that in the rectangle partition we've switched from proving an extractor-like property—showing Y[I_j, α_j] is close to uniform—to proving a disperser-like property—showing Ind^n_m(x*, Y) has every option in its support without need to show any sort of uniformity. This was an obstruction to getting small gadgets before, as Fourier arguments tend to be fairly coarse in terms of the polynomials involved.

Recap. Summing up, our final procedure will be as follows. For all v ∈ Π let R_v be the rectangle associated with node v, let R = [m]^n × ({0,1}^m)^n, and let v = root. At each step we go to the child v′ of v maximizing R ∩ R_{v′}. Then we perform the rectangle partition on X, query z[I_j] for the I_j from the key lemma (possibly empty) to get the answer β_j, and fix R to be consistent with x[I_j] = α_j and y[I_j, α_j] = β_j. As an invariant we have that at the start of each round R is fixed on the coordinates J queried in our decision tree, X_{J̄} has blockwise min-entropy 0.95 log m, and |Y_{J̄}| ≥ 2^{m·|J̄|−n log n}. When we reach a leaf we apply the key lemma one last time to get that all possible z values consistent with our path in the decision tree are still available, and so we can return the same answer as Π.

4 The basic lifting theorem: full proof

To prove Basic Lifting Theorem, we prove that if there exists a communication protocol Π of depth d log m for the composed problem f ∘ Ind^n_m, then there exists a decision tree of depth O(d) for f; the other direction is trivial as a communication protocol can simply compute each variable queried by the decision tree. Our proof will follow the basic structure of previous works [GPW17, GGKS18]. We first define a procedure, called the rectangle partition, which forms the main technical tool in our simulation. We then prove that with this tool and a few useful facts about its output, we can efficiently simulate the protocol Π by a decision tree T, using a number of invariants to show the efficiency and correctness of T. Before we begin, we prove a very useful lemma that shows that if X has high blockwise min-entropy and Y is large then it's possible to find an x* ∈ X such that the full image of the index gadget is available to x*, or in other words Ind^N_m(x*, Y) = {0,1}^N. This appears as Lemma 7 in [GGKS18] for dag-like lifting and is stronger than is necessary for proving Basic Lifting Theorem, but the proof highlights our new counting strategy and will be a useful tool throughout the rest of the paper.⁴

Lemma 4.1 (Full Range Lemma). Let m ≥ n^{1.1} and let N ≤ n. Let X × Y ⊆ [m]^N × ({0,1}^m)^N be such that X has blockwise min-entropy at least 0.95 log m − O(1) and |Y| > 2^{mN−n log m}. Then there exists an x* ∈ X such that for every β ∈ {0,1}^N, there exists a y_β ∈ Y such that Ind^N_m(x*, y_β) = β.

Proof. Assume for contradiction that for all x there exists a β_x such that |{y ∈ Y : y[x] = β_x}| = 0, or in other words for all (x, y) ∈ X × Y, y[x] ≠ β_x. Consider the CNF over y_1 ... y_{mN} where clause C_x is the clause uniquely falsified by y[x] = β_x; then by Claim 2.3 we see that |{y ∈ ({0,1}^m)^N : ∀x, y[x] ≠ β_x}| is maximized when β_x = 0^N. Thus because Y ⊆ ({0,1}^m)^N,

|{y ∈ Y : ∀x, y[x] ≠ β_x}| ≤ |{y ∈ ({0,1}^m)^N : ∀x, y[x] ≠ 0^N}|

Consider the space [mN] where each element is indexed by (i, α) ∈ [N] × [m]. For each x ∈ X, let S_x ⊆ [mN] be the set defined by including (i, α) iff x[i] = α, and let S_X = {S_x : x ∈ X}. By the fact that m^{0.95} ≫ O(n log m) and N ≤ n, S_X has blockwise min-entropy 0.95 log m − O(1) > log(Kn log m) ≥ log(K log(N/ε)), where ε := 2^{−n log m} and K is the constant given by Blockwise Robust Sunflower Lemma. Thus we can apply Blockwise Robust Sunflower Lemma to S_X and get that Pr_{S_y ⊆ [mN]}(∀S_x ∈ S_X, S_x ⊈ S_y) ≤ ε,⁵ and if we look at y as being the indicator vector for S_y then we get that Pr_{y ∼ {0,1}^{mN}}(∀x ∈ X, y[x] ≠ 0^N) ≤ ε. Thus by counting we get

|Y| ≤ |{y ∈ ({0,1}^m)^N : ∀x, y[x] ≠ 0^N}| ≤ ε · 2^{mN} = 2^{mN−n log m}

which is a contradiction as |Y| > 2^{mN−n log m} by assumption.

⁴While we simplify things in this section by using m = n^{1.1}, our improved gadget size (see Section 5) crucially uses the improvements in Blockwise Robust Sunflower Lemma over the basic Robust Sunflower Lemma; the same improvements also give us a very short proof of our main lemma. However, these improvements aren't strictly necessary for our techniques; in Section 7 we provide an alternate proof just using Robust Sunflower Lemma.

⁵Recall that it does not matter that S_y is not necessarily block-respecting.


4.1 Density-restoring partition

Before going into the simulation, we define our essential tool, which is usually called the density-restoring partition or rectangle partition as per [GPW17]. Let N ≤ n and let X × Y ⊆ [m]^N × ({0,1}^m)^N. Our goal will be to output a set of rectangles X^j × Y^{j,β} which cover most of X × Y such that each X^j × Y^{j,β} is "good" in a similar sense to the statement of Full Range Lemma. More specifically, for each X^j × Y^{j,β} there is some set of coordinates J ⊆ [N] such that X and Y are completely fixed on J and "very unfixed" on [N] ∖ J. For X this means high blockwise min-entropy of X_{J̄}, meaning that every joint setting of some set of free coordinates is roughly equally likely. For Y the universe ({0,1}^m)^N is so large in comparison to [m]^N that a lower bound on |Y^{j,β}_{J̄}| is enough to assure Y^{j,β} is free enough in the unfixed coordinates.

Definition 4.1. Let N ≤ n and let ρ ∈ {0, 1, ∗}^N be a partial assignment with J := fix(ρ) ⊆ [N]. A rectangle R = X × Y ⊆ [m]^N × ({0,1}^m)^N is ρ-structured if the following conditions hold:
− X and Y are fixed on the blocks J and Ind^J_m(X_J, Y_J) = ρ[J]
− X_{J̄} has blockwise min-entropy at least 0.95 log m
− |Y_{J̄}| > 2^{m·|J̄|−n log m}

To perform the partition we will need to find the sets X^j × Y^{j,β} along with a corresponding assignment ρ^{j,β} for which they are ρ^{j,β}-structured. This is done in two phases. Our goal in Phase I will be to break up X into disjoint parts X^j, such that each X^j is fixed on some set I_j ⊆ [N] and has blockwise min-entropy 0.95 log m on Ī_j—hence this partition is "density-restoring" when X starts off with blockwise min-entropy below 0.95 log m. To do this, the procedure iteratively finds a maximal partial assignment (I_j, α_j) such that the assignment x[I_j] = α_j violates 0.95 log m blockwise min-entropy in X, splits the remaining X into the part X^j satisfying this assignment and the part X ∖ X^j not satisfying it, and recurses on the latter part. We do this until we've covered at least half of X by X^j subsets.

Our goal in Phase II will be to break up Y into disjoint parts Y^{j,β} for each X^j from Phase I, such that each X^j × Y^{j,β} is ρ^{j,β}-structured for some restriction ρ^{j,β}. We already have the blockwise min-entropy of X^j in the coordinates [N] ∖ I_j by our first goal, so clearly fix(ρ^{j,β}) = I_j for any β. Thus we need to fix the coordinates of Y within the blocks I_j, and within each Y^{j,β} it should be the case that y[I_j, α_j] = β for all y ∈ Y^{j,β}, at which point ρ^{j,β} can be fixed to β on I_j and left free everywhere else.

Algorithm 1: Rectangle Partition

Initialize F = ∅, j = 1, and X^{≥1} := X;
PHASE I (X^j): while |X^{≥j}| ≥ |X|/2 do
    Let I_j be a maximal (possibly empty) subset of [N] such that X^{≥j} violates 0.95 log m-blockwise min-entropy on I_j, and let α_j ∈ [m]^{I_j} be an outcome witnessing this: Pr_{x∼X^{≥j}}(x[I_j] = α_j) > 2^{−0.95|I_j| log m};
    Define X^j := {x ∈ X^{≥j} : x[I_j] = α_j};
    Update F ← F ∪ {(I_j, α_j)}, X^{≥j+1} := X^{≥j} ∖ X^j, and j ← j + 1;
end
PHASE II (Y^{j,β}): for (I_j, α_j) ∈ F, β ∈ {0,1}^{I_j} do
    Let Y′ = {y ∈ Y : y[I_j, α_j] = β}, and let η^{j,β} ∈ ({0,1}^m)^{I_j} be the string which maximizes |{y ∈ Y′ : y[I_j] = η^{j,β}}|;
    Define Y^{j,β} := {y ∈ Y : y[I_j] = η^{j,β}};
end
return F, {X^j}_j, {Y^{j,β}}_{j,β};
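The following Python sketch (not from the paper) brute-forces Phase I of the partition for toy parameters. To keep the tiny example non-degenerate it uses threshold constant c = 0.6 in place of 0.95 (with the paper's constant the toy set X would have to be nearly all of [m]^N); Phase II, which fixes Y inside the chosen blocks, is omitted.

    import itertools
    import math
    from collections import Counter

    def violating_assignment(X, N, m, c):
        # search for a largest (I, alpha) with Pr_{x ~ X}[x[I] = alpha] > 2^(-c |I| log m),
        # i.e. a joint assignment witnessing a violation of c*log(m) blockwise min-entropy
        best = (frozenset(), ())
        for k in range(1, N + 1):
            for I in itertools.combinations(range(N), k):
                counts = Counter(tuple(x[i] for i in I) for x in X)
                alpha, cnt = counts.most_common(1)[0]
                if cnt / len(X) > 2 ** (-c * k * math.log2(m)) and k > len(best[0]):
                    best = (frozenset(I), alpha)
        return best

    def phase_one(X, N, m, c):
        # peel off high-probability assignments until at least half of X has been covered
        parts, remaining = [], list(X)
        while len(remaining) >= len(X) / 2:
            I, alpha = violating_assignment(remaining, N, m, c)
            part = [x for x in remaining if tuple(x[i] for i in sorted(I)) == alpha]
            parts.append((sorted(I), alpha, part))
            remaining = [x for x in remaining if x not in part]
        return parts

    # toy instance: the first block is heavily biased towards the value 0
    m, N, c = 8, 2, 0.6
    X = [(0, b) for b in range(8)] + [(a, a - 1) for a in range(1, 8)]
    for I, alpha, part in phase_one(X, N, m, c):
        print(I, alpha, len(part))   # [0] (0,) 8 -- fixing x[0] = 0 covers more than half of X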


Figure 2: Phases I and II of Rectangle Partition. In each X^j × Y^{j,β}, x[I_j] is fixed to α_j and y[I_j] is fixed so that Ind^{I_j}_m(X^j_{I_j}, Y^{j,β}_{I_j}) = β.

Our algorithm is formally described in Rectangle Partition.⁶ Let X ⊆ [m]^N, let Y ⊆ ({0,1}^m)^N, and let F, {X^j}_j, {Y^{j,β}}_{j,β} be the outputs of the rectangle partition on X × Y. Recall that our goal was to break X × Y up into ρ^{j,β}-structured rectangles X^j × Y^{j,β}; the following simple claims show that the obvious choice of ρ^{j,β} achieves two of the three conditions needed.

Claim 4.2. For all j and for all β ∈ {0,1}^{I_j}, define ρ^{j,β} ∈ {0, 1, ∗}^N to be the restriction where fix(ρ^{j,β}) = I_j and ρ^{j,β}[I_j] = β. Then X^j × Y^{j,β} is fixed on I_j and outputs Ind^{I_j}_m(X^j_{I_j}, Y^{j,β}_{I_j}) = ρ^{j,β}[I_j].

Proof. By definition X^j is fixed to α_j on the coordinates I_j, while Y^{j,β} is fixed to η^{j,β} on the blocks I_j. Since η^{j,β} ∈ ({0,1}^m)^{I_j} clearly satisfies η^{j,β}[α_j] = β, it holds that Ind^{I_j}_m(X^j_{I_j}, Y^{j,β}_{I_j}) = β = ρ^{j,β}[I_j].

Claim 4.3. For all j, X^j_{Ī_j} has blockwise min-entropy at least 0.95 log m.

Proof. Assume for contradiction that there is some I* ⊆ [N] ∖ I_j such that X^j violates 0.95 log m-blockwise min-entropy on I*, and let α* be an outcome witnessing this. Then

Pr_{x∼X^{≥j}}(x[I_j] = α_j ∧ x[I*] = α*) > 2^{−0.95|I_j| log m} · Pr_{x∼X^j}(x[I*] = α*)
                                          > 2^{−0.95|I_j| log m − 0.95|I*| log m} = 2^{−0.95|I_j∪I*| log m}

which contradicts the maximality of I_j.

Before moving to the third condition, the size of Y^{j,β}_{Ī_j}, we show that the deficiency of each X^j drops by Ω(|I_j| log m). This will be used later to show the efficiency of our simulation.

Claim 4.4. For all (I_j, α_j) ∈ F, D_∞(X^j_{Ī_j}) ≤ D_∞(X) − 0.05|I_j| log m + 1.

⁶For those familiar with previous works [GPW17], Rectangle Partition varies in two ways: 1) we truncate Phase I once we've partitioned at least half of X; and 2) in Phase II we fix the rest of Y^{j,β} inside the blocks I_j.


Proof. By our choice of (I_j, α_j) it must be that |X^j| = |X^{≥j}| · Pr_{x∼X^{≥j}}(x[I_j] = α_j) ≥ |X^{≥j}| · 2^{−0.95|I_j| log m}. Then by a simple calculation

D_∞(X^j_{Ī_j}) = |Ī_j| log m − log|X^j|
 ≤ (N − |I_j|) log m − log(|X^{≥j}| · 2^{−0.95|I_j| log m})
 ≤ (N log m − |I_j| log m) − log|X^{≥j}| + 0.95|I_j| log m − log|X| + log|X|
 = (N log m − log|X|) − 0.05|I_j| log m + log(|X|/|X^{≥j}|)
 ≤ D_∞(X) − 0.05|I_j| log m + 1

where the last step used the fact that |X^{≥j}| ≥ |X|/2, since we terminate as soon as |X^{≥j}| < |X|/2 at the start of the j-th iteration.

For our last lemma before going into the simulation, instead of showing that |Y^{j,β}_{Ī_j}| = |Y^{j,β}| is large for every j and every β, we want to show that |Y^{j,β}| is large for some j and every β. If every β were equally likely then |Y^{j,β}| ≈ |Y|/2^{m·|I_j|}; for us it is enough that the smallest Y^{j,β} be a factor of 2^{−O(|I_j| log m)} away from this. We add two new assumptions on X × Y: 1) X starts with blockwise min-entropy very close to 0.95 log m; and 2) Y is initially large. For convenience we redefine X to only be the union of the X^j parts; since we terminate after |X^{≥j}| < |X|/2 we can do this and only decrease the blockwise min-entropy of X by 1. This lemma is a fairly direct application of Full Range Lemma.

Lemma 4.5. Let X := ∪_j X^j be such that X has blockwise min-entropy 0.95 log m − O(1), and let Y be such that |Y| > 2^{mN−n log m+1}. Then there is a j such that for all β ∈ {0,1}^{I_j},

|Y^{j,β}_{Ī_j}| ≥ |Y|/2^{m·|I_j|+2|I_j| log m}

Proof. We will show that there exists a j such that for every β ∈ {0,1}^{I_j}, |{y ∈ Y : y[I_j, α_j] = β}| ≥ |Y|/2^{2|I_j| log m}. If this is true, then by averaging there is some assignment to I_j—aka η^{j,β}—such that

|Y^{j,β}_{Ī_j}| = |Y^{j,β}| ≥ (|Y|/2^{2|I_j| log m})/2^{m·|I_j|} = |Y|/2^{m·|I_j|+2|I_j| log m}

Assume for contradiction that for every j there exists a β_j such that |{y ∈ Y : y[I_j, α_j] = β_j}| < |Y|/2^{2|I_j| log m}. Define Y_= := {y ∈ Y : ∃j, y[I_j, α_j] = β_j} and Y_≠ := Y ∖ Y_= = {y ∈ Y : ∀j, y[I_j, α_j] ≠ β_j}. We will show that |Y_=| < |Y|/2; if this is the case then it must be that |Y_≠| ≥ |Y|/2 > 2^{mN−n log m}. By Full Range Lemma there must exist some x* ∈ X such that for every β ∈ {0,1}^N there exists y_β ∈ Y_≠ such that y_β[x*] = β. Since x* ∈ X, x* ∈ X^j for some j, and so for any β ∈ {0,1}^N such that β[I_j] = β_j, there exists a y_β ∈ Y_≠ such that y_β[x*] = β. But since x* ∈ X^j, x*[I_j] = α_j, so y_β[I_j, α_j] = β_j, which is a contradiction since Y_≠ = {y ∈ Y : ∀j, y[I_j, α_j] ≠ β_j}.

We now show that |Y_=| < |Y|/2. Define F(k) := {(I_j, α_j) ∈ F : |I_j| = k}. Clearly |F(k)| ≤ 2^{1.9k log m−1}, since there are at most (N choose k) ≤ 2^{0.9k log m−1} possible sets I_j, and for each there are m^k possible assignments α_j. Furthermore F(0) must be empty, because if the empty restriction where I_j = ∅ is in F, then the corresponding β_j would be empty and {y ∈ Y : y[∅, ∅] = ∅} = Y would be of size |Y|/2^0, which contradicts our choice of β_j. Since we assumed that |{y ∈ Y : y[I_j, α_j] = β_j}| < |Y|/2^{2|I_j| log m} for all j, by our bound on |F(k)|

|Y_=| < Σ_{k=1}^{N} (2^{1.9k log m−1} · |Y|/2^{2k log m}) ≤ (|Y|/2) · Σ_{k=1}^{N} (2^{0.1 log m})^{−k} < (|Y|/2) · Σ_{k=1}^{∞} 2^{−k} = |Y|/2


which completes the proof.

4.2 Simulation

Proof of Basic Lifting Theorem. For n sufficiently large let m = n^{1.1} and let d ≤ n. As stated in Section 1, given a decision tree T for f of depth d we can build a communication protocol for f ∘ Ind^n_m of depth d log m; Alice sends the entirety of x_j for whatever variable z_j the decision tree queries, Bob sends back y_j[x_j], and they go down the appropriate path in the decision tree. Thus we show the other direction: given a protocol Π of depth d log m for the composed problem f ∘ Ind^n_m we want to construct a decision tree of depth O(d) for f. Note that we can assume that d = o(n) as the theorem is trivial otherwise. The decision tree is naturally constructed by starting at the root of Π and taking a walk down the protocol tree guided by occasional queries to the variables z = (z_1, ..., z_n) of f. During the walk, we maintain a ρ-structured rectangle R = X × Y ⊆ [m]^n × ({0,1}^m)^n which will be a subset of the inputs that reach the current node in the protocol tree, where ρ corresponds to the restriction induced by the decision tree at the current step. Thus our goal is to ensure that the image Ind^n_m(X × Y) has some of its bits fixed according to the queries to z made so far, and no information has been leaked about the remaining free bits of z.

To choose which bits to fix, we use the density restoring partition to identify any assignments to some of the x variables that have occurred with too high a probability; by the way the rectangle partition is defined, the corresponding sets X^j regain blockwise min-entropy. Then using Lemma 4.5, we pick one of these assignments and query all the corresponding z variables, and for the resulting β we know X^j × Y^{j,β} is ρ^{j,β}-structured since the size of Y^{j,β} doesn't decrease too much. With the blockwise min-entropy of X restored and the size of Y kept high, we can update ρ to include ρ^{j,β} and continue to run the rectangle partition at the next node, and so we proceed in this way down the whole communication protocol.

We describe our query simulation of the communication protocol Π in Simulation Protocol. For all v ∈ Π let R_v = X_v × Y_v be the rectangle induced at node v by the protocol Π. The query and output actions listed in bold are the ones performed by our decision tree.

Algorithm 2: Simulation protocol

Initialize v := root of Π; R := [m]^n × ({0,1}^m)^n; ρ = ∗^n;
while v is not a leaf do
    Precondition: R = X × Y is ρ-structured; for convenience define J := fix(ρ);
    Let v_ℓ, v_r be the children of v, and update v ← v_ℓ if |R ∩ R_{v_ℓ}| ≥ |R|/2 and v ← v_r otherwise;
    Execute Rectangle Partition on (X ∩ X_v)_{J̄} × (Y ∩ Y_v)_{J̄} and let F = {(I_j, α_j)}_j, {X^j}_j, {Y^{j,β}}_{j,β} be the outputs;
    Apply Lemma 4.5 to F, {X^j}_j, {Y^{j,β}}_{j,β} to get some index j corresponding to (I_j, α_j) ∈ F;
    Query each variable z_i for every i ∈ I_j, and let β ∈ {0,1}^{I_j} be the result;
    Update X_{J̄} ← X^j and Y_{J̄} ← Y^{j,β}, and update ρ ← ρ ∪ ρ^{j,β} (recall that ρ^{j,β} ∈ {0, 1, ∗}^n is the restriction where fix(ρ^{j,β}) = I_j and ρ^{j,β}[I_j] = β);
end
Output the same value as v does;

Before we prove the correctness and efficiency of our algorithm, we note that we make no distinction between Alice speaking and Bob speaking in our procedure. Here we note that each R_v is a rectangle induced by the protocol Π, and so updating v only splits X or Y—corresponding to when Alice and Bob speak respectively—but not both, and so since R ⊆ R_v we get that |X ∩ X_v| ≥ |X|/2 and |Y ∩ Y_v| ≥ |Y|/2.

Figure 3: One iteration of Simulation Protocol. We perform Rectangle Partition (green lines) on the larger half of R after moving from v to its child (shaded in purple), use Lemma 4.5 to identify a part j (shaded in blue), and then query I_j and set R to X^j × Y^{j,β} for the result z[I_j] = β (shaded in brown).

Efficiency and correctness. To prove the efficiency and correctness of our algorithm, consider the start of the t-th iteration, where we are at a node v and maintaining R^t = X^t × Y^t and ρ^t.⁷ Again for convenience we write J^t := fix(ρ^t). Let (I^t, α^t) be the (possibly empty) assignment returned by Lemma 4.5 corresponding to index j^t, and let β^t be the result of querying z[I^t]. Note that J^{t+1} = J^t ⊔ I^t, X^{t+1}_{J̄^{t+1}} = X^{j^t}_{Ī^t}, and Y^{t+1}_{J̄^{t+1}} = Y^{j^t,β^t}_{Ī^t}. Also note that t ≤ d log m by the depth of the protocol Π.

We show that our precondition that R^t is ρ^t-structured holds for all t, assuming for the moment that |J^t| ≤ O(d) for all t. We show this by the following three invariants:

(i) X^t, Y^t are fixed on J^t and Ind^{J^t}_m(X^t_{J^t}, Y^t_{J^t}) = ρ^t[J^t]
(ii) X^t_{J̄^t} has blockwise min-entropy at least 0.95 log m
(iii) |Y^t_{J̄^t}| ≥ 2^{m·|J̄^t|−t−2|J^t| log m}.

⁷We understand that this notation is somewhat overloaded with X^j, Y^{j,β}, and ρ^{j,β}. Since the proof that the invariants hold is short and we only ever use t (or t+1) for the time stamps and j for the indices, hopefully this won't cause any confusion.

This is enough to show R^t is ρ^t-structured as 2^{m·|J̄^t|−t−2|J^t| log m} > 2^{m·|J̄^t|−n log m} by assumption on |J^t|. All invariants hold at the start of the algorithm since ρ^0 = ∗^n and X^0 × Y^0 = [m]^n × ({0,1}^m)^n. Inductively consider the (t+1)-th iteration assuming all invariants hold for the t-th iteration. After applying Rectangle Partition, invariant (i) follows by Claim 4.2 and invariant (ii) follows by Claim 4.3. For invariant (iii) we first show that it is valid to apply Lemma 4.5 in the (t+1)-th iteration. First, because |X^t ∩ X_v| ≥ |X^t|/2 we know that the blockwise min-entropy of (X^t ∩ X_v)_{J̄^t} is at most one less than the blockwise min-entropy of X^t_{J̄^t}, which is at least 0.95 log m. Second, we have |(Y^t ∩ Y_v)_{J̄^t}| ≥ |Y^t|/2 > 2^{m·|J̄^t|−t−2|J^t| log m−1} > 2^{m·|J̄^t|−O(d log m)}, and recall that d = o(n). Thus we can apply Lemma 4.5 and we get

|Y^{t+1}_{J̄^{t+1}}| = |Y^{j^t,β^t}_{Ī^t}|
 ≥ |(Y^t ∩ Y_v)_{J̄^t}| / 2^{m·|I^t|+2|I^t| log m}
 ≥ 2^{m·|J̄^t|−t−2|J^t| log m−1} / 2^{m·|I^t|+2|I^t| log m}
 ≥ 2^{m·(|J̄^t|−|I^t|)−t−1−2(|J^t|+|I^t|) log m} = 2^{m·|J̄^{t+1}|−(t+1)−2|J^{t+1}| log m}

To show that |J^t| ≤ O(d)—and by extension that our simulation is guaranteed to be efficient—it is enough to show that D_∞(X^t_{J̄^t}) ≤ 2t − 0.05|J^t| log m for every t ≤ d log m, as this gives a bound of |J^t| ≤ 2t/(0.05 log m) = O(d) by the non-negativity of deficiency. When t = 0 then J^t is empty and D_∞(X) = 0. Now in the t-th iteration recall that we query the set I^t, and by Claim 4.4 we get that

D_∞(X^{t+1}_{J̄^{t+1}}) = D_∞(X^{j^t}_{Ī^t})
 ≤ D_∞((X^t ∩ X_v)_{J̄^t}) − 0.05|I^t| log m + 1
 ≤ 1 + (2t − 0.05|J^t| log m) − 0.05|I^t| log m + 1 = 2(t+1) − 0.05|J^{t+1}| log m

We finally have to argue that if we reach a leaf v of Π while maintaining R and ρ, then the solution o ∈ O output by Π is also a valid solution for the values of z, of which the decision tree knows that z[fix(ρ)] = ρ[fix(ρ)]. Suppose Π outputs o ∈ O at the leaf v, and assume for contradiction that there exists β ∈ {0,1}^n consistent with ρ such that β ∉ f^{−1}(o). Since Ind^{fix(ρ)}_m(x, y) = ρ[fix(ρ)] = β[fix(ρ)] for all (x, y) ∈ R, we focus on free(ρ), and let N := |free(ρ)|. Since R is ρ-structured, X_{free(ρ)} has blockwise min-entropy 0.95 log m and |Y_{free(ρ)}| > 2^{m·|free(ρ)|−n log m}. Thus applying Full Range Lemma, we know that there exists (x, y) ∈ R such that Ind^n_m(x, y) = β, which is a contradiction as R ⊆ R_v ⊆ (f ∘ Ind^n_m)^{−1}(o).

5 Optimizing the gadget size

In Section 4 we loosely chose m = n^{1.1} for the purpose of showing the basic lifting statement. In this section we improve from n^{1.1}; more specifically we show a tradeoff between the gadget size and the strength of the lifting theorem. Ultimately our tradeoff gives an optimal gadget size of m being quasilinear in n.

Theorem 5.1 (Improved Lifting Theorem). Let f be a search problem over {0,1}^n, and let m = Ω(n log n). Then

P^cc(f ∘ Ind^n_m) ≥ Ω(P^dt(f))

Warm-up: m = n^{1+ε}. First we improve on Basic Lifting Theorem to get a gadget of size n^{1+ε} for any ε > 0, with no changes in the asymptotic strength of the lifting theorem nor anything non-trivial in the proof. This comes from two observations. First, we only use the size of m in the two places we apply Full Range Lemma, and in both cases we can apply Blockwise Robust Sunflower Lemma as long as 0.95 log m − O(1) ≥ log(Kn log m). Second, from the perspective of our simulation, the constant 0.95 is only used to set the blockwise min-entropy threshold for the density-restoring partition, and was chosen arbitrarily.

So for δ > 0 we can instead choose to put the threshold at (1 − δ) log m, at which point our condition on m changes to (1 − δ) log m ≥ log(Kn log m). Clearly this can be made to fulfill our


condition m ≥ n^{1+ε} with an appropriate choice of δ. The proof itself then simply becomes a matter of replacing 0.95 with 1 − δ and 0.05 with δ throughout the proof, as well as a few other constants. Since Claim 4.4 now gives a drop in deficiency of δ log m for every coordinate we query, the non-negativity of deficiency gives us |fix(ρ^t)| ≤ 2t/(δ log m) at any time t ≤ d log m, which gives us a decision tree of depth (2/δ) · d = O(d) as required.

Near-linear gadget: m = Θ(n log n). Building off the intuition from our warm-up, what happens if δ is chosen to be subconstant? We cannot hope to get a tight lifting theorem, as our decision tree will be of depth (2/δ) · d. Furthermore choosing δ = o(1/log m) makes our blockwise min-entropy threshold (1 − δ) log m trivial, as log m is the maximum possible blockwise min-entropy for X. Thus by choosing δ = Ω(1/log m) we can get the following general lower bound, which gives Improved Lifting Theorem (Theorem 5.1) as a special case.

Theorem 5.2 (Scaling Basic Lifting Theorem). Let f be a search problem over {0,1}^n, and let m, δ be such that δ ≥ Ω(1/log m) and m^{1−δ} ≥ Ω(n log m). Then

P^cc(f ∘ Ind^n_m) ≥ P^dt(f) · Ω(δ log m)

Proof sketch. We start with a given communication protocol Π of depth d · δ log m for the composed problem f ∘ Ind^n_m and construct a decision tree of depth O(d) for f. We define a ρ-structured rectangle R as before except now with the condition that X has blockwise min-entropy (1 − δ) log m. Then in Rectangle Partition we set the blockwise min-entropy threshold for a violating assignment (I_j, α_j) at (1 − δ) log m as well.

To prove Full Range Lemma, note that we can apply Claim 2.3 regardless of m and N, and we can still apply Blockwise Robust Sunflower Lemma as long as we can choose m such that (1 − δ) log m − 2 > log(Kn log m). Thus for this altered rectangle partition procedure, by the same proofs as before, Claim 4.3 states that X^j_{Ī_j} has blockwise min-entropy at least (1 − δ) log m, Claim 4.4 states that D_∞(X^j_{Ī_j}) ≤ D_∞(X) − |I_j| · δ log m + 1, and Lemma 4.5 states that if X has blockwise min-entropy (1 − δ) log m − 2 and |Y| > 2^{mN−n log m}, then there exists a j such that for all β, |Y^{j,β}| ≥ |Y|/2^{m·|I_j|+2|I_j| log m}.

Now our simulation procedure is the same as Simulation Protocol. Again at the start of the t-th iteration we are maintaining R^t = X^t × Y^t, ρ^t, and J^t := fix(ρ^t), where now t ≤ d · δ log m. By the same argument our procedure is well-defined as long as the precondition of R^t being ρ^t-structured holds, and by a deficiency argument using our new Claim 4.4 we get that D_∞(X^t_{J̄^t}) ≤ 2t − |J^t| · δ log m, which implies |J^t| ≤ 2t/(δ log m) ≤ 2d. Our precondition holds by applying the new versions of Claim 4.2, Claim 4.3, and Lemma 4.5 as before. Finally our simulation is correct again by the invariants and Full Range Lemma.

6 Lifting for dag-like protocols

In this section we show that we can perform our lifting theorem in the dag-like model, going from decision dags to communication dags. This was originally proven by Garg et al. [GGKS18] using an alternate proof of Full Range Lemma, and we follow their proof exactly; in fact the only difference is that the parameters in Full Range Lemma require them to define ρ-structured with |Y| ≥ 2^{mn−n^3}, whereas our definition of ρ-structured is the stricter |Y| ≥ 2^{mn−n log m}, which will again allow us to show the same improvements as in Section 5.


6.1 Decision Trees and Dags

To begin we generalize the definition of decision trees and communication protocols for solving search problems [Raz95, Pud10, Sok17]. Let f ⊆ Z × O be a search problem where Z = {0,1}^n and O is the set of potential solutions to the search problem. Let Q be a family of functions from Z to {0,1}. A Q-decision tree T for f is a tree where each internal vertex v of T is labeled with a function q_v ∈ Q, each leaf vertex of T is labelled with some o ∈ O, and satisfying the following properties:

− q_v^{−1}(1) = Z when v is the root of T
− q_v^{−1}(1) ⊆ q_u^{−1}(1) ∪ q_w^{−1}(1) for any node v with children u and w
− q_v^{−1}(1) ⊆ f^{−1}(o) for any leaf node v labeled with o ∈ O

We can see that ordinary decision trees are Q-decision trees where Q is the set of juntas (i.e., conjunctions of literals). The root corresponds to the trivially satisfiable junta 1, and the leaves are labeled with some junta that is sufficient to guarantee some answer o ∈ O. At any node v with children u and w, since q_v(z), q_u(z), and q_w(z) are all juntas and q_v^{−1}(1) ⊆ q_u^{−1}(1) ∪ q_w^{−1}(1), it is not hard to see that there is some variable z_i such that q_u(z) is a relaxation of q_v(z) ∧ z_i and q_w(z) is a relaxation of q_v(z) ∧ ¬z_i, or vice-versa.

This notion also generalizes to communication complexity search problems $F \subseteq \mathcal{X} \times \mathcal{Y} \times \mathcal{O}$, where now $Q$ is the family of functions from $\mathcal{X} \times \mathcal{Y}$ to $\{0,1\}$ corresponding to combinatorial rectangles $X \times Y \subseteq \mathcal{X} \times \mathcal{Y}$; more specifically, $q_{X\times Y}(x,y) = 1$ iff $(x,y) \in X \times Y$. Since $q_v^{-1}(1) \subseteq q_u^{-1}(1) \cup q_w^{-1}(1)$, it must be that the rectangles whose membership we test at $u$ and $w$ cover the rectangle being tested at $v$, and again it is not hard to see that this corresponds to a relaxation of testing membership in $X_v = X_u \sqcup X_w$ or $Y_v = Y_u \sqcup Y_w$.

Now we generalize this notion to dags. For a search problem $f \subseteq \mathcal{Z} \times \mathcal{O}$ and a family of functions $Q$ from $\mathcal{Z}$ to $\{0,1\}$, a $Q$-dag is a directed acyclic graph $D$ where each internal vertex $v$ of the dag is labeled with a function $q_v(z) \in Q$, each leaf vertex is labelled with some $o \in \mathcal{O}$, and the following properties are satisfied (a small brute-force check of these conditions, in code, appears after the list):

− $q_v^{-1}(1) = \mathcal{Z}$ when $v$ is the root of $D$

− $q_v^{-1}(1) \subseteq q_u^{-1}(1) \cup q_w^{-1}(1)$ for any node $v$ with children $u$ and $w$

− $q_v^{-1}(1) \subseteq f^{-1}(o)$ for any leaf node $v$ labeled with $o \in \mathcal{O}$
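As an illustration of these conditions, here is a small, hedged sketch (ours, not the paper's formalism) that checks the three $Q$-dag properties by brute force for a toy search problem over a few bits; the class name, the representation of $q_v$ as a Python predicate, and the representation of $f$ as a function returning the set of valid answers are all our own assumptions.

```python
# Brute-force check of the three Q-dag conditions for a toy search problem
# f over {0,1}^n. The representation below is illustrative only.
from itertools import product

class Node:
    def __init__(self, q, children=(), answer=None):
        self.q = q                 # q_v : {0,1}^n -> {0,1}, given as a predicate
        self.children = children   # () for leaves, (u, w) for internal nodes
        self.answer = answer       # o in O, for leaves only

def accepted(node, n):
    """The set q_v^{-1}(1), computed by enumeration."""
    return {z for z in product((0, 1), repeat=n) if node.q(z)}

def is_valid_qdag(root, f, n):
    """f(z) = set of valid answers for z; returns True iff the 3 conditions hold."""
    all_inputs = set(product((0, 1), repeat=n))
    if accepted(root, n) != all_inputs:            # root accepts everything
        return False
    stack, seen = [root], set()
    while stack:
        v = stack.pop()
        if id(v) in seen:
            continue
        seen.add(id(v))
        acc_v = accepted(v, n)
        if v.children:                             # internal: covered by children
            u, w = v.children
            if not acc_v <= (accepted(u, n) | accepted(w, n)):
                return False
            stack += [u, w]
        else:                                      # leaf: its answer must be valid
            if any(v.answer not in f(z) for z in acc_v):
                return False
    return True
```

Ordinary decision trees and conjunction dags are then the special cases where each $q_v$ is a junta, as discussed above.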

For $\mathcal{Z} = \{0,1\}^n$ a conjunction dag $D$ solving $f$ is a $Q$-dag where $Q$ is the set of all juntas over $\mathcal{Z}$.$^{8}$

For conjunction dags our measure of complexity will be a bit different from size. The width of a conjunction dag is the maximum number of variables occurring in any junta $q_v(z)$. We define

w(f) := least width of a conjunction dag solving f .

For a communication complexity search problem $F \subseteq \mathcal{X} \times \mathcal{Y} \times \mathcal{O}$ and a family of functions $Q$ from $\mathcal{X} \times \mathcal{Y}$ to $\{0,1\}$, we define a $Q$-dag solving $F$ analogously. A rectangle dag $\Pi$ solving $F$ is a $Q$-dag where $Q$ is the set of all indicator functions of rectangles $X \times Y \subseteq \mathcal{X} \times \mathcal{Y}$. We define

$\mathrm{rect\text{-}dag}(F) :=$ least size of a rectangle dag solving $F$.

$^{8}$As noted above, the terms "conjunction" and "junta" are closely related, but conjunctions are usually thought of as syntactic objects while juntas are functions. We keep the term conjunction dag from [GGKS18] for consistency even though we switch to using junta for the functions in $Q$.


6.2 Main theorem

The following is our dag-like deterministic lifting theorem. An earlier version was originally proven in [GGKS18] with $m = n^2$, and we improve this to a near-linear dependence on $n$.

Theorem 6.1 (Dag-like Lifting Theorem [GGKS18]). Let $f$ be a search problem over $\{0,1\}^n$, and let $m = n^{1+\varepsilon}$ for some constant $\varepsilon > 0$. Then
\[
\log \mathrm{rect\text{-}dag}(f \circ \mathsf{Ind}^n_m) \;=\; w(f) \cdot \Theta(\log m).
\]

In fact one can easily check that the same scaling argument as in Scaling Basic Lifting Theorem can also be applied to the proof of Dag-like Lifting Theorem, as noted above.

Proof. For $n$ sufficiently large let $m = n^{1+\varepsilon}$ and let $d \leq n$. Again one direction is simple; given a conjunction dag $D$ for $f$ of width $d$ we can construct a rectangle dag $\Pi$ for $f \circ \mathsf{Ind}^n_m$ of size $m^{O(d)}$ by simply replacing each edge in $D$ with a short protocol that queries all variables fixed by the edge. Thus we will prove that given a rectangle dag $\Pi$ for $f \circ \mathsf{Ind}^n_m$ of size $m^d$ we can construct a conjunction dag $D$ of width $O(d)$ for $f$ (again we can assume $d = o(n)$ since the problem is trivial otherwise).

Our procedure is similar to before, maintaining a $\rho$-structured rectangle $R \subseteq R_v$ at every step, but now there's a slight twist: the protocol may have depth greater than $d$ and can decide to "forget" some bits at each stage, at which point we will have to make sure the assignment $\rho$ we maintain also stays small.

This presents two problems. First off, it won't be enough to find a subrectangle of our current rectangle $R$, since $R$ has some bits fixed that may be forgotten by the protocol. We circumvent this by applying the rectangle partition procedure to the actual rectangle $R_v$, which allows us to find the "important bits" as before, and then shift to a good rectangle $X^j \times Y^{j,\beta}$, leaving $R$ behind.

The second challenge is that whenever we apply Rectangle Partition we need to ensure that every set $I_j$ we find is of size $O(d)$. The Rectangle Lemma is the main technical lemma of [GGKS18], establishing extra properties of Rectangle Partition. We give a new proof of the Rectangle Lemma, showing that by slightly modifying Rectangle Partition we can remove some "error sets" from $X$ and $Y$ and afterwards assume that all our rectangles $X^j \times Y^{j,\beta}$ are $\rho$-structured for some small restriction $\rho$, i.e., one that fixes $O(d)$ coordinates. Here we don't require that $X$ has high blockwise min-entropy or that $Y$ is large; recall that in Rectangle Partition these conditions were only needed to find a "good" $j$. We prove this at the end of the section.

Lemma 6.2 (Rectangle Lemma, [GGKS18]). Let $R = X \times Y \subseteq [m]^N \times \{0,1\}^{mN}$ and let $d < n$. Then there exists a procedure which outputs $\{X^j \times Y^{j,\beta}\}_{j,\beta}$, $X_{\mathrm{err}}$, $Y_{\mathrm{err}}$, where $X_{\mathrm{err}} \subseteq X$ and $Y_{\mathrm{err}} \subseteq Y$ have density at most $2^{-2d\log m}$ in $[m]^n$ and $(\{0,1\}^m)^n$ respectively, and for each $j, \beta$ one of the following holds:

− structured: $X^j \times Y^{j,\beta}$ is $\rho^{j,\beta}$-structured for some $\rho^{j,\beta}$ of width at most $O(d)$

− error: $X^j \times Y^{j,\beta} \subseteq (X_{\mathrm{err}} \times \{0,1\}^{mn}) \cup ([m]^n \times Y_{\mathrm{err}})$

Finally, a query alignment property holds: for every $x \in [m]^n \setminus X_{\mathrm{err}}$ there exists a subset $I_x \subseteq [n]$ with $|I_x| \leq O(d)$ such that every "structured" $X^j \times Y^{j,\beta}$ intersecting $\{x\} \times \{0,1\}^{mn}$ has $\mathrm{fix}(\rho^{j,\beta}) \subseteq I_x$.

With the Rectangle Lemma at hand, the simulation algorithm (Algorithm 3) and proof of correctness essentially follow [GGKS18]. In particular, Algorithm 3 starts with a preprocessing step where for each vertex $v$ in the communication dag we apply Lemma 6.2. Then, in a bottom-up fashion, for each $v$ we remove from $R_v$ all error sets appearing in descendants of $v$.


Algorithm 3: Dag-like Simulation Protocol

PREPROCESSING: initialize $X^*_{\mathrm{err}} = \emptyset$ and $Y^*_{\mathrm{err}} = \emptyset$, and for all $v \in \Pi$ let $R_v := X_v \times Y_v$ be the rectangle corresponding to $v$;
for $v \in \Pi$, starting from the leaves and going up to the root, do
    update $X_v \leftarrow X_v \setminus X^*_{\mathrm{err}}$ and $Y_v \leftarrow Y_v \setminus Y^*_{\mathrm{err}}$;
    apply Lemma 6.2 to $X_v \times Y_v$ and let $\{X^j_v\}_j$, $\{Y^{j,\beta}_v\}_{j,\beta}$, $X_{\mathrm{err}}$, $Y_{\mathrm{err}}$, $\{I_x\}_x$ be the outputs;
    update $X^*_{\mathrm{err}} \leftarrow X^*_{\mathrm{err}} \cup X_{\mathrm{err}}$ and $Y^*_{\mathrm{err}} \leftarrow Y^*_{\mathrm{err}} \cup Y_{\mathrm{err}}$;
end
Initialize $v :=$ root of $\Pi$; $R := R_v$; $\rho = *^n$;
while $v$ is not a leaf do
    Precondition: $R = X \times Y$ is $\rho$-structured; for convenience define $J := \mathrm{fix}(\rho)$; furthermore $|J| \leq O(d)$;
    apply Full Range Lemma to $X_{\overline{J}} \times Y_{\overline{J}}$ to get $x^* \in X$;
    let $v_\ell, v_r$ be the children of $v$, let $j_\ell, j_r$ be the indices such that $x^* \in X^{j_\ell}_{v_\ell}$ and $x^* \in X^{j_r}_{v_r}$, and let $I_{j_\ell}$ and $I_{j_r}$ be the query alignment sets $I_{x^*}$ for $v_\ell$ and $v_r$ respectively;
    query each variable $z_i$ for every $i \in (I_{j_\ell} \cup I_{j_r}) \setminus J$, let $\beta_{j_\ell} \in \{0,1\}^{I_{j_\ell}}$ be the result concatenated with $\rho[J]$ and restricted to $I_{j_\ell}$, and let $\beta_{j_r} \in \{0,1\}^{I_{j_r}}$ be defined analogously;
    let $y^* \in Y$ be such that $\mathsf{Ind}^{I_{j_\ell}}_m(x^*, y^*) = \beta_{j_\ell}$ and $\mathsf{Ind}^{I_{j_r}}_m(x^*, y^*) = \beta_{j_r}$, and let $c \in \{\ell, r\}$ be such that $(x^*, y^*) \in X^{j_c}_{v_c} \times Y^{j_c,\beta_{j_c}}_{v_c}$;
    update $X \times Y = X^{j_c}_{v_c} \times Y^{j_c,\beta_{j_c}}_{v_c}$, $\rho \leftarrow \rho^{j_c,\beta_{j_c}}$, and $v \leftarrow v_c$;
end
Output the same value as $v$ does;

After preprocessing to remove the error sets, we enter the main while loop of the algorithm, which iteratively walks down the communication dag, maintaining the invariant that when processing node $v$, we have a $\rho$-structured rectangle $R$ associated with $v$ with at most $O(d)$ bits fixed. We apply Full Range Lemma to $R$ to find some $x^*$. Since all error sets were removed in the preprocessing step, we are guaranteed that $x^*$ is contained not only in $R$ (the rectangle associated with $v$), but also in the rectangles associated with the children of $v$. That is, $x^* \in X^{j_\ell}_{v_\ell}$ and $x^* \in X^{j_r}_{v_r}$ for some $X^{j_\ell}_{v_\ell}$ generated in the left child and $X^{j_r}_{v_r}$ generated in the right child. Furthermore $x^*$ has full range in $R \subseteq R_{v_\ell} \cup R_{v_r}$, and so there cannot be any $Y^{j_\ell,\beta}_{v_\ell}$ or $Y^{j_r,\beta}_{v_r}$ that is missing.

We use the query alignment property for $I_{j_\ell}$ and $I_{j_r}$ corresponding to $X^{j_\ell}_{v_\ell}$ and $X^{j_r}_{v_r}$, and query all unknown bits for both sets. Then, because of the full range of $x^*$, we find a $y^*$ compatible with all fixed bits, and move to the (structured) rectangle output by the partition at whichever child of $v$ contains $y^*$, since $R$ is contained in the union of the rectangles at $v$'s children. Thus when we move down to a child of $v$, we maintain our invariant of being in a $\rho$-structured rectangle with at most $O(d)$ fixed bits. The algorithm (Algorithm 3) is stated formally above.

We briefly go over the invariants needed to run Dag-like Simulation Protocol. For the preprocessing step consider any node $v$. Since the number of descendants of $v$ is at most $|\Pi| = m^d$, we know that after having removed all error sets below the current node $v$ we've only lost at most an $m^d \cdot 2^{-2d\log m} \leq 1/2$ fraction of $X_v$ and $Y_v$. At the root of $\Pi$, after processing $R_v$ we've lost in total at most an $m^d \cdot 2^{-2d\log m} \leq 1/2$ fraction of $[m]^n$ and $(\{0,1\}^m)^n$ each, meaning we start with $|X_v| \geq m^n/2$ and $|Y_v| \geq 2^{mn}/2$. After this, starting from the rectangle associated with the root, we will never encounter an error rectangle in our procedure. This will be the only place where we use the fact that $|\Pi| = m^d$.
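Spelling out the arithmetic behind this density loss: each application of Lemma 6.2 contributes error sets of density at most $2^{-2d\log m}$, and there are at most $|\Pi| = m^d$ of them, so
\[
|\Pi|\cdot 2^{-2d\log m} \;=\; m^{d}\cdot 2^{-2d\log m} \;=\; 2^{d\log m - 2d\log m} \;=\; 2^{-d\log m} \;\leq\; \tfrac{1}{2},
\]
and hence at most half of $X_v$ (and of $Y_v$) is ever discarded.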

19

Page 20: Lifting with Sunflowers

Figure 4: One iteration of Dag-like Simulation Protocol. We perform Rectangle Partition on both $R_{v_\ell}$ and $R_{v_r}$ separately, use Full Range Lemma to find an $x^*$ with full range in $R$, query all bits in the sets $I_{j_\ell}$ and $I_{j_r}$ corresponding to $X^{j_\ell}, X^{j_r} \ni x^*$, find a $y^*$ for which $\mathsf{Ind}^n_m(x^*, y^*)$ matches the result, and set $R$ to $X^{j_c} \times Y^{j_c,\beta_c} \ni (x^*, y^*)$ for $c \in \{\ell, r\}$.

In the main procedure, assuming the precondition of $R$ being $\rho$-structured holds, we meet all conditions for applying Full Range Lemma. Since $x^*$ has full range we know that every $Y^{j_\ell,\beta_{j_\ell}}_{v_\ell}$ and $Y^{j_r,\beta_{j_r}}_{v_r}$ exists, and since we removed all error sets the rectangle $X^{j_c}_{v_c} \times Y^{j_c,\beta_c}_{v_c}$ we end up in must be in the "structured" case of Lemma 6.2. Thus we again end up in an $R$ which is $\rho'$-structured for some $\rho'$ which fixes at most $O(d)$ coordinates, and so we've met the preconditions for the next round.

Our argument at the leaves is identical to the proof of Basic Lifting Theorem, but we restate it for completeness. We have to argue that if we reach a leaf $v$ of $\Pi$ while maintaining $R$ and $\rho$, then the solution $o \in \mathcal{O}$ output by $\Pi$ is also a valid solution for the values of $z$, about which the decision dag knows that $z[\mathrm{fix}(\rho)] = \rho[\mathrm{fix}(\rho)]$. Suppose $\Pi$ outputs $o \in \mathcal{O}$ at the leaf $v$, and assume for contradiction that there exists $\beta \in \{0,1\}^n$ consistent with $\rho$ such that $\beta \notin f^{-1}(o)$. Since $\mathsf{Ind}^{\mathrm{fix}(\rho)}_m(x, y) = \rho[\mathrm{fix}(\rho)] = \beta[\mathrm{fix}(\rho)]$ for all $(x, y) \in R$, we focus on $\mathrm{free}(\rho)$, and let $N := |\mathrm{free}(\rho)|$. Since $R$ is $\rho$-structured, $X_{\mathrm{free}(\rho)}$ has blockwise min-entropy $0.95\log m$ and $|Y_{\mathrm{free}(\rho)}| > 2^{m\cdot|\mathrm{free}(\rho)| - n\log m}$. Thus applying Full Range Lemma, we know that there exists $(x, y) \in R$ such that $\mathsf{Ind}^n_m(x, y) = \beta$, which is a contradiction as $R \subseteq R_v \subseteq (f \circ \mathsf{Ind}^n_m)^{-1}(o)$.

Proof of Lemma 6.2. Our procedure for generating rectangles $X^j \times Y^{j,\beta}$ will be nearly the same as Rectangle Partition, with the one small caveat that we run Phase I until $X^{\geq j}$ is empty instead of stopping after partitioning half of $X$.$^{9}$

$^{9}$Our procedure doesn't require a drop in deficiency anymore, since it's enough to maintain the invariant that we've fixed at most $O(d)$ coordinates.


Figure 5: Error rectangles. $X^j$ is added to $X_{\mathrm{err}}$ if $I_j$ is too large, while $Y^{j,\beta}$ is added to $Y_{\mathrm{err}}$ if $Y^{j,\beta}$ is too small.

Let $R \subseteq [m]^n \times \{0,1\}^{mn}$, let $\{X^j \times Y^{j,\beta}\}_{j,\beta}$ be the output of this procedure on $R$, and let $\mathcal{F} = \{(I_j, \alpha_j)\}_j$ be the set of assignments found in the procedure as in Rectangle Partition. We first need to define the error rectangles $X_{\mathrm{err}}$ and $Y_{\mathrm{err}}$. Intuitively, every "structured" $X^j \times Y^{j,\beta}$ is $\rho^{j,\beta}$-structured for some $\rho^{j,\beta}$, and furthermore we want to ensure that the number of bits fixed in $\rho^{j,\beta}$ is at most $O(d)$. For $X$ this means ensuring that $I_j$ is small, while for $Y$ this means ensuring that $Y^{j,\beta}$ is large. We initialize $J_{\mathrm{good}} = [|\mathcal{F}|]$, and we repeatedly find "bad" $j \in J_{\mathrm{good}}$ and add either $X^j$ or $Y^{j,\beta}$ to $X_{\mathrm{err}}$ or $Y_{\mathrm{err}}$ (a small sketch of this pruning loop, in code, follows the two rules below):

− $X_{\mathrm{err}}$: while there exists $j \in J_{\mathrm{good}}$ such that $|I_j| > 40d$, update $X_{\mathrm{err}} \leftarrow X_{\mathrm{err}} \cup X^j$ and $J_{\mathrm{good}} \leftarrow J_{\mathrm{good}} \setminus \{j\}$

− $Y_{\mathrm{err}}$: while there exists $j \in J_{\mathrm{good}}$ and $\beta$ such that $|Y^{j,\beta} \setminus Y_{\mathrm{err}}| < 2^{m\cdot|I_j| - 5d\log m}$, update $Y_{\mathrm{err}} \leftarrow Y_{\mathrm{err}} \cup Y^{j,\beta}$ for all such $\beta$ and $J_{\mathrm{good}} \leftarrow J_{\mathrm{good}} \setminus \{j\}$
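Here is a hedged sketch (ours, not the paper's) of this pruning loop. The partition $\{X^j \times Y^{j,\beta}\}$ and the index sets $I_j$ are taken as given inputs (represented as plain Python containers); the thresholds mirror the two rules above.

```python
# Sketch of the error-set pruning loop in the proof of Lemma 6.2.
import math

def prune_error_sets(X_parts, Y_parts, I, d, m):
    """X_parts[j] = X^j; Y_parts[j][beta] = Y^{j,beta}; I[j] = I_j (set of blocks)."""
    X_err, Y_err = set(), set()
    J_good = set(X_parts)
    # X_err rule: discard every j whose fixed set I_j is too large.
    for j in list(J_good):
        if len(I[j]) > 40 * d:
            X_err |= X_parts[j]
            J_good.discard(j)
    # Y_err rule: repeat until no surviving Y^{j,beta} (minus Y_err) is below threshold.
    changed = True
    while changed:
        changed = False
        for j in list(J_good):
            thresh = 2 ** (m * len(I[j]) - 5 * d * math.log2(m))
            small = [b for b, Yb in Y_parts[j].items() if len(Yb - Y_err) < thresh]
            if small:
                for b in small:
                    Y_err |= Y_parts[j][b]
                J_good.discard(j)
                changed = True
    return X_err, Y_err, J_good
```

The returned $J_{\mathrm{good}}$ indexes the rectangles that end up in the "structured" case of the lemma.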

We prove a series of short claims about $X_{\mathrm{err}}$ and $Y_{\mathrm{err}}$, most of which immediately follow in the same way as Claim 4.2, Claim 4.3, and Lemma 4.5. The first puts these claims together to show that all rectangles corresponding to $j \in J_{\mathrm{good}}$ fulfill the "structured" case of Lemma 6.2.

Claim 6.3. For all $j \in J_{\mathrm{good}}$ and all $\beta \in \{0,1\}^{I_j}$, $X^j \times Y^{j,\beta}$ is $\rho^{j,\beta}$-structured for some $\rho^{j,\beta}$ which fixes at most $O(d)$ coordinates.

Proof. As usual, for all $j$ and for all $\beta \in \{0,1\}^{I_j}$, define $\rho^{j,\beta} \in \{0,1,*\}^N$ to be the restriction where $\mathrm{fix}(\rho^{j,\beta}) = I_j$ and $\rho^{j,\beta}[I_j] = \beta$. Then

− by Claim 4.2, $X^j \times Y^{j,\beta}$ is fixed on $I_j$ and outputs $\mathsf{Ind}^{I_j}_m(X^j_{I_j}, Y^{j,\beta}_{I_j}) = \rho^{j,\beta}[I_j]$;

− by Claim 4.3, $X^j_{\overline{I_j}}$ has blockwise min-entropy $0.95\log m$;

− since $j \in J_{\mathrm{good}}$, it must be that $|Y^{j,\beta}| \geq 2^{m\cdot|I_j| - 5d\log m} \geq 2^{m\cdot|I_j| - n\log m}$;$^{10}$

and so $X^j \times Y^{j,\beta}$ is $\rho^{j,\beta}$-structured. Furthermore, since $j \in J_{\mathrm{good}}$ it must be the case that $|\mathrm{fix}(\rho^{j,\beta})| = |I_j| \leq 40d$.

$^{10}$Note that even if some of $Y^{j,\beta}$ is lost when we expand $Y_{\mathrm{err}}$ for some $j'$, the loop will not terminate until every $Y^{j,\beta}$ that is currently too small is added, and so if $Y^{j,\beta}$ survives then it must be large enough.


Finally we handle the density of the error rectangles. In our simulation this will be used to ensure we can apply Full Range Lemma at every step.

Claim 6.4. $|X_{\mathrm{err}}| \leq m^n \cdot 2^{-2d\log m}$ and $|Y_{\mathrm{err}}| \leq 2^{mn} \cdot 2^{-2d\log m}$.

Proof. For $X_{\mathrm{err}}$ we have two cases: either $X_{\mathrm{err}}$ is empty, in which case the claim is trivial, or $X_{\mathrm{err}}$ is not empty and there is some minimal $j$ such that $X^j$ gets added to $X_{\mathrm{err}}$, and by extension $|I_j| > 40d$. Recall that we showed $|X^j| \geq |X^{\geq j}| \cdot 2^{-0.95|I_j|\log m}$, and by extension $H_\infty(X^j) \geq H_\infty(X^{\geq j}) - |I_j|\cdot 0.95\log m$. Then because $X^j$ is a set in $[m]^n$ fixed on coordinates $I_j \subseteq [n]$, $H_\infty(X^j) \leq (n - |I_j|)\log m$. Combining these two bounds gives us $H_\infty(X^{\geq j}) \leq (n - 0.05|I_j|)\log m$, and note that $X_{\mathrm{err}} \subseteq X^{\geq j}$. Thus by our choice of $j$ we get that
\[
|X_{\mathrm{err}}| \;\leq\; |X^{\geq j}| \;<\; 2^{(n - 0.05\cdot 40d)\log m} \;=\; m^n \cdot 2^{-2d\log m}
\]

For $Y_{\mathrm{err}}$, as in the proof of Lemma 4.5, for all $k \in [40d]$ there are $\binom{n}{k} \cdot m^k \cdot 2^k < 2^{3k\log m}$ choices of $(I_j, \alpha_j, \beta_j)$ such that $|Y^{j,\beta_j}| < 2^{m\cdot(N-k) - 5d\log m}$, and taking a union bound we get that
\[
|Y_{\mathrm{err}}| \;\leq\; \sum_{k=1}^{40d} 2^{3k\log m} \cdot 2^{m\cdot(N-k) - 5d\log m} \;\leq\; 40d \cdot 2^{m(N-1) - 2d\log m} \;\ll\; 2^{mn} \cdot 2^{-2d\log m}
\]

which completes the proof.

The proof of Lemma 6.2 is now fairly immediate. The density of $X_{\mathrm{err}}$ and $Y_{\mathrm{err}}$ follows from Claim 6.4. For any $X^j \times Y^{j,\beta}$, if $j \in J_{\mathrm{good}}$ then by Claim 6.3 this fulfills the structured case, while if $j \notin J_{\mathrm{good}}$ then either $X^j \subseteq X_{\mathrm{err}}$ or $Y^{j,\beta} \subseteq Y_{\mathrm{err}}$ by definition. The query alignment property holds by taking $I_x = I_j$ for all $x \notin X_{\mathrm{err}}$, where $j$ is such that $x \in X^j$.

7 Lifting that scales with simulation length

In this section we prove a variant of Basic Lifting Theorem which allows us to set the gadget size $m$ in terms of the target decision tree depth $d$. This theorem was originally proven in [GKMP20], but here we get a simpler proof via the sunflower lemma together with simple counting.

Theorem 7.1 (Low-depth Lifting Theorem [GKMP20]). Let $f$ be a search problem over $\{0,1\}^n$, and let $m \geq (\mathsf{P}^{dt}(f))^{5+\varepsilon}$ for some constant $\varepsilon > 0$. Then
\[
\mathsf{P}^{cc}(f \circ \mathsf{Ind}^n_m) \;\geq\; \mathsf{P}^{dt}(f) \cdot \Omega(\log m).
\]

Proof sketch. Again we start with a given real protocol $\Pi$ of depth $d\log m$ for the composed problem $f \circ \mathsf{Ind}^n_m$ and construct a decision tree of depth $O(d)$ for $f$. Putting aside Full Range Lemma for the moment, we can apply Rectangle Partition as in Basic Lifting Theorem, and Claims 4.2, 4.3, and 4.4 hold with no change in proof. For Lemma 4.5 we impose the slightly stronger precondition that $|Y| \geq 2^{mN - O(d\log m)}$,$^{11}$ which will become useful when we go to prove Full Range Lemma. Our simulation proceeds according to Simulation Protocol, and the same proofs hold to show our simulation's efficiency. For correctness, note that with our new condition that $|Y| \geq 2^{m\cdot|\mathrm{free}(\rho)| - O(d\log m)}$ the same argument holds as before, as long as $m^{1-\delta} > O(d\log m)$, where $(1-\delta)\log m$ is our blockwise min-entropy threshold in Rectangle Partition.

$^{11}$Since our invariants actually give $|Y^t| \geq 2^{m\cdot|\mathrm{free}(\rho^t)| - t - 2d\log m}$ for $t \leq d\log m$, we could have stated Lemma 4.5 this way originally, but to simplify the presentation we omitted any reference to $d$ in Rectangle Partition.


Now we return to Full Range Lemma. To see why this is our main technical challenge, note that we can apply Claim 2.3 as before, but the rest of the argument no longer holds for $m \ll n$, as we cannot apply Blockwise Robust Sunflower Lemma for $(1-\delta)\log m < \log(K\log(N/\varepsilon))$. We will prove a variant of Lemma 4.1, from which Lemma 4.5 follows using our additional constraint $|Y| > 2^{mN - O(d\log m)}$. As usual we work with the relatively simple blockwise min-entropy threshold $0.95\log m$ and gadget size $d^7$ as a matter of clarity, and at the end of the section we prove it more generally to get better parameters. For convenience we reuse the shorthand $\gamma_j := (I_j, \alpha_j)$.

Lemma 7.2. Let $N \leq n$, let $d = o(n)$, and let $m > d^7$. Let $X$ be such that $X$ has blockwise min-entropy $0.95\log m$, and let $\mathcal{F} = \{\gamma_j\}_j$ be a block-respecting set system over $[m]^N$ such that 1) for all $x \in X$ there exists a $\gamma_j \in \mathcal{F}$ consistent with $x$, and 2) $|\gamma_j| \leq O(d)$ for all $j$. Then
\[
\Pr_{\mathbf{y} \subseteq [mN]}(\forall j : \gamma_j \not\subseteq \mathbf{y}) \;<\; 2^{-\Omega(d\log m)}
\]

We prove this lemma in Section 7.1, as it is our main technical contribution.

7.1 Proof of Lemma 7.2

Proof sketch. Our goal will be to show that $\mathcal{F}$ contains a $2^{-\Omega(d\log m)}$-robust sunflower with an empty core. Note that in proving Full Range Lemma we invoked Blockwise Robust Sunflower Lemma, which stated that $X$ contained an $\varepsilon$-robust sunflower with an empty core. However, while we still have blockwise min-entropy on $X$, for $m \ll N$ we cannot use Blockwise Robust Sunflower Lemma on $X$.

Instead we turn to $\mathcal{F}$, and since we don't have a notion of blockwise min-entropy for $\mathcal{F}$ we switch to using the basic Robust Sunflower Lemma, and instead use the blockwise min-entropy of $X$ to ensure $\mathcal{F}$ is large. More specifically, because the blockwise min-entropy of $X$ is at least $0.95\log m$, for any non-empty set $\gamma_j \in \mathcal{F}$ the set of all $x \in X$ consistent with $\gamma_j$ can only cover a $2^{-0.95|\gamma_j|\log m}$ fraction of $X$. Since each $x \in X$ must be consistent with some $\gamma_j$, there must be a huge number of sets $\gamma_j$ in $\mathcal{F}$, and so by Robust Sunflower Lemma $\mathcal{F}$ contains some $\varepsilon$-robust sunflower $\mathcal{F}_S$ even for very small $\varepsilon$.

If $|S| = 0$ then we are done, but unfortunately using Robust Sunflower Lemma we have no control over $|S|$. Instead, we employ an iterative strategy where we drive down the size of the smallest core $S$ for which $\mathcal{F}_S$ is an $\varepsilon$-robust sunflower. For simplicity assume there is some $s \leq 20d$ such that every set in $\mathcal{F}$ has size $s$, and so in the worst case we can assume that every core $S$ for which $\mathcal{F}_S$ is an $\varepsilon$-robust sunflower has size $s-1$. We now want to show that there exist enough such cores $S$ that the collection of these cores itself contains an $\varepsilon$-robust sunflower, and so it must have a core $S'$ of size at most $s-2$. If this is true then it turns out $\mathcal{F}_{S'}$ is an $\varepsilon'$-robust sunflower for $\varepsilon'$ only slightly larger than $\varepsilon$. From this we've made progress; by increasing $\varepsilon$ slightly we've found a core of a smaller size.

Using this idea, at a high level we will perform an iterative procedure, where we repeat the following three steps until we find a sunflower with an empty core in $\mathcal{F}$: 1) repeatedly pluck $\varepsilon$-robust sunflowers from $\mathcal{F}$; 2) when we have enough sunflowers, pluck a robust sunflower from their cores; 3) increase $\varepsilon$ enough so that the core of this new sunflower is the core of an $\varepsilon$-robust sunflower in $\mathcal{F}$ as well. In our actual calculations we will need to keep track of the sets of cores of each size, as well as to focus only on the sets in $\mathcal{F}$ of a certain size. This will allow us to know when we should pluck a sunflower from the cores, and will give us a measure of progress towards finding an empty core, which will allow us to choose our $\varepsilon$ small enough to get $2^{-\Omega(d\log m)}$ at the end.

The last remaining piece is showing that we can actually pluck enough sunflowers from $\mathcal{F}$ to repeat this procedure enough to get an empty core, without running out of sets in $\mathcal{F}$. Unfortunately, when we find a core $S$ and pluck the sunflower $\mathcal{F}_S$, we have no control over how many sets are actually in $\mathcal{F}_S$, and so it seems hopeless to control how many rounds we can run for. However, note that the $0.95\log m$ lower bound on the blockwise min-entropy of $X$ holds for any $S$ over $[m]^N$, which applies to a) the original sets $\gamma_j \in \mathcal{F}$, and b) the cores $S$ that we pluck. Thus instead of arguing that each $\mathcal{F}_S$ we find is small, we instead argue that the fraction of $X$ covered by sets remaining in $\mathcal{F}$ is large, using the blockwise min-entropy of $X$ for all (non-empty) cores $S$ we've found so far. Then, again using the blockwise min-entropy of $X$ on $\mathcal{F}$, we know that $\mathcal{F}$ must still have many sets to cover the remaining fraction of $X$, as we did when showing that $\mathcal{F}$ was originally big enough to apply Robust Sunflower Lemma.

Proof. It is sufficient to show that $\mathcal{F}$ contains an $\varepsilon$-robust sunflower with an empty core for some $\varepsilon \leq 2^{-\Omega(d\log m)}$. Again we assume $\emptyset \notin \mathcal{F}$ as the lemma is trivial otherwise, and so for all $s \in [O(d)]$ let $\mathcal{F}(s)$ be the set of all sets in $\mathcal{F}$ of size exactly $s$, and let $X(s)$ be the set of all $x \in X$ consistent with a set in $\mathcal{F}(s)$. Since every $x$ is consistent with some $\gamma \in \mathcal{F} = \cup_s\mathcal{F}(s)$, we know that $X = \cup_s X(s)$. Therefore by averaging there must exist some $s \in [O(d)]$ such that $|X(s)| \geq \frac{1}{O(d)}|X|$, and so we fix an arbitrary such $s$.

We define an iterative procedure to find a robust sunflower with an empty core in $\mathcal{F}(s)$. Set $t = 0$, set $\varepsilon_0 := 2^{-\Omega(d\log m) - s^2\log m}$, and set $\mathcal{F}^0 := \mathcal{F}(s)$. For $k = 0 \ldots s-1$ set $\mathcal{S}^k := \emptyset$. We repeat the following until we add a set to $\mathcal{S}^0$ (a control-flow sketch in code appears after the procedure):

0. abort if any of the following invariants ever fails to hold:
   (a) $|\mathcal{S}^k| \leq 2^{0.5k\log m}$
   (b) $|\mathcal{F}^t| \geq 2^{0.5s\log m}$
   (c) for every $k$ and every $S \in \mathcal{S}^k$, $|S| = k$ and $\mathcal{F}(s)_S$ is an $\varepsilon_t$-robust sunflower
   (d) $\varepsilon_t < 2^{-\Omega(d\log m)}$
1. let $\mathcal{F}^t_S$ be an $\varepsilon_t$-robust sunflower in $\mathcal{F}^t$; if none exists, abort
2. increment $t$ and set $\varepsilon_t \leftarrow \varepsilon_{t-1}$
3. set $\mathcal{S}^{|S|} \leftarrow \mathcal{S}^{|S|} \cup \{S\}$ and set $\mathcal{F}^t \leftarrow \mathcal{F}^{t-1} - \mathcal{F}^{t-1}_S$
4. while there exists $k$ such that $|\mathcal{S}^k| = 2^{0.5k\log m}$:
   (a) if $k = 0$, exit and return $\varepsilon_t$
   (b) let $\mathcal{S}^k_S$ be an $\varepsilon_0$-robust sunflower in $\mathcal{S}^k$; if none exists, abort
   (c) increment $t$ and set $\varepsilon_t \leftarrow \varepsilon_{t-1} + \varepsilon_0$
   (d) set $\mathcal{S}^{|S|} \leftarrow \mathcal{S}^{|S|} \cup \{S\}$, set $\mathcal{S}^{k'} \leftarrow \mathcal{S}^{k'} - \mathcal{S}^{k'}_S$ for all $k' > |S|$, and set $\mathcal{F}^t \leftarrow \mathcal{F}^{t-1} - \mathcal{F}^{t-1}_S$
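To keep the bookkeeping straight, here is a control-flow sketch of the procedure in code. It is ours, not the paper's; it treats the robust-sunflower extraction as an assumed oracle `find_robust_core(family, eps)` that returns a core $S$ with the required property or `None`, and it omits the invariant checks of step 0. Everything else (names, set representation) is illustrative.

```python
# Control-flow sketch of the core-shrinking procedure above. Sets are
# frozensets of (block, value) pairs; F_s plays the role of F(s).
import math

def shrink_core(F_s, s, m, eps0, find_robust_core):
    F_t = set(F_s)                                   # current family F^t
    S_lvl = {k: set() for k in range(s)}             # the collections S^k, k = 0..s-1
    cap = lambda k: 2 ** (0.5 * k * math.log2(m))    # cutoff 2^{0.5 k log m}
    eps_t, t = eps0, 0
    while True:
        # invariant checks (step 0) omitted in this sketch
        S = find_robust_core(F_t, eps_t)             # step 1
        if S is None:
            raise RuntimeError("abort: no eps_t-robust sunflower in F^t")
        t += 1                                       # step 2: eps_t unchanged
        S_lvl[len(S)].add(S)                         # step 3
        F_t = {g for g in F_t if not S <= g}         #   drop sets containing S
        while any(len(S_lvl[k]) >= cap(k) for k in range(s)):   # step 4
            k = next(k for k in range(s) if len(S_lvl[k]) >= cap(k))
            if k == 0:
                return eps_t                         # step 4a: empty core found
            S = find_robust_core(S_lvl[k], eps0)     # step 4b
            if S is None:
                raise RuntimeError("abort: no eps_0-robust sunflower in S^k")
            t, eps_t = t + 1, eps_t + eps0           # step 4c
            S_lvl[len(S)].add(S)                     # step 4d
            for kp in range(len(S) + 1, s):
                S_lvl[kp] = {T for T in S_lvl[kp] if not S <= T}
            F_t = {g for g in F_t if not S <= g}
```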

If this process exits without aborting, then clearly by invariants (c) and (d), $\mathcal{F}(s)$ is a $2^{-\Omega(d\log m)}$-robust sunflower with an empty core, as desired (note that when the procedure exits, $|\mathcal{S}^0| = 2^{0.5\cdot 0\cdot\log m} = 1$). Thus we prove that the process never aborts.

First we show that by Robust Sunflower Lemma, in steps 1 and 4b we always find a robust sunflower. Recall that $m > d^7$.$^{12}$ For step 1, by invariant (b) and the fact that $1/\varepsilon_t \leq 1/\varepsilon_0 = 2^{\Omega(d\log m) + s^2\log m}$ we have
\[
|\mathcal{F}^t| \;\geq\; 2^{0.5s\log m} = (m^{0.5})^s \;\gg\; (\Omega(d^3\log m))^s \;\geq\; (s\cdot\log\exp(\Omega(d\log m) + s^2\log m))^s \;\geq\; (s\cdot\log 1/\varepsilon_t)^s
\]
and for step 4b, by the inner loop condition the same calculation shows
\[
|\mathcal{S}^k| \;=\; 2^{0.5k\log m} = (m^{0.5})^k \;\gg\; (\Omega(d^3\log m))^k \;\geq\; (k\cdot\log\exp(\Omega(d\log m) + s^2\log m))^k \;=\; (k\cdot\log 1/\varepsilon_0)^k
\]

12Here is the first place where we use the gadget size.


We now prove that the invariants hold. For invariant (a), clearly after exiting the inner loop $|\mathcal{S}^k| < 2^{0.5k\log m}$ for all $k$. Before the inner loop runs we add at most one element to at most one set $\mathcal{S}^k$, and thus for that set $|\mathcal{S}^k| < 2^{0.5k\log m} + 1$, or in other words $|\mathcal{S}^k| \leq 2^{0.5k\log m}$. At the start of each iteration of the inner loop at most one set $\mathcal{S}^k$ has size $2^{0.5k\log m}$, and since we remove at least one element from it and add at most one element to at most one other set, we maintain that invariant.

For invariant (b), assume for contradiction that $|\mathcal{F}^t| < 2^{0.5s\log m}$. Recall that $X$ has blockwise min-entropy at least $0.95\log m$, meaning that every set $S$ over $[m]^N$ covers at most $2^{-0.95|S|\log m}\cdot|X|$ elements in $X$, and by extension in $X(s)$. In particular this applies to every set $\gamma_j \in \mathcal{F}^t$ as well as every set $S \in \mathcal{S}^k$. Lastly, by assumption $|\mathcal{F}^t| < 2^{0.5s\log m}$, and likewise by invariant (a) we know that $|\mathcal{S}^k| < 2^{0.5k\log m}$ for every $k$. Therefore, since $m > d^7$,$^{13}$
\begin{align*}
|X(s)| &\leq |\mathcal{F}^t| \cdot (2^{-0.95s\log m}\cdot|X|) + \sum_{k=1}^{s-1} |\mathcal{S}^k|\cdot(2^{-0.95k\log m}\cdot|X|) \\
&< 2^{0.5s\log m}\cdot 2^{-0.95s\log m}\cdot|X| + \sum_{k=1}^{s-1} 2^{0.5k\log m}\cdot 2^{-0.95k\log m}\cdot|X| \\
&= \Big(\sum_{k=1}^{s} 2^{-0.45k\log m}\Big)\cdot|X| \\
&\leq (s\cdot 2^{-0.45\log m})\cdot|X| \;\leq\; (s\cdot d^{-3})\cdot|X| \;=\; \tfrac{1}{\omega(d)}|X|
\end{align*}

which contradicts our choice of $s$.

For invariant (c), we first note the following simple observation about sunflowers.

Fact 7.3. Let $\mathcal{F}$ and $\mathcal{H}$ be any two set systems such that $\mathcal{H} \subseteq \mathcal{F}$, let $\varepsilon, \varepsilon' > 0$ be such that $\varepsilon \leq \varepsilon'$, and let $S$ be any set. Then if $\mathcal{H}_S$ is an $\varepsilon$-robust sunflower, $\mathcal{F}_S$ is also an $\varepsilon'$-robust sunflower.

Consider $S \in \mathcal{S}^k$. Clearly $|S| = k$ by construction, and so we show that $\mathcal{F}(s)_S$ is an $\varepsilon_t$-robust sunflower. We consider only the value of $t$ when $S$ was added to $\mathcal{S}^k$, as $\varepsilon_t$ only grows, and we do this by induction on $t$. First observe that for any $t$, if $S$ was added to $\mathcal{S}^k$ in step 1 then the claim follows immediately since $\mathcal{F}^t \subseteq \mathcal{F}(s)$. This establishes the base case, since at $t = 0$ we are at the start of the procedure, and so we consider $t > 0$. We show this with induction on $k$ in reverse order from $s-1$ to $0$. If $k = s-1$, since there is no $\mathcal{S}^{k'}$ for $k' > s-1$ it must have been added in step 1, and so again the claim follows immediately. Thus we consider $k < s-1$ and assume $S$ was added in step 4b.

Let $k' > k$ be such that $\mathcal{S}^{k'}_S$ was the sunflower discovered in step 4b which made us add $S$ to $\mathcal{S}^k$. We claim that $\mathcal{F}(s)_S$ is an $(\varepsilon_{t-1} + \varepsilon_0)$-robust sunflower, which completes the claim since $\varepsilon_t = \varepsilon_{t-1} + \varepsilon_0$. Consider the probability that a random set $\mathbf{y} \subseteq [mN] - S$ doesn't contain any set in $\mathcal{F}(s)_S$. For this to happen, for every set $S' \in \mathcal{S}^{k'}_S$ either $\mathbf{y}$ contains no sets in $\mathcal{F}(s)_{S'}$ or it does not contain $S'$ itself. If there is some $S'$ such that $S' \subseteq \mathbf{y}$, then by the inductive hypothesis on $t$ and $k$ we know that $\mathcal{F}^{t'}_{S'}$ is an $\varepsilon_{t'}$-robust sunflower, where $t' \leq t-1$ was the value of $t$ when $S'$ was added to $\mathcal{S}^{k'}$. Since $\varepsilon_{t'} \leq \varepsilon_{t-1}$ and $\mathcal{F}^{t'} \subseteq \mathcal{F}(s)$, by extension $\mathbf{y}$ avoids every set in $\mathcal{F}(s)_{S'}$ with probability at most $\varepsilon_{t-1}$. In the other case where no such $S'$ exists, then because $\mathcal{S}^{k'}_S$ is an $\varepsilon_0$-robust sunflower, $\mathbf{y}$ avoids every set $S' \in \mathcal{S}^{k'}_S$ with probability at most $\varepsilon_0$. Taking a union bound over these two events gives us our claim.

13Here is the second place where we use the gadget size.


Finally, for invariant (d), we claim that $t \leq 2^{s^2\log m} - 1$ when the process ends. Putting this fact together with $\varepsilon_0 := 2^{-\Omega(d\log m) - s^2\log m}$ and $\varepsilon_t \leq \varepsilon_{t-1} + 2^{-\Omega(d\log m) - s^2\log m}$ for all $t$ gives us
\[
\varepsilon_t \;\leq\; \varepsilon_0 + t\cdot\varepsilon_0 \;\leq\; 2^{s^2\log m}\cdot 2^{-\Omega(d\log m) - s^2\log m} \;=\; 2^{-\Omega(d\log m)}
\]
We associate each tuple $\mathcal{S} := (\mathcal{S}^k)_{k=1\ldots s-1}$ with the string $\tau(\mathcal{S}) = |\mathcal{S}^1|\#|\mathcal{S}^2|\#\ldots\#|\mathcal{S}^{s-1}|$. We claim that for every $t$ there is a unique string $\tau_t$ corresponding to $\tau(\mathcal{S})$ at the time $t$ was incremented. This is simply because in every round of the outer loop we increase the size of at least one set $\mathcal{S}^k$, and in every round of the inner loop in which we cause some $\mathcal{S}^k$ to shrink, we also cause some set $\mathcal{S}^{k'}$ to grow where $k' < k$. By invariant (a) and the inner loop condition, $|\mathcal{S}^k| \leq 2^{0.5k\log m}$ for every $k$ whenever we updated $t$, and so as long as $|\mathcal{S}^0| = 0$ (in other words, for all $t$ except the very last one) we have
\[
t \;\leq\; \prod_{k=1}^{s-1} 2^{0.5k\log m} \;=\; (2^{0.5\log m})^{\sum_{k=1}^{s-1} k} \;<\; 2^{s^2\log m} - 2
\]
and so at the end of the procedure $t \leq 2^{s^2\log m} - 1$.

7.2 Better gadget size

By balancing the parameters in Lemma 7.2 we can prove our scaling lifting theorem.

Lemma 7.4. Let $N \leq n$, let $d = o(n)$, and let $m > d^{5+\varepsilon}$ for any constant $\varepsilon > 0$. Let $X$ and $\delta$ be such that $X$ has blockwise min-entropy $(1-\delta)\log m$, and let $\mathcal{F} = \{\gamma_j\}_j$ be a block-respecting set system over $[m]^N$ such that 1) for all $x \in X$ there exists a $\gamma_j \in \mathcal{F}$ consistent with $x$, and 2) $|\gamma_j| \leq O(d)$ for all $j$. Then
\[
\Pr_{\mathbf{y} \subseteq [mN]}(\forall j : \gamma_j \not\subseteq \mathbf{y}) \;<\; 2^{-\Omega(d\log m)}
\]

Proof. As usual we set our blockwise min-entropy threshold at $(1-\delta)\log m$, and furthermore we set our cutoff for the size of the sets $\mathcal{S}^k$ at $m^{(1-\delta')k}$. There are two conditions that need to be fulfilled: 1) to show we can apply the sunflower lemma to $\mathcal{F}^t$ and $\mathcal{S}^k$ we need $m^{1-\delta'} = \Omega(d^3\log m)$; 2) to bound $|X(s)|$ we need $m^{\delta-\delta'} = o(d^{-2})$. Clearly our first step is to set $\delta$ to be smaller than an arbitrarily small constant $\varepsilon'$ (recall that this does not affect the asymptotic strength of our lifting theorem), and for simplicity we replace $\log m$ with $d^{\varepsilon'}$ in the first condition; we thus get $m^{1-\delta'} = d^{3+\varepsilon'}$ and $m^{\delta'-\varepsilon'} = d^2$. Setting $\delta' = 0.4$ gives us $m = d^{5+\varepsilon}$ for some constant $\varepsilon = O(\varepsilon')$, and since we make $\varepsilon'$ arbitrarily small we can do the same for $\varepsilon$.

8 Open problems

Towards poly(log) gadget size. As discussed in the paper, one of the core issues in improving gadget size with current techniques is to prove the extractor- or disperser-like analogues of Lemma 4.1 for small gadget sizes. To this end, we pose the following concrete conjecture:

Conjecture 1. There exists a constant $c$ such that for all large enough $m$ the following holds. Let $X, Y$ be distributions on $[m]^N$, $(\{0,1\}^m)^N$ with entropy deficiency at most $\Delta$ each. Then $\mathsf{Ind}^N_m(X,Y)$ contains a subcube of co-dimension at most $c\Delta$. That is, there exists $I \subseteq [N]$, $|I| \leq c\Delta$, and $\alpha \in \{0,1\}^I$ such that for all $z \in \{0,1\}^N$ with $z_I = \alpha$, we have
\[
\Pr_{X,Y}[\mathsf{Ind}^N_m(X,Y) = z] > 0.
\]


Proving the above statement seems necessary for obtaining better lifting theorems with current techniques. Further, while there are other obstacles to be overcome, proving the conjecture for smaller gadget sizes would be a significant step toward improving gadget size (e.g., at least in the non-deterministic setting as considered in [GLM+16]). Our work proves the conjecture where $m = O(N\log N)$, whereas previous techniques needed $m \geq N^2$. The robust-sunflower theorem of [ALWZ20] can be seen as proving a related statement: for gadget size $m = \mathrm{poly}(\log N)$, if $X$ has deficiency at most $\Delta$ and $Y$ is the $p$-biased distribution, then we get the stronger guarantee that for some $I \subseteq [N]$, $|I| = O(\Delta)$, $\alpha \in \{0,1\}^I$ we have that for all $z$ with $z_I = \alpha_I$, $\Pr_{Y}[\exists x \in X,\ \mathsf{Ind}^N_m(x, Y) = z] \approx 1$. We believe that these arguments could be useful in proving the above conjecture when the gadget size is $m = \mathrm{poly}(\log N)$.

Acknowledgements

The authors thank Paul Beame for comments.

A A Rosetta Stone for lifting and sunflower terminology

In this appendix we draw relations between concepts in lifting theorems and in sunflower lemmas.

Spreadness and min-entropy. One way of understanding the jump in [GPW17] from single coordinates to blocks of coordinates is as a movement to a "higher moment method", similar to the one used in [ALWZ20] to improve the parameters in the classic Sunflower Lemma. For this we will be focusing on universes $U$ split into $N$ blocks. A set system $\mathcal{F}$ over $U$ is $r$-spread if $|\mathcal{F}_S| \leq |\mathcal{F}|/r^{|S|}$ for every $\emptyset \neq S \subseteq U$. Recall that the blockwise min-entropy of a set system $\mathcal{F}$ over $U$ is $\min_{\emptyset\neq I\subseteq[N]} \frac{1}{|I|} H_\infty(\mathcal{F}_I)$. We will draw the following equivalence:
\[
\mathcal{F} \text{ is } r\text{-spread} \;\Leftrightarrow\; \mathcal{F} \text{ has blockwise min-entropy } \log r
\]

⇒: consider a set $I \subseteq [N]$ and a block-respecting subset $S$ containing elements exactly from the blocks $I$. Since $|\mathcal{F}_S| \leq |\mathcal{F}|/r^{|S|}$ for every $\emptyset \neq S \subseteq U$, it follows that $\Pr_{\gamma\sim\mathcal{F}}[S \subseteq \gamma] \leq r^{-|S|}$. Applying this to every such $S$ gives us $\frac{1}{|I|}H_\infty(\mathcal{F}_I) \geq \log r$, and taking the minimum over all $I$ completes the proof.

⇐: for every set $\emptyset \neq S \subseteq U$ we have two cases: either $S$ is also block-respecting or it is not. In the former case, by blockwise min-entropy we have $|\mathcal{F}_S| \leq |\mathcal{F}| \cdot 2^{-|S|\log r}$. In the latter case, note that $|\mathcal{F}_S| = 0$, since every set $\gamma \in \mathcal{F}$ is block-respecting and therefore $S$ cannot be a subset of $\gamma$; in other words the block-respecting nature of $\mathcal{F}$ is irrelevant, as claimed in Section 2. Either way $S$ meets the spreadness condition.
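As a small numeric sanity check of this correspondence, here is an illustrative sketch (ours, not the paper's) for the special case where every set in the system is a full block-respecting set, i.e. an element $x \in [m]^N$ viewed as the set $\{(i, x_i) : i \in [N]\}$; in this case the two quantities coincide exactly.

```python
# Toy check: log2(spreadness) equals blockwise min-entropy for families of
# full block-respecting sets (tuples in [m]^N). Names are illustrative.
import itertools, math

def blockwise_min_entropy(X, N):
    """min over nonempty I of (1/|I|) * H_inf(X_I), for X a list of tuples in [m]^N."""
    best = float("inf")
    for r in range(1, N + 1):
        for I in itertools.combinations(range(N), r):
            counts = {}
            for x in X:
                key = tuple(x[i] for i in I)
                counts[key] = counts.get(key, 0) + 1
            p_max = max(counts.values()) / len(X)
            best = min(best, -math.log2(p_max) / r)
    return best

def spreadness(X, N):
    """Largest r with |X_S| <= |X| / r^{|S|} for every nonempty block-respecting S."""
    best = float("inf")
    for r in range(1, N + 1):
        for I in itertools.combinations(range(N), r):
            counts = {}
            for x in X:
                key = tuple(x[i] for i in I)
                counts[key] = counts.get(key, 0) + 1
            c_max = max(counts.values())              # largest link |X_S| on blocks I
            best = min(best, (len(X) / c_max) ** (1.0 / r))
    return best

if __name__ == "__main__":
    m, N = 4, 3
    X = [x for x in itertools.product(range(m), repeat=N) if sum(x) % 2 == 0]
    print(math.log2(spreadness(X, N)), blockwise_min_entropy(X, N))  # equal values
```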

As a last note, we derived Blockwise Robust Sunflower Lemma from Lemma 4 of [Rao19] using this equivalence. In that lemma there is an additional condition that $|\mathcal{F}| \geq r^N$; if the blockwise min-entropy of $\mathcal{F}$ is $\log r$ then this follows by averaging, just considering all sets $S$ of size $N$.

Disperser property and full range. In [GPW17] there was an improvement to Lemma 4.5 which showed that $\mathsf{Ind}^N_m(\mathbf{x},\mathbf{y})$ is multiplicatively close to uniform given blockwise min-entropy and largeness. In combinatorics, this uniformity is what is often called an extractor property. By contrast, a disperser property is one that only ensures that $\mathsf{Ind}^N_m(X,Y)$ has full range, similar to our key Full Range Lemma. While this coarseness means we cannot achieve a lifting theorem for BPP, it was the key to applying our sunflower techniques, and ultimately necessary for going to a quasilinear-size gadget.

Covers and the rectangle partition. For two set systems $\mathcal{F}$ and $X$ over $U$, $\mathcal{F}$ is a cover of $X$ if for every $x \in X$ there exists a $\gamma \in \mathcal{F}$ such that $\gamma \subseteq x$. Furthermore $\mathcal{F}$ is an $r$-tight cover if for every $\gamma \in \mathcal{F}$, $|X_\gamma| > r^{-|\gamma|}|X|$. Going by our translation from spreadness to blockwise min-entropy, it is clear that Rectangle Partition is designed to find a tight cover of $X$, since $\gamma$ corresponds to an assignment that is too likely in $X$, and in each resulting part $X^j$ we want to ensure that blockwise min-entropy is restored. There is a bit more work involved with turning a tight cover into a rectangle partition, but the principle is exactly the same.
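For concreteness, a minimal check of the cover and $r$-tight cover conditions (our own illustrative code; all sets are represented as frozensets over the universe):

```python
# Minimal checks of the cover / r-tight cover definitions above.
def is_cover(F, X):
    """Every x in X contains some gamma in F."""
    return all(any(g <= x for g in F) for x in X)

def is_tight_cover(F, X, r):
    """Additionally, each gamma covers more than an r^{-|gamma|} fraction of X."""
    if not is_cover(F, X):
        return False
    for g in F:
        covered = sum(1 for x in X if g <= x)     # |X_gamma|
        if covered <= (r ** -len(g)) * len(X):
            return False
    return True
```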

Link of $\mathcal{F}$ at $S$. The link of $\mathcal{F}$ at $S$ is $\{\gamma \setminus S : \gamma \in \mathcal{F},\ S \subseteq \gamma\}$. While a comparatively minor point, it is worth pointing out that we use the notation $\mathcal{F}_S$ to refer to the link, which differs slightly from the convention in some sunflower papers. We did so to keep consistency with the rest of our set notation.

References

[ALWZ20] Ryan Alweiss, Shachar Lovett, Kewen Wu, and Jiapeng Zhang. Improved bounds for the sunflower lemma. In Konstantin Makarychev, Yury Makarychev, Madhur Tulsiani, Gautam Kamath, and Julia Chuzhoy, editors, Proceedings of the 52nd Annual ACM SIGACT Symposium on Theory of Computing, STOC 2020, Chicago, IL, USA, June 22-26, 2020, pages 624–630. ACM, 2020. doi:10.1145/3357713.3384234.

[CFK+19] Arkadev Chattopadhyay, Yuval Filmus, Sajin Koroth, Or Meir, and Toniann Pitassi.Query-to-communication lifting using low-discrepancy gadgets. Technical ReportTR19-103, Electronic Colloquium on Computational Complexity (ECCC), 2019. URL:https://eccc.weizmann.ac.il/report/2019/103/.

[CKLM17] Arkadev Chattopadhyay, Michal Koucký, Bruno Loff, and Sagnik Mukhopadhyay. Simulation theorems via pseudorandom properties. CoRR, abs/1704.06807, 2017. URL: http://arxiv.org/abs/1704.06807, arXiv:1704.06807.

[CLRS16] Siu On Chan, James R. Lee, Prasad Raghavendra, and David Steurer. Approximateconstraint satisfaction requires large LP relaxations. J. ACM, 63(4):34:1–34:22, 2016.doi:10.1145/2811255.

[dRMN+19] Susanna de Rezende, Or Meir, Jakob Nordström, Toniann Pitassi, Robert Robere, and Marc Vinyals. Lifting with simple gadgets and applications to circuit and proof complexity. Technical Report TR19-186, Electronic Colloquium on Computational Complexity (ECCC), 2019. URL: https://eccc.weizmann.ac.il/report/2019/186/.

[dRNV16] Susanna de Rezende, Jakob Nordström, and Marc Vinyals. How limited interaction hinders real communication (and what it means for proof and circuit complexity). In Proceedings of the 57th Symposium on Foundations of Computer Science (FOCS), pages 295–304. IEEE, 2016. doi:10.1109/FOCS.2016.40.

[ER60] Paul Erdős and R. Rado. Intersection theorems for systems of sets. Journal of the London Mathematical Society, 35(1):85–90, 1960.


[GGKS18] Ankit Garg, Mika Göös, Pritish Kamath, and Dmitry Sokolov. Monotone circuit lower bounds from resolution. In Proceedings of the 50th Symposium on Theory of Computing (STOC), pages 902–911. ACM, 2018. doi:10.1145/3188745.3188838.

[GJW18] Mika Göös, Rahul Jain, and Thomas Watson. Extension complexity of independent set polytopes. SIAM J. Comput., 47(1):241–269, 2018. doi:10.1137/16M109884X.

[GKMP20] Mika Göös, Sajin Koroth, Ian Mertz, and Toniann Pitassi. Automating cutting planes is NP-hard. In Proceedings of the 52nd Symposium on Theory of Computing (STOC), 2020.

[GLM+16] Mika Göös, Shachar Lovett, Raghu Meka, Thomas Watson, and David Zuckerman. Rectangles are nonnegative juntas. SIAM J. Comput., 45(5):1835–1869, 2016. doi:10.1137/15M103145X.

[Goo15] Mika Göös. Lower bounds for clique vs. independent set. In Venkatesan Guruswami, editor, IEEE 56th Annual Symposium on Foundations of Computer Science, FOCS 2015, Berkeley, CA, USA, 17-20 October, 2015, pages 1066–1076. IEEE Computer Society, 2015. doi:10.1109/FOCS.2015.69.

[GP18] Mika Göös and Toniann Pitassi. Communication lower bounds via critical block sensitivity. SIAM Journal on Computing, 47(5):1778–1806, 2018. doi:10.1137/16M1082007.

[GPW15] Mika Göös, Toniann Pitassi, and Thomas Watson. Deterministic communication vs. partition number. In Proceedings of the 56th Symposium on Foundations of Computer Science (FOCS), pages 1077–1088. IEEE, 2015. doi:10.1109/FOCS.2015.70.

[GPW17] Mika Göös, Toniann Pitassi, and Thomas Watson. Query-to-communication lifting for BPP. In Proceedings of the 58th Symposium on Foundations of Computer Science (FOCS), pages 132–143, 2017. doi:10.1109/FOCS.2017.21.

[HN12] Trinh Huynh and Jakob Nordström. On the virtue of succinct proofs: Amplifying communication complexity hardness to time–space trade-offs in proof complexity. In Proceedings of the 44th Symposium on Theory of Computing (STOC), pages 233–248. ACM, 2012. doi:10.1145/2213977.2214000.

[HR00] Danny Harnik and Ran Raz. Higher lower bounds on monotone size. In F. FrancesYao and Eugene M. Luks, editors, Proceedings of the Thirty-Second Annual ACMSymposium on Theory of Computing, May 21-23, 2000, Portland, OR, USA, pages378–387. ACM, 2000. doi:10.1145/335305.335349.

[Juk12] Stasys Jukna. Boolean function complexity: advances and frontiers, volume 27. SpringerScience & Business Media, 2012.

[KMR17] Pravesh K. Kothari, Raghu Meka, and Prasad Raghavendra. Approximating rectanglesby juntas and weakly-exponential lower bounds for LP relaxations of csps. In HamedHatami, Pierre McKenzie, and Valerie King, editors, Proceedings of the 49th AnnualACM SIGACT Symposium on Theory of Computing, STOC 2017, Montreal, QC,Canada, June 19-23, 2017, pages 590–603. ACM, 2017. doi:10.1145/3055399.3055438.

[Kra98] Jan Krajíček. Interpolation by a game. Mathematical Logic Quarterly, 44:450–458, 1998. doi:10.1002/malq.19980440403.


[Kus97] Eyal Kushilevitz. Communication complexity. In Advances in Computers, volume 44,pages 331–360. Elsevier, 1997.

[LLZ18] Xin Li, Shachar Lovett, and Jiapeng Zhang. Sunflowers and quasi-sunflowers from randomness extractors. In Eric Blais, Klaus Jansen, José D. P. Rolim, and David Steurer, editors, Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques, APPROX/RANDOM 2018, August 20-22, 2018 - Princeton, NJ, USA, volume 116 of LIPIcs, pages 51:1–51:13. Schloss Dagstuhl - Leibniz-Zentrum für Informatik, 2018. doi:10.4230/LIPIcs.APPROX-RANDOM.2018.51.

[LRS15] James R. Lee, Prasad Raghavendra, and David Steurer. Lower bounds on the size ofsemidefinite programming relaxations. In Rocco A. Servedio and Ronitt Rubinfeld,editors, Proceedings of the Forty-Seventh Annual ACM on Symposium on Theory ofComputing, STOC 2015, Portland, OR, USA, June 14-17, 2015, pages 567–576. ACM,2015. doi:10.1145/2746539.2746599.

[PR17] Toniann Pitassi and Robert Robere. Strongly exponential lower bounds for mono-tone computation. In Hamed Hatami, Pierre McKenzie, and Valerie King, editors,Proceedings of the 49th Annual ACM SIGACT Symposium on Theory of Computing,STOC 2017, Montreal, QC, Canada, June 19-23, 2017, pages 1246–1255. ACM, 2017.doi:10.1145/3055399.3055478.

[PR18] Toniann Pitassi and Robert Robere. Lifting nullstellensatz to monotone span programsover any field. In Ilias Diakonikolas, David Kempe, and Monika Henzinger, editors,Proceedings of the 50th Annual ACM SIGACT Symposium on Theory of Computing,STOC 2018, Los Angeles, CA, USA, June 25-29, 2018, pages 1207–1219. ACM, 2018.doi:10.1145/3188745.3188914.

[Pud10] Pavel Pudlák. On extracting computations from propositional proofs (a survey). Leibniz International Proceedings in Informatics, LIPIcs, 8:30–41, 2010.

[Rao19] Anup Rao. Coding for sunflowers. CoRR, abs/1909.04774, 2019.

[Raz95] A. A. Razborov. Unprovability of lower bounds on circuit size in certain fragments of bounded arithmetic. Izvestiya: Mathematics, 59(1):205–227, February 1995.

[RM99] Ran Raz and Pierre McKenzie. Separation of the monotone NC hierarchy. Combina-torica, 19(3):403–435, 1999. doi:10.1007/s004930050062.

[Ros10] Benjamin Rossman. Approximate sunflowers. Manuscript, 2010.

[RPRC16] Robert Robere, Toniann Pitassi, Benjamin Rossman, and Stephen A. Cook. Exponentiallower bounds for monotone span programs. In Irit Dinur, editor, IEEE 57th AnnualSymposium on Foundations of Computer Science, FOCS 2016, 9-11 October 2016,Hyatt Regency, New Brunswick, New Jersey, USA, pages 406–415. IEEE ComputerSociety, 2016. doi:10.1109/FOCS.2016.51.

[She11] Alexander A. Sherstov. The pattern matrix method. SIAM J. Comput., 40(6):1969–2000, 2011. doi:10.1137/080733644.

[She14] Alexander A Sherstov. Communication lower bounds using directional derivatives.Journal of the ACM (JACM), 61(6):1–71, 2014.


[Sok17] Dmitry Sokolov. Dag-like communication and its applications. In Proceedings of the 12th Computer Science Symposium in Russia (CSR), pages 294–307. Springer, 2017. doi:10.1007/978-3-319-58747-9_26.

[WYY17] Xiaodi Wu, Penghui Yao, and Henry S. Yuen. Raz–McKenzie simulation with the inner product gadget. Electronic Colloquium on Computational Complexity (ECCC), 24:10, 2017. URL: https://eccc.weizmann.ac.il/report/2017/010.
