Hardness Amplification within NP against Deterministic Algorithms
Parikshit Gopalan
U Washington & MSR-SVC
Venkatesan Guruswami
U Washington & IAS
Why Hardness Amplification?
• Goal: Show there are hard problems in NP. Lower bounds out of reach.
• Cryptography and derandomization require average-case hardness.
• Revised Goal: Relate various kinds of hardness assumptions.
• Hardness Amplification: Start with mild hardness, amplify.
Hardness Amplification
Generic Amplification Theorem:
If there are problems in class A that are mildly hard for algorithms in Z, then there are problems in A that are very hard for Z.
• A: NP, EXP, PSPACE
• Z: P/poly, BPP, P
PSPACE versus P/poly, BPP
Long line of work:
Theorem: If there are problems in PSPACE that are worst-case hard for P/poly (BPP), then there are problems that are ½ + ε hard for P/poly (BPP).
NP versus P/poly [O’Donnell].
Theorem: If there are problems in NP that are 1 - δ hard for P/poly, then there are problems that are ½ + ε hard.
Starts from an average-case assumption. [Healy-Vadhan-Viola]
NP versus BPP [Trevisan’03].
Theorem: If there are problems in NP that are 1 - δ hard for BPP, then there are problems that are ¾ + ε hard.
NP versus BPP [Trevisan’05].
Theorem: If there are problems in NP that are 1 - δ hard for BPP, then there are problems that are ½ + ε hard.
Buresh-Oppenheim-Kabanets-Santhanam: alternate proof via monotone codes. Optimal up to ε.
Our Results: Amplification against P.
Theorem 1: If there is a problem in NP that is 1 - δ hard for P, then there is a problem which is ¾ + ε hard.
Theorem 2: If there is a problem in PSPACE that is 1 - δ hard for P, then there is a problem which is ¾ + ε hard.
Trevisan: 1 - δ hardness to 7/8 + ε for PSPACE. Goldreich-Wigderson: Unconditional hardness for EXP against P.
Here δ = 1/n^100 and ε = 1/(log n)^100.
Outline of This Talk:
1. Amplification via Decoding.
2. Deterministic Local Decoding.
3. Amplification within NP.
Amplification via Decoding [Trevisan, Sudan-Trevisan-Vadhan].
[Diagram: truth table of f (101100) —Encode→ truth table of g (101100101); f: mildly hard, g: wildly hard.]
[Diagram: corrupted word (100110011) —Decode→ 101100; from an approximation to g, recover f.]
Amplification via Decoding.
Case Study: PSPACE versus BPP.
[Diagram: truth table of f (101100) —Encode→ truth table of g (101100101); f: mildly hard, g: wildly hard.]
• f’s table has size 2^n.
• g’s table has size 2^(n^2).
• Encoding in space n^100. (PSPACE)
Amplification via Decoding.
Case Study: PSPACE versus BPP.
[Diagram: corrupted word (100110011) —Decode→ 101100; from an approximation to g, recover f. (BPP)]
• Randomized local decoder.
• List-decoding beyond ¼ error.
Amplification via Decoding.
Case Study: NP versus BPP.
[Diagram: truth table of f (101100) —Encode→ truth table of g (101100101); f: mildly hard, g: wildly hard.]
• g is a monotone function M of f.
• M is computable in NTIME(n^100).
• M needs to be noise-sensitive. (NP)
Amplification via Decoding.
Case Study: NP versus BPP.
[Diagram: corrupted word (100110011) —Decode→ 101000; from an approximation to g, recover an approximation to f. (BPP)]
• Randomized local decoder.
• Monotone codes are bad codes.
• Can only approximate f.
Outline of This Talk:
1. Amplification via Decoding.
2. Deterministic Local Decoding.
3. Amplification within NP.
Deterministic Amplification.
[Diagram: corrupted word (100110011) —Decode→ 101100, now in P.]
Deterministic local decoding?
Deterministic Amplification.
[Diagram: corrupted word (100110011) —Decode→ 101100, in P; message length 2^n, codeword length 2^n · n^100.]
• Can force an error on any bit.
• Need near-linear length encoding.
• Monotone codes for NP. (P)
Deterministic local decoding?
Deterministic Local Decoding …
… up to the unique decoding radius.
• Deterministic local decoding up to 1 - δ from ¾ + ε agreement.
• Monotone code construction with similar parameters.
• Main tool: ABNNR codes + GMD decoding. [Guruswami-Indyk, Akavia-Venkatesan]
Open Problem: Go beyond unique decoding.
The ABNNR Construction.
Expander graph:
• 2^n vertices.
• Degree n^100.
The ABNNR Construction.
[Diagram: the bits of f (0, 0, 1, 0, 1) placed on the left vertices of the expander; 2^n vertices, degree n^100.]
The ABNNR Construction.
[Diagram: each right vertex of the expander collects the bits of its left neighbors into one symbol, e.g. (1 0 0), (1 0 1), (0 0 0), (1 0 1), (0 1 0).]
• Start with a binary code with small distance.
• Gives a code of large distance over a large alphabet.
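The collection step above can be sketched in a few lines. This is a toy illustration with hypothetical parameters (a 3-regular bipartite graph on 5+5 vertices, standing in for the 2^n-vertex, degree-n^100 expander of the slides); the function and variable names are mine, not from the talk.

```python
# ABNNR encoding sketch: each right vertex of a bipartite d-regular graph
# outputs the d-tuple of message bits sitting at its left neighbors,
# turning a binary message into a codeword over a larger alphabet.

def abnnr_encode(message, neighbors):
    """message: list of bits, one per left vertex.
    neighbors: neighbors[j] = left-vertex indices adjacent to right vertex j.
    Returns one d-bit tuple (large-alphabet symbol) per right vertex."""
    return [tuple(message[i] for i in nbrs) for nbrs in neighbors]

# Toy 3-regular bipartite graph (an expander in spirit only).
neighbors = [[0, 1, 2], [1, 2, 3], [2, 3, 4], [3, 4, 0], [4, 0, 1]]
codeword = abnnr_encode([0, 0, 1, 0, 1], neighbors)
```

Because each left bit is copied to n^100 right symbols, the expansion of the graph translates a small Hamming distance between messages into a large distance between codewords.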
Concatenated ABNNR Codes.
[Diagram: each right symbol, e.g. (1 0 0), is further encoded by a binary inner code, e.g. (1 1 0 1 0 1).]
Inner code of distance ½:
• Binary code of distance ½.
• [GI]: ¼ error, not local.
• [T]: 1/8 error, local.
Decoding ABNNR Codes.
[Diagram: received word with corrupted inner blocks; each block is decoded back to a right symbol.]
Decode inner codes:
• Works if error < ¼.
• Fails if error > ¼.
Decoding ABNNR Codes.
[Diagram: decoded right symbols vote on the left bits; each left vertex takes a majority over its right neighbors.]
Majority vote on the LHS.
[Trevisan]: Corrects a 1/8 fraction of errors.
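The two-step decoder (inner decode, then plain majority on the left) can be sketched as follows. The instance is a toy one of my own: a degree-2 graph and a repetition-style inner code (a, b) → (a, a, a, b, b, b) of relative distance ½; brute-force nearest-codeword search stands in for the inner decoder.

```python
# Trevisan-style decoding sketch: brute-force each inner block to the nearest
# inner codeword, then let every left vertex take a majority vote over the
# bits its right-hand neighbors claim for it.

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

def decode_abnnr(received_blocks, inner_code, neighbors, n_left):
    """inner_code maps each right-symbol tuple to its binary inner codeword."""
    votes = [[] for _ in range(n_left)]
    for j, block in enumerate(received_blocks):
        # nearest-codeword (unique) decoding of the inner code, by brute force
        symbol = min(inner_code, key=lambda s: hamming(inner_code[s], block))
        for pos, i in enumerate(neighbors[j]):
            votes[i].append(symbol[pos])
    # plain majority vote at each left vertex
    return [1 if 2 * sum(v) > len(v) else 0 for v in votes]

# Toy instance: inner code (a, b) -> (a, a, a, b, b, b), distance 1/2.
inner = {(a, b): (a, a, a, b, b, b) for a in (0, 1) for b in (0, 1)}
nbrs = [[0, 1], [1, 2], [2, 0]]
# Encoding of message 1, 0, 1 with one bit flipped in the first block.
received = [(0, 1, 1, 0, 0, 0), (0, 0, 0, 1, 1, 1), (1, 1, 1, 1, 1, 1)]
recovered = decode_abnnr(received, inner, nbrs, 3)
```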
GMD decoding [Forney’67].
[Diagram: an inner block, e.g. (1 0 0), decoded from a received word (1 1 1 0 0 1), with confidence c ∈ [0, 1].]
If decoding succeeds, the error rate lies in [0, ¼].
• If 0 error, confidence is 1.
• If ¼ error, confidence is 0.
• c = 1 – 4 · (fraction of errors).
Could return the wrong answer with high confidence…
… but this requires error close to ½.
GMD Decoding for ABNNR Codes.
[Diagram: each inner block is decoded to a right symbol with a confidence c1, …, c5.]
GMD decoding: Pick a threshold, erase low-confidence symbols, decode. Non-local.
Our approach: Weighted Majority.
Thm: Corrects a ¼ fraction of errors locally.
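The weighted-majority variant can be sketched by extending the plain decoder: each inner decoding also reports a confidence c = 1 – 4·(fraction of disagreement), following the slides (c = 1 at zero error, c = 0 at the unique-decoding radius ¼), and each left vertex takes a confidence-weighted vote. The toy inner code and graph below are my own illustrative choices, not the talk's parameters.

```python
# GMD-style weighted majority sketch: inner decodings vote with confidence
# c = 1 - 4*err, so barely-decoded blocks count for almost nothing.

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

def gmd_decode(received_blocks, inner_code, neighbors, n_left):
    weight = [0.0] * n_left                       # signed, confidence-weighted tallies
    for j, block in enumerate(received_blocks):
        symbol = min(inner_code, key=lambda s: hamming(inner_code[s], block))
        err = hamming(inner_code[symbol], block) / len(block)
        conf = max(0.0, 1.0 - 4.0 * err)          # c = 1 at no error, c = 0 at 1/4 error
        for pos, i in enumerate(neighbors[j]):
            weight[i] += conf if symbol[pos] == 1 else -conf
    return [1 if w > 0 else 0 for w in weight]

inner = {(a, b): (a, a, a, b, b, b) for a in (0, 1) for b in (0, 1)}
nbrs = [[0, 1], [1, 2], [2, 0]]
# Encoding of message 1, 0, 1 with one bit flipped in the first block.
received = [(0, 1, 1, 0, 0, 0), (0, 0, 0, 1, 1, 1), (1, 1, 1, 1, 1, 1)]
recovered = gmd_decode(received, inner, nbrs, 3)
```

Crucially, each left bit only reads its own neighborhood, so the vote is local, unlike Forney's threshold-and-erase procedure.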
GMD Decoding for ABNNR Codes.
[Diagram: right symbols vote on the left bits with weights c1, …, c5.]
Thm: GMD decoding corrects a ¼ fraction of errors.
Proof Sketch:
1. Globally, good nodes have more confidence than bad nodes.
2. Locally, this holds for most neighborhoods of vertices on the LHS.
Proof similar to the Expander Mixing Lemma.
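For reference, the standard lemma the proof sketch alludes to is the following (stated here in its usual d-regular form as background; the talk's bipartite setting uses an analogous statement):

```latex
% Expander Mixing Lemma. Let G = (V, E) be a d-regular graph on n vertices
% whose adjacency matrix has second-largest eigenvalue \lambda in absolute
% value. Then for all S, T \subseteq V,
\[
  \left|\, e(S, T) \;-\; \frac{d\,|S|\,|T|}{n} \,\right|
  \;\le\; \lambda \sqrt{|S|\,|T|},
\]
% i.e. the number of edges between any two vertex sets is close to its
% expectation in a random d-regular graph.
```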
Outline of This Talk:
1. Amplification via Decoding.
2. Deterministic Local Decoding.
3. Amplification within NP.
• Finding an inner monotone code [BOKS].
• Implementing GMD decoding.
The BOKS construction.
[Diagram: x (101100, length k) —T→ T(x) (101100101, length k^r).]
• T(x): Sample an r-tuple from x, apply the Tribes function.
• If x, y are balanced and δ(x, y) > δ, then δ(T(x), T(y)) ≈ ½.
• If x, y are very close, so are T(x), T(y).
• Decoding: brute force.
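One output bit of such an encoding can be sketched as below. This is a toy illustration only: the seeded sampling, the block width b, and the function names are my own stand-ins, not the actual BOKS parameters, but the Tribes function itself (an OR of ANDs, hence monotone) is as in the construction.

```python
# One (hypothetical) output bit of a BOKS-style monotone encoding:
# sample an r-tuple of coordinates of x and apply Tribes to the sampled bits.
import random

def tribes(bits, b):
    """Monotone Tribes function: OR of ANDs over consecutive width-b blocks."""
    return int(any(all(bits[i:i + b]) for i in range(0, len(bits), b)))

def encode_bit(x, r, b, seed):
    rng = random.Random(seed)                     # seeded sampler (illustrative)
    tup = [rng.randrange(len(x)) for _ in range(r)]
    return tribes([x[i] for i in tup], b)
```

Monotonicity is what keeps the encoded function inside NP, while the OR-of-ANDs structure makes Tribes noise-sensitive enough to spread a small distance between balanced messages to distance ≈ ½.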
GMD Decoding for Monotone Codes.
[Diagram: each inner block is decoded to the closest balanced message, with confidences c1, …, c5.]
• Start with a balanced f, apply concatenated ABNNR.
• Inner decoder returns the closest balanced message.
• Apply GMD decoding.
Thm: Decoder corrects a ¼ fraction of errors approximately.
• Analysis becomes harder.
GMD Decoding for Monotone Codes.
[Diagram: as before, with inner decodings weighted by confidences c1, …, c5.]
• Inner decoder finds the closest balanced message.
• Even with 0 error, the decoder need not return the original message.
• Good nodes have few errors; bad nodes have many.
Thm: Decoder corrects a ¼ fraction of errors approximately.
Beyond Unique Decoding…
[Diagram: received word 100110011.]
Deterministic local list-decoder: a set L of machines such that:
• for any received word,
• every nearby codeword is computed by some M ∈ L.
Is this possible?