“Computational Complexity and Physics” Scott Aaronson (MIT) New Insights Into Computational Intractability Oxford University, October 3, 2013


TRANSCRIPT

Page 1:

“Computational Complexity and Physics”

Scott Aaronson (MIT)
New Insights Into Computational Intractability

Oxford University, October 3, 2013

Page 2:

Title is too broad! So, I’ll just give two “tales from the trenches”: one about BosonSampling, one about black holes.

What you should take away:

1990s: Computational Complexity ← Quantum Physics (Shor & Grover)

Today: Computational Complexity ↔ Quantum Physics

Page 3:

BosonSampling (A.-Arkhipov 2011)

A rudimentary type of quantum computing, involving only non-interacting photons

Classical counterpart: Galton’s Board

Replacing the balls by photons leads to famously counterintuitive phenomena, like the Hong-Ou-Mandel dip.

Page 4:

In general, we consider a network of beamsplitters, with n input “modes” (locations) and m >> n output modes. n identical photons enter, one per input mode. Assume for simplicity they all leave in different modes; there are (m choose n) possibilities.

The beamsplitter network defines a column-orthonormal matrix A ∈ ℂ^{m×n}, such that

Pr[outcome S] = |Per(A_S)|²

where

Per(X) = Σ_{σ∈S_n} ∏_{i=1}^{n} x_{i,σ(i)}

is the matrix permanent, and A_S is the n×n submatrix of A corresponding to S.

For simplicity, I’m ignoring outputs with ≥2 photons per mode

Page 5:

Example: For the Hong-Ou-Mandel experiment (a single 50/50 beamsplitter, n = m = 2),

Pr[1,1 output] = |Per( (1/√2)·[[1, 1], [1, −1]] )|² = |(1/√2)(−1/√2) + (1/√2)(1/√2)|² = 0

In general, an n×n complex permanent is a sum of n! terms, almost all of which cancel. How hard is it to estimate the “tiny residue” left over? Answer: #P-complete, even for constant-factor approximation.

(Contrast with nonnegative permanents!)
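The n! cancelling terms can be seen directly at toy scale. Here is a minimal sketch (function and variable names are mine, not from the talk) that computes the permanent straight from its definition and checks the Hong-Ou-Mandel zero:

```python
from itertools import permutations
import math

def permanent(X):
    """Naive O(n * n!) permanent: sum over permutations s of prod_i X[i][s[i]]."""
    n = len(X)
    return sum(
        math.prod(X[i][s[i]] for i in range(n))
        for s in permutations(range(n))
    )

# Hong-Ou-Mandel: a single 50/50 beamsplitter, A = (1/sqrt(2)) * [[1, 1], [1, -1]]
r = 1 / math.sqrt(2)
A = [[r, r], [r, -r]]

# Probability that the two photons exit in different modes ("coincidence"):
# the two terms r*(-r) and r*r cancel exactly -- the Hong-Ou-Mandel dip.
p_coincidence = abs(permanent(A)) ** 2
print(p_coincidence)  # 0.0
```

Ryser’s formula brings the cost down to O(2^n · n), but that is still exponential, consistent with #P-hardness.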

Page 6:

So, Can We Use Quantum Optics to Solve a #P-Complete Problem?

That sounds way too good to be true…

Explanation: If X is sub-unitary, then |Per(X)|² will usually be exponentially small. So to get a reasonable estimate of |Per(X)|² for a given X, we’d generally need to repeat the optical experiment exponentially many times.

Page 7:

Better idea: Given A ∈ ℂ^{m×n} as input, let BosonSampling be the problem of merely sampling from the same distribution D_A that the beamsplitter network samples from: the one defined by Pr[S] = |Per(A_S)|².

Upshot: Compared to (say) Shor’s factoring algorithm, we get different/stronger evidence that a weaker system can do something classically hard.

Theorem (A.-Arkhipov 2011): Suppose BosonSampling is solvable in classical polynomial time. Then P^#P = BPP^NP.

Better Theorem: Suppose we can sample D_A even approximately in classical polynomial time. Then in BPP^NP, it’s possible to estimate Per(X), with high probability over a Gaussian random matrix X ~ N(0,1)_ℂ^{n×n}.
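For tiny n and m, the distribution D_A can simply be computed outright by enumerating every outcome. A brute-force sketch (the names are mine; restricting to collision-free outputs and renormalizing follows the simplification stated on the earlier slide):

```python
from itertools import combinations, permutations
import numpy as np

rng = np.random.default_rng(0)

def permanent(X):
    """Naive permanent of a square numpy matrix."""
    n = X.shape[0]
    return sum(np.prod([X[i, s[i]] for i in range(n)]) for s in permutations(range(n)))

def boson_sample(A, shots):
    """Sample from D_A by brute force: Pr[S] proportional to |Per(A_S)|^2,
    restricted to collision-free outputs S (one photon per mode) and renormalized."""
    m, n = A.shape
    outcomes = list(combinations(range(m), n))            # all (m choose n) outputs
    probs = np.array([abs(permanent(A[list(S), :])) ** 2 for S in outcomes])
    probs /= probs.sum()
    idx = rng.choice(len(outcomes), size=shots, p=probs)
    return [outcomes[i] for i in idx]

# Column-orthonormal A in C^{m x n}: first n columns of a Haar-ish random unitary
m, n = 6, 2
Z = rng.normal(size=(m, m)) + 1j * rng.normal(size=(m, m))
Q, _ = np.linalg.qr(Z)
A = Q[:, :n]

print(boson_sample(A, shots=5))  # five pairs of distinct output modes
```

Of course the whole point is that this enumeration takes time ~(m choose n)·n·n!, which is exactly what a scalable BosonSampling device would avoid.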

Page 8:

Experiments

# of experiments > # of photons!

Last year, groups in Brisbane, Oxford, Rome, and Vienna reported the first 3-photon BosonSampling experiments, confirming that the amplitudes were given by 3x3 permanents

Page 9:

Goal (in our view): Scale to 10-30 photons. Don’t want to scale much beyond that, both because (1) you probably can’t without fault-tolerance, and (2) a classical computer probably couldn’t even verify the results!

Obvious Challenges for Scaling Up:
- Reliable single-photon sources (optical multiplexing?)
- Minimizing losses
- Getting high probability of n-photon coincidence

Theoretical Challenge: Argue that, even with photon losses and messier initial states, you’re still solving a classically-intractable sampling problem

Page 10:

Recent Criticisms by Gogolin et al. (arXiv:1306.3995)

Suppose you ignore which actual photodetectors light up, and count only the number of times each output configuration occurs. In that case, the BosonSampling distribution D_A is exponentially close to the uniform distribution U.

Response: Why would you ignore which detectors light up?? The output of almost any algorithm is also gobbledygook if you ignore the order of the output bits…

Page 11:

Recent Criticisms by Gogolin et al. (arXiv:1306.3995)

OK, so maybe D_A isn’t close to uniform. Still, the very same arguments we gave for why polynomial-time classical algorithms can’t sample D_A suggest that they can’t even distinguish D_A from U!

Response: That’s why we said to focus on 10-30 photons: a range where a classical computer can verify a BosonSampling device’s output, but the BosonSampling device might be “faster”! (And 10-30 photons is probably the best you can do anyway, without quantum fault-tolerance.)

Page 12:

More Decisive Responses (A.-Arkhipov, arXiv:1309.7460)

Theorem: Let A ∈ ℂ^{m×n} be a Haar-random BosonSampling matrix, where m ≥ n^{5.1}/δ. Then with probability 1−O(δ) over A, the BosonSampling distribution D_A has Ω(1) variation distance from the uniform distribution U.

[Figure: histograms of the (normalized) outcome probabilities under D_A and under U.]

This is necessary, though not sufficient, for approximately sampling D_A to be hard.

Page 13:

Theorem (A. 2013): Let A ∈ ℂ^{m×n} be Haar-random, where m >> n. Then there is a classical polynomial-time algorithm C(A) that distinguishes D_A from U (with high probability over A and constant bias, and using only O(1) samples).

Strategy: Let A_S be the n×n submatrix of A corresponding to output S. Let P be the product of squared 2-norms of A_S’s rows. If P > E[P], then guess S was drawn from D_A; otherwise guess S was drawn from U.

P = ∏_{i=1}^{n} ‖v_i‖², where v_1, …, v_n ∈ ℂ^n are the rows of A_S

[Figure: distribution of P under the uniform distribution (a lognormal random variable) vs. under a BosonSampling distribution.]
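The row-norm discriminator can be checked end-to-end at toy scale, where enumerating all (m choose n) outcomes is feasible. A sketch (names mine; D_A is again restricted to collision-free outputs as on the earlier slide):

```python
from itertools import combinations, permutations
import numpy as np

rng = np.random.default_rng(1)

def permanent(X):
    """Naive permanent of a square numpy matrix."""
    n = X.shape[0]
    return sum(np.prod([X[i, s[i]] for i in range(n)]) for s in permutations(range(n)))

def row_norm_stat(A_S):
    """P = product of the squared 2-norms of the rows of A_S."""
    return float(np.prod(np.sum(np.abs(A_S) ** 2, axis=1)))

# Haar-ish column-orthonormal A in C^{m x n}
m, n = 30, 3
Z = rng.normal(size=(m, m)) + 1j * rng.normal(size=(m, m))
Q, _ = np.linalg.qr(Z)
A = Q[:, :n]

# Enumerate every collision-free outcome S, its statistic P, and D_A itself
outcomes = list(combinations(range(m), n))
P_vals = np.array([row_norm_stat(A[list(S), :]) for S in outcomes])
DA = np.array([abs(permanent(A[list(S), :])) ** 2 for S in outcomes])
DA /= DA.sum()

mean_P_U = P_vals.mean()         # E[P] under the uniform distribution U
mean_P_DA = (DA * P_vals).sum()  # E[P] under D_A

# D_A favors outcomes whose submatrix rows have large norms, so the rule
# "guess D_A iff P > E_U[P]" distinguishes with constant bias.
print(mean_P_DA > mean_P_U)      # expected: True, with high probability over A
```

The design point matches the theorem: the statistic P ignores the permanent’s hard interference structure entirely and looks only at row norms, which is why it is computable in classical polynomial time.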

Page 14:

Using Quantum Optics to Prove that the Permanent is #P-Complete

[A., Proc. Roy. Soc. 2011]

Valiant showed that the permanent is #P-complete—but his proof required strange, custom-made gadgets

We gave a new, arguably more transparent proof by combining three facts:
(1) n-photon amplitudes correspond to n×n permanents
(2) Postselected quantum optics can simulate universal quantum computation [Knill-Laflamme-Milburn 2001]
(3) Quantum computations can encode #P-complete quantities in their amplitudes

Page 15:

Black Holes and Computational Complexity??

YES!! Amazing connection made this year by Harlow & Hayden. But first, we need to review 40 years of black hole history.

[Figure: inclusion diagram of the complexity classes BPP, BQP, SZK, QSZK, AM, and QAM.]

Page 16:

Bekenstein 1972, Hawking 1975: Black holes have entropy and temperature! They emit radiation

The Information Loss Problem: Calculations suggest that Hawking radiation is thermal—uncorrelated with whatever fell in. So, is infalling information lost forever? Would violate the unitarity / reversibility of QM

OK then, assume the information somehow gets out!

The Xeroxing Problem: How could the same qubit |ψ⟩ fall inexorably toward the singularity, and also emerge in the Hawking radiation? Would violate the No-Cloning Theorem.

Black Hole Complementarity (Susskind, ‘t Hooft): An external observer can describe everything unitarily without including the interior at all! Interior should be seen as “just a scrambled re-encoding” of the exterior degrees of freedom

Page 17:

The Firewall Paradox (AMPS 2012)

[Figure: B = interior of an “old” black hole; H = just-emitted Hawking radiation; R = faraway Hawking radiation. B and H are near-maximally entangled, and H and R are also near-maximally entangled.]

Violates “monogamy of entanglement”! The same qubit can’t be maximally entangled with 2 things.

Page 18:

Harlow-Hayden 2013 (arXiv:1301.4504): Striking argument that Alice’s decoding task would require exponential time. Complexity theory to the rescue of quantum field theory??

The HH decoding problem: Given an n-qubit pure state |ψ⟩_{BHR} produced by a known, poly-size quantum circuit. Promised that, by acting only on R (the “Hawking radiation”), it’s possible to distill an EPR pair (|00⟩+|11⟩)/√2 between R and B (the “black hole interior”). Distill such a pair, by applying a unitary transformation U_R to the qubits in R.

Theorem (HH): This decoding problem is QSZK-hard.

So presumably intractable, unless QSZK=BQP (which we have oracle evidence against)!

Page 19:

Improvement (A. 2013): Suppose there are poly-size quantum circuits for the HH decoding problem. Then any OWF f: {0,1}^n → {0,1}^{p(n)} can be inverted in quantum polynomial time.

Proof: Assume for simplicity that f is injective. Consider

|ψ⟩_{BHR} = (1/√(2^{n+1})) Σ_{x∈{0,1}^n} ( |0⟩_B |x⟩_H |f(x), 0^n, 0⟩_R + |1⟩_B |x⟩_H |0^{p(n)}, x, 1⟩_R )

Suppose applying U_R to R decodes an EPR pair between R and B. Then for some states |ξ_x⟩, |η_x⟩, we must have

U_R |f(x), 0^n, 0⟩ = |ξ_x⟩|0⟩,   U_R |0^{p(n)}, x, 1⟩ = |η_x⟩|1⟩

Furthermore, to get perfect entanglement, we need |ξ_x⟩ = |η_x⟩ for all x! So from U_R, we can get unitaries V, W such that

V |x, 0^{p(n)}⟩ = |ξ_x⟩ = W |f(x), 0^n⟩

and hence V^{-1} W |f(x), 0^n⟩ = |x, 0^{p(n)}⟩, which inverts f.

Generalizing to arbitrary OWFs requires techniques similar to those used in the HILL theorem!

Page 20:

Other Exciting Recent Complexity/Physics Connections

Quantum PCP Conjecture. “Is approximate quantum MAX-k-SAT hard for quantum NP?” If so, it will require the construction of exotic condensed-matter systems exhibiting room-temperature entanglement! Beautiful recent survey by Aharonov et al. (arXiv:1309.7495)

Using Entanglement to Steer Quantum Systems. Can use Bell inequality violation for guaranteed randomness expansion (Vazirani-Vidick), blind and authenticated QC with a classical polynomial-time verifier, QMIP=MIP* (Reichardt-Unger-Vazirani)…

I hope for, and expect, many more such striking connections!

Indeed, making these connections possible might be the most important result of quantum computing research—even if useful QCs eventually get built!