
American Scientist, January–February 2008, www.americanscientist.org
© 2008 Brian Hayes. Reproduction with permission only. Contact [email protected].

Accidental Algorithms

Brian Hayes

Why are some computational problems so hard and others easy? This may sound like a childish, whining question, to be dismissed with a shrug or a wisecrack, but if you dress it up in the fancy jargon of computational complexity theory, it becomes quite a serious and grownup question: Is P equal to NP? An answer, accompanied by a proof, will get you a million bucks from the Clay Mathematics Institute.

I'll return in a moment to P and NP, but first an example, which offers a glimpse of the mystery lurking beneath the surface of hard and easy problems. Consider a mathematical graph, a collection of vertices (represented by dots) and edges (lines that connect the dots). Here's a nicely symmetrical example:

Is it possible to construct a path that traverses each edge exactly once and returns to the starting point? For any graph with a finite number of edges, we could answer such a question by brute force: Simply list all possible paths and check to see whether any of them meet the stated conditions. But there's a better way. In 1736 Leonhard Euler proved that the desired path (now called an Eulerian circuit) exists if and only if every vertex is the endpoint of an even number of edges. We can check whether a graph has this property without any laborious enumeration of pathways.
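Euler's criterion is easy to turn into a fast test. A minimal sketch, assuming the graph is given as a list of edges and is already known to be connected:

```python
from collections import defaultdict

def has_eulerian_circuit(edges):
    """Return True if every vertex has even degree (Euler's criterion).

    Assumes the graph is connected; edges is a list of (u, v) pairs.
    """
    degree = defaultdict(int)
    for u, v in edges:
        degree[u] += 1
        degree[v] += 1
    return all(d % 2 == 0 for d in degree.values())
```

A square (a 4-cycle) passes the test; a simple two-edge path does not.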

Now take the same graph and ask a slightly different question: Is there a circuit that passes through every vertex exactly once? This problem was posed in 1858 by William Rowan Hamilton, and the path is called a Hamiltonian circuit. Again we can get the answer by brute force. But in this case there is no trick like Euler's; no one knows any method that gives the correct answer for all graphs and does so substantially quicker than exhaustive search. Superficially, the two problems look almost identical, but Hamilton's version is far harder. Why? Is it because no shortcut solution exists, or have we not yet been clever enough to find one?

Most computer scientists and mathematicians believe that Hamilton's problem really is harder, and no shortcut algorithm will ever be found, but that's just a conjecture, supported by experience and intuition but not by proof. Contrarians argue that we've hardly begun to explore the space of all possible algorithms, and new problem-solving techniques could turn up at any time. Before 1736, the Eulerian-circuit problem also looked hard.

What prompts me to write on this theme is a new and wholly unexpected family of algorithms that provide efficient methods for several problems that previously had only brute-force solutions. The algorithms were invented by Leslie G. Valiant of Harvard University, with extensive further contributions by Jin-Yi Cai of the University of Wisconsin. Valiant named the methods "holographic algorithms," but he also refers to them as "accidental algorithms," emphasizing their capricious, rabbit-from-the-hat quality; they seem to pluck answers from a tangle of unlikely coincidences and cancellations. I am reminded of the famous Sidney Harris cartoon in which a long series of equations on a blackboard hinges on the notation "Then a miracle occurs."

The Coffee-Break Criterion

For most of us, the boundary between fast and slow computations is clearly marked: A computation is slow if it's not finished when you come back from a coffee break. Computer science formalizes this definition in terms of polynomial-time and exponential-time algorithms.

Suppose you are running a computer program whose input is a list of n numbers. The program might be sorting the numbers, or finding their greatest common divisor, or generating permutations of them. No matter what the task, the running time of the program will likely depend in some way on n, the length of the list (or, more precisely, on the total number of bits needed to represent the numbers). Perhaps the time needed to process n items grows as n². Thus as n increases from 10 to 20 to 30, the running time rises from 100 to 400 to 900. Now consider a program whose running time is equal to 2ⁿ. In this case, as the size of the input grows from 10 to 20 to 30, the running time leaps from a thousand to a million to a billion. You're going to be drinking a lot of coffee.
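The comparison is easy to reproduce; a two-line sketch of the two growth rates:

```python
# Compare polynomial (n**2) and exponential (2**n) running-time estimates.
for n in (10, 20, 30):
    print(f"n={n:2d}  n^2={n**2:4d}  2^n={2**n:,}")
```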

The function n² is an example of a polynomial; 2ⁿ denotes an exponential. The distinction between these categories of functions marks the great divide of computational complexity theory. Roughly speaking, polynomial algorithms are fast and efficient; exponential algorithms are too slow to bother with. To speak a little less roughly: When n becomes large enough, any

Brian Hayes is Senior Writer for American Scientist. Additional material related to the Computing Science column appears in Hayes's Weblog at http://bit-player.org. Address: 211 Dacian Avenue, Durham, NC 27701. Internet: [email protected]

A strange new family of algorithms probes the boundary between easy and hard problems


polynomial-time program is faster than any exponential-time program.

So much for the classification of algorithms. What about classifying the problems that the algorithms are supposed to solve? For any given problem, there might be many different algorithms, some faster than others. The custom is to rate a problem according to the worst-case performance of the best algorithm. The class known as P includes all problems that have at least one polynomial-time algorithm. The algorithm has to give the right answer and has to run in polynomial time on every instance of the problem.

Classifying problems for which we don't know a polynomial-time algorithm is where it gets tricky. In the first place, there are some problems that require exponential running time for reasons that aren't very interesting. Think about a program to generate all subsets of a set of n items; the computation is easy, but because there are 2ⁿ subsets, just writing down the answer will take an exponential amount of time. To avoid such issues, complexity theory focuses on problems with short answers. Decision problems ask a yes-or-no question (Does the graph have a Hamiltonian circuit?). There are also counting problems (How many Hamiltonian circuits does the graph have?). Problems of these kinds might conceivably have a polynomial-time solution, and we know that some of them do. The big question is whether all of them do. If not, what distinguishes the easy problems from the hard ones?

Conscientious Cheating

The letters NP might well be translated "notorious problems," but the abbreviation actually stands for "nondeterministic polynomial." The term refers to a hypothetical computing machine that can solve problems through systematic guesswork. For the problems in NP, you may or may not be able to compute an answer in polynomial time, but if you happen to guess the answer, or if someone whispers it in your ear, then you can quickly verify its correctness. NP is the complexity class for conscientious cheaters: students who don't do their own homework but who at least check their cribbed answers before they turn them in.

Detecting a Hamiltonian circuit is

one example of a problem in NP. Even though I don't know how to solve the problem efficiently for all graphs, if you show me a purported Hamiltonian circuit, I can readily check whether it passes through every vertex once. (Note that this verification scheme works only when the answer to the decision problem is yes. If you claim that a graph doesn't have a Hamiltonian circuit, the only way to prove it is to enumerate all possible paths.)
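That asymmetry between finding and checking is concrete: verifying a claimed circuit takes only a pass over the path. A sketch, assuming the graph is given as an adjacency dictionary:

```python
def is_hamiltonian_circuit(graph, path):
    """Check that `path` visits every vertex of `graph` exactly once
    and that consecutive vertices (wrapping around) are joined by edges.

    `graph` maps each vertex to the set of its neighbors.
    """
    if len(path) != len(graph) or set(path) != set(graph):
        return False
    cycle = list(path) + [path[0]]  # close the loop back to the start
    return all(b in graph[a] for a, b in zip(cycle, cycle[1:]))
```

On a square, the path 0-1-2-3 is a Hamiltonian circuit; 0-2-1-3 is not, because the square has no diagonal edges.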

Within the class NP dwells the elite group of problems labeled NP-complete. They have an extraordinary property: If any one of these problems has a polynomial-time solution, then that method can be adapted to quickly solve all problems in NP (both the complete ones and the rest). In other words, such an algorithm would establish that P = NP. The two categories would merge.

The very concept of NP-completeness has a whiff of the miraculous about it. How can you possibly be sure that a solution to one problem will work for every other problem in NP as well? After all, you can't even know in advance what all those problems are. The answer is so curious and improbable that it's worth a brief digression.

The first proof of NP-completeness, published in 1971 by Stephen A. Cook of the University of Toronto, concerns a problem called satisfiability. You are given a formula in Boolean logic, constructed from a set of variables, each of which can take on the values true or false, and the logical connectives AND, OR and NOT. The decision problem asks: Is there a way of assigning true and false values to the variables that makes the entire formula true? With n variables there are 2ⁿ possible assignments, so the brute-force approach is exponential and unappealing. But a

[Figure: The perfect-matching problem pairs up the vertices of a mathematical graph. The number of possible matchings grows exponentially with the size of the graph; nevertheless, the matchings can be counted in polynomial time on a planar graph (one without crossed edges). The problem was first studied on graphs with a periodic structure, such as the rectilinear grid at left, but the algorithm also works on less-regular planar graphs, such as the one in the middle. The graph at right is nonplanar, and its perfect matchings cannot be counted quickly.]

[Figure: Polynomial and exponential functions define the poles of computational efficiency. The running time of an algorithm is measured as a function of the size of the input, n. If the function is a polynomial one, such as n or n², the algorithm is considered efficient; an exponential growth rate, such as 2ⁿ, makes the algorithm impractically slow.]


lucky guess is easily verified, so the problem qualifies as a member of NP.
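Both halves of that description, exponential search and fast verification, fit in a few lines. A brute-force sketch, where the formula is supplied as an arbitrary Python predicate over an assignment dictionary:

```python
from itertools import product

def brute_force_sat(variables, formula):
    """Try all 2**n truth assignments; return a satisfying one, or None.

    `formula` is a predicate on a dict mapping variable names to booleans.
    Checking any single assignment is fast, but the loop is exponential.
    """
    for values in product([False, True], repeat=len(variables)):
        assignment = dict(zip(variables, values))
        if formula(assignment):
            return assignment
    return None
```

For example, (x OR y) AND (NOT x OR NOT y) is satisfiable, while x AND NOT x is not.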

Cook's proof of NP-completeness is beautiful in its conception and a shambling Rube Goldberg contraption in its details. The key insight is that Boolean formulas can describe the circuits and operations of a computer. Cook showed how to write an intricate formula that encodes the entire operation of a computer executing a program to guess a solution and check its correctness. If and only if the Boolean formula has a satisfying assignment, the simulated computer program succeeds. Thus if you could determine in polynomial time whether or not any Boolean formula is satisfiable, you could also solve the encoded decision problem. The proof doesn't depend on the details of that problem, only on the fact that it has a polynomial-time checking procedure.

Thousands of problems are now known to be NP-complete. They form a vast fabric of interdependent computations. Either all of them are hard, or everything in NP is easy.

The Match Game

To understand the new holographic algorithms, we need one more ingredient from graph theory: the idea of a perfect matching.

Consider the double-feature festival. You want to show movies in pairs, with the proviso that any two films scheduled together should have a performer in common; also, no film can be screened more than once. These constraints lead to a graph where the vertices are film titles, and two titles are connected by an edge if the films share an actor. The task is to identify a set of edges linking each vertex to exactly one other vertex. The brute-force method of trying all possible matchings is exponential, but if you are given a candidate solution, you can efficiently verify its correctness. Thus the perfect-matching problem lies in NP.
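Here too, checking a candidate is cheap. A sketch, assuming the vertices and edges are given explicitly:

```python
def is_perfect_matching(vertices, edges, matching):
    """Check that `matching` is a set of edges drawn from `edges`
    that covers every vertex exactly once."""
    covered = [v for edge in matching for v in edge]
    return (all(edge in edges for edge in matching)
            and sorted(covered) == sorted(vertices))
```

On a square, the two opposite-side pairs form a perfect matching; two adjacent sides do not, because they cover one vertex twice and miss another.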

In the 1960s Jack Edmonds, now of the University of Waterloo, devised an efficient algorithm that finds a perfect matching if there is one. The Edmonds algorithm works in polynomial time, which means the decision problem for perfect matching is in P. (Indeed, Edmonds's 1965 paper includes the first published discussion of the distinction between polynomial and exponential algorithms.)

Another success story among matching methods applies only to planar graphs, those that can be drawn without crossed edges. On a planar graph you can efficiently solve not only the decision problem for perfect matching but also the counting problem; that is, you can learn how many different subsets of edges yield a perfect matching. In general, counting problems seem more difficult than decision problems, since the solution conveys more information. The main complexity class for counting problems is called #P (pronounced "sharp P"); it includes NP as a subset, so #P problems must be at least as hard as NP.

The problem of counting planar perfect matchings has its roots in physics and chemistry, where the original question was: If diatomic molecules are adsorbed on a surface, forming a single layer, how many ways can they be arranged? Another version asks how many ways dominos (2-by-1 rectangles) can be placed on a chessboard without gaps or overlaps. The answers exhibit clear signs of exponential growth; when you arrange dominos on square boards of size 2, 4, 6 and 8, the number of distinct tilings is 2, 36, 6,728 and 12,988,816. Given this rapid proliferation, it seems quite remarkable that a polynomial-time algorithm can count the configurations. The ingenious method was developed in the early 1960s by Pieter W. Kasteleyn and, independently, Michael E. Fisher and H. N. V. Temperley. It has come to be known as the FKT algorithm.
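Those counts can be confirmed with a short dynamic program that scans the board cell by cell. The sketch below is not the FKT algorithm (its running time grows exponentially with the board width); it merely reproduces the numbers quoted above for small boards:

```python
from functools import lru_cache

def count_domino_tilings(rows, cols):
    """Count tilings of a rows-by-cols board by 2-by-1 dominoes,
    scanning cells in row-major order; `occupied` is a bitmask of
    the next `cols` cells already covered by earlier dominoes."""
    @lru_cache(maxsize=None)
    def fill(cell, occupied):
        if cell == rows * cols:
            return 1 if occupied == 0 else 0
        if occupied & 1:  # this cell is already covered; move on
            return fill(cell + 1, occupied >> 1)
        total = 0
        # horizontal domino: this cell plus the one to its right
        if cell % cols != cols - 1 and not occupied & 2:
            total += fill(cell + 1, (occupied >> 1) | 1)
        # vertical domino: this cell plus the one directly below
        if cell + cols < rows * cols:
            total += fill(cell + 1, (occupied >> 1) | (1 << (cols - 1)))
        return total
    return fill(0, 0)
```

Calling `count_domino_tilings(n, n)` for n = 2, 4, 6, 8 yields 2, 36, 6728 and 12988816, matching the sequence in the text.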

The mathematics behind the FKT algorithm takes some explaining. In outline, the idea is to encode the structure of an n-vertex graph in an n-by-n matrix; then the number of perfect matchings is given by an easily computed property of the matrix. The illustration on this page shows how the graph is represented in matrix form.

The computation performed on the matrix is essentially the evaluation of a determinant. By definition, a determinant is a sum of n! terms, where each term is a product of n elements chosen from the matrix. The symbol n! denotes the factorial of n, or in other words n × (n−1) × ... × 3 × 2 × 1. The trouble is, n! is not a polynomial function of n; it qualifies as an exponential. Thus, under the rules of complexity theory, the whole scheme is really no better than the brute-force enumeration of all perfect matchings. But this is where the rabbit comes out of the hat. There are alternative algorithms for computing determinants that do achieve polynomial performance; the best-known example is the technique called Gaussian elimination. With these methods, all but a polynomial number of terms in that giant summation magically cancel out. We never have to compute them, or even look at them.

(The answer sought in the perfect-matching problem is actually not the determinant but a related quantity called the Pfaffian. However, the Pfaffian is equal to the square root of the determinant, and so the computational procedure is essentially the same.)
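To see the shortcut at work on the tiniest case, the sketch below evaluates a determinant by Gaussian elimination (roughly n³ arithmetic operations instead of n! terms) and applies it to a skew-symmetric matrix for a 4-cycle. The specific matrix and edge orientation are my own small example, not taken from the article; the orientation is chosen so that the Pfaffian, the square root of the determinant, counts the square's two perfect matchings.

```python
from fractions import Fraction
import math

def determinant(matrix):
    """Determinant by Gaussian elimination with exact arithmetic:
    O(n^3) operations instead of the n! terms of the definition."""
    a = [[Fraction(x) for x in row] for row in matrix]
    n, det = len(a), Fraction(1)
    for col in range(n):
        pivot = next((r for r in range(col, n) if a[r][col] != 0), None)
        if pivot is None:
            return Fraction(0)
        if pivot != col:
            a[col], a[pivot] = a[pivot], a[col]
            det = -det
        det *= a[col][col]
        for r in range(col + 1, n):
            factor = a[r][col] / a[col][col]
            for c in range(col, n):
                a[r][c] -= factor * a[col][c]
    return det

# Skew-symmetric matrix for a 4-cycle (vertices 0-1-2-3-0) with edges
# oriented 0->1, 1->2, 2->3, 0->3; a +1 in row u, column v means the
# edge points u -> v.
A = [[ 0,  1,  0,  1],
     [-1,  0,  1,  0],
     [ 0, -1,  0,  1],
     [-1,  0, -1,  0]]

print(math.isqrt(int(determinant(A))))  # the square's 2 perfect matchings
```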

The existence of a shortcut for evaluating determinants and Pfaffians is like a loophole in the tax code, a windfall for those who can take advantage of it, but you can only get away with such special privileges if you meet very stringent conditions.

Closely related to the determinant is another quantity associated with

[Figure: The fast algorithm for counting planar perfect matchings works by translating the problem into the language of matrices and linear algebra. The pattern of connections within the graph is encoded in an adjacency matrix, an array of numbers with rows and columns labeled by the vertices of the graph. If two vertices are joined by an edge, the corresponding element of the matrix is either +1 or −1; otherwise the element is 0. Elements of the matrix are combined in a sum of products called the Pfaffian, which yields the number of perfect matchings. For the simple graph shown here there are two perfect matchings.]


matrices called the permanent. It's another sum of n! products, but even simpler. For the determinant, a complicated rule assigns positive and negative signs to the various terms of the summation. For the permanent, there's no need to bother keeping track of the signs; they're all positive. But the alternation of signs is necessary for the cancellations that allow fast computation of determinants. As a result, the polynomial loophole doesn't work for permanents. In 1979 Valiant showed that the calculation of permanents is #P-complete. (It was in this work that the class #P was first defined.)
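The best-known general formulas for the permanent, such as Ryser's inclusion-exclusion formula sketched below, still take exponential time, which is consistent with Valiant's #P-completeness result:

```python
from itertools import combinations

def permanent(a):
    """Ryser's formula: an inclusion-exclusion sum over all nonempty
    column subsets, still about 2**n terms for an n-by-n matrix."""
    n = len(a)
    total = 0
    for size in range(1, n + 1):
        for cols in combinations(range(n), size):
            prod = 1
            for row in a:
                prod *= sum(row[c] for c in cols)
            total += (-1) ** size * prod
    return (-1) ** n * total
```

The permanent looks like a determinant with every sign made positive; for [[1, 2], [3, 4]] it is 1·4 + 2·3 = 10, where the determinant would be 1·4 − 2·3 = −2.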

At a higher level, too, the conspiracy of circumstances that allows perfect matchings to be counted in polynomial time seems rather delicate and sensitive to details. The algorithm works only for planar graphs; attempts to extend it to larger families of graphs have failed. Even for planar graphs, it works only for perfect matchings; counting the total number of matchings is a #P-complete task.

Algorithmic Holography

The engine that drives the FKT algorithm is the linear-algebra shortcut for evaluating determinants (or Pfaffians) in polynomial time. This prime mover is harnessed to solve a counting problem in another area of mathematics, namely graph theory. Such translations, or reductions, from one problem to another are standard fare in complexity theory. Holographic algorithms also rely on reductions, and indeed they ultimately translate problems into the language of determinants. But the nature of the reductions is novel.

A typical non-holographic reduction is a one-to-one mapping between problems in two domains. If you can reduce problem A to problem B, and then find a solution to an instance of B, you know that the corresponding instance of A also has a solution. Devising transformations that set up this one-to-one linkage between problems is a demanding art form. Holographic reductions exploit a broader class of transformations that don't necessarily link individual problem instances, but the reductions do preserve the number of solutions or the sum of the solutions. For certain counting problems, that's enough.

A problem with the cryptic name #PL-3-NAE-ICE supplies an example of the holographic process. The problem concerns a planar graph of maximum degree three; that is to say, no vertex has more than three edges. Each edge is to be assigned a direction, subject to the constraint that no vertex of degree two or three can have all of its edges directed either inward or outward. Graphs of this general kind have been studied as models of the structure of ice; the vertices represent molecules and the directed edges are chemical bonds. The decision version of the problem asks whether the edges can be assigned directions so that the not-all-equal constraint is obeyed everywhere. Here we are interested in the counting version, which asks how many ways the constraints can be satisfied.
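For intuition, the counting version can be brute-forced on tiny graphs. The sketch below is my own illustration, exponential in the number of edges; it checks the not-all-equal constraint at every vertex of degree two or more:

```python
from itertools import product

def count_nae_ice(vertices, edges):
    """Count orientations of `edges` in which no vertex of degree two
    or three has all its edges pointing in or all pointing out.
    Brute force over all 2**len(edges) orientations; the holographic
    algorithm achieves this count in polynomial time."""
    count = 0
    for dirs in product([0, 1], repeat=len(edges)):
        ok = True
        for w in vertices:
            incident = [(e, d) for e, d in zip(edges, dirs) if w in e]
            if len(incident) < 2:
                continue  # degree-one vertices are unconstrained
            # edge (u, v) with d == 0 points u -> v
            inward = [d == (0 if e[1] == w else 1) for e, d in incident]
            if all(inward) or not any(inward):
                ok = False
                break
        if ok:
            count += 1
    return count
```

On a triangle, only the two cyclic orientations satisfy the constraint at every vertex, so the count is 2 out of 8 possible orientations.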

The strategy is to build a new planar graph called a matchgrid, which encodes both the structure of the ice graph and the not-all-equal constraints that have to be satisfied at each vertex. Then we calculate a weighted sum of the perfect matchings in the matchgrid, using the efficient FKT algorithm. Although there may be no one-to-one mapping between individual matchings in the matchgrid and valid assignments of bond directions in the ice graph, the weighted sum of the perfect matchings is equal to the number of valid assignments.

The matchgrid is constructed from components called matchgates, which are planar graph fragments that act much like the logic gates of Boolean circuits. To understand how this computer built of graphs works, it helps to consider first an idealized and simplified version, in which we can manufacture matchgates to meet any specifications we please.

Given such an unlimited stock of gates, we can assemble a model of PL-3-NAE-ICE as follows. A degree-three vertex in the ice graph is represented by a recognizer matchgate with three inputs. The recognizer has a signature that implements the not-all-equal function: the gate is designed so that its contribution to the sum of the perfect matchings is 0 if the three inputs are 000 or 111, but the contribution is 1 for any of the other six possible input patterns (001, 010, 011, 100, 101, 110).

Each edge of the ice graph is represented in the matchgrid by a generator matchgate that has two outputs, corresponding to the two ends of the edge. To capture the idea that an edge has a direction, pointing toward one end and away from the other, the generator contributes 1 to the weighted sum when the outputs are either 01 or 10, but the contribution is 0 for outputs 00 and 11.

[Figure: Counting the configurations of a structure called three-ice is an example of a problem solved in polynomial time by a holographic algorithm. Three-ice is a directed graph (each edge has an arrow attached), and no more than three edges can meet at any vertex. Where two or three edges come together, the arrows must not all be either converging or diverging. One valid configuration is shown here; the algorithm computes the number of ways the arrows can be placed while satisfying the not-all-equal constraint.]

[Figure: Matchgates are graph-theory devices analogous to the logic gates of digital circuitry. The gate shown here implements the not-all-equal function for two inputs. It is a chain of five vertices connected by four edges; the orange exterior vertices accept the inputs. An input pattern is presented to the gate by removing an orange vertex for each 1 in the input, then checking the remaining graph to see if it accommodates a perfect matching. A 00 input removes neither orange vertex; the resulting graph cannot have a perfect matching because it has an odd number of vertices. The inputs 01 and 10 each remove one vertex and thus do allow a perfect matching; the 11 input again leaves an odd number of vertices. Although the two-input not-all-equal gate is fairly simple, many other functions, including the analogous gate for three inputs, cannot be realized in this direct way.]

In this scheme, the concept of perfect matching becomes a computational mechanism. If a graph fragment, packaged up as a matchgate, allows a perfect matching, then the gate outputs a logical 1, or true. If no perfect matching is possible, the output is 0, or false. This is a novel and interesting way to build a computer, but there's a catch. Many of the needed matchgates cannot be implemented in the direct manner described above. In particular, the three-input not-all-equal gate cannot be implemented. The reason is easy to demonstrate. Perfect matching is all about parity; a graph cannot possibly have a perfect matching unless the number of vertices is even. The three-input not-all-equal gate is required to respond in the same way to the inputs 000 and 111. But these patterns have opposite parity; one is even and the other is odd.

The remedy for this impediment is yet more linear algebra. Although no simple matchgate can directly implement the three-input not-all-equal function, the desired behavior can be generated as a linear superposition of other functions. Finding an appropriate set of equations to create such superpositions is the most essential and also the most difficult aspect of applying the holographic method. It has mostly been a hit-or-miss proposition, requiring both inspiration and expert use of computer-algebra systems. Cai and Pinyan Lu of Tsinghua University have recently made progress on systematizing the search process.

So far about a dozen counting problems have been solved by the holographic method, none of them having any immediate practical use or consequences for the further development of complexity theory. Indeed, they are an odd lot; apart from the ice problem they are mostly specialized and restricted versions of satisfiability and matching. One particularly curious result gives a fast algorithm for counting a certain set of solutions modulo 7, but counting the same set modulo 2 is at least as hard as NP-complete.

Why are the methods called holographic algorithms? Valiant explains that their computational power comes from the mutual cancellation of many contributions to a sum, as in the optical interference pattern that creates a hologram. This sounds vaguely like the superposition principle in quantum computing, and that is not entirely a coincidence. Valiant's first publication on the topic, in 2002, was titled "Quantum circuits that can be simulated classically in polynomial time."

P or NP, That Is the Question

Do holographic algorithms reveal anything we didn't already know about the P = NP question? Lest there be any misunderstanding, one point bears emphasizing: Although some of the problems solved by holographic methods were not previously known to be in P, none of them were NP-complete or #P-complete. Thus, so far, the barrier between P and NP remains intact.

Suggesting that P might be equal to NP is deeply unfashionable. A few years ago William Gasarch of the University of Maryland took a poll on the question. Of 100 respondents, only nine stood on the side of P = NP, and Gasarch reported that some of them took the position just to be contrary. The idea that all NP problems have easy solutions seems too good to be true, an exercise in wishful thinking; it would be miraculous if we lived in a universe where computing is so effortless. But the miracle argument cuts both ways: For NP to remain aloof from P, we have to believe that not even one out of all those thousands of NP-complete problems has an efficient solution.

Valiant suggests a comparison with the Goldbach conjecture, which holds that every even number greater than 2 is the sum of two primes. Nearly everyone believes it to be true, but in the absence of a proof, we don't know why it should be true. We can't rule out the possibility that exceptions exist but are so rare we haven't stumbled on one yet. Likewise with the P and NP question: A polynomial algorithm for just one NP-complete problem would forever alter the landscape.

The work on holographic algorithms doesn't have to be seen as some sort of wildcat drilling expedition, hoping to strike a P = NP gusher. It would be worthwhile just to have a finer survey of the boundaries between complexity classes, showing more clearly what can and can't be accomplished with polynomial resources. Valiant writes that any proof of P ≠ NP will need to explain, and not only to imply, the unsolvability of our polynomial systems.

Finally, there's the challenge of understanding the algorithms themselves at a deeper level. To call them "accidental" (or "exotic," or "freak," which are other terms that turn up in the literature) suggests that they are sports of nature, like weird creatures found under a rock and put on exhibit. But one could also argue, on the contrary, that these algorithms are not at all accidental; they are highly engineered constructions. The elaborate systems of polynomials needed to create sets of matchgates are not something found in the primordial ooze of mathematics. Someone had to invent them.

Bibliography

Cai, Jin-Yi. Preprint. Holographic algorithms. http://pages.cs.wisc.edu/~jyc/papers/HA-survey.pdf

Cai, Jin-Yi, Vinay Choudhary and Pinyan Lu. 2007. On the theory of matchgate computations. In Proceedings of the 22nd IEEE Conference on Computational Complexity, pp. 305–318.

Cai, Jin-Yi, and Pinyan Lu. 2007. Holographic algorithms: From art to science. In Proceedings of the 39th ACM Symposium on the Theory of Computing, STOC '07, pp. 401–410.

Cai, Jin-Yi, and Pinyan Lu. 2007. Holographic algorithms: The power of dimensionality resolved. In Proceedings of the 34th International Colloquium on Automata, Languages and Programming, ICALP 2007, pp. 631–642.

Cook, Stephen A. 1971. The complexity of theorem-proving procedures. In Proceedings of the Third ACM Symposium on the Theory of Computing, pp. 151–158.

Edmonds, Jack. 1965. Paths, trees, and flowers. Canadian Journal of Mathematics 17:449–467.

Gasarch, William I. 2002. The P=?NP poll. SIGACT News 33(2):34–47.

Jerrum, Mark. 2003. Counting, Sampling and Integrating: Algorithms and Complexity. Basel, Switzerland: Birkhäuser.

Kasteleyn, P. W. 1961. The statistics of dimers on a lattice. Physica 27:1209–1225.

Mertens, Stephan. 2002. Computational complexity for physicists. Computing in Science and Engineering 4(3):31–47.

Valiant, L. G. 1979. The complexity of computing the permanent. Theoretical Computer Science 8:189–201.

Valiant, Leslie G. 2002. Quantum circuits that can be simulated classically in polynomial time. SIAM Journal on Computing 31:1229–1254.

Valiant, Leslie G. 2005. Holographic algorithms. Electronic Colloquium on Computational Complexity, Report No. 99.

Valiant, Leslie G. 2006. Accidental algorithms. In Proceedings of the 47th IEEE Symposium on Foundations of Computer Science, FOCS '06, pp. 509–517.