Towards P = NP via k-SAT: A k-SAT Algorithm Using Linear Algebra on Finite Fields



arXiv:1106.0683v1 [cs.DS] 3 Jun 2011

Towards P = NP via k-SAT:

A k-SAT Algorithm Using Linear Algebra on Finite Fields

© Matt Groff 2011

Incomplete First Draft

MATT GROFF
P.O. Box 642
Camp Hill, PA, USA 17001-0642
[email protected]

June 6, 2011

Abstract

The problem of P vs. NP is very serious, and solutions to the problem can help save lives. This article is an attempt at solving the problem using a computer algorithm. It is presented in a fashion that will hopefully allow for easy understanding for many people and scientists from many diverse fields.

In technical terms, a novel method for solving k-SAT is explained. This method is primarily based on linear algebra and finite fields. Evidence is given that this method may require only $O(n^7)$ time and space for deterministic models. It's concluded that significant evidence exists that P=NP.

There is a forum devoted to this paper at http://482527.ForumRomanum.com. All are invited to correspond here and help with the analysis of the algorithm.

1 Introduction

There are many problems proposed to computer scientists that have been thought to be too difficult for computers to solve quickly. In fact, perhaps the most fundamental question in computer science is to find out if certain types of problems, collectively known as the class NP, can be solved quickly by a computer. If so, a world of opportunities would open up, and many new problems that were supposed to be almost impossible to solve could be solved quickly. This paper attempts to provide a proof that they can be solved quickly, and also shows a way to do it. This will hopefully invite researchers from many diverse fields to contribute to the research and work of solving NP-hard problems.

The level of interest in this question is so great that in 2000, the Clay Mathematics Institute listed the 7 Millennium Prize Problems, and offered $1,000,000 to someone who could prove the relationship between P and NP [7], which would answer the question.

1.1 Turing Machines

To understand the classes P and NP, first consider the basic notion of a Turing machine, which is a scientific definition of a computer. It has one or more tapes that hold information. At any time, the Turing machine uses the tape or tapes to decide what to do next. The machine has a prescribed set of instructions to help it decide.

To develop the ideas behind Turing machines more, a distinction is made between the types of decisions that can be made. A deterministic Turing machine (DTM) has a predetermined decision for every type of situation it encounters; thus the term deterministic. A nondeterministic Turing machine (NTM), on the other hand, can have more than one action it can take for any given situation. In other words, its actions aren't determined ahead of time. Therefore, it's said to be nondeterministic.

Figure 1 makes the difference more clear. NTMs have been thought to be able to solve more problems than DTMs because of the availability of more choices. Here the DTM has only one choice available at each step, while the NTM has three choices available. So this evidence points to DTMs as being more limited in potential than NTMs. In fact, there is enough difference between these two computers that different classes of computation (computational power) were proposed for each.


[Figure 1: A Scenario Involving Choices. A decision tree: as an algorithm (computer program) progresses, it must pick a choice from three possibilities. At each step, an NTM can pick from any choice, while a DTM doesn't really have the ability to pick, so it's thought to be at a disadvantage. Starting from START, branches are labeled "Choice available to NTM" and "Choice available to NTM and DTM".]


1.2 P and NP

Right around 1970, Leonid Levin, in the USSR, and Stephen Cook, of the US, independently arrived at the concept of NP-completeness [1].

The idea of NP-completeness comes from two classes of algorithms. Once again, these two classes come from the ideas of DTMs and NTMs.

The classes are defined partially by how much can be accomplished within a limited amount of time. This time limit, which will be explained briefly here, is defined by the mathematics of asymptotic analysis, which is described in [10]. Basically, the time limit is given as a function of the problem size. The function can take on many forms, and two common forms are shown in Figure 2. Here a polynomial function ($t = O(n^\alpha)$) is contrasted with an exponential function ($t = O(\alpha^n)$). Note that the polynomial functions may take more time for smaller problems, but the larger problems, which computer scientists are mainly concerned with, have smaller times for polynomial functions.

The two classes mentioned above are defined for polynomial functions. One class, P, is defined as all problems that can be solved by a DTM in polynomial time (a polynomial function of the problem size).


[Figure 2: The difference between functions. Log-scale time $\log(t)$ versus problem size $n$, contrasting $t = O(\alpha^n)$ with $t = O(n^\alpha)$.]

The other class, NP, is defined as the class of all problems that can be solved by an NTM in polynomial time. NP-complete problems are then problems in NP that are at least as hard to solve as the hardest problems in NP.

The fundamental question that this paper attempts to answer is whether NP-complete problems can be solved in polynomial time by a DTM; in other words, is P = NP?

For more information on P versus NP, one can refer to Sipser [6].

1.3 SAT and k-SAT

SAT essentially asks if, given a Boolean formula, the formula can be satisfied. In other words, it asks whether the variables in the formula can be given a truth assignment that makes the entire formula true. To thoroughly answer the question, it's required that a certificate is given if the answer is true. This is usually in the form of a particular solution to the question, such as a satisfying assignment for all variables.

k-SAT is a particular variation of SAT, in which the formula is organized into clauses. All literals (variables or their negations) inside a clause are connected via disjunction. All clauses are connected via conjunction. The k in k-SAT then refers to the number of literals in each clause. More information on SAT and k-SAT can be found in [12].
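As a concrete illustration (not from the original paper), here is a minimal Python sketch of a 3-SAT instance and a brute-force satisfiability check; the integer encoding of literals (i for $x_i$, -i for $\bar{x}_i$, indices starting at 1) is an assumed convention, and the exponential search is for illustration only:

    from itertools import product

    # Each clause is a tuple of literals: i means x_i, -i means NOT x_i.
    clauses = [(1, 2, -3), (-1, 2, 3), (1, -2, 3)]

    def satisfies(assignment, clause):
        # A clause (a disjunction) is satisfied if any one of its literals is true.
        return any(assignment[abs(lit)] == (lit > 0) for lit in clause)

    # Brute force: try all 2^n truth assignments.
    n = max(abs(lit) for clause in clauses for lit in clause)
    for bits in product([False, True], repeat=n):
        assignment = dict(enumerate(bits, start=1))
        if all(satisfies(assignment, c) for c in clauses):
            print("satisfiable, e.g.:", assignment)
            break
    else:
        print("unsatisfiable")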

Conventional SAT and k-SAT solvers function by learning clauses or making random guesses at solutions ([14] and the recent survey [15]).

The current best upper bound for 3-SAT is $O(1.32113^n)$ time, and can be found in ISAAC 2010 by Iwama et al. [16]. There's an arXived paper by Hertli, Moser and Scheder which gives an $O(1.321^n)$ time algorithm [17]. These are all randomized algorithms. There is an arXived, deterministic 3-SAT algorithm by Kutzkov and Scheder that runs in time $O(1.439^n)$ [18].

There is a paper that proposed a polynomial runtime for a SAT variant. V.F. Romanov proposed a 3-SAT algorithm that uses "discordant structures", or set-like operations on a lattice, to represent information and determine solutions in polynomial time [19]. However, Liao presents an argument that P is not equal to NP, using a 3-SAT variant, in [9].

Perhaps Baker, Gill, and Solovay's theory of relativization helps explain why there are not more algorithms that attempt to solve NP-complete problems in polynomial time [20]. Their paper "Relativizations of the P != NP Question" states that consulting an oracle can lead to situations in which P != NP.


[Figure 3: Representations of $x_1$. A binary tree over $x_2 x_1 x_0$ splits ??? into 0?? and 1??, then into 00?, 01?, 10?, 11?, and finally into the leaves 000 through 111 (the particular truth assignments, shown in red). Reading off the value of $x_1$ across the eight assignments 0 through 7 gives the sequence 0 0 1 1 0 0 1 1, which is recorded as the clause polynomial $f(x_1) = 0x^0 + 0x^1 + 1x^2 + 1x^3 + 0x^4 + 0x^5 + 1x^6 + 1x^7$.]

2 Data Structure(s)

One of the most fundamental components of the algorithm will be referred to as the clause polynomial. Figure 3 shows the basic organization of clause polynomials. Essentially, the complete information of one clause is represented by a polynomial of one variable. That is, for each particular truth assignment of all variables, the polynomial "records" whether the clause satisfies this assignment. This is then attached to the polynomial's variable, which orders the truth assignments.

Note that the particular truth assignments are shown in red. In this fashion, the same tree that organizes the variables can be repeatedly used, and therefore provides some standard organization to the information. Again, this tree organizes all possible variable assignments. It does so by starting at the root, and then proceeding through all Boolean variables in order, assigning one node to each possible assignment of true or false for each variable. The leaves (at the bottom) thus represent a complete truth assignment for all variables.

The clause polynomial for a single variable is equivalent to a digit in a binary number. In order to understand this, Figure 4 shows binary numbers composed of one to three bits. The tree in Figure 3 uses a three bit example, since there are three variables; thus one bit for each variable.

[Figure 4: Binary Representations. 1 bit: 0 (0), 1 (1). 2 bits: 00 (0), 01 (1), 10 (2), 11 (3). 3 bits: 000 (0), 001 (1), 010 (2), 011 (3), 100 (4), 101 (5), 110 (6), 111 (7).]


$x_0$ in system of $x_0, x_1, x_2$:

$$x^{2^0}(1 + x^{2^1})(1 + x^{2^2}) = x^1 + x^3 + x^5 + x^7 = 0x^0 + 1x^1 + 0x^2 + 1x^3 + 0x^4 + 1x^5 + 0x^6 + 1x^7$$

$x_1$ in system of $x_0, x_1, x_2$:

$$(1 + x^{2^0})\,x^{2^1}(1 + x^{2^2}) = x^2 + x^3 + x^6 + x^7 = 0x^0 + 0x^1 + 1x^2 + 1x^3 + 0x^4 + 0x^5 + 1x^6 + 1x^7$$

Figure 5: Constructing Polynomials For Variables

Note, then, that the middle variable, $x_1$, out of $x_0$, $x_1$, and $x_2$, corresponds with the middle bit. This can be seen in the figure by looking at the middle bit for each binary representation of three bits (which is displayed larger than the other two bits). The sequence is the same as in the binary tree above.

2.1 Constructing Clause Polynomials

A basic clause is of the form

$$x_{a_0} \vee x_{a_1} \vee \cdots \vee x_{a_z} \qquad (2.1)$$

The algorithm seeks to construct clause polynomials for clauses in this form. This is a two step process. The basic theory behind the process is fairly easily explained here, although the technicalities and a proof are saved for Appendix A.

To begin to understand how clause polynomials are constructed, a few patterns are observed. Figure 6 helps to demonstrate these. First, the obvious pattern of binary digits appears for the variables on the lefthand side. Each variable $x_i$ has a pattern of $2^i$ zeros and then $2^i$ ones, which then repeats. These variables, in conjunction, correspond with binary numbers. In other words, for the least significant bit, the pattern is zero, then one, and then it repeats. For the next least significant bit (or variable), the pattern has two zeros and then two ones, which then repeats. This pattern continues for all variables.

Observing this pattern, an easy way to construct the clause polynomials for a single variable becomes clear. In fact, it can be summarized as a simple formula:

$$f(x_m) = \left(\prod_{k=0}^{m-1} \left(1 + x^{2^k}\right)\right) x^{2^m} \left(\prod_{k=m+1}^{n} \left(1 + x^{2^k}\right)\right) \qquad (2.2)$$

Here $\prod(f(x))$ denotes the (indefinite) product of a function of $x$. More information can be found about indefinite products in [26]. This is taken over a system of $n+1$ variables.

Figure 5 shows how the clause polynomials are actually constructed using the formula. Note that these are all for a single variable. For the $x_0$ example, $m = 0$ since the variable is $x_0$. In other words, if the variable were $x_{32}$, then the algorithm would plug $m = 32$ into the formula. Note, then, that $n$ is one less than the total number of variables, so in both cases it is two, since there are three variables. Returning to the $x_0$ example, the lefthand product has nothing inside it, the middle follows from knowing $n$, and the right side product is determined from $m$ and $n$ and the rules of products. For the $x_1$ example, there is a product on the left side to work with, since $m$ is now one, and $k$ is set to zero for this. Then it follows that $(1 + x^{2^k}) = (1 + x^1)$ since $k = 0$. The right side follows similarly, this time with $k = 2$.
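As a small sketch (my own illustration, not from the paper), Equation 2.2 can be evaluated directly with polynomials stored as dense coefficient lists indexed by the power of $x$; the two calls below reproduce the Figure 5 polynomials:

    def poly_mul(p, q):
        # Multiply two polynomials given as coefficient lists (index = power of x).
        r = [0] * (len(p) + len(q) - 1)
        for i, a in enumerate(p):
            for j, b in enumerate(q):
                r[i + j] += a * b
        return r

    def f_single(m, n):
        # Equation 2.2 over a system of n+1 variables x_0 .. x_n:
        # f(x_m) = (prod_{k<m} (1 + x^(2^k))) * x^(2^m) * (prod_{k>m} (1 + x^(2^k)))
        result = [0] * (2 ** m) + [1]  # the x^(2^m) factor
        for k in list(range(m)) + list(range(m + 1, n + 1)):
            result = poly_mul(result, [1] + [0] * (2 ** k - 1) + [1])  # (1 + x^(2^k))
        return result

    print(f_single(0, 2))  # [0, 1, 0, 1, 0, 1, 0, 1], i.e. x^1 + x^3 + x^5 + x^7
    print(f_single(1, 2))  # [0, 0, 1, 1, 0, 0, 1, 1], i.e. x^2 + x^3 + x^6 + x^7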

Returning attention to Figure 6, a second pattern can be observed. This is apparent in the blue portion of the figure.


[Figure 6: Repeating Patterns of Clauses. A table whose lefthand columns list the truth values of $x_4\, x_3\, x_2\, x_1\, x_0$ over all assignments, alongside 0/1 satisfaction columns for the clauses $x_1 \vee x_3$, $x_0 \vee x_3$, $x_0 \vee x_2$, $x_0 \vee x_1$, $x_0 \vee x_1 \vee x_3$, and $x_0 \vee x_1 \vee x_2$. Each variable column repeats a block of $2^i$ zeros followed by $2^i$ ones, and each clause column is a repeating pattern built from the patterns of its variables.]

For clauses of the form x_{a_0} ∨ x_{a_1} ∨ ... ∨ x_{a_z}:

    For each x_{a_i}:
        Calculate g(x_{a_i}) = ( prod_{k=0}^{a_i - 1} (1 + x^{2^k}) ) · x^{2^{a_i}}
    Let Result = 0
    For h = 0 to n:
        If h = a_i for some a_i:
            Result = Result + g(x_h)
        Else:
            Result = Result · (1 + x^{2^h})
    Return Result

Figure 7: Clause Polynomial Construction Algorithm


$(x_0 \vee x_1)$ from $x_0, x_1, x_2$:

$g(x_0) = x^{2^0} = 1x^1$;  $g(x_1) = (1 + x^{2^0})\,x^{2^1} = 1x^2 + 1x^3$

$f(x_0 \vee x_1) = ((x^1) + (x^2 + x^3))(1 + x^{2^2})$
$f(x_0 \vee x_1) = 0x^0 + 1x^1 + 1x^2 + 1x^3 + 0x^4 + 1x^5 + 1x^6 + 1x^7$

$(x_0 \vee x_1 \vee x_2)$ from $x_0, x_1, x_2$:

$g(x_0) = x^{2^0} = 1x^1$;  $g(x_1) = (1 + x^{2^0})\,x^{2^1} = 1x^2 + 1x^3$
$g(x_2) = (1 + x^{2^0})(1 + x^{2^1})\,x^{2^2} = 1x^4 + 1x^5 + 1x^6 + 1x^7$

$f(x_0 \vee x_1 \vee x_2) = (x^1) + (x^2 + x^3) + (x^4 + x^5 + x^6 + x^7)$
$f(x_0 \vee x_1 \vee x_2) = 0x^0 + 1x^1 + 1x^2 + 1x^3 + 1x^4 + 1x^5 + 1x^6 + 1x^7$

Figure 8: Clause Polynomial Examples

Here, the second pattern is shown in boxes for two-variable clauses. There is the original one-variable pattern of ones and zeros, which is then followed by a series of ones. Note that the series of ones is exactly the same size as the original single-variable pattern. This entire pattern then repeats.

The observed pattern comes from the combination of two variables. The long series of ones comes from the variable that is "larger" than the other. The short series of repeating ones and zeros comes from the "smaller" of the two variables. Together, they form a series that repeats. Another way of looking at it is that both variables form a repeating series, and combining these two repeating series creates another repeating series.

In fact, the pattern of repetition continues even as more variables are added to the clause. The purple portion of the figure shows the repetitions involved with three variables. Again, combining two variables produces a short, repeating series. When a third variable (that is strictly "larger" than the other two) is added, a longer, yet repeating, series emerges.

Again, the technicalities of all of this are presented (and solved) in Appendix A.

There is a very simple algorithm to calculate clause polynomials for any number of variables, as long as they are in the form given in Equation 2.1. Figure 7 shows the algorithm. The basic idea is to first calculate the (possibly long) sequence of ones that repeats for each variable. This is then shifted into the proper position. It is identical to the clause calculations for one variable, with the exception that the series is not made to repeat. The idea is that using "smaller" variables, the algorithm will construct short repeating sequences, and add in the appropriate larger variable when the repeating sequence gets large enough. So essentially, it constructs one large repeating sequence by making the smaller variable sequences repeat, and adding in larger variables when the sequence gets large enough.

Figure 8 shows examples of creating clause polynomials for given problems. Note that the complete set of variables for the original problem must be known ahead of time, just as for clauses of individual variables. As can be seen, the results from this figure correspond with the first eight values from the corresponding clauses in Figure 6.
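The Figure 7 procedure might be rendered in Python as follows (my own sketch, reusing the dense coefficient-list representation from the earlier sketch):

    def poly_mul(p, q):
        # Dense coefficient-list multiplication (index = power of x).
        r = [0] * (len(p) + len(q) - 1)
        for i, a in enumerate(p):
            for j, b in enumerate(q):
                r[i + j] += a * b
        return r

    def poly_add(p, q):
        r = [0] * max(len(p), len(q))
        for i, a in enumerate(p):
            r[i] += a
        for i, b in enumerate(q):
            r[i] += b
        return r

    def clause_poly(indices, n):
        # Build f(x_{a_0} v ... v x_{a_z}) over variables x_0 .. x_n (Figure 7).
        def g(a):
            # g(x_a): the block of ones for x_a, shifted into place by x^(2^a).
            p = [0] * (2 ** a) + [1]
            for k in range(a):
                p = poly_mul(p, [1] + [0] * (2 ** k - 1) + [1])
            return p
        result = [0]
        for h in range(n + 1):
            if h in indices:
                result = poly_add(result, g(h))  # add in this variable's block
            else:
                # make the pattern built so far repeat: multiply by (1 + x^(2^h))
                result = poly_mul(result, [1] + [0] * (2 ** h - 1) + [1])
        return result

    print(clause_poly({0, 1}, 2))  # [0, 1, 1, 1, 0, 1, 1, 1], matching Figure 8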

There is really only one operation remaining before being able to construct essentially any clause polynomial of the form in Equation 2.1: negation. As it turns out, negation is not much more difficult than constructing non-negated clause polynomials.


[Figure 9: Patterns of Clauses With Negation. A table like Figure 6: truth-value columns for $x_4\, x_3\, x_2\, x_1\, x_0$ alongside 0/1 satisfaction columns for clauses involving negated variables, such as $\bar{x}_0$, $\bar{x}_0 \vee x_1$, $x_0 \vee x_1$, $x_0 \vee \bar{x}_1$, $x_0 \vee \bar{x}_1 \vee x_2$, and a version with $x_2$ negated. Negating a variable swaps the zeros and ones of the pattern that the variable contributes.]

For clauses of the form x_{a_0} ∨ x_{a_1} ∨ ... ∨ x_{a_z}, where some variables x̄_{a_i} are negated (only the lines that differ from Figure 7 are shown):

    ...
        Calculate g(x̄_{a_i}) = prod_{k=0}^{a_i - 1} (1 + x^{2^k})
    ...
        If h = a_i for some negated x̄_{a_i}:
            Result = Result · (x^{2^{a_i}}) + g(x̄_{a_i})
        Else if h = a_i for some a_i:
            Result = Result + g(x_h)
        Else:
            Result = Result · (1 + x^{2^h})
    ...

Figure 10: Negated Clause Polynomial Construction Algorithm


$(x_0 \vee \bar{x}_1)$ from $x_0, x_1, x_2$:

$g(x_0) = x^{2^0} = 1x^1$;  $g(\bar{x}_1) = (1 + x^{2^0}) = 1x^0 + 1x^1$

$f(x_0 \vee \bar{x}_1) = \left(((1x^1)\,x^{2^1}) + (1x^0 + 1x^1)\right)\cdot(1 + x^{2^2})$
$f(x_0 \vee \bar{x}_1) = 1x^0 + 1x^1 + 0x^2 + 1x^3 + 1x^4 + 1x^5 + 0x^6 + 1x^7$

$(x_0 \vee \bar{x}_1 \vee x_2)$ from $x_0, x_1, x_2$:

$g(x_0) = x^{2^0} = 1x^1$;  $g(\bar{x}_1) = (1 + x^{2^0}) = 1x^0 + 1x^1$
$g(x_2) = (1 + x^{2^0})(1 + x^{2^1})\,x^{2^2} = 1x^4 + 1x^5 + 1x^6 + 1x^7$

$f(x_0 \vee \bar{x}_1 \vee x_2) = \left(((1x^1)\,x^{2^1}) + (1x^0 + 1x^1)\right) + (1x^4 + 1x^5 + 1x^6 + 1x^7)$
$f(x_0 \vee \bar{x}_1 \vee x_2) = 1x^0 + 1x^1 + 0x^2 + 1x^3 + 1x^4 + 1x^5 + 1x^6 + 1x^7$

Figure 11: Clause Polynomial Examples With Negation

Figure 9 displays clause patterns with negated variables. As can be noted from the negated variable $\bar{x}_0$ near the left, it is just a reversed version of ones and zeros, compared to the non-negated $x_0$ displayed just to the left of it. In other words, just as negation reverses true and false values in Boolean algebra, negation reverses zeros and ones in clause polynomials. In fact, this is how all single-variable clauses are negated: simply swap the zeros and ones. There is actually a very simple and effective way to do this - the negated clauses are the same as the regular clauses except for the fact that they are not multiplied by the power $x^{2^m}$, as the regular clauses are. This is the equivalent of removing this term from Equation 2.2, hence the resulting equation:

$$f(\bar{x}_m) = \left(\prod_{k=0}^{m-1} \left(1 + x^{2^k}\right)\right)\left(\prod_{k=m+1}^{n} \left(1 + x^{2^k}\right)\right) \qquad (2.3)$$

Again, this is for a system of $n+1$ variables, and $n$ can be adjusted accordingly for the size of the problem.

Next, attention can be turned to the new patterns observed for multiple-variable clauses. Figure 9 shows the negated values of $\bar{x}_0$ next to the combined clause $\bar{x}_0 \vee x_1$. Note that the pattern of ones coming from $x_1$ has not changed in any way; however, the repeating pattern of zeros and ones coming from $x_0$ has been switched, compared to the non-negated equation $x_0 \vee x_1$ beside it on the right. So really, it is becoming evident that negating variables only swaps the pattern of ones that the variable creates.

This is more evident if the clause $x_0 \vee \bar{x}_1$ is observed beside it. Here the pattern of ones coming from the variable $x_1$ has changed, since $x_1$ has been negated. So again, there is evidence that the pattern of ones coming from a variable is simply "exchanged" or placed elsewhere. It can be seen, though, that this does not change the pattern for $x_2$, as evidenced in the equation $x_0 \vee \bar{x}_1 \vee x_2$ beside it.

Finally, it's seen that swapping the pattern of ones may entail moving a smaller group of ones and zeros. This is evidenced in the final clause in the figure. Here, the variable $x_2$ has been negated, which causes a shift in the largest pattern of ones, which is due to $x_2$.

Two examples have been worked out in slight detail in Figure 11. Here the pattern is mostly the same, with the exception that negating variables causes some values to be swapped. This is the equivalent of moving the pattern of ones. The adjustments to the algorithm are shown in Figure 10.
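Under the same conventions, the Figure 10 adjustment for negated variables might look like this (my own sketch; `pos` and `neg` are assumed names for the sets of plain and negated variable indices):

    def poly_mul(p, q):
        r = [0] * (len(p) + len(q) - 1)
        for i, a in enumerate(p):
            for j, b in enumerate(q):
                r[i + j] += a * b
        return r

    def poly_add(p, q):
        r = [0] * max(len(p), len(q))
        for i, a in enumerate(p):
            r[i] += a
        for i, b in enumerate(q):
            r[i] += b
        return r

    def clause_poly(pos, neg, n):
        # pos / neg: indices of non-negated / negated variables in the clause.
        def block(a):
            # The repeating block of ones for x_a, before any shifting.
            p = [1]
            for k in range(a):
                p = poly_mul(p, [1] + [0] * (2 ** k - 1) + [1])
            return p
        result = [0]
        for h in range(n + 1):
            shift = [0] * (2 ** h) + [1]  # x^(2^h)
            if h in pos:
                result = poly_add(result, poly_mul(block(h), shift))
            elif h in neg:
                # Figure 10: shift what has been built so far, then add the
                # unshifted block, which swaps the pattern of ones and zeros.
                result = poly_add(poly_mul(result, shift), block(h))
            else:
                result = poly_mul(result, poly_add([1], shift))  # (1 + x^(2^h))
        return result

    print(clause_poly({0}, {1}, 2))  # [1, 1, 0, 1, 1, 1, 0, 1] = f(x0 v NOT x1)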


[Figure 12: Two Clause Satisfaction. Two assignment trees over $x_1 x_0$ give $f(x_0) = 0x^0 + 1x^1 + 0x^2 + 1x^3$ and $f(x_1) = 0x^0 + 0x^1 + 1x^2 + 1x^3$. Comparing the two coefficient sequences term by term, each assignment satisfies 0 clauses (coefficients 0 and 0), 1 clause (0 and 1, or 1 and 0), or 2 clauses (1 and 1).]

3 Two Clause Problems

Now that sufficient background has been presented, the major ideas behind the algorithm can be introduced. Two clause problems are some of the simplest cases, yet they allow the fundamental ideas to be introduced and studied.

Figure 12 shows the four basic possible combinations between two clauses. If a clause polynomial is considered with any number of terms (powers of $x$), there are only two possible values (known in mathematics as coefficients) that can be associated with each term: zero or one. If two clauses are considered, there are two possible values for the coefficient of the first polynomial, and two possible values for the coefficient of the second polynomial, for a total of four different combinations.

The figure shows two clause polynomials and the trees associated with them. In the middle, the four possible combinations can be seen. Remembering that the coefficients represent satisfaction for a particular truth assignment, it's possible to note the total satisfaction for two clauses considered simultaneously. When both clauses have a zero coefficient, that indicates that neither clause will satisfy the original question of satisfaction for that particular truth assignment. Looking through the clause trees, it's seen that this assignment corresponds to $(x_0 = \text{false}, x_1 = \text{false})$. So this truth assignment won't satisfy any of the clauses. On the other hand, the two truth assignments in the middle of the figure each satisfy one clause. The truth assignment $(x_0 = \text{true}, x_1 = \text{true})$ is seen to satisfy both clauses, and so it is a solution to the original problem. Again, the reason it satisfies both clauses is that it has a one for both coefficients, thus signifying that both clauses are satisfied.

One important thing to note here is that between two clauses, there are really only three possible satisfaction results: either zero, one, or both of the clauses are satisfied (for any particular truth assignment). This basic concept, although perhaps trivial, will be important, and it will be extended to see that there are really only $n+1$ possible types of satisfaction between $n$ clauses.

The fundamental idea here is that the problems can be simplified so that large problems can really be dealt with by simply considering the types of satisfaction, which are fairly simple. For two clause problems, it will really only be necessary to deal with the three types of satisfaction, and interactions between the different types of satisfaction.

It will be shown how operations of multiplication and addition can be used to work with the three different types of satisfaction, and to solve questions about them. This will be essentially simpler and in some ways necessary to overcome the complications of working with all of the possibilities between two or more clauses.


$f(x_0) = 0x^0 + 1x^1 + 0x^2 + 1x^3$ (given through its assignment tree)

$f(1) = (1 + x^{2^0})(1 + x^{2^1}) = (1 + x)(1 + x^2) = 1x^0 + 1x^1 + 1x^2 + 1x^3$

$h(x_0) = (a \cdot f(x_0)) + (f(1) - f(x_0)) = (0x^0 + ax^1 + 0x^2 + ax^3) + (1x^0 + 0x^1 + 1x^2 + 0x^3) = 1x^0 + ax^1 + 1x^2 + ax^3$

Figure 13: Example Pre-Multiplication Calculation

3.1 Manipulating Clause Polynomials

In order to simplify clause calculations into the equivalence classes of satisfaction, it is necessary to modify the original clause polynomials.

The first modification used is a simple one. The algorithm will need to change all of the coefficients that are equal to one into coefficients that are equal to $a$. This is easy; it is just multiplication by $a$. Here's an example:

$$f(x) = 0x^0 + 1x^1 + 0x^2 + 1x^3$$
$$a \cdot f(x) = 0x^0 + ax^1 + 0x^2 + ax^3$$

The next modification is a bit tougher; it is exchanging the one and zero coefficients. This can be done by subtracting the original function from a function of ones. This inverts the bits, since the ones are subtracted from ones to become zeros, and the zeros are subtracted from ones to become ones. However, the function of all ones is needed. This function is, for a system of $V$ variables:

$$f(1) = \prod_{k=0}^{V-1} \left(1 + x^{2^k}\right) \qquad (3.1)$$

Note that:

$$f(1) = 1x^0 + 1x^1 + 1x^2 + \cdots + 1x^{2^V - 1}$$

This completes the requirements for multiplication. The algorithm uses this knowledge to change the one coefficients into $a$ coefficients and the zero coefficients into one coefficients. Then multiplication can be performed, as will be seen.

Figure 13 exhibits the steps taken before a function is ready for multiplication. $f(x_0)$ is given through a tree, before multiplication. The ones must be transformed into $a$s, and the zeros must be transformed into ones. Note the final result, $h(x_0)$. It is the finished calculation, ready for multiplication.

To do this, the function of ones for a two-variable system must be calculated. Then the original function can be multiplied by $a$ to transform the one coefficients, and it is also subtracted from the function of ones (separately) to transform the zero coefficients. These are added together for the final result.

Note that this procedure works for any clause in any system with any number of variables. It simply prepares the clause polynomials for special processing, or interactions, with other clause polynomials. This may not seem like much, but it greatly simplifies the system.
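As a sketch of this preparation step (my own illustration, again using dense coefficient lists), for a system of $V$ variables:

    def ones_poly(V):
        # The function of ones: coefficient 1 for every power 0 .. 2^V - 1
        # (it equals prod_{k=0}^{V-1} (1 + x^(2^k)), Equation 3.1).
        return [1] * (2 ** V)

    def pre_multiply(f, a, V):
        # h = a*f + (f(1) - f): ones become a, zeros become 1.
        ones = ones_poly(V)
        f = f + [0] * (len(ones) - len(f))  # pad f up to full length
        return [a * c + (o - c) for c, o in zip(f, ones)]

    f_x0 = [0, 1, 0, 1]                  # f(x0) in a two-variable system
    print(pre_multiply(f_x0, a=5, V=2))  # [1, 5, 1, 5], as in Figure 13

Here 5 is just a stand-in value for the symbol $a$.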


$x_0$ in system of $x_0, x_1, x_2$:

$$(x^{2^0})^2\left(1 + (x^{2^1})^2\right)\left(1 + (x^{2^2})^2\right) = x^2 + x^6 + x^{10} + x^{14} = 0x^0 + 1x^2 + 0x^4 + 1x^6 + 0x^8 + 1x^{10} + 0x^{12} + 1x^{14}$$

$(x_0 \vee x_1 \vee x_2)$ from $x_0, x_1, x_2$:

$g(x_0) = (x^{2^0})^2 = 1x^2$;  $g(x_1) = \left(1 + (x^{2^0})^2\right)(x^{2^1})^2 = 1x^4 + 1x^6$
$g(x_2) = \left(1 + (x^{2^0})^2\right)\left(1 + (x^{2^1})^2\right)(x^{2^2})^2 = 1x^8 + 1x^{10} + 1x^{12} + 1x^{14}$

$f(x_0 \vee x_1 \vee x_2) = (x^2) + (x^4 + x^6) + (x^8 + x^{10} + x^{12} + x^{14})$
$f(x_0 \vee x_1 \vee x_2) = 0x^0 + 1x^2 + 1x^4 + 1x^6 + 1x^8 + 1x^{10} + 1x^{12} + 1x^{14}$

Figure 14: Example Pre-Addition Calculation

Addition requires modifications too. However, these modifications are of a different variety than those of multiplication. Essentially, the power of $x$ needs to be doubled.

Doubling the power of $x$ can actually be fairly simple. Figure 14 displays a retake of the creation of clause polynomials. The first part of the example is really just Figure 5, modified for doubling the power of $x$. Note that each power of $x$, once it is essentially calculated, is doubled. The result is that all of the powers of $x$ are multiplied by two in the finished clause polynomial.

The second part of the figure is also a redo, this time of a multiple variable clause taken from Figure 8. Again, the main idea is that the powers of the variable $x$ are doubled.

The actual operation of addition is also performed on one special value, which comes in part from the ideas of multiplication. A constant is added to each coefficient of the polynomials. This constant is the same value for all coefficients, so the function of ones once again becomes useful. Unfortunately, in its originally derived form, the powers of $x$ are not the same as the powers of $x$ used in addition. So once again, the same principles are used to double the powers of $x$ for the function of ones. It is really as simple to do this as altering every power of $x$ in the original equation (the function of ones, Appendix B). The new formula for the modified function of ones is as follows:

$$f(1_{\text{modified}}) = \prod_{k=0}^{V-1} \left(1 + (x^{2^k})^2\right) \qquad (3.2)$$

These operations will be very useful in the material ahead.
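A sketch of the doubling step (my own illustration): doubling every power of $x$ simply spreads the coefficient list out, interleaving zeros.

    def double_powers(f):
        # Replace every term c*x^k by c*x^(2k).
        r = [0] * (2 * len(f) - 1)
        r[::2] = f  # original coefficients land on the even powers
        return r

    # f(x0 v x1 v x2) from Figure 8, then doubled as in Figure 14:
    print(double_powers([0, 1, 1, 1, 1, 1, 1, 1]))
    # [0, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1] = 1x^2 + 1x^4 + ... + 1x^14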


The assignment trees give $f(x_0) = 0x^0 + 1x^1 + 0x^2 + 1x^3$ and $f(x_1) = 0x^0 + 0x^1 + 1x^2 + 1x^3$, which are multiplied piecewise, with the terms of $f(x_0)$ across the top and the terms of $f(x_1)$ down the side:

    ·      0x^0   1x^1   0x^2   1x^3
    0x^0   0x^0   0x^1   0x^2   0x^3
    0x^1   0x^1   0x^2   0x^3   0x^4
    1x^2   0x^2   1x^3   0x^4   1x^5
    1x^3   0x^3   1x^4   0x^5   1x^6

Figure 15: Clause Multiplication

    ∧      False  True        ·   0   1
    False  False  False       0   0   0
    True   False  True        1   0   1

Figure 16: Operation Equivalence

3.2 Multiplication

It could be said that the algorithm is focused around (arithmetic) multiplication of clause polynomials.

Figure 15 details an example. It begins with two clause polynomials, $f(x_0)$ and $f(x_1)$. The associated trees that go with the clause polynomials are shown. In the middle, the clause polynomials are broken into pieces and multiplied piecewise. Every part of each polynomial is multiplied with every part of the other polynomial. This corresponds with plain old arithmetic multiplication of two polynomials. However, an interesting event happens here. The result that lies along the diagonal (shown in yellow) corresponds with the result $f(x_0 \wedge x_1)$. That is, the diagonal represents the corresponding clause polynomial for the result of conjunction. The idea here is that multiplication of clause polynomials can be used to evaluate the original Boolean equation.

Figure 16 helps give a partial explanation of why this occurs. As can be seen from the truth tables, the logical operation of conjunction ($\wedge$) and the arithmetic operation of multiplication ($\cdot$) are equivalent on bit values. So it's not totally unpredictable that multiplication of clause polynomials contains results similar to conjunction. The diagonal in Figure 15 contains like terms multiplied by like terms; only the coefficients (or numbers attached to the terms) may differ. The result (along the diagonal) is known in mathematics as the Hadamard product, and the algorithm is concentrated on separating this diagonal term from the off-diagonal terms.

The reason why the algorithm is so closely associated with the Hadamard product, or the diagonal terms, is that it is essentially the solution to the original problem. Since it tells which terms are satisfying, the algorithm can simply count the number of satisfying terms along the diagonal. If there are any satisfying terms, then the original problem can be satisfied. Otherwise, it can't be satisfied.

So at this point in the chain of ideas, the original Boolean problem has been transformed from a question about Boolean arithmetic to a question concerning how to get information about the diagonal (or Hadamard product).
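To see the diagonal claim concretely (my own sketch, not the paper's code): the full product of two clause polynomials mixes diagonal pairs ($x^i \cdot x^i = x^{2i}$) with off-diagonal pairs, while the Hadamard (entrywise) product keeps only the diagonal and matches conjunction.

    f = [0, 1, 0, 1]  # f(x0)
    g = [0, 0, 1, 1]  # f(x1)

    # Full polynomial product: every term of f times every term of g.
    prod = [0] * (len(f) + len(g) - 1)
    for i, a in enumerate(f):
        for j, b in enumerate(g):
            prod[i + j] += a * b

    # Hadamard product: only the diagonal pairs i == j (they land on x^(2i)).
    hadamard = [a * b for a, b in zip(f, g)]

    print(hadamard)  # [0, 0, 0, 1] = f(x0 AND x1): only assignment 11 satisfies both
    print(prod)      # [0, 0, 0, 1, 1, 1, 1]: diagonal and off-diagonal terms mixed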


[Figure 17: Multiplication and Satisfaction. On the left, the four products of original coefficients, $(0 \cdot x)$ or $(1 \cdot x)$ times $(0 \cdot x)$ or $(1 \cdot x)$, labeled by how many of the two clauses are satisfied: 0 clauses, 1 clause, 1 clause, 2 clauses. On the right, the corresponding products of modified coefficients: $(1 \cdot x)(1 \cdot x) = 1x^2$, $(a \cdot x)(1 \cdot x) = ax^2$, $(1 \cdot x)(a \cdot x) = ax^2$, $(a \cdot x)(a \cdot x) = a^2x^2$, again labeled 0, 1, 1, and 2 clauses satisfied.]

3.3 Multiplication and Satisfaction

Figure 17 shows another viewpoint of arithmetic multiplication. This time the four different cases based on the coefficients are shown. In other words, between any two coefficients being multiplied, the input coefficients must be one of four cases. Also, on the left, the satisfaction between the two original coefficients is shown. Then, on the right, the corresponding modified coefficients and their results are shown. Note that there are still really three cases of satisfaction, resulting in $1x^2$, $ax^2$, and $a^2x^2$.

It's seen that modifying the coefficients for multiplication has allowed for the three cases to appear in the results of multiplication. This is one of the keys to getting things to work correctly. The algorithm is really interested in the case when both clauses are satisfied.

Unfortunately, the results for multiplication mix the diagonal and off-diagonal cases together. In order to isolate the diagonal, there will be some tricky interplay between multiplication and addition ahead.

One thing that can be noted is that multiplication with the original clause polynomials isolates the case where everything is maximally satisfied (both clauses are satisfied). This can be seen in Figure 18. Note that only the case on the far right (where both clauses have nonzero coefficients) returns anything other than zero.

[Figure 18: Unmodified Multiplication. Multiplying the original (unmodified) coefficients columnwise: $(0 \cdot x)(0 \cdot x) = 0 \cdot x^2$, $(1 \cdot x)(0 \cdot x) = 0 \cdot x^2$, $(0 \cdot x)(1 \cdot x) = 0 \cdot x^2$, $(1 \cdot x)(1 \cdot x) = 1 \cdot x^2$. Only the case where both coefficients are nonzero survives.]

Now this will occur for both diagonal and off-diagonal values, but it's very close to a solution. The algorithm wants the diagonal portion of this result, and seeks to somehow eliminate the off-diagonal portion of it.

The next subsection will show how the algorithm can begin to separate diagonal terms from off-diagonal terms. The algorithm's efforts will be concentrated on separating diagonal values from off-diagonal values, since the result will lead to a solution.


[Figure 19: Combining Operations For Elimination. The two multiplication results $c_0(1x^2,\ a_0x^2,\ a_0x^2,\ a_0^2x^2)$ and $c_1(1x^2,\ a_1x^2,\ a_1x^2,\ a_1^2x^2)$ are combined with the addition result $(0x^2,\ 1x^2,\ 1x^2,\ 2x^2)$ plus the constant $(dx^2,\ dx^2,\ dx^2,\ dx^2)$. Grouped by satisfaction class these become $(c_0x^2,\ c_0a_0x^2,\ c_0a_0^2x^2)$, $(c_1x^2,\ c_1a_1x^2,\ c_1a_1^2x^2)$, and $(dx^2,\ (d+1)x^2,\ (d+2)x^2)$, and the columns are to sum to $(0, 0, 0)$ plus off-diagonal terms.]

3.4 Eliminating The Diagonal

Figure 20 presents the cases for addition alongside the better-explored operation of multiplication. The main thing to note is that addition can return one of three results, just like the equivalence classes of satisfaction. In fact, they are once again related in the same way that multiplication is related to satisfaction.

Figure 21 shows what is proposed for the algorithm. It will take two multiplication operations (three are shown, but this is more than needed), and combine them with one addition, to eliminate everything except for the off-diagonal values, which are shown as gray triangles.

Everything is really summarized as arithmetic in Figure 19. Here, the original results are shown on the left, being modified so that they can be combined together. It can be noted that the middle two cases have always had equal results, so they have been combined. The right side shows only three cases in parentheses, which are the satisfaction equivalences. The algorithm seeks to eliminate these, thus the sums at the bottom are zero, except for the off-diagonal. Here the algorithm comes up with three equations that must be satisfied:

$$c_0 + c_1 - d = 0 \qquad (3.3)$$
$$c_0 a_0 + c_1 a_1 - (d + 1) = 0 \qquad (3.4)$$
$$c_0 a_0^2 + c_1 a_1^2 - (d + 2) = 0 \qquad (3.5)$$

Again, these come from the three cases on the right side of Figure 19, summing the columns.

To summarize all of this again, what is essentially happening is that two multiplications, together with an addition, cancel out all coefficients along the diagonal. However, multiplication creates off-diagonal values, and these values will remain. So in effect, the algorithm isolates off-diagonal values. It does so by using the values calculated from addition to cancel out the diagonal from multiplication.

So now the algorithm can concentrate fully on eliminating the terms along the diagonal to isolate the off-diagonal terms. The following subsection will explore how to get these terms to cancel.

First, it's time to introduce modular arithmetic. Subsection C.11 in the references notes some sources that go over modular arithmetic. To simplify the ideas, essentially all calculations are performed as usual, except that an additional operation is performed afterwards. After the calculations are done, to get the result modulo a prime $p$, the algorithm does the equivalent of taking the remainder after dividing the result by $p$. Thus, all calculations are represented by an integer greater than or equal to zero and less than $p$, so all calculations have a very limited range of values that can result.

This introduces a notion of equivalence, where two values are equivalent if they are the same modulo $p$. This allows the algorithm to find solutions more easily, since the normal restriction that calculations must be equal is relaxed so that calculations must only be equivalent.

Equivalence is really introduced so that the equations that must be satisfied can be solved. The relaxed restrictions allow for solutions that can be found easily.


[Figure 20: Cases of Multiplication and Addition. Multiplication of modified coefficients: $(1 \cdot x)(1 \cdot x) = 1 \cdot x^2$, $(a \cdot x)(1 \cdot x) = a \cdot x^2$, $(1 \cdot x)(a \cdot x) = a \cdot x^2$, $(a \cdot x)(a \cdot x) = a^2 \cdot x^2$. Addition of the corresponding pre-addition coefficients: $0 \cdot x^2 + 0 \cdot x^2 = 0x^2$, $1 \cdot x^2 + 0 \cdot x^2 = 1x^2$, $0 \cdot x^2 + 1 \cdot x^2 = 1x^2$, $1 \cdot x^2 + 1 \cdot x^2 = 2x^2$.]

[Figure 21: Eliminating The Diagonal. Two multiplication results are added together, and the addition result is subtracted; everything cancels except the off-diagonal values, shown as gray triangles.]


[Figure 22: Multiplication Results Modulo 7. With $a_0 = 2$, the three products $c_0 \cdot 1$, $c_0 \cdot a_0$, $c_0 \cdot a_0^2$ are tabulated mod 7 for each $c_0$: $c_0 = 1$ gives $(1, 2, 4)$; $c_0 = 2$ gives $(2, 4, 1)$; $c_0 = 3$ gives $(3, 6, 5)$; $c_0 = 4$ gives $(4, 1, 2)$; $c_0 = 5$ gives $(5, 3, 6)$; $c_0 = 6$ gives $(6, 5, 3)$; $c_0 = 0$ gives $(0, 0, 0)$.]

To find solutions that eliminate the diagonal, a finite field is introduced, allowing operations to be conducted modulo a prime $p$. Figure 22 displays calculations inside a field modulo 7. Here the algorithm selects $a_0 = 2$, although it could really select any value other than one or zero. It then calculates the results of multiplication (times a constant $c_0$), which for the various equivalence classes are $c_0 \cdot 1$, $c_0 \cdot a_0$, and $c_0 \cdot a_0^2$.

The algorithm is really interested in the second-order difference. This is illustrated in Figure 23. That is, it takes the difference between results, and then takes the difference of these differences.

It's fairly straightforward to get the initial results; using arithmetic multiplication works fine. Then the results have to be reduced. As an example, in Figure 23, $a_1$ is set to three. Then, to calculate $a_1^2$, take $3^2 = 9$. Then $9/7 = 1$ with remainder 2, so the result is the remainder, which is 2.

That gets the initial results, in the field. Then to get the first-order differences, take successive results and subtract the first from its successor. This is illustrated very clearly in the figure, which does a better job of explaining. Then take the difference again, which is called the second-order difference.
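A small sketch of this computation (my own illustration), reproducing the Figure 22 and Figure 23 numbers:

    p = 7

    def second_order_difference(c, a, p):
        # The values c*1, c*a, c*a^2 (mod p), their first differences,
        # and then the difference of those differences.
        v = [c % p, c * a % p, c * a * a % p]
        first = [(v[1] - v[0]) % p, (v[2] - v[1]) % p]
        return (first[1] - first[0]) % p

    print(second_order_difference(1, 3, p))  # 4, as in Figure 23 (c1 = 1, a1 = 3)
    print(second_order_difference(3, 2, p))  # 3, and 3 + 4 = 7 = 0 mod 7: a match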

The reason that it's so important to get these second-order differences is that they help match up multiplication results. Note that both figures actually show two separate multiplications; Figure 22 uses $a_0$ and $c_0$ while Figure 23 uses $a_1$ and $c_1$. They both use the same prime, so they are two separate multiplication results that can be matched up.

$1 \equiv 1 \bmod 7$, $a_1 \equiv 3 \bmod 7$, $a_1^2 \equiv 2 \bmod 7$

First differences: $3 - 1 \equiv 2$ and $2 - 3 \equiv 6$; second-order difference: $6 - 2 \equiv 4$

Figure 23: Differences of Multiplication Results

The goal is to find second-order differences that add up to zero mod $p$. Note that the example with $a_1$ has a second-order difference of four. Since $3 + 4 = 7 \equiv 0 \bmod 7$, the idea is to find a second-order difference of three in the other equation. It is done with $c_0 = 3$. So now two equations have been found that match up.

To check this result, the equations can be combined:

$$c_0 \cdot 1 + c_1 \cdot 1 \equiv 3 \cdot 1 + 1 \cdot 1 \equiv 4 \equiv d + 0e$$
$$c_0 \cdot a_0 + c_1 \cdot a_1 \equiv 3 \cdot 2 + 1 \cdot 3 \equiv 2 \equiv d + 1e$$
$$c_0 \cdot a_0^2 + c_1 \cdot a_1^2 \equiv 3 \cdot 4 + 1 \cdot 2 \equiv 0 \equiv d + 2e$$

The main thing to note is that the sequence of results in the equations $(4, 2, 0)$ can be recreated by an addition operation (on clause polynomials). That is, the first equation $= 4 + 0(-2)$, the second $= 4 + 1(-2)$, and the third $= 4 + 2(-2)$.
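These checks are easy to reproduce (my own sketch):

    p = 7
    c0, a0 = 3, 2
    c1, a1 = 1, 3

    rows = [(c0 * 1 + c1 * 1) % p,
            (c0 * a0 + c1 * a1) % p,
            (c0 * a0 ** 2 + c1 * a1 ** 2) % p]
    print(rows)  # [4, 2, 0]: the arithmetic progression 4 + k*(-2), i.e. d = 4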


Diagonal:

$(c_0 \cdot 1) + (c_1 \cdot 1) - d = 0$; with the example values, $(3 \cdot 1) + (1 \cdot 1) - 4 = 0$
$(c_0 \cdot a_0) + (c_1 \cdot a_1) - (d - e) = 0$; $(3 \cdot 2) + (1 \cdot 3) - (4 - 2) \equiv 0$
$(c_0 \cdot a_0^2) + (c_1 \cdot a_1^2) - (d - 2e) = 0$; $(3 \cdot 2^2) + (1 \cdot 3^2) - (4 - 2 \cdot 2) \equiv 0$

Off-Diagonal:

$(c_0 \cdot 1) + (c_1 \cdot 1) = (3 \cdot 1) + (1 \cdot 1) \equiv 4$
$(c_0 \cdot a_0) + (c_1 \cdot a_1) = (3 \cdot 2) + (1 \cdot 3) \equiv 2$
$(c_0 \cdot a_0^2) + (c_1 \cdot a_1^2) = (3 \cdot 2^2) + (1 \cdot 3^2) \equiv 0$

giving the combination $4b_0 + 2b_1 + 0b_2$.

Figure 24: Two Clause Results

Figure 24 shows the results of combining equations (addition and multiplication). Here the results of clause operations (multiplication and addition) are combined, and are shown according to whether or not they lie along the diagonal. As mentioned previously, the addition operation cancels out the multiplication along the diagonal. However, the results off the diagonal are not cancelled.

Thus, the results off the diagonal become multipliers for unknown quantities. This is because the polynomials that underlie the equations are also part of this mix, and the resulting polynomial quantities are unknown, even though the multipliers are known. Thus these quantities are represented as $b_0$, $b_1$, and $b_2$.

This allows for a new result. Remember that the original quantities are combined together in the previous equations that were used. Thus, this new result represents a sum of the new quantities, along with their respective multipliers. Therefore, the algorithm arrives at a new result which is a single equation in three unknowns.

Again, time for some ideas. Back in Section 3.3, Figure 18 is discussed. Especially important is the fact that the maximally satisfied clauses, both along the diagonal and off the diagonal, can be isolated. In this section it has been shown that equations for off-diagonal values can be obtained. Thinking about off-diagonal values, it is possible to create other equations using multiplication and addition - and these equations can help determine the off-diagonal values by combining them with the first equation and then using linear algebra to determine the values.

So essentially, the algorithm will use multiplication and addition of modified clause polynomials to create equations in three unknowns. These unknowns are the off-diagonal values. Then, the equations created can be combined via linear algebra to determine the off-diagonal values. As mentioned previously, the algorithm can already isolate the maximally satisfied clause values together. Unfortunately, the off-diagonal and diagonal values are combined together. However, since the off-diagonal portion can be determined via linear algebra, the diagonal portion can be isolated from the off-diagonal portion using simple algebra and the results from linear algebra (which give the off-diagonal portion).
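The paper does not spell out this linear-algebra step; as a sketch under my own assumptions, once three independent equations in the unknowns $b_0, b_1, b_2$ have been collected, they can be solved by Gaussian elimination over the field:

    p = 7

    def solve_mod_p(A, b, p):
        # Gauss-Jordan elimination over GF(p); assumes A is invertible mod p.
        n = len(A)
        M = [row[:] + [rhs] for row, rhs in zip(A, b)]
        for col in range(n):
            pivot = next(r for r in range(col, n) if M[r][col] % p != 0)
            M[col], M[pivot] = M[pivot], M[col]
            inv = pow(M[col][col], p - 2, p)  # inverse via Fermat's little theorem
            M[col] = [v * inv % p for v in M[col]]
            for r in range(n):
                if r != col and M[r][col] % p != 0:
                    M[r] = [(v - M[r][col] * w) % p for v, w in zip(M[r], M[col])]
        return [M[r][n] for r in range(n)]

    # A hypothetical 3x3 system: each row holds the multipliers of b0, b1, b2
    # from one combined equation, with an assumed right-hand side.
    A = [[4, 2, 0], [1, 3, 5], [2, 0, 6]]
    b = [5, 1, 4]
    print(solve_mod_p(A, b, p))  # the off-diagonal values (b0, b1, b2) mod 7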

The diagonal value of maximally satisfied clauses is thus determined, and this corresponds to the number of satisfied solutions. Now if this number is nonzero, the whole problem can be satisfied. Otherwise, the original problem can't be satisfied.


Thank You!

4 Acknowledgements

I would like to thank God and my family for helping to provide me with this wonderful opportunity. I would also like to thank Javier Humberto Ospina Holguin for showing me an interesting new world and encouraging me to get involved. I would like to also thank the creator of Bricks, Andreas Rottler, for providing a great game that helped get me excited about problems like this, as well as a way to meet interesting people such as Javier.

I would also like to thank the people who helped to save my life when I was in danger: the Mexican/American family in Putla, Holy Spirit Hospital (which also helped me with my schizophrenia), and some people from Harrisburg, Pennsylvania.

I'd like to thank Timothy Wahls of Dickinson College (formerly of Penn State Harrisburg) for first introducing me to the problem and getting me excited.

I'd also like to thank my friend Holly Dudash for being with me during tough times and helping me through.

I'd like to thank Michael Sipser of MIT and my friend Ruben Spaans for analyzing my ideas and for their invaluable suggestions.

I'd like to thank the many great organizations, communities, and schools that helped me learn and fostered my intellectual growth, including SOS Mathematics (online), Stack Exchange - MathOverflow.com, stackoverflow.com, and cstheory.stackexchange.com; particularly Ryan O'Donnell, Arturo Magidin, Carl Brannen, and Qiaochu Yuan. Also Penn State University, and in particular Penn State Harrisburg, especially Drs. Null, Bui, Walker, and Wagner. Also, from various other universities, Jacques Carrette, George Frederick Viamontes, Will Jagy, and Leonid Levin.

I'd like to thank everyone who has worked on solving the problem(s) of schizophrenia, and I hope that this paper (although perhaps indirectly) will help with more research.

There are many more people whom I regrettably haven't mentioned, but I hold them in high esteem and send out my thanks to them. I'm just very thankful that I have a chance to be a part of a project that will hopefully do lots of good things.


A Clause Polynomial Technicalities

B Function of Ones


C References

C.1 NP

[1] http://en.wikipedia.org/wiki/Cook%E2%80%93Levin_theorem

[2] Cook, S.A. The complexity of theorem proving procedures. Proceedings, Third Annual ACM Symposium on the Theory of Computing, ACM, New York, pp. 151-158, 1971. doi:10.1145/800157.805047.

[3] http://en.wikipedia.org/wiki/Nondeterministic_Turing_machine

[4] http://en.wikipedia.org/wiki/Complexity_class

[5] http://en.wikipedia.org/wiki/Polynomial_reducibility

C.2 P vs. NP

[6] M. Sipser. The History and Status of the P versus NP Question. Proceedings of the 24th Annual ACM Symposium on Theory of Computing, 1992, invited paper. http://www.eecs.berkeley.edu/~luca/cs172/sipser92history.pdf

[7] Keith J. Devlin. The Millennium Problems: The Seven Greatest Unsolved Mathematical Puzzles of Our Time. 2002.

[8] http://en.wikipedia.org/wiki/NP-hard

[9] "The Complexity of 3SAT_N and the P versus NP Problem". http://arxiv.org/abs/1101.2018

C.3 Asymptotic Analysis

[10] http://en.wikipedia.org/wiki/Asymptotic_analysis

C.4 SAT

[11] http://en.wikipedia.org/wiki/Cook-Levin_theorem

[12] http://en.wikipedia.org/wiki/Boolean_satisfiability_problem

[13] Takayuki Yato and Takahiro Seta. Complexity and completeness of finding another solution and its application to puzzles. IEICE Transactions on Fundamentals of Electronics, Communications and Computer Sciences, E86-A(5):1052-1060, May 2003.

[14] A. Biere et al. (eds.). Handbook of Satisfiability. IOS Press, Volume 185 of Frontiers in Artificial Intelligence and Applications, February 15, 2009.

[15] Knot Pipatsrisawat and Adnan Darwiche. "On Modern Clause-Learning Satisfiability Solvers". Journal of Automated Reasoning 44:277-301, 2010. http://reasoning.cs.ucla.edu/fetch.php?id=108&type=pdf

[16] Kazuo Iwama, Kazuhisa Seto, Tadashi Takai and Suguru Tamaki. "Improved Randomized Algorithms for 3-SAT". ISAAC 2010.

[17] Timon Hertli, Robin A. Moser, Dominik Scheder. "Improving PPSZ for 3-SAT using Critical Variables". http://arxiv.org/abs/1009.4830

[18] Konstantin Kutzkov and Dominik Scheder. "Using CSP To Improve Deterministic 3-SAT". http://arxiv.org/abs/1007.1166

[19] V.F. Romanov. "Non-Orthodox Combinatorial Models Based on Discordant Structures". http://arxiv.org/abs/1011.3944

C.5 Relativization

[20] Theodore P. Baker, John Gill, Robert Solovay. Relativizations of the P =? NP Question. SIAM Journal on Computing, Volume 4, Number 4, pp. 431-442, 1975.

C.6 Oracles

[21] http://en.wikipedia.org/wiki/Turing_oracle

C.7 State Spaces

[22] http://en.wikipedia.org/wiki/State_space

C.8 Boolean Variables

[23] http://en.wikipedia.org/wiki/Boolean_algebra_(logic)

[24] Douglas Smith, Maurice Eggen, Richard St. Andre. A Transition To Advanced Mathematics. Brooks/Cole Publishing Company, 4th Edition, 1997.


C.9 Calculus

[25] George B. Thomas, Jr., Ross L. Finney. Calculus and Analytic Geometry. Addison-Wesley Publishing Company, 7th Edition, 1988.

C.9.1 Products

[26] http://en.wikipedia.org/wiki/Indefinite_product

C.9.2 Generating Functions

[27] Herbert S. Wilf. generatingfunctionology. Academic Press, Inc., Internet Edition, 1990, 1994.

[28] S. K. Lando. Lectures on Generating Functions. American Mathematical Society, 2003.

[29] Walter P. Kelley, Allan C. Peterson. Difference Equations: An Introduction with Applications. Academic Press, Second Edition, 2001, 1991.

C.10 Imaginary Numbers

[30] http://en.wikipedia.org/wiki/Imaginary_unit

C.11 Modular Arithmetic

[31] Michael Artin. Algebra. PHI Learning Private Limited, 2009.

[32] http://en.wikipedia.org/wiki/Modular_arithmetic

[33] http://en.wikipedia.org/wiki/Chinese_remainder_theorem