
Machine Theorem Discovery

Fangzhen Lin

This article describes a framework for machine theorem discovery and illustrates its use in discovering state invariants in planning domains and properties of Nash equilibria in game theory. It also discusses its potential use in program verification in software engineering. The main message of the article is that many AI problems can and should be formulated as machine theorem discovery tasks.

AI Magazine, Summer 2018, 53. Copyright © 2018, Association for the Advancement of Artificial Intelligence. All rights reserved. ISSN 0738-4602.

The idea of using a computer program to help discover theorems is an old one and will not surprise anyone nowadays. In the late 1950s, logician Hao Wang at IBM wrote a program called Program II (Wang 1960) to discover deep theorems in propositional logic. For Wang, a theorem is deep if it is short but requires a long proof. Understandably, Wang's program did not produce any really deep theorems, perhaps for the simple reason that there are no tautologies that are short and require a long proof.

A better-known example in AI is the Automated Mathematician (AM) that Douglas Lenat developed in 1976 as part of his PhD dissertation (Lenat 1979). While not strictly about discovering theorems, AM uses heuristic search to simulate how mathematicians discover interesting concepts and conjectures in number theory. It caused some excitement in the community as Lenat claimed that one of the Lisp functions it generated represented the fundamental theorem of arithmetic.

Page 2: Machine Theorem Discovery

A more impressive system is Siemion Fajtlowicz's Graffiti (Fajtlowicz 1988), which generates conjectures in graph theory. More recently, Craig Larson extended Fajtlowicz's Graffiti into a more general framework for mathematical conjecture making in discrete math (Larson and Cleemput 2016; 2017). For an early survey of these systems, see Larson (2005).

However, these examples all focus on a general theory like propositional logic, number theory, or graph theory. For AI applications, more often we want to prove theorems in a very specific theory for some very specific purpose. For instance, we could be given a planning domain such as the logistics domain, and we want to know if there are special properties of the domain that can help prune the search space for the planner at hand. Finding these properties is the same as discovering theorems, once the domain is axiomatized by a formal theory, for example, in the situation calculus. As another example, in software verification, we may want to prove that a certain postcondition holds when the input satisfies a certain precondition. This relationship can be naturally formulated as a theorem-proving problem and is, in general, intractable. It helps, then, to discover some useful properties about the program in question to make the theorem-proving task easier. In fact, much of the formal work on verifying sequential programs can be said to be about discovering suitable loop invariants, as pioneered in Hoare's logic (Hoare 1969).

In the past, my colleagues and I have developed programs for discovering state invariants in planning (Lin 2004), for discovering theorems that capture strong equivalence in logic programming (Lin and Chen 2007), for identifying conditions that capture the uniqueness of pure Nash equilibria in games in normal form (Tang and Lin 2011a), and for checking the consistency of social choice axioms by translating them to SAT (Tang and Lin 2009). Our work has inspired others to apply similar approaches to other problems (for example, Geist and Endriss 2011; Brandl et al. 2015; Brandt and Geist 2016; Brandl, Brandt, and Geist 2016). For instance, instead of SAT, Brandl, Brandt, and Geist (2016) used SMT in their work on showing the incompatibility of certain notions of efficiency and strategy-proofness.

In this article, I will outline a general approach to machine theorem discovery. I will review some of our earlier work and recast it in a general framework. As will become clear, the key is to formulate hypotheses in a formal logical language.

Theorem Discovery

In a nutshell, theorem discovery can be viewed as an iterative two-step process:

while (true) do
  1. Find a reasonable conjecture;
  2. Verify the conjecture.


Figure 1. An Algorithm for Machine Theorem Discovery.

Input:

1. Language description: sorts, predicates, constants, functions.

2. A collection of sets of objects to construct models of the language.

3. A model confirmation relation between models M and sentences φ: M confirms φ if M satisfies the required property about φ, provided that M is a model of the domain theory.

4. (Optional) A theorem prover for checking if a sentence is a theorem in the given theory.

5. A specification of the space of hypotheses: by default, all atomic sentences, binary clauses, type information (unary predicates), and functional dependencies will be in the space. The user can also specify more.

Output: Weakest conjectures.
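To make the loop concrete, here is a minimal Python sketch of the generate-and-test procedure behind figure 1. It is not the article's implementation: the representation of hypotheses as strings and the `holds` relation are illustrative assumptions.

```python
def discover(hypotheses, models, holds, entails=None):
    """Sketch of the figure 1 generate-and-test loop.

    hypotheses: candidate sentences; models: constructed interpretations;
    holds(m, h): the model confirmation relation (True if m confirms h)."""
    # Step 1: a conjecture is a hypothesis confirmed by every constructed model.
    conjectures = [h for h in hypotheses if all(holds(m, h) for m in models)]
    # Step 2: if no theorem prover is supplied, approximate entailment by
    # model checking over the constructed models (as noted in the text, this
    # may prune conjectures that are not entailed in general).
    if entails is None:
        entails = lambda a, b: all(holds(m, b) for m in models if holds(m, a))
    # Keep only the weakest conjectures: drop any conjecture that properly
    # entails another surviving conjecture.
    return [h for h in conjectures
            if not any(entails(h, g) and not entails(g, h) for g in conjectures)]
```

For example, with hypotheses written as disjunctions of atoms and models as sets of true atoms, `discover(["p", "q", "p|q"], [{"p", "q"}, {"p"}], holds)` rejects "q" (it fails in the second model) and keeps "p" and "p|q".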


Of course, it all comes down to what reasonable conjectures are, how to search for them, and once they've been found, how to verify them as a theorem of the given background theory. A key point of this article is that current knowledge representation (KR) formalisms, particularly those of first-order logic, are good for specifying the space of conjectures. Furthermore, solvers such as those for SAT and CSP go a long way in pruning the search space, identifying good conjectures, and verifying them in the end.

In this article I focus on machine theorem discovery using the algorithm depicted in figure 1. Here, a conjecture is a hypothesis that is confirmed by all models that can be constructed using the domains given in input 2, and a conjecture is a weakest one if there is no other conjecture that is properly entailed by it. If the optional theorem prover in input 4 is given, it is used to check entailment. Otherwise, entailment is decided using all models that can be constructed using the domains given in input 2. If no theorem prover is given, then the weakest conjectures thus generated may be too strong, and the system may miss some possible conjectures. For example, if conjecture A implies another conjecture B in all constructed models, A will be pruned. However, it may well be possible that A does not entail B in general.

I will illustrate this algorithm by showing how two of our earlier systems — one for discovering state invariants in planning, and the other for discovering theorems about pure Nash equilibria — can be cast in this general framework.

Discovering State Invariants in Dynamic Systems

The state of a dynamic system can change according to some natural events or as a result of some actions done by agents. However, there are some properties — for example, that at any given time an object can be at only one location — that will persist no matter what. These properties are called state invariants, and they are useful in planning for pruning search spaces. In fact, several systems have been designed specifically for learning state constraints (Huang, Selman, and Kautz 2000; Gerevini and Schubert 1998).

To make learning state invariants a theorem discovery problem, we need to make precise the notion of a state invariant, as well as to formalize the given planning domain.

A dynamic system consists of states and the transitions between states. A planning domain is like a dynamic system, but typically with states described by a logical language and transitions by actions. In my paper on state invariants (Lin 2004), I start with a domain language to describe states and extend it with actions. Briefly, states are models of a first-order language called a domain language. As usual, propositions in the domain language whose truth values can be changed by actions are called fluents. To formalize the effects of actions, we extend the domain language with a special sort action. For each predicate in the domain language, we introduce a new predicate with the same name but with an extra argument of sort action. Intuitively, if p is a fluent and a an action, then p(a) denotes the truth value of p after a is successfully performed in the current state. I call these new predicates successor state predicates. For instance, if clear is a fluent, then we write clear(x) to mean that block x is clear in the given initial state, and we write clear(x, unstack(x, y)) to mean that x is clear in the successor situation of doing unstack(x, y) in the initial state. I also assume a special unary predicate, Poss, to denote the action precondition. Thus, Poss(unstack(x, y)) will stand for the precondition of doing unstack(x, y).

So formally, an action theory is a family of first-order theories {TA | A is an action type}, where for each action type A, TA consists of the following axioms:

An action precondition axiom of the form

∀x. Poss(A(x)) ≡ Ψ,   (1)

where Ψ is a formula in the domain language whose free variables are among x. (Thus Ψ cannot mention Poss or any successor state predicates.)

For each domain predicate F, an axiom of the form

∀x, y. F(x, A(y)) ≡ ΦF(x, y),   (2)

where x and y do not share common variables, and ΦF is a formula in the domain language whose free variables are from x and y.

Example 1
In the blocks world, for the action type stack, we have the following axioms (all free variables below are universally quantified from outside):

Poss(stack(x, y)) ≡ holding(x) ∧ clear(y),
on(x, y, stack(u, v)) ≡ (x = u ∧ y = v) ∨ on(x, y),
ontable(x, stack(u, v)) ≡ ontable(x),
handempty(stack(u, v)) ≡ true,
holding(x, stack(u, v)) ≡ false,
clear(x, stack(u, v)) ≡ clear(x) ∧ x ≠ v.

The following definition captures the intuition that a state invariant is a formula that, if true initially, will continue to be true after the successful completion of every possible action.

Definition 1
Given an action theory {TA | A is an action type}, a formula W in the domain language is a state invariant if for each action type A,

TA ⊨ ∀y. W ∧ Poss(A(y)) ⊃ W(A(y)),   (3)

where W(A(y)) is the result of replacing each atom F(t) in W by F(t, A(y)), and ⊨ is logical entailment in first-order logic.

Thus, the problem of discovering and learning state invariants becomes a problem of discovering theorems of the form (equation 3). Based on this setup, I described a system (Lin 2004) for discovering various types of state invariants in planning domains such as functional constraints, type constraints, and domain closure constraints. The system performs remarkably well for almost all known benchmark planning domains. For instance, for the popular logistics domain, it returns a complete set of state invariants in the sense that a state is "legal" if it satisfies all the constraints.
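Definition 1 can be checked for a fixed action type by brute-force model checking over small domains. The following Python sketch does this for stack, using the axioms of Example 1; the two-block domain and the dict-based encoding of states are my own illustrative choices, not the system's representation.

```python
from itertools import product

BLOCKS = ["a", "b"]

def successor_stack(s, u, v):
    """Successor state after stack(u, v), per the axioms of Example 1."""
    t = {"handempty": True}                    # handempty(stack(u,v)) == true
    for x in BLOCKS:
        t[("holding", x)] = False              # holding(x, stack(u,v)) == false
        t[("ontable", x)] = s[("ontable", x)]  # ontable unchanged
        t[("clear", x)] = s[("clear", x)] and x != v
        for y in BLOCKS:
            t[("on", x, y)] = (x == u and y == v) or s[("on", x, y)]
    return t

def poss_stack(s, u, v):
    """Poss(stack(u, v)) == holding(u) and clear(v)."""
    return s[("holding", u)] and s[("clear", v)]

def all_states():
    """All truth assignments to the fluents over BLOCKS (legal or not)."""
    atoms = (["handempty"]
             + [(f, x) for f in ("holding", "ontable", "clear") for x in BLOCKS]
             + [("on", x, y) for x in BLOCKS for y in BLOCKS])
    for bits in product([False, True], repeat=len(atoms)):
        yield dict(zip(atoms, bits))

def invariant_under_stack(w):
    """Check equation 3 for action type stack by brute force: in every state
    where w holds and stack(u, v) is possible, w must hold in the successor."""
    return all(not (w(s) and poss_stack(s, u, v)) or w(successor_stack(s, u, v))
               for s in all_states() for u in BLOCKS for v in BLOCKS)
```

For instance, "holding(x) ⊃ ¬handempty" passes this check for stack, while "every block is clear" does not (stacking destroys clear(v)).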

In terms of the general algorithm outlined in the last section, to discover state invariants in a planning domain, the required inputs are as follows:

Language: domain language extended with actions.

Sets of objects: for each sort, sets of up to a small number, say three, objects.

Model confirmation: given an interpretation M for the domain language and a sentence φ in the domain language, M confirms φ if for every action A, when M is extended to be a model of TA, it satisfies ∀y. φ ∧ Poss(A(y)) ⊃ φ(A(y)). In other words, φ is a state invariant in M according to equation 3.

Space of hypotheses:

Functional constraints: sentences of the form

∀x, y1, y2. at(x, y1) ∧ at(x, y2) ⊃ y1 = y2.

Type constraints: sentences of the form

∀x, y. at(x, y) ⊃ [package(x) ∨ vehicle(x)] ∧ location(y).

Domain closure constraints: these constraints say that certain objects must belong to one of the predefined categories; for example, a package needs to be either at a location or inside a vehicle.

A precise definition of these constraints can be found in Lin (2004).

A theorem in Lin (2004) shows that a theorem prover is often not needed, as it can be replaced by model checking. For many action theories and for most of the constraints considered here, once a constraint is confirmed by all models up to a certain finite size, it is guaranteed to be a state invariant. A similar, but simpler, theorem is given in the next section in the context of discovering theorems in game theory.

Two-Person Games

As another example of machine theorem discovery, consider the problem of finding conditions under which a two-person game has a unique pure Nash equilibrium payoff. The key idea again is to formulate the background theory and the hypotheses in first-order logic. This was joint work with Pingzhong Tang (Tang and Lin 2011a).

A two-person game is a tuple (A, B, ≤1, ≤2), where A and B are sets of (pure) strategies of players 1 and 2, respectively, and ≤1 and ≤2 are total orders on A × B, called the preference relations of players 1 and 2, respectively.

For each b ∈ B, the set of best responses by player 1 to the action b by player 2 is defined as follows:

B1(b) = {a | a ∈ A, and for all a′ ∈ A, (a′, b) ≤1 (a, b)}.

Similarly, for each a ∈ A, the set of best responses by player 2 is:

B2(a) = {b | b ∈ B, and for all b′ ∈ B, (a, b′) ≤2 (a, b)}.

A profile (a, b) ∈ A × B is a (pure-strategy) Nash equilibrium if both a ∈ B1(b) and b ∈ B2(a). A game can have one, more than one, or no Nash equilibria. Properties of pure Nash equilibria have been studied extensively. For instance, if a game is strictly competitive (Friedman 1983; Moulin 1976), then it has a unique Nash equilibrium payoff in the sense that if both s and s′ are Nash equilibria of the game, then the payoffs for them are the same for each player, that is, s ≤i s′ and s′ ≤i s for i = 1, 2. This also holds for weakly unilaterally competitive games (Kats and Thisse 1992). It is also known that ordinal potential games (Monderer and Shapley 1996) and supermodular games (Topkis 1998) always have Nash equilibria.

I now show how two-person games and some key notions about them can be formulated in first-order logic.

Consider a first-order language with two sorts, α and β. Sort α is for player 1's actions, and β for player 2's actions. In the following, I use variables x, x1, x2, ... to range over α, and y, y1, y2, ... to range over β. The players' preferences are represented by two infix predicates ≤1 and ≤2 of type (α, β) × (α, β). Given that the preferences are total orders, these two predicates need to satisfy the following axioms (all free variables in a displayed formula are assumed to be universally quantified from outside):

(x, y) ≤i (x, y),   (4)

(x1, y1) ≤i (x2, y2) ∨ (x2, y2) ≤i (x1, y1),   (5)

(x1, y1) ≤i (x2, y2) ∧ (x2, y2) ≤i (x3, y3) ⊃ (x1, y1) ≤i (x3, y3),   (6)

where i = 1, 2. In the following, I denote by Σ the set of these sentences. Thus, two-person games correspond to first-order models of Σ, and two-person finite games correspond to finite first-order models of Σ.

I write (x1, y1) <i (x2, y2) as shorthand for

(x1, y1) ≤i (x2, y2) ∧ ¬(x2, y2) ≤i (x1, y1),

and now the condition for a profile (ξ, ζ) to be a Nash equilibrium is captured by the following formula:

∀x. (x, ζ) ≤1 (ξ, ζ) ∧ ∀y. (ξ, y) ≤2 (ξ, ζ).   (7)


In the following, I shall denote the above formula by NE(ξ, ζ). The following sentence expresses the uniqueness of the Nash equilibrium payoff:

NE(x1, y1) ∧ NE(x2, y2) ⊃ (x1, y1) ≃1 (x2, y2) ∧ (x1, y1) ≃2 (x2, y2),   (8)

where for i = 1, 2, (x1, y1) ≃i (x2, y2) is shorthand for

(x1, y1) ≤i (x2, y2) ∧ (x2, y2) ≤i (x1, y1).

A game is strictly competitive if it satisfies the following property:

(x1, y1) ≤1 (x2, y2) ≡ (x2, y2) ≤2 (x1, y1).   (9)

Thus, it should follow that

Σ ⊨ (9) ⊃ (8).   (10)

Notice that I have assumed that all free variables in a displayed formula are universally quantified from outside. Thus, formula 9 is a sentence of the form ∀x1, x2, y1, y2 φ. Similarly for formula 8.

Thus, to discover conditions Q for a game to have a unique Nash equilibrium payoff is to discover theorems of the form Σ ⊨ Q ⊃ (8). Of course, there are an infinite number of such theorems, so practically speaking, we have to restrict the type of conditions we want a program to search for. A good starting point is to try prenex formulas Q of the following form:

∀x1 ∀y1 ∃x2 ∃y2 Q0,

where x1, y1, x2, y2 are tuples of variables and Q0 does not mention any quantifiers. A theorem from Lin (2007) says that such a Q is a sufficient condition for the uniqueness of Nash equilibria iff it is so for all games of sizes up to (|x1| + 2) × (|y1| + 2), and it is a sufficient condition for the nonexistence of Nash equilibria iff it is so for all games of sizes up to (|x1| + 1) × (|y1| + 1). Effectively, this reduces theorem proving, such as showing Σ ⊨ Q ⊃ (8), to finite model checking.

This also holds for many specialized games, such as the class of strict games: a game is strict if for both players, different profiles have different payoffs, that is, (a, b) = (a′, b′) whenever (a, b) ≤i (a′, b′) and (a′, b′) ≤i (a, b), where i = 1, 2.

Based on this axiomatization of two-person games, we wrote a computer program (Tang and Lin 2011a) that generates many interesting theorems. It rediscovers Kats and Thisse's class of weakly unilaterally competitive games, which corresponds to the following condition:

[(x1, y) <1 (x2, y) ⊃ (x2, y) ≤2 (x1, y)] ∧ [(x, y1) <2 (x, y2) ⊃ (x, y2) ≤1 (x, y1)].

It also returns some very strong conditions for strict two-person games to have a unique Nash equilibrium, and led to some new theorems (Tang and Lin 2011b).

Again, this work can be easily cast into the general algorithm given earlier. The inputs can be described as follows:

Sorts: α and β.

Predicates: ≤1 and ≤2, both of type (α, β) × (α, β).

Collection of domains: just two elements for each sort, so for both sorts, their domains are D = {1, 2}.

Confirmation relation: a model is an interpretation in D × D that satisfies Σ, and a model confirms a hypothesis φ if either the sentence φ is not true in the model or the game corresponding to the model has a unique Nash equilibrium payoff. In other words, M ⊨ φ ⊃ (8).

Theorem prover: there is no need for a separate theorem prover for the hypotheses defined next. In this case, checking all models up to a certain size is sufficient.

Hypotheses: conjunctions of binary clauses.
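These definitions are small enough to model-check directly. The Python sketch below is my own encoding (the article's program works on the first-order axiomatization; here preferences are induced by payoff tables, which is equivalent for strict total orders). It checks theorem 10 over all 2 × 2 games with strict preferences.

```python
from itertools import permutations

A = B = (0, 1)  # two strategies per player, matching the domains D = {1, 2}
PROFILES = [(a, b) for a in A for b in B]

def nash_equilibria(u1, u2):
    """Pure Nash equilibria of the game whose preference orders <=1 and <=2
    are induced by the payoff tables u1 and u2."""
    return [(a, b) for (a, b) in PROFILES
            if all(u1[(a2, b)] <= u1[(a, b)] for a2 in A)    # a is in B1(b)
            and all(u2[(a, b2)] <= u2[(a, b)] for b2 in B)]  # b is in B2(a)

def strictly_competitive(u1, u2):
    """Equation 9: (x1,y1) <=1 (x2,y2) iff (x2,y2) <=2 (x1,y1)."""
    return all((u1[p] <= u1[q]) == (u2[q] <= u2[p])
               for p in PROFILES for q in PROFILES)

def unique_ne_payoff(u1, u2):
    """Equation 8: any two Nash equilibria give each player the same payoff."""
    nes = nash_equilibria(u1, u2)
    return all(u1[p] == u1[q] and u2[p] == u2[q] for p in nes for q in nes)

def check_theorem_10():
    """Model-check (9) implies (8) over all 2x2 games whose payoffs are
    permutations of 0..3, which induce all strict total orders on profiles."""
    games = [dict(zip(PROFILES, perm)) for perm in permutations(range(4))]
    return all(unique_ne_payoff(u1, u2)
               for u1 in games for u2 in games if strictly_competitive(u1, u2))
```

Running `check_theorem_10()` returns True, which is exactly the kind of finite check that Lin (2007) shows is sufficient for conditions of this quantifier shape.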

Computer Program Verification

Computer program verification is a perfect application domain for machine theorem discovery. It is a very difficult but important problem given how crucial it is to have reliable software these days. It is also closely related to logic and theorem proving. There has been much work on it in software engineering and formal methods. An example system is Facebook Infer,1 a formal program analyzer initially developed by Monoidics in London and acquired by Facebook in 2013. It is said that all Facebook software has to go through Infer before being released. Infer is based on separation logic (Reynolds 2002), an extension of Hoare's logic for reasoning about mutable data structures like pointers.

My colleagues and I are currently working on program verification based on a translation from programs to first-order logic that I proposed recently (Lin 2016). We have implemented a prototype system for programs with integer operations (Rajkhowa and Lin 2017). It makes use of off-the-shelf tools such as the algebraic simplification system SymPy2 and the SMT solver Z3 (de Moura and Bjorner 2012), and can already verify many nontrivial programs without user-provided loop invariants. It can also be extended to handle more complex data structures. In fact, it can easily be extended to handle arrays, and a version of it finished second in the array subcategory of the 2018 SV-COMP competition.3

However, to be able to handle more difficult programs, we need to have more effective proof strategies, and this is where machine theorem discovery comes in. Consider the verification problem from a recent SV-COMP competition,4 displayed in figure 2.

Effectively, our system (Rajkhowa and Lin 2017) will translate it into the problem of proving

a7(N1) + b7(N1) = 3 × N

under the axioms

0 ≤ N ≤ 1000000,
∀n. a7(n + 1) = ite(f(n) > 0, a7(n) + 1, a7(n) + 2),
∀n. b7(n + 1) = ite(f(n) > 0, b7(n) + 2, b7(n) + 1),
a7(0) = 0,
b7(0) = 0,
N1 ≥ N,
∀n. n < N1 ⊃ n < N,

where N1 is a natural number constant denoting the number of times the loop is executed before exiting; N the initial value of program variable n; f(n) an integer-valued function representing the nondeterministic function __VERIFIER_nondet_int() in the program; ite(c, e1, e2) the conditional expression "if c then e1 else e2"; and ∀n ranges over natural numbers.

SMT solvers like Z3 (de Moura and Bjorner 2012) cannot prove this directly, but can be used to prove it from the following more general theorem:

∀n. a7(n) + b7(n) = 3 × n,

which can be proved by simple induction. The challenge is, of course, to formulate a space of hypotheses so lemmas like this can be discovered. I hope to have results to report on this in the near future.
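The lemma is easy to believe: either branch of the loop body adds 3 to a + b. A brute-force check over all nondeterministic executions — a finite stand-in for the inductive proof, not the system's actual SMT encoding — can be sketched as:

```python
from itertools import product

def runs(n):
    """All executions of the figure 2 loop for bound n, one per sequence of
    nondeterministic branch choices."""
    for choices in product((True, False), repeat=n):
        a = b = 0
        for take_first_branch in choices:
            if take_first_branch:
                a, b = a + 1, b + 2   # a = a + 1; b = b + 2
            else:
                a, b = a + 2, b + 1   # a = a + 2; b = b + 1
        yield a, b

def lemma_holds_up_to(bound):
    """Check the discovered lemma a7(n) + b7(n) = 3*n on every execution
    with at most `bound` iterations."""
    return all(a + b == 3 * n for n in range(bound + 1) for (a, b) in runs(n))
```

Of course, such enumeration only tests small instances; the point of the discovered lemma is precisely that it can be proved once by induction and then discharges the assertion for all N.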

Concluding Remarks

Theorem proving and theorem discovery are traditionally the job of mathematicians and theoreticians.

Engineers just make use of discovered theories and theorems. However, in computer science at least, many "engineering" problems such as finding a plan to achieve a goal or writing an error-free program can be formulated as theorem proving, and one message of this article is that once a problem is formulated this way, machine theorem discovery becomes a part of the problem.

One may argue that machine theorem discovery is a type of machine learning. This depends on how broadly one defines machine learning. What should be clear is that machine theorem discovery is conceptually different from current machine learning problems. However, this does not mean that current machine learning techniques cannot be used to discover theorems. For example, to discover a closed-form formula W(n) that makes

Σ0≤i≤n i² = W(n)

true for all n, one can generate as much data as one needs — W(0) = 0, W(1) = 1, W(2) = 5, and so on — and then use a supervised learning algorithm to "learn" W(n). However, this is not likely to work unless the target function W(n) is very simple. It is clear that machine theorem discovery requires new tools and techniques. This is a rich area with many potential applications. I hope to have more results to report soon.
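For this particular W(n) the target is indeed very simple: Σ0≤i≤n i² is a cubic polynomial in n, so four data points determine it exactly and even plain interpolation "learns" it. The sketch below illustrates that easy case (it is my illustration, not a claim about what supervised learning does in general):

```python
from fractions import Fraction

# Training data generated from the sum itself: W(0)=0, W(1)=1, W(2)=5, W(3)=14.
DATA = [(n, sum(i * i for i in range(n + 1))) for n in range(4)]

def w(n):
    """Lagrange interpolation through DATA: a minimal 'learner' for W(n).
    It recovers the closed form only because W(n) has degree at most 3."""
    total = Fraction(0)
    for xi, yi in DATA:
        term = Fraction(yi)
        for xj, _ in DATA:
            if xj != xi:
                term *= Fraction(n - xj, xi - xj)
        total += term
    return total
```

The interpolant agrees with the known closed form n(n + 1)(2n + 1)/6 on all n, not just the training points; for targets that are not low-degree polynomials, no such shortcut exists, which is the article's point.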

Acknowledgments

Foremost, I want to thank Yin Chen, with whom I have collaborated on theorem discovery in logic programming, and Pingzhong Tang, with whom I've collaborated on game theory. In particular, the work that I described above on two-person games was done jointly with Pingzhong. I also thank my other past and current collaborators on this project: Ning Ding, Jianmin Ji, Pritom Rajkhowa, and Haodi Zhang. I have also benefited from fruitful discussions on topics related to this work with Mordecai Golin, Jérôme Lang, Hector Levesque, Yidong Shen, Yisong Wang, Mingsheng Ying, Jiahua You, Mingyi Zhang, Yan Zhang, and Yi Zhou, among others.

Notes

1. fbinfer.com.

2. www.sympy.org/en/index.html.

3. sv-comp.sosy-lab.org/2018/.

4. sv-comp.sosy-lab.org/2018/.

References

Brandl, F.; Brandt, F.; and Geist, C. 2016. Proving the Incompatibility of Efficiency and Strategyproofness via SMT Solving. In Proceedings of the 25th International Joint Conference on Artificial Intelligence, 116–122. Palo Alto, CA: AAAI Press.

Brandl, F.; Brandt, F.; Geist, C.; and Hofbauer, J. 2015. Strategic Abstention Based on Preference Extensions: Positive Results and Computer-Generated Impossibilities. In Proceedings of the 24th International Joint Conference on Artificial Intelligence, 18–24. Palo Alto, CA: AAAI Press.



Figure 2. Verification Problem.

int i=0, a=0, b=0, n;
__VERIFIER_assume(n >= 0 && n <= 1000000);
while (i < n) {
    if (__VERIFIER_nondet_int()) {
        a = a + 1;
        b = b + 2;
    } else {
        a = a + 2;
        b = b + 1;
    }
    i = i + 1;
}
__VERIFIER_assert(a + b == 3*n);



Brandt, F., and Geist, C. 2016. Finding Strategyproof Social Choice Functions via SAT Solving. Journal of Artificial Intelligence Research 55: 565–602.

de Moura, L., and Bjorner, N. 2012. The Z3 SMT Solver. In International Conference on Tools and Algorithms for the Construction and Analysis of Systems, 337–340. Lecture Notes in Computer Science 4963. Berlin: Springer.

Fajtlowicz, S. 1988. On Conjectures of Graffiti. Discrete Mathematics 72(1–3): 113–118. doi.org/10.1016/0012-365X(88)90199-9

Friedman, J. 1983. On Characterizing Equilibrium Points in Two-Person Strictly Competitive Games. International Journal of Game Theory 12(4): 245–247. doi.org/10.1007/BF01769094

Geist, C., and Endriss, U. 2011. Automated Search for Impossibility Theorems in Social Choice Theory: Ranking Sets of Objects. Journal of Artificial Intelligence Research 40: 143–174.

Gerevini, A., and Schubert, L. 1998. Inferring State Constraints for Domain-Independent Planning. In Proceedings of the 15th National Conference on Artificial Intelligence, 905–912. Menlo Park, CA: AAAI Press.

Hoare, C. 1969. An Axiomatic Basis for Computer Programming. Communications of the ACM 12(10): 576–580.

Huang, Y.-C.; Selman, B.; and Kautz, H. A. 2000. Learning Declarative Control Rules for Constraint-Based Planning. In Proceedings of the 17th International Conference on Machine Learning, 415–422. San Francisco: Morgan Kaufmann.

Kats, A., and Thisse, J. 1992. Unilaterally Competitive Games. International Journal of Game Theory 21(3): 291–299. doi.org/10.1007/BF01258280

Larson, C. E. 2005. A Survey of Research in Automated Mathematical Conjecture-Making. In Graphs and Discovery, 297–318. DIMACS Series in Discrete Mathematics and Theoretical Computer Science 69. Providence, RI: American Mathematical Society.

Larson, C. E., and Cleemput, N. V. 2016. Automated Conjecturing I: Fajtlowicz's Dalmatian Heuristic Revisited. Artificial Intelligence 231: 17–38. doi.org/10.1016/j.artint.2015.10.002

Larson, C. E., and Cleemput, N. V. 2017. Automated Conjecturing III: Property-Relations Conjectures. Annals of Mathematics and Artificial Intelligence 81(3–4): 315–327. doi.org/10.1007/s10472-017-9559-5

Lenat, D. B. 1979. On Automated Scientific Theory Formation: A Case Study Using the AM Program. In Machine Intelligence 9, edited by J. Hayes, D. Michie, and L. I. Mikulich, 251–283. Chichester: Ellis Horwood.

Lin, F. 2004. Discovering State Invariants. In Proceedings of the Ninth International Conference on Principles of Knowledge Representation and Reasoning, 536–544. Palo Alto, CA: AAAI Press.

Lin, F. 2007. Finitely-Verifiable Classes of Sentences. In Logical Formalizations of Commonsense Reasoning: Papers from the 2007 AAAI Spring Symposium. Technical Report SS-07-05. Palo Alto, CA: AAAI Press.

Lin, F. 2016. A Formalization of Programs in First-Order Logic with a Discrete Linear Order. Artificial Intelligence 235: 1–25. doi.org/10.1016/j.artint.2016.01.014

Lin, F., and Chen, Y. 2007. Discovering Classes of Strongly Equivalent Logic Programs. Journal of Artificial Intelligence Research 28: 431–451.

Monderer, D., and Shapley, L. S. 1996. Potential Games. Games and Economic Behavior 14(1): 124–143. doi.org/10.1006/game.1996.0044

Moulin, H. 1976. Cooperation in Mixed Equilibrium. Mathematics of Operations Research 1(3): 273–286. doi.org/10.1287/moor.1.3.273

Rajkhowa, P., and Lin, F. 2017. VIAP: Automated System for Verifying Integer Assignment Programs with Loops. In 19th International Symposium on Symbolic and Numeric Algorithms for Scientific Computing. Piscataway, NJ: Institute of Electrical and Electronics Engineers.

Reynolds, J. C. 2002. Separation Logic: A Logic for Shared Mutable Data Structures. In Logic in Computer Science: Proceedings of the 17th Annual IEEE Symposium, 55–74. Piscataway, NJ: IEEE. doi.org/10.1109/LICS.2002.1029817

Tang, P., and Lin, F. 2009. Computer-Aided Proofs of Arrow's and Other Impossibility Theorems. Artificial Intelligence 173(11): 1041–1053. doi.org/10.1016/j.artint.2009.02.005

Tang, P., and Lin, F. 2011a. Discovering Theorems in Game Theory: Two-Person Games with Unique Pure Nash Equilibrium Payoffs. Artificial Intelligence 175(14–15): 2010–2020. doi.org/10.1016/j.artint.2011.07.001

Tang, P., and Lin, F. 2011b. Two Equivalence Results for Two-Person Strict Games. Games and Economic Behavior 71(2): 479–486. doi.org/10.1016/j.geb.2010.04.007

Topkis, D. 1998. Supermodularity and Complementarity. Princeton, NJ: Princeton University Press.

Wang, H. 1960. Toward Mechanical Mathematics. IBM Journal of Research and Development 4(1): 2–22. doi.org/10.1147/rd.41.0002

Fangzhen Lin (PhD, Stanford University) is a professor ofcomputer science at the Hong Kong University of Scienceand Technology. He is an AAAI fellow.
