

Artificial Intelligence Review (1990), 4, 79-108

Nonmonotonic reasoning, nonmonotonic logics and reasoning about change

John Bell, Computer Science Department, Queen Mary and Westfield College, University of London

Abstract. In this paper we introduce nonmonotonic reasoning and the attempts at formalizing it using nonmonotonic logics. We examine and compare the best known of these. Despite the difference in motivation and technical construction there are strong similarities between these logics which are confirmed when they are finally shown to have a common basis. Finally we consider using nonmonotonic logics to represent reasoning about change.

'Contrariwise,' continued Tweedledee, 'if it was so, it might be; and if it were so, it would be; but as it isn't, it ain't. That's Logic.'

Lewis Carroll (1871)

'Contrariwise,' continued Tweedledum, 'if it wasn't so it wouldn't have been; and if it weren't so, it wouldn't be; but as it isn't so, it might be. That's Nonmonotonic Logic.'

1 Nonmonotonic reasoning

Common sense reasoning differs from the kind of reasoning traditionally studied by logicians.

Standard logical systems (such as the Propositional Calculus (PC) and the Predicate Calculus (FOPC)) are concerned with the formalization of valid arguments. An argument is valid if its conclusion must be true given that its premises are. Reasoning which is concerned solely with valid arguments is called deductive reasoning.

Deductive reasoning deals with certainties: once a conclusion is established no additional information can overturn it.

In contrast, common sense reasoning, the kind of reasoning employed in everyday life, is very different. Here we are faced with the problem of drawing conclusions from information which is often incomplete. We also need to be able to go beyond deductive reasoning and draw conclusions which are plausible (but not certain) given the information at hand. So common sense reasoning is both stronger


and weaker than deductive reasoning. It is stronger because it licences more conclusions, and weaker because these conclusions are not established with certainty. This lack of certainty means that common sense arguments are defeasible; that is, further information might well mean that previously inferred consequences need to be retracted.

It is possible to distinguish an important form of common sense reasoning which, for reasons which will become apparent, we will call nonmonotonic reasoning.¹ It is characterized by inferences of the form 'infer A if there is no evidence to the contrary', or more generally 'infer A if B cannot be inferred', made in situations where A is typically the case and B is typically not the case. For example, suppose that Jones normally walks his dog in the evening. Then we want to infer that Jones will walk his dog this evening unless something untypical prevents him from doing so. Reasoning of this kind is not deductive; we cannot deduce that Jones will walk his dog this evening (A) from the inability to deduce that he won't (B). Nevertheless it is clearly inferential in character and is intended as a complement to deduction. This inferential nature of nonmonotonic reasoning distinguishes it from other forms of common sense reasoning, such as inductive and probabilistic reasoning, and enables us to give a qualitative account of it.

Although it is inferential, nonmonotonic reasoning is defeasible. The conclusions it sanctions depend on the available evidence. If the available evidence changes, what can be inferred may change and previously derived conclusions may have to be withdrawn. So if we subsequently learn that Jones has to work late this evening then we will have to retract the inference that he will walk his dog.

In order to formalize this form of reasoning it is necessary to develop logics which can represent defeasible inference. In deductive logics the property which corresponds to the certainty (non-defeasibility) of arguments is known as monotonicity:

if A ⊢ C then A & B ⊢ C, for all sentences A, B and C.

Consequently to represent nonmonotonic inference we need to develop logics in which this property fails; that is, we need to develop nonmonotonic logics.
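The Jones example can be made concrete with a minimal propositional sketch, assuming a single hand-coded default rule (the function and fact names below are illustrative, not from the paper): conclude that Jones walks his dog unless evidence to the contrary is present. Adding a premise then removes a conclusion, which is precisely the failure of monotonicity.

```python
# Toy defeasible inference: 'infer walks_dog if there is no evidence
# to the contrary'.  Facts are plain strings; all names are illustrative.

def plausible_conclusions(facts):
    """Deductive closure is trivial here; we just apply one default."""
    conclusions = set(facts)
    if "prevented" not in facts:        # no evidence to the contrary
        conclusions.add("walks_dog")    # defeasible conclusion
    return conclusions

print("walks_dog" in plausible_conclusions({"normally_walks"}))
# True: with no defeating evidence the default fires.
print("walks_dog" in plausible_conclusions({"normally_walks", "prevented"}))
# False: enlarging the premises has retracted a conclusion.
```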

2 Nonmonotonic logics

We begin by looking at some of the best known attempts at formalizing nonmonotonic reasoning, all of which involve adding nonmonotonic inference mechanisms to standard deductive logics.²

2.1 Circumscription

McCarthy (1980) proposes Predicate Circumscription. The idea is that for a predicate P occurring in a set of sentences A we should infer ¬P(a) if there is no evidence to the contrary. In order to achieve this the domain of P is first circumscribed; that is, restricted to those objects which P has to satisfy in order to satisfy A. We are then allowed to infer ¬P(a) if a is not in the circumscribed domain of P. For example if A = Block(a) & Block(b) then ¬Block(c) is inferable from the circumscription of


Block in A. Clearly this notion is nonmonotonic; if A' = A & Block(c) then of course ¬Block(c) is not inferable from the circumscription of Block in A'.

2.1.1 Circumscription axioms. McCarthy formalizes this idea by means of a second-order axiom schema which achieves the effect for a single predicate, and which can be generalized to allow several predicates to be circumscribed jointly. Let P be an n-ary relation and let x̄ abbreviate x1, ..., xn. Let A(Φ) be the result of replacing all occurrences of P in A by the predicate expression Φ. (A predicate expression is a predicate symbol or a suitable λ-expression.) Then the circumscription of P in A, Circum(A,P), is the schema

A(Φ) & (∀x̄)(Φ(x̄) ⊃ P(x̄)) ⊃ (∀x̄)(P(x̄) ⊃ Φ(x̄)).

Circum(A,P) states that if Φ satisfies the conditions satisfied by P and every n-tuple that satisfies Φ also satisfies P, then the only n-tuples which satisfy P are those which satisfy Φ.

Circumscriptive inference, ⊢P, is then defined as

A ⊢P B iff A & Circum(A,P) ⊢ B.

To see how this works, let A=Block(a)&Block(b). Circumscribing Block in A gives the schema

Φ(a) & Φ(b) & (∀x)(Φ(x) ⊃ Block(x)) ⊃ (∀x)(Block(x) ⊃ Φ(x)).

Substituting λx.(x = a ∨ x = b) for Φ and evaluating the λ-expressions gives the following true antecedent

(a = a ∨ a = b) & (b = a ∨ b = b) & (∀x)(x = a ∨ x = b ⊃ Block(x)),

and so we can deduce the consequent

(∀x)(Block(x) ⊃ x = a ∨ x = b),

which states that the only objects which are blocks are a and b.
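The same conclusion can be checked model-theoretically by brute force. The sketch below is illustrative code, not McCarthy's; only the domain and the Block predicate come from the example. It enumerates all models of A = Block(a) & Block(b) over the domain {a, b, c}, keeps those that are minimal in Block, and confirms that ¬Block(c) is minimally entailed.

```python
from itertools import combinations

# Enumerate extensions of Block over a three-element domain and pick
# out the models of A = Block(a) & Block(b) that are minimal in Block.
domain = ["a", "b", "c"]

def powerset(s):
    return [set(c) for r in range(len(s) + 1) for c in combinations(s, r)]

# A model of A is any extension of Block containing both a and b.
models = [ext for ext in powerset(domain) if {"a", "b"} <= ext]

# Minimal in Block: no other model has a strictly smaller extension.
minimal = [m for m in models if not any(m2 < m for m2 in models)]

print(sorted(minimal[0]))                    # ['a', 'b']
print(all("c" not in m for m in minimal))    # True: -Block(c) minimally entailed
```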

2.1.2 Model theory. Intuitively the circumscription of P in A says that a tuple satisfies P only if it has to. The model-theoretic counterpart of this is minimal entailment (for a similar idea see Davis, 1980). Let M and M' be models of the sentence A, and let M ≤P M' iff

(1) M and M' have the same domain and agree on the interpretation of variables, functions, and relations other than P, and

(2) the extension of P in M is a subset of its extension in M'.

Definition 2.1.1. A model M of A is minimal in P iff M' ≤P M only if M' = M.

Definition 2.1.2. A minimally entails B with respect to P, written A ⊨P B, iff B is true in all models of A that are minimal in P.

McCarthy proves the following theorem.


Theorem 2.1.1. Any instance of the circumscription of P in A is true in all models of A that are minimal in P; that is, it is minimally entailed by A in P. This has the corollary that circumscriptive inference is (minimally) sound.

Corollary. If A ⊢P B then A ⊨P B.

The above definitions and results can of course be generalized to cover models minimal in n predicates, as is required when n predicates are circumscribed jointly.

Section 3 contains examples of the use of circumscription, and more general reformulations of it.

2.2 Default logic

Reiter (1980) develops a logic for nonmonotonic reasoning. He begins by adding the sentential operator M to the language of the predicate calculus. MA is read 'it is consistent to believe that A'. Default rules (or defaults) are then expressed in an extension of this language. For example the default:

Bird(x) : M Can-fly(x) / Can-fly(x)

states that if x is a bird and it is consistent to believe that x can fly, then one may believe that x can fly.

Intuitively, it is consistent to believe that A if A is consistent with what is known, together with all of the other beliefs sanctioned by all of the other defaults in force.

Clearly, reasoning of this kind is nonmonotonic. Suppose that the only informa- tion we have is the default MA/A. Then we are entitled to believe A. However, if we subsequently discover that A is not the case then of course it can no longer be believed.

Defaults can be viewed as meta-rules: given an incomplete first-order theory, they represent instructions about how to create extensions of the theory. Those sentences sanctioned by the defaults can then be viewed as beliefs about the world. In general there are many different ways of extending an incomplete theory. For example given the theory ¬A ∨ ¬B and the defaults :MA/A and :MB/B, two extensions are possible. Applying the first default gives the extension {A, ¬B}, while applying the second gives {B, ¬A}. This leads Reiter to conclude that the application of defaults should be nondeterministic.

2.2.1 Default theories and their extensions. More formally, a default is an expression of the form:

α : Mβ1, ..., Mβn / γ,

where α, βi (1 ≤ i ≤ n), and γ are first-order sentences which share a common set of variables. (The discussion in this section is restricted to closed defaults; that is, defaults containing no free variables.)

A default theory is a pair <D,W> where D is a set of defaults and W is a set of first-order sentences.


An extension E of a default theory <D,W> represents an acceptable set of beliefs that one may hold about the incompletely specified world W. E should therefore satisfy the following conditions:

(1) E should contain W,

(2) E should be deductively closed,

(3) E should be faithful to the default rules: if α : Mβ1, ..., Mβn / γ is in D, α is believed (α ∈ E), and each βi can be consistently believed (¬βi ∉ E), then γ is believed (γ ∈ E).

Furthermore, E should be a minimal set with these properties; that is, E should contain only those beliefs which are justified by conditions 1-3.

This results in the following definition.

Definition 2.2.1. Let Δ = <D,W> be a default theory. Then for any set of sentences S let Γ(S) be a minimal set which satisfies the following three conditions:

(1) W ⊆ Γ(S),

(2) Th(Γ(S)) = Γ(S), where Th(T) is the deductive closure of T,

(3) if α : Mβ1, ..., Mβn / γ ∈ D and α ∈ Γ(S) and ¬βi ∉ Γ(S) (1 ≤ i ≤ n) then γ ∈ Γ(S).

Then E is an extension for Δ iff Γ(E) = E; that is, iff E is a fixed point of the operator Γ. Note that this definition gives us no idea of how to go about constructing an extension of Δ. With this in mind Reiter proves the following theorem.

Theorem 2.2.1. Let E be a set of sentences, and Δ = <D,W> be a default theory. Let

E0 = W,

and for n ≥ 0 let

En+1 = Th(En) ∪ {γ | α : Mβ1, ..., Mβm / γ ∈ D and α ∈ En and ¬βi ∉ E (1 ≤ i ≤ m)}.

Then E is an extension for Δ iff

E = ⋃n≥0 En.

However, as E occurs in the definition of En+1 this definition is no more informative than the last.

As we have seen, some default theories have multiple extensions. Some default theories, for example <{:MA/¬A}, ∅>, have no extensions.

Reiter shows that extensions have the following properties. A default theory <D,W> has an inconsistent extension iff W is inconsistent. If <D,W> has an inconsistent extension then it is its only extension. Finally, no extension is a proper subset of another.
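Theorem 2.2.1 nevertheless suggests a simple extension-checking procedure for finite theories. The sketch below is a deliberately crude illustration, not Reiter's construction: sentences are restricted to literals (strings, with '-' for negation) and the Th closure is dropped, but this is enough to verify a normal default and to see that the no-extension theory <{:MA/¬A}, ∅> rejects every candidate.

```python
# Literals-only sketch of Reiter's quasi-inductive characterisation:
# E is an extension of <D,W> iff E equals the union of the E_n.
# Deduction is trivialised (no connectives, no Th), so this only
# handles theories whose sentences are literals.  Illustrative only.

def neg(l):
    return l[1:] if l.startswith("-") else "-" + l

def is_extension(E, W, D):
    """D is a list of defaults (prereq, justifications, conclusion);
       a prereq of None means the default has no prerequisite."""
    En = set(W)
    while True:
        new = set(En)
        for pre, justs, concl in D:
            # apply a default if its prerequisite is derived and no
            # justification is refuted by the *candidate* E
            if (pre is None or pre in En) and all(neg(j) not in E for j in justs):
                new.add(concl)
        if new == En:
            break
        En = new
    return En == set(E)

# Normal default "birds fly": {bird, canfly} is an extension.
print(is_extension({"bird", "canfly"}, {"bird"},
                   [("bird", ["canfly"], "canfly")]))          # True
# <{:MA/-A}, {}> has no extension: neither candidate passes the test.
print(is_extension(set(), set(), [(None, ["A"], "-A")]))       # False
print(is_extension({"-A"}, set(), [(None, ["A"], "-A")]))      # False
```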


While each extension can be viewed as an internally consistent and coherent view of the world, the union of two extensions may well be inconsistent. This leads Reiter to describe default reasoning as the process of selecting one extension of a theory and then reasoning within that extension until such time as the evidence forces a revision of those beliefs, in which case a switch to a new extension is called for.

As a consequence of this view, his 'proof theory' consists of a procedure for determining, for a sentence β and theory T, whether β is in some extension of T. However, as this amounts to testing whether β is in the union of all extensions of T, it is not what is required by the above view of default reasoning. In fact, as extensions are defined nonconstructively, there is no way of 'isolating' a single extension of a theory, and thus no way of testing for theoremhood within an extension. Furthermore, Hanks & McDermott (1986) point out that, as there is no way of referring to an extension within the language, any definition of default reasoning based on discrimination among extensions is actually beyond the expressive power of Default Logic.

Finally, as Konolige (1987) notes, problems also arise because of the formulation of Default Logic. Defaults operate at the meta-level; they are not expressed within the language of the theory, and are not rules within the theory. Rather they can be thought of as a means of taking a theory and transforming it into another one by adding (and deleting) sentences not logically derivable from the original. This leads to questions as to the representational power of defaults; can they be nested, and do conditional defaults exist? Furthermore, because defaults are expressed as inference rules operating in conjunction with a fixed-point construction, it is not clear what the meaning of objects such as MA is.

In subsequent sections we look at more homogeneous approaches. Rather than adding metatheoretical rules, the language of the theory itself is extended. This solves the representational problem, and ultimately makes it possible to give an elegant account of the semantics of nonmonotonic reasoning.

2.2.2 Relation to circumscription. With certain provisos, Default Logic and Circumscription (in the form presented in 2.1) are equivalent. It follows as a special case of a more general theorem (Etherington, 1987) that:

B is in an extension of <{:M¬P(x)/¬P(x)}, A> iff A ⊢P B,

provided that the domain is closed (each variable is assigned a member of the domain) and provided that the equality relation is decided (we know whether or not a = b is true for every a and b).

If either of the provisos fails to hold, default logic can make conjectures that cannot be obtained by circumscription. For example (Etherington), if the equality relation is undecided then the default theory <{:M a≠b / a≠b}, ∅> has a unique extension containing a ≠ b.

Furthermore, more advanced forms of circumscription (see Section 3.3) allow the extension of some predicates to be held constant while that of others is allowed to vary. But in default logic there is no way of restricting the repercussions of the


defaults to particular predicates. So, Etherington concludes, a completely general embedding of circumscription seems to be precluded.

2.3 Nonmonotonic modal logics

In this section we discuss nonmonotonic logics which are based on monotonic modal logics with unary modal operators.

2.3.1 The logics of McDermott & Doyle. McDermott & Doyle (1980) present a nonmonotonic modal logic which we will refer to as NMLI. They extend the language of the predicate calculus by adding the modal operator M. For a given sentence A, MA is intended to be read as 'A is consistent'. The idea is to have an inference rule to the effect:

(pos) infer MA from the inability to infer ¬A.

So that, for example, if all we know is that:

(∀x)(Bird(x) & MCan-fly(x) ⊃ Can-fly(x)), Bird(a).

Then we want to be able to use pos to infer that MCan-fly(a), and hence, by deductive reasoning, to be able to conclude that a can fly. If however we additionally know that:

Kiwi(a), (∀x)(Kiwi(x) ⊃ ¬Can-fly(x)).

Then we can no longer use pos to infer MCan-fly(a) as ¬Can-fly(a) is now inferable.

Clearly pos is a nonmonotonic inference rule. The inferences it permits depend on the context in which it is used; if we extend the context then previously licensed conclusions may no longer be warranted. McDermott & Doyle characterize pos by means of the fixed points of a nonmonotonic derivability operator NM. For a given set of premises A, a fixed point of A is a set X such that X=NMA(X), where

NMA(B) = {C | A ∪ ASA(B) ⊢ C},

and

ASA(B) = {MD | D is a sentence, and ¬D ∉ B},

and ⊢ is the derivability relation of FOPC.

Following Moore (1983) the definition can be simplified as follows: T is a fixed point of A iff for any sentence B,

T = {C | A ∪ {MB | ¬B ∉ T} ⊢ C}.

Note that as T occurs on both sides of the equation the definition again gives us no idea of how to construct a fixed point.


It turns out that a given set of premises A may have more than one fixed point. For example, the set {MB ⊃ ¬C, MC ⊃ ¬B} has a fixed point containing ¬C but not ¬B, and a fixed point containing ¬B but not ¬C. In the former case we reason as follows. ¬B is not in the set, hence MB is in a fixed point of it. Consequently, ¬C is in that fixed point, preventing MC, and consequently ¬B, from being in it. The reasoning in the latter case is analogous.

Furthermore, some sets of premises have no fixed points. The set {MC ⊃ ¬C} is an example. Suppose that T was a fixed point of this set. Then T must contain MC as the set does not contain ¬C. But if T contains MC then it also contains ¬C and so by definition cannot be a fixed point.
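Both examples can be verified mechanically in the propositional case. In the sketch below (an illustrative finite encoding, not from the paper) each M-sentence is treated as a fresh propositional atom, a candidate fixed point is identified with its assumption set {MX | ¬X is not derivable}, and candidates are tested by truth-table entailment; '>' and '-' render ⊃ and ¬ in the comments.

```python
from itertools import product

# Brute-force search for NMLI fixed points over two atoms B, C.
ATOMS = ["B", "C", "MB", "MC"]

def entails(premises, concl):
    """Truth-table entailment over ATOMS (arguments are predicates on valuations)."""
    for vals in product([True, False], repeat=len(ATOMS)):
        v = dict(zip(ATOMS, vals))
        if all(p(v) for p in premises) and not concl(v):
            return False
    return True

def fixed_point_assumptions(premises):
    """All assumption sets over {MB, MC} with: MX assumed iff -X not derivable."""
    found = []
    for mb, mc in product([True, False], repeat=2):
        AS = []
        if mb:
            AS.append(lambda v: v["MB"])
        if mc:
            AS.append(lambda v: v["MC"])
        base = premises + AS
        if (not entails(base, lambda v: not v["B"])) == mb and \
           (not entails(base, lambda v: not v["C"])) == mc:
            found.append({"MB": mb, "MC": mc})
    return found

two = fixed_point_assumptions(
    [lambda v: (not v["MB"]) or (not v["C"]),     # MB > -C
     lambda v: (not v["MC"]) or (not v["B"])])    # MC > -B
print(len(two))   # 2: the two fixed points described in the text

none = fixed_point_assumptions(
    [lambda v: (not v["MC"]) or (not v["C"])])    # MC > -C
print(len(none))  # 0: no fixed point
```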

Faced with the possibility that a given set of premises may have zero or more fixed points, McDermott & Doyle define a nonmonotonic derivability relation, ⊢NML, as follows:

A ⊢NML B iff B is in every fixed point of A.

Note that if there are no fixed points of A then A ⊢NML B for any sentence B. This is analogous to the situation in monotonic logic where any sentence can be derived from an inconsistent set of premises.

This definition of theoremhood has the property of being closed under ordinary monotonic deduction, so that everything that can be deduced from it monotonically is a theorem. Furthermore, if A has fixed points, its theorems will be consistent.

As McDermott & Doyle (1980) point out, the central difficulty of NMLI is that it fails to capture a coherent notion of consistency. For example the consistency operator M is not distributive over &; that is, MB does not follow from M(B&C). Also, the set {MC, ¬C} has a fixed point, which amounts to saying that C is consistent with ¬C.

In a subsequent paper (1982) McDermott attempted to remedy this by basing his NMLII on one of the standard modal systems T, S4, S5. Following Boolos (1979), MA (◇A) is interpreted as 'A is consistent', and its dual LA (□A) is interpreted as 'A is provable'. The idea is that one of the above systems will supply the required axioms and rules for consistency (provability). As with NMLI, the NMLII-consequences of a set of premises A are those sentences which occur in all fixed points of A, where fixed points are defined as before with the exception that ⊢ is replaced by the derivability relation of one of the three modal logics mentioned.

This attempt to strengthen the logic meets with mixed success. McDermott shows that nonmonotonic S5 collapses into monotonic S5. Nonmonotonic T and S4 do not collapse, but there are problems with these systems also. On the technical side, there are doubts as to their consistency (although McDermott does prove the consistency of the propositional subsets of these logics).

More fundamentally however it is questionable whether 5 is the appropriate, or the only, axiom to abandon. It seems that there are good grounds for rejecting T.

Boolos (1979) presents a logic of provability, G, which captures precisely the provability (consistency) relation of formal arithmetic; that is, the theorems of G are precisely those modal sentences which, appropriately translated, are the theorems


of arithmetic. G consists of the modal logic K together with the axiom:

(G) □(□A ⊃ A) ⊃ □A.

Boolos shows that all instances of 4 are theorems of G. However, if T were an axiom the system would be inconsistent as any sentence whatsoever would then be derivable:

(1) □A ⊃ A (axiom T)
(2) □(□A ⊃ A) (1, Necessitation)
(3) □(□A ⊃ A) ⊃ □A (axiom G)
(4) □A (2, 3, Modus Ponens)
(5) A (1, 4, Modus Ponens).

McDermott indicates that he is aware of Boolos's work. Consequently it is surprising that he did not use G as the basis for a nonmonotonic logic, or at least consider systems without T.

Moore (1983) argues for rejecting T while retaining 5 (see Section 2.3.2.2).

A further criticism of McDermott's approach is that there is no motivation for interpreting the modal operators as representing consistency and provability. According to the semantic clause for M, MA is true at a world i just in case A is true at some world accessible to i. But, as Turner (1984) argues, it is difficult to see how to equate accessibility with consistency: as possible worlds represent complete states of affairs there seems to be no obvious sense in which one world can be considered to be consistent with (accessible to) another. (By contrast, if we consider partial states of affairs, for example the information states of Data Semantics (Veltman, 1985), then one state can be considered as accessible to another if it is a consistent extension of it.) Consequently the formal semantics adopted by McDermott contributes nothing to our intuitive understanding of the concepts of consistency and provability.

2.3.2 Autoepistemic Logic. Autoepistemic Logic (Moore, 1983, 1984, 1985, 1988) is a logic designed to characterize the beliefs of an agent reflecting upon its own beliefs. Such reasoning can lead the agent to adopt beliefs on the basis of what it does not believe. Moore gives the following example. If I had an older brother, I would believe that I did. I don't believe that I have an older brother, so I don't have one.

The language of Autoepistemic Logic is that of the propositional calculus together with the sentential operator B. Intuitively, for a sentence A, BA means that A is believed.

Returning to Moore's example, let A represent the fact that I have an older brother. Then my belief that if I had an older brother then I would know about it can be expressed as

(1) A ⊃ BA.

Suppose that I have no other relevant beliefs about an older brother. Then I am


entitled to believe

(2) ¬BA.

But then by Modus Tollens on (1) and (2) I am entitled to believe

(3) ¬A.

Clearly autoepistemic reasoning is nonmonotonic; if I subsequently discover that I do in fact have an older brother then I am no longer entitled to believe (2) and so can no longer infer (3).

Stalnaker (1980) has observed that the nonmonotonicity arises because the meaning of an autoepistemic statement is context-sensitive: the meaning of the belief operator depends on all of the agent's beliefs; its meaning changes as the agent's beliefs change. As Moore puts it, the operator B functions as an indexical: its meaning changes with context just as other indexicals such as 'I', 'here', and 'now'. Of course, similar remarks apply to the M operator of NML and Default Logic.

2.3.2.1 Formalization. In Autoepistemic Logic we are interested in the beliefs of an agent reflecting upon its own beliefs. These are represented as a set and referred to as an autoepistemic theory. In the sequel T is assumed to be such a theory.

Standardly, I is an interpretation of T if I is a truth assignment that conforms to the usual truth recursion, and I is a model of T if I makes all of the sentences in T true.

Definition 2.3.2.1. An autoepistemic interpretation I of T is an interpretation of T that satisfies the condition: I ⊨ BA iff A ∈ T.

Definition 2.3.2.2. An autoepistemic model of T is an autoepistemic interpretation of T in which all sentences in T are true.

A semantically complete set of beliefs is one that contains everything that must be true, given that the set of beliefs is true and given that it is the set of beliefs that is being reasoned about.

Definition 2.3.2.3. T is semantically complete iff T contains every sentence that is true in every autoepistemic model of T.

Soundness of a theory must be defined with respect to some set of premises. Intuitively speaking, T will be sound with respect to a set of premises S just in case all the beliefs in T must be true given that all the premises in S are true and that T is the set of beliefs being reasoned about.

Definition 2.3.2.4. T is sound with respect to a set of premises S iff every autoepistemic interpretation of T that is a model of S is a model of T.

Examples (p and q are atomic). (1) If S = {p} and T = {Bp, p} then T is sound with respect to S. Any autoepistemic interpretation I which is a model of S has V(p) = true and I ⊨ Bp (as p ∈ T). So as an (ordinary) interpretation I has V(Bp) = true and is consequently a model of T. (2) If S = {p, q} and T = {B(p&q), p, q} then T is


unsound with respect to S. Suppose that I has V(p) = V(q) = true. Then I is a model for S. But I ⊭ B(p&q) as p&q ∉ T. So as an interpretation I has V(B(p&q)) = false. The second example represents the case where the agent is logically deficient to the extent that it lacks an &-introduction rule.

In order to give a syntactic characterization of the autoepistemic theories which satisfy these conditions it is necessary to restrict our attention to ideally rational agents. Autoepistemic theories of ideally rational agents should include whatever the agent could infer either by ordinary logic or by reflecting upon what it believes. Following Stalnaker (1980), Moore suggests that such theories should satisfy the following conditions:

(1) T contains all truth-functional tautologies. (2) If A ∈ T and A ⊃ B ∈ T then B ∈ T. (3) If A ∈ T then BA ∈ T. (4) If A ∉ T then ¬BA ∈ T.

Stalnaker describes the state of belief characterized by such a theory as stable 'in the sense that no further conclusions could be drawn by an ideally rational agent in such a state'.

The kernel of a stable set is its subset of base sentences, where a base sentence is one that contains no modal operator. Moore (1983) shows that:

Theorem 2.3.2.1. A stable set is uniquely determined by its kernel.

Definition 2.3.2.5. T is stable iff T satisfies Stalnaker's conditions 1-4; that is, iff

{B | S ∪ {BA | A ∈ T} ∪ {¬BA | A ∉ T} ⊢ B} ⊆ T.

It turns out that these stability conditions precisely characterise the autoepistemic theories that are semantically complete (Moore, 1985).

Theorem 2.3.2.2. (completeness) T is semantically complete iff T is stable.

Stability of an agent's beliefs guarantees that they are semantically complete. However it does not guarantee that they are sound with respect to its premises. Consequently the agent may believe propositions that are not grounded in its premises. In order to ensure that this is not the case it is necessary to add an additional constraint specifying that the only beliefs an ideal agent has are its initial premises and those required by the stability conditions.

Definition 2.3.2.6. T is grounded in a set of premises S iff

T ⊆ {B | S ∪ {BA | A ∈ T} ∪ {¬BA | A ∉ T} ⊢ B}.

The following theorem shows that this syntactic constraint on T and S captures the semantic notion of soundness (Moore, 1985).

Theorem 2.3.2.3. (soundness) T is sound with respect to a set of premises S iff T is grounded in S.


The beliefs of an ideally rational agent ought to be both semantically complete and grounded in its premises.

Definition 2.3.2.7. T is a stable expansion of a set of premises S iff T is a stable superset of S that is grounded in S; that is,

T = {B | S ∪ {BA | A ∈ T} ∪ {¬BA | A ∉ T} ⊢ B}.

From Theorems 2.3.2.2 and 2.3.2.3 we can see that the possible sets of beliefs that an agent might hold, given S as its premises, ought to be just the stable expansions of S.

The plural is used as there may be more than one stable expansion of a given set of premises. For example, the set {¬BA ⊃ B, ¬BB ⊃ A} has a stable expansion in which B is true and A is false, and a stable expansion in which these values are reversed. (In the former case we reason that A is not in the set so ¬BA (and hence B) must be in a stable expansion of it. The latter case is similar.)

Some sets of premises may have no stable expansion. Consider the set {¬BA ⊃ A}. If T is a stable expansion of this set, T must contain A; if A ∉ T, ¬BA ∈ T, hence A ∈ T. But if A ∈ T then T is not grounded in {¬BA ⊃ A} and so is not a stable expansion of it.
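Because a stable set is fixed by its kernel (Theorem 2.3.2.1), the stable expansions of a finite propositional premise set can be searched for by guessing kernels. The sketch below is an illustrative encoding, not Moore's: belief atoms BA and BB are treated as fresh propositional atoms, and a guessed kernel is accepted only if the belief literals it induces reproduce exactly that kernel; '>' and '-' render ⊃ and ¬ in the comments.

```python
from itertools import product

# Kernel-guessing search for stable expansions over two atoms A, B.
ATOMS = ["A", "B", "BA", "BB"]

def entails(premises, concl):
    """Truth-table entailment over ATOMS (arguments are predicates on valuations)."""
    for vals in product([True, False], repeat=len(ATOMS)):
        v = dict(zip(ATOMS, vals))
        if all(p(v) for p in premises) and not concl(v):
            return False
    return True

def stable_expansions(S):
    """Return the kernels (is A in T?, is B in T?) that reproduce themselves."""
    kernels = []
    for a_in, b_in in product([True, False], repeat=2):
        beliefs = [lambda v, x=a_in: v["BA"] == x,    # BA iff A in T
                   lambda v, x=b_in: v["BB"] == x]    # BB iff B in T
        base = S + beliefs
        if entails(base, lambda v: v["A"]) == a_in and \
           entails(base, lambda v: v["B"]) == b_in:
            kernels.append({"A": a_in, "B": b_in})
    return kernels

two = stable_expansions(
    [lambda v: v["BA"] or v["B"],     # -BA > B
     lambda v: v["BB"] or v["A"]])    # -BB > A
print(len(two))    # 2: the two expansions described in the text

none = stable_expansions([lambda v: v["BA"] or v["A"]])   # -BA > A
print(len(none))   # 0: no stable expansion
```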

By now this comes as no surprise, and is due to the indexical nature of the belief operator. (Just as in the case of Default Logic and NML it is due to the indexical nature of the consistency operator.) It does, however, raise the question as to how to define autoepistemic inference. Given a set of premises S considered as axioms, what are the theorems of S? If there is a unique stable expansion T of S then we can take T as the set of theorems. But what if there are several expansions of S, or none at all? Moore argues that if we take the point of view of the agent we have to say that there can be alternative sets of theorems (which may be mutually inconsistent) or no theorems at all. Alternatively, as in NML, we can take the theorems of S to be those sentences which are in every stable expansion of S. This yields a consistent set of theorems if there are any stable expansions, and it makes the theory inconsistent if there are none. Moore considers this idea reasonable and says that it represents what an outside observer would know, given only knowledge of the agent's premises and that it is ideally rational.

2.3.2.2 Comparison with NML. The definition of a fixed point in (the propositional case of) NMLI is equivalent to the following: T is a fixed point of S iff

T = {B | S ∪ {¬BA | A ∉ T} ⊢ B}.

Whereas T is a stable expansion of S iff

T = {B | S ∪ {BA | A ∈ T} ∪ {¬BA | A ∉ T} ⊢ B}.

So in NMLI {BA | A ∈ T} is missing from the base of the fixed points. This explains the weaknesses of NMLI. For example {MA, ¬A} has a consistent fixed point. But


the equivalent set of premises {¬BA, A} would have no consistent fixed point if BA were forced to be in every fixed point that contains A.

In the case of NMLII, the definition of a fixed point is changed such that ⊢ is replaced by the derivability operator of one of the modal logics T, S4, S5. As this adds the rule of necessitation (infer LA from A) the result is to bring the NMLII systems much closer to autoepistemic logic.

The crucial difference, however, is that the NMLII systems contain the T-axiom, which is unacceptable in Autoepistemic Logic. BA ⊃ A will always be in any stable autoepistemic theory (as ¬BA ∈ T if A ∉ T). However, making it an axiom allows beliefs to be grounded which would otherwise not be. For example, if ⊢ were to contain the consequences of the T-axiom then {A} would be grounded in {BA}; that is,

{A} ⊆ {B | {BA} ∪ {BA | A ∈ T} ∪ {¬BA | A ∉ T} ⊢ B} ⊆ T.

By contrast, the S5-axiom is acceptable even though it may allow sentences of the form BA to be self-grounding. The contraposition of 5 is ¬B¬BA ⊃ BA, so BA can be in a stable expansion of {¬B¬BA} despite the fact that this set does not contain A. However, this inference is never in danger of being falsified.

Consequently, McDermott should have dropped T rather than 5. The result is the belief logic K45 (terminology from Chellas, 1980).

Moore (1985) argues that adopting these axioms in the case of autoepistemic logic is unnecessary. He shows that if A is true in every autoepistemic interpretation of T, then T is grounded in S ∪ {A} iff T is grounded in S. The immediate corollary is that if A is true in every autoepistemic interpretation of T, then T is a stable expansion of S ∪ {A} iff T is a stable expansion of S. Reading BA as A ∈ T and ¬BA as A ∉ T, the axiom schemas

(K) B(A ⊃ B) ⊃ (BA ⊃ BB)
(4) BA ⊃ BBA
(5) ¬BA ⊃ B¬BA

simply state Stalnaker's conditions 2-4. So all of their instances are true in every autoepistemic interpretation of a stable autoepistemic theory. The only other sentences with this property are instances of tautologies. So any set of premises containing K, 4, or 5 will have exactly the same stable expansions as the corresponding set of premises without those axioms.

As we are considering ideal agents it also seems reasonable to insist that their beliefs are consistent; that is,

(5) if A ∈ T then ¬A ∉ T.

The axiom corresponding to (5) is

(D) BA ⊃ ¬B¬A.

These considerations suggest that the underlying modal logic of Autoepistemic Logic is KD45.


2.3.2.3 Possible-world semantics. Moore (1984) gives an alternative possible-worlds semantics for Autoepistemic Logic for the case of ideally rational agents. Let M = <I,R,V> be a complete S5-model; that is, iRi' for every i and i' in I. As usual, truth of a sentence is defined relative to a world, and conforms to the usual truth recursion for propositional sentences. A sentence of the form BA is true at a world i iff A is true at every world i' accessible from i. Let B(M), the set of facts that are believed in M, be the set {A | M,i ⊨ A for all i ∈ M}. The major result is that the sets of sentences that are true in every world of some complete S5-model are exactly the stable autoepistemic theories (Moore, 1984).

Theorem 2.3.2.4. T = B(M) for some complete S5-model M iff T is stable.

Proof. Suppose that T = B(M) for some complete S5-model M. By the soundness of propositional logic, B(M) is closed under tautological consequence. By the truth condition for B, BA is true at every world in M just in case A is, so BA ∈ B(M) iff A ∈ B(M). Similarly, BA is false at every world just in case A is false at some world, so ¬BA ∈ B(M) iff A ∉ B(M). So B(M) is stable.

Conversely, suppose that T is stable. Let MT consist of all the worlds consistent with T; that is,

MT = {i | i is a world that satisfies all base sentences of T}.

As we have just shown, B(MT) is a stable set. By the definition of MT, the base sentences of B(MT) are exactly those of T. And by Theorem 2.3.2.1 a stable set is uniquely determined by its kernel. Hence T = B(MT). □
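The closure conditions used in this proof can be checked mechanically on a small finite model. The following Python sketch is an illustrative encoding, not Moore's formal system: a complete S5-model is just a set of propositional valuations, and a sentence is believed iff it is true at every world.

```python
# Formulas as nested tuples: ('atom','p'), ('not', f), ('B', f).
# In a complete S5-model every world sees every world, so B f is true
# (at any world) iff f is true at all worlds.

def holds(f, world, M):
    if f[0] == 'atom':
        return world[f[1]]
    if f[0] == 'not':
        return not holds(f[1], world, M)
    if f[0] == 'B':
        return all(holds(f[1], w, M) for w in M)
    raise ValueError(f[0])

def believed(f, M):
    """f is in B(M) iff f is true at every world of M."""
    return all(holds(f, w, M) for w in M)

# A model over atoms p, q in which the agent believes p but not q:
M = [{'p': True, 'q': True}, {'p': True, 'q': False}]

# Check the introspective conditions from the proof on a finite fragment:
atoms = [('atom', 'p'), ('atom', 'q')]
fragment = atoms + [('not', a) for a in atoms]
fragment += [('B', f) for f in fragment]
for f in fragment:
    assert believed(('B', f), M) == believed(f, M)                 # BA in B(M) iff A in B(M)
    assert believed(('not', ('B', f)), M) == (not believed(f, M))  # ~BA in B(M) iff A not in B(M)

print(believed(('atom', 'p'), M), believed(('atom', 'q'), M))   # True False
```

The assertions hold for any choice of M, which is exactly the first half of the theorem: B(M) is always a stable set.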

This result was obtained independently by Halpern & Moses (1984), M. Fitting, and J. van Benthem (unpublished data). It means that an autoepistemic interpretation of a stable autoepistemic theory can be characterised by an ordered pair consisting of a complete S5-model (specifying the agent's beliefs) and a propositional truth assignment (specifying what is true in the actual world).

Definition 2.3.2.8. If M is a complete S5-model and v is a propositional truth assignment, both defined on the base sentences of T, then <M,v> is a possible-world interpretation of T iff T = B(M).

An atomic sentence p is true according to <M,v> iff v(p) = true. The truth conditions for the truth-functional connectives follow the usual recursion. Finally, <M,v> ⊨ BA iff A ∈ B(M).

Definition 2.3.2.9. <M,v> is a possible-world model of T iff <M,v> is a possible-world interpretation of T and every sentence of T is true in <M,v>.

Theorem 2.3.2.5. If <M,v> is a possible-world interpretation of T, then <M,v> is a possible-world model of T iff v is consistent with the truth assignment for one of the worlds in M. (Moore, 1984.)


Intuitively this means that a set of beliefs is true just in case the actual world is one of the worlds that the agent thinks might be actual.

Theorems 2.3.2.4 and 2.3.2.5 ensure that for every autoepistemic interpretation (model) of a stable theory there is a corresponding possible-world interpretation (model) and vice versa.

2.3.2.4 Relation to default logic. Konolige (1987) shows that Default Logic and (a slightly restricted form of) Autoepistemic Logic are formally equivalent.

The base language of Autoepistemic Logic is taken to be first-order rather than propositional. However, as no quantifying-in is permitted, this change is superficial. More importantly, Konolige argues that Moore's groundedness condition is too weak. For example, S = {BA ⊃ A} has two stable expansions, T and T'. T contains A and BA; T' contains ¬BA but not A. In the belief state T, the agent's belief in A is grounded in its assumption that it believes A. This situation is anomalous as the agent is justified in believing A not because of any objective grounds but merely because it chose to believe A. This leads Konolige to strengthen the groundedness condition as follows.
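The two expansions of {BA ⊃ A} can be exhibited by brute force using the kernel characterization from 2.3.2.2: guess whether A is in T, add BA or ¬BA accordingly, and check that exactly the guessed kernel is derivable. The Python sketch below treats the modal atom Ba as an ordinary propositional atom; it is an illustrative encoding for a one-atom language, not a general implementation.

```python
from itertools import product

ATOMS = ('a', 'Ba')   # 'Ba' stands for the modal sentence BA

def entails(premises, goal):
    """Truth-table entailment over the two atoms."""
    return all(goal(v) for vals in product([True, False], repeat=2)
               for v in [dict(zip(ATOMS, vals))]
               if all(p(v) for p in premises))

def stable_expansions(S):
    """Guess the kernel (is a in T?), augment S with Ba or ~Ba, and keep the
    guess iff the augmented premises are consistent and derive exactly it."""
    expansions = []
    for a_in_T in (True, False):
        intro = (lambda v: v['Ba']) if a_in_T else (lambda v: not v['Ba'])
        premises = list(S) + [intro]
        consistent = not entails(premises, lambda v: False)
        if consistent and entails(premises, lambda v: v['a']) == a_in_T:
            expansions.append(a_in_T)
    return expansions

# S = {Ba -> a}: two stable expansions, one whose kernel contains a, one not.
S1 = [lambda v: (not v['Ba']) or v['a']]
print(stable_expansions(S1))    # [True, False]

# S = {Ba}: no stable expansion -- believing a is no ground for a itself.
S2 = [lambda v: v['Ba']]
print(stable_expansions(S2))    # []
```

Note that the expansion without a has the smaller kernel ({} versus {a}), so it is the minimal, strongly grounded one in Konolige's sense.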

Definition 2.3.2.10. T is strongly grounded in S iff

T ⊆ {B | S ∪ {BA | A ∈ S} ∪ {¬BA | A ∉ T} ⊢ B}.

Definition 2.3.2.11. An autoepistemic extension T of S is minimal for S if there is no other extension T' of S whose kernel is a proper subset of T's. (Recall that the kernel of a set is its subset of base sentences; that is, its subset of sentences that contain no modal operators.)

Proposition 2.3.2.1. An autoepistemic extension of S is strongly grounded in S iff it is minimal.

Konolige then shows that if we accept the stronger groundedness condition and restrict our attention to minimal extensions, then Autoepistemic Logic and Default Logic are equivalent. (If we consider all stable expansions then Default Logic is included in Autoepistemic Logic.) In outline, the proof is as follows.

Every default theory <D,W> can be transformed into a set of autoepistemic sentences S as follows:

α : Mβ1 & ... & Mβn / γ  ⟼  Bα & ¬B¬β1 & ... & ¬B¬βn ⊃ γ.

It can then be shown that the minimal extensions of S are exactly the extensions of <D,W>.
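The transformation itself is purely syntactic. A string-level Python sketch (illustrative only; Konolige's construction also handles the first-order details, which are omitted here):

```python
def default_to_ae(alpha, betas, gamma):
    """Translate the default  alpha : M beta_1, ..., M beta_n / gamma  into
    the autoepistemic sentence
    B(alpha) & ~B(~beta_1) & ... & ~B(~beta_n) -> gamma."""
    parts = [f'B({alpha})'] + [f'~B(~{b})' for b in betas]
    return ' & '.join(parts) + f' -> {gamma}'

# The standard 'birds fly' default:
print(default_to_ae('Bird(x)', ['Flies(x)'], 'Flies(x)'))
# B(Bird(x)) & ~B(~Flies(x)) -> Flies(x)
```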

Conversely, every set of autoepistemic sentences S is equivalent to a set in which each sentence is of the form:

¬Bα ∨ B¬β1 ∨ ... ∨ B¬βn ∨ γ.


It can then be shown that a default theory <D,W> can be constructed such that E is an extension of <D,W> iff E is the kernel of a minimal extension of S.

2.3.3 The logic of minimal knowledge. Halpern & Moses (1984) aim to characterize what an ideal agent would know by introspecting on what it knows. An agent's knowledge state is a description of what it knows, so A is in the agent's knowledge state iff the agent knows A. As we only consider the case of a single agent we will write KA to represent the fact that A is in the agent's knowledge state. So if the sentence α completely describes the agent's knowledge state, Halpern & Moses aim to characterize what the agent knows if it 'knows only α'.

Suppose that an agent knows only A. Then by introspection it can discover that it does not know B, and by further introspection it can discover that it knows that it does not know B. So its knowledge state contains ¬KB and K¬KB. So the agent's lack of knowledge leads to knowledge. However, if the agent subsequently discovers B then this will no longer be the case. Clearly what the agent knows if it 'knows only α' is nonmonotonic.

The situation is further complicated because some sentences do not uniquely characterize a knowledge state. For example an agent cannot know only KA ∨ KB. If this is all that it knows then it doesn't know A and it doesn't know B. So ¬KA and ¬KB are in its knowledge state, making it inconsistent. Sentences which uniquely characterize a knowledge state are called honest. If a sentence α does not uniquely characterize a knowledge state then α is dishonest, as an agent cannot claim to 'know only α'. Halpern & Moses present several different, but as they show equivalent, characterizations of honesty and of what an agent knows if it 'knows only α'. We look at two of these. They also define a nonmonotonic provability relation ⊢K, where α ⊢K A iff the agent knows A given that it 'knows only α'. The relation is restricted to honest α's as ⊢K 'proves' inconsistent sentences for dishonest α's.

2.3.3.1 Knowledge states as stable sets. If T is a knowledge state then T should satisfy the following conditions.

(1) All instances of propositional tautologies are in T.
(2) If A ∈ T and A ⊃ B ∈ T then B ∈ T.
(3) A ∈ T iff KA ∈ T.
(4) A ∉ T iff ¬KA ∈ T.
(5) T is (propositionally) consistent.

Conditions (1) and (2) are required as we are assuming that the agent can do perfect propositional reasoning. Conditions (3) and (4) reflect the fact that the agent is capable of introspection with regard to its knowledge. Condition (5) requires that knowledge states are consistent. Any set satisfying these conditions is stable. So knowledge states correspond to stable sets.

The agent's knowledge state when it 'knows only α' should then be the one which is in some sense minimal among the knowledge states containing α. As no stable set properly includes another, 'minimal' cannot mean set inclusion. However, a stable set is uniquely determined by its kernel (Theorem 2.3.2.1). So a possible candidate for the minimal knowledge state containing α is the stable set containing α whose kernel is minimum (w.r.t. inclusion). However, not all sentences have minimal knowledge states. For example any stable set containing the sentence α = KA ∨ KB must contain either A or B. There is a stable set containing A but not B, and a stable set containing B but not A. But a set containing α whose kernel was minimal would contain neither A nor B, and so would not be stable. This leads to the following definition. A sentence α is honestS iff there exists a stable set containing α whose kernel is minimum. For an honest sentence α, Sα is the stable set that describes the agent's knowledge if it 'knows only α'.

2.3.3.2 Knowledge states as S5-models. If M is a (complete) S5-model, let K(M) be the set of facts that are known in M; that is, K(M) = {A | M,i ⊨ A for all i ∈ M}. Then we know from Theorem 2.3.2.4 that

T = K(M) for some complete S5-model M iff T is stable.

So it is possible to characterize knowledge states by means of S5-models. The S5-model in which the agent 'knows only α' should then be the model in which it knows the least among the models in which it knows α. As K(M) ⊆ K(M') if M' ⊆ M, the appropriate model is the union of all models in which Kα holds. This model, Mα, should be the model that characterizes the agent's knowledge state if it 'knows only α'. However, it turns out that in some cases α ∉ K(Mα), in which case there seems to be no good candidate for Mα. So a sentence α is honestM iff α ∈ K(Mα). As an example of a sentence that is not honestM Halpern & Moses give the familiar sentence α = Kp ∨ Kq. The models Mp = {i | i ∈ V(p)} and Mq = {i | i ∈ V(q)} both satisfy Kα. However Mp ∪ Mq is the set of all possible worlds. This is the model of most ignorance, in which the only facts known are the tautologies of S5. Therefore, α ∉ K(Mα) and α is not honestM.
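The failure of Kp ∨ Kq can be checked concretely with a small finite model. In this illustrative Python sketch (assumed encoding, not Halpern & Moses's formal system), worlds are truth assignments to p and q, a model is the set of worlds the agent considers possible, and a fact is known iff it is true at every such world.

```python
from itertools import product

WORLDS = [dict(zip(('p', 'q'), vs)) for vs in product([True, False], repeat=2)]

def K(f, M):
    """f is known in M iff f is true at every world of M."""
    return all(f(w) for w in M)

Mp = [w for w in WORLDS if w['p']]    # a model in which the agent knows p
Mq = [w for w in WORLDS if w['q']]    # a model in which the agent knows q
M_alpha = [w for w in WORLDS if w in Mp or w in Mq]   # their union

def alpha_holds(M):
    """alpha = Kp v Kq."""
    return K(lambda w: w['p'], M) or K(lambda w: w['q'], M)

print(alpha_holds(Mp), alpha_holds(Mq), alpha_holds(M_alpha))  # True True False
# Kp v Kq holds in Mp and in Mq but fails in their union, so alpha is not
# in K(M_alpha): alpha is dishonest.
```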

2.3.3.3 Comparison with autoepistemic logic. Halpern & Moses's logic is a nonmonotonic logic of knowledge based on S5. Autoepistemic Logic is a nonmonotonic logic of belief based on KD45.

On the intuitive level the contrast is between certainty (in the case of knowledge) and conjecture (in the case of belief). There can only be one knowledge state corresponding to what an agent knows if it 'knows only α', but there may be several belief states corresponding to what an agent believes if it 'believes only α'.

Technically, the difference depends on the T-axiom. For example, α = BA has no stable expansion. Informally, this is because believing A does not constitute grounds for concluding that A is true in the world. Formally, it is because without the T-axiom we cannot derive A from BA. By contrast, α = KA is honest because an ideally rational agent that claims to know only KA is completely describing its state of knowledge (and saying that A is true in the world).

Given the distinction between knowledge and belief it comes as no surprise that conjectural inference rules, such as defaults, can be represented in autoepistemic logic but not in the logic of minimal knowledge. In the latter case, instances of defaults such as ¬KA ⊃ B (that is, KA ∨ B) are dishonest, but in autoepistemic logic BA ∨ B has a unique stable expansion. This represents further confirmation that KD45 is the right base logic for nonmonotonic reasoning.

2.4 Preference logics

Shoham (1987, 1988) defines a general semantic framework for a class of nonmonotonic logics known as preference logics.

Let L be a standard logic (propositional calculus, predicate calculus, or a modal logic) and let < be a strict partial order on the class I of all interpretations of L (where 'interpretation' is used to refer to a truth value assignment in the case of propositional calculus, an interpretation in the case of predicate calculus, and a Kripke-model in the case of a modal logic). Intuitively, i < i' means that the interpretation i is preferred over the interpretation i'. Then L and < define a new logic L< called a preference logic. The syntax of L< is identical to that of L. As for the semantics, the notions of satisfaction, validity, and entailment are defined using those of L.

Definition 2.4.1. An interpretation i preferentially satisfies A in the frame for L< (written i ⊨< A) iff i ⊨ A and there is no other interpretation i' such that i' < i and i' ⊨ A. In this case we say that i is a preferred model of A.

Clearly, if i ⊨< A then i ⊨ A.

Definition 2.4.2. A preferentially entails B (written A ⊨< B) iff any interpretation i such that i ⊨< A is such that i ⊨ B; that is, if the models of B (preferred or otherwise) are a superset of the preferred models of A.

Monotonicity can then be defined as follows.

Definition 2.4.3. L< is monotonic iff for all A, B, and C in L, if A ⊨< C then A&B ⊨< C also.

So, as the special case where < is empty, L is monotonic. The Deduction Theorem holds for any preferential logic L<; that is, for any sentences A, B, and C in L<,

if A&B ⊨< C then A ⊨< B ⊃ C.

However, the converse holds only if L< is monotonic. (If L< is nonmonotonic we have A ⊨< C and A&B ⊭< C for some A, B, and C. But then we have A ⊨< B ⊃ C and A&B ⊭< C.)
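Definitions 2.4.1-2.4.3 can be illustrated over propositional valuations with a circumscription-style preference that minimizes a single abnormality atom. The atoms and formulas below are hypothetical examples, not from the text; this is a minimal Python sketch of preferential entailment and its nonmonotonicity.

```python
from itertools import product

ATOMS = ('bird', 'ab', 'flies')

def models_of(formula):
    ms = []
    for vs in product([True, False], repeat=3):
        v = dict(zip(ATOMS, vs))
        if formula(v):
            ms.append(v)
    return ms

def less(i, j):
    """i < j: i's extension of 'ab' is a proper subset of j's (with a single
    atom this is just: ab false in i, true in j)."""
    return (not i['ab']) and j['ab']

def preferred(formula):
    """Preferred models of a formula: models with no <-smaller model of it."""
    ms = models_of(formula)
    return [m for m in ms if not any(less(m2, m) for m2 in ms)]

def pref_entails(formula, conclusion):
    return all(conclusion(m) for m in preferred(formula))

# A = bird & (bird & ~ab -> flies);  C = flies;  B = ab
A = lambda v: v['bird'] and (not (v['bird'] and not v['ab']) or v['flies'])
flies = lambda v: v['flies']
A_and_B = lambda v: A(v) and v['ab']

print(pref_entails(A, flies))        # True: in all preferred models, birds fly
print(pref_entails(A_and_B, flies))  # False: adding B = ab defeats C
```

The second print shows exactly the failure of Definition 2.4.3: A ⊨< C but A&B ⊭< C.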

2.4.1 Some preference logics. Shoham defines the following two preference relations.

In the case of predicate circumscription (2.1) the preferred interpretations are the minimal models of the circumscribed predicate P. So i < i' iff

(1) i and i' have the same domain and agree on the interpretation of variables, functions, and relations other than P,
(2) i' ⊨ P(a) if i ⊨ P(a) for all a,
(3) i' ⊨ P(a) but i ⊭ P(a) for some a.
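On a finite domain this preference can be computed directly. A small Python sketch (illustrative encoding: interpretations differ only in the extension of P, so they are just subsets of the domain) shows the classic behaviour on P(a) ∨ P(b):

```python
from itertools import combinations

DOMAIN = ('a', 'b')

# All interpretations that differ only in the extension of P:
extensions = [set(c) for r in range(len(DOMAIN) + 1)
              for c in combinations(DOMAIN, r)]

# Models of P(a) v P(b):
models = [e for e in extensions if 'a' in e or 'b' in e]

# i < i' iff i's extension of P is a proper subset of i''s:
minimal = [m for m in models if not any(m2 < m for m2 in models)]

print(sorted(sorted(m) for m in minimal))   # [['a'], ['b']]
# Both minimal models falsify P(a) & P(b), so circumscription yields
# ~(P(a) & P(b)); but neither ~P(a) nor ~P(b) holds in all minimal models.
```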


In the case of the logic of minimal knowledge (2.3.3) interpretations are S5-models. The preferred interpretations are those where fewest base sentences (sentences containing no modal operators) are known. So i < i' iff

(1) i' ⊨ KA if i ⊨ KA for all base sentences A,
(2) i' ⊨ KA but i ⊭ KA for some base sentence A.

We can also give a preference criterion for the (slightly) restricted form of Moore's autoepistemic logic which Konolige proves equivalent to default logic (2.3.2.4).

By Theorem 2.3.2.5 every stable set has a possible-worlds model consisting of a pair <M,v> in which M is an S5-model and v is an assignment of truth values which is consistent with that for one of the worlds in M. Each <M,v>-pair can be transformed into a KD45-model as follows. If M = <I,R,V> then let M' = <I',R',V'>, where i is a new world, I' = I ∪ {i}, R' extends R by having iR'i' for every i' in I, and the assignment by V' of truth values to the atomic sentences at i is consistent with v (see Fig. 1).

Fig. 1. The KD45-model, M', corresponding to <M,v>.

It is easily seen that R' is serial, transitive, and euclidean. So M' is a KD45-model. Clearly also, M ⊨ BA iff M' ⊨ BA. So M and M' are equivalent.

We can now define the preference criterion on the KD45-models that correspond to strongly grounded extensions. Preferred interpretations are those where fewest base sentences are believed. So i < i' iff

(1) i' ⊨ BA if i ⊨ BA for all base sentences A,
(2) i' ⊨ BA but i ⊭ BA for some base sentence A.

Proposition 2.4.1. The preferred models of A coincide with the strongly grounded autoepistemic extensions of A.

Proof. Suppose that i is a preferred model of A and that its corresponding possible-worlds model is <M,v>. Then B(i) = B(M); that is, the set of sentences believed in i is the same as the set of sentences believed in M. As M is an S5-model there is a stable set T such that T = B(M) (Theorem 2.3.2.4). So T = B(i). As i is a preferred model of A the number of base sentences in B(i) is minimal subject to A ∈ B(i). So the number of base sentences in T is minimal subject to A ∈ T. So T is minimal for A (Definition 2.3.2.11). So T is a stable expansion of A that is strongly grounded in A (Proposition 2.3.2.1). Conversely, suppose that T is a stable expansion of A that is strongly grounded in A. Then T is minimal for A and T = B(M) for some S5-model M. Consider any possible-worlds model <M,v> and corresponding KD45-model i. As T is minimal for A, i is a preferred model of A. □

So the preferred models of A coincide with what the ideally rational agent ought to believe given that it believes only A.

Using the notions of preferential satisfaction and preferential entailment we can define two different notions of autoepistemic theoremhood as follows. If the autoepistemic theorems of A are the intersection of the stable expansions of A that are strongly grounded in A, then the autoepistemic consequences of A are the set:

{B | A preferentially entails B}.

And if the autoepistemic theorems of A are the union of the stable expansions of A that are strongly grounded in A, then the autoepistemic consequences of A are the set:

{B | A preferentially satisfies B}.

So all of the nonmonotonic logics we have considered can be characterized as preference logics. Further examples are given in the next section.

2.4.2 The logic of preference logics. Shoham has given the semantics of preference logics. The question arises as to whether it is possible to give a corresponding proof theory. In Bell (1990) we show that this can be done using the conditional logic C. Insofar as all nonmonotonic logics can be characterized as preference logics, C is the logic of nonmonotonicity.

3 Reasoning about change

The aim of knowledge representation is to describe the world in a rigorous but computably tractable way. When it comes to reasoning about change we encounter three fundamental problems.

The qualification problem (McCarthy, 1977) consists in the fact that the number of preconditions for the successful performance of an action is immense. Ginsberg & Smith (1987) give the example of a robot attempting to move a bookcase. Many things could prevent it from doing so: the bookcase might be too heavy with all the books in it, it could be fastened to the floor, the door might be too small, the house might catch fire, or there might be a nuclear war. It is infeasible to record all of the preconditions of an action, and computationally intractable to check them all explicitly.

The frame problem (McCarthy & Hayes, 1969) is that of indicating and inferring all those things that do not change as actions occur and time passes. The allusion is to animated films where the background changes little from frame to frame as foreground events unfold. If Ginsberg & Smith's robot were to move the lamp from the bookcase to the table, we need to determine that the vacuum cleaner and carpet remain stationary and do not change shape or colour, etc.

Finally, the ramification problem (Finger, 1987) is that it is unreasonable to record explicitly all of the consequences of actions, even the immediate ones. For any given action there are a potentially infinite number of possible consequences that depend upon the details of the situation in which the action occurs. If the robot succeeds in moving the bookcase, all of the books in it move also, so do the objects on top of it, the furniture is re-arranged, parts of the carpet become covered or uncovered, etc.

In the following discussion we will concentrate on the frame problem and the ramification problem. These two problems are complementary in the sense that avoiding one means encountering the other, or more optimistically, solving one can mean avoiding the other.

It was partly in response to these three problems that nonmonotonic logics were developed. McCarthy (1980) discusses the qualification problem as part of the motivation for circumscription, and suggests that circumscription can be used to solve it. He also proposes (McCarthy, 1984) using circumscription as a means of solving the frame problem. We will begin by looking at this proposal in more detail.

3.1 The frame problem in the situation calculus

The frame problem was first discussed within the specific context of the situation calculus (sc) of McCarthy & Hayes (1969). sc is a formalism in which time and change are represented in terms of facts holding (or failing to hold) in situations which are related by events. A situation is a complete description of the world at a particular point in time; so every fact is unambiguously either true or false in a situation. To assert that fact f is true in situation s we write T(f,s), where T is a predicate and f and s are terms. (So, syntactically, facts are terms naming first-order formulas.) Happenings in the world are represented by events. Events may change the truth-value of facts, and can thus be thought of as causing a transition from one situation to another. The transition is represented by means of the function r (for result) which maps a situation and an event to another situation. So if S0 is a situation and wakeup(John) is an event, then r(wakeup(John),S0) is the situation resulting from John waking up in S0. We can then state that John is awake in this situation:

T(awake(John), r(wakeup(John), S0)),

and, more generally, we can state that this is the case for any individual:

(∀p,s)(T(awake(p), r(wakeup(p), s))).

The frame problem arises when we try to express the idea that facts tend to stay true (persist) from situation to situation as irrelevant events occur. For example, we would intuitively think that John is awake in the situation

S2 = r(eat-breakfast(John), r(wakeup(John), S0))


because it is typically the case that eating breakfast does not cause one to fall asleep. But given the above axioms there is no way to deduce

T(awake(John), S2).

In order to conclude that John is awake in S2 we could add the frame axiom

(∀s)(T(awake(John), s) ⊃ T(awake(John), r(eat-breakfast(John), s))),

but this seems arbitrary and may on occasion be false. Furthermore, any realistic description of the world will require that we add an unacceptably large number of such axioms.
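To see concretely why nothing licenses persistence, consider a toy interpreter in which the facts derivable at a result situation come from effect axioms alone. This is an illustrative Python encoding, not McCarthy & Hayes's formal system; fed(John) is a hypothetical effect added for the example.

```python
# Effect axioms only: the facts derivable in r(e, s) are just e's effects.
EFFECTS = {
    'wakeup(John)':        {'awake(John)'},
    'eat-breakfast(John)': {'fed(John)'},    # hypothetical effect
}

def result(event, facts):
    """Facts deducible in r(event, s) without any frame axioms."""
    return set(EFFECTS.get(event, set()))

S0 = set()
S1 = result('wakeup(John)', S0)
S2 = result('eat-breakfast(John)', S1)
print('awake(John)' in S1)   # True: the effect axiom applies
print('awake(John)' in S2)   # False: nothing carries the fact forward

# A default 'frame rule': facts persist unless explicitly cancelled.
def result_with_persistence(event, facts, cancels=()):
    return (facts - set(cancels)) | EFFECTS.get(event, set())

S2p = result_with_persistence('eat-breakfast(John)', S1)
print('awake(John)' in S2p)  # True: persistence assumed by default
```

The second function is, in effect, what the nonmonotonic proposals discussed below try to capture declaratively.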

3.2 McCarthy's proposal

McCarthy (1984) proposed extending sc by adding the new predicate Ab. Ab(f,e,s) means that fact f is abnormal if event e occurs in situation s. We then say that 'normal' facts persist; that is,

(∀f,e,s)(T(f,s) & ¬Ab(f,e,s) ⊃ T(f, r(e,s))).

And add axioms stating when facts become abnormal; for example

(∀p,s)(Ab(awake(p), gotosleep(p), s)).

Then in order to be able to infer that fact f is not abnormal with respect to event e occurring in situation s, and hence that f persists in r(e,s), it is sufficient to circumscribe Ab while allowing T to vary.

This last twist requires a more expressive form of circumscription, parameterized circumscription, which allows the domains of predicates named in a new circumscription axiom schema to vary while the domains of all other predicates remain fixed. We will write simply Circum(A,Ab,T) for the result of circumscribing Ab in A allowing T to vary. The specific axiom schema required is

A(Ab,T) & (∀Ab',T')(A(Ab',T') & (∀f,e,s)(Ab'(f,e,s) ⊃ Ab(f,e,s))) ⊃ (∀f,e,s)(Ab'(f,e,s) ≡ Ab(f,e,s));

where A(Ab,T) indicates that Ab and T are substitutable predicates occurring in A, and A(Ab',T') denotes the result of substituting Ab' for Ab and T' for T in A.

The preference criterion of Section 2.4.1 is easily modified: M < M' iff

(1) M and M' have the same domain and agree on the interpretation of variables, functions, and relations other than T and Ab,
(2) M' ⊨ Ab(f,e,s) if M ⊨ Ab(f,e,s) for all <f,e,s>,
(3) M' ⊨ Ab(f,e,s) but M ⊭ Ab(f,e,s) for some <f,e,s>.

3.2.1 The Yale shooting problem. Hanks & McDermott (1985, 1986, 1987) question whether McCarthy's suggestion does indeed represent what we intuitively mean by assuming that facts tend to persist over time. They present the following simple example of the frame problem. At a given moment an individual (Fred) is alive. A gun is loaded and, after a short pause, it is fired at Fred. Is Fred dead as a result? Intuitively we think so, as we assume that the gun remains loaded until fired. The example can be represented in sc by the following set, A, of axioms:

(1) T(ALIVE, S0)
(2) (∀s)(T(LOADED, r(LOAD, s)))
(3) (∀s)(T(LOADED, s) ⊃ Ab(ALIVE, SHOOT, s) & T(DEAD, r(SHOOT, s)))
(4) (∀f,e,s)(T(f,s) & ¬Ab(f,e,s) ⊃ T(f, r(e,s)))

For simplicity facts are represented as atomic propositions. The axioms are intended to refer to Fred, who in any situation is either alive or dead, and the gun, which can be either loaded or unloaded. At S0 Fred is alive (1). The gun becomes loaded any time a load event happens (2). Axiom (3) says that if Fred is shot with the loaded gun in any situation then his being alive becomes abnormal and he is dead in the resulting situation. Axiom (4) is the familiar assertion that 'normal' facts persist. The events result in the following sequence of situations:

S1 = r(LOAD, S0), S2 = r(WAIT, S1), S3 = r(SHOOT, S2);

where the event WAIT represents the period of time in which nothing significant happens. Now suppose that, following McCarthy's suggestion, we circumscribe Ab in A (allowing T to vary); can we conclude T(DEAD, S3) from Circum(A,Ab,T)? Hanks & McDermott show that we cannot.

Recall that, by the soundness of circumscription, the circumscriptive inferences are those sentences that are true in all models of Circum(A,Ab,T) minimal in Ab. The converse of this is that if a sentence is not true in all such minimal models then it is not inferable from the circumscription. In the present example the circumscription results in two conflicting models.

The first model results from reasoning forwards in time, minimising abnormalities along the way. From (2) we can deduce T(LOADED, S1). As we cannot infer Ab(LOADED, WAIT, S1) we can assume its negation. This with (4) gives T(LOADED, S2). Then (3) gives Ab(ALIVE, SHOOT, S2) and T(DEAD, S3).

The second model results from reasoning backwards in time, minimising abnormalities along the way. Ab(ALIVE, SHOOT, S2) is not inferable so we can assume its negation. By the contraposition of (3) we can deduce ¬T(LOADED, S2), blocking a proof of T(DEAD, S3). And we can assume ¬Ab(ALIVE, WAIT, S1) and ¬Ab(ALIVE, LOAD, S0). So by (1) and (4) we can deduce T(ALIVE, S3). Thus all we can conclude from Circum(A,Ab,T) is T(DEAD, S3) ∨ ¬T(LOADED, S3), and not T(DEAD, S3) as intended.
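The two competing minimal models can be exhibited by brute force over the finite sequence of situations. The Python sketch below uses a simplified encoding (my assumptions, not the authors' formalization): DEAD is represented as ¬ALIVE, and since the event at each transition is fixed (LOAD, WAIT, SHOOT at transitions 0, 1, 2), Ab is keyed by fact and transition index.

```python
from itertools import product

FACTS = ('ALIVE', 'LOADED')   # DEAD is encoded as not ALIVE

def models():
    """Enumerate all (T, Ab) assignments satisfying axioms (1)-(4)."""
    for tbits in product([True, False], repeat=8):      # T(fact, sit), sit 0..3
        T = dict(zip(product(FACTS, range(4)), tbits))
        for abits in product([True, False], repeat=6):  # Ab(fact, transition)
            Ab = dict(zip(product(FACTS, range(3)), abits))
            ok = (T[('ALIVE', 0)]                                   # (1)
                  and T[('LOADED', 1)]                              # (2)
                  and (not T[('LOADED', 2)]                         # (3)
                       or (Ab[('ALIVE', 2)] and not T[('ALIVE', 3)]))
                  and all((not T[(f, s)]) or Ab[(f, s)] or T[(f, s + 1)]
                          for f in FACTS for s in range(3)))        # (4)
            if ok:
                yield T, frozenset(k for k, v in Ab.items() if v)

all_models = list(models())
ab_sets = {ab for _, ab in all_models}
minimal = {ab for ab in ab_sets if not any(other < ab for other in ab_sets)}

print(sorted(sorted(m) for m in minimal))
# Two incomparable Ab-minimal classes:
#   [('ALIVE', 2)]  -- the intended model: Fred is dead at S3
#   [('LOADED', 1)] -- the unintended one: the gun 'unloads' during WAIT
dead_at_S3 = {not T[('ALIVE', 3)] for T, ab in all_models if ab in minimal}
print(dead_at_S3)   # {True, False}: Fred's death is not a consequence
```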

Hanks & McDermott go on to show that Default Logic and NML also fail to produce the intended result. They also show that the problem arises when using formalisms other than sc.


They diagnose the fault as follows. Two different models arise as a result of the circumscription because of the temporal order in which abnormalities are minimized. The first (intended) model is built by assuming an earlier normality, ¬Ab(LOADED, WAIT, S1), and deriving a later abnormality, Ab(ALIVE, SHOOT, S2). The second (unintended) model results from assuming a later normality, ¬Ab(ALIVE, SHOOT, S2), and deriving an earlier abnormality, Ab(LOADED, WAIT, S1). This suggests that we want circumscription to select models which are minimal in Ab and in which abnormalities occur as late as possible. Shoham calls these models the chronologically minimal models. However, this more sophisticated minimality criterion cannot be represented in terms of set inclusion, so circumscription is unable to represent the shooting problem correctly.

Similar remarks apply to Default Logic and to NML. In order to get only the intended extensions (fixed points) we need to have some way of mediating the order of application of defaults (NML axioms) such that abnormality occurs as late as possible. Again, this is beyond the expressive power of the logics.

3.3 Proposed solutions

3.3.1 Pointwise circumscription. Lifschitz (1986) suggests a more expressive form of circumscription, pointwise circumscription, which is capable of representing the chronological minimality criterion directly.

The fundamental idea of pointwise circumscription is that predicates are interpreted as characteristic functions rather than as sets. We can then think of minimizing a predicate at a particular point (argument of the characteristic function) by changing its value at that point from true to false. It is then possible to give a circumscription axiom which allows a predicate to be minimized at some points in preference to others according to a 'circumscription policy'.

For the shooting problem we want to chronologically minimize abnormalities. So a point is a fact-event-situation triple ⟨f,e,s⟩, and a temporal ordering relation ⪯ is defined on points such that ⟨f,e,si⟩ ⪯ ⟨f',e',sj⟩ iff 0 ≤ i ≤ j ≤ 3. Ab is then minimized according to the temporal ordering defined by ⪯ (with T allowed to vary as a parameter). Lifschitz gives the following schema:

(∀f,e,s,Ab',T') ¬(Ab(f,e,s)
  & (∀f',e',s')(⟨f',e',s'⟩ ≺ ⟨f,e,s⟩ ⊃ (Ab'(f',e',s') ≡ Ab(f',e',s')))
  & A(λf',e',s'[Ab'(f',e',s') & ⟨f,e,s⟩ ≠ ⟨f',e',s'⟩], T'));

which states that there is no Ab' and T' such that (i) Ab is true at ⟨f,e,s⟩, (ii) Ab and Ab' agree on all points earlier than ⟨f,e,s⟩, and (iii) Ab', made false at ⟨f,e,s⟩, satisfies A at T'. That is, there is no Ab' which agrees with Ab on all earlier points ⟨f',e',s'⟩ and which also satisfies A while being false at ⟨f,e,s⟩. So Ab is minimal up to ⟨f,e,s⟩.

Hanks & McDermott conduct a lengthy proof that this does produce exactly those sentences that are true in all chronologically minimal models. So we can conclude that Fred is dead as a result of the sequence of events.
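The effect of the schema can be illustrated operationally. The sketch below is our own illustration, not Lifschitz's construction: points are scanned in temporal order and an abnormality is dropped whenever its removal, with earlier points held fixed, still leaves the theory satisfiable. The `satisfiable` oracle and the toy encoding of the shooting constraint are assumptions.

```python
# Sketch: greedy chronological minimization in the spirit of pointwise
# circumscription.  Points are (fact, event, situation-index) triples;
# an abnormality is kept only if dropping it makes the theory
# unsatisfiable.  `satisfiable` is an assumed oracle, not Lifschitz's.

def chronologically_minimize(points, candidate_ab, satisfiable):
    ab = set(candidate_ab)
    for p in sorted(points, key=lambda q: q[2]):   # temporal order
        if p in ab and satisfiable(ab - {p}):
            ab.discard(p)                          # minimize at this point
    return ab

# Toy constraint: the shooting axioms force either an abnormality for
# ALIVE at situation 2 or one for LOADED at situation 1 (assumed encoding).
def satisfiable(ab):
    return ("ALIVE", "SHOOT", 2) in ab or ("LOADED", "WAIT", 1) in ab

points = [("LOADED", "WAIT", 1), ("ALIVE", "SHOOT", 2)]
result = chronologically_minimize(points, set(points), satisfiable)
print(result)   # {('ALIVE', 'SHOOT', 2)}: only the late abnormality survives
```

Scanning forwards in time makes the early abnormality (the gun mysteriously unloading) the first candidate for removal, so only the intended, late abnormality remains.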


As for semantics, we can define a preference criterion similar to that given by Shoham in the next subsection: M <Ab M' iff there is an i such that

(1) M and M' have the same domain and agree on the interpretation of variables, functions, and relations other than T and Ab,
(2) M' ⊨ Ab(f,e,sj) if M ⊨ Ab(f,e,sj) for all f and e and j ≤ i,
(3) M' ⊨ Ab(f,e,si) and M ⊭ Ab(f,e,si) for some f and e.
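The criterion can be checked concretely. In the sketch below (our own encoding, not from the text) a model is abbreviated to the set of (fact, event, situation-index) triples at which Ab holds; clause (1), agreement everywhere else, is taken for granted.

```python
# Sketch: testing M <Ab M' for models abbreviated to their sets of
# abnormality points (f, e, i), with i the situation index.
# Clause (1) is assumed; this checks clauses (2) and (3).

def chron_less_abnormal(M, Mp, horizon=3):
    upto = lambda A, i: {p for p in A if p[2] <= i}
    for i in range(horizon + 1):
        # (2): every abnormality of M at j <= i is also in M'
        # (3): M' has some abnormality at i that M lacks
        if upto(M, i) <= upto(Mp, i) and any(p[2] == i for p in Mp - M):
            return True
    return False

intended   = {("ALIVE", "SHOOT", 2)}    # Fred dies: late abnormality
unintended = {("LOADED", "WAIT", 1)}    # gun unloads: early abnormality
print(chron_less_abnormal(intended, unintended))   # True
print(chron_less_abnormal(unintended, intended))   # False
```

The intended model is strictly preferred to the unintended one because its only abnormality occurs later, exactly as the chronological minimality criterion requires.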

3.3.2 Chronologically maximal ignorance. Rather than minimize abnormalities, Shoham (1986, 1988) proposes chronologically minimizing knowledge. The idea is that we can assume that facts persist unless we know otherwise. He presents TK, a monotonic logic of temporal knowledge. We consider a subset of TK which contains sentences like (t,A), meaning A is true at time t, and □(t,A), meaning that A is known to be true at time t. The semantics for the knowledge operator □ are those of the modal logic S5. For our purposes what matters is that while

□(t,A) ∨ ¬□(t,A)

is always true,

□(t,A) ∨ □(t,¬A)

need not be. (If you don't know A at t it does not necessarily follow that you know ¬A at t.) For current purposes time is interpreted over the integers.

The nonmonotonic logic CI, the logic of chronological ignorance, is then defined by supplying the preference criterion <TK, which chronologically minimizes the number of base sentences known.

A model M is chronologically more ignorant than a model M' (written M <TK M') iff there is a time t such that

(1) for all t' ≤ t and base sentences A, M' ⊨ □(t',A) if M ⊨ □(t',A), and
(2) for some base sentence A, M' ⊨ □(t,A) and M ⊭ □(t,A).

M is a chronologically maximally ignorant model (c.m.i. model) of a sentence A if M preferentially satisfies A; that is, M ⊨ A and there is no other M' such that M' <TK M and M' ⊨ A.

The shooting problem can then be represented by the following axioms:

(1) □(0,LOADED) & □(0,ALIVE) & □(2,SHOOT)
(2) □(t,LOADED) & ¬□(t,SHOOT) & ¬□(t,EMPTIED-MANUALLY) ⊃ □(t+1,LOADED)
(3) □(t,LOADED) & □(t,SHOOT) ⊃ □(t+1,¬ALIVE) (for all t)

Axiom (1) represents the boundary conditions. Axioms (2) and (3) are causal rules. Axiom (2) corresponds to a frame axiom: it says that the gun remains loaded unless there is some known reason for it not doing so. Axiom (3) states that if Fred is shot with the loaded gun at time t then he is dead at time t+1.

Reasoning forwards from 0, it is easy to see that all c.m.i. models of this theory satisfy □(3,¬ALIVE), as any such model must satisfy □(1,LOADED) and □(2,LOADED). (Suppose that M ⊨ □(1,LOADED) and M' ⊭ □(1,LOADED). Then, by (2), M' ⊨ □(0,SHOOT) or M' ⊨ □(0,EMPTIED-MANUALLY). So M <TK M'. Similarly for □(2,LOADED).)
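The forward reasoning just described can be mimicked by simple forward chaining: at each time step we assert only what the causal rules force to be known, and everything not derived stays unknown. The encoding below is our own sketch, not Shoham's.

```python
# Sketch: computing the chronologically maximally ignorant model of the
# shooting axioms by forward chaining.  Knowledge at each time is a set
# of known base facts; anything not derived remains unknown.
# (Names and representation are illustrative assumptions.)

def cmi_model(horizon=3):
    known = {0: {"LOADED", "ALIVE"}, 2: {"SHOOT"}}   # axiom (1)
    for t in range(horizon):
        k = known.get(t, set())
        nxt = known.setdefault(t + 1, set())
        # axiom (2): the gun stays loaded unless a SHOOT or
        # EMPTIED-MANUALLY event is known at t
        if "LOADED" in k and "SHOOT" not in k and "EMPTIED-MANUALLY" not in k:
            nxt.add("LOADED")
        # axiom (3): shooting the loaded gun kills Fred
        if "LOADED" in k and "SHOOT" in k:
            nxt.add("~ALIVE")
    return known

model = cmi_model()
print(model[3])   # {'~ALIVE'}
```

Because nothing is known at time 0 beyond the boundary conditions, the frame axiom fires at 0 and 1, LOADED persists to time 2, and the causal rule then yields □(3,¬ALIVE), matching the c.m.i. argument above.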

Models in which the gun remains loaded are preferred to models in which the gun becomes unloaded, because the latter models admit knowledge of an unloading event which occurs temporally earlier than the knowledge that Fred is dead.

Minimizing knowledge amounts to minimizing all known facts at each point in time. This cannot be achieved by minimizing truth, as minimizing the times when A holds implies maximizing the times when ¬A holds. The idea works because it exploits the partiality of knowledge contexts: ¬□(t,A) does not imply □(t,¬A).

3.3.3 A formal theory of action. Lifschitz (1987) proposes a solution to the shooting problem which involves providing axioms for a theory of action. We will again follow the presentation in Hanks & McDermott (1987).

Lifschitz reverts to the original ontology of SC. Events are replaced by actions, and facts are replaced by fluents. For current purposes, fluents are functions from situations to truth values. For example, FALSE is a fluent that is true in no situation, and TRUE is a fluent that is true in every situation.

The theory of action is then built up as follows. The predicate Causes has the form Causes(action,fluent1,fluent2), and means that if action is executed it causes fluent1 to take the value of fluent2 in the resulting situation. Examples: Causes(LOAD,LOADED,TRUE), Causes(SHOOT,ALIVE,FALSE).

The predicate Precond determines whether an action is successful: Precond(fluent,action) means that action will succeed in a situation only if fluent is true in that situation.

An action is successful in a situation just in case all of its preconditions are true in that situation:

(∀a,s)(Success(a,s) ≡ (∀f)(Precond(f,a) ⊃ T(f,s))).

The predicate Affects can then be defined as follows:

(∀a,f,s)(Affects(a,f,s) ≡ Success(a,s) & (∃b)Causes(a,f,b)).

So an action affects the value of a fluent just in case the action is feasible and causes the fluent to take on some truth value.

It is then possible to define how an action can change a fluent's value:

(∀a,s,f,b)(Success(a,s) & Causes(a,f,b) ⊃ (T(f,r(a,s)) ≡ (b = TRUE))),
(∀a,s,f)(¬Affects(a,f,s) ⊃ (T(f,r(a,s)) ≡ T(f,s))).

The first sentence can be thought of as a law of change: if a successful action causes a change in the value of a fluent, the result of the action is to change the fluent's value to that value. The second sentence represents a law of inertia: if an action does not affect the value of a fluent, its value persists as a result.

For a given problem we then list all known causes and preconditions. We then circumscribe Causes and Precond allowing T to vary, which has the effect of making the explicit causes and preconditions the only causes and preconditions.


For the shooting problem we have

Causes(LOAD,LOADED,TRUE) & Causes(SHOOT,ALIVE,FALSE) & Precond(LOADED,SHOOT)

and, as a result of the circumscription, we have

(∀a,f,b)(Causes(a,f,b) ≡ (a = LOAD & f = LOADED & b = TRUE) ∨ (a = SHOOT & f = ALIVE & b = FALSE)),
(∀f,a)(Precond(f,a) ≡ f = LOADED & a = SHOOT).

Then purely deductive reasoning yields the desired result:

¬T(ALIVE, r(SHOOT, r(WAIT, r(LOAD, S0)))).

This approach works because the circumscription ensures that the only changes in the values of fluents are the result of some explicitly stated cause; as Causes(WAIT,LOADED,FALSE) does not follow from Circum(A,{Precond,Causes},T), the success of a WAIT action does not cause the gun to become unloaded. There is a unique model of the axioms because neither of the circumscribed predicates contains a temporal reference; consequently they can be minimized in only one way.
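Because the circumscribed theory has a unique model, it can be executed directly. The sketch below is our own encoding: situations are represented as sets of true fluents, and r applies the law of change for successful actions and the law of inertia otherwise. Only the predicate and action names come from the text; everything else is an assumption.

```python
# Sketch of the circumscribed shooting theory.  The circumscribed
# extensions of Causes and Precond are exactly the listed tuples;
# r(a, s) implements the laws of change and inertia.

CAUSES = {("LOAD", "LOADED", True), ("SHOOT", "ALIVE", False)}
PRECOND = {("LOADED", "SHOOT")}

def success(action, s):
    # An action succeeds iff all of its preconditions hold in s.
    return all(f in s for (f, a) in PRECOND if a == action)

def r(action, s):
    # Result of performing action in situation s.
    new = set(s)
    if success(action, s):
        for (a, f, b) in CAUSES:
            if a == action:          # law of change
                new.discard(f)
                if b:
                    new.add(f)
    return frozenset(new)            # law of inertia: everything else persists

S0 = frozenset({"ALIVE"})            # initially Fred is alive, gun unloaded
S3 = r("SHOOT", r("WAIT", r("LOAD", S0)))
print("ALIVE" in S3)                 # False: Fred is dead
```

WAIT has no entry in CAUSES, so by the law of inertia it changes nothing; the gun stays loaded and the SHOOT action kills Fred, reproducing the deductive result above.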

The preferred models are those that are minimal in Causes and Precond (with T allowed to vary). So M <P,C M' iff

(1) M and M' have the same domain and agree on the interpretation of variables, functions, and relations other than T, Precond and Causes,
(2) M' ⊨ Precond(f,a) if M ⊨ Precond(f,a) for all f and a,
(3) M' ⊨ Causes(a,f1,f2) if M ⊨ Causes(a,f1,f2) for all a, f1 and f2, and
(4) M' ⊨ Precond(f,a) and M ⊭ Precond(f,a) for some f and a, or M' ⊨ Causes(a,f1,f2) and M ⊭ Causes(a,f1,f2) for some a, f1, f2.

3.4 Pragmatics

The nonmonotonic logics of Section 2 are unable to represent the shooting problem correctly because their inference mechanisms cannot represent the major pragmatic element involved: the need to reason forwards in time while minimizing changes. Consequently there is no way of restricting the models of the axioms to the intended ones. The proposals based on chronological minimization succeed precisely because they do allow this element of the problem to be represented.

The third proposal succeeds because it restricts changes to those permitted by the theory. In doing so it evades the problem rather than solving it. Circumscribing Causes restricts changes to explicitly stated ones, so the frame problem does not arise.⁵ However, as it is necessary to provide the appropriate extension of the Causes predicate, attempts to generalise the proposal are limited by the ramification problem: as the complexity of the domain increases it will become increasingly difficult, if not impossible, to explicitly record all of the consequences of every action. Circumscribing Causes produces the same effect as the STRIPS add and delete lists (Fikes & Nilsson, 1971), so the proposal embodies the STRIPS Assumption: facts which aren't explicitly changed by an action persist.

The approaches relying on chronological minimization allow the frame problem to arise, as changes follow as inferential consequences of the axioms. Chronological minimization, CM, is more than a technical hack. It captures two central principles of reasoning about change.

(1) All change is rule governed: nothing changes unless it is caused to. To allow otherwise is to allow miracles, and allowing miracles involves a more complex form of reasoning.

(2) The causal relation is temporally directed; that is, when reasoning causally we reason forwards in time from cause to effect. (Some philosophers have challenged this, but for a convincing argument in favour see (Dummett, 1978).)

CM amounts to the principle that things don't change until (and so unless) they have to (or are caused to). It can therefore be considered as the major pragmatic element in reasoning about change.

In order to appreciate just how general the principle is, try and think of a case where it does not apply. Kautz (1986) gives the following example. Suppose that I park my car in the parking lot at t = 1 and suppose that I learn at t = 1000 that my car is gone. Then using CM I can conclude that it was still there at t = 999. He argues that this is clearly unreasonable, as someone could have stolen it at t = 5. However, the example only works because it introduces the extra dimension of my beliefs about the world. CM gives the correct conclusion that the car remains where it is until it is moved. It is silent as to what constitutes a reasonable belief about the matter. Another purported counter-example is where you need to know something as soon as possible; for example, that two objects are on a collision course. This, however, is simply a matter of how soon (if at all) the axioms indicate the fact.

Several points are clarified by viewing the matter in terms of preference criteria. The nonmonotonic logics of Section 2 fail because their preference criteria, defined in terms of minimality (using the ⊆ relation), are unable to distinguish the intended interpretations from the unintended ones. (Note that the problem arises because there are unintended models, and not simply because there are multiple models.) The proposed solutions work because they all supply appropriate preference criteria for the problem. Those based on CM provide the additional pragmatic information required in the form of a partial order that restricts ⊆, with the result that only the intended interpretations (those in which changes are deferred) are considered. The preference criterion for the third proposal relies on ⊆ alone. This is a good indication that the problem has been avoided: as there is no possibility of unintended results of an action there are no unintended interpretations, and hence no need for an ordering restricting ⊆. The frame problem only arises where there is the possibility that unintended results of an action can lead to unintended interpretations. It is then solved by specifying the appropriate restriction of ⊆.
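All of the workable proposals instantiate the same scheme: fix a strict preference order on models and keep only the most preferred ones. A generic sketch of preferential entailment (our own abstraction, with toy model names):

```python
# Sketch: generic preferential entailment.  `models` is any finite set
# of models of the axioms and prefer(m, n) holds when m is strictly
# preferred to n (e.g. chronologically more minimal in Ab, or
# chronologically more ignorant).

def preferred(models, prefer):
    # Keep the models to which no other model is strictly preferred.
    return [m for m in models if not any(prefer(n, m) for n in models)]

# With the shooting problem's two models and a chronological preference
# order, only the intended model survives:
models = ["fred-dies", "gun-unloads"]
prefer = lambda m, n: m == "fred-dies" and n == "gun-unloads"
print(preferred(models, prefer))   # ['fred-dies']
```

Plugging in plain set-inclusion minimality as `prefer` recovers the Section 2 logics, which is exactly why they cannot separate the two models: neither extension includes the other.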


4 Conclusion

We have considered nonmonotonic reasoning and compared and contrasted several formalizations of it. Despite prima-facie differences, the similarities among these logics were confirmed when we saw how they could all be represented as preference logics. Finally we saw that the application of nonmonotonic logics is not without its pitfalls. The implicit assumption behind the development of nonmonotonic logics (that a simple extension to FOPC would capture all forms of nonmonotonic reasoning) was shown to be over-optimistic. However, we saw that less general logics could be defined which were capable of representing the pragmatic elements of more specific forms of nonmonotonic reasoning. In particular we saw that nonmonotonic logics which embody the principle of chronological minimization can be defined in the domain of causal reasoning.

Acknowledgments

I would like to thank Martin Henson, Ray Turner and an anonymous referee for their helpful comments and suggestions. This research was supported by the United Kingdom Science and Engineering Research Council.

Notes

1 This form of reasoning has also been called default reasoning or autoepistemic reasoning.
2 In this article we assume some familiarity with FOPC and modal logic. A good introductory text is Hughes & Cresswell (1968).
3 This is a simplification of Shoham's treatment: as we will only be interested in sentences which are valid in Kripke-models, we need not consider individual worlds in such models.
4 We are following the presentation in Hanks & McDermott (1987).
5 Similarly, circumscribing Precond restricts the preconditions of an action to the explicitly stated ones, so the qualification problem is avoided.

References

Bell, J. (1990) 'The Logic of Nonmonotonicity', Research Note CSM-122, Department of Computer Science, University of Essex, Wivenhoe Park, Colchester CO4 3SQ. Artificial Intelligence, 41, 365-374.
Boolos, G. (1979) The Unprovability of Consistency, Cambridge University Press, London.
Chellas, B. F. (1980) Modal Logic: An Introduction, Cambridge University Press, London.
Davis, M. (1980) 'The Mathematics of Non-Monotonic Reasoning', Artificial Intelligence, 13, 73-80.
Dummett, M. (1978) 'Can an Effect Precede its Cause?' and 'Bringing About the Past'. In Truth and Other Enigmas, Duckworth, London.
Etherington, D. W. (1987) 'Relating Default Logic and Circumscription', IJCAI, 10, 489-494.
Fikes, R. & Nilsson, N. J. (1971) 'STRIPS: A New Approach to the Application of Theorem Proving to Problem Solving', Artificial Intelligence, 2, 189-208.
Finger, J. J. (1987) Exploiting Constraints in Design Synthesis, Ph.D. Thesis, Stanford University, Stanford, CA.
Ginsberg, M. L. & Smith, D. E. (1987) 'Reasoning About Action I: A Possible Worlds Approach'. In The Frame Problem in Artificial Intelligence, (ed. F. M. Brown), pp. 233-258, Morgan Kaufman, Los Altos, CA.
Halpern, J. & Moses, Y. (1984) 'Towards a Theory of Knowledge and Ignorance: Preliminary Report', Technical Report RJ 4448 (48316), IBM Research Laboratory, San Jose.
Hanks, S. & McDermott, D. (1985) 'Temporal Reasoning and Default Logics', Computer Science Research Report No. 430, Yale University, New Haven, CT.
Hanks, S. & McDermott, D. (1986) 'Default Reasoning, Nonmonotonic Logics, and the Frame Problem', AAAI-86, 328-333.
Hanks, S. & McDermott, D. (1987) 'Nonmonotonic Logic and Temporal Projection', Artificial Intelligence, 33, 379-412.
Hughes, G. E. & Cresswell, M. J. (1968) An Introduction to Modal Logic, Methuen, London.
Kautz, H. A. (1986) 'The Logic of Persistence', AAAI-86, 401-405.
Konolige, K. (1987) 'On the Relation Between Default Theories and Autoepistemic Logic', IJCAI, 10, 394-401.
Lifschitz, V. (1986) 'Pointwise Circumscription: Preliminary Report', AAAI-86, 406-410.
Lifschitz, V. (1987) 'Formal Theories of Action'. In The Frame Problem in Artificial Intelligence, (ed. F. M. Brown), pp. 35-58, Morgan Kaufman, Los Altos, CA.
McCarthy, J. M. (1977) 'Epistemological Problems of Artificial Intelligence', IJCAI, 5, 1038-1044.
McCarthy, J. M. (1980) 'Circumscription: A Form of Nonmonotonic Reasoning', Artificial Intelligence, 13, 27-39.
McCarthy, J. M. (1984) 'Applications of Circumscription to Formalising Common Sense Knowledge', Proceedings of the Nonmonotonic Reasoning Workshop, pp. 151-164.
McCarthy, J. & Hayes, P. J. (1969) 'Some Philosophical Problems from the Standpoint of Artificial Intelligence'. In Machine Intelligence 4, (eds B. Meltzer & D. Michie), pp. 463-502, Edinburgh University Press, Edinburgh.
McDermott, D. (1982) 'Nonmonotonic Logic II: Nonmonotonic Modal Theories', JACM, 29, 33-57.
McDermott, D. & Doyle, J. (1980) 'Non-Monotonic Logic I', Artificial Intelligence, 13, 41-72.
Moore, R. C. (1983) 'Semantical Considerations on Nonmonotonic Logic', SRI Artificial Intelligence Center Technical Note 284, SRI International, Menlo Park, California.
Moore, R. C. (1984) 'Possible-World Semantics for Autoepistemic Logic', SRI Artificial Intelligence Center Technical Note 337, SRI International, Menlo Park, California.
Moore, R. C. (1985) 'Semantical Considerations on Nonmonotonic Logic', Artificial Intelligence, 25, 75-94.
Moore, R. C. (1988) 'Autoepistemic Logic'. In Non-standard Logics for Automated Reasoning, (ed. P. Smets), Academic Press, London.
Reiter, R. (1980) 'A Logic for Default Reasoning', Artificial Intelligence, 13, 81-132.
Shoham, Y. (1986) 'Chronological Ignorance: Time, Nonmonotonicity, Necessity and Causal Theories', AAAI, 5, 389-393.
Shoham, Y. (1987) 'Nonmonotonic Logics: Meaning and Utility', IJCAI, 10, 388-393.
Shoham, Y. (1988) Reasoning About Change, MIT Press, Cambridge, Massachusetts.
Stalnaker, R. C. (1980) 'A Note on Non-monotonic Logic', Department of Philosophy, Cornell University, unpublished manuscript.
Turner, R. (1984) Logics for Artificial Intelligence, Ellis Horwood, Chichester.
Veltman, F. J. M. M. (1985) Logics for Conditionals, Ph.D. Thesis, University of Amsterdam.