Overview of Non-Monotonic Reasoning

Uploaded by kylee-watson on 02-Jan-2016
Category: Documents

DESCRIPTION

Overview of Non-Monotonic Reasoning. Jacques Robin. Outline: Monotonic vs. Non-Monotonic Reasoning (NMR); Epistemic vs. ontological non-monotonicity; Epistemic non-monotonic automated reasoning tasks; Default Reasoning (DR); Negation As Failure (NAF) in General Logic Programming (GLP).

TRANSCRIPT

Page 1: Overview of Non-Monotonic Reasoning

Ontologies · Reasoning · Components · Agents · Simulations

Overview of Non-Monotonic Reasoning

Jacques Robin

Page 2: Overview of Non-Monotonic Reasoning

Outline

Monotonic vs. Non-Monotonic Reasoning (NMR)
Epistemic vs. ontological non-monotonicity
Epistemic non-monotonic automated reasoning tasks:
- Default Reasoning (DR)
- Negation As Failure (NAF) in General Logic Programming (GLP)
- Abduction and Abductive Logic Programming (ALP)
- Belief Revision (BR)
Ontological automated reasoning tasks:
- Truth-Maintenance (TM)
- Belief Update (BU)

Page 3: Overview of Non-Monotonic Reasoning

Monotonic vs. Non-monotonic Reasoning: Logical Perspective

Monotonic reasoning: ∀ KB, NK, Q ∈ ML: (KB ⊨ML Q) ⇒ (KB ∧ NK ⊨ML Q)

Non-Monotonic Reasoning (NMR): ∃ KB, NK, Q ∈ NML: (KB ⊨NML Q) ∧ (KB ∧ NK ⊭NML Q)

also called Defeasible Reasoning
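As a hedged illustration of the defining property above (a sketch, not part of the slides; the predicate names are invented), a few lines of Python show how a conclusion entailed under the closed-world assumption can be lost when the KB grows:

```python
# Minimal non-monotonicity demo: under the closed-world assumption,
# adding new knowledge can retract a previously entailed conclusion.

def entails_flies(kb):
    """Default rule: bird and not provably penguin => flies (NAF-style)."""
    return "bird" in kb and "penguin" not in kb  # absence treated as falsity

kb = {"bird"}
assert entails_flies(kb)        # KB |= flies
kb.add("penguin")               # learn new knowledge NK
assert not entails_flies(kb)    # KB ∧ NK no longer entails flies
```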

Page 4: Overview of Non-Monotonic Reasoning

Monotonic vs. Non-monotonic Reasoning: Internal Agent Architecture Perspective

[Slide diagram: the Environment interacts through Sensors and Effectors with a Generic Domain-Independent Inference Engine; the engine Asks, Tells and Retracts against a Domain-Specific Knowledge Base, combining Monotonic Reasoning with Non-Monotonic Reasoning.]

Page 5: Overview of Non-Monotonic Reasoning

Epistemic Non-Monotonicity

In a partially observable environment (static or dynamic), agents must make decisions (choose among alternative actions) that require knowledge about the environment which they do not currently have. They must thus make plausible but logically unsound assumptions (hypotheses) about properties of the environment needed to act but about which they have no certain knowledge.

Their KB is thus partitioned between at least two plausibility levels:
- Certain knowledge, derived purely deductively from reliable sensors and certain encapsulated initial knowledge
- Uncertain knowledge, derived using at least one hypothetical reasoning step to fill a knowledge gap needed to make a decision

When uncertain knowledge p =u v, where p is an environment property and v its currently assumed value, is contradicted by new certain knowledge p =c w, with w ≠ v, purely deductively derived from new knowledge obtained from reliable sensors and certain knowledge:
- p =c w must be inserted into the agent's KB,
- p =u v must be retracted from the agent's KB,
- all other properties p1 =u v1, ..., pn =u vn that were derived using p =u v must also be retracted from the agent's KB, since one of the hypotheses on which their insertion relied turned out to be invalid.

Page 6: Overview of Non-Monotonic Reasoning

Ontological Non-Monotonicity

In a dynamic environment (fully or partially observable), some properties (fluents) of the environment change either spontaneously over time or as the result of agents' actions.

When the value of a fluent f changes, e.g., from f = v to f = w, this change must be reflected in the agent's knowledge base:
- f = v must be retracted and f = w must be inserted
- all fluents fi = vi, ..., fi+n = vi+n whose values were deduced based on f = v must also be retracted
- all fluents fj = wj, ..., fj+m = wj+m whose values can now be deduced based on f = w must be inserted
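The retract-and-propagate bookkeeping above can be sketched in Python (a hypothetical helper, not from the slides; the fluent names and the dependency map are invented for illustration):

```python
# Sketch of ontological belief update: a tiny KB maps fluents to values and
# records which derived fluents depend on which base fluent, so that a value
# change retracts the old value and everything deduced from it.

kb = {"loc(agent)": "cavern(2,1)", "smell(stench)": "false"}
derived_from = {"loc(agent)": ["smell(stench)"]}  # derived fluents per base fluent

def update_fluent(kb, derived_from, fluent, new_value):
    for dep in derived_from.get(fluent, []):  # retract fluents deduced from the old value
        kb.pop(dep, None)
    kb[fluent] = new_value                    # insert the new value

update_fluent(kb, derived_from, "loc(agent)", "cavern(3,1)")
assert kb["loc(agent)"] == "cavern(3,1)"
assert "smell(stench)" not in kb
```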

Page 7: Overview of Non-Monotonic Reasoning

Combined Epistemic and Ontological Non-Monotonicity

Agents in environments that are both dynamic and partially observable need to perform both kinds of NMR: epistemic and ontological.

Example: the Wumpus World.

Ontological NMR involves, after choosing action forward while loc(agent) =c cavern(2,1):
- retract loc(agent) =c cavern(2,1)
- insert loc(agent) =c cavern(3,1), visited(cavern(2,1)) =c true

Epistemic NMR involves, after sensing no stench in cavern(3,1):
- retract loc(wumpus) =u cavern(2,3), loc(wumpus) =u cavern(3,2)
- insert loc(wumpus) =c cavern(2,3), safety(cavern(3,2)) =c ok

[Two 3x3 Wumpus World grids illustrating the example: in the first, the wumpus is suspected (W?) in caverns (2,3) and (3,2), with visited (V) and OK caverns marked and the agent (A) sensing a stench (S); in the second, after moving forward, the wumpus location is confirmed (W!) in cavern (2,3), cavern (3,2) is marked OK, and the agent is in cavern (3,1).]

Page 8: Overview of Non-Monotonic Reasoning

Epistemic vs. Ontological Non-Monotonicity: Reasoning Tasks

Epistemic NMR:
- Default Reasoning (DR)
- General Logic Programming (GLP) with Negation as Failure (NAF)
- Abduction and Abductive Logic Programming (ALP)
- Belief Revision (BR)
- Truth-Maintenance (TM)

Ontological NMR:
- Truth-Maintenance (TM)
- Belief Update (BU)

Page 9: Overview of Non-Monotonic Reasoning

Default Reasoning (DR)

Extends deduction with certain knowledge and inference rules with the derivation of uncertain but plausible default assumption knowledge and inference rules, for environment properties not known with certainty but needed for decision making.

Example DR inference rules:
- Closed-World Assumption (CWA)
- Inheritance with overriding

Classical DR knowledge example:
KB: ∀X (bird(X) > flies(X))  // default knowledge
  ∧ ∀X (penguin(X) ⇒ ¬flies(X))
  ∧ ∀X (pigeon(X) ⇒ bird(X))
  ∧ ∀X (penguin(X) ⇒ bird(X))
  ∧ pigeon(valiant) ∧ penguin(tux)
KB ⊨ flies(valiant) ∧ ¬flies(tux)

Page 10: Overview of Non-Monotonic Reasoning

Variety of Basis for Default Knowledge

Statistical: almost all objects of class C satisfy property P, e.g., almost all living mammals give birth to live offspring

Group confidence: all known objects of class C satisfy property P, e.g., all known giant planets have large satellites

Prototypical: a typical representative of class C satisfies property P, e.g., a pigeon, a typical bird, can fly

Normality: under normal circumstances, an object of class C satisfies property P, e.g., a healthy pigeon with both wings functioning can fly

Lack of contrary evidence: e.g., a person that is talking clearly, walking straight, and neither excessively laughing nor behaving aggressively is not drunk

Inertia: an object of class C that satisfies property P at time t will satisfy it at time t+n unless some event affects it, e.g., a book on a shelf will remain there until someone takes it out or a strong earthquake occurs

Page 11: Overview of Non-Monotonic Reasoning

GLP with NAF

Negation As Failure (NAF): connective of General/Normal Logic Programs (GLP/NLP), semantically different from classical negation ¬:
- naf p is true iff an attempt to prove p finitely fails under CWA,
- whereas ¬p is true iff an attempt to prove ¬p finitely succeeds under OWA.

naf is allowed in rule bodies, but not in rule heads:
e.g., h :- p, naf q, r. but never naf h :- p, q, r.

Restricted form of default reasoning to derive negative conclusions.

Typical DR example in Prolog:
flies(X) :- bird(X), naf abnormal(X).
abnormal(X) :- penguin(X), bird(X).
abnormal(X) :- ostrich(X), bird(X).
bird(X) :- pigeon(X).
bird(X) :- penguin(X).
bird(X) :- ostrich(X).
pigeon(valiant).
penguin(tux).
?- flies(valiant)
yes
?- flies(tux)
no
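The behavior of the Prolog program above can be emulated in Python (a hedged sketch under the closed-world assumption; the predicate functions mirror the slide's clauses, with Python's not standing in for naf):

```python
# NAF emulated as finite failure of a proof attempt over an explicit fact base.
facts = {("pigeon", "valiant"), ("penguin", "tux")}

def bird(x):
    return ("pigeon", x) in facts or ("penguin", x) in facts or ("ostrich", x) in facts

def abnormal(x):
    return (("penguin", x) in facts or ("ostrich", x) in facts) and bird(x)

def flies(x):
    return bird(x) and not abnormal(x)   # "not" plays the role of naf

assert flies("valiant") is True          # ?- flies(valiant) → yes
assert flies("tux") is False             # ?- flies(tux) → no
```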

Page 12: Overview of Non-Monotonic Reasoning

Abduction

Given:
- I: full Intensional causal background knowledge, a generic theory linking causes to effects
- O: full extensional knowledge of Observed effects
- P: Partial extensional knowledge of the causes of these observed effects
- B: meta-knowledge of abductive Bias, restricting the space S of possible missing causes of these observed effects and defining a preference partial order over S

Abduce hypothetical missing causes H of O such that:
- H ∧ P ∧ I ⊨ O, i.e., H fully explains O given P and I, and
- B(H), i.e., H pertains to the restricted subset of acceptable and preferred missing causes of O defined by the abductive bias B

Example:
I: ∀D,G,S (grass(G) ∧ rainedOn(G,D) ⇒ wet(G,D+1))
  ∧ (grass(G) ∧ sprinklerOn(G,D) ⇒ wet(G,D+1))
  ∧ (grass(G) ∧ shoe(S) ∧ wet(G,D) ∧ walk(G,S,D) ⇒ wet(S,D))
O: wet(s,26)
P: walk(g,s,26) ∧ grass(g) ∧ shoe(s)
B: ∀D,G ¬(rainedOn(G,D) ∧ sprinklerOn(G,D)), with preference rainedOn(G,D) » sprinklerOn(G,D)
H: rainedOn(g,25)
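The wet-shoe example can be sketched computationally (a simplified, hypothetical encoding, not from the slides: candidate causes are enumerated, those that together with P and I entail the observation are kept, and the bias's preference order picks the answer):

```python
# Abduction sketch: enumerate candidate missing causes, keep the ones that
# explain the observation wet(s,26), then apply the bias's preference order.
candidates = ["rainedOn(g,25)", "sprinklerOn(g,25)"]
preference = {"rainedOn(g,25)": 0, "sprinklerOn(g,25)": 1}  # lower = preferred

def explains(hypothesis):
    # I: rain or sprinkler on day 25 makes the grass wet on day 26;
    # P: walking on wet grass with shoes then makes the shoe wet.
    grass_wet_26 = hypothesis in ("rainedOn(g,25)", "sprinklerOn(g,25)")
    return grass_wet_26            # wet(s,26) then follows via walk(g,s,26)

explanations = [h for h in candidates if explains(h)]
best = min(explanations, key=preference.get)
assert best == "rainedOn(g,25)"    # H: rainedOn(g,25)
```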

Page 13: Overview of Non-Monotonic Reasoning

Abduction is NMR

Why? New knowledge of previously unknown causes of observed effects may invalidate an abductive hypothesis made about the causes of these effects.

Example:
If P' = P ∧ sprinklerOn(g,25)
Then H' = H \ {rainedOn(g,25)} = ∅

Page 14: Overview of Non-Monotonic Reasoning

Belief Revision (BR)

Automated reasoning task answering the following question: how to revise the current belief base Cb, to assimilate a new belief N or to retract a currently held belief O, while maintaining the consistency of the new revised belief base Rb?
i.e., Rb ≡ Cb ∧ N and Rb ⊭ ⊥
or Rb ≡ Cb \ O and Rb ⊭ ⊥

1st issue: the KR language used to represent Cb, N, O and Rb

Page 15: Overview of Non-Monotonic Reasoning

BR: Belief Bases vs. Belief Sets

In almost all cases, the belief base Cb contains not only extensional beliefs Cbe but also intensional beliefs Cbi.

Thus implicit beliefs Cbd are logically, plausibilistically or probabilistically derivable from Cb, albeit not physically stored in it,
i.e., (Cb ≡ Cbe ∧ Cbi) ∧ (Cb ⊨D Cbd) ∧ (Cbs ≡ Cb ∧ Cbd), where Cbs is the belief set.

Should "new" and "currently held" apply to Cb or to Cbs?
i.e., N ∉ Cb ∧ O ∈ Cb, or N ∉ Cbs ∧ O ∈ Cbs?

These differ, since in some cases:
(Cb ≢ Cbs) ∧ (Db ≢ Cb) ∧ (Dbs ≡ Cbs), yet
((Cb+N ≡ Cb ∧ N) ∧ (Db+N ≡ Db ∧ N)) with Cb+Ns ≢ Db+Ns, and
((Cb\O ≡ Cb \ O) ∧ (Db\O ≡ Db \ O)) with Cb\Os ≢ Db\Os

e.g., Cb ≡ p ∧ q ∧ ((p ∨ q) ⇒ (r ∧ q)), Db ≡ p ∧ ((p ∨ q) ⇒ (r ∧ q)),
Cbs ≡ Dbs ≡ p ∧ q ∧ ((p ∨ q) ⇒ (r ∧ q)) ∧ r,
but after retracting p: Cb\ps ≡ q ∧ ((p ∨ q) ⇒ (r ∧ q)) ∧ r, whereas Db\ps ≡ (p ∨ q) ⇒ (r ∧ q)

Page 16: Overview of Non-Monotonic Reasoning

BR: Mild Revision vs. Severe Revision

Mild revision: Cb ∧ N ⊭ ⊥, or Cb \ O ⊭ ⊥. Then Rb ≡ Cb ∧ N, or Rb ≡ Cb \ O.

Severe revision: Cb ∧ N ⊨ ⊥, or Cb \ O ⊨ ⊥. Then a new issue arises: which minimal set of further belief revisions to execute to restore consistency?
i.e., find M+, M- such that:
Cb ∧ N ∧ M+ \ M- ⊭ ⊥ and
∄ m+, m- (((m+ ⊂ M+) ∧ (m- ⊂ M-)) ∧ ((Cb ∧ N ∧ m+ \ m-) ⊭ ⊥))
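A brute-force sketch of severe revision in Python (a toy propositional encoding, invented for illustration: Cb = {p, p ⇒ q} and N = ¬q are inconsistent, and the search finds a minimum-cardinality set of retractions restoring consistency):

```python
# Severe revision sketch: search for the smallest retraction set M- such that
# (Cb \ M-) ∧ N is consistent, via exhaustive model checking on a tiny KB.
from itertools import combinations, product

def consistent(clauses):
    """Naive SAT check over the atoms appearing in the clauses."""
    atoms = sorted({a for c in clauses for (a, _) in c})
    for vals in product([False, True], repeat=len(atoms)):
        model = dict(zip(atoms, vals))
        if all(any(model[a] == pol for (a, pol) in c) for c in clauses):
            return True
    return False

# Clauses as sets of (atom, polarity) literals: Cb = {p, ¬p ∨ q}, N = ¬q
cb = [{("p", True)}, {("p", False), ("q", True)}]
n = {("q", False)}
assert not consistent(cb + [n])  # severe case: Cb ∧ N ⊨ ⊥

# Find a minimum-cardinality retraction set restoring consistency
for k in range(len(cb) + 1):
    found = next((set(idxs) for idxs in combinations(range(len(cb)), k)
                  if consistent([c for i, c in enumerate(cb) if i not in idxs] + [n])),
                 None)
    if found is not None:
        break
assert k == 1                    # retracting a single belief suffices
```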

Page 17: Overview of Non-Monotonic Reasoning

Postulates for Rational BR

Set of logical and set-theoretic requirements that must be verified by BR operators to model rational BR.

Postulate history summary:
- 1988: original AGM postulates (Alchourrón, Gärdenfors and Makinson)
- 1991: revised into KM postulates (Katsuno and Mendelzon)
- 1997: revised into DP postulates (Darwiche and Pearl)
- 2005: revised into JT postulates (Jin and Thielscher)
- Many other related proposals in between

Postulate sets differ mainly in terms of:
1. The epistemological commitment of the KR language used to represent the beliefs: boolean, ternary, possibilistic, plausibilistic, probabilistic, ...
2. Whether single-step or iterated belief revision is considered;
3. Whether revision is limited to unconditional beliefs, i.e., in logic, atoms, e.g., r(f(X,c),g(d),Y,e), or extends to conditional beliefs, i.e., in logic, Horn clauses, e.g., p(h(X,Y,a),b) ∧ q(Y) ⇒ r(f(X,c),g(d),Y,e)

Page 18: Overview of Non-Monotonic Reasoning

Truth Maintenance Systems (TMS)

TMS Architecture:
- D: Deductive rule base, a conjunction of definite Horn clauses
  (c1 ⇐ p11 ∧ ... ∧ p1q) ∧ ... ∧ (cp ⇐ pp1 ∧ ... ∧ ppr)
- I: Integrity constraint base, a conjunction of Horn clauses concluding false
  (⊥ ⇐ p1 ∧ ... ∧ pn) ∧ ... ∧ (⊥ ⇐ p1 ∧ ... ∧ pm)
- A: Assumption base, a conjunction of atomic formulae a1 ∧ ... ∧ ao that do not unify with any deductive rule base conclusion, i.e., ∀i,j, 1≤i≤o, 1≤j≤p, ¬unif(ai,cj), but are nonetheless currently assumed true, for they do not deductively lead to the violation of any integrity constraint, i.e., A ∧ F ∧ D ∧ I ⊭ ⊥
- F: Fact base, a conjunction of atomic formulae currently proven or assumed true, f1[a11 ∧ ... ∧ a1l] ∧ ... ∧ fu[au1 ∧ ... ∧ auw], that unify with at least one deductive rule base conclusion, i.e., ∀i ∃j, 1≤i≤u, 1≤j≤p, unif(fi,cj), and where each formula f is annotated by the conjunction of its justifications, i.e., the minimal set of assumptions that unified with the premises of the deductive rules whose chained firing concluded f

TMS Engine:
- Implements a form of belief revision with Boolean logical instead of plausibilistic epistemological commitment
- Can serve for both epistemic and ontological NMR

Page 19: Overview of Non-Monotonic Reasoning

TMS Engine Algorithm

Let A0 be the initial assumption set.
1. Apply D on A0 to obtain the first fact base F0, i.e., D ∧ A0 ⊨ F0
2. Check whether the newly derived facts violate integrity constraints
3. If D ∧ F0 ∧ I ⊨ ⊥, then:
   - use CDBJ (Conflict-Directed BackJumping) to identify the minimal subset A0' of A0 responsible for the failure
   - update the assumption base by retracting the elements of A0', i.e., A1 = A0 \ A0'
   - update the fact base by retracting the elements that were justified by an element of A0', i.e., F1 = F0 \ {f[a1 ∧ ... ∧ al] ∈ F0 | ∃i, 1≤i≤l, ai ∈ A0'}
4. If D ∧ F0 ∧ I ⊭ ⊥, then F1 = F0
5. Return F1
6. If new positive evidence becomes available, e.g., facts F2 from a reliable agent sensor or user input knowledge, and/or assumptions A2 from user input hypotheses, then apply D on F1 given A1 together with this new evidence to obtain the revised fact base F3, i.e., D ∧ F1 ∧ A1 ∧ F2 ∧ A2 ⊨ F3. Go back to truth-maintenance steps 2-5 and return the F4 resulting from applying these steps to F3.
7. If new negative evidence becomes available, e.g., knowledge that some currently made assumptions AI are now known to be incorrect or are no longer valid, then return F5 = F4 \ {f[a1 ∧ ... ∧ al] ∈ F4 | ∃i, 1≤i≤l, ai ∈ AI}
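The justification bookkeeping that steps 3 and 7 rely on can be sketched in Python (a toy, invented example: each derived fact carries the set of assumptions justifying it, so retracting an assumption also retracts every fact it supports):

```python
# Justification-based truth maintenance sketch: facts are annotated with the
# assumptions they depend on; retracting an assumption propagates to them.
assumptions = {"wumpus_in_2_3", "pit_in_3_3"}

# fact -> justification (set of supporting assumptions); hypothetical facts
facts = {
    "unsafe(2,3)": {"wumpus_in_2_3"},
    "unsafe(3,3)": {"pit_in_3_3"},
    "avoid(2,3)":  {"wumpus_in_2_3"},
}

def retract_assumption(assumptions, facts, bad):
    """Step 7 above: drop the bad assumption and every fact it justified."""
    assumptions.discard(bad)
    return {f: j for f, j in facts.items() if bad not in j}

facts = retract_assumption(assumptions, facts, "wumpus_in_2_3")
assert set(facts) == {"unsafe(3,3)"}   # both wumpus-justified facts retracted
```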

Page 20: Overview of Non-Monotonic Reasoning

Belief Update (BU)

Belief Update vs. Belief Revision:
- Update: motivated by ontological non-monotonicity, i.e., actual changes in the agent's dynamic environment, resulting from the agent's actions or spontaneously occurring events;
- Revision: motivated by epistemic non-monotonicity, i.e., changes in the agent's amount and certainty of knowledge about its partially observable or noisy environment.

Three main problems: the frame problem, the ramification problem, the qualification problem.

Each problem has two aspects:
- A representational aspect: the design of a concise, space-efficient, yet sufficiently expressive knowledge representation language for the changing properties of the environment (fluents)
- An inferential aspect: the design of a time-efficient yet sound and as complete as possible update procedure for the truth-value, plausibility or probability of each fluent

Page 21: Overview of Non-Monotonic Reasoning

The Frame Problem

General law of inertia: any given event (in particular an agent action) induces only very local changes on the environment state, i.e., it affects only a tiny minority of all fluents describing this state; all others, i.e., almost all fluents, remain unchanged by any given event.

Example: the pick action in the Wumpus World only changes the hasGold(agent) fluent, not affecting any other fluents such as in(agent,X,Y), hasArrow(agent), alive(agent), alive(wumpus), ...

Representational frame problem: how to design a KR language that exploits this locality to be concise and space-efficient?

Inferential frame problem: how to devise time-efficient techniques to revise the changing fluents using this KR language?

Page 22: Overview of Non-Monotonic Reasoning

The Frame Problem

Naive approach to representing fluent changes directly in Classical First-Order Logic (CFOL):
- Precondition axioms represent the required circumstances in which a given action is indeed executable, e.g.,
  ∀A,O,T,X,Y poss(pick(A,O),T) ⇐ in(A,X,Y,T) ∧ in(O,X,Y,T)
- Direct effect axioms represent the intended fluent changes of that same action, e.g.,
  ∀A,O,T poss(pick(A,O),T) ∧ do(pick(A,O),T) ⇒ has(A,O,T+1)

CFOL's OWA forces the need for additional frame axioms explicitly representing everything that the action does not change, e.g.,
(∀A,O,T poss(pick(A,O),T) ∧ do(pick(A,O),T) ∧ loc(A,X,Y,T) ⇒ loc(A,X,Y,T+1))
∧ (∀A,O,O',T poss(pick(A,O),T) ∧ do(pick(A,O),T) ∧ O ≠ O' ∧ has(A,O',T) ⇒ has(A,O',T+1))
∧ (∀A,O,O',T poss(pick(A,O),T) ∧ do(pick(A,O),T) ∧ O ≠ O' ∧ loc(O',X,Y,T) ⇒ loc(O',X,Y,T+1))
∧ (∀A,O,L,T poss(pick(A,O),T) ∧ do(pick(A,O),T) ∧ alive(L,T) ⇒ alive(L,T+1)) ∧ ...

Representational frame problem: combinatorially explosive size of the representation.

Inferential frame problem: exponential time to apply the inertia law by propagating such a large frame axiom set after each event occurrence or action execution.

Insight: all these unchanged fluents are independent of almost all events and actions. Frame axioms are wasteful because they fail to exploit this massive independence.
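The insight above can be made concrete with a small sketch (a hypothetical encoding, not the slides' formalism): instead of frame axioms, an update function lists only the fluents an action changes, and every other fluent persists by default.

```python
# Successor-state-style computation exploiting inertia: copy the state,
# then overwrite only the tiny minority of fluents the action affects.
state = {"hasGold(agent)": False, "hasArrow(agent)": True,
         "alive(agent)": True, "alive(wumpus)": True}

effects = {  # action -> the fluents it actually changes
    "pick": {"hasGold(agent)": True},
}

def do(action, state):
    new_state = dict(state)           # inertia: all fluents persist by default
    new_state.update(effects[action]) # apply the action's direct effects only
    return new_state

s1 = do("pick", state)
assert s1["hasGold(agent)"] is True
assert s1["hasArrow(agent)"] is True  # unchanged fluents carried over for free
```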

Page 23: Overview of Non-Monotonic Reasoning

The Ramification Problem

A direct effect axiom is a piece of diachronic knowledge that captures only the intended effects of a given action, i.e., the reason why it was executed by an agent. Those effects are changes to the truth value, plausibility or probability of the environment fluents that match the goal of the agent when it chose the action.

e.g., in a situation S where the fluents loc(agent,X,Y,S) ∧ dir(agent,west,S) are true, executing the action do(agent,forward,S) has the direct effect of turning the fluent loc(agent,X+1,Y,res(S,do(agent,forward))) true.

However, in most cases the truth value, plausibility or probability of those direct-effect fluents is synchronically related to other fluents, triggering changes in those other fluents, called ramifications, or action indirect effects.

e.g., in a situation S where the fluents loc(agent,X,Y,S) ∧ dir(agent,west,S) ∧ has(agent,gold,S) ∧ has(agent,bow,S) ∧ in(arrow,bow,S) are true, executing the action do(agent,forward) has as indirect effect to turn the fluents loc(gold,X+1,Y,res(S,do(agent,forward))) ∧ loc(bow,X+1,Y,res(S,do(agent,forward))) ∧ loc(arrow,X+1,Y,res(S,do(agent,forward))) true.

Possibly recursive, separate synchronic ramification axioms, distinct from the diachronic direct effect axioms, solve the representational ramification problem.

Page 24: Overview of Non-Monotonic Reasoning

The Qualification Problem

Concise action precondition axioms with a few fluents are unrealistically optimistic in most environments, e.g., non-deterministic environments, environments perceived through noisy sensors, environments with a very wide variety of contingencies, for they implicitly assume a great many other fluents to also hold.

While these fluents do hold in almost all normal circumstances in which the agent will attempt execution, they fail to hold in some abnormal, unusual circumstances, leading the agent to create the false expectation that a given action is executable.

e.g., if the gold is in the same cavern as a dead wumpus, and its sticky green acid blood has spilled on it, then the agent cannot take it directly with its hands.

Qualification problem: how to reason correctly about action preconditions, while avoiding extremely long and elusively exhaustive conjunctions of fluents?

Solutions generally involve some form of default or probabilistic reasoning.

Page 25: Overview of Non-Monotonic Reasoning

Approaches to Belief Update

The situation calculus The event calculus Transaction logic The fluent calculus