Artificial Intelligence and Lisp #9 Reasoning about Actions

Post on 19-Dec-2015


Page 1: Artificial Intelligence and Lisp #9 Reasoning about Actions

Artificial Intelligence and Lisp #9

Reasoning about Actions

Page 2:

Lab statistics 2009-11-02

Registration: 50 students

Number of:

                    lab2a  lab2b  lab3a  lab3b
Reg. downloads         45     42     18      8
Orphan downloads        3      2      1      0
Lab completed          37     14     10      1
Incomplete upload       0      0      0

Other information: Notice that two lectures have been rescheduled! Please check the schedule on the official (university) webpage for the course.

Page 3:

Repeat: Strengths and weaknesses of state-space-based action planning

• Strength: systematic algorithms exist

• Strength: complexity results exist for some restricted cases, and some of them are computationally tractable

• Strength: progressive planning integrates well with prediction, which is needed in many cognitive-robotics applications

• Strength and weakness: actions with arguments must (and often can) be converted to variable-free form, which leads to large problem sizes

• Weakness: expressivity is insufficient for many practical situations

• Weakness: computational properties are not always good enough in practice for large problems

• Use of logic-based methods is a way of obtaining higher expressivity

• Use of constraint-based methods often obtains better results for very large planning problems

• These approaches will be addressed in the next three lectures

Page 4:

Example of Scenario that is Unsuitable for State-Space-based Planning

• Scenario with a mobile robot and the action of the robot moving from one place to another

• The feasibility of the movement action depends on the floorplan, for example, whether the robot has to pass doorways, whether the door is open or closed, locked or not, etc

• In principle it is possible to do state-space planning in such a scenario by defining a state containing all the possible relevant aspects of the scenario, but it is inconvenient

• It is more practical to describe the scenario using a set of propositions (= statements), and to bring such statements into the planning process if and when they are needed.

Page 5:

Purpose of Using Logic for Reasoning about Actions

• Specify current state of the world using logic formulas

• Specify the immediate effects of actions using logic formulas

• Specify indirect effects of actions and other, similar information for the world using logic formulas

• Specify policies and goals using logic formulas

• Use well-defined operations on logic formulas for deriving conclusions, identifying plans for achieving goals, etc

• This is worthwhile if one obtains better expressivity and/or better computational methods

Page 6:

Predicates (revised and extended from before)

• [H t f v] the feature f has the value v at time t

• [P t a] the action a is performable at time t

• [G t a] the agent attempts action a at time t

• [D t t+1 a] the action a is actually performed from time t to time t+1

• The predicate D can be generalized to allow actions with extended duration

• The predicate symbols are abbreviations for Holds, Performable, Go (or Goal), and Do or Done

• These are intended for use both about the past and about the future in the agent's world

Page 7:

Actions and Plans

• [P t a] the action a is performable at time t

• [G t a] the agent attempts action a at time t

• [D t t+1 a] the action a is actually performed from time t to time t+1

• The action argument in these is an action term

• An action term may be a verb with its arguments, for example [pour glass4 mug3]

• The sequential composition of action terms is an action term, e.g. [seq [pour g4 m3][pour m3 g7]]

• Conditional expressions and repetition may also be used, but they are a later step for the logic

• Composite action terms can be used in plans, and as scripts within the agent (e.g. precond advise), more...

Page 8:

Examples of Action Laws

• [P t [fill g b]] <-> [H t (subst-in g) empty]

• [D t-1 t [fill g b]] -> [H t (subst-in g) b]

• [P t [pour g1 g2]] <-> [H t (subst-in g2) empty] & (not [H t (subst-in g1) empty])

• Compare the previous notation (scripts):

[Do .t [pour .fr .to]] =
  [if [and [H- .t (substance-in: .to) empty]
           (not [H- .t (substance-in: .fr) empty])]
      [coact [H! .t (substance-in: .to) (cv (substance-in: .fr))]
             [H! .t (substance-in: .fr) empty]]]
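The state-space reading of these laws can be sketched in Python. This is a hedged illustration, not the course's own code: the substance names and the feature encoding are assumptions made for the example.

```python
# Sketch of [pour fr to] in the state-space ontology: an action is a
# set of <pre, post> pairs of partial states (dicts feature -> value).
# Substance names and feature keys are illustrative assumptions.

def pour_action(fr, to, substances=("beer", "juice", "water")):
    """One <pre, post> pair per possible substance in the source glass:
    precondition = target empty and source non-empty, as in the law."""
    pairs = []
    for s in substances:
        pre = {("subst-in", to): "empty", ("subst-in", fr): s}
        post = {("subst-in", to): s, ("subst-in", fr): "empty"}
        pairs.append((pre, post))
    return pairs

def performable(action, state):
    """[P t a]: some precondition partial state holds in the state."""
    return any(all(state.get(f) == v for f, v in pre.items())
               for pre, _ in action)

state = {("subst-in", "g4"): "beer", ("subst-in", "m3"): "empty"}
# [pour g4 m3] is performable in this state; [pour m3 g4] is not
```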

Page 9:

Execution of a plan = composite action

• If a is an elementary action, [G t a] holds, [P t a] holds, and [G t a'] does not hold for any other elementary action a', then [D t t+1 a] holds

• If [G t [seq a1 a2 …]] holds, [P t a1] holds, and [G t a'] does not hold for any other elementary action a', then [D t t+1 a1] and [G t+1 [seq a2 …]] hold

• [G t [seq]] (empty sequence) has no effect, and [seq] is not an elementary action in the above

• Leave the definition of concurrent actions for another time

• The plan [seq a1 a2 …] is performable iff all the successive steps shown above are performable (actually it's a bit more complicated)
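The execution rules above can be sketched as a small stepper. This is a minimal sketch; `performable` and `apply_effect` are placeholder hooks standing in for the action laws, not part of the formalism.

```python
def step(state, plan, performable, apply_effect):
    """One step of executing [seq a1 a2 ...] per the rules above:
    if the plan is nonempty and its first action is performable,
    that action is done ([D t t+1 a1]) and the tail of the sequence
    is the intention at t+1; an empty [seq] has no effect."""
    if not plan:                         # [G t [seq]]: nothing happens
        return state, plan, None
    a1, rest = plan[0], plan[1:]
    if performable(state, a1):           # [P t a1]
        return apply_effect(state, a1), rest, a1
    return state, plan, None             # blocked: no action is done

# Toy usage: actions are (feature, value) pairs, always performable.
performable_fn = lambda s, a: True
apply_fn = lambda s, a: {**s, a[0]: a[1]}
state, plan = {"light": "off"}, [("light", "on"), ("door", "open")]
while plan:
    state, plan, done = step(state, plan, performable_fn, apply_fn)
```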

Page 10:

Now time to bring in logic

• Recall: the purpose of bringing in logic in the context of actions and plans, is to have systematic and well-founded methods for manipulation of formulas like the ones shown on the previous slides

Page 11:

From Decision Tree to Logic Formula

[a? [b? [c? red green] [c? blue white]] [b? [c? white red] [c? blue green]]]

(a&b&c&red&-green&-blue&-white) v (a&b&-c&-red&green&-blue&-white) v (a&-b&c&-red&-green&blue&-white) …

If you know a&b&c then conclude red

If you know b&red conclude c v -c (trivial)

If you know green conclude -c

If you know white conclude (-b&-c) v (b&c) which can be expressed as b <-> c
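The expansion of the decision tree into a disjunction of conjunctions can be done mechanically. A hedged sketch follows; the nested-list encoding of the tree (first branch = test true) is an assumption for the example.

```python
def tree_to_dnf(tree, colors, path=()):
    """Flatten a decision tree [x? then else] (encoded as nested lists,
    first branch = x true) into DNF: one conjunction per leaf, listing
    the path literals plus every color, positive only for the leaf's."""
    if isinstance(tree, str):                       # leaf: a color name
        return [list(path) + [(c, c == tree) for c in colors]]
    x, then_b, else_b = tree
    return (tree_to_dnf(then_b, colors, path + ((x, True),)) +
            tree_to_dnf(else_b, colors, path + ((x, False),)))

colors = ["red", "green", "blue", "white"]
tree = ["a", ["b", ["c", "red", "green"], ["c", "blue", "white"]],
             ["b", ["c", "white", "red"], ["c", "blue", "green"]]]
dnf = tree_to_dnf(tree, colors)
# dnf[0] encodes a & b & c & red & -green & -blue & -white
```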

Page 12:

Entailment

[a? [b? [c? red green] [c? blue white]] [b? [c? white red] [c? blue green]]]

F = (a&b&c&red&-green&-blue&-white) v (a&b&-c&-red&green&-blue&-white) v (a&-b&c&-red&-green&blue&-white) …

If you know white conclude (-b&-c) v (b&c) which can be expressed as b <-> c

F, white |= (-b&-c) v (b&c)

The symbol |= is pronounced “entails” (“innebär”)

It is a relation between (sets of) formulas, not a symbol within a formula!

Page 13:

Definition of entailment

• For an entailment statement A, B, … |= G

• An interpretation is an assignment of truth-values to the proposition symbols in A, B, … G

• The models for A are those interpretations in which the value of A is true; the set of them is written Mod[A]. (More precisely, the classical models.)

• The model set for a set of formulas is the intersection of the individual formulas' model sets

• The entailment statement expresses that Mod[{A, B, …}] ⊆ Mod[G]
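For propositional logic this definition can be checked by brute force. A sketch, with formulas represented as Python predicates over interpretations (the encoding is an illustrative choice, not from the course):

```python
from itertools import product

def mod(formula, symbols):
    """Mod[formula]: interpretations (symbol -> bool) making it true."""
    interps = (dict(zip(symbols, vals))
               for vals in product([True, False], repeat=len(symbols)))
    return [i for i in interps if formula(i)]

def entails(premises, goal, symbols):
    """A, B, ... |= G iff Mod[{A, B, ...}] is a subset of Mod[G]."""
    joint = lambda i: all(p(i) for p in premises)
    return all(goal(i) for i in mod(joint, symbols))

# Example: a, a -> b |= b, but a alone does not entail b.
syms = ["a", "b"]
p_a = lambda i: i["a"]
p_ab = lambda i: (not i["a"]) or i["b"]
g_b = lambda i: i["b"]
```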

Page 14:

Entailment – example of definition

[a? [b? [c? red green] [c? blue white]] [b? [c? white red] [c? blue green]]]

F = (a&b&c&red&-green&-blue&-white) v (a&b&-c&-red&green&-blue&-white) v (a&-b&c&-red&-green&blue&-white) …

If you know white conclude (-b&-c) v (b&c) Which can be expressed as b <-> c

F, white |= (-b&-c) v (b&c)

The symbol |= is pronounced “entails” (“innebär”)

It is a relation between (sets of) formulas, not a symbol within a formula!

Page 15:

Model sets

[a? [b? [c? red green] [c? blue white]] [b? [c? white red] [c? blue green]]]

F = (a & b & c & red & -green & -blue & -white) v (a & b & -c & -red & green & -blue & -white) v (a & -b & c & -red & -green & blue & -white) …

F, a&white |= (-b&-c) v (b&c)

Mod[a&b&c&red& …] = {{a:T, b:T, c:T, red:T, green:F, blue:F, white:F}}

Mod[F] = {{a:T, b:F, c:T, red:F, green:F, blue:T, white:F}, ...}

Mod[a&white] = {{a:T, white:T, (any combination of the others)}, ...}

Mod[(-b&-c) v (b&c)] = {{b:F, c:F, (any combination of the others)}, {b:T, c:T, (any combination of the others)}}

Page 16:

Simplified formulation

[a? [b? [c? red green] [c? blue white]] [b? [c? white red] [c? blue green]]]

F = (a&b&c&red&-green&-blue&-white) v (a&b&-c&-red&green&-blue&-white) v (a&-b&c&-red&-green&blue&-white) …

Fd = (a&b&c&red) v (a&b&-c&green) v (a&-b&c&-red&blue) …

F2 = -(red&green v red&blue v red&white v green&blue v green&white v blue&white)

It is “easy” to see that Mod[F] = Mod[Fd,F2] and the latter formulation is much more compact

Page 17:

Equivalence between formulas

[a? [b? [c? red green] [c? blue white]] [b? [c? white red] [c? blue green]]]

Fd = (a&b&c&red) v (a&b&-c&green) v (a&-b&c&-red&blue) v ...

Fc = (-a v -b v -c v red) & (-a v -b v c v green) & …

One can see that Mod[Fc] = Mod[Fd] since each combinationof a, b and c allows exactly one color

The first conjunct in Fc can be rewritten as

((-a v -b v -c) v red)
(-(a & b & c) v red)
(a & b & c) -> red

These formulas are equivalent – they have the same set of models!

Fc – conjunctive normal form, Fd – disjunctive normal form

Page 18:

Inference

Fc = (-a v -b v -c v red) & (-a v -b v c v green) & …

F2 = -(red&green v red&blue v red&white v green&blue v green&white v blue&white)

F2 can be equivalently replaced by

F3 = -(red&green) & -(red&blue) & ...

and in turn by

F4 = (-red v -green) & (-red v -blue) & ...

Both Fc and F4 can be replaced by the set of their conjuncts (= components of the and-expression) for the purpose of inference

Now suppose we have Fc, F4, and the proposition white.
We observed above that F, white |= (-b&-c) v (b&c).
Try to obtain this using inference in a strict way!

Page 19:

Details of proof

[a? [b? [c? red green] [c? blue white]] [b? [c? white red] [c? blue green]]]

Premise clauses                | Derived clauses
white                          |
-a v -b v -c v red             | -red
-a v -b v c v green            | -a v -b v -c
-a v b v -c v blue             | a v -b v c
-a v b v c v white             | -green
a v -b v -c v white            | -a v -b v c
a v -b v c v red               | a v b v c
a v b v -c v blue              | -blue
a v b v c v green              | -a v b v -c
-white v -red                  | a v b v -c
-white v -green                | -b v c
-white v -blue (and more...)   | b v -c
                               | (-b v c) & (b v -c)
                               | (b -> c) & (c -> b)
                               | b <-> c

Page 20:

Resolution Rule (Simple Case)

• The resolution rule is an inference rule

• It applies to a combination of two propositions, each of which must be a clause, i.e. a disjunction of literals, where a literal is an atomic proposition or the negation of one (no other subexpressions)

• Given (a v B) and (-a v C), it forms (B v C), where B and C each stand for zero, one or more literals

• Conversion from or to logic formulas that are not clauses must be done using other rules
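The rule, and the derivation on the "Details of proof" slide, can be reproduced mechanically. A hedged sketch: clauses are frozensets of (symbol, polarity) literals, and the search is bounded to clauses of at most three literals, which happens to suffice for this particular example (it is not a general prover).

```python
def resolvents(c1, c2, max_size=3):
    """All resolvents of two clauses: pick complementary literals and
    union the rest; drop tautologies and over-long clauses (a bound
    chosen only to keep this example's search small)."""
    out = set()
    for sym, pol in c1:
        if (sym, not pol) in c2:
            r = (c1 - {(sym, pol)}) | (c2 - {(sym, not pol)})
            if len(r) <= max_size and \
               not any((s, not p) in r for s, p in r):
                out.add(frozenset(r))
    return out

def saturate(clauses):
    """Close the clause set under (bounded) resolution."""
    clauses = set(clauses)
    while True:
        new = {r for c1 in clauses for c2 in clauses
               for r in resolvents(c1, c2)} - clauses
        if not new:
            return clauses
        clauses |= new

def cl(*lits): return frozenset(lits)
def P(s): return (s, True)
def N(s): return (s, False)

# The premise clauses of the proof slide: white, the tree clauses, F4.
slide = [cl(P("white")),
         cl(N("a"), N("b"), N("c"), P("red")),
         cl(N("a"), N("b"), P("c"), P("green")),
         cl(N("a"), P("b"), N("c"), P("blue")),
         cl(N("a"), P("b"), P("c"), P("white")),
         cl(P("a"), N("b"), N("c"), P("white")),
         cl(P("a"), N("b"), P("c"), P("red")),
         cl(P("a"), P("b"), N("c"), P("blue")),
         cl(P("a"), P("b"), P("c"), P("green")),
         cl(N("white"), N("red")),
         cl(N("white"), N("green")),
         cl(N("white"), N("blue"))]
closed = saturate(slide)
# both halves of b <-> c, i.e. -b v c and b v -c, are derived
```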

Page 21:

Inference Operator

• Let A, B, … and G be clauses

• If G can be obtained from A,B,... using repeated application of the resolution operator, then we write A,B,... |- G and say that A,B,... lead to G

• The same notation is used for other choices of inference rules as well

• Recall that if Mod[A,B,...] ⊆ Mod[G] then we say that A,B,... entails G

• Completeness result (for propositional logic): A,B,... |- G if and only if A,B,... |= G

• One can view |- as an implementation of |=

Page 22:

From here to reasoning about actions

• Now we have seen how logical inference can be done by systematic manipulation of logic formulas

• Three things are needed in order to apply this to reasoning about actions:

• Already done: Replace single propositional symbols (a, b, etc) by composite expressions involving a predicate and its arguments, for example [H time feature value]

• Extend the definition of models: not merely an assignment of truth-values to proposition symbols

• Generalize the resolution operator to the larger repertoire of logic formulas

Page 23:

Definition of Models for Actions, I

• Need to define interpretation and evaluation

• An interpretation for a logic of actions consists of a development (~ episode) Dev + a mapping Act from elementary action terms to actions + an invocation log Inv

• An action is here a set of pairs of partial states, as in the previous lecture

• An invocation log is here a set of pairs of timepoints and action terms. Each pair specifies an attempt to begin executing the action term at that timepoint

• A formula [H t f v] is true in an interpretation <Dev Act Inv> iff [: f v] is a member of Dev(t), meaning that v holds as the value of f

• Must also define D and related predicates, later slide

Page 24:

Repeat: State-space Ontology of Actions

• The structures in lab2 implement this ontology

• Purpose: unified treatment for several ways of representing the preconditions and effects of actions

• It is also the basis for planning algorithms

• Start from a set Fea of features and an assignment of a range for each feature

• Consider the space of all feature states Sta; each of them is a mapping from features to values, and there may be restrictions on what combinations of values are possible

• A complete feature state assigns a value to all members of Fea, a partial feature state does not. Complete is the default.

• A development Dev is a mapping from the integers 0, 1, 2, … to feature states. (Similar to episodes in Leonardo)

Page 25:

Example 1

• The set of models for

[H t lightning true] → [H t+1 thunder true]

consists of those interpretations where, in the development, each state containing [: lightning true] is followed by a state containing [: thunder true]

• Notice that only the Dev part of the interpretation is used in this simple example; the other two parts may be empty sets, or whatever else

Page 26:

Definition of Models for Actions, II

• Consider an interpretation <Dev Act Inv>

• A formula [P t a] is true iff pre ⊆ Dev(t) for some member <pre post> of Act(a), meaning a is performable at time t

• A formula [D t t+1 a] is true in the interpretation iff pre ⊆ Dev(t) and post ⊆ Dev(t+1) for some member <pre post> of Act(a), meaning the action is done then

• A formula [G t a] is true in the interpretation iff the pair of t and a is a member of Inv

• A formula [D s t a] where s ≠ t-1 is false, so actions with extended duration are not admitted (in the present, simplified version of the formalism)

• Repeat: If A is a set of propositions, then Mod[A] is the set of interpretations where all members of A are true.
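The truth definitions for H, P, D and G over an interpretation <Dev Act Inv> can be sketched directly. The encodings below (dicts for states, tuples for action terms, and the feature name in the usage example) are illustrative assumptions, not the course's own code.

```python
# Dev: timepoint -> state (dict feature -> value); Act: action term ->
# set of <pre, post> pairs of partial states; Inv: set of <t, a> pairs.

def subsumes(partial, state):
    """pre ⊆ Dev(t): every binding of the partial state holds."""
    return all(state.get(f) == v for f, v in partial.items())

def holds(dev, t, f, v):                 # [H t f v]
    return dev[t].get(f) == v

def performable(dev, act, t, a):         # [P t a]
    return any(subsumes(pre, dev[t]) for pre, _ in act[a])

def done(dev, act, s, t, a):             # [D s t a]; only s = t-1 admitted
    return s == t - 1 and any(subsumes(pre, dev[s]) and
                              subsumes(post, dev[t])
                              for pre, post in act[a])

def attempted(inv, t, a):                # [G t a]
    return (t, a) in inv

# The fill-the-glass situation used on the later slides:
fill = ("fill", "glass4", "beer")
act = {fill: [({"subst-in-glass4": "empty"},
               {"subst-in-glass4": "beer"})]}
dev = {14: {"subst-in-glass4": "empty"},
       15: {"subst-in-glass4": "beer"}}
inv = {(14, fill)}
```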

Page 27:

Interpretations

• In order to merely characterize what has happened during an episode, it is sufficient to use the predicates H and D, and interpretations can have the form <Dev Act {}>

• In order to characterize the intention or the plan of an agent, and the execution of that plan, use all of H, D, P and G, and make use of interpretations of the form <Dev Act Inv> with non-empty Inv

Page 28:

Return to Action Laws

• [P t [fill g b]] <-> [H t (subst-in g) empty]

• [D t-1 t [fill g b]] -> [H t (subst-in g) b]

• [G t a] & [P t a] -> [D t t+1 a] (simplified)

• [H 14 (subst-in glass4) empty]

• [G 14 [fill glass4 beer]]

• We expect to be able to obtain the consequence

• [H 15 (subst-in glass4) beer]

Page 29:

Rewrite as

• 1 -[P t [fill g b]] v [H t (subst-in g) empty]

• 2 [P t [fill g b]] v -[H t (subst-in g) empty]

• 3 -[D t-1 t [fill g b]] v [H t (subst-in g) b]

• 4 -[G t a] v -[P t a] v [D t t+1 a]

• 5 [H 14 (subst-in glass4) empty]

• 6 [G 14 [fill glass4 beer]]

• Consequences using the resolution rule + instantiation:

• 7 -[P 14 [fill glass4 beer]] v [D 14 15 [fill glass4 beer]] from 4,6

• 8 -[H 14 (subst-in glass4) empty] v [D 14 15 [fill glass4 beer]] from 2,7

• 9 [D 14 15 [fill glass4 beer]] from 5,8

• 10 [H 15 (subst-in glass4) beer] from 3,9

• Instantiation = replacement of variable by some expression

• Resolution and instantiation can be combined: unification
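Unification, the combination of matching and instantiation mentioned above, can be sketched minimally. A hedged illustration: terms are nested tuples, variables are strings beginning with '?', and the occurs-check is omitted for brevity.

```python
def unify(x, y, subst=None):
    """Minimal unification for nested tuple terms. Variables are
    strings beginning with '?'. Returns a substitution dict mapping
    variables to terms, or None on failure. No occurs-check."""
    if subst is None:
        subst = {}
    if x == y:
        return subst
    if isinstance(x, str) and x.startswith("?"):
        return unify_var(x, y, subst)
    if isinstance(y, str) and y.startswith("?"):
        return unify_var(y, x, subst)
    if isinstance(x, tuple) and isinstance(y, tuple) and len(x) == len(y):
        for xi, yi in zip(x, y):
            subst = unify(xi, yi, subst)
            if subst is None:
                return None
        return subst
    return None

def unify_var(var, term, subst):
    if var in subst:                 # already bound: follow the binding
        return unify(subst[var], term, subst)
    return {**subst, var: term}

# Match the rule literal [G t a] against the fact [G 14 [fill glass4 beer]]
s = unify(("G", "?t", "?a"), ("G", 14, ("fill", "glass4", "beer")))
```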

Page 30:

Just a complicated way of doing a simulation??

• In this simple example, yes, but the same method also works in cases that require more expressivity, like:

• If you don't know whether the agent's intention is to pour beer or juice into the glass

• If you just know that the intention concerns the glass that is in the robot's hand, without knowing its identifier

• If the intention is to fill all the glasses that are on a tray (after some generalization of the above)

• and many other cases

• The important thing is that the current state of the environment, its expected future states, actions, and intentions for actions are all expressed in a uniform manner and can be combined using the inference operator

Page 31:

Revisit the Premises

• [P t [fill g b]] <-> [H t (subst-in g) empty]

• [D t-1 t [fill g b]] -> [H t (subst-in g) b]

• [G t a] & [P t a] -> [D t t+1 a] (simplified)

• [H 14 (subst-in glass4) empty]

• [G 14 [fill glass4 beer]]

• We expect to be able to obtain the consequence

• [H 15 (subst-in glass4) beer]

• Notice: Domain specific (= Application specific), Ontology based, Current situation

• The full set of 'red' and 'blue' premises can be made to characterize the Act part of the interpretation completely (details next time)

Page 32:

Reasoning about Plans – two cases

• Given the logical rules, action laws, initial state of the world, and one or more intentions by the agent – predict future states of the world: done by inference for obtaining logic formulas that refer to later timepoints

• rules, initstate, plan |= effects

• Given the logical rules, action laws, initial state of the world, and an action term expressing a goal for the agent – obtain a plan (an intention) that will achieve that goal

• rules, initstate, plan |= goal

• where the desired result (marked in red on the original slides) is the effects in the first case and the plan in the second

• Planning is therefore an inverse problem wrt inference. In formal logic this operation is called abduction. This will be the main topic of the next lecture.