logical formalization of intelligent agent systems

Post on 12-Jan-2016


Slide-1

Logical Formalization of intelligent agent systems

Cheng-Chia Chen

Department of Computer Science, National Cheng-Chi University

Slide-2

Contents

• Introduction
• Review of basic nonclassical logics used in agent theory
• Formalize knowledge, belief and actions of agents
• Formalize motivational attitudes of agents
• Formalize communication between agents
• Conclusion

Slide-3

I. Introduction

• What is an agent?
• Applications
• Why an agent theory?

Slide-4

I. Introduction

What is an agent ?

Slide-5

Some definitions

Collected from: http://www.msci.memphis.edu/~franklin/AgentProg.html

Is it an Agent, or just a Program?:

A Taxonomy for Autonomous Agents

Stan Franklin and Art Graesser

Institute for Intelligent Systems

University of Memphis

Slide-6

The AIMA Agent [Russell and Norvig]

• Anything that can be viewed as
– perceiving its environment through sensors and
– acting upon that environment through effectors.
• Too general: by this definition, every program is an agent.

Slide-7

The Maes Agent [MIT]

• Autonomous agents are
– computational systems that
– inhabit some complex dynamic environment,
– sense and act autonomously in this environment, and by doing so
– realize a set of goals or tasks for which they are designed.

Slide-8

The Hayes-Roth Agent [KSL,1995]

• Intelligent agents continuously perform three functions:
– perception of dynamic conditions in the environment;
– action to affect conditions in the environment; and
– reasoning to interpret perceptions, solve problems, draw inferences, and determine actions.

Slide-9

The IBM Agent

• Intelligent agents are software entities that
– carry out some set of operations
– on behalf of a user or another program
– with some degree of independence or autonomy, and in so doing,
– employ some knowledge or representation of the user's goals or desires.

Slide-10

So, what is an agent ?

• An autonomous agent is a system
– situated within and a part of an environment that
– senses that environment and
– acts on it, over time,
– in pursuit of its own agenda and
– so as to effect what it senses in the future.

Slide-11

Agent Properties

• Property (other names): meaning
• autonomous: exercises control over its own actions
• goal-oriented (pro-active, purposeful): does not simply act in response to the environment
• communicative (social ability): communicates with other agents, perhaps including people
• reactive (sensing and acting): responds in a timely fashion to changes in the environment

Slide-12

Agent Properties (cont’d)

• Property (other names): meaning
• temporally continuous: is a continuously running process
• learning (adaptive): changes its behavior based on its previous experience
• mobile: able to transport itself from one machine to another
• flexible: actions are not scripted
• character: believable "personality" and emotional state

Slide-13

Applications

• Systems and Network Management
• Mail and Messaging
– filtering of email, news, etc.
• Information Access and Management
– information filtering and discovery
• Collaboration
• Electronic Commerce
• Adaptive User Interfaces
• Entertainment
• and more...

Slide-14

Why an agent theory ?

• For specification and verification of agent systems
• For reasoning about agent systems
• For knowledge representation formalisms
• For formalizing concepts of interest in philosophy and cognitive science
• As a basis for analyzing micro and macro aspects of agent society

Slide-15

II. Review of basic logics

• Classical propositional logic (CPL)

• Basic modal logic

• logic of knowledge and belief

• deontic logic

• logic of actions and programs(PDL)

Slide-16

Elements of a Logic

• Language
– syntax (formal language)
– semantics (model theory)
– axiomatics (proof theory)
– decidability & complexity (computation theory)
– automated deduction (theorem proving)

Slide-17

Classical Propositional Logic(CPL)

• The language L:
– a set of proposition symbols (PV): p, q, r, ... meaning it-is-raining, it-is-cloudy, ...
– logical connectives: /\ (and), ~ (negation)
– (well-formed) formulas (abstract syntax): P ::= p | P /\ Q | ~P
• Definitions:
– P \/ Q abbreviates ~(~P /\ ~Q)
– P => Q abbreviates ~(P /\ ~Q)

Slide-18

The semantics for CPL

• Goals:
– 1. define the contexts in which formulas are given truth values.
– 2. define the truth conditions for formulas.
• Interpretation (world, state): any assignment I of truth values {1,0} to the proposition symbols.
• Truth conditions (satisfaction relation |=):
– I |= p iff I(p) = 1
– I |= P /\ Q iff I |= P and I |= Q
– I |= ~P iff not I |= P
• If I |= A, we say I is a model of A.
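The truth conditions above can be executed directly. A minimal sketch in Python (the nested-tuple encoding of formulas and all helper names are my own, not notation from the slides):

```python
from itertools import product

def holds(I, f):
    """I |= f, following the three truth conditions on the slide."""
    if isinstance(f, str):            # I |= p  iff  I(p) = 1
        return I[f] == 1
    if f[0] == "neg":                 # I |= ~P  iff  not I |= P
        return not holds(I, f[1])
    if f[0] == "and":                 # I |= P /\ Q  iff  I |= P and I |= Q
        return holds(I, f[1]) and holds(I, f[2])
    raise ValueError(f[0])

def implies(P, Q):
    # P => Q abbreviates ~(P /\ ~Q), exactly as defined on slide 17
    return ("neg", ("and", P, ("neg", Q)))

def valid(f, symbols):
    """|= f: true under all 2^n interpretations of the given symbols."""
    return all(holds(dict(zip(symbols, bits)), f)
               for bits in product([0, 1], repeat=len(symbols)))
```

For example, `valid(implies("p", "p"), ["p"])` is `True`, while the bare symbol `"p"` is satisfiable but not valid.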

Slide-19

Some logical notions

• A formula is satisfiable iff it is true in some world.
• A formula is valid (a tautology), written |= A, iff it is true in all worlds.
• A is a logical consequence of a set of formulas S (S |= A) iff A is true in all models of S.
• Problem: how to characterize the set {A | A is a tautology}?

Slide-20

Calculus and provability

• A calculus C over a language L is a finite set of rules, each of the form (A1, A2, ..., An, B):
– A1, A2, ..., An: premises
– B: conclusion
– if n = 0, the rule is an axiom
• Examples: (A, B, A /\ B), (A, A => B, B), (A => B, B, A), ...

Slide-21

Provability

• Given a calculus C, the set of C-provable formulas {A | A is C-provable (denoted |-C A)} is defined recursively as follows:
– Basis: if (A) is a rule (an axiom), then A is C-provable.
– Induction: if (A1, ..., An, B) is a rule and A1, ..., An are C-provable, then so is B.

Slide-22

An axiomatization for CPL

• Let CPL be the calculus:

(1) Axiom schema:– A => (B => A)– (A=>(B =>C)) => ((A=>B)=>(A=>C))– (~A => ~B) => (B => A)

(2) Inference rule:– from A and A => B infer B (MP)

• Theorem: A is valid in CPL iff A is CPL-provable

Slide-23

Basic Modal logic

• The logical study of necessity and possibility.
• The language:
– CPL augmented with two modal operators: [] (necessity) and <> (possibility).
– For any proposition P, []P (<>P) means "P is necessarily (possibly) true".
– Meaning of []p:
• depends on the context of use; it is not determined by the truth value of p alone.
• A family of logics rather than a single logic.

Slide-24

Types of necessity

• Logical necessity:
– e.g., p \/ ~p is logically necessarily true.
• Physical necessity:
– e.g., F = ma
• Epistemic necessity:
– e.g., it is believed (known) that ...
• Normative necessity:
– e.g., it is obligated (permitted, forbidden) that ...
• Time-related necessity (always, eventually)
• Others:
– e.g., after the program terminates, P must hold.

Slide-25

Formal Definition

• The language:
– Alphabet:
• PV: a set of propositional variables.
• logical connectives: ~ (not), /\ (and), [] (necessity)
– MF: the set of modal formulas, defined inductively:
• A ::= p | A /\ B | ~A | []A
– Abbreviations (macros):
• (A \/ B) abbreviates ~(~A /\ ~B)
• (A => B) abbreviates ~(A /\ ~B)
• <>A abbreviates ~[]~A

Slide-26

Possible-world Semantics for modal logic

• Truth conditions for p /\ q, p \/ q, p => q, and ~p:
– Let p = "I win the game", q = "It is 5 p.m."
– Assume I win the game and the present time is 3 p.m.;
– then p /\ q is false, p \/ q is true, and p => q is false.
• But how about the statement:
– []p = "It must be the case that I win the game."

Slide-27

Meaning of necessity and possibility:

• The game:
– Two players A and B each randomly get one card from four cards labeled 1, 2, 3, 4.
• Rule:
– The player who gets a card larger than the other's wins.

Slide-28

Scenario I: A gets “2”.

• Then consider the following sentences:– 1. “A may possibly win”

• = “It is possibly true that A win” = “ ⃟A_win”

– 2. “A may possibly not win”– 3. “A must win”– 4. “B must not get “2””

• Which is right ? why?

Slide-29

The answer:

• Statement 1 is right, since (2,1) may be the real world, in which A wins.
• Statement 2 is right, since (2,3) and (2,4) are possible, and in them A does not win.
• Statement 3 is false, since there are cases (e.g., (2,3), (2,4)) in which A does not win.
• Statement 4 is true, since in all possible cases B does not get 2.

Slide-30

The Rule:

(Diagram.) The real world is (2,?), whose possible worlds are:
– (2,1): A_win, ~B_2
– (2,3): ~A_win, ~B_2
– (2,4): ~A_win, ~B_2
Worlds such as (3,4) are impossible. Hence at the real world: ~[]A_win, ⃟A_win, ⃟~A_win, []~B_2.

Slide-31

The Possible-world Semantics:

• Let W = the set of worlds
– e.g., {(x,y) | x, y ∈ {1,...,4} & x ≠ y}
• Let V : W x PV -> {0,1} be a valuation function s.t. V(w,p) = 1 iff p is assigned true at world w.
– e.g., V((2,1), A_win) = 1
• Let R be a binary relation (i.e., a subset of W x W) s.t. wRw' iff w' is a possible world of w.
– e.g., (2,x)R(2,1), (2,x)R(2,3), (2,x)R(2,4).
• The triple M = <W,R,V> is called a (possible-world) structure.

Slide-32

Truth-conditions for modal formulas

Let M = <W,R,V> be a possible-world structure and w ∈ W a world.
• The statement "A is true at world w in structure M" is defined as follows:
– M,w |= p iff V(w,p) = 1
– M,w |= A /\ B iff M,w |= A and M,w |= B
– M,w |= ~A iff not M,w |= A
– M,w |= ⃟A iff A is true at some possible world of w
– M,w |= []A iff A is true at all possible worlds of w
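These truth conditions yield a small model checker. A sketch instantiated with the card-game structure of slides 27-31 (the slides only draw the structure; the world names and tuple encoding here are my own):

```python
def sat(M, w, f):
    """M,w |= f for formulas built from neg/and/box/dia and atoms."""
    W, R, V = M
    if isinstance(f, str):
        return f in V[w]                               # V(w,p) = 1
    op = f[0]
    if op == "neg":
        return not sat(M, w, f[1])
    if op == "and":
        return sat(M, w, f[1]) and sat(M, w, f[2])
    if op == "box":    # true at all possible worlds of w
        return all(sat(M, v, f[1]) for v in W if (w, v) in R)
    if op == "dia":    # true at some possible world of w
        return any(sat(M, v, f[1]) for v in W if (w, v) in R)
    raise ValueError(op)

# Scenario I: A holds card 2; the possible worlds are (2,1), (2,3), (2,4).
W = ["(2,?)", "(2,1)", "(2,3)", "(2,4)"]
R = {("(2,?)", v) for v in ["(2,1)", "(2,3)", "(2,4)"]}
V = {"(2,?)": set(), "(2,1)": {"A_win"}, "(2,3)": set(), "(2,4)": set()}
M = (W, R, V)
```

At the real world (2,?), `sat` confirms the four answers of slide 29: ⃟A_win and ⃟~A_win hold, []A_win fails, and []~B_2 holds.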

Slide-33

Some definitions

• A: a modal formula; M: a structure; C: a class of structures.
• A is valid iff it is true in all worlds of all structures.
• A is C-valid iff it is true at all worlds of all structures of C.
• Problem: given a class of structures C, what is {A | A is C-valid}?

Slide-34

Interesting classes of structures

• Class name: property of R
– T reflexive: wRw.
– D serial: for all w, there is w' s.t. wRw'.
– 4 transitive: wRw' & w'Rw'' ⇒ wRw''.
– 5 Euclidean: wRw' & wRw'' ⇒ w'Rw''.
– B symmetric: wRw' ⇒ w'Rw.
• r: any string from {T,D,4,5,B} without repetition.
• Kr = the class of structures whose R satisfies all properties mentioned in r.
– (I.e., every theorem of the logic Kr is valid in all Kr-structures, and vice versa.)

Slide-35

Axiomatization of modal logics

• Axiom schemata:
– PC: all truth-functional tautologies
– K: [](P => Q) => ([]P => []Q)
– T: []P => P
– D: []P => ~[]~P
– 4: []P => [][]P
– 5: ~[]P => []~[]P
– B: ~P => []~[]P
• Inference rules:
– MP: from P and P => Q infer Q
– Nec: from P infer []P

Slide-36

Axiomatizations of modal logic

• r: any subset of {T,D,4,5,B}.
• Kr = the axiom system (calculus) with axioms K, PC and all of r, and inference rules MP and Nec.
• Kr-provable formulas are defined recursively as follows:
– 1. Every axiom of Kr is Kr-provable.
– 2. If P and P => Q are Kr-provable, then so is Q (MP).
– 3. If P is Kr-provable, then so is []P (Nec).
• Theorem [Chellas80]:
– A is Kr-valid iff A is Kr-provable.

Slide-37

Some useful modal logics

• Logical system: property of R; usage
– S5 (KT45): equivalence; logic of knowledge
– KD: serial; deontic logic
– KD45: almost equivalence; logic of belief
– S4 (KT4): reflexive, transitive; intuitionistic logic
– S4.3: linear (total); temporal logic

(Diagram: the accessible worlds {w' | w R w'} form a fully connected cluster; with T the real world must itself be possible, otherwise it may or may not be possible.)

Slide-38

Logic of Knowledge and Belief

• Modal logic of knowledge: KT45 (S5)
• Modal logic of belief: KD45 (weak S5)
• Epistemic interpretation of the knowledge & belief axioms:
– KA means A is known; BA means A is believed.
– T: []A => A (knowledge axiom)
– D: []A => ~[]~A (belief axiom)
– 4: []A => [][]A (positive introspection)
– 5: ~[]A => []~[]A (negative introspection)
– K: []A /\ [](A => B) => []B (distribution axiom)
– Nec: from P infer []P; the agent knows the logic.

Slide-39

Extensions to multimodal logics:

– S5 (KD45) can model only a single agent's knowledge (beliefs).
– Multi-agent case: n agents 1, 2, ..., n;
• 2n knowledge and belief operators K1, B1, ..., Kn, Bn:
• KiA (BiA) means agent i knows (resp. believes) A.
– Resulting logic: S5n + WS5n
• n copies of S5 and n copies of KD45, one pair per agent; e.g., Tj: KjA => A where j = 1, ..., n.
– Semantics: structures M = <W, {Ki, Bi} i=1..n, V>
• each Ki is an equivalence relation on W, and each Bi is a serial, transitive and Euclidean relation.

Slide-40

Related Issues[Halpern85]

• Logical omniscience problem:
– Agents with S5 (KD45) abilities are perfect logical reasoners, but humans never are.
• Common knowledge and distributed knowledge:
– [E]P = [1]P /\ [2]P /\ ... /\ [n]P
– [C]P = [E]P /\ [E][E]P /\ [E][E][E]P /\ ... = [E]P /\ [E][C]P
– [D]P = P can be known by an agent who knows all that the others know (the wisest man).
– Needed and useful in many fields (economics, distributed systems, AI, ...).
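The equation [C]P = [E]P /\ [E][C]P can be read as a greatest-fixpoint definition and computed on a finite structure. A hedged sketch (finite explicit relations, one per agent; the set-based encoding is mine, not from the slides):

```python
def E_worlds(W, relations, P_worlds):
    """Worlds where every agent knows P: all Ki-successors satisfy P."""
    return {w for w in W
            if all(v in P_worlds
                   for K in relations          # one accessibility relation per agent
                   for (u, v) in K if u == w)}

def C_worlds(W, relations, P_worlds):
    """Greatest fixpoint of X = [E]P /\ [E]X, i.e. common knowledge of P."""
    X = set(W)                                 # start from all worlds, shrink
    while True:
        nxt = E_worlds(W, relations, P_worlds) & E_worlds(W, relations, X)
        if nxt == X:
            return X
        X = nxt
```

With two agents, W = {1,2,3}, K1 = {(1,2)}, K2 = {(2,3)} and P true at world 2 only, everyone knows P at {1,3} but common knowledge holds only at 3, where no chain of accessibility steps can reach a ~P-world.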

Slide-41

Deontic interpretation of modal logic

• Deontic logic (D or KD):
– PA means A is permitted; OA means A is obligated; FA means A is forbidden.
– A is (strongly) forbidden = doing A or bringing about A will result in punishment (dangerous, disastrous) worlds.
– A is obligated = not doing A or not bringing about A will result in punishment = ~A is forbidden.
– A is (weakly) permitted = A is not forbidden = doing A may not result in punishment.
– Another possible pair: weakly forbidden / strongly permitted.

Slide-42

Semantic analysis of forbidden, obligation and permission

(Diagram.) From the current world, several worlds may become the real world; those where commit-crime or dead holds are undesired (red), the rest are permitted (white). The worlds carry combinations of drive-car, murder, pay-tax and dead. Reading off the diagram:
– F murder: all reachable murder-worlds are red.
– O pay-tax: all reachable ~pay-tax worlds are red.
– P drive-car: some reachable drive-car world is white.

Slide-43

Formalization of Deontic logic

• W: the set of all possible worlds.
• D: a set of undesired, punishment worlds.
• V: W x PV -> {0,1} with the constraint that
– V(w,v) = 1 iff w ∈ D.
• I.e., we use the special proposition v to denote the sanction or punishment worlds.
• R: a binary relation on W, s.t.
– wRw' means w' is a possible world that the agent may choose to make the real world from w.

Slide-44

Truth conditions for PA,OA, &FA

– M,w |= FA iff M,w |= [] (Av)• ie., for all w’, if wRw’ & M,w|=A then M,w |= v.

– M,w |= OA iff M,w |= F~A iff M,w |= [](~A v)

– M,w |= PA iff M,w |=~FA iff M,w |= ⃟(A/\ ~v)• I.e., there is a world w’ s.t. wRw’ & M,w |= A /\ ~v.

Slide-45

Properties of the deontic logic:

• By definition:
– FA = [](A => v)
– OA = F~A = [](~A => v)
– PA = ~FA = ⃟(A /\ ~v)
• All KD axioms (K, D) hold.
• Desirable property OA => PA: not valid in K but valid in KD (i.e., R must be serial).
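The deontic truth conditions reduce to reachability plus a sanction set D. A small sketch (the dictionary-of-successors encoding and the example numbers below are mine; A_worlds is the set of worlds where A holds):

```python
def forbidden(R, D, w, A_worlds):
    """F A at w: every reachable A-world is a punishment world ([](A => v))."""
    return all(v in D for v in R.get(w, []) if v in A_worlds)

def obligated(R, D, w, A_worlds, W):
    """O A = F ~A: every reachable ~A-world is a punishment world."""
    return forbidden(R, D, w, W - A_worlds)

def permitted(R, D, w, A_worlds):
    """P A = ~F A: some reachable A-world escapes punishment (<>(A /\ ~v))."""
    return any(v not in D for v in R.get(w, []) if v in A_worlds)
```

For instance, with reachable worlds {1,2,3} from world 0 and D = {3}: if murder holds only at 3 then F murder; if pay-tax holds at {1,2} then O pay-tax; if drive-car holds at {1,3} then P drive-car, mirroring slide 42.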

Slide-46

Temporal interpretation of modal logic

(Diagram: a timeline through "now", with the real past and real future lying on the actual history, and branching possible pasts and possible futures around them.)

Taxonomy of temporal structures:
• linear vs. branching time
• past time vs. future time vs. past & future
• continuous vs. discrete

Slide-47

Linear discrete time temporal logic

• Temporal operators:– FA means A is eventually true– GA means A is always true– A U B means A is true until B becomes true– 0A: A is true at the next time.

Slide-48

Meaning of temporal formulas

• Linear discrete-time temporal structure: time points 0, 1, 2, 3, ..., starting from an initial world.
(Diagram: Fp holds now when p holds at some later point; Gq when q holds at every point from now on; 0r when r holds at the next point; A U B when A holds at every point until the point where B becomes true.)

Slide-49

Meaning of temporal formulas

• Linear discrete-time temporal logic:
• W = N = {0,1,2,3,...}: the set of time points
• V: N x PV -> {0,1}
• Truth conditions:
– M,n |= 0A iff M,n+1 |= A
– M,n |= FA iff there is m >= n s.t. M,m |= A
– M,n |= GA iff for all m >= n, M,m |= A
– M,n |= A U B iff there is m >= n s.t. M,m |= B and for all s with n <= s < m, M,s |= A
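These conditions can be run on traces. A finite-trace variant as a sketch (the slides use W = N; here a trace is a finite list of sets of true propositions, G/F/U quantify only over the remaining positions, and 0A is false at the last position; all encoding choices are mine):

```python
def sat(trace, n, f):
    """trace, n |= f for F/G/U/next formulas over a finite trace."""
    if isinstance(f, str):
        return f in trace[n]
    op = f[0]
    if op == "neg":
        return not sat(trace, n, f[1])
    if op == "and":
        return sat(trace, n, f[1]) and sat(trace, n, f[2])
    if op == "next":   # 0A: A is true at the next time point
        return n + 1 < len(trace) and sat(trace, n + 1, f[1])
    if op == "F":      # eventually: some m >= n
        return any(sat(trace, m, f[1]) for m in range(n, len(trace)))
    if op == "G":      # always: all m >= n
        return all(sat(trace, m, f[1]) for m in range(n, len(trace)))
    if op == "U":      # A U B: B at some m >= n, A at all n <= s < m
        return any(sat(trace, m, f[2]) and
                   all(sat(trace, s, f[1]) for s in range(n, m))
                   for m in range(n, len(trace)))
    raise ValueError(op)

trace = [{"a"}, {"a"}, {"a"}, {"b"}, {"b"}]   # a,a,a,b,b
```

On this trace, "a U b" and "F b" hold at position 0, "G a" does not, and "G b" holds from position 3 on.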

Slide-50

Logic of programs and actions

• Modal logic of programs (Dynamic Logic, DL)
• PDL: the propositional version of DL
• The language:
– primitive programs: a, b, c, ...
– primitive propositions: p, q, r, ...
– program constructs: ";", "+", "*", "?"
– logical connectives: /\, ~, and [A] for each program A.

Slide-51

Syntax of Programs

– (Compound) programs A ::=
• a: any primitive program is a program (e.g., x++ in C)
• A;B: do A and then do B
• A+B: do A or B nondeterministically
• A*: iterate A a nondeterministic number of times (A* = skip + A + A;A + A;A;A + ...)
• P?: test if P is true

Slide-52

Syntax of Formulas

– Formulas (assertions): P ::=
• p: any primitive proposition is a formula
• P /\ Q: both P and Q are true
• ~P: P is not true
• [A]P: after A terminates, P will be true
• <A>P = ~[A]~P: P holds after some execution of A

Slide-53

An Example:

• integer x, y, z
– x := 3;
– y := (1,4);
– z := x+1 | y := x
• Problem:
– Is it true that z > 0 or y >= x-2 after executing the program, supposing the initial program state is (4,3,2)?

Slide-54

Formalization of the problem:

• Two primitive propositions:
– p = "z > 0"; q = "z >= x-2"
• Four primitive programs:
– a = "x := 3", b = "y := (1,4)",
– c = "z := x+1", d = "y := x".
• The program: A = a;b;(c | d)
• The problem: is [A](p \/ q) true?

Slide-55

Analysis:

• A program state is a triple (i,j,k) of integers, denoting the simultaneous values of the variables (x,y,z).
• Let W = {(i,j,k) | i,j,k are integers} be the set of all possible program states.

Slide-56

a = "x := 3", b = "y := (1,4)", c = "z := x+1", d = "y := x";
p = "z > 3", q = "z >= x+1".

(Diagram.) From the initial program state (4,3,2), a leads to (3,3,2); b then leads to (3,1,2) or (3,4,2); finally c leads to (3,1,4) or (3,4,4), where p, q and p\/q hold, while d leads back to (3,3,2), where ~p, ~q and ~(p\/q) hold. The diagram traces the composites a;b, c+d, and a;b;(c+d) over these states.

Slide-57

(Diagram: the transition relations of the primitive programs on a generic state (i,j,k).)
– a: x := 3 maps (i,j,k) to (3,j,k)
– b: y := (1,4) maps (i,j,k) to (i,1,k) or (i,4,k)
– c: z := x+1 maps (i,j,k) to (i,j,i+1)
– d: y := x maps (i,j,k) to (i,i,k)

Slide-58

The Semantic rules

• 0. Let W = the set of all possible program states.
• 1. Each primitive proposition has a truth value in a program state:
– denoted by a function V: W x PV -> {1,0} s.t.
– V(w,p) = 1 iff p is true at state w.
• 2. Each primitive program a is a state transformer, denoted by a binary relation R(a) ⊆ W x W s.t.
– w R(a) w' means the program state can become w' from w by executing a.
• M = <W,R,V> is called a (program) structure.

Slide-59

Composition rule for programs:

• R(A;B) = R(A)R(B) = {(w,w'') | there is w' s.t. w R(A) w' and w' R(B) w''}
• R(A+B) = R(A) U R(B)
• R(A*) = I U R(A) U R(A)R(A) U ... = the reflexive and transitive closure of R(A)
• R(P?) = {(w,w) | P is true at w}
• Classical program constructs are definable:
– if P then A else B = P?;A + ~P?;B
– while P do A = (P?;A)*;~P?
– repeat A until P = A;(~P?;A)*;P?
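The composition rules above can be executed on the running example of slides 53-56. A sketch over a finite slice of states, using the figure's p = "z > 3" and q = "z >= x+1" (the range cutoff and helper names are my own choices):

```python
from itertools import product

STATES = list(product(range(5), repeat=3))        # finite slice of (x,y,z) states

def seq(Ra, Rb):
    """R(A;B): relational composition of R(A) and R(B)."""
    return {(w, u) for (w, v1) in Ra for (v2, u) in Rb if v1 == v2}

def choice(Ra, Rb):
    """R(A+B) = R(A) U R(B)."""
    return Ra | Rb

# Primitive programs as binary relations on states (x, y, z):
a = {((x, y, z), (3, y, z)) for (x, y, z) in STATES}                   # x := 3
b = {((x, y, z), (x, v, z)) for (x, y, z) in STATES for v in (1, 4)}   # y := (1,4)
c = {((x, y, z), (x, y, x + 1)) for (x, y, z) in STATES}               # z := x+1
d = {((x, y, z), (x, x, z)) for (x, y, z) in STATES}                   # y := x

A = seq(seq(a, b), choice(c, d))                  # the program a;b;(c+d)

def box(R, w, P):
    """w |= [A]P: P holds at every R-successor of w."""
    return all(P(u) for (v, u) in R if v == w)
```

From the initial state (4,3,2), the final states are (3,1,4), (3,4,4) and (3,3,2); with the figure's p = "z > 3" the formula [A](p \/ q) fails because of (3,3,2), while with slide 54's p = "z > 0" it holds.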

Slide-60

Truth conditions for Formulas

– M,w |= p iff V(w,p) = 1
– M,w |= P /\ Q iff M,w |= P and M,w |= Q
– M,w |= ~P iff not M,w |= P
– M,w |= [A]P iff for all w', if w R(A) w' then M,w' |= P
– M,w |= <A>P iff there is w' s.t. w R(A) w' and M,w' |= P
• A formula is valid iff it is true at every world of every program structure.
• A formula is satisfiable iff it is true at some world of some program structure.
• PDL subsumes Hoare logic: P{A}Q corresponds to P => [A]Q.

Slide-61

Variants of PDL [Harel84]

• DPDL (deterministic PDL):
– atomic programs are deterministic
• SPDL (structured PDL):
– remove + and *
– add "if then else" and "while do"
• SDPDL (structured DPDL):
– atomic programs are deterministic
– replace + and * by "if then else" and "while do"

Slide-62

PDL as a logic of actions

• Too strong:
– the *-operator may not be necessary
– the +-operator is not very natural
• Too weak:
– a notion of not doing something is needed
• i.e., if A is an action, then -A (not doing A) should be an action
– a notion of concurrent/parallel execution of actions is needed; for actions A and B:
• A&B means doing A and B in parallel
• A \/ B means A;B + B;A + A&B
– Need internal free choice: A B

Slide-63

Axiomatize PDL

• The following formulas are valid in PDL:

1. CPL: all tautologies of propositional logic

2. K: [A](P => Q) /\ [A]P => [A]Q

3. cmp: [A;B]P <-> [A][B]P

4. union: [A+B]P <-> ([A]P /\ [B]P)

5. test: [P?]Q <-> (P => Q)

6. mix: [A*]P => (P /\ [A]P /\ [A][A]P /\ ...)

∴ [A*]P => (P /\ [A][A*]P)

7. induction: (P /\ [A*](P => [A]P)) => [A*]P

Slide-64

PDL

• Valid inference rules in PDL:– MP: From P and P Q infer Q– Gen: From P infer [A]P

• TheoremTheorem::– 1. P is valid in PDL iff P can be proved from the

above calculus. (In symbols, |=PDLP|-PDLP)

– 2. The set {A | A is a valid in PDL} is EXPTIME-complete

Slide-65

III. Formalize knowledge, belief and actions of agents: first integration

• The language:
– PV: a set of propositional symbols p, q, r, ...
– Agt: a set of agents 1, 2, 3, ...
– Act: a set of primitive actions a, b, c, ...
• Compound actions (SDPDL):
– A ::= a | A;B | if P then A else B | while P do A
• do(i,A) denotes the event of performing A by agent i.

Slide-66

Syntax and agent structures

• Formulas P ::=– p | P/\ Q | ~P | skip(do nothing) | fail

– Bi P // i believes P

– Ki P // i knows P

– [do(i,A)]P // After i doing A, P holds• Semantics:

– (Agent) structure M=<W,V,{Ki},{Bi}, {Ta}>

• Ki models agent i’s knowledge – (epistemic alternatives)

• Bi models agent i’s beliefs

• Ta models primitive action a.

Slide-67

Preconditions for actions [Linder94]

• Two necessary preconditions for do(i,A):
• Ability Ab(i,A):
– a property internal to the agent,
– modeled as primitive:
– i.e., add C: Agt x Act x W -> {1,0} to the structure s.t.
– C(i,a,w) = 1 means "i is capable of doing a at world w".
• Opportunity OP(i,A):
– a property determined by the external environment,
– modeled by <do(i,A)>true (e.g., wash_hair).

Slide-68

Truth conditions for Ab & OP

• M,w |= Ab(i,a) iff C(i,a,w) = 1
• M,w |= Ab(i,P?) iff M,w |= P (not very realistic)
• M,w |= Ab(i,A;B) iff M,w |= Ab(i,A) /\ [do(i,A)]Ab(i,B)
• M,w |= Ab(i, if P then A else B) iff M,w |= (P => Ab(i,A)) /\ (~P => Ab(i,B))
• M,w |= Ab(i, while P do A) iff M,w |= ~P \/ (Ab(i,A) /\ [do(i,A)]Ab(i, while P do A)) iff there is n >= 0 s.t. M,w |= Ab(i,(P?;A)^n;~P?)
• M,w |= OP(i,A) iff M,w |= <do(i,A)>true

Slide-69

Properties of the Logic

• Knowledge part: like S5.
• Belief part: like KD45.
• Relationship between knowledge and belief:
– |= KiP => BiP; hence w Bi w' => w Ki w'.
• Action part: like SDPDL:
– |= [do(i,A)](P /\ Q) <-> [do(i,A)]P /\ [do(i,A)]Q
– |= [do(i,A;B)]P <-> [do(i,A)][do(i,B)]P
– |= ...
• Ability part:
– |= Ab(i,A;B) <-> Ab(i,A) /\ [do(i,A)]Ab(i,B)
– |= Ab(i, if P then A else B) <-> (P => Ab(i,A)) /\ (~P => Ab(i,B))
– |= ...

Slide-70

Ability to bring about facts

• Can(i,A,P):
– agent i knows he is able to do A, has the opportunity to do A, and knows that doing A leads to P.
– Let Poss(i,A,P) = Ab(i,A) /\ <do(i,A)>P /\ [do(i,A)]P be the practical possibility.
– Can(i,A,P) = Ki Poss(i,A,P)
– Cannot(i,A,P) = Ki ~Poss(i,A,P)
• Ach(i,P):
– agent i knows how to achieve P by itself:
– Ach(i,P) = there exist actions A s.t. Can(i,A,P)
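Poss and Can are checkable on an explicit finite structure. A hedged sketch (the tuple encoding and the example model are mine; the conjunction <do(i,a)>P /\ [do(i,a)]P is checked as "some successor exists, and all successors satisfy P"):

```python
def poss(M, w, i, a, P):
    """Poss(i,a,P) at w: ability, opportunity, and doing a guarantees P."""
    W, K, T, C = M                    # K[i]: knowledge relation, T[a]: action relation
    succ = [v for (u, v) in T[a] if u == w]
    return (C.get((i, a, w), 0) == 1          # Ab(i,a)
            and len(succ) > 0                 # <do(i,a)>true (opportunity)
            and all(v in P for v in succ))    # [do(i,a)]P, hence also <do(i,a)>P

def can(M, w, i, a, P):
    """Can(i,a,P) = Ki Poss(i,a,P): Poss holds at every epistemic alternative."""
    W, K, T, C = M
    return all(poss(M, v, i, a, P) for v in W if (w, v) in K[i])

# Illustrative two-world model: agent "i" cannot distinguish worlds 1 and 2,
# action "open" leads to world 2 from either world, and i is able everywhere.
M_example = ({1, 2},
             {"i": {(1, 1), (1, 2), (2, 1), (2, 2)}},
             {"open": {(1, 2), (2, 2)}},
             {("i", "open", 1): 1, ("i", "open", 2): 1})
```

Dropping the ability bit at world 2 leaves Poss true at world 1 but makes Can false there, since i considers world 2 possible.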

Slide-71

IV. Formalize motivations

• Preference, goal, commitment.

• New operators and actions:
– 1. Pr(i,P): i prefers P to be true.
– 2. G(i,P): P is i's goal.
– 3. ComDo(A): commit to do A (an action!)
– 4. Com(i,A): A has been committed to by i.

Slide-72

Definitions

• Like other modal operators, use a binary relation Prefi to model agent i's preferences:
• 1. w Prefi w' means agent i, when at state w, prefers to be at state w'.
– w' is called a preference world of w.
• 2. Truth condition for Pri P:
– M,w |= Pri P iff P is true at all preference worlds of w.
• Side-effect problem: |= ((P => Q) /\ Pri P) => Pri Q
– e.g., Pri tooth-moved => Pri pain.

Slide-73

Explicit vs. implicit preferences

• Modification:
• 1. add Ep: Agt x formulas -> {0,1} to the agent model,
– denoting explicit preferences.
• 2. M,w |= Pri P iff
– (1) P is true at all preference worlds of w, and
– (2) Ep(i,P) = 1.
• Preference vs. desire/wish:
– the agent knows its preferences: Pri P => Ki Pri P
– preferences are not changed often (they persist): Pri P => [do(i,A)]Pri P

Slide-74

Define goals

• Goals are unfulfilled, realistic preferences.
• New operator <i>P (P is implementable by i):
– it is possible for i to bring about P by doing something:
– M,w |= <i>P iff there are actions A s.t. M,w |= Poss(i,A,P) (cf. Ach(i,P))
• G(i,P) = Pri P /\ Ki ~P /\ Ki <i>P
• Some properties:
– G(i,P) => Ki ~P; G(i,P) => Ki <i>P
– G(i,P) => ~G(i,~P) (consistency of goals)
– G(i,P) => Ki G(i,P)
– no side effect: Ki(P => Q) /\ G(i,P) => G(i,Q) is not valid

Slide-75

Modeling commitments

• Intuitions:
– Precondition: an agent can commit to do A iff he knows doing A can lead to the fulfillment of a goal P and he can do it.
– Once committed, he should keep track of all remaining subtasks in intermediate states.
– An agenda is prepared for each agent in the agent structure.
• Precondition for do(i, ComDo(A)):
– there is a P with Can(i,A,P) /\ G(i,P) => <do(i,ComDo(A))>true

Slide-76

Desirable properties for commitment:

• |= [do(i,ComDo A)]Com(i,A)

• |= Com(i,A) => Ki Com(i,A)

• |= Com(i,A) => Can(i,A,true), i.e., i can do A
• |= Com(i,A;B) => Com(i,A) /\ [do(i,A)]Com(i,B)
• |= Com(i, if P then A else B) => (Ki P => Com(i,A)) /\ (Ki ~P => Com(i,B))

• |= Com(i, while P do A) /\ Ki P => Com(i, A; while P do A)

Slide-77

Actions that change one’s mind

• Primitive actions for changing one's mind:
• expand P:
– add P to the belief set of the actor
• contract P:
– precondition: BiP
– remove P from the belief set
• revise P:
– precondition: ~Ki~P /\ ~BiP
– update one's belief set to be consistent with P
• test P (confirm P): sense whether P is true in the real world.

Slide-78

The semantic rule for expansions

• Precondition: ~Bi~P
• |= ~Bi~P => [do(i, expand P)]BiP

(Diagram: do(i, expand P) removes all ~P-alternatives from agent i's doxastic alternatives at w, leaving only P-alternatives.)

Slide-79

Semantic rules for contractions

• Precondition: ~KiP
• |= ~KiP => [do(i, contract P)]~BiP

(Diagram: do(i, contract P) adds all of agent i's epistemic ~P-alternatives at w to the doxastic alternatives, so that i no longer believes P.)

Slide-80

Semantic rules for revisions

• Precondition: ~Ki~P /\ ~BiP
• |= ~Ki~P /\ ~BiP => [do(i, revise P)]BiP

(Diagram: do(i, revise P) behaves like the composite of expand P and contract ~P: the resulting doxastic alternatives at w are the epistemic P-alternatives.)
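The three pictures above can be rendered as set operations on explicit sets of doxastic and epistemic alternatives. A hedged sketch at the level of alternative sets only (AGM-style fine structure is ignored; P_worlds is the set of worlds where P holds, and revise is read Levi-style as contract ~P followed by expand P):

```python
def expand(dox, P_worlds):
    """do(i, expand P): keep only the P-alternatives (precondition ~Bi~P)."""
    assert dox & P_worlds, "precondition ~Bi~P violated"
    return dox & P_worlds

def contract(dox, epi, P_worlds):
    """do(i, contract P): add the epistemic ~P-alternatives (precondition ~KiP)."""
    not_P = epi - P_worlds
    assert not_P, "precondition ~KiP violated"
    return dox | not_P

def revise(dox, epi, P_worlds):
    """do(i, revise P): contract ~P, then expand P; afterwards BiP holds."""
    return expand(contract(dox, epi, epi - P_worlds), P_worlds)
```

With epistemic alternatives {1,2,3,4}, doxastic alternatives {1,4} and P true at {2,3} (so Bi~P but ~Ki~P), revision leaves exactly the epistemic P-alternatives {2,3}, establishing BiP.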

Slide-81

Semantic rule for test P

• Preconditions:
– agent i is competent for P (cmpt(i,P)) and ~KiP
• modeled by a function Cmpt: Agt x formulas -> {0,1}
• Postconditions:
– |= cmpt(i,P) /\ P => [do(i, test P)]KiP
– |= cmpt(i,P) /\ ~P => [do(i, test P)]Ki~P
• Axiom:
– |= cmpt(i,P) <-> cmpt(i,~P)
• Semantic rule: similar to belief revision.

Slide-82

V. Formalize communications b/t agents

• Theoretical background:
– Speech act theory [Searle69,85]
• Speech is a kind of action that can change not only the physical environment but also the mental states of the communicators.
– A speech act consists of:
• an illocutionary force (type): assertive, directive, declarative, commissive, permissive, prohibitive, expressive;
• a propositional content.

Slide-83

Classification of speech acts

Force: example
– Assertive: The door is shut.
– Directive: Shut the door.
– Commissive: I will shut the door.
– Permissive: You may shut the door.
– Prohibitive: You may not shut the door.
– Declarative: I name this door the golden gate.
– Expressive: You should not have shut the door.

Propositional contents: the-door-shut; {you,I}-shut-the-door.

Slide-84

New actions

• inform(i,j,P): i informs j of the fact P.
– Precondition: KiP /\ (j trusts i about P).
– |= KiP => [inform(i,j,P)](Tr(j,i,P) => KjP)
– Note: inform(i,j,P) may cause revision of j's knowledge and belief.
• Semantic rule: similar to do(j, revise P).
• request(i,j,P), reply(i,j,P):
– use Q(i,j,P) to record "i has requested j about P"
– Precondition: ~KiP /\ ~Ki~P /\ Ki(KjP \/ Kj~P) /\ Tr(i,j,P)
– |= Precondition => [request(i,j,P)]Q(i,j,P)

Slide-85

• reply(j,i,P):
– Precondition: i has requested j about P.
– |= (KjP /\ Q(i,j,P)) => [reply(j,i,P)]KiP
– |= (Kj~P /\ Q(i,j,P)) => [reply(j,i,P)]Ki~P
– Semantic rule: like do(i, revise P).
• More complex preconditions and postconditions are possible, depending on the application.
• Other types of speech acts can be formalized using additional deontic concepts and possibly different models of actions.

Slide-86

Conclusion:

• Review of basic modal logic.
• Brief introduction of a framework for formalizing aspects of agency:
– action, knowledge, belief, goal, preference,
– commitment, belief revision, communication.
• Research directions (agent theory only):
– reasoning with uncertain, incomplete and contradictory information;
– reasoning about resources and concurrent actions;
– identifying, defining and formalizing various problems of agent individuals, groups or societies.
– Ex: <do(G1, )>T /\ <do(G2, )>T => <do(G1 U G2, )>T, ...
