
Page 1: Introduction Agent Programming

Koen Hindriks Multi-Agent Systems

Introduction Agent Programming

Koen Hindriks, Delft University of Technology, The Netherlands

Learning to program teaches you how to think.

Computer science is a liberal art.

Steve Jobs

Page 2: Introduction Agent Programming


Outline

• Previous lecture, the last lecture on Prolog:
  – "Input & Output"
  – Negation as failure
  – Search

• Coming lectures:
  – Agents that use Prolog

• This lecture:
  – Agent introduction
  – "Hello World" example in the GOAL agent programming language

Page 3: Introduction Agent Programming


Agents: Act in environments

[Figure: the agent–environment loop. The agent chooses an action; percepts flow from the environment to the agent, and actions flow from the agent to the environment.]

Page 4: Introduction Agent Programming


Agents: Act to achieve goals

[Figure: the same loop. Percepts arrive at the agent as events, and actions are selected so as to achieve the agent's goals.]

Page 5: Introduction Agent Programming


Agents: Represent environment

[Figure: the same loop, now with the agent's internal state made explicit: events, beliefs, goals, plans and actions, where the beliefs represent the environment.]

Page 6: Introduction Agent Programming


Agent-Oriented Programming

• Agents provide a very effective way of building applications for dynamic and complex environments.

• Develop agents based on the Belief-Desire-Intention (BDI) agent metaphor, i.e. develop software components as if they have beliefs and goals, act to achieve these goals, and are able to interact with their environment and other agents.

Page 7: Introduction Agent Programming


A Brief History of AOP

• 1990: AGENT-0 (Shoham)
• 1993: PLACA (Thomas; AGENT-0 extension with plans)
• 1996: AgentSpeak(L) (Rao; inspired by PRS)
• 1996: Golog (Reiter, Levesque, Lesperance)
• 1997: 3APL (Hindriks et al.)
• 1998: ConGolog (Giacomo, Levesque, Lesperance)
• 2000: JACK (Busetta, Howden, Ronnquist, Hodgson)
• 2000: GOAL (Hindriks et al.)
• 2000: CLAIM (Amal El Fallah Seghrouchni)
• 2002: Jason (Bordini, Hubner; implementation of AgentSpeak)
• 2003: Jadex (Braubach, Pokahr, Lamersdorf)
• 2008: 2APL (successor of 3APL)

This overview is far from complete!

Page 8: Introduction Agent Programming


A Brief History of AOP

• AGENT-0: speech acts
• PLACA: plans
• AgentSpeak(L): events/intentions
• Golog: action theories, logical specification
• 3APL: practical reasoning rules
• JACK: capabilities, Java-based
• GOAL: declarative goals
• CLAIM: mobile agents (within the agent community)
• Jason: AgentSpeak + communication
• Jadex: JADE + BDI
• 2APL: modules, PG-rules, …

Page 9: Introduction Agent Programming


Outline

• Some of the more actively developed APLs:

– 2APL (Utrecht, Netherlands)

– Agent Factory (Dublin, Ireland)

– GOAL (Delft, Netherlands)

– Jason (Porto Alegre, Brazil)

– Jadex (Hamburg, Germany)

– JACK (Melbourne, Australia)

– JIAC (Berlin, Germany)

• References

Page 10: Introduction Agent Programming


2APL – Features

2APL is a rule-based language for programming BDI agents:
• actions: belief updates, send, adopt, drop, external actions
• beliefs: represent the agent's beliefs
• goals: represent what the agent wants
• plans: sequence, while, if-then
• PG-rules: goal handling rules
• PC-rules: event handling rules
• PR-rules: plan repair rules

Page 11: Introduction Agent Programming


2APL – Code Snippet

Beliefs: worker(w1), worker(w2), worker(w3)
Goals: findGold() and haveGold()
Plans: = { send( w3, play(explorer) ); }

Rules = {
  G( findGold() ) <- B( -gold(_) && worker(A) && -assigned(_, A) ) |      ← goal handling rule
  { send( A, play(explorer) );
    ModOwnBel( assigned(_, A) );
  },
  E( receive( A, gold(POS) ) ) | B( worker(A) ) ->                         ← event handling rule; E(…) is the explicit operator for events
  { ModOwnBel( gold(POS) );
  },
  E( receive( A, done(POS) ) ) | B( worker(A) ) ->
  { ModOwnBel( -assigned(POS, A), -gold(POS) );
  },
}

Modules can be used to combine and structure rules.

Page 12: Introduction Agent Programming


JACK – Features

The JACK Agent Language is built on top of and extends Java, and provides the following features:
• agents: used to define the overall behaviour of a multi-agent system
• beliefset: represents an agent's beliefs
• view: allows queries to be performed on belief sets
• capability: a reusable functional component made up of plans, events, belief sets and other capabilities
• plan: instructions the agent follows to try to achieve its goals and handle events
• event: an occurrence to which the agent should respond

Page 13: Introduction Agent Programming


JACK – Agent Template

agent AgentType extends Agent {

    // Knowledge bases used by the agent are declared here.
    #private data BeliefType belief_name(arg_list);

    // Events handled, posted and sent by the agent are declared here.
    #handles event EventType;
    #posts event EventType reference;    // used to create internal events
    #sends event EventType reference;    // used to send messages to other agents

    // Plans used by the agent are declared here. Order is important.
    #uses plan PlanType;

    // Capabilities that the agent has are declared here.
    #has capability CapabilityType reference;

    // other Data Member and Method definitions
}

Page 14: Introduction Agent Programming


Jason – Features

• beliefs: weak and strong negation, to support both closed-world and open-world assumptions
• belief annotations: label the information source, e.g. self, percept
• events: internal, messages, percepts
• a library of "internal actions", e.g. send
• user-defined internal actions: programmed in Java
• automatic handling of plan failures
• annotations on plan labels: used to select a plan
• speech-act based inter-agent communication
• Java-based customization: (plan) selection functions, trust functions, perception, belief revision, agent communication

Page 15: Introduction Agent Programming


Jason – Plans

[Figure: an example Jason plan, annotated with its three parts: the triggering event, the test on beliefs (context), and the plan body.]

Page 16: Introduction Agent Programming


Summary

Key language elements of APLs:

• beliefs and goals to represent environment

• events received from environment (& internal)

• actions to update beliefs, adopt goals, send messages, act in environment

• plans, capabilities & modules to structure action

• rules to select actions/plans/modules/capabilities

• support for multi-agent systems

Page 17: Introduction Agent Programming


How are these APLs related?

[Figure: family tree of agent programming languages, summarised below.]

• AGENT-0¹ and its extension (PLACA)
• Family of Prolog-based languages, sharing the basic concepts of beliefs, actions, plans and goals-to-do: AgentSpeak(L)¹, Jason², Golog, 3APL³
• Main addition: declarative goals (GOAL); 2APL ≈ 3APL + GOAL
• Java-based BDI languages: Agent Factory, JACK (commercial), Jadex, JIAC
• Mobile agents: CLAIM, AgentScape
• Multi-agent systems: all of these languages (except AGENT-0, PLACA and JACK) have versions implemented "on top of" JADE.

¹ mainly interesting from a historical point of view
² from a conceptual point of view, we identify AgentSpeak(L) and Jason
³ without practical reasoning rules

This is a comparison from a high-level, conceptual point of view, not taking into account any practical aspects (IDE, available docs, speed, applications, etc.).

Page 18: Introduction Agent Programming


References

Websites
• 2APL: http://www.cs.uu.nl/2apl/
• Agent Factory: http://www.agentfactory.com
• GOAL: http://mmi.tudelft.nl/trac/goal
• JACK: http://www.agent-software.com.au/products/jack/
• Jadex: http://jadex.informatik.uni-hamburg.de/
• Jason: http://jason.sourceforge.net/
• JIAC: http://www.jiac.de/

Books
• Bordini, R.H.; Dastani, M.; Dix, J.; El Fallah Seghrouchni, A. (Eds.), 2005. Multi-Agent Programming: Languages, Platforms and Applications. Presents 3APL, CLAIM, Jadex, Jason.
• Bordini, R.H.; Dastani, M.; Dix, J.; El Fallah Seghrouchni, A. (Eds.), 2009. Multi-Agent Programming: Languages, Tools and Applications. Presents, among others: Brahms, CArtAgO, GOAL, JIAC Agent Platform.

Page 19: Introduction Agent Programming


The GOAL Agent Programming Language

Page 20: Introduction Agent Programming


THE BLOCKS WORLD
The "Hello World" example of Agent Programming

Page 21: Introduction Agent Programming


The Blocks World

• The positioning of blocks on the table is not relevant.
• A block can be moved only if there is no other block on top of it.

Objective: Move the blocks in the initial state such that the result is the goal state.

A classic AI planning problem.

Page 22: Introduction Agent Programming


Representing the Blocks World

Basic predicate:
• on(X,Y).

Defined predicates:
• tower([X]) :- on(X,table).
  tower([X,Y|T]) :- on(X,Y), tower([Y|T]).
• clear(X) :- block(X), not(on(Y,X)).
• clear(table).
• block(X) :- on(X, _).

EXERCISE:

Prolog is the knowledge representation language used in GOAL.
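For instance, with the facts on(a,b), on(b,c), on(c,table) in the belief base, these clauses support Prolog queries such as the following (a minimal sketch; the exact output format depends on the Prolog engine used by GOAL):

  ?- tower([a,b,c]).      % a on b, b on c, c on the table
  true.

  ?- clear(X), block(X).  % which block has nothing on top of it?
  X = a.

  ?- clear(b).            % fails: on(a,b) holds, so b is not clear
  false.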

Page 23: Introduction Agent Programming


Representing the Initial State

Using the on(X,Y) predicate we can represent the initial state.

beliefs{ on(a,b), on(b,c), on(c,table), on(d,e), on(e,table), on(f,g), on(g,table).}

Initial belief base of agent

Page 24: Introduction Agent Programming


Representing the Blocks World

• What about the rules we defined before?
• Clauses that do not change are added to the knowledge base.

tower([X]) :- on(X,table).
tower([X,Y|T]) :- on(X,Y), tower([Y|T]).
clear(X) :- block(X), not(on(Y,X)).
clear(table).
block(X) :- on(X, _).

knowledge{
  block(X) :- on(X, _).
  clear(X) :- block(X), not(on(Y,X)).
  clear(table).
  tower([X]) :- on(X,table).
  tower([X,Y|T]) :- on(X,Y), tower([Y|T]).
}

Static knowledge base of agent

Page 25: Introduction Agent Programming


Why a Separate Knowledge Base?

• Concepts defined in knowledge base can be used in combination with both the belief and goal base.

• Example:
  – Since the agent believes on(e,table), on(d,e), infer: the agent believes tower([d,e]).
  – If the agent wants on(a,table), on(b,a), infer: the agent wants tower([b,a]).

• Knowledge base introduced to avoid duplicating clauses in belief and goal base.

Page 26: Introduction Agent Programming


Representing the Goal State

Using the on(X,Y) predicate we can represent the goal state.

goals{ on(a,e), on(b,table), on(c,table), on(d,c), on(e,b), on(f,d), on(g,table).}

Initial goal base of agent

Page 27: Introduction Agent Programming


One or Many Goals

In the goal base, using the comma or the period as separator makes a difference!

goals{ on(a,table), on(b,a), on(c,b).}

goals{ on(a,table). on(b,a). on(c,b).}

• The first goal base (comma-separated) contains a single conjunctive goal; the second (period-separated) contains three separate goals.
• Moving c on top of b (third goal), then c to the table, then a to the table (first goal), and finally b on top of a (second goal) achieves each of the three separate goals, but never the single conjunctive goal.
• The reason is that the goal base with three goals does not require block c to be on b, b to be on a, and a to be on the table at the same time (see the sketch below).
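A minimal check, using the goal(…) operator introduced on the next slides (goal(φ) succeeds if φ follows from a single goal in the goal base):

  goal(on(c,b)).                          % succeeds for both goal bases
  goal(on(a,table), on(b,a), on(c,b)).    % succeeds only for the single conjunctive goal;
                                          % no single goal of the three-goal base entails all three conjuncts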

Page 28: Introduction Agent Programming


Mental State of GOAL Agent

knowledge{ block(X) :- on(X, _). clear(X) :- block(X), not(on(Y,X)). clear(table). tower([X]) :- on(X,table). tower([X,Y|T]) :- on(X,Y), tower([Y|T]). }
beliefs{ on(a,b), on(b,c), on(c,table), on(d,e), on(e,table), on(f,g), on(g,table). }
goals{ on(a,e), on(b,table), on(c,table), on(d,c), on(e,b), on(f,d), on(g,table). }

The knowledge, belief, and goal sections together constitute the specification of the Mental State of a GOAL Agent.

Initial mental state of agent

Page 29: Introduction Agent Programming


Inspecting the Belief & Goal base

• The operator bel(φ) inspects the belief base.
• The operator goal(φ) inspects the goal base.
  – Here φ is a Prolog conjunction of literals.
• Examples:
  – bel(clear(a), not(on(a,c))).
  – goal(tower([a,b])).

Page 30: Introduction Agent Programming


Inspecting the Belief Base

• bel(φ) succeeds if φ follows from the belief base in combination with the knowledge base.
• Example:
  – bel(clear(a), not(on(a,c))) succeeds.
• The condition is evaluated as a Prolog query.

knowledge{ block(X) :- on(X, _). clear(X) :- block(X), not(on(Y,X)). clear(table). tower([X]) :- on(X,table). tower([X,Y|T]) :- on(X,Y), tower([Y|T]). }
beliefs{ on(a,b), on(b,c), on(c,table), on(d,e), on(e,table), on(f,g), on(g,table). }
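To see why the example succeeds, here is a sketch of how the Prolog query is resolved against this knowledge and belief base:

  ?- clear(a), not(on(a,c)).
  % clear(a): block(a) holds via on(a,b), and no on(Y,a) fact is believed, so clear(a) succeeds.
  % not(on(a,c)): on(a,c) cannot be derived, so the negation (as failure) succeeds.
  true.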

Page 31: Introduction Agent Programming


Inspecting the Belief Base

EXERCISE: Which of the following succeed?
1. bel(on(b,c), not(on(a,c))).
2. bel(on(X,table), on(Y,X), not(clear(Y))).
3. bel(tower([X,b,d])).

(Answer to 2: X=c, Y=b.)

knowledge{ block(X) :- on(X, _). clear(X) :- block(X), not(on(Y,X)). clear(table). tower([X]) :- on(X,table). tower([X,Y|T]) :- on(X,Y), tower([Y|T]). }
beliefs{ on(a,b), on(b,c), on(c,table), on(d,e), on(e,table), on(f,g), on(g,table). }
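Worked answers (my reading of the queries against the belief and knowledge base above, so treat it as a sketch): 1 succeeds, since on(b,c) is believed and on(a,c) is not derivable; 2 succeeds only with X=c, Y=b, since b is the only block on a table-based block that is itself not clear (a is on b); 3 fails, because tower([b,d]) would require on(b,d), which is not believed.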

Page 32: Introduction Agent Programming


Inspecting the Goal Base

• goal(φ) succeeds if φ follows from one of the goals in the goal base, in combination with the knowledge base.
• Example:
  – goal(clear(a)) succeeds,
  – but not goal(clear(a), clear(c)).

Use the goal(…) operator to inspect the goal base.

knowledge{ block(X) :- on(X, _). clear(X) :- block(X), not(on(Y,X)). clear(table). tower([X]) :- on(X,table). tower([X,Y|T]) :- on(X,Y), tower([Y|T]). }
goals{ on(a,e), on(b,table), on(c,table), on(d,c), on(e,b), on(f,d), on(g,table). }

Page 33: Introduction Agent Programming


Inspecting the Goal Base

EXERCISE: Which of the following succeed?
1. goal(on(b,table), not(on(d,c))).
2. goal(on(X,table), on(Y,X), clear(Y)).
3. goal(tower([d,X])).

knowledge{ block(X) :- on(X, _). clear(X) :- block(X), not(on(Y,X)). clear(table). tower([X]) :- on(X,table). tower([X,Y|T]) :- on(X,Y), tower([Y|T]). }
goals{ on(a,e), on(b,table), on(c,table), on(d,c), on(e,b), on(f,d), on(g,table). }
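Worked answers (again a sketch, evaluated against the single goal and the knowledge base above): 1 fails, because on(d,c) is part of the goal, so not(on(d,c)) fails; 2 fails, because the only table-based blocks with something on them are b and c, and neither e (with a on it) nor d (with f on it) is clear in the goal; 3 succeeds with X=c, since the goal contains on(d,c) and on(c,table).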

Page 34: Introduction Agent Programming


Negation and Beliefs

EXERCISE: not(bel(on(a,c))) = bel(not(on(a,c)))?

• Answer: Yes.
  – Because Prolog implements negation as failure.
  – If φ cannot be derived, then not(φ) can be derived.
  – We always have: not(bel(φ)) = bel(not(φ)).

knowledge{ block(X) :- on(X, _). clear(X) :- block(X), not(on(Y,X)). clear(table). tower([X]) :- on(X,table). tower([X,Y|T]) :- on(X,Y), tower([Y|T]). }
beliefs{ on(a,b), on(b,c), on(c,table), on(d,e), on(e,table), on(f,g), on(g,table). }

Page 35: Introduction Agent Programming


Negation and Goals

EXERCISE: not(goal(φ)) = goal(not(φ))?

• Answer: No.
• We have, for example, both goal(on(a,b)) and goal(not(on(a,b))): goal(on(a,b)) follows from the first goal, while on(a,b) cannot be derived from the second goal, so goal(not(on(a,b))) holds by negation as failure.

knowledge{ block(X) :- on(X, _). clear(X) :- block(X), not(on(Y,X)). clear(table). tower([X]) :- on(X,table). tower([X,Y|T]) :- on(X,Y), tower([Y|T]). }
goals{ on(a,b), on(b,table). on(a,c), on(c,table). }

Page 36: Introduction Agent Programming


Combining Beliefs and Goals

• Achievement goals:
  – a-goal(φ) = goal(φ), not(bel(φ))
• An agent only has an achievement goal φ if it does not believe that φ has been reached already.
• Goal achieved:
  – goal-a(φ) = goal(φ), bel(φ)
• A (sub)goal φ has been achieved if the agent believes φ.

Useful to combine the bel(…) and goal(…) operators.
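A small illustration, a sketch against the initial mental state shown earlier (beliefs on(a,b), on(b,c), …, on(g,table); a single goal containing on(b,table) and on(g,table)):

  a-goal(on(b,table)).   % succeeds: on(b,table) is wanted but not yet believed (b is still on c)
  goal-a(on(g,table)).   % succeeds: on(g,table) is part of the goal and already believed
  a-goal(on(g,table)).   % fails: the agent already believes on(g,table)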

Page 37: Introduction Agent Programming


Expressing BW Concepts

EXERCISE:
• Define: block X is misplaced.
• Solution: goal(tower([X|T])), not(bel(tower([X|T]))).
• But this means that saying that a block is misplaced is the same as saying that you have an achievement goal: a-goal(tower([X|T])).

It is possible to express key Blocks World concepts by means of the basic operators on mental states.
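In a GOAL program such a condition is typically given a name so that action rules stay readable. Macros are only covered in the next lecture, so the following is merely a sketch, assuming the #define macro construct of the GOAL Programming Guide:

  #define misplaced(X) a-goal(tower([X|T])).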

Page 38: Introduction Agent Programming


ACTION SPECIFICATIONS
Changing Blocks World Configurations

Page 39: Introduction Agent Programming


Actions Change the Environment…

move(a,d)

Page 40: Introduction Agent Programming


…and Require Updating Mental States

• To ensure adequate beliefs after performing an action, the belief base needs to be updated (and possibly the goal base).
  – Add effects to the belief base: insert on(a,d) after move(a,d).
  – Delete old beliefs: delete on(a,b) after move(a,d).

Page 41: Introduction Agent Programming


…and Require Updating Mental States

• If a goal has been (believed to be) completely achieved, the goal is removed from the goal base.
• It is not rational to have a goal you believe to be achieved.
• The default update implements a blind commitment strategy.

Before move(a,b):
beliefs{ on(a,table), on(b,table).}
goals{ on(a,b), on(b,table).}

After move(a,b):
beliefs{ on(a,b), on(b,table).}
goals{ }

Page 42: Introduction Agent Programming


Action Specifications

• Actions in GOAL have preconditions and postconditions.
• Executing an action in GOAL means:
  – Preconditions are conditions that need to be true:
    • check the preconditions on the belief base.
  – Postconditions (effects) are add/delete lists (STRIPS):
    • add positive literals in the postcondition;
    • delete negative literals in the postcondition.
• STRIPS-style specification:

move(X,Y){
  pre { clear(X), clear(Y), on(X,Z), not( on(X,Y) ) }
  post { not(on(X,Z)), on(X,Y) }
}

Page 43: Introduction Agent Programming


Action Specifications

move(X,Y){
  pre { clear(X), clear(Y), on(X,Z), not( on(X,Y) ) }
  post { not(on(X,Z)), on(X,Y) }
}

Example: move(a,b)
• Check: clear(a), clear(b), on(a,Z), not( on(a,b) )
• Remove: on(a,Z)
• Add: on(a,b)

Note: first remove, then add.


Page 44: Introduction Agent Programming


Action Specifications

move(X,Y){
  pre { clear(X), clear(Y), on(X,Z) }
  post { not(on(X,Z)), on(X,Y) }
}

Example: move(a,b)

Before: beliefs{ on(a,table), on(b,table).}
After:  beliefs{ on(b,table). on(a,b).}

Page 45: Introduction Agent Programming


Action Specifications

EXERCISE:

move(X,Y){
  pre { clear(X), clear(Y), on(X,Z) }
  post { not(on(X,Z)), on(X,Y) }
}

knowledge{ block(a), block(b), block(c), block(d), block(e), block(f), block(g), block(h), block(i). clear(X) :- block(X), not(on(Y,X)). clear(table). tower([X]) :- on(X,table). tower([X,Y|T]) :- on(X,Y), tower([Y|T]). }
beliefs{ on(a,b), on(b,c), on(c,table), on(d,e), on(e,table), on(f,g), on(g,table). }

1. Is it possible to perform move(a,b)?
2. Is it possible to perform move(a,d)?

Answers: 1. No: clear(b) fails, because on(a,b) is believed (with the earlier specification, not( on(a,b) ) fails as well). 2. Yes.

Page 46: Introduction Agent Programming


ACTION RULES
Selecting actions to perform

Page 47: Introduction Agent Programming


Agent-Oriented Programming

• How do humans choose and/or explain actions?

• Examples:
  – I believe it rains; so, I will take an umbrella with me.
  – I go to the video store because I want to rent I, Robot.
  – I don't believe buses run today, so I take the train.

• Use intuitive common sense concepts:

beliefs + goals => action

See Chapter 1 of the Programming Guide

Page 48: Introduction Agent Programming


Selecting Actions: Action Rules

• Action rules are used to define a strategy for action selection.
• Defining a strategy for the blocks world:
  – If a constructive move can be made, make it.
  – If a block is misplaced, move it to the table.
• What happens:
  – Check the condition, e.g. can a-goal(tower([X|T])) be derived given the current mental state of the agent?
  – If yes, then (potentially) select move(X,table).

program{
  if bel(tower([Y|T])), a-goal(tower([X,Y|T])) then move(X,Y).
  if a-goal(tower([X|T])) then move(X,table).
}
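Putting the pieces of this lecture together, a complete blocks-world agent might look roughly as follows. This is a sketch only: the knowledge, beliefs, goals, program and action-specification sections are the ones shown on the previous slides, but the outer wrapper (here written as main: blocksWorldAgent { … }) and the actionspec section name vary between GOAL versions, so treat those as assumptions:

  main: blocksWorldAgent {
    knowledge{
      block(X) :- on(X, _).
      clear(X) :- block(X), not(on(Y,X)).
      clear(table).
      tower([X]) :- on(X,table).
      tower([X,Y|T]) :- on(X,Y), tower([Y|T]).
    }
    beliefs{
      on(a,b), on(b,c), on(c,table), on(d,e), on(e,table), on(f,g), on(g,table).
    }
    goals{
      on(a,e), on(b,table), on(c,table), on(d,c), on(e,b), on(f,d), on(g,table).
    }
    program{
      if bel(tower([Y|T])), a-goal(tower([X,Y|T])) then move(X,Y).
      if a-goal(tower([X|T])) then move(X,table).
    }
    actionspec{
      move(X,Y) {
        pre { clear(X), clear(Y), on(X,Z) }
        post { not(on(X,Z)), on(X,Y) }
      }
    }
  }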

Page 49: Introduction Agent Programming


Order of Action Rules

• Action rules are executed by default in linear order.
  – The first rule that fires is executed.
• The default order can be changed to random.
  – An arbitrary rule that is able to fire may then be selected.

program{
  if bel(tower([Y|T])), a-goal(tower([X,Y|T])) then move(X,Y).
  if a-goal(tower([X|T])) then move(X,table).
}

program[order=random]{
  if bel(tower([Y|T])), a-goal(tower([X,Y|T])) then move(X,Y).
  if a-goal(tower([X|T])) then move(X,table).
}

Page 50: Introduction Agent Programming


Example Program: Action Rules

An agent program may allow for multiple action choices; with order=random, a random, arbitrary choice is made among them.

[Figure: blocks-world state in which several actions are enabled, e.g. moving d to the table.]

program[order=random]{
  if bel(tower([Y|T])), a-goal(tower([X,Y|T])) then move(X,Y).
  if a-goal(tower([X|T])) then move(X,table).
}

Page 51: Introduction Agent Programming


The Sussman Anomaly (1/5)

• Non-interleaved planners typically separate the main goal, on(A,B),on(B,C) into 2 sub-goals: on(A,B) and on(B,C).

• Planning for these two sub-goals separately and combining the plans found does not work in this case, however.

[Figure: initial state with c on a, and a and b on the table; goal state with a on b, b on c, and c on the table.]

Page 52: Introduction Agent Programming


The Sussman Anomaly (2/5)

• Initially, all blocks are misplaced.
• One constructive move can be made (c to the table).
• Note: move(b,c) is not enabled.
• Only action enabled: c to the table (2x).

Need to check the conditions of the action rules:
  if bel(tower([Y|T])), a-goal(tower([X,Y|T])) then move(X,Y).
  if a-goal(tower([X|T])) then move(X,table).

We have bel(tower([c,a])) and a-goal(tower([c])).

[Figure: initial state (c on a; a and b on the table) and goal state (a on b, b on c, c on the table).]

Page 53: Introduction Agent Programming


The Sussman Anomaly (3/5)

• The only constructive move enabled is:
  – move b onto c.

Need to check the conditions of the action rules:
  if bel(tower([Y|T])), a-goal(tower([X,Y|T])) then move(X,Y).
  if a-goal(tower([X|T])) then move(X,table).

Note that we have a-goal(on(a,b), on(b,c), on(c,table)), but not a-goal(tower([c])).

[Figure: current state (a, b and c all on the table) and goal state (a on b, b on c, c on the table).]

Page 54: Introduction Agent Programming


The Sussman Anomaly (4/5)

• Again, only one constructive move is enabled:
  – move a onto b.

Need to check the conditions of the action rules:
  if bel(tower([Y|T])), a-goal(tower([X,Y|T])) then move(X,Y).
  if a-goal(tower([X|T])) then move(X,table).

Note that we have a-goal(on(a,b), on(b,c), on(c,table)), but not a-goal(tower([b,c])).

[Figure: current state (b on c, c on the table; a on the table) and goal state (a on b, b on c, c on the table).]

Page 55: Introduction Agent Programming


The Sussman Anomaly (5/5)

• Upon achieving a goal completely, that goal is automatically removed.
• The idea is that no resources should be wasted on achieving a goal that already holds.

In our case, goal(on(a,b), on(b,c), on(c,table)) has been achieved and is dropped. The agent has no other goals and is done.

[Figure: the current state now equals the goal state (a on b, b on c, c on the table).]

Page 56: Introduction Agent Programming


Organisation

• Read the Programming Guide, Ch. 1-3 (+ User Manual).
• Tutorial:
  – Download GOAL: see http://ii.tudelft.nl/trac/goal (v4537)
  – Practice exercises from the Programming Guide
  – BW4T assignments 3 and 4 available
• Next lecture:
  – Sensing, perception, environments
  – Other types of rules & macros
  – Agent architectures