
[Lecture Notes in Computer Science] Advances in Artificial Intelligence, Volume 991

Measuring Agreement and Harmony in Multi-Agent Societies: A First Approach

Flávio M. de Oliveira

Instituto de Informática - PUCRS
Av. Ipiranga, 6681 - prédio 16 - sala 160
Bairro Ipiranga
90619-900 PORTO ALEGRE - RS
BRAZIL

fax: +55-51-339-1564

e-mail: [email protected]

Abstract

The existence of independent goals is a natural and often desirable characteristic of societies of autonomous agents. Depending on the application, some societies may be more tolerant of independence than others. Nevertheless, agents must have means to negotiate their goals gracefully and efficiently, and indeed we find in the literature several proposals of goal-negotiation strategies. At the present state of the art, there is a need for frameworks under which one can compare such strategies and study their influence on the social behavior of the agents. We present here some analytical tools to measure the negotiation characteristics of a society. The underlying notions in these tools are the ideas of agreeability and harmony. By agreeability we mean the ability of an agent to adopt the goals of another agent and/or induce its own goals on another agent. Harmony is the global agreeability of a society. We give mathematical definitions for these notions and illustrate how the definitions can be applied to the study of societies at various levels.

Keywords: Distributed AI, Multi-Agent Systems, Logic Programming

1. Introduction

The field of Distributed Artificial Intelligence is traditionally divided into two main approaches: distributed problem solving (DPS) and multi-agent systems (MAS) [BON 88, DEM 90, SIC 92]. In the first approach, there is some problem to solve, or a task to be executed, and the developer designs a system composed of multiple agents to accomplish it. In the second approach, there is a society of autonomous agents that organize themselves to solve the problem; the existence of the society is independent of any particular problem or task. In both cases, the notion of agent is fundamental: an agent can be defined as an entity capable of perceiving its environment and of executing actions that cause changes in the environment and/or in the agent's internal state.


In this paper, we focus on societies of rational, autonomous agents. An autonomous agent has (implicit or explicit) goals, which may not be the same as the goals of the other agents or of the society as a whole. A rational agent has some explicit representation of its goals, and chooses its actions according to them. The existence of independent goals is thus a natural and often desirable characteristic of societies of autonomous agents. Depending on the application, some societies may be more tolerant of independence than others; Galliers [GAL 90] pointed out the potential benefits of conflict for societies. Nevertheless, agents must have means to negotiate their goals gracefully and efficiently, and indeed we find in the literature several proposals of goal-negotiation strategies [WER 90, KHE 94]. At the present state of the art, there is a need for frameworks under which one can compare such strategies and study their influence on the social behavior of the agents. We present here some analytical tools to measure the negotiation characteristics of a society. The underlying notions in these tools are the ideas of agreeability and harmony. By agreeability we mean the ability of an agent to adopt the goals of another agent and/or induce its own goals on another agent. Harmony is the global agreeability of a society. By means of the mathematical apparatus of metric spaces, we give precise definitions for these notions and suggest applications of these definitions to the study of societies and of individual agents.

In this paper, we consider that agents have their goals represented as sets of first-order clauses (logic programs); we call such sets goal theories, in the sense that they are composed of basic goals (ground facts) and rules to derive secondary goals from the basic ones. Intuitively, the priority of a goal is inversely proportional to the number of steps needed to derive it. Societies are finite sets of agents. Strategies for goal negotiation may be defined locally (one for each agent) or globally (one for the society). In the first case, we represent strategies by functions from goal theories to goal theories; in the second, by functions from sets of goal theories to sets of goal theories. Section 2 gives some general definitions that establish the basis for organizing metric spaces of agents and societies. Section 3 presents the definitions of (dis)agreement and agreeability. In Section 4, we develop the notion of harmony. In Section 5 we discuss some directions for future work, especially the possibility of extending our approach to other types of goal languages.

2. Distance Between Clause Theories

The ideas described in this paper are variations on one central theme: how can we measure the "proximity" between the goal theories of agents in a society? Oliveira, Viccari and Coelho [OLI 94] presented a way of organizing taxonomies of attribute-value pairs into a metric space. Such a topological setting yields a consistent framework for the idea of proximity. Following this approach, we define here a distance (which we call model distance) for theories represented by sets of first-order Horn clauses [LLO 84]. Then, in the next sections, we develop the definitions of agreeability and harmony in terms of model distance. The first step is to define a metric for sequences of finite sets.


Definition. Let S be the set of all (finite and infinite) cumulative sequences of finite sets, i.e., sequences of the form S = { C1, C2, C3, ... }, where Ci ⊆ Ci+1 for all i. The distance d: S × S → [0,1] is defined by

d(S1,S2) = 0, if S1 = S2
d(S1,S2) = 1 - #(C1_n ∩ C2_n) / #(C1_n ∪ C2_n), if S1 ≠ S2

where n is the least index such that C1_n ≠ C2_n; C1_n is the n-th element of S1 and C2_n is the n-th element of S2; #C is the cardinality (number of elements) of C.

It can be shown that the set S with the distance d is a metric space, i.e., d satisfies the properties [LIP 65]:

(i) d(x,y) ≥ 0
(ii) d(x,y) = 0 implies x = y
(iii) d(x,y) = d(y,x)
(iv) d(x,z) ≤ d(x,y) + d(y,z)

The complete proof is somewhat involved, and is omitted here for reasons of space.
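As an illustration, the distance above can be computed mechanically for finite cumulative sequences. The sketch below is our own (names and representation are illustrative, not from the paper); sequences are lists of sets, and cumulativity (Ci ⊆ Ci+1) is assumed rather than checked:

```python
# Sequence distance d(S1,S2) from the definition above, for finite
# cumulative sequences represented as lists of sets.

def seq_distance(s1, s2):
    """1 - #(C1_n ∩ C2_n)/#(C1_n ∪ C2_n) at the first index n where the
    sequences differ; 0 if no differing index is found."""
    for c1, c2 in zip(s1, s2):
        if c1 != c2:
            return 1 - len(c1 & c2) / len(c1 | c2)
    return 0.0
```

For example, the sequences {a}, {a,b,c} and {a}, {a,b,d} first differ at their second elements, giving d = 1 - 2/4 = 0.5.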

Equipped with this general result, we now apply it to the case of clause theories - more precisely, to models of clause theories. The natural approach to comparing two sets of first-order Horn clauses would be to define a distance function between their meanings, which, in model-theoretic semantics, are given by their least Herbrand models [LLO 84]. Unfortunately, such models are often infinite; even when they are finite, it may be computationally expensive to build them. We adopt a compromise solution: to compare the (partial) process of computation of the models. As shown in Lloyd [LLO 84], the least Herbrand model of a clause theory is the least fixed point of the immediate-consequence mapping: given a clause theory P (or a logic program P), we define a mapping on interpretations T_P: I → I as:

T_P(I) = { A ∈ B(P) | A ← B1, B2, ..., Bn, n ≥ 0, is a ground instance of a clause in P, and B1, B2, ..., Bn ∈ I }

To compute the least Herbrand model of a clause theory P, we apply the T_P mapping iteratively, starting with the empty interpretation:

T_P^0({}) = T_P({})
T_P^1({}) = T_P(T_P^0({})) ∪ T_P^0({})
T_P^2({}) = T_P(T_P^1({})) ∪ T_P^1({})
...
T_P^n({}) = T_P(T_P^n-1({})) ∪ T_P^n-1({})

In T_P({}) we generate the facts in P (clauses of the form A ←); in T_P(T_P({})) we generate the consequences of the facts, and so on. The least Herbrand model of P is

M(P) = lim_{n→∞} T_P^n({}) = ∪_{n≥0} T_P^n({})

We can think of T_P^n({}) as a partial model of P, and of the whole process as a sequence of partial models S_P = {}, T_P^0({}), T_P^1({}), ..., T_P^n({}), ..., which is in fact an element of S as defined above. In other words:
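The iteration is easy to mimic for programs that have already been grounded. The sketch below uses an illustrative representation of our own (clauses as (head, body) pairs, with an empty body for facts) to build the cumulative sequence of partial models:

```python
# Immediate-consequence operator on ground programs: a clause is a pair
# (head, body), where body is a tuple of atoms (empty for facts).

def tp(program, interp):
    """One application of T_P: heads whose ground bodies hold in interp."""
    return {head for head, body in program if all(b in interp for b in body)}

def partial_models(program, n):
    """The cumulative sequence T_P^0({}), T_P^1({}), ..., T_P^n({})."""
    seq, current = [], set()
    for _ in range(n + 1):
        current = current | tp(program, current)  # T_P(T_P^k({})) ∪ T_P^k({})
        seq.append(frozenset(current))
    return seq
```

Step 0 yields the facts, step 1 adds their immediate consequences, and so on; once the sequence stabilizes, the current set is the least Herbrand model.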

Definition. Let L be a language of Horn clauses, and note that the sequences S_P, for P ∈ ℘(L), belong to S. We define the model distance d: ℘(L) × ℘(L) → [0,1] by

d(P1,P2) = 0, if P1 = P2
d(P1,P2) = 1 - #(T_P1^n({}) ∩ T_P2^n({})) / #(T_P1^n({}) ∪ T_P2^n({})), if P1 ≠ P2

where n is the least index such that T_P1^n({}) ≠ T_P2^n({}). (℘(L),d) is a metric space, since T_P is a function.

Example: Let us see a quite simple example of comparing concepts represented in logic programs. Consider the following three programs:

c1:

is_a(A,bird) :- atrib(A,has_wings,yes), atrib(A,has_feathers,yes).

atrib(tweety,has_wings,yes).
atrib(tweety,has_feathers,yes).
atrib(birdy,has_feathers,yes).

c2:

is_a(A,bird) :- atrib(A,flies,yes), atrib(A,has_feathers,yes).

atrib(A,flies,yes) :- atrib(A,has_wings,yes).

atrib(tweety,has_wings,yes).
atrib(tweety,has_feathers,yes).
atrib(birdy,has_feathers,yes).

c3:

is_a(A,bird) :- atrib(A,has_feathers,yes).

atrib(tweety,has_wings,yes).
atrib(tweety,has_feathers,yes).
atrib(birdy,has_feathers,yes).

is_a(I,C) means that I is a member (instance) of class C; atrib(I,A,V) means that instance I has value V for attribute A. The three programs are implementations of the concept of bird. All of them have finite models. The model of c1 is

{ atrib(tweety,has_wings,yes), atrib(tweety,has_feathers,yes), atrib(birdy,has_feathers,yes), is_a(tweety,bird) }

the model of c2 is

{ atrib(tweety,has_wings,yes), atrib(tweety,has_feathers,yes), atrib(birdy,has_feathers,yes), atrib(tweety,flies,yes), is_a(tweety,bird) }

the model of c3 is

{ atrib(tweety,has_wings,yes), atrib(tweety,has_feathers,yes), atrib(birdy,has_feathers,yes), is_a(tweety,bird), is_a(birdy,bird) }


The difference in c2 appears in the second step of application of T_P, when atrib(tweety,flies,yes) is generated by the corresponding rule, while is_a(tweety,bird) is generated by c1. Thus the distance between c1 and c2 is

d(c1,c2) = 1 - #(T_c1^n({}) ∩ T_c2^n({})) / #(T_c1^n({}) ∪ T_c2^n({}))
= 1 - #{ atrib(tweety,has_wings,yes), atrib(tweety,has_feathers,yes), atrib(birdy,has_feathers,yes) } / #(T_c1^n({}) ∪ T_c2^n({}))
= 1 - 3/5 = 0.4

The difference in c3 appears only in the third step of application of T_P, when is_a(birdy,bird) is generated, which is not generated by c1. Thus the distance between c1 and c3 is

d(c1,c3) = 1 - #(T_c1^n({}) ∩ T_c3^n({})) / #(T_c1^n({}) ∪ T_c3^n({}))
= 1 - #{ atrib(tweety,has_wings,yes), atrib(tweety,has_feathers,yes), atrib(birdy,has_feathers,yes), is_a(tweety,bird) } / #(T_c1^n({}) ∪ T_c3^n({}))
= 1 - 4/5 = 0.2
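Both distances can be checked mechanically. The sketch below re-encodes c1, c2 and c3 as hand-grounded programs (a simplified encoding of our own, with abbreviated atom names) and iterates T_P until the cumulative partial models first differ:

```python
def tp(program, interp):
    # One application of the immediate-consequence operator T_P.
    return {h for h, body in program if all(b in interp for b in body)}

def model_distance(p1, p2, max_n=10):
    """1 - #(intersection)/#(union) at the first step where the cumulative
    partial models differ; 0 if no difference appears within max_n steps."""
    m1, m2 = set(), set()
    for _ in range(max_n):
        m1, m2 = m1 | tp(p1, m1), m2 | tp(p2, m2)
        if m1 != m2:
            return 1 - len(m1 & m2) / len(m1 | m2)
    return 0.0

facts = {("wings(tweety)", ()), ("feathers(tweety)", ()), ("feathers(birdy)", ())}
c1 = facts | {("bird(tweety)", ("wings(tweety)", "feathers(tweety)"))}
c2 = facts | {("flies(tweety)", ("wings(tweety)",)),
              ("bird(tweety)", ("flies(tweety)", "feathers(tweety)"))}
c3 = facts | {("bird(tweety)", ("feathers(tweety)",)),
              ("bird(birdy)", ("feathers(birdy)",))}

print(round(model_distance(c1, c2), 3))  # 0.4
print(round(model_distance(c1, c3), 3))  # 0.2
```

The computation stops at the first differing partial model, exactly as the text prescribes, so the full models are never built.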

An algorithm for calculating d halts for theories with finite models and for theories with distinct infinite models, since in that case there exists n such that T_P1^n({}) ≠ T_P2^n({}). It does not halt, however, for theories with equal, infinite models. It is computationally less expensive than comparing the models directly, because it stops as soon as it finds a difference. In practice, however, there may be situations where n is too high. Implementations might consider some maximum value for n: by default, if no difference appears within n iterations, the theories are considered equivalent. The default is reviewed when some conflict arises.

3. Agreeability: the Agent-Level Case

Let us consider a simple society composed of two agents A1 and A2, with their goals represented as clause theories¹. We could say that two agents are "agreeable" if and only if the model distance d(A1,A2) = 0. However, this is too restrictive. For

¹ In this paper, we use the terms "clause theory" and "logic program" interchangeably.


example, the two agents below do not satisfy that condition, although their goal theories are logically equivalent:

A1:            A2:
a(t).          a(t).
b(X) :- a(X).  c(X) :- a(X).
c(X) :- b(X).  b(X) :- c(X).

There is, in fact, a conflict: both A1 and A2 have the goal b(t), but not for the same reasons (recall that the priority of a goal is inversely proportional to the number of steps needed to derive it). Nevertheless, a simple negotiation strategy might be able to detect the equivalence and solve the conflict. The point here is that we want to think of agreeability not as equality of goals, but rather as the ability to negotiate - which means that the goal theories are somehow compatible. Such an ability is what distinguishes a society from a chaotic set of isolated individuals. These considerations led us to formulate the following definition:

Definition. Let A be an agent equipped with a goal theory GA ∈ ℘(G), where G is a language of Horn clauses, called the goal language, and with a negotiation procedure f: ℘(G) × ℘(G) → ℘(G). We say that A is agreeable to some agent B if and only if the sequence

<GA_n> = GA, f_B(GA), f_B(f_B(GA)), ..., f_B^n(GA), ..., where f_B(GA) = f(GA,GB),

is convergent in (℘(G),d), i.e., there exists Q in ℘(G) such that lim_{n→∞} f_B^n(GA) = Q.

We call d the degree of disagreement between A and B. The idea here is that, if A needs to negotiate with B, it is able to do so by successive applications of f. If A is not agreeable to B, its negotiation procedure cannot guarantee a consensus, since it can be applied over and over again without ever reaching stability. Agreeability expresses an inclination of an agent; agreement is a position taken by it, an attitude. The degree of disagreement is a measure of the intensity of this attitude.

The definition of agreeability is individual, relative to a particular agent: A being agreeable to B does not imply that B is agreeable to A. Moreover, it is a property of a particular state of an agent in relation to another agent, also in a particular state, as is the distance between A and B. We can thus study the variation of these characteristics over time, and make comparisons among different negotiation strategies and/or goal theories. If we wish to implement systems with automatic control of agreeability, we can relax the definition: an agent A is agreeable to B if d(f_B^n(GA),GB) ≤ δ, where n and δ are pre-defined, application-dependent values.
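The relaxed, bounded form of agreeability is straightforward to automate. The sketch below is ours, not the paper's: goal theories are flattened to sets of ground goals, a Jaccard-style distance stands in for the model distance, and adopt_one is a deliberately naive, hypothetical negotiation procedure f that adopts one of B's goals per application:

```python
def distance(x, y):
    # Stand-in for the model distance, on flat goal sets.
    return 0.0 if x == y else 1 - len(x & y) / len(x | y)

def adopt_one(ga, gb):
    # Hypothetical negotiation procedure f: adopt one missing goal of B.
    missing = sorted(gb - ga)
    return ga | {missing[0]} if missing else ga

def agreeable(ga, gb, f, n=10, delta=0.25):
    """Relaxed definition: A is agreeable to B if d(f_B^n(GA), GB) <= delta."""
    for _ in range(n):
        ga = f(ga, gb)
    return distance(ga, gb) <= delta
```

With these choices, agreeable({'a'}, {'a','b','c'}, adopt_one) holds, since repeated application of f converges to GB; with a bound n too small relative to the disagreement, the check fails, as the definition intends.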

Of course, we could define "A and B are agreeable to each other if A is agreeable to B and vice-versa"; that sounds good for two agents, but what if we wish to extend it to n agents? We would have to say "a set of agents S = { A1, A2, ..., An } is agreeable if Ai is agreeable to Aj for every i,j ≤ n, i ≠ j". This is too restrictive and, even worse, much more expensive. In the next section, we discuss an alternative definition, developing the global notion of harmony.

4. Harmony: the Society-Level Case

In the preceding section, we presented an agent-level definition of agreeability. We will now define a corresponding notion at the society level, by treating the whole society as having a single goal theory, defined, at each application of T_P, as the intersection of the partial models of all goal theories in the society. Also, instead of considering a separate negotiation procedure for each agent, we consider that the society has a global negotiation strategy F. That would be the case, for example, of a special agent (a "moderator") responsible for managing disagreements among the members of the society. First, let us define a metric for societies:

Definition. Let 𝕊 be the set of all societies of agents S = { A1, A2, ..., Ak }², k ∈ ℕ. Let SA = { A1, A2, ..., Ap } and SB = { B1, B2, ..., Bq } be two societies in 𝕊. The model distance d: 𝕊 × 𝕊 → [0,1] is defined by

d(SA,SB) = 0, if SA = SB
d(SA,SB) = 1 - #(T_SA^n({}) ∩ T_SB^n({})) / #(T_SA^n({}) ∪ T_SB^n({})), if SA ≠ SB

where T_SA^n({}) = ∩_{i=1..p} T_Ai^n({}), T_SB^n({}) = ∩_{i=1..q} T_Bi^n({}), and n is the least index such that T_SA^n({}) ≠ T_SB^n({}). (𝕊,d) is a metric space; the proof is the same as in the single-agent case, mutatis mutandis.
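Encoding each agent's goal theory as a ground program (clauses as (head, body) pairs, an illustrative representation of our own), the society-level partial model T_S^n({}) is just the intersection of the members' partial models:

```python
def tp(program, interp):
    # One application of the immediate-consequence operator.
    return {h for h, body in program if all(b in interp for b in body)}

def agent_partial_model(program, n):
    """The cumulative partial model T_Ai^n({}) of a single agent."""
    m = set()
    for _ in range(n + 1):
        m = m | tp(program, m)
    return m

def society_partial_model(society, n):
    """T_S^n({}): the intersection of the members' partial models at step n."""
    models = [agent_partial_model(p, n) for p in society]
    out = models[0]
    for m in models[1:]:
        out &= m
    return out
```

The society-level model distance is then obtained exactly as in Section 2, applied to these intersected sequences.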

Now for the definition of harmony:

Definition. Let S = { A1, A2, ..., An } ∈ 𝕊 be a society equipped with a negotiation function F: ℘(G)^n → ℘(G)^n. We say S is harmonic if and only if the sequence

<S_n> = S, F(S), F(F(S)), ..., F^n(S), ...

is convergent in (𝕊,d).

² As there is no ambiguity, for simplicity of notation we denote the goal theory of an agent Ai by the same symbol Ai, and not by GAi as in the preceding section.


If F is a contraction (a contraction in a metric space (X,d) is a function f such that d(f(x),f(y)) ≤ c·d(x,y) for some constant c < 1 and all x and y in X [LIP 65]), then every sequence of the form <S_n> above is a Cauchy sequence. If the metric space (𝕊,d) is complete, then every contraction F has a unique fixed point, which is the limit of every sequence <S_n>. Thus, completeness of (𝕊,d) is an interesting safety condition for negotiation functions.
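The role of the contraction condition can be seen in the simplest complete metric space, the reals with d(x,y) = |x - y|: iterating a contraction produces a Cauchy sequence converging to the unique fixed point. A toy numeric illustration of our own, with halving as the contraction:

```python
def iterate(f, x, n):
    # Apply f to x n times, producing the n-th term of x, f(x), f(f(x)), ...
    for _ in range(n):
        x = f(x)
    return x

halve = lambda x: x / 2.0   # a contraction with factor 1/2; unique fixed point 0

# Iterates from any starting point approach the fixed point.
print(abs(iterate(halve, 1.0, 50)) < 1e-12)  # True
```

A negotiation function F that is a contraction on (𝕊,d) would, in the same way, drive any society toward a single stable configuration.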

With the notion of harmony, we characterize formally the ability of a society to deal with its internal conflicts. Like the agent-level definition of agreeability, harmony is relative to the state of the society. As in the agent-level case, we can study the behavior of different negotiation strategies by observing the variation of harmony in the society over time.

5. Future Work

The ideas presented here are applications of a general approach: using the mathematical apparatus of metric spaces to formalize the notion of approximation for theories, by means of convenient metrics. The resulting framework can be easily adapted to implementations with limited computational resources, by considering finite sub-sequences of <GA_n> or <S_n>. The representation chosen for goals - sets of Horn clauses - is simple and has a well-known denotational semantics, with little loss of generality. Our motivation in this case was our interest in studying the application of Inductive Logic Programming (ILP) techniques [LAV 94] to negotiation in multi-agent systems. Nevertheless, goal theories do have some limitations: for example, they are closed under logical implication, which may lead to some non-intuitive behavior, as Wainer remarks [WAI 94]. We are presently studying ways to extend our approach to other types of representations. All we need is a model-theoretic semantics for the representation, and an incremental definition of the models. The knowledge structures defined by Fagin, Halpern and Vardi [FAG 91] seem to be an interesting candidate for further investigation.

Acknowledgments

The ideas described in this paper were developed thanks to many fruitful and pleasant discussions with Dr. Antônio Carlos da Rocha Costa and Dr. Rosa Maria Viccari, from UFRGS (Brazil), and Dr. Helder Coelho, from INESC (Portugal). Their comments and criticisms were very important.

This work has financial support from CNPq.


References

[LAV 94] LAVRAC, N. Inductive Concept Learning Using Background Knowledge. In: Pequeno, T.; Carvalho, F. (eds.) Proceedings of the XI Brazilian Symposium on Artificial Intelligence. Fortaleza, Universidade Federal do Ceará, 1994. p. 1-16.

[BON 88] BOND, A.H.; GASSER, L. (eds.) Readings in Distributed Artificial Intelligence. San Mateo, California: Morgan Kaufmann, 1988.

[DEC 87] DECKER, K.S. Distributed Problem-Solving Techniques: a Survey. IEEE Transactions on Systems, Man and Cybernetics, 17(5):729-740, September/October 1987.

[DEM 90] DEMAZEAU, Y.; MULLER, J.P. (eds.) Decentralized Artificial Intelligence. Morgan Kaufmann, 1990.

[FAG 91] FAGIN, R.; HALPERN, J.Y.; VARDI, M.Y. A Model-Theoretic Analysis of Knowledge. Journal of the ACM, 38(2):382-428, April 1991.

[GAL 90] GALLIERS, J.R. The Positive Role of Conflict in Cooperative Multi-Agent Systems. In: DEMAZEAU, Y.; MULLER, J.P. (eds.) Decentralized Artificial Intelligence. Morgan Kaufmann, 1990.

[KHE 94] KHEDRO, T.; GENESERETH, M.R. Modeling Multiagent Cooperation as Distributed Constraint Satisfaction Problem Solving. In: Cohn, A. (ed.) Proceedings of the 11th European Conference on Artificial Intelligence. New York, J. Wiley & Sons, 1994. p. 249-253.

[LIP 65] LIPSCHUTZ, S. General Topology. McGraw-Hill, 1965.

[LLO 84] LLOYD, J.W. Foundations of Logic Programming. Berlin, Springer-Verlag, 1984.

[OLI 94] OLIVEIRA, F.M.; VICCARI, R.M.; COELHO, H. A Topological Approach to Equilibration of Concepts. In: Pequeno, T.; Carvalho, F. (eds.) Proceedings of the XI Brazilian Symposium on Artificial Intelligence. Fortaleza, Universidade Federal do Ceará, 1994. p. 527-523.

[SIC 92] SICHMAN, J.; DEMAZEAU, Y.; BOISSIER, O. When can Knowledge-based Systems be Called Agents? In: IX Simpósio Brasileiro de Inteligência Artificial, Rio de Janeiro, RJ, Oct. 1992. Proceedings. Rio de Janeiro, SBC, 1992.

[WAI 94] WAINER, J. Yet Another Semantics of Goals and Goal Priorities. In: Cohn, A. (ed.) Proceedings of the 11th European Conference on Artificial Intelligence. New York, J. Wiley & Sons, 1994. p. 269-273.

[WER 90] WERNER, E. Distributed Cooperation Algorithms. In: DEMAZEAU, Y.; MULLER, J.P. (eds.) Decentralized Artificial Intelligence. Morgan Kaufmann, 1990.