
Evaluating and Comparing Agent-Oriented Software

Engineering Methodologies

A Minor thesis submitted in partial fulfillment of the requirements for the degree of

Master of Applied Science in Information Technology.

By Khanh Hoa Dam

School of Computer Science and Information Technology,

RMIT University, Australia

June 27, 2003

Declaration

I certify that all work on this thesis was carried out between November 2002 and June 2003

and it has not been submitted for any other academic award at any other college, institute or

university. The work presented was carried out under the supervision of Dr. Michael Winikoff and

Associate Professor Lin Padgham, who proposed the topic. All other work in the thesis is my

own except where acknowledged in the text.

Preliminary versions of some results or discussions in this thesis have been previously published.

Chapters 2, 3, and 4 contain material that appeared in the article “Comparing Agent-Oriented

Methodologies” [25]. This paper will appear at the Fifth International Bi-Conference Workshop

on Agent-Oriented Information Systems (AOIS-2003), which will be held in Melbourne, Australia

in July. This paper discusses the comparison of three agent-oriented methodologies (MaSE,

Prometheus, and Tropos) based on the results of the survey and the case study.

Signed

Khanh Hoa Dam,

June 27, 2003


Acknowledgments

I would like to thank my first supervisor, Dr. Michael Winikoff, for his patient guidance and

support throughout this research and the writing of this thesis. His experience and insight were a steady source of inspiration, stimulating my enthusiasm for Agent-Oriented Software Engineering.

I am also very grateful to Associate Professor Lin Padgham, the second supervisor of this thesis, for her invaluable comments and suggestions along the way.

I would like to thank Hiten Bharatbhai Ravani, Robert Tanaman, Sindawati Hoetomo, Sheilina

Geerdharry, and Yenty Frily for their work over the summer with the different methodologies.

Finally, I thank Alexander Moses Rosenberg for proof-reading many drafts of this thesis.

Contents

Declaration i

Acknowledgments ii

Contents vi

List of Figures viii

List of Tables ix

Abstract x

1 Introduction 1

1.1 Goal of Thesis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2

1.2 Outline of Thesis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3

2 Background 5

2.1 Agent-based computing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5

2.1.1 Intelligent Agents . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6

2.1.2 The Personal Itinerary Planner System . . . . . . . . . . . . . . . . . . . 8

2.2 Agent-Oriented Software Engineering . . . . . . . . . . . . . . . . . . . . . . . . 10



2.3 Agent-Oriented Methodologies . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13

2.3.1 Gaia . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16

2.3.2 Multiagent Systems Engineering (MaSE) . . . . . . . . . . . . . . . . . . 21

2.3.3 MESSAGE . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29

2.3.4 Prometheus . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35

2.3.5 Tropos . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41

2.4 Software Engineering Methodology Evaluation . . . . . . . . . . . . . . . . . . . 47

2.4.1 Methods for Evaluating Methodologies . . . . . . . . . . . . . . . . . . . 48

2.4.2 Comparisons of Object-Oriented Methodologies . . . . . . . . . . . . . . 55

2.4.3 Comparisons of Agent-Oriented Methodologies . . . . . . . . . . . . . . . 58

3 Proposed Evaluation Approach 59

3.1 The Evaluation Methods . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 59

3.1.1 The Purpose of the Evaluation . . . . . . . . . . . . . . . . . . . . . . . . 60

3.1.2 The Evaluation Type and Procedure . . . . . . . . . . . . . . . . . . . . 61

3.2 The Evaluation Framework . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 64

3.2.1 Concepts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65

3.2.2 Modelling Language . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 71

3.2.3 Process . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 75

3.2.4 Pragmatics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 78

4 Evaluation Results 81

4.1 Results of the Survey . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 81

4.1.1 Concepts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 82


4.1.2 Modelling Language . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 86

4.1.3 Process . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 90

4.1.4 Pragmatics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 93

4.2 Results of the Case Study . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 97

4.2.1 Gaia . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 98

4.2.2 MaSE . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 99

4.2.3 MESSAGE . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 100

4.2.4 Prometheus . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 102

4.2.5 Tropos . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 104

4.3 Structural Analysis - The Commonalities . . . . . . . . . . . . . . . . . . . . . . 105

4.3.1 Capturing Goals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 105

4.3.2 The Role of Use Cases in Requirements Analysis . . . . . . . . . . . . . . 106

4.3.3 Roles/Capabilities/Functionalities . . . . . . . . . . . . . . . . . . . . . . 107

4.3.4 Social System - Static Structure and Dynamics . . . . . . . . . . . . . . . 107

4.3.5 Individual Agent - Static Structure and Dynamics . . . . . . . . . . . . . 109

4.3.6 Agent Acquaintance Model . . . . . . . . . . . . . . . . . . . . . . . . . . 110

4.4 Structural Analysis - The Differences . . . . . . . . . . . . . . . . . . . . . . . . 110

4.4.1 Early Requirements in Tropos . . . . . . . . . . . . . . . . . . . . . . . . . 110

4.4.2 Environmental Model in MESSAGE and Prometheus . . . . . . . . . . . 111

4.4.3 Deployment Model in MaSE . . . . . . . . . . . . . . . . . . . . . . . . . 112

4.4.4 Data Coupling Diagram in Prometheus . . . . . . . . . . . . . . . . . . . 113

4.5 Towards a Mature and Complete AOSE Methodology . . . . . . . . . . . . . . . 114

4.5.1 Requirements Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 116


4.5.2 Architecture Design . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 117

4.5.3 Detailed Design . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 117

4.5.4 Implementation and Testing/Debugging . . . . . . . . . . . . . . . . . . . 118

5 Conclusion 119

5.1 Future Research . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 122

5.1.1 Evaluated Methodologies . . . . . . . . . . . . . . . . . . . . . . . . . . . 122

5.1.2 Evaluation Criteria . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 123

5.1.3 Quantitative Evaluation . . . . . . . . . . . . . . . . . . . . . . . . . . . . 123

A Questionnaire 124

B Students’ Reviews 134

List of Figures

2.1 The Personal Itinerary Planner System . . . . . . . . . . . . . . . . . . . . . . . . 9

2.2 Relationship between Gaia’s models . . . . . . . . . . . . . . . . . . . . . . . . . 16

2.3 The QueryInterestManager and replyRetrieveInterest Protocol Definitions . 18

2.4 Role schema for role ItineraryGenerator . . . . . . . . . . . . . . . . . . . . . . . 19

2.5 Service Model for Itinerary Agent . . . . . . . . . . . . . . . . . . . . . . . . . . . 20

2.6 MaSE’s process steps and artifacts . . . . . . . . . . . . . . . . . . . . . . . . . . 22

2.7 PIPS Role diagram . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25

2.8 The Concurrent Task Diagram for handling user queries . . . . . . . . . . . . . . 26

2.9 Agent Class Diagram . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27

2.10 SolutionProvider Agent Architecture . . . . . . . . . . . . . . . . . . . . . . . . . 28

2.11 Deployment Diagram for PIPS . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29

2.12 The workflow of MESSAGE . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30

2.13 Organisation view of PIPS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31

2.14 Agent/Role Schema for Personal Itinerary Planner System (PIPS) Assistant . . . 33

2.15 The Interaction Diagram for the Itinerary Assistant (IA) Gatherer role and the

PIPS Assistant role . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33

2.16 Prometheus Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36



2.17 Functionality Descriptor for Itinerary Planning . . . . . . . . . . . . . . . . . . . 37

2.18 Data Coupling Diagram . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38

2.19 PIPS System Overview Diagram . . . . . . . . . . . . . . . . . . . . . . . . . . . 39

2.20 Agent Overview Diagram for Itinerary Agent . . . . . . . . . . . . . . . . . . . . 40

2.21 Goal Diagram – PIPS in connection with Tourism Commission . . . . . . . . . . 42

2.22 Actor Diagram for the PIPS architecture . . . . . . . . . . . . . . . . . . . . . . . 45

2.23 Capability diagram represents for the Search Activity capability . . . . . . . . . 46

3.1 Evaluation framework at a high level view . . . . . . . . . . . . . . . . . . . . . . 65

4.1 Agent Class Diagram automatically generated by agentTool . . . . . . . . . . . . 100

List of Tables

2.1 Agent-Oriented Software Engineering methodologies . . . . . . . . . . . . . . . . 15

4.1 Comparing methodology’s concepts . . . . . . . . . . . . . . . . . . . . . . . . . . 83

4.2 Comparing methodology’s modelling language . . . . . . . . . . . . . . . . . . . . 87

4.3 Comparing methodology’s process . . . . . . . . . . . . . . . . . . . . . . . . . . 91

4.4 Comparing methodology’s pragmatics . . . . . . . . . . . . . . . . . . . . . . . . 94

4.5 The basic features of a “next-generation” agent-oriented methodology . . . . . . 115


Abstract

Agent technology has evolved rapidly over the past few years along with a growing number of

agent languages, architectures, and theories proposed in the literature. It is arguably regarded

by the computer science and software engineering communities as one of the most active and

important areas of research and development to have emerged in information technology since

the 1990s. Agent technology has numerous applications in a wide variety of domains such as

air traffic control, space exploration, information management, business process management,

e-commerce, holonic manufacturing, and defence simulation.

Along with the growing interest in agent applications, there has been an increasing number of

agent-oriented software engineering methodologies proposed in recent years. These methodolo-

gies are developed and specifically tailored to the characteristics of agents. The role of these methodologies is to provide methods, models, techniques, and tools so that the development

of agent-based systems can be carried out in a formal and systematic way. Despite the large

number of agent-oriented methodologies, a complete and mature agent-oriented methodology for

developing agent systems is lacking. One of the steps towards fulfilling this demand is to unify

the work of different existing methodologies, similar to the development of the Unified Modelling

Language in the area of object-oriented analysis and design.

The goal of this thesis is to understand the relationship between five key agent-oriented

methodologies: Gaia, MaSE, MESSAGE, Prometheus and Tropos. More specifically,

we evaluate and compare these five methodologies by performing a feature analysis, assessing the strengths and weaknesses of each participating methodology

using an attribute-based evaluation framework. This evaluation framework addresses four major

areas of an agent-oriented methodology: concepts, modelling language, process and pragmatics.


The objectivity of the comparison is increased by including inputs from the authors of the

methodologies using a questionnaire and by conducting a small experimental evaluation

of the methodologies. The comparative study also goes further with a structural analysis

where the key commonalities and distinguishing differences of the five selected methodologies are

identified in terms of models, techniques, and tools. We also make some preliminary suggestions

regarding a unification of these agent-oriented methodologies by combining their strong points

and avoiding their limitations.

Chapter 1

Introduction

Since the 1980s, agent technology has attracted an increasing amount of interest from the re-

search and business communities [52, 77]. In particular, the last decade has witnessed a steadily

increasing number of different agent theories, architectures, and languages proposed in the liter-

ature. This increasing interest in agent technology is mainly due to its potential to significantly

improve the development of high-quality and complex systems [53]. Indeed, there have been

numerous agent-based applications in a wide variety of domains such as air traffic control, space

exploration, information management, business process management, e-commerce, holonic man-

ufacturing, and defence simulation [54, 77].

Despite its popularity and attractiveness as a research area, agent technology still faces many

challenges in being adopted by industry and possibly taking over from object technology

as the dominant software development technology [83]. A key area of research is Software

Engineering methodology:

“One of the most fundamental obstacles to large-scale take-up of agent technology

is the lack of mature software development methodologies for agent-based systems.”

[77, page 11].

Indeed, the development of industrial-strength applications requires the availability of software

engineering methodologies. These methodologies typically consist of a set of methods, models,

and techniques that facilitate a systematic software development process, resulting in increased



quality of the software product.

Although many Agent Oriented Software Engineering (AOSE) methodologies have been pro-

posed, few are mature or described in sufficient detail to be of real use. None of them is in fact

complete (in the sense of covering all of the necessary activities involved in software engineering)

or able to fully support the industrial needs of agent-based system development.

In addition, although a large range of agent-oriented methodologies are available, there is a

lack of appropriate evaluation and comparison of the existing methodologies. Several

approaches have been applied to review and classify a large range of agent-oriented methodologies

or perform comparisons on a small number of methodologies. Unfortunately, such evaluations

or comparisons are mostly subjective and are solely based on inputs from a single assessor (or

group of assessors). Furthermore, numerous key issues relating to software engineering generally

and the agent-oriented paradigm specifically are not addressed in those studies.

We believe that the area of agent-oriented methodologies is growing rapidly and that the time

has come to begin drawing together the work from various research groups with the aim of

developing the “next generation” of agent-oriented software engineering methodologies.

A crucial step is to understand the relationship between various key methodologies, including

each methodology’s strengths, weaknesses, and domain of applicability. An important part of

this step is also identifying the key commonalities and differences among the existing agent-

oriented methodologies. By doing so, we may contribute towards building a unified approach to

agent-oriented software development.

1.1 Goal of Thesis

The main goal of this thesis is to perform a systematic and comprehensive evaluation of several

prominent agent-oriented methodologies. Such an evaluation would ideally be carried out using a

complete framework via different evaluation methods such as feature analysis, survey, case study,

etc. However, in practice, it is difficult to obtain a complete evaluation framework. Therefore,

we try to construct a framework that is complete in the sense that it can be used to fulfill our

purposes. It is also important that the evaluation is as objective as possible. This means that


the results of the evaluation reflect a wide range of viewpoints, including the author of this thesis

and his supervisors, the authors of the methodologies, and users of methodologies.

The subgoals of this thesis are:

• Establishing a generic evaluation framework that can be applied to perform a compar-

ative analysis of any agent-oriented software engineering methodology. This evaluation

framework should address four major areas of an agent-oriented methodology: concepts, modelling language, process and pragmatics.

• Using this framework and following various evaluation procedures (e.g. survey and case

study) to carry out a feature analysis. The purpose is to identify the key strengths and

weaknesses of selected prominent agent-oriented methodologies.

• Identifying the similarities and differences among selected agent-oriented methodologies

using a structural analysis of their process and models (i.e. structure).

• Proposing a preliminary unification of the evaluated agent-oriented methodologies by com-

bining their strong points as well as providing recommendations to address their current

limitations.

1.2 Outline of Thesis

The remainder of this thesis is structured as follows:

• Chapter 2: “Background” provides a theoretical background for the research and a litera-

ture review of relevant recent work.

• Chapter 3: “Proposed Evaluation Approach” provides a description of the evaluation

methods that are used and the evaluation framework that is employed to perform the

evaluation.

• Chapter 4: “Evaluation Results” presents, discusses and analyses the results of the evalua-

tion as well as describes a preliminary unification of the participating agent-oriented software

engineering methodologies.


• Chapter 5: “Conclusion” presents the conclusions drawn from the research and raises

suggestions for future research.

Chapter 2

Background

The main purpose of this chapter is to provide some insight into the problem which this research

attempts to tackle. This insight results mainly from a literature review of two areas: agent-

oriented methodologies and the existing techniques used for comparing software engineering

methodologies. Hence, the first part of this chapter provides some background on the agent-

oriented approach, including an introduction to agents in Section 2.1. A description of a simple

case study agent-based application is also presented in this section. In Section 2.2, we highlight

the trend of software engineering paradigms and the current position of the agent-oriented

paradigm in that movement. We also provide (in Section 2.3) a description of the five agent-

oriented software engineering methodologies which we chose to evaluate in this research.

The second part of this chapter (Section 2.4) presents a literature review of techniques and

methods for comparing and evaluating software engineering methodologies.

2.1 Agent-based computing

In this section, we introduce the basic ideas of intelligent agents at a fairly high level of ab-

straction (Section 2.1.1). We also touch briefly on the comparison between agents and objects.

Additionally, a description of a case study agent-oriented information system is presented (Sec-

tion 2.1.2).



2.1.1 Intelligent Agents

“What’s an agent anyway?” [37]. This question was first asked in [37], and many answers have

been proposed to date. According to the Oxford Dictionary, an agent is “one who is authorised

to act for or in the place of another”. However, among the Artificial Intelligence community

the concepts associated with agents, also called software agents or intelligent agents, have been

discussed for many years. Even though there are different perspectives on agents [5, 40], there

is an emerging agreement that an agent is “an encapsulated computer system, situated in some

environment, and capable of flexible autonomous action in that environment in order to meet its

design objectives” [115, page 67]. Examples of software agents vary widely, from robots playing soccer at RoboCup to intelligent shopping agents that help travellers find airfares or holiday

bargains. The applications of agent technologies reside in a wide range of domains, including

air traffic control, space exploration, information management, business process management,

e-commerce, holonic manufacturing, and defence simulation [77].

Most agent definitions tend to fall into one of two categories: strong agency and

weak agency [36, 40, 82, 115]. Weak agency definitions describe agents with the following

characteristics [115]:

• Situatedness: Agents are embedded in an environment. They use their sensors to perceive

the environment and affect it via their effectors. For example, a robot soccer player is an

agent situated in a soccer field. It has different sensors, such as cameras, for keeping track of

where the ball and other players are. It also uses several effectors such as its legs or its

body for kicking or passing the ball.

• Autonomy: Agents can operate and make their own decisions about which actions they should

take, independent of humans or other agents. Hence, agents cannot be directly invoked

like objects. Regarding our robot soccer player example, autonomy can be reflected by the

fact that when in possession of the ball, a robot agent is able to decide whether

to kick the ball towards the goal or to pass the ball to its teammates. These decisions are

made without direct intervention of humans or other robot soccer agents on the field.

• Reactivity: Agents can perceive their environment and respond in a timely fashion to


changes that occur in it. For instance, when our robot soccer player sees the ball within

its control area, it needs to quickly respond to that event by either passing the ball or

doing something else with the ball.

• Pro-activeness: Agents are pro-active if they have goals that they pursue over time. The

ultimate goal of robot soccer players is to win the game by scoring goals and preventing

the other team from scoring goals. As a result, most of their actions such as passing the

ball to their teammates, kicking for goals, etc. need to contribute toward the achievement

of these main purposes.

• Social ability: Agents are able to interact with other agents and humans in order to achieve

their goals. Relating to our example, social ability requires each robot soccer agent to be

able to communicate and coordinate with its teammates.
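To make these properties concrete, the robot soccer player can be sketched as a simple sense-decide-act loop. The following Python fragment is purely illustrative; the class and all names in it (RobotSoccerAgent, perceive, decide, and so on) are our own invention and are not drawn from any of the methodologies surveyed in this thesis:

```python
class RobotSoccerAgent:
    """Illustrative sketch of the weak-agency properties (hypothetical names)."""

    def __init__(self, goal="score goals"):
        self.goal = goal  # pro-activeness: a goal pursued over time

    def perceive(self, field):
        # situatedness: the agent senses its environment (the field)
        return {"has_ball": field.get("ball_owner") == "me",
                "teammate_open": field.get("teammate_open", False)}

    def decide(self, percepts):
        # autonomy: the agent chooses its own action; it is not invoked like an object
        if not percepts["has_ball"]:
            return "move_towards_ball"
        if percepts["teammate_open"]:
            return "pass_ball"      # social ability: coordinate with a teammate
        return "kick_at_goal"       # act towards the persistent goal

    def step(self, field):
        # reactivity: each cycle responds to the current state of the environment
        return self.decide(self.perceive(field))

agent = RobotSoccerAgent()
print(agent.step({"ball_owner": "me", "teammate_open": True}))  # pass_ball
```

Note that the caller never tells the agent which action to take; it only presents the current state of the environment, and the agent selects its own action on each cycle.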

Strong agency is defined by a similar set of attributes to those of weak agency but the notion

is extended, e.g. viewing agents as having mentalistic and intentional notions such as beliefs,

desires, goals, intentions and commitments [82]. Additionally, several other features such as

mobility (i.e. agents can move around), veracity (i.e. agents are truthful), and benevolence (i.e. agents do what they are told to do) are sometimes attributed to strong agency.

The above properties of agents make them flexible and robust entities. Flexibility results from

the ability of agents to react to changes and to adapt to the environment by using their social

skills (i.e. ability to negotiate with other agents, and ability to take advantage of opportunities).

Meanwhile, the capability of exercising choice over their actions and interactions (i.e. autonomy)

and pursuing goals (i.e. pro-activeness) improves the agents’ robustness. These qualities, flexibil-

ity and robustness, are very useful in complex, dynamic, open and failure-prone environments.

With the explosion of distributed information systems, including the Internet, such environments are becoming increasingly common. As a result, the need for technologies whose key entities have the intelligent characteristics of agents has been increasing.

On the other hand, objects, the fundamental entities in the currently dominant approach of object-oriented software engineering, cannot offer the same properties as agents do. In fact, the

notions of autonomy, flexibility, and pro-activeness can hardly be found in traditional object-

oriented approaches [83]. Although agents with these characteristics can be implemented using


object-oriented techniques, the standard object-programming model in fact does not specify how

to build systems that integrate these types of behaviour. Hence, those differences are significant

as far as designing and building agent-based systems are concerned [118, pages 22-23].

2.1.2 The Personal Itinerary Planner System

The concept of agents leads to emerging types of applications, which are called agent-based

systems or agent-oriented information systems. These systems often do not have a single au-

tonomous agent. Rather, they comprise many homogeneous or heterogeneous agents.

Those agents may be different in terms of type, working environment or programming platform.

Hence agent-based systems are also sometimes referred to as multiagent systems [85].

The robot soccer player which we mentioned in the previous section is only a simple example

of a software agent. In this section, we present another agent-based application which we have

designed and which formed the basis for the case study component of our research. It illustrates

some basic characteristics of a real application that may suit the properties of agents as discussed.

In addition, our experimental evaluation, which is discussed in a later section, involves the design of this application. As a result, most of the example models provided in Section 2.3 are also based on this application. Therefore, we present it here.

The Personal Itinerary Planner System (PIPS) is designed to assist a traveller or visitor

in finding activities (see Figure 2.1). A typical scenario of the use of PIPS is that a traveller

is visiting an unfamiliar city. The visitor wants to have some fun by spending some time on

a weekend or evening to go sightseeing. He/she then accesses PIPS. After telling PIPS where

he/she is, when he/she is looking for things to do, and what his/her interests are, the application

responds with a number of itineraries.

There are several constraints relating to an itinerary. Firstly, it must include one or more

activities such as visiting a local attraction (e.g. museum, zoo, aquarium, etc.), dining at a

restaurant or going to a show or a sporting event. Activity types provided in PIPS are meals,

visiting attractions (constrained by what time they are open), attending events (e.g. shows,

concerts, constrained by what time the show starts and ends), hiring equipment (e.g. hang

glider), sports activities and guided tours.


Secondly, the activities in an itinerary must form a coherent plan. This condition means that

time gaps between events should not exist. Thirdly and more importantly, an itinerary must be

practical, meaning that it meets constraints on time and space. For instance, two consecutive

activities taking place at two different locations should be separated by a transport activity

that allows the visitor to move from the location of the first activity to that of the second. For

example, a simple itinerary might consist of the following: go to a football match (3pm-5:30pm),

catch a tram (details included) to Flinders St. (5:30pm-6pm), visit the aquarium (6pm-7pm).
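The coherence and practicality constraints just described can be expressed as a simple check over a sequence of timed activities. The following Python sketch is our own illustration of the two rules (no time gaps between consecutive activities, and no location change without an intervening transport activity); the data types and names are hypothetical and do not come from PIPS itself:

```python
from dataclasses import dataclass
from datetime import time

@dataclass
class Activity:
    name: str
    start: time
    end: time
    location: str
    is_transport: bool = False  # e.g. a tram leg bridging two locations

def is_coherent_and_practical(itinerary):
    """Check the two itinerary constraints described in the text."""
    for prev, nxt in zip(itinerary, itinerary[1:]):
        if nxt.start != prev.end:
            return False  # a time gap (or overlap) breaks coherence
        if nxt.location != prev.location and not (prev.is_transport or nxt.is_transport):
            return False  # location change with no transport activity in between
    return True

itinerary = [
    Activity("Football match", time(15, 0), time(17, 30), "MCG"),
    Activity("Tram to Flinders St", time(17, 30), time(18, 0), "Flinders St",
             is_transport=True),
    Activity("Visit the aquarium", time(18, 0), time(19, 0), "Flinders St"),
]
print(is_coherent_and_practical(itinerary))  # True
```

Dropping the tram leg from this example would make the itinerary fail both rules: a time gap appears between the match and the aquarium, and the location changes with no transport activity in between.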

[Figure: user input (time/date, location, interests) feeds into PIPS, which consults tourism, weather, and transportation databases to produce a sample itinerary.]

Figure 2.1: The Personal Itinerary Planner System

User input to PIPS consists of several parameters. They include the user’s interests and the

date and time range for which an itinerary is sought. Some other constraints that may be part

of a request are transport types to be used (e.g. bus or tram) and starting/ending locations.

The user’s preferences should also be taken into account. They include the number of itinerary

options returned by PIPS and the activities that the user wants to go to several times. Another

feature of PIPS is handling changes. Once the user selects an itinerary, PIPS will alert him/her

to relevant or significant changes (e.g. if the weather changes making an outdoor activity im-

possible, if a concert becomes sold out, etc.). PIPS may access real data on Melbourne, using

the Australian Tourism Data Warehouse system.


The required characteristics of PIPS fit an agent-based application for several reasons. Firstly,

PIPS needs to be situated in an environment (e.g. the Internet) where it can access live data

involving travelling and tourism information. It has some autonomy in the sense that the

decision-making process of searching for appropriate itineraries and choosing which itineraries

should be presented to the user depends solely on the system itself. More importantly, PIPS has

to be reactive, meaning that it is able to detect significant or relevant changes in the environment

(e.g. if the weather changes making an outdoor activity impossible, if a concert becomes sold

out, etc.). Also it can respond to these changes quickly by adapting the current itinerary and/or

letting the user know. PIPS may have several agents, each trying to accomplish its own task.

For example, one agent might provide the primary interface between a mobile user and the

information system (i.e. the agency). Another agent might be responsible for developing route

plans by communicating with a transport information system.

In addition, there may be an agent that monitors weather conditions by obtaining real-time

weather information. These agents are also autonomous in that they act and work independently

from each other. However, they need to interact with each other pro-actively in order to achieve

the main goal of the overall system which is providing suitable plans to the user. PIPS is also

expected to work in a distributed environment where the users can have access to the application

via different methods such as using personal computers, mobile devices such as PDAs, etc.

2.2 Agent-Oriented Software Engineering

“Software engineering is the application of a systematic, disciplined, quantifiable approach to the development, operation, and maintenance of software; that is, the application of engineering to software.” [46].

To date, a wide range of software engineering paradigms (e.g. structured programming, object-oriented paradigms, component-ware approaches, etc.) have been proposed with the aim of

either facilitating the engineering process of producing software or increasing the complexity of

the applications that can be built [106]. Among them, Agent-Oriented Software Engineering

(AOSE) has emerged as a promising approach to achieving these objectives. By definition,

AOSE is the application of agents to software engineering in terms of providing a means of


analysing, designing, and building software systems [55]. In this section, we briefly highlight the

trend of software engineering paradigms over the past few decades with the purpose of indicating

why and how AOSE has the potential of being an efficient and powerful software engineering

approach.

In the early days of programming languages, programmers wrote programs at a level close

to the machine. They viewed the whole system as the basic unit of software. Hence,

modular design did not exist in those days. This technique is, however, only practical for simple

applications. As time went on, software systems became more complex and the old “ad hoc”

programming approach became impractical. Programmers needed to organise their code in a

more structured way, making it easier to manage. Structured programming was introduced

to answer that demand. According to this software engineering paradigm, the basic units of

software are procedures or subroutines. These subroutines are designed to perform a specific

task and can be reused in various situations. Additionally, the concept of encapsulation (i.e.

the hiding of implementation details) was introduced since the code inside each subroutine is

“wrapped” and its state is only determined by external given arguments. This new approach

promoted modular design and consequently eased the process of developing and maintaining

software.

Together with the explosion of information technology in the 80s and the 90s, there was a

large demand for a wide range of software applications that are both of high quality and able to meet complex requirements. However, structured analysis techniques were unable to

deal with that demand. In that context, the object-oriented approach was introduced. Its

effectiveness resides in many aspects such as information hiding, data abstraction, encapsulation,

and concurrency [106, pages 353–359]. Object-orientation also attempts to close the gap between

the real world and its representation, i.e. the software application, by modelling real entities

as objects. These useful properties of object-oriented paradigms bring better maintenance,

improved modifiability and increased functionality to software engineering [83]. As a result,

object-oriented programming and design have quickly become the dominant software engineering

approach in both academia and industry. Following it, there have been several paradigms which

expand object-orientation such as component-ware, design patterns, and application frameworks.

They also contribute to an attempt to achieve software reuse.


Even though object-orientation has proven its usefulness and power as a software engineering

paradigm, it still seems unable to cope with the increasing complexity of software systems.

These complexities result from different sources [55]. One of them is the rapid and radical change

of the information system environment. Software systems are now becoming not just more interconnected and more decentralized, but also more interdependent. These changes are amplified

with the increasing popularity of the Internet and the World Wide Web. Furthermore, the

complexities within software come from the increasing number of interactions between subcomponents, which is in fact an inherent property of large systems. Therefore, building high-quality

and industrial-strength software becomes more and more difficult.

The concept of agents as being autonomous, sociable, flexible, etc. (as discussed in Section 2.1.1)

promises a new solution to those issues because it leads to a new way of thinking about software

systems. Such a system is no longer a collection of passive objects. Therefore, there has been

a growth of interest in agents as a new paradigm for software engineering [53, 54, 55, 77]. The

credentials of agent-based approaches as a software engineering paradigm are two-fold. Firstly,

the technical embodiment of the agency can result in advanced functionalities. Multiagent

systems which consist of autonomous agents can expand the complexity and quality of real-world applications [54, 77, 82]. In fact, the autonomy aspect of multiagent systems suits highly distributed environments where different autonomous agents within a system act and work independently of each other. In addition, the inherently robust and flexible characteristics of

multiagent systems allow them to work in a more dynamic and/or open environment with error-

prone information sources. The reliability and failure-tolerance of the system are increased and

so is its ability to adapt to changes in the environment.

Secondly, agents with their rich representation capabilities promise more effective and reliable

solutions for complex organisational processes [52, 77]. Jennings and Wooldridge [53, 55] pointed out that agent-orientation provides three essential tools which assist developers in managing

complexity: decomposition, abstraction, and organisation. Firstly, they show that agent-oriented

decomposition is an effective means of decomposing the problem space of a complex system.

Secondly, they prove the suitability of agents as an abstraction tool, or a metaphor, for the

design and construction of systems. Thirdly, they indicate the appropriateness of applying

the agent-oriented philosophy for modelling and managing organisational relationships to deal


with the dependencies and interactions that exist in complex systems. Various researchers and

software engineers have also come to this conclusion [77].

2.3 Agent-Oriented Methodologies

As discussed in Section 2.2, agents have been increasingly recognised as the next prominent software engineering approach. However, developing software without a methodology is analogous

to cooking without a recipe. Software engineers are unable to produce complex and high-quality

applications in an ad-hoc fashion. Methodologies are the means provided by software engineer-

ing to facilitate the process of developing software and, as a result, to increase the quality of

software products. By definition, a software engineering methodology is a structured set of concepts, guidelines, or activities to assist people in undertaking software development [3, 50, 113].

It is also important for a methodology to provide notations which allow the developers to model

the target system and its environment. In addition to the methodology, there are also tools

that support the use of such methodologies. For instance, diagram editors help the developers draw the symbols and models described in the methodology. The Rational Unified Process (RUP)

is a good example of a software engineering methodology [72]. It uses the notation described in

the Unified Modelling Language (UML) [4], and its typical tool support is Rational Rose.1

Despite their current dominance, RUP, UML and other object-oriented methodologies are regarded as unsuitable for the analysis and design of agent-based systems. The main reason is the inherent differences between the two entities, agents and objects, as we discussed in Section 2.1.1. As a result, object-oriented methodologies generally do not provide techniques and models for capturing the intelligent behaviour of agents [55]. Therefore, software engineering methodologies specifically tailored to the development of agent-based systems are needed.

In answering that demand, there has been an increasing number of agent-oriented methodologies

proposed in recent years (see Table 2.1). A common property of these methodologies is that they

are developed by extending existing methodologies to include the relevant

aspects of agents. They are broadly categorised into two groups: extensions of Object-Oriented

1Rational Rose is a visual modelling and development tool based upon the Unified Modelling Language. A

detailed description of the software is available at http://www.rational.com


methodologies and extensions of Knowledge Engineering frameworks [47].

• Extensions of Object-Oriented (OO) methodologies: The agent-oriented methodologies

which belong to this category either extend existing OO methodologies or adapt them

to the aim of AOSE. There are several reasons for following this approach [47]. First of

all, the agent-oriented methodologies which extend object-oriented design can benefit from

the similarities between agents and objects. Secondly, they can capitalise on the popularity and maturity of OO methodologies. In fact, there is a high chance that they can be learnt and accepted more easily. Finally, several techniques such as use cases and class-responsibility cards (CRC) used for object identification can be used for agents with a similar purpose (i.e. agent identification).

• Extensions of Knowledge Engineering (KE) techniques: There are, however, some aspects

of agents that are not addressed in OO methodologies. For instance, OO methodologies

do not define techniques for modelling the mental states of agents. In addition, the social

relationship between agents can hardly be captured using OO methodologies. These are the

arguments for adapting KE methodologies for agent-oriented software engineering. They

are suitable for modelling agent knowledge due to the fact that the process of capturing

knowledge is addressed by many KE methodologies [47]. Additionally, existing techniques

and models in KE such as ontology libraries, and problem solving method libraries can be

reused in agent-oriented methodologies.

As can be seen from Table 2.1, there is a large number of agent-oriented methodologies available

in the literature. Several efforts have been directed at studying most of these existing methodologies

at a coarse-grained level [1, 18, 47, 104]. A detailed discussion of them is given in Section 2.4.3

where we examine related research.

The purpose of this thesis is different. We focus on several prominent agent-oriented methodologies and examine them in depth in order to identify their strengths, weaknesses, and domains of applicability, as well as the commonalities and differences between them. Furthermore, there has been an experimental evaluation involving students using selected AOSE methodologies and providing feedback which contributes to the assessment of the methodologies. Since we had a limited number of volunteer students (five summer students), we were not able to compare all of the AOSE methodologies. Therefore, we selected five of the existing agent-oriented methodologies to evaluate. The selection is based on several factors such as the methodology's significance and relevance with respect to the field of agents, and its available resources such as documentation, tool support, etc. The selection is in fact part of the evaluation and its detailed description can be found in Section 3.1.

Methodology | Authors | Category | References
AAII | Kinny, Georgeff & Rao | KE | [60, 61, 95]
ADEPT | Jennings et al. | KE | [51]
AO methodology for enterprise modelling | Kendall et al. | OO | [59]
AOR | Wagner | OO | [108, 109, 110]
Agent UML | Odell et al. | OO | [84]
Cassiopeia | Collinot et al. | OO | [21]
CoMoMAS | Glaser | KE | [43]
DESIRE | Treur et al. | OO | [6]
Gaia | Wooldridge, Jennings & Kinny | OO | [116, 117]
OPEN agents | Debenham & Henderson | OO | [27]
MaSE | DeLoach & Wood | OO | [28, 29, 30, 32, 114]
MAS-CommonKADS | Iglesias et al. | KE | [48]
MASSIVE | Lind | OO | [76]
MESSAGE | Caire et al. (EURESCOM) | OO | [12, 13, 14, 15, 19, 41]
PASSI | Cossentino et al. | OO | [10, 22, 23]
Prometheus | Padgham & Winikoff | OO | [88, 89, 90, 91]
Styx | Bush et al. | OO | [11]
Tropos | Mylopoulos et al. | KE | [7, 42, 81]

Table 2.1: Agent-Oriented Software Engineering methodologies. Notation: “OO” denotes an extension of Object-Oriented methodologies; “KE” an extension of Knowledge Engineering frameworks.

The five methodologies which were chosen are: Gaia, MaSE, MESSAGE, Prometheus, and

Tropos. In the following sections, we briefly describe each of them. It is noted that most of the examples and figures given during the discussion of each methodology are based on the design of the same application. They are extracted from the design documentation of the students who performed the case study (see Section 4.2), which involved the use of each of the five methodologies to analyse and design the Personal Itinerary Planner System (see Section 2.1.2).

2.3.1 Gaia

Gaia is one of the first methodologies specifically tailored to the analysis and design of agent-based systems [116, 117]. Its main purpose is to provide the designers with a modelling

framework and several associated techniques to design agent-oriented systems. Gaia separates

the process of designing software into two different stages: analysis and design. Analysis involves

building the conceptual models of the target system, whereas the design stage transforms those

abstract constructs to concrete entities which have direct mapping to implementation code.

Figure 2.2 depicts the main artifacts of each stage: Role Model and Interaction Model

(Analysis), and Agent Model, Services Model, and Acquaintance Model (Design). A

detailed description of the process steps which the developers need to follow to build these

models is described below.

[Figure: the Requirements statement feeds the Analysis-phase Roles model and Interaction model, which in turn feed the Design-phase Agent model, Services model, and Acquaintance model.]

Figure 2.2: Relationship between Gaia's models (re-drawn based on [117])


Analysis

It is noted that Gaia assumes the availability of a requirements specification. This means that before beginning Gaia's development process, the analysts need to have a reasonable understanding of what the system should do. These requirements form the overall objectives of the system, which influence the analysis and design phases.

Gaia encourages the developers to view an agent-based system as an organisation. The software

system organisation is similar to a real world organisation. It has a certain number of entities

playing different roles. For instance, a university organisation has several key roles such as

administration, teaching, research, students, etc. These roles are played by different people in

the university, such as managers, lecturers, students, etc. Inspired by this analogy, Gaia guides the designers to treat the building of an agent-based system as a process of organisational design.

In the first step of the analysis phase, Gaia requires the analysts to define the key roles in the

system. At this step, these roles only need to be listed and described in an informal manner.

The main purpose of this step is to understand what roles exist in the system and roughly what

they do. Gaia calls the artifact of this step a prototypical roles model.

Different roles in an organisation interact with each other to achieve their own goals and also

to contribute toward the overall goals of the organisation. For example, in the context of the Personal Itinerary Planner System, to fulfill the goal of providing itineraries tailored to the specific interests of users, the role that provides the itineraries needs to communicate with the role that keeps track of the users' interests. These interactions need to be defined in the next step of the

Gaia analysis phase. The product of this step is the interaction model. This model consists

of a set of protocol definitions for each role. Each protocol definition defines the purpose, the

initiator, the responder, the inputs, the outputs and the processing during the course of the

interaction. The valid sequence of messages involved in a conversation, however, is not required

at this stage. Instead, Gaia focusses on the basic nature and purpose of the interaction and

abstracts from exact instantiation details.

Figure 2.3 shows an example of two protocol definitions: one for queryInterestManager (the top

block) and the other for replyRetrieveInterest (the bottom block). The former forms part of


the ItineraryGenerator role and the latter of the InterestManager role. The top block states that

the protocol queryInterestManager is initiated by the role ItineraryGenerator and involves the

role InterestManager. The protocol involves ItineraryGenerator finding the interests of a user

given by his/her details, and results in ItineraryGenerator being informed about the interests of

that user. The bottom block, on the other hand, involves the protocol replyRetrieveInterest

and the role ItineraryGenerator is now the responder.

PROTOCOL: queryInterestManager
    Initiator: ItineraryGenerator    Responder: InterestManager    Input: user details
    Description: Find interest of user.    Output: interests of user found.

PROTOCOL: replyRetrieveInterest
    Initiator: InterestManager    Responder: ItineraryGenerator    Inputs: interests list, user details
    Description: Send interest of user.    Output: ItineraryGenerator aware of the interests of the user.

Figure 2.3: The queryInterestManager and replyRetrieveInterest protocol definitions (re-drawn based on the PIPS design documentation produced by Sheilina Geerdharry)
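To make the shape of a Gaia protocol definition concrete, the fields it asks for (name, initiator, responder, inputs, outputs, processing) can be sketched as a plain record. The Python encoding below is our own illustration, not part of Gaia; the class and field names are assumptions.

```python
from dataclasses import dataclass

@dataclass
class ProtocolDefinition:
    """One Gaia protocol definition (illustrative field names)."""
    name: str
    initiator: str   # role that starts the interaction
    responder: str   # role that answers
    inputs: str      # information supplied by the initiator
    outputs: str     # information produced by the interaction
    processing: str  # brief description of the interaction's purpose

# The two protocols of Figure 2.3, restated as records:
query = ProtocolDefinition(
    name="queryInterestManager",
    initiator="ItineraryGenerator",
    responder="InterestManager",
    inputs="user details",
    outputs="interests of user found",
    processing="Find interest of user",
)
reply = ProtocolDefinition(
    name="replyRetrieveInterest",
    initiator="InterestManager",
    responder="ItineraryGenerator",
    inputs="interests list, user details",
    outputs="ItineraryGenerator aware of the interests of the user",
    processing="Send interest of user",
)
```

Note how the same two roles swap initiator and responder between the two protocols, matching the top and bottom blocks of Figure 2.3.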

The final step of the Gaia analysis phase involves elaborating the key roles identified in the first

step. This process includes the identification of permissions of roles, their responsibilities as

well as the protocols and activities in which they participate. This detailed description of a

role is depicted by a Role Schemata. A set of Role Schemata forms the Role Model, which is

considered the key artifact of this analysis phase. Responsibility defines the functionality of

a role and is divided into two types: liveness properties (“something good happens”) and safety

properties (“nothing bad happens”) [117]. Permissions (i.e. rights) define which resources the

agents playing that role can and cannot use when performing a particular action. Activities

are “private” actions which do not involve interactions with other roles. Protocols are actions

that involve interactions with other roles and are derived from the protocol model built in the

previous step.

Figure 2.4 shows a role schema for the role ItineraryGenerator in PIPS. This role involves making


ROLE SCHEMA: ItineraryGenerator
DESCRIPTION: This role involves generating itineraries for the user.
PROTOCOLS: queryInterestManager, queryDatabaseKeeper, makeItinerary, replyGenerateItinerary
PERMISSIONS:
    reads userName           // username
          interestsKeywords  // interests of the user
          timeFrom           // start time for itinerary
          timeTo             // end time for itinerary
          activityList       // list of available activities
          timeStart          // start time of activity
          timeEnd            // end time of activity
    generates itineraries    // coherent itineraries for the user
RESPONSIBILITIES:
    LIVENESS: ItineraryGenerator = GenerateItinerary
              GenerateItinerary = queryInterestManager . queryDatabaseKeeper* . makeItinerary . replyGenerateItinerary
    SAFETY:

Figure 2.4: Role schema for role ItineraryGenerator (re-drawn based on the PIPS design documentation produced by Sheilina Geerdharry)

up coherent itineraries for the user, depending on the user's interests, time range specification, and available activities. Several protocols are associated with this role, such as queryInterestManager and queryDatabaseKeeper. This role has permission to read the user details and the user input in terms of time and location, and it is permitted to produce coherent itineraries for

the user. The liveness property of the role ItineraryGenerator indicates the sequential execution

of its associated protocols and activities in generating itineraries.
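Gaia liveness expressions read like regular expressions over protocol and activity names: “.” denotes sequencing and “*” zero or more repetitions, so the expression above says the role queries the interest manager once, queries the database keeper any number of times, then makes and delivers the itinerary. As a sketch, a trace of executed protocols can be checked against the expression with an ordinary regular expression; the comma-separated encoding below is ours, not part of Gaia.

```python
import re

# The ItineraryGenerator liveness expression from Figure 2.4:
#   queryInterestManager . queryDatabaseKeeper* . makeItinerary . replyGenerateItinerary
LIVENESS = re.compile(
    r"queryInterestManager"
    r"(,queryDatabaseKeeper)*"
    r",makeItinerary"
    r",replyGenerateItinerary"
)

def satisfies_liveness(trace):
    """True if a sequence of executed protocols matches the expression."""
    return LIVENESS.fullmatch(",".join(trace)) is not None
```

For example, a trace with no database queries still satisfies the expression, because “*” admits zero repetitions.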

It is noted that the Gaia analysis process is not purely linear as described. Instead, the analysts are encouraged to go back to add new roles or protocols, or to move forward to add new permissions, activities, etc.


Design

Having finished the analysis phase, the analysts have essentially completed the conceptual model of the system in terms of abstract entities. These entities do not necessarily have any direct realization within the system. They can now move to the second phase (i.e. the Design phase) where those abstract entities are transformed into concrete entities which typically have direct counterparts in the run-time system.

The design stage requires the developers to build three models. First, an agent model which

includes various agent types is constructed. Agent types are the counterparts of objects in

object-oriented approaches. They are basic design units of an agent-based system and their

realizations at run-time are agent instances. Agent types in the system under development are

defined on the basis of the roles that they play. Therefore, the important feature of this step is

to map roles identified in the analysis phase to agent types. A role can be mapped to one or

more agent types and vice versa. Some general guidelines are proposed to help this process. For

instance, a close relationship between several different roles indicates that they can be grouped

together in a single agent type.
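The role-to-agent-type mapping can be pictured as a simple table. In the sketch below the ItineraryGenerator and InterestManager roles come from the PIPS examples in this chapter, while the agent-type names and the extra AccountKeeper role are hypothetical.

```python
# Hypothetical Gaia agent model: each agent type lists the roles it plays.
agent_model = {
    "ItineraryAgent": ["ItineraryGenerator"],
    "ProfileAgent": ["InterestManager", "AccountKeeper"],  # closely related roles grouped
}

def unassigned_roles(analysis_roles, agent_model):
    """Roles found in the analysis phase but not yet mapped to any agent type."""
    covered = {role for roles in agent_model.values() for role in roles}
    return set(analysis_roles) - covered
```

A check like `unassigned_roles` captures the constraint that every role from the analysis phase must end up assigned to some agent type.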

SERVICE                  | INPUTS                             | OUTPUTS            | PRE-CONDITIONS           | POST-CONDITIONS
query interest manager   | userName                           | userInterests      | userInterests = nil      | (userInterests = nil) V (userInterests != nil)
query database keeper    | interestKeywords, timeFrom, timeTo | activityList       | activityList = nil       | (activityList = nil) V (activityList != nil)
make itinerary           | activityList, timeStart, timeEnd   | makeRequestResults | makeRequestResults = nil | (makeRequestResults = nil) V (makeRequestResults != nil)
reply generate itinerary | makeRequestResults                 |                    |                          |

Figure 2.5: Service Model for Itinerary Agent (re-drawn based on the PIPS design documentation produced by Sheilina Geerdharry)

The second artifact developed in Gaia's design phase is a service model which depicts the services that each role provides. A service is a “coherent, single block of activity in which an agent will engage” [116]. Each service should be represented by its properties: inputs, outputs, pre-conditions, and post-conditions. Inputs and outputs are derived from the protocol model. Pre-conditions and post-conditions, which define the constraints on services, are derived from the safety properties of a role. An example of the Service Model is shown in Figure 2.5. The Itinerary agent (playing the ItineraryGenerator role discussed earlier) has four main services: query interest manager, query database keeper, make itinerary, and reply generate itinerary. Each of these services is described with its inputs, outputs, pre-conditions, and post-conditions.
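Executable pre- and post-conditions give a feel for what the Service Model entries in Figure 2.5 constrain. The sketch below is our own illustrative rendering of the “query interest manager” service; the function and parameter names are assumptions, not prescribed by Gaia.

```python
def query_interest_manager(user_name, interests_db):
    """Service 'query interest manager' (cf. Figure 2.5).

    Pre-condition:  userInterests = nil (nothing is known yet).
    Post-condition: userInterests is either filled in or still nil,
                    since the user may have no recorded interests.
    """
    user_interests = None                          # pre-condition holds on entry
    user_interests = interests_db.get(user_name)   # may legitimately remain None
    return user_interests
```

The disjunctive post-condition in Figure 2.5 simply records that the lookup is allowed to fail: both outcomes satisfy the service's contract.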

The final model which the designers need to complete is the acquaintance model. It depicts

the communication links existing between agent types. It is in fact a directed graph in which

nodes represent agent types and arcs show communication pathways.
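As a sketch, such an acquaintance model is just an adjacency list over agent types; the agent names below are hypothetical.

```python
# Hypothetical acquaintance model: arcs point from sender to receiver.
acquaintance = {
    "QueryAgent": ["ItineraryAgent"],
    "ItineraryAgent": ["InterestAgent", "DatabaseAgent"],
    "InterestAgent": [],
    "DatabaseAgent": [],
}

def can_send(graph, sender, receiver):
    """True if the model includes a communication link from sender to receiver."""
    return receiver in graph.get(sender, [])
```

Because the graph is directed, a link from ItineraryAgent to DatabaseAgent does not imply one in the opposite direction.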

Gaia does not address implementation, and neither we nor Michael Wooldridge, one of the authors of Gaia, are aware of any tool support.2

2.3.2 Multiagent Systems Engineering (MaSE)

Multiagent Systems Engineering (MaSE) [28, 29, 30, 32, 114] is an agent-oriented software

engineering methodology which is an extension of the object-oriented approach. MaSE does not

view agents as being necessarily autonomous, proactive, etc.; rather agents are “simple software

processes that interact with each other to meet an overall system goal.” [29, page 1]. In fact,

they view agents as specialisations of objects which may have some of the characteristics of weak agency, as discussed in Section 2.1.1. MaSE's authors argue that, by regarding agents in this way, one may avoid having to define what an agent is, which was a contentious topic at the time the methodology was being developed. In addition, all the components in the system are treated equally, regardless of whether or not they possess intelligence. Because of this perspective, MaSE is built by applying existing object-oriented techniques to the analysis and design of multiagent systems.

As a software engineering methodology, the main goal of MaSE is to provide a complete-lifecycle methodology to assist system developers in designing and developing a multi-agent system. Similar to Gaia, it also assumes the availability of an initial requirements specification prior to the start of software development.

2Email correspondence, 18th December 2002.

[Figure: the MaSE process moves from the Initial System Context through Capturing Goals (producing the Goal Hierarchy), Applying Use Cases (Use Cases, Sequence Diagrams), and Refining Roles (Roles, Concurrent Tasks) in the Analysis phase, then through Creating Agent Classes (Agent Classes), Constructing Conversations (Conversations), Assembling Agent Classes (Agent Architecture), and System Design (Deployment Diagrams) in the Design phase.]

Figure 2.6: MaSE's process steps and artifacts (re-drawn based on [32])

The process consists of seven

steps, divided into two phases. The Analysis phase consists of three steps: Capturing Goals,

Applying Use Cases, and Refining Roles. The remaining four process steps, Creating Agent

Classes, Constructing Conversations, Assembling Agent Classes, and System Design, form the

Design phase (Figure 2.6).

Capturing Goals

One of the main extensions of MaSE to object-oriented techniques is the introduction of goals.

A goal, as defined in MaSE, is an abstraction of a set of requirements, which can be functional or

non-functional requirements. MaSE perceives goals from the system’s point of view rather than

from the users’ perspective. For instance, if the goal of a user is to “figure out a list of activities that interest him/her”, then the corresponding system goal of the Personal Itinerary Planner System (PIPS, Section 2.1.2) would be to “provide a list of activities based on a user’s interests”. Focusing on the system goals rather than the users’ goals, as argued by the authors of MaSE, “seems to be more natural when talking about the system itself” [29, page 4].

System goals relate to the first step of the analysis phase in MaSE. This step has two parts. In the first part, MaSE requires the analysts to identify system-level goals based upon the requirements specification. In fact, goals are captured by abstracting from detailed functional requirements. The second part involves structuring these goals and representing them in a hierarchical model called a Goal Hierarchy. This goal model is the product of a goal decomposition process in which goals are broken down into subgoals, subgoals into sub-subgoals, and so on. MaSE provides a

classification of goals into four different types, which may help the analysts in the process of

decomposing goals. Firstly, a summary goal is an abstraction of two or more common goals.

Secondly, a non-functional goal is one that is derived from a set of non-functional requirements.

Thirdly, two or more similar goals can be grouped together to form a combined goal. Fourthly,

some goals, known as partitioned goals, are implied by the conjunction of their children and do

not need to be achieved – it is sufficient to achieve their child goals.
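A Goal Hierarchy is essentially a tree of numbered goals. The sketch below uses the PIPS goals 1.2 (maintaining security) and 1.2.2 (user verification) mentioned later in this chapter; the remaining entries and the encoding itself are illustrative.

```python
# A MaSE goal hierarchy as a simple tree keyed by goal number.
goal_hierarchy = {
    "1":     {"text": "Provide personalised itineraries", "children": ["1.1", "1.2"]},
    "1.1":   {"text": "Generate itineraries from interests", "children": []},
    "1.2":   {"text": "Maintain security", "children": ["1.2.2"]},
    "1.2.2": {"text": "Verify users", "children": []},
}

def subgoals(goals, gid):
    """All goals below gid, found by walking the decomposition tree."""
    out = []
    for child in goals[gid]["children"]:
        out.append(child)
        out.extend(subgoals(goals, child))
    return out
```

Walking the tree this way makes it easy to check the later requirement that every system goal is eventually assigned to some role.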


Applying Use Cases

Similar to Gaia, MaSE also views an agent-based system as an organisation and the process

of developing one as a process of organisational design. Therefore, the key requirement of the MaSE

analysis phase is to identify roles in the system and the interactions or communications between

them. Capturing Goals is the first step toward this objective. The second step involves Use

Cases, a technique which is commonly found in object-oriented methodologies. It includes

extracting main scenarios from the initial system context or copying them from it if they exist.

The use cases should show how a goal can be achieved during normal system operation as well as under erroneous conditions.

The second part of this step is to apply those use cases. Firstly, an initial set of roles is identified based on the goals and use case scenarios. Secondly, the sequence of events that occur in the interaction or communication between roles is represented in a Sequence Diagram. This model is analogous to UML sequence diagrams except that the entities are roles rather than objects.

Refining Roles

The main task of the final step of the Analysis phase is refining the initial set of roles defined

in the previous step. It may involve renaming roles, decomposing them into multiple roles or

combining them with other roles. The main consideration of the analysts at this step is mapping goals

to roles. Each role can have multiple goals and every system goal identified at the first step

needs to be assigned to a role. Additionally, the tasks which a role should perform to achieve its

associated goals are defined at this step. This information is represented in the Role Model,

which is a key artifact of the analysis phase. The Role Model includes the roles, the goals which

those roles are responsible for, the tasks that each role performs to achieve its goals and the

communication paths between the roles. Figure 2.7 shows an example of a Role Model constructed for the design of PIPS. As can be seen, the AccountVerifier role has two goals, 1.2 and 1.2.2, which are references to the goals (maintaining security and user verification) identified in the “capturing goals” step. In order to accomplish these goals, the AccountVerifier needs to perform the “Determine Validity” task. This task also forms the communication between the


AccountVerifier role and the AccountManager role when the latter wants to request an account

evaluation.

Figure 2.7: PIPS Role diagram (extracted from the PIPS design documentation produced by

Yenty Frily)

In order to achieve the goals assigned to it, a role needs to perform a certain number of tasks.

At this step, the analysts are required to identify these tasks and depict them. Tasks are defined

as a sequence of actions which a role needs to perform to achieve its goals. Tasks are represented by the Concurrent Task Model. It consists of a set of finite state automata in which the states describe the processing that happens internally to the agent, and state transitions express the communication between roles or between tasks, as shown in Figure 2.8.
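A concurrent task can therefore be driven as an ordinary state machine: states are internal processing steps, and transitions fire on sent or received messages. The states and events below are hypothetical, loosely in the spirit of Figure 2.8.

```python
# A concurrent task as a finite state machine:
# (current state, event) -> next state.
TRANSITIONS = {
    ("WaitForQuery", "receive(query)"): "ProcessQuery",
    ("ProcessQuery", "send(results)"):  "WaitForQuery",
    ("ProcessQuery", "send(failure)"):  "WaitForQuery",
}

def run_task(start, events):
    """Drive the task FSM; raises KeyError on an invalid transition."""
    state = start
    for event in events:
        state = TRANSITIONS[(state, event)]
    return state
```

Rejecting undefined (state, event) pairs mirrors the idea that a task only accepts the message sequences its automaton allows.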

Creating Agent Classes

If roles are the most important concept in the analysis phase, then agents are their counterparts

in the design phase. MaSE represents an agent by an agent class. The main task of this step


Figure 2.8: The Concurrent Task Diagram for handling user queries (extracted from the PIPS

design documentation produced by Yenty Frily)

is to build a number of agent classes that constitute the target system. This is analogous to object-oriented design, where developers are required to identify the object classes.

At this point in the methodology, an agent class is defined by two properties. First, the roles

which an agent plays need to be specified. The allocation of roles to agent classes must satisfy the

condition that all the identified roles are assigned to a number of agent classes. This requirement

ensures that all the system goals are addressed in the design phase. Second, the conversations

between agent classes also need to be determined at a high level. They can be derived from

the external communications of the roles that the agent plays. Agent classes and their

interrelationships are depicted in the Agent Class Diagram. An example of this model is

shown in Figure 2.9, where five main agent classes in the PIPS system are displayed together with the conversations between them. As can be seen, the QueryAgent plays the role of a QueryManager, whereas the SolutionProvider/Solver agent is assigned two roles: ItineraryManager

and RefineManager. The main conversation between them is SupplySolution/Solver in which

the SolutionProvider agent distributes requested itineraries to the QueryAgent.
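The requirement that every identified role be covered by some agent class lends itself to a mechanical check. The helper below is an invented sketch, not part of MaSE or agentTool, though the role and agent names loosely follow the PIPS example:

```python
# Check that an agent-class allocation covers every identified role.
# This helper is illustrative only; names loosely follow the PIPS example.

def unassigned_roles(roles, agent_classes):
    """Return the roles not played by any agent class.

    agent_classes: {agent_class_name: set of roles it plays}
    """
    covered = set()
    for played in agent_classes.values():
        covered |= played
    return set(roles) - covered

roles = {"QueryManager", "ItineraryManager", "RefineManager"}
allocation = {
    "QueryAgent": {"QueryManager"},
    "SolutionProvider": {"ItineraryManager", "RefineManager"},
}
assert unassigned_roles(roles, allocation) == set()  # every role is covered
```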


Figure 2.9: Agent Class Diagram (extracted from the PIPS design documentation produced by

Yenty Frily)

Constructing Conversations

In the previous step, conversations taking place between agents are identified. The details of

these conversations are described in this second step of the design phase using Communication Class Diagrams, which are a form of finite state machine. Each conversation is described by two Communication Class Diagrams – one for the initiator of the conversation and the other for the responder. Each diagram shows the valid sequence of messages exchanged between the

initiator and the responder.
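Since each side of a conversation is a finite state machine over messages, a concrete message sequence can be validated against it. A minimal sketch, using an invented protocol loosely inspired by the account-evaluation conversation mentioned earlier (none of the names are taken from MaSE):

```python
# Sketch: validate a message sequence against one side of a conversation,
# modelled as {(state, message): next_state}. Protocol names are invented.

def valid_conversation(messages, protocol, start="start", end="end"):
    """Return True if the sequence follows the protocol from start to end."""
    state = start
    for msg in messages:
        state = protocol.get((state, msg))
        if state is None:          # no valid transition for this message
            return False
    return state == end

# Hypothetical initiator-side protocol for an account-evaluation request.
protocol = {
    ("start", "request-evaluation"): "waiting",
    ("waiting", "inform-result"): "end",
    ("waiting", "failure"): "end",
}
assert valid_conversation(["request-evaluation", "inform-result"], protocol)
assert not valid_conversation(["inform-result"], protocol)
```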

Assembling Agents

Agent classes that exist in the system are identified in the Creating Agent Classes step together

with the roles they each play. However, up to this point, the designers have not specified in detail how each agent performs its tasks and participates in conversations in order to achieve its assigned goals. To do so, the agent architecture needs to be defined.

MaSE uses an Agent Architecture Diagram to depict an agent's architecture at a high level. It includes the components that build up the architecture as well as the interrelationships (i.e.


connectors) between them. The methodology in fact does not dictate any particular implementation platform. Designers can use existing agent architectures such as Belief-Desire-Intention (BDI) or develop them from scratch. An example of an Agent Architecture Diagram for the SolutionProvider agent is shown in Figure 2.10. The SolutionProvider agent has four components, which interact with each other to provide the solution (i.e. itineraries) to the users. A more detailed description of this process step can be found in [32].

Figure 2.10: SolutionProvider Agent Architecture (extracted from the PIPS design documenta-

tion produced by Yenty Frily)

System Design

The fourth and final step of the design phase is System Design. At this step, agent classes defined

in previous steps are instantiated and deployed throughout the system. The design artifact of

this step is a Deployment Diagram which specifies the numbers, types and locations of agents

within a system. Figure 2.11 shows a Deployment Diagram for our example system, the PIPS.


Agents are represented by three-dimensional boxes, whilst the connecting lines indicate actual conversations between agents. Agents located in the same dashed box will execute on the same physical platform.
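The information a Deployment Diagram records – how many instances of which agent types run on which physical platforms – can be captured as simple data. A hypothetical sketch (host names are invented; agent names loosely follow the PIPS example):

```python
# Sketch of the information a Deployment Diagram records: instances of each
# agent type per physical platform. Host and agent names are illustrative.

deployment = {
    "host-A": [("QueryAgent", 1), ("SolutionProvider", 2)],
    "host-B": [("TripPlanner", 1)],
}

def total_agents(deployment):
    """Count all agent instances across all platforms."""
    return sum(n for agents in deployment.values() for _, n in agents)

assert total_agents(deployment) == 4
```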

Figure 2.11: Deployment Diagram for PIPS (extracted from the PIPS design documentation

produced by Yenty Frily)

MaSE has extensive tool support in the form of agentTool [29] (http://www.cis.ksu.edu/~sdeloach/ai/agentool.htm). Its latest version, 2.0, supports

all seven steps of MaSE. It also provides automated support for transforming analysis models

into design constructs.

2.3.3 MESSAGE

MESSAGE [12, 13, 14, 15, 19, 41] is the end product of a two-year project (Project P907)

hosted by the European Institute for Research and Strategic Studies in Telecommunications

(EURESCOM). The main purpose of the project is to develop an agent-oriented development



methodology as an extension to existing methodologies to allow them to support agent-oriented

software engineering. Consequently, the life-cycle model of the Rational Unified Process (RUP)

for software development [72], which provides a generic software engineering project life-cycle

framework, is adapted to MESSAGE. The current status of the methodology is, however, limited

to analysis and design activities. Other phases of the RUP life-cycle such as implementation,

testing, and deployment have not been described “due to the limitations of time and resources

of the project” [13]. Additionally, the Unified Modelling Language (UML) [4] is selected as a

foundation for MESSAGE’s modelling language [15] and is extended by adding the entity and relationship concepts required for agent-oriented modelling. The main applications of MESSAGE

are expected to reside in the telecommunication sector even though the methodology can be

applied to other domains as well.

Figure 2.12: The workflow of MESSAGE (re-drawn based on [13]); inputs to the views include the organisation chart, business processes, and descriptions of company activities and goals

Analysis

MESSAGE is developed based on a view-oriented approach. The full analysis model is described

via five different views: Organisation view (OV), Goal/Task view (GTV), Agent/Role view (AV),

Interaction view (IV) and Domain view (DV). They focus on different perspectives of the full


model but together they provide a comprehensive representation. Figure 2.12 shows the coarse-

grained relationship between the five views and the main inputs to them.

Figure 2.13: Organisation view of PIPS (re-drawn based on the PIPS design documentation produced by Hiten Bharatbhai Ravani)

• Organization view: This depicts the concrete entities existing in the system and its

operational environment. These concrete entities include the agents, organisations which

are groups of agents working together to achieve a common goal, roles (i.e. the position of

agents in an organisation), and external resources. The Organisation view also describes

the relationships between these entities at a high level. An example of the Organisation

view is shown in Figure 2.13. The Knowledge Management (KM) system interacts with

the ItineraryAssistance role and with three external resources, the Itinerary Database to

store itineraries relating to users’ interests, the Australian Tourism Database Warehouse

(ATDW) Database to retrieve event details, and the Travel Database to retrieve travel details. The ItineraryAssistance role interacts with Users to gather their requirements and provide itineraries. Note that the ItineraryAssistance role does not interact directly with the three external databases; all these interactions are carried out through the KM system.

• Goal/Task view: Similar to the Goal Hierarchy provided in MaSE (Section 2.3.2), this


view shows the structure of goals, i.e. how a goal is decomposed into subgoals. In addition,

it represents how tasks can be performed by agents or roles to fulfill their goals.

• Agent view: In contrast to the OV which captures the overall structure of the system,

the Agent view emphasises the individual agents and roles. Similar to Gaia, MESSAGE

also uses a series of schemas supported by diagrams to define the characteristics of a role or

an agent. Such characteristics include the goals an agent/role needs to achieve, the events

it receives, the resources it can use, etc. For instance, Figure 2.14 describes the PIPS

Assistant role in terms of its capability, knowledge and beliefs, and agent requirements.

• Interaction view: Interactions between agents or roles are represented in the Interaction

view. This view consists of a number of interaction diagrams. In these diagrams, the parties

that participate in a communication are described together with other information such as

the motivation of the interaction, the events that cause the interaction. In addition, the

relevant information which each party needs to provide or obtain during the communication

is also identified. Figure 2.15 shows an example of the interaction diagram describing the

“Itinerary requirements request” interaction between the Itinerary Assistant (IA) Gatherer

role and the PIPS Assistant role.

• Domain view: This shows the specific concepts and relations which are relevant to the

domain in which the target system is working. MESSAGE makes use of typical UML class

diagrams to depict the domain view. Each UML class corresponds to a domain specific

concept. For example, in our example of the PIPS, there are several concepts such as

transportation, events, user interests, itineraries, etc. These concepts are usually added

into the domain view when the analysts perform the construction of the other four views.
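The goal structure underlying the Goal/Task view (and MaSE's Goal Hierarchy, Section 2.3.2) is a tree of goals and subgoals. A small sketch, with invented goal names loosely inspired by the PIPS example:

```python
# Sketch: a goal hierarchy as a simple tree mapping each goal to its subgoals.
# Goal names are invented for illustration.

goal_tree = {
    "plan itinerary": ["find activities", "plan trip"],
    "find activities": [],
    "plan trip": ["retrieve travel details"],
    "retrieve travel details": [],
}

def leaf_goals(tree, root):
    """Return the undecomposed subgoals reachable from root (depth-first)."""
    children = tree.get(root, [])
    if not children:
        return [root]
    leaves = []
    for child in children:
        leaves.extend(leaf_goals(tree, child))
    return leaves

assert leaf_goals(goal_tree, "plan itinerary") == ["find activities", "retrieve travel details"]
```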

Even though MESSAGE does not require the developers to follow any strict order in constructing

the above five models, the methodology does provide some heuristics. For instance, at the first

step, the Organization and Goal/Task views are developed. The Agent/Role view and Domain

view are then constructed based on the inputs and the analysts’ experience with the previous

two views. Finally, using the input from the four views previously created, the analysts can

build the Interaction view.


Role Schema: Personal Itinerary Planner System (PIPS) Assistant

Goals: Assisting the user to form itineraries; assisting the user to select the most appropriate itinerary.

Capability: Some learning capability is required to manage co-ordination among the other agents that depend on the agent.

Knowledge, Beliefs: Knowledge about travel details, user interests and event details, as it needs to forward these messages as and when required.

Agent requirements: This role will be played by the itinerary assistant gatherer and itinerary assistant selector agents that actually assist in forming itineraries.

Figure 2.14: Agent/Role Schema for Personal Itinerary Planner System (PIPS) Assistant (re-drawn based on the PIPS design documentation produced by Hiten Bharatbhai Ravani)

Figure 2.15: The Interaction Diagram for the Itinerary Assistant (IA) Gatherer role and the PIPS Assistant role (re-drawn based on the PIPS design documentation produced by Hiten Bharatbhai Ravani)


In addition, as an extension to RUP, MESSAGE emphasises the iterativeness of the software

development process. In fact, the process to develop the above five views is described as stepwise

refinement. In the first step (i.e. level 0), the relationship between the target system and its

users, its environment or other relevant organisational context are described. After that, the

system is recursively analysed at finer and finer resolution (i.e. level 1, 2, etc.). This analysis

process continues until the major components of the system are sufficiently described for the design stage to take place. The main aim of level 0 analysis is to produce the five different views at a level where the analysts can gain an overall view of the system, its environment and the functionalities it provides. This mainly involves the identification of entities and their coarse-grained relationships. Level 1 analysis and subsequent levels involve describing these entities (organisations, agents, roles, tasks, etc.) in more detail.

Design

In contrast to the analysis phase where process steps and their artifacts are clearly defined, the

MESSAGE design stage is not well-documented. The authors of MESSAGE argued that this

provides flexibility in the sense that the designers are allowed to choose among different design

approaches [13, page 29]. However, in our view, the developers may find it difficult to proceed

through the design phase since they are not equipped with sufficiently detailed documentation,

techniques and guidelines. In one of the documents provided with [13], the methodology presents two approaches that have been tried in the MESSAGE project. Below are rough descriptions of the two approaches.

The Multiagent system organization and architecture driven design approach considers the target

system as an organization of several agents. Each agent is regarded as a subsystem which has

its own internal architecture. An agent’s internal architecture defines the connections between

the components that build up the agent. These components are concrete entities that are transformed and stepwise refined from the abstract entities produced in the analysis phase. MESSAGE presents five design steps that form this transformation process.

In the first step, all the entities created in the analysis phase are refined and translated into computational entities, including classes, interfaces, attributes, and methods. MESSAGE states that

this step can be achieved using conventional software engineering techniques such as generating


and refining use cases and sequence diagrams. However, the methodology does not provide any

examples or techniques to further explain how it can be carried out. The second step requires

the designer to decide which agent architecture should be used. This decision is very important

since it affects other design activities. MESSAGE encourages the designers to look into the literature for proposed agent architectures prior to starting the design. According to MESSAGE, several factors may influence the choice of agent architecture. One of them is functional: the system functionality defined in the analysis phase. For instance, a cognitive architecture seems suitable for applications that need reasoning and learning mechanisms, whereas systems that support information extraction and/or parallelism of agent interactions may need a reactive architecture.

Having selected an agent architecture to apply, the designers move to the next step where

application agents are produced. These agents are derived by tailoring the generic components

of the chosen architecture to the roles of the agents in the target system. At this stage, the designers may realize that several updates need to be applied to the five existing views; the fourth step of the design phase allows them to perform these updates. However, consistency and

coherence between analysis models and design constructs should be seriously taken into account.

Finally, the results from previous steps are structured based on the Organization view. This

step is important since it provides an architectural representation of the system in terms of a

group of agents.

The above approach is more concerned with high level design where the design constructs are

platform independent. MESSAGE also addresses another approach which involves designing the

target system at a lower level. At this level, the target agent platform has a larger impact on

the design decisions. MESSAGE takes the FIPA agent platforms as an example to address some

target platform issues and the low level design activities.

2.3.4 Prometheus

The Prometheus methodology [88, 89, 90, 91] is a detailed AOSE methodology which aims to cover all of the major activities required in developing agent systems. Prometheus aims to be usable by both expert and non-expert users; in fact, it has been taught to and used by third-year undergraduate computer science students at RMIT University. The methodology uses an

iterative process which consists of three phases: system specification, architectural design and

detailed design. Each of them is elaborated in detail below.

System specification

The system specification is the first phase of Prometheus. Its main purposes are building the system's environment model, identifying the goals and functionalities of the system, and describing key use case scenarios.

Figure 2.16: Prometheus Overview (extracted from [88])

Firstly, as we described in section 2.1.1, one of the main characteristics of agents is “situatedness”: agent systems are situated in a changing, dynamic environment and need to interact with it. As a result, building the environment model is an important step in the system specification stage.

Modelling an environment involves two activities: identifying percepts, the incoming information from the environment, and determining actions, the means by which an agent affects its environment. Percepts and actions are defined using descriptors. Additionally,

external resources such as data, information, etc. need to be identified.
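Percept and action descriptors can be thought of as simple named records. The field names below are an assumption for illustration, not the exact descriptor form Prometheus prescribes; the entries loosely follow the PIPS example:

```python
# Sketch: percept and action descriptors as plain records.
# Field names are an assumption, not Prometheus's prescribed descriptor form.

percepts = [
    {"name": "SubmitRequest", "source": "user interface",
     "information": "itinerary requirements entered by the user"},
]
actions = [
    {"name": "DisplayItinerary", "effect": "shows an itinerary to the user"},
]

def find_descriptor(descriptors, name):
    """Look up a percept or action descriptor by name."""
    for d in descriptors:
        if d["name"] == name:
            return d
    return None

assert find_descriptor(percepts, "SubmitRequest")["source"] == "user interface"
```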

Secondly, the goals and functionalities of the system need to be captured at this stage. In the first step, system goals are identified, mainly based upon the requirements specification. Goals are

decomposed into subgoals if necessary. After that, system functionalities that achieve these goals

are defined. Another step which helps the analysts identify the system functionalities is defining

use case scenarios. These depict examples of the system in operation. Use cases in Prometheus

are similar to those in object-oriented design but are produced in a more structured form. A

typical use case scenario consists of steps describing incoming percepts, messages sent and the

actions. Similar to percepts and actions, goals, functionalities, and use case scenarios are also

captured using a descriptor form. For example, Figure 2.17 describes the Itinerary Planning

functionality in the Personal Itinerary Planner System (PIPS, Section 2.1.2).

Description: Produces a single itinerary, or a collection of activities, that is coherent and practical.

Percepts/Triggers: RequestItinerary (message), ActivityFound (message), TripPlanned (message), ItineraryRanked (message)

Actions: N/A

Messages sent: FindActivity (message), PlanTrip (message), RankItinerary (message), ItineraryPlanned (message)

Data used: UserDB

Data produced: N/A

Interactions: QueryHandling (via RequestItinerary, ItineraryPlanned), ActivityFinding (via FindActivity, ActivityFound), TripPlanning (via PlanTrip, TripPlanned), ItineraryRanking (via RankItinerary, ItineraryRanked)

Figure 2.17: Functionality Descriptor for Itinerary Planning (extracted from the PIPS design documentation produced by Robert Tanaman)


Architectural Design

Between the requirements capturing phase and the low level design phase where the system is

modelled as computational entities which suit a particular agent platform, Prometheus has an

intermediate phase called architectural design. The three main activities involved in this stage

are: defining agent types, designing the overall system structure, and defining the interaction

between agents.

Figure 2.18: Data Coupling Diagram (extracted from the PIPS design documentation produced by Robert Tanaman using the Prometheus Design Tool (PDT))

Objects are regarded as the basic entity in object-oriented design and agents are their counterpart

in agent-oriented design. Therefore, determining which agents should exist in the target system

is an important step. The designers are able to make that decision by grouping the system

functionalities which were previously defined in the system specification phase. Functionalities

are grouped based upon two criteria. Functionalities that are related to each other (e.g. using the

same data) are likely to be in the same group (cohesive criterion). On the other hand, if there

are significant interactions between two functionalities, then there is a high chance that they

should be grouped (coupling criterion). Prometheus also provides the data coupling diagram

and the agent acquaintance diagram as aids to the functionality grouping process. The data coupling diagram (e.g. Figure 2.18) shows the relationships between functionalities in terms of the data they use, while the agent acquaintance diagram highlights interactions between agents as links.

Figure 2.19: PIPS System Overview Diagram (extracted from the PIPS design documentation produced by Robert Tanaman using the Prometheus Design Tool (PDT))
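The cohesion criterion – group functionalities that share data – can be illustrated mechanically. The greedy merge below is an invented sketch, not a grouping procedure Prometheus prescribes; the functionality and data names loosely follow Figure 2.18:

```python
# Sketch: group functionalities that share data (the cohesion criterion).
# This greedy union pass is an illustration, not Prometheus's algorithm.

def group_by_shared_data(uses):
    """uses: {functionality: set of data names} -> list of groups (sets)."""
    groups = []  # list of (member_set, data_set) pairs
    for func, data in uses.items():
        merged, merged_data = {func}, set(data)
        remaining = []
        for members, group_data in groups:
            if group_data & merged_data:   # shares data with this group
                merged |= members
                merged_data |= group_data
            else:
                remaining.append((members, group_data))
        groups = remaining + [(merged, merged_data)]
    return [members for members, _ in groups]

uses = {
    "AccessAuthorizing": {"UserDB"},
    "ProfileManaging": {"UserDB"},
    "TripPlanning": {"TripDB"},
}
assert {"AccessAuthorizing", "ProfileManaging"} in group_by_shared_data(uses)
```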

After agent types are defined, the system’s structure needs to be captured in a system overview

diagram, which is “arguably the single most important design artifact in Prometheus” [89]. The

system overview diagram (Figure 2.19) is constructed based on the designers’ understanding

of the system up to this stage of the development process. It depicts the agent types and

the communication links between them and the data used, which was defined in the previous

step. Furthermore, it shows the system’s boundary and its environment in terms of actions,

percepts and external data. In short, the system overview diagram provides the designers and


implementers with a general picture of how the system as a whole will function.

The system overview diagram, however, only provides the static structure of the system. At this

stage, the designers are also required to capture the dynamic behaviour of the system. There

are two types of diagrams which Prometheus uses to represent the system dynamics. Interaction

diagrams are borrowed from object-oriented design to show interaction between agents. They

are developed based upon the use case scenarios defined in the system specification stage. At a lower level of detail, interaction protocols define the intended valid sequences of messages between agents. They are developed by considering the alternatives at each point of the interaction diagrams.

Figure 2.20: Agent Overview Diagram for Itinerary Agent (extracted from the PIPS design documentation produced by Robert Tanaman using the Prometheus Design Tool (PDT))

Detailed Design

The final stage of the current Prometheus methodology is the detailed design. This is where the

internal structure and behaviour of each agent are addressed. This stage emphasises defining capabilities, internal events, plans, and detailed data structures for each agent type defined in the

previous step. Firstly, an agent’s capabilities are depicted via a capability descriptor which

contains information such as which events are generated and which events are received. The

capability descriptor also includes a description of the capability, details involving interactions

with other capabilities and references to data read and written by the capability. Secondly, at

a lower level of detail, there are other types of descriptors: individual plan descriptors, event

descriptors, and data descriptors. These descriptors provide the details so that they can be used

in the implementation phase.
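A capability descriptor can be sketched as a record of the events the capability receives and generates, together with the data it reads and writes. The exact field set here is an assumption based on the description above, not Prometheus's prescribed form; the example entry loosely follows the PIPS ItineraryPlanning functionality:

```python
# Sketch of a Prometheus-style capability descriptor as a plain record.
# Field set is an assumption; entry names loosely follow the PIPS example.
from dataclasses import dataclass, field

@dataclass
class CapabilityDescriptor:
    name: str
    description: str
    events_received: list
    events_generated: list
    data_read: list = field(default_factory=list)
    data_written: list = field(default_factory=list)

planning = CapabilityDescriptor(
    name="ItineraryPlanning",
    description="Assembles a coherent, practical itinerary from found activities.",
    events_received=["RequestItinerary", "ActivityFound"],
    events_generated=["FindActivity", "ItineraryPlanned"],
    data_read=["UserDB"],
)
assert planning.data_written == []   # nothing written in this sketch
```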

The detailed design phase also involves constructing agent overview diagrams. These are

very similar to the system overview diagram in terms of style but give the top level view of each

agent’s internals rather than the system as a whole. Agent overview diagrams, together with the

capability descriptors, provides a high level view of the components within the agent internal

architecture as well as the their connectors (interactions). They show the top level capabilities

of the agent, the flow of tasks between these capabilities and data internal to the agent.

Prometheus is supported by two tools [89]. The JACK Development Environment (JDE), developed by Agent Oriented Software (www.agent-software.com), includes a design tool that allows

overview diagrams to be drawn. These are linked with the underlying model so that changes

made to diagrams, for example adding a link from a plan to an event, are reflected in the model

and in the corresponding JACK code. The Prometheus Design Tool (PDT) provides forms to

enter design entities. It performs cross checking to help ensure consistency and generates a

design document along with overview diagrams. Neither PDT nor the JDE currently support

the system specification phase.

2.3.5 Tropos

Tropos [7, 42, 81] is an agent-oriented software development methodology created by a group of

authors from various universities in Canada and Italy. The methodology is designed specifically

for agent-based system development. Similar to the other AOSE methodologies we described

above, agent-related concepts such as goals, plans, tasks, etc. are included in all the development

phases. Nevertheless, one of the significant differences between Tropos and the other methodologies is its strong focus on early requirements analysis, where the domain stakeholders and their intentions are identified and analysed. This process also allows the reason for developing the software to be captured. Tropos is the only one of the five selected methodologies that includes an early requirements phase. Below is a detailed description of each development stage.

Figure 2.21: Goal Diagram – PIPS in connection with Tourism Commission (re-drawn based on the PIPS design documentation produced by Sindawati Hoetomo)

Early Requirements

The requirements phase of Tropos is influenced by Eric Yu’s i* modelling framework [119]. The

main objective of this early requirements analysis is to identify the stakeholders in the target

domain and their intentions. Tropos uses the concepts of actors and goals to model stakeholders

and intentions respectively. In Tropos, goals are divided into two different groups. “Hardgoals”

eventually lead to functional requirements, whilst “softgoals” (goals whose satisfaction conditions cannot be precisely defined) relate to non-functional requirements. There are two models that represent them at this point in the methodology. Firstly, the actor diagram depicts the stakeholders (hereafter called actors) and their relationships in the domain. The latter are called “social dependencies” and reflect how actors depend on one another for goals to be accomplished, plans to be executed, and resources to be supplied.

Secondly, the goal diagram shows the analysis of goals and plans with regard to a specific actor who has the responsibility of achieving them. Tropos suggests three basic reasoning techniques to analyse goals and plans: means-end analysis, goal/plan decomposition, and contribution analysis. Means-end analysis is a technique for refining goals: a goal is broken down into subgoals so that the plans, resources and softgoals that provide the means for accomplishing the goal (i.e. the end) are defined. Contribution analysis studies the interactions between goals, i.e. how achieving a particular goal can positively or negatively affect the fulfilment of other goals. Goals are also structured in a hierarchy using AND and OR decomposition.
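AND/OR decomposition gives goals a natural recursive evaluation: an AND-decomposed goal needs all of its subgoals, an OR-decomposed goal at least one. A sketch with goal names loosely following the PIPS example (the evaluator itself is an invented illustration, not part of Tropos):

```python
# Sketch: AND/OR goal decomposition with a recursive satisfaction check.
# Goal names loosely follow the PIPS example; the evaluator is illustrative.

goals = {
    # goal: ("AND"|"OR", [subgoals]); leaves map to ("LEAF", satisfied?)
    "provide information": ("AND", ["search information", "generate output"]),
    "search information": ("OR", ["access tourism database", "access user records"]),
    "generate output": ("LEAF", True),
    "access tourism database": ("LEAF", False),
    "access user records": ("LEAF", True),
}

def satisfied(goals, g):
    """AND-goals need all subgoals satisfied; OR-goals need at least one."""
    kind, rest = goals[g]
    if kind == "LEAF":
        return rest
    results = [satisfied(goals, sub) for sub in rest]
    return all(results) if kind == "AND" else any(results)

assert satisfied(goals, "provide information")  # the OR branch suffices
```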

Actor diagrams and goal diagrams are not two separate models in Tropos. Instead, the goal diagram is a detailed description, for each specific actor, of the actor diagram. Because actor dependencies result from goals and plans, the activities of building the two models depicting these concepts should be related. Figure 2.21 shows the dependency of the Tourism Commission

on PIPS (the Personal Itinerary Planner System, see section 2.1.2) to provide information (hard

goal). It also requires a usable PIPS (softgoal). These goals are then decomposed into subgoals. For instance, the goal “provide information” is fulfilled by the composite achievement of two sub-goals, “search information” and “generate output”. The subgoal “search information” in turn has several sub-goals such as “access to tourism database” and “access to user’s

records”. Additionally, the positive contribution of other goals to the softgoal “easy to use” is

also shown. A “user-friendly interface” that offers “simplicity” and provides guidelines promotes

the fulfilment of the goal “easy to use”.

Late Requirements

Late Requirements analysis follows the same conceptual and methodological approach as the Early Requirements stage. The focus of this phase is the modelling of the target

system (or “system-to-be” as it is called in Tropos) within its environment. The system-to-be is

modelled as one or more actors. The dependencies of these special actors are also identified by

following a similar process to that used in the Early Requirements phase. These dependencies

in fact define the functional and non-functional requirements of the system.


Architectural Design

Architectural design is the next stage of Tropos methodology. Subsystems (actors) and data

and control flows (connectors) are defined to form the system architecture. The methodology

specifies three steps which system designers can apply to proceed through this phase.

The main aim of the first step is to define the overall architectural organisation of the system.

This is represented in terms of actors and their dependencies, similar to the models constructed in previous phases. Nevertheless, architectural design requires a more detailed

representation (called extended actor diagram). New actors are introduced into the system

based on the different analysis techniques performed at various levels of abstraction. For instance, if a particular architectural style is selected, then new actors may be included according

to that choice. In addition, some non-functional requirements which are derived from previous

steps may introduce new actors to fulfill them. Furthermore, goal analysis decomposes the system's goals into sub-goals, which are then delegated to new sub-actors.

Figure 2.22 shows the decomposition in sub-actors of the Personal Itinerary Planner System

(PIPS, Section 2.1.2) and the delegation of some goals from the PIPS to them. The PIPS

depends on the Tourism Database Manager to have access to the Tourism Database, on the User

Record manager to have access to user records, on the Output Generator to generate output,

and so on. In addition, each sub-actor (e.g. User Info Manager) can itself be decomposed into sub-actors (e.g. Account Manager and Interest Manager) responsible for the achievement of one

or more goals (e.g. store user account and store user interest).

The second step involves identifying the capabilities needed by the actors to accomplish their

goals and plans. This process is conducted based on the extended actor diagram created in the

first step. In fact, an actor's capabilities are defined based on the dependency relationships between it and the other actors in the system. These include both incoming and outgoing dependencies.

Each dependency basically results in one or more capabilities.
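The rule that each dependency yields one or more capabilities can be sketched as follows. The actor and dependum names come from the PIPS example; the `capabilities` function and the request/achieve labelling are our own illustrative assumptions:

```python
# Each dependency between actors is a (depender, dependee, dependum) triple.
dependencies = [
    ("PIPS", "Tourism Database Manager", "access to tourism database"),
    ("PIPS", "Output Generator", "generate output"),
    ("User", "PIPS", "provide information"),
]

def capabilities(actor: str) -> set[str]:
    caps = set()
    for depender, dependee, dependum in dependencies:
        if depender == actor:   # outgoing dependency: ability to request/delegate
            caps.add(f"request '{dependum}'")
        if dependee == actor:   # incoming dependency: ability to satisfy it
            caps.add(f"achieve '{dependum}'")
    return caps

print(sorted(capabilities("PIPS")))
```

In practice a single dependency may justify several capabilities (e.g. both querying and caching for a database access), so this one-to-one derivation is only a starting point.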

After all the capabilities of actors are specified and listed, the designer can move to the next

step, which is also the final step of the architectural design phase. At this step, agent types are

defined together with the allocation of capabilities to them. These decisions of which agent types exist in the system, and which capabilities should be assigned to them, are up to the designers. Analysis of the extended actor diagram may help, and Tropos provides a set of pre-defined agent patterns that can assist the designers in this task.

Figure 2.22: Actor Diagram for the PIPS architecture (re-drawn based on the PIPS design documentation produced by Sindawati Hoetomo)

Detailed Design

The Tropos detailed design phase involves defining the specification of agents at the micro level.

There are three different types of diagrams which the designers need to produce to depict the

capability, the plans of agents and the interactions between them. Tropos uses UML activity diagrams to represent capabilities and plans at the detailed level.

Figure 2.23: Capability diagram for the Search Activity capability (re-drawn based on the PIPS design documentation produced by Sindawati Hoetomo)

For capability diagrams, the starting state is an external event; each plan forms an activity node, whereas transition arcs model events. Plan diagrams are fine-grained representations of each plan node in the

capability diagrams. For instance, Figure 2.23 depicts the capability of the Search Activity

of the ItineraryPlanner agent, which is responsible for generating itineraries based on users’

requests in terms of their interests and locations.

The interactions between agents in the system are represented by agent interaction diagrams.

They are in fact AUML sequence diagrams. Each entity represents an agent type and the

communication arcs between agents correspond to asynchronous message arcs.


Implementation

Having finished the detailed design stage, the developers can now move to the final step of Tropos,

the Implementation phase. Tropos chooses a BDI platform, in particular JACK Intelligent

Agents, for the implementation of agents. JACK is an agent-oriented development environment

which supports the programming of BDI agents. JACK provides five main language constructs:

agents, capabilities, database relations, events and plans. At this stage, developers need to

map each concept in the design phase to the five constructs in JACK. This in fact can be

done by mapping Tropos concepts to BDI concepts and BDI concepts to JACK constructs.

Actors, resources, goals and tasks are mapped into BDI agents, beliefs, desires, and intentions

respectively. BDI concepts are then mapped to JACK concepts – a belief is considered as a

database relation, a desire is posted as a BDIGoalEvent and an intention is implemented as a

plan.
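The two-stage mapping described above can be written down as a simple lookup. This is a sketch of the conceptual correspondence only, not actual JACK code; the dictionary names are our own:

```python
# Tropos design concepts map to BDI concepts, which in turn map to
# JACK constructs (as described in the text).
TROPOS_TO_BDI = {
    "actor": "agent",
    "resource": "belief",
    "goal": "desire",
    "task": "intention",
}
BDI_TO_JACK = {
    "agent": "Agent",
    "belief": "database relation",
    "desire": "BDIGoalEvent",
    "intention": "Plan",
}

def tropos_to_jack(concept: str) -> str:
    """Compose the two mappings: Tropos concept -> JACK construct."""
    return BDI_TO_JACK[TROPOS_TO_BDI[concept]]

print(tropos_to_jack("goal"))  # a goal becomes a desire, posted as a BDIGoalEvent
```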

The only tool support for Tropos that we are aware of is a set of icons for the Dia diagram

editor5.

2.4 Software Engineering Methodology Evaluation

In the previous section, we have performed a brief literature review on agents and agent-oriented

methodologies. In this section, we review the literature involving the evaluation of software

engineering methodologies.

As we discussed in Section 2.2, software engineering has evolved considerably since the early days when ad-hoc programming was the dominant method of producing software. Over this period, a large number of software engineering methodologies have been offered to the computer science and software engineering communities. On the one hand, they provide a rich resource; on the other hand, deciding which methodology to use to design and implement a particular system has become more difficult and critical [44, 49]. The

decision of adopting a new methodology may affect the success of a software product, the current

organisation practice as well as cost, training and other issues.

5Available at http://www.lysator.liu.se/~alla/dia/


Having recognised those difficulties involved in the selection of an appropriate methodology,

there has been a large amount of effort spent on evaluating and comparing software engineering

methodologies. This section briefly discusses various key methods, techniques and frameworks

which have been proposed in this research area (Section 2.4.1). Since object-oriented method-

ologies are considered as the “predecessor” of agent-oriented methodologies, we also look back

at work on evaluating and comparing a number of object-oriented methodologies (Section 2.4.2).

Related research in the area of comparing agent-oriented methodologies is also briefly discussed

(Section 2.4.3).

2.4.1 Methods for Evaluating Methodologies

“Making a choice from the apparently very wide range of methods and tools available

can in itself be a complex and costly process ....” [74].

Therefore, it is necessary to carry out systematic evaluations. In answering a pressing need

for methods for comparing or evaluating methodologies, there have been a number of major

initiatives in this area over the past decades. Amongst these are:

• The book “Methods for Comparing methods” [75] written by David Law and pub-

lished by the UK National Computing Centre (NCC) in 1988. Its main purpose is to

provide a scientific approach to the comparison of methodologies, especially regarding

how design methodologies and requirements specification methodologies can be usefully

evaluated.

• The DESMET (Determining an Evaluation methodology for Software Methods

and Tools) project which started in 1990 and began to publish details in 1994 [62, 63, 64,

65, 66, 67, 68, 69, 70, 71, 74, 98]. Its participants included the UK National Computing

Centre, The University of North London and several European software consultants. The

main contribution of DESMET resides in the effort to develop a common framework for

evaluation methods, tools and techniques in the software engineering domain. DESMET

has been commonly regarded as a source of inspiration for all aspects concerning both

qualitative and quantitative evaluation.


• The NIMSAD (Normative Information Model-based System Analysis and Design) framework initiated by Jayaratna in his book “Understanding and Evaluating Methodologies: NIMSAD a Systematic Framework” [50] published in 1994. Its significance lies in its difference from other approaches of that time, dealing with methodology evaluation from a more general, philosophical or theoretical perspective.

• The book “Information Systems Development: Methodologies, Techniques and

Tools” [3] written by Avison and Fitzgerald. Its first edition was published in 1988, the

second in 1995 and the latest was recently published in 2002. Apart from detailed and

well-described concepts relating to information system development methodologies, its main

contribution to the area of methodology comparison is the review of existing evaluation

approaches and the proposal of a generic framework for methodology classification.

Besides the above significant works, there have been various approaches [9, 16, 105, 107, 113]

which generally either adapt them or extend and tailor them to suit a particular evaluation

purpose. In the remainder of this section, we classify the major approaches to methodology

evaluation into four main groups: Feature-based evaluation, Quantitative evaluation, NIMSAD

framework and Other evaluation approaches.

Feature-based evaluation

Feature-based evaluation (also often called Feature Analysis) is the most prominent and popular comparison approach. It is regarded as a qualitative method [62]. It

involves building an evaluation framework that can be represented in terms of a set of proper-

ties, qualities, attributes or characteristics [75, page 19]. These features are able to describe the

evaluated methodology sufficiently well so that it can be assessed and compared for a particular

purpose. They often reflect user requirements for specific tasks or activities performed on a

particular domain.

Assessing a methodology against a framework of attributes and features involves some judgement

of how well it supports or to what extent it has a specific attribute or feature. In other words,

the methodology assessment is determined on the basis of its ratings on the different attributes

and features.
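A minimal sketch of such a rating-based assessment, assuming an ordinal rating scale from 1 to 5 and per-feature weights; the feature names, ratings and weights below are illustrative only, not from any actual evaluation:

```python
# Ratings of one methodology against a small feature framework (scale 1 to 5).
ratings = {"notation clarity": 4, "process guidance": 3, "tool support": 2}

# Weights express the relative importance of each feature.
weights = {"notation clarity": 0.5, "process guidance": 0.3, "tool support": 0.2}

# Overall assessment as the weighted average of the feature ratings.
score = sum(ratings[f] * weights[f] for f in ratings) / sum(weights.values())
print(round(score, 2))  # 3.3
```

As the text later notes, collapsing all features into one number like this is itself problematic: the choice of weights is subjective, and a single aggregate hides which features drove the result.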


As discussed in [75], deriving a set of attributes or features is a difficult task since there is

no universal agreement on the standard set of features. The choice of features may essentially

reflect the subjective opinion of a particular group of assessors, which in turn depends on their

background, interests and knowledge. There have been several approaches to devise evaluation

features [62, 75]. For instance, one can consider a software development methodology as a set of models, processes, techniques and their interactions. The task of evaluating a methodology then

becomes assessing its support for features of each of these components. The second approach

involves using an expert perspective on what the key features of such a methodology should

be. They may be derived either from experience or from the theories and principles of software

engineering. For instance, one can follow the guidelines6 in [113] to derive a number of features

that fit the evaluation purpose. In practice, both techniques are often used together to generate

a list of evaluation criteria. The framework of features generally needs to consider not only the

technical aspects but also economic, cultural and quality issues.

Devising a set of features and criteria for the evaluation framework is only one of the major con-

cerns in performing a Feature Analysis. Another concern involves the selection of the evaluation

procedure, i.e. the way in which the evaluation is organised. Similar to feature generation, there

are also different ways of organising a Feature Analysis. They range from a simple comparison

performed by a single assessor to a formal process conducted in an organization to select a

methodology for its development process. Kitchenham [62] has categorised them into four major

groups:

1. Screening Mode approach: This approach can be performed by a single person who

is responsible for both generating the evaluation criteria and assessing the methodologies.

The evaluation is solely based on his/her understanding of the methodologies according

to their documentation. This technique does not require a large amount of time or ef-

fort/cost but it is not reliable. This is due to the fact that the entire evaluation is based

on the assessors’ subjective opinion which may not be representative of the users of the

methodology. Also, the results of the evaluation may not be correct since the assessors

may make a wrong assumption on a specific aspect of the methodology.

6Proposed in 1989 by the Software Engineering Institute, Carnegie-Mellon University, they have been regarded

as one of the most useful sources for the assessment of software development methods.


2. Case Study approach: This approach is proposed in [62, 75]. The evaluated methodologies are used to develop a real project. In contrast to the screening mode, there are now

two distinct roles in the evaluation process. The first role is the evaluator who is respon-

sible for selecting the methodologies, generating the evaluation criteria and selecting the

testing project. The second role is played by the software developers who assess each fea-

ture of the methodology based on their experience of using it to develop the trial project.

This approach has several advantages such as providing a practical evaluation and the

evaluation is performed by actual users of the methodologies. Nevertheless, its limitations

include that the result collected from a doing a project is probably not representative of

some specific features that the methodology addresses. In addition, the assessment is also

affected by the background, the ability and learning curves of the software developers in

using and understanding the methodologies. Finally, it is also relatively expensive since a

certain number of people need to be involved in the evaluation.

3. Formal Experiment approach: Similar to the case study approach, there are also at

least two different roles in this approach. The evaluators need to select the methodologies,

build the set of features, plan and run the experiments, and analyse the results. They

also need to choose an appropriate experimental design, select (randomly or deliberately)

the experimental subjects (i.e. users) and probably classify them into different classes of

users. In addition, the experimental subjects are trained in the use of the methodology

if necessary. This approach is likely to produce the most reliable results since it seems

to reduce the influence of single assessor differences. It is, however, the most costly and

lengthy approach. It may require a large number of participants in the evaluation process.

4. Survey approach: This approach does not involve the practical use of the evaluated

methodologies. Rather, it relies on the assessment of the experts and users who have been

using some of the evaluated methodologies. Similar to the above three approaches, the

evaluators also need to derive a set of evaluation criteria together with the judgement

scale. After that, they design and run the survey. Several tasks are involved with this

process, including choosing the type of survey (e.g. web-based survey or personal inter-

view), building the survey documentation (e.g. questionnaire), and identifying people who

will be asked to join in the survey. Finally, the evaluators run the survey and collect and


analyse the responses according to the survey design. The advantage of this approach is that it tends to take less time and effort than the formal experiment approach. In addition,

it may reflect the opinions of a wide range of methodologies’ users. Its limitations include

the difficulty in finding the “right” people to ask to participate in the survey, especially

if the evaluated methodologies are still not popular and mature enough to have a large

number of users. In addition, unlike the formal experiment approach, the evaluators are

not able to control the experience, background and capabilities of the participants.

The feature-based evaluation approach gains its popularity from its high flexibility. Firstly, it can

be performed by anyone or any organisation without the need to have several evaluation facilities

such as a measurement programme in place. Secondly, a feature analysis can be conducted at any required level of detail, ranging from very simple evaluations such as screening mode to sophisticated procedures like formal experiments. Thirdly, this approach can be used not only to assess methodologies but also any type of tool or development process.

Nonetheless, there are several outstanding limitations relating to feature-based evaluation. First

of all, the subjectivity involved in using the feature-based approach is of concern. As mentioned

earlier, subjectivity can come from two sources: the set of evaluation criteria and the judgement

of a particular methodology against them. Secondly, the inconsistency in the judgement of dif-

ferent assessors may pose some issues. It results from the fact that different people have different

backgrounds, experiences and abilities to understand and use a methodology. For instance, some features may score higher than others simply because the assessors are more familiar with them. Thirdly, aggregating the scores of all the evaluation criteria into a single number that represents the quality of a methodology is a difficult task. Besides, the evaluators need to deal with the relative importance among features: some significantly represent the quality of a methodology whilst others do not. Finally, one may face a redundancy problem

where too many features (e.g. more than 100) are produced. Managing the judgment scores of

all the features becomes a more difficult task. In addition, the evaluators need to collate and

analyse the scores.


Quantitative evaluation approaches

Feature-based evaluation or Feature Analysis is usually a qualitative technique. Even though it can be “quantified” in the sense of judgement scores, assessment scales, weights and their aggregation, it still deals with quality aspects of the methodology. On the other hand, quantitative

evaluations assess a methodology according to some measurable results produced by its use.

These can be the software applications produced or the changes in the development process.

Similar to the Feature Analysis, the main procedures to perform a quantitative evaluation are

case studies, formal experiments, and surveys. These also proceed in an analogous fashion. The

main difference, as proposed in DESMET [62], is that before running a case study, an experiment or a survey, the evaluators must formulate and validate the hypotheses. Formulating a hypothesis involves defining the consequences the assessors expect the evaluated methodologies to bring when applied to a project or a development process. These effects need to be defined in such a way that they are measurable and detailed. Validating a hypothesis ensures that

the evaluators are able to correctly interpret the results based on it.

Overall, quantitative evaluation methods tend to produce more reliable results than qualitative

or feature-based approaches do. In fact, DESMET classifies them in terms of the risk that an evaluation will draw an incorrect conclusion about the methodologies being evaluated. The relative risk associated with each quantitative technique ranges from low (Quantitative Case Study method) to very low (Quantitative Formal Experiment). In contrast, qualitative or feature analysis approaches range from very high risk (Feature-based screening mode) to low risk (Feature-based experiment). Despite producing higher confidence in evaluation results, quantitative methods are not as flexible as the feature-based approach. To conduct a quantitative evaluation,

an organization needs to have some prior infrastructure such as a measurement program, a set

of standards for metrics, etc. Furthermore, there are some aspects of the methodology such as

process visibility or controllability that are best evaluated using qualitative methods.

The NIMSAD framework

The above two approaches are the most practical ways of performing a comparative analysis on

system development methodologies. There are also alternative approaches for understanding and


evaluating methodologies in a more theoretical way. One of them is the NIMSAD (Normative

Information Model-based System Analysis and Design) framework. It was proposed

by Jayaratna in his book “Understanding and Evaluating Methodologies: NIMSAD a Systematic Framework” [50]. As an evaluation framework, NIMSAD is not in fact a method for practically and efficiently comparing methodologies as such. Rather, it provides an alternative

way of understanding and evaluating methodologies on the basis of “the models and epistemol-

ogy of systems thinking perspective” [50, page 50]. The basic scaffolding for the framework is

constructed based on a fairly wide and encompassing view of an approach to problem solving.

The framework contains four major elements: the methodology context, the methodology user, the

methodology itself, and the evaluation of the above three. The fourth component, the evaluation,

is missing from many other frameworks according to Jayaratna [50, page 53]. The process

of evaluating a methodology involves answering various questions that address the first three

components. For instance, the questions relating to the methodology context element deal with

the mechanism employed by the methodology in assisting the understanding and identification

of the clients, their experiences and commitments, the culture and context of the target system,

etc. The questions concerning the second element, the methodology users, involve their beliefs, values, and ethical positions, their experiences, skills, and motives, etc. The questions addressing

the third element (the methodology itself) are similar to the evaluation criteria found in other frameworks. These questions examine the way the methodology provides specific assistance for understanding, defining and modelling the problem, implementing the design, etc. The

fourth element, evaluation, is performed at three stages: prior to intervention (i.e. before a

methodology is adopted), during intervention (i.e. during its use) and after intervention is

complete (i.e. assessment of the success or failure of the methodology).

The NIMSAD framework has been applied to evaluate three well-known methodologies which

have different approaches to systems development. These are Structured Systems Analysis and Systems Specification, ETHICS, and SSM (Soft Systems Methodology) [50].

Other approaches

Apart from the above three major evaluation approaches, there are various methods and frame-

works that have been proposed in the literature. For instance, Law [75] suggested that the


evaluators can employ a set of evaluation criteria and make a direct comparison between methodologies, rather than examine all the compared methodologies together. He argued that the latter, which involves judging on the basis of a rating scale, may be less discerning and sensitive, and indicated that direct comparison is suitable for quick, subjective expert comparison at a relatively high level of detail.

As also mentioned in [75], the easiest way for an organisation to choose a methodology, method or tool is to follow the recommendations of some institution. This approach, however, is not a safe

option since the organisation has to be careful in taking account of its own needs, requirements,

etc. In a slightly different direction, DESMET [62] refers to this approach as a Qualitative

Effects Analysis where the selection of a method or tool is done on the basis of expert opinion.

This, however, assumes the existence of a knowledge base of expert opinion regarding generic

methodologies and techniques.

2.4.2 Comparisons of Object-Oriented Methodologies

In the early 90’s, Object-Oriented (OO) software engineering experienced a similar period to

that which Agent-Oriented (AO) approaches are going through now. Driven by the attrac-

tiveness of the OO paradigm, software engineering researchers proposed a large number of OO

methodologies supporting the development of OO systems. As cited in [38], a famous quote

of Edward Berard describes this situation as:“I have good news and bad news. The good news

is that there has been a great deal of work in the area of ’object-oriented software engineering’.

The bad news is that there has been a great deal of work in the area of ’object-oriented software

engineering’”. This major issue led to difficulties in deciding between different methodologies

for object-oriented design.

Consequently, a large amount of effort was spent on evaluating, comparing and understanding

object-oriented methodologies [2, 3, 8, 24, 33, 34, 35, 38, 39, 45, 58, 78, 79, 87, 92, 94, 96, 99,

102, 105, 107, 111, 112, 113]. Since AO and OO methodologies share the same foundation of

software engineering goals and principles, it seems important and useful for us to review the

main evaluation approaches applied to OO methodologies. These may provide inspiration and valuable resources for our research on AOSE methodology evaluation.


Comparative studies of OOSE methodologies generally fall into the four major methods of evaluation which we discussed in the previous section. Some of them target a full OO methodology, whereas others target several aspects or process stages. For instance, in [8, 24, 33, 35, 45, 78,

113], the complete software development process, the modelling language as well as tool support

of a methodology are examined. On the other hand, [2, 105, 112] focus on the comparison of

requirements specification techniques; whereas object-oriented design and analysis are addressed

in [38, 79, 92, 107]. In [39, 94, 96], only the modelling language (i.e. models and notations) of

different object-oriented methodologies is evaluated. Overall, significant work in the area of

OO methodology comparisons can be classified into three styles: comparison against a frame-

work (feature analysis), comparison by meta-modelling and comparison by outcome (quantitative

evaluation).

Comparison against a framework

Because of the popularity and flexibility of feature analysis as a powerful evaluation approach,

this type of comparative study on object-oriented methodologies has attracted more significant

work than the other comparison styles. Comparison against a framework basically involves

building an evaluation framework containing a hierarchy of attributes or features. These form

the evaluation criteria on which the assessment is based. The evaluation framework also varies -

some [111] try to construct an ideal OO development methodology against which others can be

compared, whereas most works construct their own frameworks [3, 24, 35, 38, 94, 96, 113].

Overall, an emerging point of agreement among those comparison frameworks is that they address the major components of an OO methodology, such as concepts, models, process, tools, and pragmatics. We also found many evaluation criteria in this area that are potentially useful for our purpose of comparing agent-oriented methodologies.

Comparison by meta-modelling

This type of comparative study involves developing a common frame of reference (i.e. meta-

modelling) for viewing different participating methodologies. It generally requires the construc-

tion of a meta-model based on a combination of different features provided by the compared


methodologies. Among the significant works, there is a formal approach to evaluate six different

OO methodologies [45] using the meta-modelling comparison technique, which was initiated by

Hong, Goor and Brinkkemper in 1993. They argue that to make the comparison accurate and objective, methodologies should be compared on a uniform, formal and unbiased basis. As a result, the meta-modelling evaluation technique is chosen. In the first step,

the goals, concepts, techniques, processes and graphical notations of each methodology are iden-

tified. The collected information is then used to build a meta-model of each methodology which

consists of two sub-models: a meta-process model and a meta-data model. The meta-process

model captures the process of the methodology whereas the meta-data model describes the con-

cepts and techniques relating to it. Based on the meta-models of the selected methodologies, the

research describes the process of comparing them according to three major features: the analysis

and design steps, the concepts, and the techniques provided. Meta-modelling techniques are attractive in that they are more objective and accurate than feature-based techniques. However, we found that the meta-models constructed do not capture other software engineering concerns such as reusability, maintainability and modifiability. Additionally, meta-modelling techniques tend to be useful only when comparing a fixed set of methodologies. Another representative of this kind of

comparative study can also be found in [34].

Comparison by outcome

Comparison of object-oriented methodologies by outcome is a form of quantitative evaluation

which we discussed in section 2.4.1. OO methodologies are evaluated on the basis of assessing

the outcome in terms of several measurable effects. For example, the quality of the product

or the complexity of the process as a result of applying the methodologies can be examined.

Representatives of this type of study in the object-oriented area are the formal experiment conducted to assess analysis techniques for information requirements determination found

in [105], and the metrics-based evaluation of object-oriented software development methods

in [33] and in [99].


2.4.3 Comparisons of Agent-Oriented Methodologies

As opposed to object-oriented methodologies, there has not been much work in comparing

agent-oriented methodologies. Shehory and Sturm [100] performed a feature-based evaluation

of several Agent Oriented Software Engineering (AOSE) methodologies. Their criteria included

software engineering related criteria and criteria relating to agent concepts. In another paper

[104], they used the same techniques in addition to a small experimental evaluation to perform

an evaluation of their own Agent Oriented Modelling Techniques (AOMT). This work suffers

from subjectivity in that the criteria they identified are those that they see as important and,

naturally, AOMT focuses on addressing these criteria.

A framework to carry out an evaluation of agent-oriented analysis and design modelling methods

has been proposed by Cernuzzi and Rossi [18]. The proposal makes use of feature-based evalu-

ation techniques but metrics and quantitative evaluations are also introduced. The significance

of the framework is the construction of an attribute tree, where each node of the tree represents a software engineering criterion or a characteristic of agent-based systems. Each leaf attribute is assigned a score, and the score of an internal node is calculated from those of its children. They have applied that framework to evaluate and compare two AOSE methodologies:

the Agent Modelling Techniques for Systems of BDI (Belief, Desire and Intention) Agents and

MAS-CommonKADS.
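As an illustration of the attribute-tree idea (this is a sketch, not Cernuzzi and Rossi's actual scoring formulas; the attribute names, scores, and weights are invented), leaves can carry scores and internal nodes can aggregate them bottom-up:

```python
from dataclasses import dataclass, field

@dataclass
class Attribute:
    """A node in the attribute tree; leaves carry scores directly."""
    name: str
    score: float = 0.0            # used only when the node is a leaf
    weight: float = 1.0           # relative importance among siblings
    children: list = field(default_factory=list)

    def aggregate(self) -> float:
        """Score of this node: leaves return their own score; internal
        nodes return the weighted average of their children's scores."""
        if not self.children:
            return self.score
        total = sum(c.weight for c in self.children)
        return sum(c.aggregate() * c.weight for c in self.children) / total

# Hypothetical example: a "Concepts" node scored from two leaf criteria.
root = Attribute("Concepts", children=[
    Attribute("Autonomy", score=4.0, weight=2.0),
    Attribute("Reactivity", score=3.0, weight=1.0),
])
print(root.aggregate())  # (4*2 + 3*1) / 3 ≈ 3.67
```

The weighted average is one plausible aggregation rule; other frameworks may use sums or minima at internal nodes.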

In [86], O’Malley and DeLoach proposed a number of criteria for evaluating methodologies

with a view to allowing organisations to decide whether to adopt AOSE methodologies or use

existing OO methodologies. Although they performed a survey to validate their criteria, they

do not provide detailed guidelines or a method for assessing methodologies against their criteria.

Their example comparison (between MaSE and Booch) gives ratings against the criteria without

justifying them. Their work is useful in that it provides a systematic method of taking a set of

criteria, weightings for these criteria (determined on a case-by-case basis), and an assessment

of a number of methodologies and determining an overall ranking and an indication of which

criteria are critical to the result.
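The kind of weighted-sum ranking described above can be sketched as follows. The criteria, weights, and ratings here are invented for illustration and are not taken from O'Malley and DeLoach's actual comparison:

```python
def rank(methodologies, weights):
    """Order methodologies by weighted-sum score, best first."""
    scored = {
        name: sum(weights[criterion] * rating
                  for criterion, rating in ratings.items())
        for name, ratings in methodologies.items()
    }
    return sorted(scored.items(), key=lambda item: item[1], reverse=True)

# Invented weights (importance, determined case by case) and 1-5 ratings.
weights = {"agent concepts": 3, "tool support": 2, "maturity": 1}
methodologies = {
    "MaSE":  {"agent concepts": 4, "tool support": 5, "maturity": 3},
    "Booch": {"agent concepts": 2, "tool support": 4, "maturity": 5},
}
ranking = rank(methodologies, weights)
print(ranking)  # [('MaSE', 25), ('Booch', 19)]
```

Rerunning the ranking with perturbed weights would indicate which criteria are critical to the overall result.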

Chapter 3

Proposed Evaluation Approach

The previous chapter provided some insight into the problem which this research attempts to

tackle. We have briefly touched on the concepts related to agents and agent-based systems. We

also described the process, models, and techniques in five prominent agent-oriented method-

ologies. Furthermore, the methodology evaluation methods and frameworks available in the

literature were also reviewed. On this basis, in this chapter we present the methods, including

evaluation types and procedures, that we employed to assess the five selected agent-oriented

methodologies (Section 3.1). We also describe in detail the evaluation framework on which our

assessment is based (Section 3.2).

3.1 The Evaluation Methods

Before proceeding with assessment, one needs to decide what evaluation methods should be

used. This section answers this question by first sketching the purpose of our evaluation of

agent-oriented methodologies (Section 3.1.1). After that we describe the evaluation types and

techniques that we used in this research (Section 3.1.2).



3.1.1 The Purpose of the Evaluation

There seems to be an agreement among the evaluation methods community [3, 50, 62, 75] that

the first and very important step of any evaluation is to identify its purpose. Law emphasised

that “no rational comparison is possible without defining the purpose of the exercise” [75, page

13]. Depending on the purpose, the method of carrying out an evaluation and the results

may vary significantly. For example, an organisation conducting an evaluation with the aim of

adopting a new methodology for its existing development process may require a formal but costly

evaluation procedure. This is due to the importance of the evaluation results with respect to

the organization’s success. The selected methodology must be best suited to the organisation’s

needs and must require no significant changes to its current development practices.

The purpose of our evaluation of several prominent agent-oriented methodologies is different.

Since the development of AOSE methodologies is still at an early stage, practical evaluation purposes, such as choosing an AOSE methodology to adopt into the current software development process of an industrial organisation, seem inapplicable. Rather, our aims are to:

• better understand the nature of AOSE methodologies, including their philosophies, objec-

tives, features, etc.

• identify their strengths and weaknesses as well as the commonalities and differences in

order to perform classifications and to improve future agent-oriented software system de-

velopment.

Furthermore, we emphasise that we are not searching, in isolation, for a single “best methodology”. We do not believe that AOSE methodologies are mutually exclusive. In fact, different methodologies may be appropriate to different situations, thus a

methodology should be selected on the basis of considering different issues. These influencing

factors can be the context of the problem being addressed, the domain, and the organisation

and its culture. However, we also expect that the evaluation would help in practical choices such

as identifying the domains of applicability of each evaluated methodology.


3.1.2 The Evaluation Type and Procedure

There are several factors, as proposed in DESMET [62], that may affect the decision of choosing

an appropriate type of evaluation and procedure to carry out the evaluation. These are the

available time, the level of confidence we need to obtain in the results of the evaluation, and the

cost of the evaluation.

Taking into account the main purpose of our evaluation, we examine each factor below.

• Evaluation timescales: Our research takes several months. Therefore, a small Feature

Analysis survey and Case Study are likely to be candidates. In contrast, a quantitative

Case Study may take more than the four months which are available to us.

• Confidence in results: As stated in our purpose, we are not aiming at obtaining very highly

reliable results which are required in the context of industry. Rather, results ranging from

medium to high confidence are suitable. Thus, all three forms of Feature Analysis (Case

Study, Survey and Formal Experiment) plus Quantitative Survey and Case Study are

candidates.

• Costs of an evaluation: Different from DESMET, we interpret evaluation cost as the

number of students who are needed as software developers if a case study is carried out.

We had five students who volunteered to do their summer project. Therefore, a Feature

Analysis case study and survey seem to be most suitable.

Hence, the option open to us was a Feature Analysis incorporating a small survey and case study. In addition, to understand the similarities and differences between the methodologies, we performed a Structural Analysis.

As suggested in [75], a multi-stage selection is needed if the evaluators face a wide range of

methodologies. We were in a similar situation since, as discussed in section 2.3, there are more

than fifteen agent-oriented methodologies in the literature. Since we did not have enough time

and resources to examine all of these methodologies in detail, it was necessary to reduce the number of apparently suitable methodologies to a few so that we could perform a sufficiently detailed evaluation. Hence, the multi-stage approach to evaluation seemed appropriate.


The main evaluation procedures that we performed are described below.

First round qualification

In order to help perform a preliminary qualification round to select several methodologies to

evaluate, we defined several criteria upon which the selection could be based.

i Relevance: To some extent, the methodology must be widely regarded as an agent-oriented

methodology rather than, for example, an object-oriented methodology.

ii Documentation: The selected methodology needs to be described in sufficient detail. For

example, it needs to have been presented in books, journal papers, or detailed technical

reports rather than just a conference paper. In addition, it is also important that we are

able to access these descriptions.

iii Tool support : Methodologies that have supporting tools are preferred over those that do

not. Since the evaluation process involves the practical use of each selected methodology

to design PIPS, the availability of tool support is a practical advantage. It is also a good

indication of maturity and of the development effort that has gone into a methodology.

The decision of selecting the five methodologies to evaluate was based on the above criteria. Their

detailed descriptions are provided in Section 2.3 whilst the assessment for them is discussed in

Chapter 4.

Feature Analysis

As a first step, we built an evaluation framework which contains attributes, features and criteria.

These are based on the existing work in the comparison of OO methodologies. In addition,

there are features and attributes that are unique to agent-oriented methodologies. By including

a variety of issues that have been identified as important by a range of authors, we avoid biasing

the evaluation by including only issues that we consider important.

Having constructed the framework, we then carried out our evaluation for each methodology.

However, it is emphasised that our evaluation mainly focused on the technical features of the


methodologies. An organisation carrying out a full evaluation may take into account more

features, and assign scores depending on the effect of each feature in its environment or on a particular project.

Case Study

Over the summer (Dec. 2002 - Feb 2003), we had five students each develop designs for the

Personal Itinerary Planner System (PIPS, Section 2.1.2) using the five different methodologies

(Gaia, MaSE, MESSAGE, Prometheus, and Tropos). The aim of this small case study was to

investigate each methodology’s ability to solve a small problem which is a simplified extract

from a large real problem. Its main merit is to explore whether a methodology is in fact

understandable and usable. In addition, the evaluated methodologies were employed to design

the same application, and they were used independently by users with similar backgrounds, i.e. they were all second-year undergraduate students. For these two reasons, the case study is useful: the evaluation results can be based on a direct comparison between the selected methodologies. During the case study, the students were also asked

to take notes of the perceived advantages as well as limitations of the methodology they were

using.

Survey

Aside from our own evaluation, we had others (specifically the methodologies’ developers and

the five students) do an assessment of each methodology. Doing this helps prevent the comparison from being dominated by our own biases.

Specifically, a questionnaire which covers most of the issues addressed in the framework was

created. After that, for each methodology we asked the authors of the methodology to fill in a

questionnaire assessing the methodology. The five students who each used a selected methodol-

ogy were also asked to answer the questionnaire.

The aim of the questionnaire was not to provide a statistically significant sample in order to

carry out a scientific experiment. As previously discussed, this was not practical due to the


time, cost and resources constraints. Rather, the aim was to avoid any particular bias by having

a range of viewpoints.

Structural Analysis

A feature-based evaluation (including case study and survey) generally only addresses how much

support an agent-oriented software engineering (AOSE) methodology appears to provide for a

feature, i.e. what degree of support seems to be present. Our purpose in this research is also

to attempt to understand the common and different aspects of the five AOSE methodologies.

In other words, we tried to explore what models and processes the five methodologies share

and what the distinguishing aspects for each of them are. We believe this will contribute to the

development of “next generation” agent-oriented software engineering methodologies.

3.2 The Evaluation Framework

In this section, we describe a methodology evaluation framework within which the feature-based

comparison is conducted. The framework consists of a set of criteria which addresses not only

classical software engineering attributes but also properties which are uniquely found in AOSE.

In order to avoid using an inappropriate comparison framework, the properties in our framework

are derived from a survey of work on comparing AOSE methodologies and on comparing OOSE

methodologies. The evaluation framework covers four major aspects of each AOSE methodology:

Concepts, Modelling language, Process, and Pragmatics. This framework is adapted from various

frameworks proposed in [3, 24, 35] for comparing Object-Oriented methodologies. In addition to

the above four components, those frameworks consider other areas such as Support for Software

Engineering and Marketability. For our framework, we have decided to address “Support for

Software Engineering” criteria in various places in the above four major aspects. With regard to

“Marketability” issues, since all of our evaluated AOSE methodologies are still being researched

and developed, we do not believe that marketability criteria are applicable.

For each feature or attribute or criterion of the evaluation framework, a brief description is

provided together with several guidelines that help us in assessing a methodology against this

feature. In addition, there are two kinds of evaluation features. One indicates how much support


(Figure 3.1: Evaluation framework at a high-level view. The four aspects are Concepts, covering internal and social criteria; Modelling Language, covering usability and technical criteria; Process; and Pragmatics, covering technical and management criteria.)

a methodology appears to provide for a feature, i.e. what degree of support seems to be present.

For this type of evaluation criteria, we use a judgement scale ranging from 1 to 5 where 1

indicates a low level of support and 5 implies that the methodology provides a high level of

support. The other type of evaluation feature indicates what is supported by a methodology.

These criteria are marked with the text “Narrative” next to them.
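As an illustrative sketch of the two kinds of evaluation features (the criterion names and values are examples only, not part of the framework itself), a criterion is either rated on the judgement scale or described in free text:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Criterion:
    """An evaluation feature: either rated on the 1-5 scale or narrative."""
    name: str
    rating: Optional[int] = None      # 1 = low support, 5 = high support
    narrative: Optional[str] = None   # free text for "Narrative" criteria

    def __post_init__(self):
        if self.rating is not None and not 1 <= self.rating <= 5:
            raise ValueError("rating must lie on the 1-5 judgement scale")

autonomy = Criterion("Autonomy", rating=4)
modes = Criterion("Communication modes", narrative="direct, asynchronous")
```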

For each feature or criterion, we briefly describe its definition together with the relevant sources

in the literature where it is discussed. A brief discussion of why it is important is also provided if

necessary. Furthermore, we provide some preliminary guidance to identify the degree of support

of an agent-oriented methodology with respect to a particular feature. The guidance is phrased

as questions that refer to the methodology.

3.2.1 Concepts

As we discussed in section 2.1.1, the concepts related to agents distinguish an agent-based sys-

tem from other types of systems. Hence, one of the important facets of evaluating agent-oriented

methodologies is an examination of the methodologies' support for agent-based systems' characteristics such as autonomy, pro-activeness, reactivity, etc. In other words, it is necessary to

understand to what extent an AOSE methodology supports the development of agent-based

systems that possess these characteristics.

We divide the agents’ properties into two groups: internal features and cooperation features. The


former addresses the characteristics that are internal to agents, whereas the latter are concerned

with the cooperation process between agents. Each of them is described in detail below.

Internal properties

1. Autonomy : Agents can operate and decide which actions to take without the direct intervention of humans or other agents. In other words, agents control both their internal state and their own behaviour.

Guidance

• Does the methodology support describing an agent’s self-control features? For ex-

ample, functionalities and tasks being encapsulated within an agent may increase the

degree of autonomy.

• Does the methodology support modelling a decision-making mechanism of agents

regardless of the environment?

2. Mental attitudes: This feature relates to the strong agency definition of agents as discussed

in section 2.1.1. The three major elements of the Belief-Desire-Intention (BDI) architec-

ture of agents are an example of it. The BDI architecture defines an agent’s internal

architecture by its beliefs, the desires or goals it wants to achieve and its intentions or

plans to accomplish those goals.

Guidance

• Does the methodology support modelling mental attitudes of agents? For example,

– Beliefs: Do the agents modelled in the methodology store information about their environment (including domain-specific concepts), their internal states as

well as the actions they may take?

– Goals (desires): Does the methodology provide a goal modelling technique, cap-

turing the system’s goals and the agents’ goals?

– Intentions: Does the methodology provide representations that detail the plans which an agent uses to accomplish its goals or to respond to external events?
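As an informal illustration of these mental attitudes (this is a toy sketch, not the design notation of any of the evaluated methodologies; all names are invented), a BDI agent's internal state can be written as:

```python
class BDIAgent:
    """Toy BDI state: beliefs are stored facts, desires are goals, and
    intentions are plans adopted from a plan library to pursue goals."""
    def __init__(self):
        self.beliefs = {}       # facts about the environment and itself
        self.desires = set()    # goals the agent wants to achieve
        self.intentions = []    # plans currently adopted
        self.plan_library = {}  # goal -> plan (a list of step names)

    def deliberate(self):
        """Adopt a plan for each desire that the plan library covers."""
        for goal in sorted(self.desires):
            if goal in self.plan_library:
                self.intentions.append(self.plan_library[goal])

agent = BDIAgent()
agent.beliefs["location"] = "Melbourne"
agent.desires.add("plan_itinerary")
agent.plan_library["plan_itinerary"] = ["find_events", "book_tickets"]
agent.deliberate()
print(agent.intentions)  # [['find_events', 'book_tickets']]
```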

3. Pro-activeness: an agent's ability to pursue goals over time.


Guidance

• Does the methodology provide a goal modelling technique, capturing the system’s

goals and the agents’ goals?

• Does the methodology provide plan and/or task models which depict how goals are achieved by an agent, e.g. performing specific actions or interacting with other

agents, etc?

4. Reactivity : an agent’s ability to respond in a timely manner to changes in the environment.

Guidance

• Does the methodology provide mechanisms to represent changes in the environment,

e.g. events, incidents, etc.?

• Does the methodology provide mechanisms to specify and represent the agents’ re-

sponses to those changes in the environment?

5. Concurrency : an agent's ability to deal with multiple goals and/or events at the same time.

More specifically, agents are able to perform actions/tasks or interact with other agents

simultaneously.

Guidance

• Can tasks be modelled by the methodology in such a way that concurrency or parallelism can be expressed?

• Does the methodology provide models and techniques to capture the concurrent characteristics of a conversation between agents? In other words, can the communication between agents be described such that one agent is allowed to interact with more than one other agent at the same time?

• Does the methodology provide support for detecting and avoiding problems that

arise from concurrency, such as race conditions, deadlocks, and goal and/or resource conflicts?

6. Situatedness: agents are situated in an environment. They are able to perceive the envi-

ronment via their sensors and to initiate actions to affect it using their effectors.

Guidance


• Does the methodology support the modelling of the environment where agents are

working? For instance, events happening within the environment are captured as well

as the actions that the agents can perform in responding to these events.

• How well does the methodology address modelling the environment through, for ex-

ample, percepts and actions?

• What types of environment does the methodology support? According to Russell &

Norvig [97], the environment can be classified using five different aspects:

– Accessible vs. Inaccessible: percepts do not contain all relevant information about the world in an inaccessible environment.

– Deterministic vs. Nondeterministic: the current state of the world does not uniquely determine the next state in a nondeterministic environment.

– Episodic vs. Nonepisodic: in a nonepisodic environment, not only the current (or recent) percept is relevant to the situated agent.

– Static vs. Dynamic: dynamic means that the environment changes while the

agent is deliberating.

– Discrete vs. Continuous: there is an indefinite number of possible percepts/actions in a continuous environment.

Social features

In real-world organisations, people, especially those working on the same project, need to cooperate from time to time. Cooperation and teamwork have proven, in most cases, to be effective ways of tackling large projects and making use of distributed expertise. This social behaviour is also borrowed by the agent-oriented paradigm. A multiagent system consists of a number of agents that work together to fulfill the common objectives of the whole system. Working together means that the agents in the system must cooperate. As with humans, cooperation between agents accelerates the analysis and resolution of problems as well as increasing the quality of the solutions or products [101].

The importance of cooperation indicates the need for a methodology to provide general principles

and techniques to support it. For instance, it is necessary to assist the designers in building and


preserving cooperation between agents within the system. The following evaluation criteria

examine various issues relating to this dimension of agents.

1. Methods of cooperation: (Narrative) This criterion addresses the cooperation models (i.e.

how cooperation is to take place) supported by the methodology.

Guidance

• What cooperation modes are supported by the methodology? For example, there are different methods of cooperation [36, pages 78–82], as described below.

– Task delegation: Tasks are identified based on the process of decomposing overall

goals. There is an agent called “manager” or “facilitator” who is responsible for

selecting the agent who will perform each task (or group of tasks). Each agent needs to carry out the tasks that are assigned to it.

– Negotiation: Unlike task delegation, there are no centralised agents or mediators

to handle the agent cooperation. Instead, a society of self-interested agents uses

a negotiation process to reach agreements with respect to cooperation. For example, in a trading environment, a buyer agent and a seller agent often negotiate

(e.g. about prices, goods, etc.) with the aim of cooperating to achieve their own

goals.

– Multiagent planning: Plans are developed to achieve goals. These plans are

described in terms of tasks. Tasks are distributed to agents using the task delegation method.

– Teamwork: a group of agents working toward a common goal.
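The task delegation mode can be sketched as a manager assigning decomposed tasks to agents. The assignment policy (least-loaded first) and the task and agent names below are hypothetical, not taken from any of the evaluated methodologies:

```python
def delegate(tasks, workers):
    """Manager-style delegation: each task goes to the least-loaded worker."""
    assignments = {w: [] for w in workers}
    for task in tasks:
        # Pick the worker with the fewest assigned tasks so far.
        least_loaded = min(workers, key=lambda w: len(assignments[w]))
        assignments[least_loaded].append(task)
    return assignments

tasks = ["gather_requirements", "draft_itinerary", "check_prices"]
print(delegate(tasks, ["agent_a", "agent_b"]))
```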

2. Communication modes: (Narrative) Interactions between agents are mainly achieved via

communication. Communication is also the basis for social organisation in multiagent

systems, as it is for human beings. Without communication, an agent is isolated and unable to interact with other agents. This criterion addresses the question of what

communication modes are used by the methodology.

Guidance

• What communication modes are supported by the methodology? For example, there

are several types of communication [101]:


– Direct: communication can take place directly between two agents, e.g. exchang-

ing messages

– Indirect: communication between two agents is done via a third party. For

instance, if agent A wants to communicate with agent B, then A needs to send a

message to agent C, and C passes that message to B.

– Synchronous: the sending agent does not continue the conversation until the

message is received, e.g. making a phone call.

– Asynchronous: In contrast to the above communication type, the sending agent

can continue its activities immediately after sending a message, e.g. sending emails.
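The asynchronous mode can be sketched with per-agent mailboxes implemented as message queues; the agent names and message contents are invented for illustration:

```python
import queue

# One mailbox per agent; sending is just a put into the receiver's mailbox.
mailboxes = {"A": queue.Queue(), "B": queue.Queue()}

def send(sender, receiver, content):
    """Asynchronous send: returns immediately, without awaiting a reply."""
    mailboxes[receiver].put((sender, content))

send("A", "B", "request-quote")
send("A", "B", "request-availability")  # A did not wait for a reply first
print(mailboxes["B"].get())  # ('A', 'request-quote')
```

A synchronous send would instead block until the receiver acknowledged or replied to the message.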

3. Protocols: This criterion examines the level of support for defining the allowable conver-

sations in terms of valid sequences of messages exchanged between two agents [36, 101].

Therefore, it is useful if the methodology provides models and techniques to define the

specification of protocols that characterise agent conversations.

Guidance

• Does the methodology provide textual templates of the communicative act sequence?

• Does the methodology have a way of representing the protocols such as finite state

machines, AUML sequence diagrams, Petri Nets, etc?
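A finite state machine representation of a protocol can be sketched as a transition table over protocol states; the request-reply protocol, its states, and its message names below are invented for illustration:

```python
# States are protocol stages; each entry maps (state, message) -> next state.
PROTOCOL = {
    ("start", "request"): "awaiting_reply",
    ("awaiting_reply", "agree"): "done",
    ("awaiting_reply", "refuse"): "done",
}

def valid_conversation(messages, start="start", final="done"):
    """Check that a message sequence is a legal path through the protocol."""
    state = start
    for message in messages:
        if (state, message) not in PROTOCOL:
            return False
        state = PROTOCOL[(state, message)]
    return state == final

print(valid_conversation(["request", "agree"]))  # True
print(valid_conversation(["agree"]))             # False: reply precedes request
```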

4. Communication language: (Narrative) This concerns the language used for communication between agents.

Guidance

• What communication languages are supported by the methodology? For example,

there are two typical kinds of agent communication language [101]:

– Signals: The communication language is at a low level.

– Speech acts: The communication language is at a high level (knowledge level),

e.g. Knowledge Query and Manipulation Language (KQML), FIPA Agent Com-

munication Language (ACL)1, etc.

1The specification of FIPA ACL is available at http://www.fipa.org/repository/aclspecs.html


3.2.2 Modelling Language

Just as agent-oriented concepts are the basis for any AOSE methodology, so the modelling

language for representing designs in terms of those concepts is generally the core component of

any software engineering methodology. The modelling language, also called model or notation,

of a methodology provides the foundation of the methodology’s view of the world [3, page 452].

This view is generally an abstract representation of the important aspects of the system under

development. A good modelling notation effectively eases the complex tasks of requirements analysis, specification, and design. Therefore, measuring the quality

of the modelling language of an AOSE methodology plays an important part in our evaluation.

A typical modelling language consists of three main components [39]: symbols (either graphical

or textual representations of the concepts), syntax, and semantics. Together, they are used to fulfill

three major purposes of a modelling language of a software engineering methodology [3, page

452]. Firstly, it is a channel of communication, i.e. providing a means for software developers

to exchange their thoughts and ideas. Secondly, using a modelling representation one is able

to capture the essence of a problem or design in such a way that the translation or mapping

from it to another form (e.g. implementation) can be done without loss of detail. Thirdly, the

modelling language provides a presentation that gives users clear understanding of the problem.

Based on its constituent components and purposes, we categorise the criteria for assessing the modelling language of each methodology into two groups: usability and technical criteria. Usability

criteria reflect the first aim of a modelling language, i.e. providing the way for users to exchange

thoughts and ideas. On the other hand, technical criteria aim at the second and third purposes.

The evaluation criteria belonging to the two groups are elaborated in detail as follows.

Usability criteria

Usability criteria consist of various measures: clarity and understandability, ease of use, and adequacy and expressiveness.

1. Clarity and understandability : These two criteria are closely related to each other and

both of them are fundamental requirements of a modelling language. In fact, a method-


ology which provides clear notations tends to increase the users’ understandability of the

models [39].

Guidance

• Are symbols and syntax well-defined? Is there no overloading of notation elements? For

example, notation symbols are not similar to each other, and the most used concepts

have simple notation.

• Can the models be constructed at various levels of abstraction?

• Does the methodology support capturing different perspectives of the system (e.g. views of programmers, system analysts, managers, and users), which may increase the level of clarity and understandability of the models?

2. Adequacy and expressiveness: These two criteria are related to each other. Unlike the criteria above, they should be examined relative to the application's

purpose. They measure whether a modelling language represents all the necessary aspects

of a system in a clear and natural way. “Necessary” here also implies expressiveness: there should be no modelling constructs that increase complexity rather than promote clarity and understandability [39].

Guidance

• Is the notation capable of expressing models of both the static and the dynamic aspects of the system? Static models are those that represent relationships such as

aggregation, specialisation, structure of the system, and the knowledge encapsulated

within the system. Dynamic models describe the processing, agent interaction, stage

changes, timing, and data control flow within the system.

• Does the methodology allow the system under development to be modelled from

different views such as behavioural, functional and structural views [113]?

• Does the methodology provide mechanisms to express various aspects of the system

such as exceptional conditions, boundary conditions, error handling, initialisation,

fault-tolerance, performance, and resource constraints?

3. Ease of use: It is important for a modelling language not only to be understandable to

the users but also to be easy to use. The first step toward using a modelling language is


to learn the notation. Hence, it is desirable that the notation be easy to learn by both

expert and novice users [96]. In addition, the easier the users can remember the notation,

the quicker they are able to learn to use it. Therefore, the notation should be as simple

as possible.

Furthermore, since people usually sketch models by hand during the process of brain-

storming or reviewing designs, it is essential for the notation to be easy to draw and write by hand [96]. Finally, as mentioned earlier, one of the important purposes of a modelling

language is to convey information among the users. Often this is in the form of hard-

copy documentation for reading and discussing. Hence, it is important that the diagrams

produced are easy to read and comprehend when printed [96].

Guidance

• Does the notation contain symbols that are familiar to users and easy for them to

remember?

• Is the notation provided by the methodology easy to draw and write by hand?

• Are the diagrams produced using the methodology clear, e.g. containing no distracting or unnecessary marks, and making good use of space? Are important concepts

captured more prominently than minor ones?

Technical criteria

Technical criteria consist of five different evaluation considerations: unambiguity, consistency,

traceability, refinement, and reusability.

1. Unambiguity : Symbols and syntax are provided to users so that they can build a representation of a particular concept. Thus, the semantics or meaning of a concept is the users'

interpretation of the representation provided. However, this interpretation can be differ-

ent from observer to observer, which in turn results in misunderstandings. Therefore, it is

important to make sure that a constructed model can be interpreted unambiguously [39].

Guidance


• Are the semantics of the notation clearly defined? For instance, the mapping between

concepts and notation needs to be unambiguous. Common and uniform mapping rules

are employed. In addition, it is not recommended to have representations that are

more complex than the nature of the relationship that they are trying to describe.

• Does the methodology provide techniques for checking the models to make sure that

all ambiguities have been eliminated?

2. Consistency : Models should not contradict each other. This property becomes more im-

portant as the design evolves. More specifically, the representation of various aspects of a

system such as structure, function and behaviour should be consistent [113].

Guidance

• Does the methodology provide guidelines and techniques for consistency checking

both within and between models?

• Is the methodology supported by tools that provide model consistency checking?

• Is a data dictionary used to avoid naming clashes between entities?
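The data-dictionary idea can be sketched as a shared set of entity names against which each model is checked; the model contents and entity names here are invented examples:

```python
# Shared data dictionary of approved entity names.
data_dictionary = {"Traveller", "Itinerary", "BookingRequest"}

# Entity names appearing in two (hypothetical) models.
models = {
    "agent model":       {"Traveller", "Itinerary"},
    "interaction model": {"Traveler", "BookingRequest"},  # misspelt entity
}

# Flag any entity name not defined in the dictionary.
clashes = {
    (model, name)
    for model, names in models.items()
    for name in names
    if name not in data_dictionary
}
print(clashes)  # {('interaction model', 'Traveler')}
```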

3. Traceability : There are relationships between models and between models and the require-

ments of the target system. Traceability requires that it be easy for the designers and the readers of the design documents to understand and trace through the models.

This may increase the users’ understanding of the system. Tracing backwards and forwards

between models and stages also allow the users to verify that all the requirements of the

system are addressed during the analysis and design stages. Traceability also assist the

designer produce new models by referring to the models that have been previously con-

structed. A result of doing this may be increased productivity in the sense that information

gathered from one model can be used to construct others [113].

Guidance

• Is there a clear and easily recognisable path from early analysis to implementation

via different modelling activities [35]?

• Are naming conventions for entities used across models?

• Can design decisions be recorded in some form?

• Are there rules, either formal or informal, for transforming one model into other models? For instance, transforming abstract analysis constructs into more concrete design artifacts.
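The kind of backward and forward tracing described above can be illustrated with a small sketch of our own (not part of any surveyed methodology; the artifact names and link structure are hypothetical): refinement links are recorded between artifacts, and each requirement is checked to reach at least one downstream design artifact.

```python
# Hypothetical sketch: a refinement-link record used to check that every
# requirement is traceable to at least one design artifact.

links = {  # artifact -> artifacts it is refined into
    "R1: notify user": ["Goal: Notify"],
    "Goal: Notify":    ["Agent: Notifier"],
    "R2: log events":  [],
}

def trace(artifact):
    """Return all artifacts reachable from `artifact` via refinement links."""
    seen, stack = set(), [artifact]
    while stack:
        node = stack.pop()
        for child in links.get(node, []):
            if child not in seen:
                seen.add(child)
                stack.append(child)
    return seen

untraced = [r for r in links if r.startswith("R") and not trace(r)]
print(untraced)  # prints ['R2: log events']
```

Reversing the links gives backward tracing, from a design artifact to the requirements that motivated it.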

4. Refinement: Adding more detail to a model is called refinement. As discussed in [113], refinement is a way of developing a design since it allows the developers to develop and fine-tune design artifacts at different points in the development process. Hence, it is desirable that a methodology provide mechanisms to support refinement.

Guidance

• Is the modelling language integrated into the development process?

• Can a model be built incrementally? For instance, the designers can start from the most abstract level and proceed to subsequent levels of detail.

• Is there a seamless transformation from one level of abstraction to another without

causing the loss of semantics?

5. Reusability : support for the reuse of design components.

Guidance

• Does the methodology provide mechanisms to reuse existing components or to derive new components from existing ones?

• Does the methodology support the use of modularisation, generalisation, design patterns, or application frameworks?

3.2.3 Process

As discussed above, the modelling language is considered a crucial part of any software engineering methodology. However, in constructing a software system, software engineering also emphasizes the series of activities and steps performed as part of the software life cycle [35, 45, 113]. These activities and steps form the process, which assists system analysts, developers and managers in developing software.

According to [35], an ideal methodology should cover six stages, which are enterprise modelling,

domain analysis, requirements analysis, design, implementation and testing. Methodologies

which cover all aspects of the system development are more likely to be chosen because of

the consistency and completeness they provide. More specifically, in evaluating the “process”

component of an agent-oriented methodology, we consider the following criteria.

1. Development principles: This criterion addresses the lifecycle coverage in a broad view.

It examines the development stages and their corresponding deliverables described within

the methodology. In addition, it examines the supporting software engineering lifecycle

model and development perspectives [35].

Guidance

• What are the development stages supported by the methodology? For example, requirements analysis, architectural design, detailed design, implementation, testing/debugging, deployment, maintenance, etc.

• Which software engineering approach does the methodology support? E.g. sequential,

waterfall, iterative development, etc.

• Which development perspective is supported by the methodology? Is it a top-down or bottom-up approach, or a combination of both?

2. Process steps: Differing from the above criterion, this one measures the lifecycle coverage in more detail. In fact, an important aspect in evaluating whether a methodology covers a particular process step is the degree of detail provided. It is one thing to mention a step (e.g. "at this point the designer should do X") and another thing to provide a detailed description of how to perform X.

Since design is inherently about tradeoffs, detailed descriptions are usually expressed using heuristics and examples rather than algorithms. Thus, in assessing support for process steps we identify whether the step is mentioned, whether a process for performing the step is given, whether examples are provided, and whether heuristics are given [35, 45, 113].

Guidance

• Is a particular step mentioned? Is a process for performing the step given? Is there

any example provided to illustrate the use of the step? Is there any heuristic supplied?

• Does the methodology provide guidance for management decision making, such as when to move between phases, i.e. when a phase is complete and the next can begin?

3. Supporting development context: This criterion identifies the development context supported by the methodology. A development context specifies a set of constraints within which the software development has to take place.

Guidance

• Which development context is supported by the methodology? Below are a number of development contexts described in [35].

– "Greenfield" is the least constrained, in that development can be conducted regardless of existing software.

– “Prototyping” involves either performing prototyping as part of the software

development process or delivering the final product on the basis of successively

refining the prototypes.

– The "Reusing" development context also has two facets. On the one hand, the methodology expressly covers the inclusion of reused products into its process. On the other hand, reuse requires the methodology to provide process steps for producing reusable products.

– "Reengineering" is the most constrained. It regards software development as a process of improving legacy and existing systems with the purpose of making them more useful. This criterion is important, especially for AOSE, because, as emphasized in [53], one of the key pragmatic issues which determines whether the agent-oriented paradigm can become popular as a software engineering paradigm is the degree to which existing software can be integrated with agents. Regarding this development context, it is important for a methodology to provide techniques to manage legacy systems effectively, to understand their structure and design, as well as to revive them to achieve particular requirements of the organisation.

4. Estimating and quality assurance guidelines: These two criteria determine whether such guidelines are provided within the methodology's process. Estimating guidelines are important to task planning. Quality assurance guidelines provide the assessors with useful information in evaluating the merit of the delivered product.

Guidance

• Does the methodology provide guidelines for estimating the cost, schedule, number of agents, etc. of the software under development?

• Does the methodology provide quality assurance guidelines that describe how the quality of the software is to be assessed? Such qualities include reliability, performance, etc. Techniques to assess quality include document reading, inspections, and walkthroughs.

3.2.4 Pragmatics

In addition to issues relating to notation and process, the choice of an agent-oriented software

engineering (AOSE) methodology depends on the pragmatics of the methodology. This can be

assessed based on two aspects: management and technical issues.

Management Criteria

Management criteria consider the support that a methodology provides to management when adopting it. They include the cost involved in selecting the new methodology, its maturity, and its effects on the organisation's current business practices [35, 86, 113].

1. Cost: There are different types of cost associated with adopting the methodology, as described below.

Guidance

• What is the cost of acquiring the methodology and tool support? For example, the cost of reference material, software tools for project development, maintenance of software tools, etc.

• What is the cost of training to fully exploit the methodology? This cost depends on the expertise required to use the methodology, which relates to other criteria concerning the modelling language (number of models, clarity, usability, understandability) and the process (number of steps in each process).

• Who is the intended audience for the methodology? e.g. junior undergraduates,

senior undergraduates, graduate students, junior industrial programmers, experienced

industrial programmers, experts in agents, researchers, etc.

• How complex is the methodology? Is the methodology similar to familiar software

engineering methodologies such as UML and RUP?

2. Maturity: The maturity of a methodology is a factor that can play an important role in determining its quality. There are several ways to measure the maturity of a methodology, as described below.

Guidance

• What resources are available to support the methodology? For example, conference papers, journal papers, textbooks, tutorial notes, consulting services, training services, etc.

• Is the methodology supported by tools? For example, supporting tools include tools for building models, diagram editors, code generators, design consistency checkers, project management, rapid prototyping, reverse engineering, automatic testing, etc.

• What is the methodology's "experience", such as the history of the methodology's use? For instance, the number of applications that have been built using the methodology. In addition, methodologies used for industrial-strength applications should be preferred over ones that have only been used to develop small demonstration projects. Similarly, applications developed by people not associated with the creators of the methodology are more highly rated.

Technical criteria

Differing from management issues, technical criteria look at a methodology from another angle.

We use the following criteria, which are discussed in [86], to evaluate the technical aspect of

the methodology’s pragmatics. It is noted that the guidance for each criterion can be extracted

from its description.

• Domain applicability: This considers whether the methodology is targeted at a specific

type of software domain such as information systems, real time systems or component-based systems. With regard to this issue, a methodology that is applicable to a wide range of software domains tends to be preferred.

• Dynamic structure and scalability: This measures the methodology's support for designing systems that are scalable, meaning that the system should allow the incorporation of additional resources and software components with minimal user disruption. At one end, this is the introduction of new components into the system; at the other end, it is the introduction of new agents into the system (i.e. open systems).

• Distribution: This criterion measures the methodology's support for designing distributed systems. The methodology should provide mechanisms, including techniques and models, to describe the configuration of processing elements and the connections between them in the running system. It should show not only the physical layout of the different hardware components that compose a system, but also the distribution of executable programs on this hardware. More specifically, such models need to depict the deployment of agents over the network.

Chapter 4

Evaluation Results

In the previous chapter, we described the methods that we employed to carry out the evaluation and the attribute-based framework that we constructed, on which the evaluation was based. In this chapter, we present the results of the evaluation. The results of the survey are discussed in section 4.1. Section 4.2 covers the comments of the students who used the methodologies during the case study. The similarities and the distinguishing differences of the five agent-oriented methodologies which were identified as a result of the structural analysis are presented in sections 4.3 and 4.4 respectively. Finally, in section 4.5 we propose some suggestions towards the unification of the five methodologies based on the results we found in the evaluation.

4.1 Results of the Survey

On the basis of the evaluation framework proposed in section 3.2, we developed a questionnaire consisting of fifty-three questions. Six of them identify which agent-oriented methodology the respondent is assessing and his/her experience with it. These questions also enquire about the respondent's background (i.e. student, academic, or industry) and his/her experience with agents and with software engineering in general. The next fourteen questions address the concept component of the framework, whereas sixteen of the fifty-three questions cover the modelling language component. The remaining two components, process and pragmatics, are addressed by five and ten questions respectively in the questionnaire. These questions are derived based

on every criterion that is covered in the evaluation framework. We also have two questions at the end of the questionnaire which allow the respondent to suggest any missing criteria or any comments on the questionnaire overall. A copy of the full questionnaire can be found in Appendix A.

The intended recipients of the questionnaire were the authors of each AOSE methodology that we selected to assess. To our knowledge, Gaia, MaSE and Prometheus each have two authors, while Tropos and MESSAGE were developed by groups of three main authors. Hence, twelve questionnaires were distributed and seven were received. Overall, all the selected methodologies except Gaia had at least one representative among their authors.

In addition to obtaining evaluations from the authors of the methodologies, we also had a number of students who, over the summer, designed an agent application, each using a different methodology. Each student gave us feedback on their experience in understanding and using the methodology (discussed in more detail in section 4.2), and also completed the questionnaire at the end of their work. We also made our own assessment of each methodology on the basis of our understanding of it.

The results are discussed below. The assessments for criteria that can be rated on a judgement scale (see Section 3.2) are summarised in tables 4.1, 4.2, 4.3, and 4.4, whereas those for "narrative" criteria are discussed in detail in the text. It is also noted that the discussion of the results is structured in terms of the criteria presented in section 3.2.

4.1.1 Concepts

The results of the evaluation of the five methodologies with respect to their concepts (refer to Section 3.2 for the proposed criteria) are shown in Table 4.1. For each methodology there are several columns. The columns named A contain the responses of the authors of the methodology. The column named U contains the responses of the user (i.e. the student) of the methodology. The final column of each methodology, named W, shows our own assessment.

The assessment scale had six possible responses. "High" (H), "Medium" (M), "Low" (L) and "None" (N) responses indicate the level of support of the methodology for a particular concept. "Don't Know" (DK) means that the respondent is not aware of the methodology's support for

this particular concept, whereas "Not Applicable" (NA) implies that a particular concept is not relevant with respect to the assessed methodology.
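As a minimal illustration of how responses on this six-level scale can be tallied per criterion (the code and the sample responses below are ours, not part of the survey instrument):

```python
# Hypothetical sketch: tallying six-level judgement scale responses
# (H, M, L, N, DK, NA) for one criterion across several respondents.
from collections import Counter

SCALE = ["H", "M", "L", "N", "DK", "NA"]

def tally(responses):
    """Count each scale value, ignoring anything outside the scale."""
    counts = Counter(r for r in responses if r in SCALE)
    return {value: counts[value] for value in SCALE}

# e.g. four hypothetical ratings of one methodology's support for autonomy
print(tally(["H", "M", "DK", "H"]))
# prints {'H': 2, 'M': 1, 'L': 0, 'N': 0, 'DK': 1, 'NA': 0}
```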

                    MaSE         Prometheus   Tropos       MESSAGE   Gaia
Concepts &          A  A  U  W   A  A  U  W   A  A  U  W   A  U  W   U  W
Properties
Autonomy            H  M  DK H   H  NA H  H   H  M  M  H   M  M  M   M  H
Mental attitudes    L  M  H  M   H  M  H  M   H  H  H  M   L  N  L   M  L
Proactive           H  M  H  H   H  M  DK H   H  H  H  H   H  N  M   L  H
Reactive            M  M  M  M   H  M  DK M   H  L  DK M   M  N  M   M  M
Concurrency         H  M  H  H   M  L  DK L   H  M  H  L   M  N  L   M  L
Situated            M  L  H  L   H  H  H  H   H  H  H  M   H  L  M   L  L
Teamwork            H  M  H  N   N  L  NA N   H  H  M  N   H  M  N   H  N
Protocols           H  H  H  H   M  H  M  M   NA M  M  M   H  DK M   H  M
Clear concepts      H  H  H  H   H  H  L  H   H  H  M  H   H  M  H   H  H
Agent-oriented      H  H  H  H   H  H  H  H   H  H  H  H   H  M  H   L  M

Table 4.1: Comparing the methodologies' concepts. L for Low, M for Medium, H for High, DK for Don't Know, N for None, and NA for Not Applicable. The columns named A contain the responses of the developers of the methodology, the columns named U those of the students, and the columns named W our own assessment.

Internal properties

1. Autonomy: As mentioned earlier, autonomy is commonly regarded as one of the key properties of agents. It differentiates agents from other existing entities such as objects. According to the responses from the survey and our assessment, all five agent-oriented methodologies recognise that importance. The level of support for autonomy in all of them is overall "good" (ranging from medium to high). This is reflected by the fact that all five methodologies provide various kinds of support for describing an agent's self-control features. For instance, functionalities and tasks are encapsulated within an agent. In addition, plan diagrams in Tropos, concurrent task diagrams in MaSE, and plan descriptors in Prometheus allow the decision-making mechanism of agents to be modelled regardless of the environment and other entities. That mechanism is based upon the agents' goals and their roles within the system.

2. Mental attitudes: Prometheus and Tropos support well (medium to high) the use of mental

attitudes (such as beliefs, desires, intentions) in modelling agents' internals. The percept and action descriptors in Prometheus and the actor diagrams in Tropos represent the agent's knowledge of the world (i.e. beliefs). Goals and plans are also modelled in these two methodologies. In contrast, MaSE, MESSAGE and Gaia provide weaker support for capturing an agent's mental attitudes. MaSE and MESSAGE have goal diagrams but no representation of the agent's beliefs.

3. Pro-activeness and reactivity: Based on the results, it seems that these two attributes are difficult to measure, as we received highly varying responses. They seem to be fairly well supported by some of the five methodologies (medium-high for MaSE and Prometheus, mostly high for Tropos). Similarly to mental attitudes, this can be explained by the fact that in these three methodologies agents' goals are captured, as are the executions of plans (i.e. actions or tasks) to achieve those goals. In addition, Prometheus has action descriptors, which are a means of specifying agents' responses to environment changes in terms of external events.

4. Concurrency: In terms of support for concurrency, although the ratings are mostly medium-high and vary considerably, MaSE is probably best with its concurrent task diagrams and communication class diagrams. The former is used to specify how a single role can concurrently execute tasks that define its behaviour. The latter, also expressed in the form of a finite state machine, is able to define a coordination protocol between two agents. Detailed descriptions of the two diagrams are given in section 2.3.2. Regarding this criterion, Prometheus was rated as being one of the weakest, although we should note that the handling of protocols in Prometheus has been developed since the time of the questionnaire.
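To illustrate what a finite state machine of this kind expresses, here is a small sketch of our own (the states, messages and transitions are hypothetical, not an actual MaSE diagram): the machine accepts only the legitimate orderings of messages in a request/agree/refuse exchange between two agents.

```python
# Hypothetical sketch: a finite state machine of the kind a MaSE
# communication class diagram describes -- the legal message orderings
# of a simple request/agree/refuse coordination protocol.

TRANSITIONS = {
    ("waiting",  "request"): "deciding",
    ("deciding", "agree"):   "working",
    ("deciding", "refuse"):  "waiting",
    ("working",  "inform"):  "waiting",
}

def run(messages, state="waiting"):
    """Check whether a message sequence is legitimate under the protocol."""
    for msg in messages:
        if (state, msg) not in TRANSITIONS:
            return False, state
        state = TRANSITIONS[(state, msg)]
    return True, state

print(run(["request", "agree", "inform"]))  # prints (True, 'waiting')
print(run(["agree"]))                       # prints (False, 'waiting')
```

A tool such as agentTool can analyse diagrams of this kind to detect, for example, states from which no legal message can be sent.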

5. Situatedness: Even though the responses to this criterion with respect to the five methodologies range from medium to high, in our view only Prometheus provides clear support for modelling the "situatedness" of agents. Prometheus has an environment model which represents the environment in which the agents operate. The system specification phase in Prometheus allows the developers to capture the information that agents receive from the environment (i.e. percepts) as well as the actions that the agents initiate in response to these events. Even so, the environment model in Prometheus is limited to a clear input/output specification regarding the characteristic needs and requirements of the system.

A model of the environment that is used internally by the agents to represent their surroundings is not described in the methodology. Although more detailed environment models have been proposed [56], these have not yet been integrated into an industrial-strength methodology.

Social features

1. Method for cooperation: Four cooperation types were prompted in the questionnaire: negotiation, task delegation, multi-agent planning, and teamwork. The responses from both the Tropos and MaSE authors are that the agent cooperation which these two methodologies support is general and that any type of cooperation can be captured. However, what we are interested in, for this criterion, is the cooperation modes that are clearly supported by the methodology via provided techniques or models. Only negotiation (i.e. how to reach an agreement acceptable to all agents concerned) and task delegation are directly supported. Multi-agent planning and especially teamwork (see the next criterion) are not explicitly supported in any methodology.

2. Teamwork: Although the methodologies all support cooperating agents, none of them supports teams of agents in the specific sense of teamwork. As discussed in detail in [20], teamwork is the highest level of agent cooperation, where all agents in a team work together to achieve a common goal. Members of a team have joint intentions and activities. The behaviour of the team looks more like a "single agent with beliefs, goals and intentions of its own, over and above the individual ones" [20]. Therefore, even though the responses range from medium to high for most of the methodologies, we do not think that any of the five methodologies provides support for designing teamwork. A possible explanation is that the respondents misunderstood the question, and/or the question, including the definition of teamwork provided, was not clear.

3. Communication modes: According to the results of the survey, there was strong agreement regarding this feature. All of the methodologies provide a wide range of communication modes. More specifically, they support both direct/indirect and synchronous/asynchronous communication.

4. Protocols: Regarding this criterion, based on the respondents, including ourselves, MaSE is clearly the leader with its protocol analyser, i.e. the communication class diagram (refer to the description of MaSE in section 2.3.2). Gaia does provide a specific model to represent protocols, but it only captures the interactions between agents at a high level. The legitimate sequences of messages exchanged between two agents are not defined in the Gaia protocol model. The remaining three methodologies all currently have protocols, but these are not yet clearly defined other than "use AUML" and are not strongly integrated with the rest of the methodology.

5. Communication language: Similar to the communication modes feature, this one also attracted agreement among the respondents. Since interactions between agents take place at the knowledge level, all five agent-oriented methodologies target speech acts as the primary communication language. Tropos is even more flexible; according to one of its authors, the methodology does not use any particular communication language.

Overall, according to the results of the survey, the concepts used in the methodologies tend to

be clearly explained and understandable. Given the level of support for the above agent-related

concepts, all five methodologies were perceived as being clearly agent-oriented, as opposed to

being object-oriented.

4.1.2 Modelling Language

The results of the evaluation of the five methodologies with respect to their modelling language (refer to Section 3.2 for the criteria and Appendix A for the full version of the questionnaire) are shown in Table 4.2. As in Table 4.1, each methodology has a number of columns: columns named A contain the responses of the authors of the methodology, the column named U contains the responses of the user (i.e. the student) of the methodology, and the final column of each methodology, named W, shows our own assessment.

This section of the questionnaire consists of a number of statements addressing the evaluation criteria relating to the modelling language (see section 3.2 for more detail). For each statement, respondents were asked to indicate their agreement with the statement. For instance, in asking

if traceability exists between the models provided by a methodology, we stated that "The modelling language supports traceability, which is the ability to track dependencies between different models and between models and code". There are five levels of legitimate answers. "Strongly Agree" (SA) is the highest positive level, indicating that the respondent is very confident that the methodology supports traceability. An "Agree" (A) answer signifies a less confident answer, whereas "Neutral" (N) means the judgement does not have an adequate basis. Similarly, "Strongly Disagree" (SDA) and "Disagree" (DA) are the opposites of the "Strongly Agree" and "Agree" answers respectively (refer to the questionnaire in Appendix A for clearer examples). A detailed discussion of the evaluation follows.

                    MaSE             Prometheus       Tropos           MESSAGE      Gaia
Modelling &         A   A   U   W    A   A   U   W    A   A   U   W    A   U   W    U   W
Notation
Clear notation      A   A   A   A    SA  A   A   A    SA  A   N   A    A   A   A    N   A
Syntax + symbols    A   A   SA  A    SA  A   A   A    SA  N   A   A    A   A   A    A   A
  defined
Static + dynamic    SA  A   A   A    SA  A   A   SA   N   A   A   A    SA  DA  A    A   A
Adequate &          SA  N   N   A    A   A   A   A    SA  A   N   A    N   DA  N    DA  N
  expressive
Different views     N   N   A   N    A   A   SA  A    SA  A   N   A    A   A   SA   A   DA
Easy to use         SA  A   A   SA   A   N   A   A    SA  A   N   N    A   SA  A    A   A
Easy to learn       N   N   A   A    SA  N   SA  A    SA  N   A   A    A   A   A    SA  A
Semantics defined   A   SA  SA  SA   A   A   A   A    SA  A   A   A    N   A   A    DA  A
Consistency         SA  A   SA  SA   SA  A   A   A    A   DA  N   DA   N   –   N    DA  N
  checking
Traceability        A   SA  SA  SA   A   A   A   A    A   N   A   N    A   DA  N    DA  N
Refinement          SA  A   A   A    SA  SA  SA  A    SA  A   DA  A    A   A   A    N   N
Reusability         N   SA  A   DA   N   A   N   DA   A   DA  DA  N    N   –   DA   A   DA

Table 4.2: Comparing the methodologies' modelling languages. SA for Strongly Agree, A for Agree, N for Neutral, DA for Disagree, SDA for Strongly Disagree, "–" for no response. The columns named A contain the responses of the developers of the methodology, the columns named U those of the students, and the columns named W our own assessment.

CHAPTER 4. EVALUATION RESULTS 88

Usability criteria

1. Clarity and understandability: Two indicators of these criteria are how clear the notation is and how well the symbols and syntax are defined. As can be seen from Table 4.2, the respondents generally agreed that the methodologies' notations were clear and that the symbols and syntax are well defined. This indicates that the notations provided by all five methodologies are fairly clear and understandable.

2. Adequacy and expressiveness: The number of static and dynamic models is a good indicator of this criterion, as is the number of different views representing the target system. Both MaSE and Prometheus model the dynamic aspects of the system and handle protocols well. Tropos does not provide strong support for protocols, or for modelling the dynamics of the system, except for some support at the detailed design level. Although one respondent felt strongly that MaSE's notation was adequate and expressive, the other two respondents were neutral. MaSE also does not claim to support capturing different views or contexts of the target system (neutral from both creators). Gaia is also an interesting case: the user agreed that the methodology has models representing both static and dynamic aspects of the target system and that the system is viewed from different perspectives. However, the user did not agree that the notation of Gaia is adequate and expressive. Regarding static and dynamic models, we tend to agree with the user. We also agreed with the user that Gaia's modelling language is not adequate, since the detailed internal structure of agents is not captured. However, we do not believe that Gaia is a view-oriented methodology. Rather, the methodology represents the target system in terms of different models (refer to Section 2.3.1).

3. Ease of use: Overall, most of the respondents agreed that the notations of the five methodologies are easy to learn and use. This also relates to the agreement on the understandability and clarity of the notations discussed above. Tropos is, however, an exceptional case. One of its authors strongly agreed, and the other agreed, that the modelling language of the methodology is easy to use. In contrast, the user and we preferred to take a neutral view on this criterion. This is due to the fact that, unlike MaSE, Prometheus and MESSAGE, Tropos does not have tool support integrated with the methodology. As a result, users may find it difficult to draw diagrams, check model consistency, etc.

Technical criteria

1. Unambiguity: Semantics are also well defined by all the methodologies. For Gaia, the student felt that the semantics of Gaia's modelling language are not well defined. However, we tend to disagree: the meaning of symbols and models in Gaia is in fact defined in detail in [117]. Tropos was another interesting case: there was disagreement on whether the concepts were clear, and whether the notation was clear and easy to use; furthermore, there was disagreement on whether the syntax was defined, but, oddly, there was consensus that the semantics were defined. For the remaining three methodologies, there was agreement (strongly agree to agree) that the modelling language is unambiguous in the sense that the semantics of the notation are clearly defined.

2. Consistency: In terms of consistency checking, the level of support differs between methodologies. MaSE and Prometheus support it well, whereas MESSAGE, Tropos and Gaia do not appear to support it. These responses seem to relate to the availability of tool support integrated with the methodology. PDT (Prometheus) and agentTool (MaSE) provide strong support for model and design consistency checking. Meanwhile, the remaining three methodologies either have supporting tools that are limited to drawing diagrams (MESSAGE) or do not have any tool at all (Gaia and Tropos).

3. Traceability: As with consistency, MaSE and Prometheus appear to be the leaders in terms of supporting this feature. The respondents for these two methodologies, including us, agreed that there are clear links between the models they provide. For instance, goals, roles, agents, and tasks are all linked together. This strong connection improves the ability to track dependencies between different models. Such connections, as described in one of the papers related to MaSE [103], allow developers to (automatically or manually) derive design models (e.g. an agent's internal architecture) from analysis constructs.

4. Refinement: Refinement is generally well supported by all five methodologies (although there was disagreement from the student using Tropos). This reflects the fact that the modelling language of all five methodologies is integrated into their development process. The process in fact consists of iterative activities; developers are free to move between phases to add more detail to a constructed model. Another indication of the refinement

supported by the five methodologies is the seamless transformation from one level of abstraction (e.g. goals, roles) to another (e.g. agents, tasks) without causing significant loss

of semantics. This will be discussed in more detail in sections 4.3.4 and 4.3.5.

5. Reusability : According to the questionnaire’s result, the responders (authors and users)

tended to agree or be neutral with regard to this criterion. However, we disagree with

them since, from our perspective, none of the methodologies explicitly provide techniques,

guidelines or models to encourage the design of reusable components. Reuse of existing components is likewise not addressed in any methodology.
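The value of linked models for consistency and traceability checking, as provided by tools such as agentTool and PDT, can be illustrated with a minimal sketch. The model classes, names, and checks below are our own illustration, not the API of either tool:

```python
# Minimal sketch of cross-model consistency checking over linked
# design artifacts (goals -> roles -> agents). All names are
# illustrative; neither agentTool nor PDT exposes this API.
from dataclasses import dataclass, field

@dataclass
class DesignModel:
    goals: set = field(default_factory=set)
    # role -> goals it is responsible for
    roles: dict = field(default_factory=dict)
    # agent type -> roles it plays
    agents: dict = field(default_factory=dict)

def check_consistency(m: DesignModel) -> list:
    """Return human-readable problems found by traversing model links."""
    problems = []
    covered = {g for gs in m.roles.values() for g in gs}
    for g in sorted(m.goals - covered):
        problems.append(f"goal '{g}' is not assigned to any role")
    assigned = {r for rs in m.agents.values() for r in rs}
    for r in sorted(set(m.roles) - assigned):
        problems.append(f"role '{r}' is not played by any agent")
    for a, rs in m.agents.items():
        for r in sorted(rs):
            if r not in m.roles:
                problems.append(f"agent '{a}' plays undefined role '{r}'")
    return problems

model = DesignModel(
    goals={"plan itinerary", "book travel"},
    roles={"Planner": {"plan itinerary"}},
    agents={"PlannerAgent": {"Planner"}, "BookerAgent": {"Booker"}},
)
print(check_consistency(model))
```

Because the artifacts are explicitly linked, a tool can report dangling goals or undefined roles automatically; without such links, the same checks must be performed by hand across separate documents.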

Overall, all five methodologies have a good modelling language in terms of having clear and

understandable notation and being able to express different aspects (e.g. static, dynamic) of the

target system. They also have clear semantics, which reduces the ambiguity of the modelling

language. However, several features such as consistency checking and traceability are clearly

supported by only some of the five methodologies (MaSE and Prometheus), while the others (Gaia, MESSAGE, and Tropos) need improvement, such as integrating these features into

the supporting tool. All five methodologies can also be improved by having some support for

reusing or generating reusable components and models. These, if supported well, would increase the productivity of software development.

4.1.3 Process

The assessment results for the Process component of the five selected methodologies are summarised in Table 4.3 and part of Table 4.4 (for the estimating and quality assurance, and management decision-making guidelines criteria).

The evaluation results for the process steps and life-cycle coverage are shown in Table 4.3. The

values presented reflect the following:

• 0 indicates no mention is made

• 1 indicates that mention is made but no details are provided.

• 2 indicates that mention is made and at least one example is provided.


• 3 indicates that mention is made, examples are presented, and a process is defined.

• 3’ indicates that mention is made, examples are presented, plus heuristics are supplied.

• 4 indicates that mention is made, examples are presented, a process is defined, plus heuristics are supplied.

                       MaSE         Prometheus   Tropos       MESSAGE    Gaia
Process                A  A  U  W   A  A  U  W   A  A  U  W   A  U  W    U  W
Requirements analysis  4  4  4  4   4  4  4  4   3  3  3  3   4  3  4    4  3
Architectural design   4  4  4  4   4  4  4  4   3  3  3  3   3  3  3    4  4
Detailed design        4  4  4  4   4  4  4  4   3  3  3  3   1  0  1    0  1
Implementation         3' 3  1  3   4  1  0  1   2  3  4  2   1  0  1    0  0
Testing & Debugging    3  0  0  3   4  1  0  4   0  0  0  0   0  0  0    0  0
Deployment             2  3  4  2   0  0  0  0   0  0  0  0   0  0  0    0  0
Maintenance            0  3  0  0   0  0  0  0   0  0  0  0   0  0  0    0  0

Table 4.3: Comparing the methodologies' processes. 0 implies no mention is made of a particular stage; 1 implies mention is made but no details are provided; 2 indicates the stage is mentioned with examples; 3 means 2 plus a process is defined; 3' means 2 plus heuristics are supplied; 4 indicates a stage fully supported by examples, a defined process, and heuristics. The columns named A are the developers of the methodology, the columns U are the students, and the columns W are our own assessment.

1. Development principles: From the software development life-cycle point of view, all of the

methodologies mention the requirements, architectural design and detailed design to some

extent. Implementation is also discussed in all of them except Gaia. Testing/Debugging is

only included in Prometheus and MaSE. Even though, according to the responses, one of MaSE's authors was not aware of this and neither were its users, we acknowledge the existence of these phases in the two methodologies. In fact, an effort to automatically verify the interaction between agents is presented in [73]. Meanwhile, Prometheus' support is part of a research project [93] not yet integrated into tools for use by developers.

MaSE is the only methodology that describes the deployment of agents. This is part of MaSE's system design phase, where agents are instantiated and their locations are defined


(refer to section 2.3.2 for more detail). It should be noted that we perceive deployment in the agent context, where the support in MaSE, to some extent, fulfills the objective of a deployment phase. In software engineering generally, a deployment phase usually refers to the activities associated with ensuring that the software product is available for its end users [4], e.g. packaging the software, writing the documentation, etc. It is also noted that

none of the five methodologies address the maintenance phase.

Regarding the software engineering models, we agree with all the responses that the five

methodologies adhere to an iterative development process rather than a sequential or waterfall one. Developers are encouraged to freely move between development phases and

steps although there is a tendency in all of the five methodologies that specific activities

are described in sequence. In terms of the development perspectives, Prometheus and

MaSE support both top-down and bottom-up approaches, whereas Gaia, MESSAGE and

Tropos seem to suit a top-down approach.

2. Process steps: The process steps described in the requirements analysis and design phases

are also addressed well in most of the five methodologies. For instance, the students who

used the methodologies all said that the analysis stage of the methodology they had used

was well described and provided useful examples with heuristics. This helped them to

shift from object-oriented thinking to agent-oriented thinking. However, detailed design is not well documented in either MESSAGE or Gaia, for reasons such as limited resources (for

MESSAGE) and a preference for generality over specificity (for Gaia). Furthermore, a common shortcoming of all the methodologies is the lack of guidance on management decisions in performing the process steps, such as when to move to the next phase.

3. Supporting development context : As discussed in section 3.2, there are several main devel-

opment contexts, such as Greenfield, Prototyping, Reusing, Reengineering, etc. To keep the questionnaire as concise as possible, and thus attract more respondents, this criterion was not included in the questionnaire.

Nevertheless, in our view none of the five methodologies explicitly address the issues relating to the inclusion of prototyping in the process, or to producing reusable components.

In addition, taking into account that one of the key issues which determines whether the

agent-oriented paradigm can be popular is the degree to which existing software can be


integrated with agents, legacy system integration design is not addressed in any of the

methodologies. All of the methodologies in fact follow the "Greenfield" approach, in which software development is carried out irrespective of existing software.

4. Estimating and quality assurance guidelines: (see Table 4.4) Because of the immaturity

of agent-oriented methodologies, issues relating to cost estimation and quality assurance are not addressed in any of the five methodologies. They presumably rely on current software engineering practice for these matters.

Overall, all five methodologies cover requirements analysis, and architectural design. Some

of them (MaSE and Prometheus) go further than that with description of detailed design,

implementation and testing/debugging. Deployment is only addressed in MaSE. Top-down design and "Greenfield" development are the approaches employed by most of the five

methodologies. Finally, all of them need some improvement in terms of providing estimating

and/or software quality assurance guidelines.

4.1.4 Pragmatics

As we have mentioned earlier, the pragmatics of a methodology plays a very important role in

determining its applicability in industry as well as in academia.

The results of the evaluation of the five methodologies with respect to their pragmatics (refer to

Section 3.2 for the evaluation criteria) are shown in Table 4.4. Similar to the “Concept table”

shown in Table 4.1, each methodology has a number of columns. The columns named A contain the

responses of the authors of the methodology. Column U contains the responses of the user (i.e.

the student) of the methodology. The final column of each methodology (W) shows our own

assessment.

The questions are also in "statement" form, like those for the modelling language. Likewise, the

answers to those questions range from Strongly Agree to Strongly Disagree. In addition, there

are questions to obtain information such as the number of applications implemented using a

methodology, or whether the methodology was used by non-creators1. For more detail, refer to

1 These questions address narrative criteria in the evaluation framework described in section 3.2.


the full version of the questionnaire in Appendix A.

MaSE Prometheus Tropos MESSAGE Gaia
Pragmatics A A U W A A U W A A U W A U W U W
Quality guidelines N DA A DA A N N DA DA A DA N DA DA DA DA
Cost estimation DA SA N DA DA N DA DA N DA DA DA DA DA DA
Management decision DA SA DA SDA N SDA SA A DA DA N DA DA DA
Number apps 21+ 21+ 1-5 6-20 6-20 6-20 1-5 1-5 1-5 1-5 1-5 1-5 0 0
Real apps no no no no no no no no yes no no no
Used by non-creators yes yes yes yes yes yes yes yes yes no no yes yes no no
Domain specific no no no no no no no no yes no no no no yes no no no
Scalable A N N N N N N N N N DA N DA DA
Distributed SA SA SA SA SA A N A N A A N A A

Table 4.4: Comparing the methodologies' pragmatics. SA stands for Strongly Agree, A for Agree, N for Neutral, DA for Disagree, SDA for Strongly Disagree, and a blank for no response. The columns named A are the developers of the methodology, the columns U are the students, and the columns W are our own assessment.

Management criteria

1. Maturity : Regarding the availability of resources supporting the methodologies, most of

them are in the form of conference papers, journal papers or technical reports. None of the

methodologies are published as textbooks. The availability of tool support also varies.

MaSE and Prometheus are well supported with agentTool (MaSE) and JDE and PDT

(Prometheus). According to the authors of MaSE (based on the questionnaire’s responses),

agentTool can be used as a diagram editor, a design consistency checker, a code generator

and automatic tester. They also revealed that agentTool has been downloaded and used by

many people in academia as well as industry and government. For instance, despite some

minor issues, the use of agentTool really helped the student in drawing diagrams, checking

model consistency and especially semi-automatically transforming analysis constructs to

design models. The tools supporting Prometheus, PDT and JDE, also provide a similar


range of functionalities. PDT supports drawing diagrams, checking model and design consistency, and generating reports. JDE (JACK Development Environment) can be used as a design tool to build the structure of an agent system, in which the concepts provided by

JACK2 match the artifacts constructed in Prometheus’ detailed design phase.

According to the author and the student who used MESSAGE, the tool supporting the

methodology (MetaEdit+) had been used solely as a diagram editor and report generator.

Tropos has only weak tool support (a diagram editor), whereas there is no tool support for Gaia that we are aware of.3

Although we attempted to determine how much “real” use (as opposed to student projects,

demonstrators etc.) had been made of each methodology, it was not clear from the responses to what extent each methodology had been used, who had used the methodology,

and what it had been used for. Nevertheless, to our knowledge, MaSE was used to design

a team of autonomous, heterogeneous search and rescue robots [31]. Tropos was used to

develop a web-based broker of cultural information and services for the government of

Trentino, Italy [7] and an electronic system called Single Assessment Process to deliver an

integrated assessment of health and social care needs for older people [80]. Prometheus

was used to design an agent system to perform Holonic Manufacturing [91]. There also

has been one application developed using MESSAGE. That was the Universal Personal

Assistant for Travel, which provides personal travel assistance services such as travel arrangements. All of this work was done by people more or less involved with the developers of the methodologies.

2. Cost : Regarding the cost of acquiring the methodology and tool support, to our knowledge, all of the methodologies are free to access. Documentation materials are available online. Tool support for MaSE (agentTool) is freely available to the public. MetaEdit, the tool used to support MESSAGE, is in fact a commercial product.

In our view, the potential cost of training is not large because the methodologies are aimed

at different levels of expertise. According to the questionnaire’s result, there is a wide

range of intended audiences of Prometheus and MaSE, including inexperienced developers

2 JACK Intelligent Agents, owned by the Agent Oriented Software company (http://www.agent-software.com), is an agent programming language that can be used to develop agent-based systems.
3 Email correspondence with Michael Wooldridge, one of the authors of Gaia, on 18th December 2002.


such as junior undergraduates. We believe this because, as mentioned earlier, Prometheus

has been taught to, and used by, third-year undergraduate computer science students at

RMIT University. On the other hand, MESSAGE and Tropos are aimed at the level of

graduate students, experienced industrial programmers, experts in agents, or researchers.

For instance, the five second-year summer students had no prior experience of agents, but within two weeks they were able to understand and use the five methodologies

to successfully complete the design of PIPS.

In the survey, we attempted to measure how complex a methodology is to users, by using

UML (Unified Modelling Language) and RUP (Rational Unified Process) as a benchmark.

There seems to be an agreement among the respondents that the five methodologies are

about the same complexity as UML and RUP. However, it is not clear that there was a

consensus on the perceived complexity of UML+RUP, and so the answers to this question

did not allow any strong conclusions to be drawn.

Technical criteria

1. Domain applicability : The respondents tended to agree that there is no limitation to the

application domains where one of the five agent-oriented methodologies can be applied. These domains are suited to agent-based systems, which promise to deliver robust, reliable and autonomous software. Nonetheless, strictly speaking, there are several domain constraints

on the basis of models and techniques currently provided by the methodologies. For

instance, none of them may suit systems that allow the possibility of true conflict, e.g. where the global system's goals may conflict with those of the system components [55]. Designing such systems requires additional models over and above those that the five

methodologies currently offer. There are also several assumptions relating to the domain

applicability of the five methodologies, such as that the organisational structure of the system does not change at run-time, or that the abilities of agents and the services they provide are

unchanged at run-time. Open systems are another area not addressed by any of

the five methodologies. These systems have several special properties such as being able

to be ported with minimal changes across a wide range of systems, to interoperate with

other applications on local and remote systems, and to interact with users in such a way


that eases user portability [55].

2. Dynamic structure and scalability : Regarding this criterion, most of the respondents took a neutral position. From our perspective, this issue is not explicitly addressed in any of the methodologies. More specifically, they do not explain how to deal with the introduction

of new components or modules in an existing system. Furthermore, none of the method-

ologies, as we mentioned earlier, currently support design of open systems.

3. Distribution: Overall, all of the methodologies implicitly support distribution. This is

partially due to the nature of agent-based systems. Once developed, agents communicate with each other via a message-passing system. In other words, agents are not coupled until an interaction needs to occur. As a result, the agents do not necessarily reside on the

same systems. The results from the questionnaire also agreed with that view. Responses on

this criterion on average range from Neutral to Agree. MaSE is an exceptional case in that

the system design step of MaSE allows the developers to design and allocate agents over

the network. It is supported by the Deployment Model, a representation of agent types and

their locations on the network (refer to Section 2.3.2 for a detailed description). Therefore,

we tend to strongly agree with all the respondents of MaSE that the methodology provides

sufficient support for distribution.
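The loose coupling described above — agents interacting only through named messages, with no assumption of co-location — can be sketched as follows. This is a toy illustration of the general idea, not the runtime of any of the five methodologies, and all names are ours:

```python
# Toy message-passing sketch: agents are coupled only by the names of
# the agents they address, not by shared memory or co-location, so a
# router could just as easily deliver messages across a network.
import queue

class Router:
    def __init__(self):
        self.inboxes = {}

    def register(self, name):
        self.inboxes[name] = queue.Queue()

    def send(self, sender, recipient, content):
        # In a distributed deployment this call would serialise the
        # message onto the network; the agents themselves would not change.
        self.inboxes[recipient].put((sender, content))

    def receive(self, name):
        return self.inboxes[name].get_nowait()

router = Router()
for name in ("planner", "booker"):
    router.register(name)

router.send("planner", "booker", "book flight MEL->ROM")
sender, content = router.receive("booker")
print(sender, content)
```

Because neither agent holds a direct reference to the other, replacing the in-process router with a networked one (as in MaSE's Deployment Model, where agent instances are allocated to network locations) does not require changing the agents.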

4.2 Results of the Case Study

As we mentioned earlier, a small case study was conducted over last summer (December 2002

– February 2003). It involved five second-year undergraduate students who each used one of

the five agent-oriented methodologies to design the Personal Itinerary Planner System (PIPS). As emphasized when we sketched out the plan for the evaluation in section 3.1.2, the purpose of

this small experiment was not to provide statistically significant results. We did not in fact have

enough time and people to perform such formal experiments. Rather, the main value of this

case study was to examine the agent-oriented methodologies' ability to solve a small "real" problem, and thereby to gauge whether each methodology is understandable and usable.

For all five students, PIPS was their first experience of agent technology in general and of agent-oriented software engineering in particular. All of them had some experience in object-oriented


software engineering (SE) by doing an SE subject and/or building software applications.

At the end of the experiment, apart from answering the questionnaire, we asked them to review the methodology they used in terms of its strengths and weaknesses in their view. The reviews were given either in writing or in an interview. The original versions of the written reviews are

included in Appendix B. The students' comments range from very specific (e.g. this particular

concept is confusing or this notation is not explicitly defined) to more general ones. For instance,

a general comment was that the steps of a particular analysis process are well-defined. In

addition, they also commented on the usefulness of tool support (if it was available), as well as the difficulties that they encountered during the course of analysis and design. Furthermore, the comments include suggestions, such as that having a particular model at a given phase would be helpful, or that performing one step before another would be more useful.

These reviews and our commentary are summarised below.

4.2.1 Gaia

Gaia’s analysis phase includes constructing a role model and protocol diagrams. In order to

do so, Gaia suggests the analyst follow the three process steps: (1) make the prototypical roles

model, (2) make the protocol diagrams and (3) make the elaborated roles model. According

to the student, the stage is generally well-described in the related journal paper. However, the

methodology seems to lack support for helping the analysts identify roles in the system. The

student commented that more examples with some heuristics would be useful. The protocol

model is also descriptive and easy to learn.

Nonetheless, she found doing step 2 before step 3 quite difficult, because performing step 3 requires defining the liveness responsibilities of each role, which might in fact have helped her in developing the protocol diagrams (i.e. step 2). She also responded that the permissions

and safety responsibilities described in the methodology are not clear. In fact, she did not really

understand the concept of safety responsibilities. As a result, in the role model she created for

PIPS, this property of roles is not well specified. She also felt that defining roles' permissions at the analysis stage is quite early; performing this activity at some stage in the design phase would help her get the roles' permissions right. Regarding the notation


describing these role properties, liveness responsibilities are reasonably well described, whereas the notation for safety responsibilities is in fact not provided.

Regarding the design phase, the student responded that, even though there are some very general heuristics for helping designers map roles to agent types, she found some difficulty in performing this step. More detailed and specific guidelines would be helpful. Furthermore,

during the process of constructing the Service Model, she was in fact able to define the permissions and the safety responsibilities of each role more properly and clearly. There is, however, a limitation in the Service Model in that the definitions of the notations used for the pre-conditions and post-conditions are not given. The final Gaia design artifact, the Agent Acquaintance

Model, tends to be the easiest one to build, according to her.

Overall, she tended to think that the models in Gaia are fairly well described. Nonetheless, there are several notations which need to be explicitly defined. The analysis process required

many iterations, which raised issues such as model consistency, links between models, and keeping track of changes made. Those problems were exacerbated by the lack of any tool support for Gaia. Additionally, she felt that Gaia is quite general and lacks detailed design support. As a result, she might find it difficult to move from the design stage to the implementation phase based upon the artifacts she had created.

4.2.2 MaSE

The student who used MaSE gave some positive responses to the Analysis phase described in

the methodology. She found the three process steps (i.e. Capturing Goals, Applying Use Cases,

and Refining Roles) well-defined and easy to follow. She also found the notation used in MaSE

clear and easy to understand and to learn. The provided tool support (agentTool) was very

useful in helping her to build analysis and design models and to check the consistency among

them. However, there were some minor issues; for example, there was too much text on the arcs in concurrent task diagrams, which made them hard to read.

Of the four Design steps described in MaSE, only the "Constructing Conversations" step gave her some difficulty. According to the student, the other steps are well-defined and easy

to apply. Regarding the “Constructing Conversations” step, there is a paper [103] proposing the


Figure 4.1: Agent Class Diagram automatically generated by agentTool (extracted from the

PIPS design documentation produced by Yenty Frily)

use of semi-automatic transformation from analysis models to design models. The idea was also

implemented in agentTool. The student had used it but the generated models were not good

because of many repetitions in the messages sent. For instance, as can be seen in Figure 4.1, the

AccountWatcher Agent has four self conversations, which is unnecessary. Hence, she followed

the instructions described in another paper [32] which provided her with guidelines to do this

step manually.

There were also some comments on minor bugs found in agentTool. For instance, conversation

messages in the Deployment Diagram were always missing when the page was revisited. This

issue also occurred with the attribute types in the Agent Architecture tab.

4.2.3 MESSAGE

The student found the four deliverables of the MESSAGE project a useful source for understanding the methodology. In his impression, the analysis stage is well defined and documented. He also appreciated the view-oriented approach used in MESSAGE. In fact, he found it easy to analyse the system by focusing on different aspects of the system at


different times. He also found that the five different views are equally important and complete, i.e. together they provide a comprehensive view of the system and its environment. One of the

important issues in the view-oriented approach is to deal with the maintenance of consistency

among views. In this respect, tools supporting the methodology play an important role. However, his experience with the tool support recommended by MESSAGE (MetaEdit) was limited to drawing diagrams and generating report documentation. He did not use it to link the constructed models, so he could not benefit from the tool's automatic consistency checking. Instead, consistency checking across the five views was performed manually. This was feasible in this case since the design of PIPS is relatively simple.

The student also responded that the refinement-based approach which the MESSAGE analysis process follows is very useful. It guided him from the beginning, where he just had some rough

idea of the system to the end where components of the system are described in detail. He only

went to level 1 analysis, even though, as the methodology suggests, subsequent levels of analysis

may be needed.

Regarding the design stage, he found that it is not described in detail. He commented that

sufficiently detailed descriptions and guidelines, if they existed, would have helped him more during the design. It should be noted that MESSAGE deliberately describes the design stage at a high

and general level. Its purpose is to let the designers have the freedom to choose the design that

suits their specific agent platform.

Overall, the student had a good impression of the methodology. His previous experience of

using UML helped him in using and understanding the MESSAGE modelling language since it

extends UML to model agent-oriented concepts. He found that the concepts relating to agent-oriented software engineering are well described and easy to understand. Also, to some extent,

MESSAGE helped him think, analyse and design the system in an agent-oriented manner. To

him, the analysis stage is the strength of the methodology, whereas the requirements capture, design and implementation phases need to be described at a sufficient level of detail. In

addition, he commented that guidance on management decisions, such as when to move between phases, is not sufficiently provided. He also emphasised the fact that MESSAGE does not provide detailed design or implementation guidance, which makes implementation a very difficult task. Also,

according to him, a clearly elaborated and detailed description of the analysis process would make


the analysis easier for him.

4.2.4 Prometheus

Overall, the student who used Prometheus to perform analysis and design had a good impression

of the methodology. Even though there were some minor difficulties, he successfully completed

the design. At the time he started the project, the student was a novice to agent technology.

However, he had some experience in software engineering, especially with object-oriented approaches such as the Rational Unified Process and the Unified Modelling Language.

Despite the fact that goals are a new concept introduced in the agent-oriented paradigm, the

student found it easy to identify system goals. However, determining the external interface

(percepts, actions, data stored) was harder for him. In particular, he had some difficulties in

identifying the relationship between functionalities and actions as well as telling the difference

between actions and sending messages. These confusions may suggest that the differences between these concepts need to be made very clear. More examples and heuristics in those cases

are likely to be required.

The student also had some difficulties (even though he said they were minor) in understanding

and using several concepts introduced at the system specification phase. In particular, he found

the concepts of percept, incident, trigger, message, and event a bit confusing.

Likewise, capability and functionality were confusing. These difficulties may arise for several

reasons. Firstly, there were inconsistent descriptions of these concepts in different documentation

related to Prometheus.4 For example, the student responded that the textual descriptors which

Prometheus uses to specify details of modelling elements such as functionalities, agents, percepts,

etc. are described differently in some articles and lecture slides. In particular, he pointed out that in one of the articles on Prometheus, Percepts/Events/Messages are parts of the functionality descriptor, whereas in the provided lecture slides Percepts/Triggers are included.

Secondly, the student was new to the agent-oriented paradigm and its related concepts. Hence, understanding these new concepts and changing his object-oriented way of thinking to an agent-oriented one naturally faced some obstacles. Again, standard definitions together with examples

4 According to the author of Prometheus, the methodology has since been improved with respect to this issue.


and heuristics may be useful in helping new users to overcome these concept-related issues.

The student also suggested the need for having some sort of overview diagrams at the system

specification stage. He proposed moving the data coupling diagram to this stage and argued

that the availability of such diagrams would help in identifying actions, percepts, etc.

The student regarded the architectural design phase as one of the most important stages which

requires careful analysis of the system structure. The amount of effort and work that he put into this phase was higher than for the system specification stage. However, he found the

process steps described in this stage well-defined and easy to follow. In addition, he was satisfied

with the support of the Prometheus Design Tool (PDT) in drawing design diagrams, checking

consistency among design models and automatically generating reports. In fact, the student’s

project documentation was based on the design that was developed using PDT. Also regarding

tool support, the student experienced several difficulties and complications in developing inter-

action diagrams and protocols. They were in fact caused by the lack of support for drawing

these diagrams in PDT.

One of the key tasks in the architectural design stage is grouping functionalities into agent types.

Performing this task for the first time did not pose any significant issue. However, refining agent

types, including applying changes to the agent grouping, consumed more time than the student expected. This is explained by the fact that changes in grouping functionalities and

mapping to agent types may result in significant updates of various descriptors. This process

was considered by the student to be “tedious and error-prone”. Nonetheless, he appreciated the

role of the System Overview Diagram (SOD) in providing a useful abstract model of the system.

Hence, using the SOD during the refining process helped him to apply changes while still keeping

in mind the overall structure of the system.

The detailed design phase involves constructing the internals of individual agents in the systems.

The student felt that the key to this stage is to understand and become accustomed to the concepts

of the JACK Agent Language. He explained that he wanted to map design elements to the JACK

implementation platform. Therefore, he found the lab exercises included with JACK useful

for learning JACK basics.


4.2.5 Tropos

The student appreciated the usefulness of Tropos requirements analysis phase (including Early

Requirements and Late Requirements). The goal-driven requirements engineering techniques

described in this phase helped her in establishing an agent-oriented way of analysing and de-

signing the target system. In fact, the concepts described in this phase, such as actors, goals,

plans, etc, are very similar to those in agents. This early requirement analysis phase also involves

identifying goals of the target system and the resources, plans and tasks to achieve these goals.

It required the student to reason about the system at a higher level of abstraction compared

with the object-oriented methodologies.

However, the student also faced some difficulties at this stage. For instance, in some cases

she could not distinguish between goal means-end analysis and AND/OR decomposition. This

confusion resulted from the fact that Tropos usually considers AND/OR goal decomposition as

a special case of means-end analysis. Also, she did not know what level of detail she should go

into at this stage (e.g., to the level of plans, tasks, or goals). This is likely because of the lack of

support for management decisions (e.g. moving between phases) provided by Tropos.

The student found that Tropos' architectural design phase is similar to its counterpart in the object-

oriented methodology that she had used. Tropos divides this stage into three process steps:

(1) defining the overall architectural organization of the system, (2) identifying the capabilities,

and (3) defining agent types and mapping them to capabilities. The student regarded the last

step as the most important in this stage even though, according to her, there was not enough

detailed information available describing it. She faced several uncertainties such as whether to

include human agents’ capabilities (e.g. the travellers) in step 2. Also, in step 1 which requires

the designer to provide extended actor diagrams, she did not know which actors she should focus

on: the complex ones, the simple ones, or all of them. In addition, when performing step 3, she

found that the descriptions provided in several papers relating to Tropos are not sufficient. As

a result, she had some difficulties in grouping capabilities into agent types. In this case, some

heuristics and examples (e.g., based on functional cohesion, data coupling, or communication links) would

be very useful. Overall, she felt that she needed to use a lot of common sense when doing step

2 and step 3 of this phase.


Tropos’ detailed design phase involves constructing three diagram types: capability diagram,

plan diagram and interaction diagram. The student's feedback was that the Tropos papers do

not provide sufficiently detailed descriptions of this phase. This led to some confusion

when she tried to produce design artifacts such as agent interaction diagrams and protocols.

She also had some difficulties when drawing the capability diagrams, since the

examples showing these types of diagrams differ slightly between the papers related to Tropos.

In addition, since Tropos uses UML diagrams (activity and interaction) to represent the detailed

design models and it assumes that the designer has a prior knowledge of UML, there are some

notations used without explanation. Therefore, this made it harder for the student who did not

have prior experience of using UML. In this case, more examples and heuristics would have been

helpful.

4.3 Structural Analysis - The Commonalities

4.3.1 Capturing Goals

As discussed in Section 2.1.1, weak agency regards pro-activeness as one of the key characteristics

of agents. Agents are pro-active if they have goals that they pursue over time. This means that

actions initiated by agents are directed, more or less, towards the achievement of their goals. Strong agency

emphasises that goals are part of the mentalistic and intentional notions of agents. Hence, agents’

goals are arguably one of the most important concepts of agency.

In addition, goals are considered more stable than requirements and less likely to change [26].

For these reasons, “Capturing Goals” is regarded as one of the key steps in the agent-oriented

development process proposed by the five methodologies which we have evaluated. Indeed,

capturing goals is the first step of seven steps in MaSE, whereas identifying goals is part of the

system specification in Prometheus and an important modelling activity (Goal Modelling) in

Tropos. Identifying goals is also an important component in developing the Goal/Task view (one

of the five views) in MESSAGE. In Gaia, goals take the form of roles' responsibilities, which are

more concrete than goals in the other methodologies. Overall, for all of the five methodologies,

"Capturing Goals" is part of the requirements analysis and is used as a foundation for the


identification of agents (see section 4.3.3).

The mechanism of capturing goals basically includes identifying goals, structuring them and

representing them. Most of the methodologies take the requirements specification (assuming its

availability) as a basis for goal identification. They also tend to agree that system goals are

abstractions of requirements (functional and non-functional).

Structuring goals ranges from a simple AND/OR decomposition to more complex processes. For

instance, in MaSE goals are classified into four types: summary goal (an abstraction of several

goals), partitioned goal (achieved by accomplishing its sub-goals), combined goal (a sub-goal

that is a combination of two or more similar parent goals), and non-functional goal (derived

from non-functional requirements). In Tropos, in addition to the basic AND/OR decomposi-

tion, the goal decomposition process is also achieved by means-end analysis (identifying the plans,

resources, and softgoals which provide the means for fulfilling a goal) and contribution analysis

(identifying goals that contribute positively or negatively to the goal being analysed). Prometheus does mention

goal decomposition but does not go into detail. Except for Gaia and Prometheus, the other

three methodologies provide a graphical representation (goal hierarchy in MaSE, goal/task im-

plication diagram in MESSAGE and goal diagram in Tropos) depicting the relationship between

goals and sub-goals. Tropos goes further than that by including plans and resources needed to

accomplish these goals.
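As an illustration of the structuring mechanism shared by these methodologies, an AND/OR goal decomposition can be sketched as a small recursive data structure (a hypothetical Python sketch; none of the methodologies prescribes code like this, and the travel-planning goal names are invented for the example):

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Goal:
    """A node in a goal hierarchy with AND/OR decomposition."""
    name: str
    decomposition: str = "AND"   # "AND": all sub-goals needed; "OR": any one suffices
    subgoals: List["Goal"] = field(default_factory=list)
    achieved: bool = False       # only meaningful for leaf goals

    def is_satisfied(self) -> bool:
        if not self.subgoals:                 # leaf goal
            return self.achieved
        results = [g.is_satisfied() for g in self.subgoals]
        return all(results) if self.decomposition == "AND" else any(results)

# A goal decomposed AND-wise into two sub-goals, one of which is
# OR-decomposed (either booking channel satisfies it).
book = Goal("book transport", "OR",
            [Goal("book flight", achieved=True), Goal("book train")])
trip = Goal("plan trip", "AND",
            [book, Goal("reserve accommodation", achieved=True)])
print(trip.is_satisfied())  # True
```

The same structure can be rendered graphically as the goal hierarchy diagram of MaSE or the goal diagram of Tropos.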

4.3.2 The Role of Use Cases in Requirements Analysis

Use cases have been proven to be an effective technique in object-oriented methodologies in

eliciting requirements. Of the five AOSE methodologies, Prometheus, MaSE and MESSAGE

make use of this technique. The purpose of using it in all three methodologies is to help the an-

alysts identify the key communications/interactions between entities in the system. Prometheus

proposes a structured use case scenario template covering related information such as incoming

incident/percept/trigger, goals, message, events. MESSAGE suggests the use of UML-like use

cases. MaSE does not explicitly define a use case template, hence it is expected that analysts

use those from object-oriented design. Nonetheless, MaSE uses UML-like sequence diagrams as

a notation to express the communication paths between roles on the basis of the identified use


cases.
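To illustrate, a structured scenario in the spirit of the Prometheus template might be captured as a simple record (the field names here paraphrase the template for illustration and are not its exact fields; the booking scenario is invented):

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class ScenarioStep:
    kind: str     # e.g. "percept", "goal", "action", "message"
    name: str
    role: str     # the functionality/role handling this step

@dataclass
class UseCaseScenario:
    """A structured use-case scenario in the spirit of the Prometheus template."""
    name: str
    trigger: str                  # the incoming incident/percept that starts it
    steps: List[ScenarioStep] = field(default_factory=list)

scenario = UseCaseScenario(
    name="Book itinerary",
    trigger="travel request received",
    steps=[
        ScenarioStep("percept", "travel request", "Request handling"),
        ScenarioStep("goal", "find suitable itinerary", "Itinerary planning"),
        ScenarioStep("action", "present itinerary to traveller", "User interaction"),
    ],
)
print(len(scenario.steps))  # 3
```

Tagging each step with the role that handles it is what later allows the scenario to be read as a communication path between roles, as in MaSE's sequence diagrams.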

4.3.3 Roles/Capabilities/Functionalities

Similar to their counterparts (i.e. objects) in object-oriented systems, agents are the key entities

in agent-based systems. Therefore, one of the crucial requirements of all the AOSE methodolo-

gies is to assist the developers in identifying the agents constituting the system. The importance of

agent identification is amplified when the target system is a multi-agent system consisting of

many agents.

A common technique used in all five methodologies to deal with agent identification is to start

from entities that are “smaller” than agents. These entities are capabilities in Tropos, func-

tionalities in Prometheus, and roles in the other methodologies. Agents (also called agent types)

are formed by grouping these entities into “chunks”. There are different techniques and models

provided by some of the five methodologies to help the designers group or map these chunks

into agents. A detailed discussion relating to this issue is given in Section 4.4.4.

4.3.4 Social System - Static Structure and Dynamics

One of the key motivations of introducing the agent-oriented paradigm into software engineering

field resides in its attractiveness in designing and implementing complex systems. Therefore, it

is very important for an AOSE methodology to provide a useful abstraction for understanding

the overall structure of the system. Most of the five methodologies (all except Gaia) address

this issue fairly well.

In MaSE, the role-task-goal-conversation diagram (called the Role Model) shows a "chunk" overview

as well as goal assignments. It represents the relations between the key constructs in the system

such as: roles-tasks, roles-goals, tasks-conversation. Prometheus, on the other hand, has a data

coupling diagram which shows the relations between functionalities and data. More importantly,

it has a system overview diagram depicting the social structure of the system as a collection

of agents, messages between them, events, and shared data objects. Similarly, MESSAGE’s

organisation view at level 1 analysis depicts the relationship between agents, roles, organizations


and resources in the system and its environment. These relationships are described at the coarse-

grained level so that the social system structure is captured at the abstraction level above

the concrete implementation. Tropos also has extended diagrams describing the dependencies

between actors, goals, resources, and tasks in the system. Gaia is the only one of the five

methodologies which does not provide such an overall social system view.

While the importance of an overall view of the organisational system structure resides

in its ability to represent the static structure, it is also essential to capture the high-level

dynamics of the system. These are the interactions and communication taking place between

agents. They include descriptions of the mechanism by which agents coordinate their activities

or other complex social interactions such as competition, negotiation, and teamwork. MaSE,

MESSAGE, Prometheus, and Tropos describe interactions at two different levels of granularity.

They include a set of high level interactions and a more detailed representation in terms of inter-

action protocols. Gaia only provides interactions at the level of the former. MaSE, Prometheus,

and Tropos are similar in that they model high level interactions using sequence/interaction

diagrams borrowed from UML sequence diagrams. In addition, use cases are used to develop

such sequence/interaction diagrams. Nonetheless, there are differences: interaction diagrams in

Prometheus and Tropos show interactions between agents whereas sequence diagrams in MaSE

depict those between roles. Gaia and MESSAGE are also similar to MaSE in that interactions

are discovered and modelled at the “chunk” level (e.g. role) rather than agent-level. The inter-

action models in Gaia and MESSAGE are also similar. They do not use sequence/interaction

diagrams for this purpose. Rather, the model consists of purpose/motivation of the interactions,

the initiator and responder, inputs/trigger conditions, and outputs/information achieved.

When moving down to the detailed level of interaction protocols, Prometheus, MESSAGE and

Tropos all suggest the use of AUML interaction protocol diagrams. MaSE, on the other hand,

defines a coordination protocol using a form of finite state transition diagrams. Nonetheless,

the interaction models in all the methodologies define the legitimate sequences of messages

exchanged between agents.
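The idea of a protocol as a finite state transition diagram, as in MaSE's coordination protocols, can be sketched as follows (a generic request/agree/inform exchange used purely as an example; the state and message names are hypothetical, not MaSE's notation):

```python
# A minimal sketch of a coordination protocol as a finite state machine:
# states are protocol stages, transitions are the legal messages.
PROTOCOL = {
    ("start",          "request"):       "awaiting_reply",
    ("awaiting_reply", "agree"):         "working",
    ("awaiting_reply", "refuse"):        "done",
    ("working",        "inform-result"): "done",
}

def run_protocol(messages):
    """Return True if the message sequence is legal and reaches completion."""
    state = "start"
    for msg in messages:
        key = (state, msg)
        if key not in PROTOCOL:
            return False          # illegal message in this state
        state = PROTOCOL[key]
    return state == "done"

print(run_protocol(["request", "agree", "inform-result"]))  # True
print(run_protocol(["request", "inform-result"]))           # False
```

Whether expressed as an AUML protocol diagram or a state machine like this, the protocol serves the same purpose: ruling out illegal message orderings between agents.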


4.3.5 Individual Agent - Static Structure and Dynamics

Agents are mostly considered as concrete entities in the agent-oriented software engineering

methodologies. They are often the final products of the design and are mapped to code at the

implementation stage. Therefore, in our view having techniques, guidelines and heuristics for

analysing and designing the internal architecture of each agent in the system is as important

as defining the social system. As a result, the availability of a useful abstract model which

characterises the static structure and dynamic behaviour of the agents is very helpful.

Prometheus answers this question fairly well. It has an agent overview diagram which provides

the top level view of the agent internals. It depicts the capabilities (i.e. modules) together

with their interactions within an agent. Similarly, MaSE’s Architecture Diagram describes the

architectural components, the connectors between these components within an agent (i.e. inner

agent connectors) as well as connections with external resources or other agents (i.e. outer-

agent connectors). MESSAGE’s Agent View also focuses on the individual agents but only

shows an agent's goals, resources, tasks and behavioural rules. Tropos is also similar to MESSAGE

in this shortage of details, which tends to prevent developers from having an understanding of

the internals of each agent, especially at the implementation phase. Gaia does not have any

model describing the internal structure of the agent.

Nonetheless, Gaia does provide a model to capture the dynamics within an agent. In the Service

Model, the descriptions of the inputs, outputs, pre-conditions, and post-conditions for each

function of the agent are provided. However, the techniques for describing agent’s planning or

scheduling capabilities are not supplied. Tropos addresses this well; it has a Capability Diagram

that shows the capabilities of an agent. In addition, each plan in a capability is also depicted

using a UML activity diagram. MaSE also describes this micro-level of dynamics but only for

roles and at the level of tasks. MESSAGE and Prometheus seem not to have this type of model,

although Prometheus does have capability overview diagrams, which only show the relations

between plans, events and resources. Prometheus also provides the individual plan, event, and

data descriptors. These textual descriptors provide the details belonging to individual agents,

which are necessary to move into implementation.


4.3.6 Agent Acquaintance Model

Additionally, the dependency and interaction (i.e. communication paths) between agents are

represented in terms of agent acquaintance diagrams in MaSE, Prometheus and Gaia. These are

directed graphs which include rectangles (representing agents) and lines with arrows (depicting

the links/conversations/communications) between two agents (refer to Figure 2.9). They vary

in whether they depict cardinalities (Prometheus), and whether they indicate for each agent the

roles included (MaSE). MaSE also differs from Prometheus and Gaia in that it specifies

the names of the conversations, whereas the other two simply define the communication links

that exist between agents without defining the actual characteristics of the link. Nonetheless,

these links are described in finer detail later on in the design processes of MaSE and Prometheus.

Therefore, the purpose of having the Agent Acquaintance Model in the three methodologies is

to assist designers in identifying the coupling and communication bottlenecks among agents in

the systems.
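Since an acquaintance diagram is essentially a directed graph, the bottleneck-spotting purpose described above can be sketched as follows (the agent names are hypothetical, loosely modelled on the travel-planning domain):

```python
# A sketch of an agent acquaintance model as a directed graph:
# nodes are agent types, edges are communication links.
acquaintances = {
    "ItineraryPlanner": ["FlightFinder", "HotelFinder"],
    "FlightFinder":     ["ItineraryPlanner"],
    "HotelFinder":      ["ItineraryPlanner"],
    "Logger":           [],
}

def fan_in(graph):
    """Count incoming links per agent, a crude bottleneck indicator."""
    counts = {agent: 0 for agent in graph}
    for targets in graph.values():
        for target in targets:
            counts[target] += 1
    return counts

print(fan_in(acquaintances)["ItineraryPlanner"])  # 2: a candidate bottleneck
```

An agent with unusually high fan-in or fan-out is exactly the kind of coupling or communication bottleneck the acquaintance model is meant to expose.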

4.4 Structural Analysis - The Differences

4.4.1 Early Requirements in Tropos

As pointed out in Section 2.3.5, one of the most significant differences between Tropos and the

other AOSE methodologies resides in its strong focus on the early phase of requirements engi-

neering. The inclusion of this preliminary requirements acquisition step where the global model

for the system and its environment is elaborated was first proposed in requirements engineer-

ing [26, 119]. Yet it had not been integrated into any agent-oriented software development

methodology before Tropos [7]. There are two key motivations in integrating early requirements

into the agent-oriented software development [17]:

• Requirements analysis is arguably the most important phase of software development.

This is the phase where the developers need to take both technical issues and social and

organisational ones into the equation. Additionally, the requirements analysis stage tends

to introduce the largest number of errors, and these errors are often very costly [17, 106].


Therefore, there is a high demand for a more systematic requirements engineering

process which is able to capture the requirements of complex software such as multiagent

systems. Performing an early requirements analysis has several crucial advantages. The

system analysts are able to capture the requirements of the target system. More impor-

tantly, they are also able to acquire the motivations, intents and rationales behind the

system under development. It improves the analysis of system dependencies through a

better and uniform handling of both functional and non-functional requirements. The

latter is important since the systems' quality attributes, such as accuracy, usability,

performance, security, costs, reliability, etc. are more difficult to deal with toward the end

of the software development life cycle.

• The early requirements model usually involves high-level concepts such as goals, intentions,

agents, etc. As discussed in Section 2.1.1, these concepts are also used by agent-oriented

programming. Therefore, the integration of early requirements analysis into an agent-

oriented methodology results in the same concepts for both requirements analysis and

system design and implementation. Consequently, the mismatch between different devel-

opment phases (e.g. the requirements phase deals with actors, tasks, goals, resources, etc.

whereas the implementation concerns data structures, interfaces, modules, methods, etc.)

is significantly reduced. In addition, it can result in uniform and coherent techniques and

tools for developing software.

More specifically, Tropos adapts the i* goal-oriented requirements analysis approach proposed

by Eric Yu [119]. The Tropos early requirements phase involves examining an organisational

setting including the domain stakeholders, their respective goals, and their inter-dependencies.

This phase was described in more detail in Section 2.3.5.

4.4.2 Environmental Model in MESSAGE and Prometheus

An agent-based system is placed in an environment which it interacts with. In fact, as discussed

in Section 2.1.1, "situatedness" is commonly regarded as one of the key properties of agents.

Therefore, an agent system needs to have a representation or a model of the environment in

which it operates. Unfortunately, none of the methodologies answers this question well. MaSE


only mentions that interfacing with external or internal resources requires a separate role to

act as an interface from a resource to the rest of the system. Tropos represents resources

(physical or informational entities) but goes no further than that. This lack of a model

describing the environment, and of the interaction between it and the agents situated in it,

also occurs in Gaia.

Of the five methodologies, only Prometheus and MESSAGE address this issue and the way

they approach it is different. Prometheus views the environment from the system perspective

by providing an interface model which contains a list of percepts and actions possible in the

environment of the agents. This view is motivated by the fact that the system in general and

the agents in particular perceive the environment via a number of sensors. They also operate

on the environment using effectors. Therefore, it is necessary to have a clear input/output

specification regarding the characteristic needs and requirements of the system. The model

provided by Prometheus only captures this interface. A model of the environment that is

internally used by the agents to represent their environment and to reason about is not described

in the methodology.
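The interface model idea, an explicit list of the percepts and actions at the system boundary, can be sketched as follows (illustrative names only; Prometheus specifies these as textual descriptors rather than code):

```python
from dataclasses import dataclass

@dataclass
class Percept:
    """Something the system can perceive from its environment."""
    name: str
    source: str        # the sensor or external system producing it

@dataclass
class Action:
    """Something the system can do to its environment."""
    name: str
    effector: str      # the effector or external system affected

# An interface specification is then simply the two lists:
INTERFACE = {
    "percepts": [Percept("travel request", "web form"),
                 Percept("fare update", "airline feed")],
    "actions":  [Action("send itinerary", "email gateway"),
                 Action("book flight", "airline API")],
}
print(len(INTERFACE["percepts"]), len(INTERFACE["actions"]))  # 2 2
```

Listing the boundary explicitly is what gives the "clear input/output specification" referred to above; a model of the environment as the agents internally represent it would be a separate artifact.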

The system perspective of the environment, however, is only one side of the coin. The devel-

opers of the system may need to have a different perspective on the environment. In contrast

to Prometheus, MESSAGE addresses this need. Its Domain View represents domain specific

concepts and relations. For instance, in the Travel Agency domain, concepts such as Travel

Arrangements, Accommodation, Journeys, Planes, etc. and the relations between them are

represented in the Domain Model. Nonetheless, neither of the models described in Prometheus

and MESSAGE is sufficient. They do not deal explicitly with characteristics of the environment,

such as whether the environment is inaccessible, nondeterministic, dynamic, continuous, etc [97].

These characteristics of the environment, if they are sufficiently captured, may affect the design

decisions of individual agents (e.g. the mechanism of reasoning) as well as the system as a whole.

4.4.3 Deployment Model in MaSE

A deployment model is often used to describe the configuration of processing elements, and the

connections between them in the running system. In addition, it shows not only the physical


layout of the different hardware components that compose a system, but also the distribution

of executable programs on this hardware. The use of a deployment model for such purposes

is not new to information system development. It is one of the key models in UML [4]. Its

usefulness comes from three different sources. Firstly, developing a high-level deployment model pro-

vides a foundation for assessing the feasibility of implementing the system. Secondly, during the

process of constructing the deployment model, the designers may obtain a better understanding

of the geographical and operational complexity of the system. Last but not least, the use of a

deployment model also aims to give an estimate of various aspects such as cost.

Despite the above benefits of developing a deployment model, only MaSE integrates it into the

methodology's development process. MaSE's Deployment Diagrams, the realisation of the deploy-

ment model, depict the system on the basis of agent classes and their locations on the network.

In constructing the Deployment Diagram, the developers have a chance to fine-tune the system

at the design stage by considering agent-related factors such as the number of agents, their

types and locations, etc. against performance issues such as processing power, and network

bandwidth. MaSE also offers flexibility in system deployment design by allowing the design-

ers to develop different system configurations and/or to modify them. An example of MaSE's

Deployment Diagrams is shown in Figure 2.11 and is discussed in detail in Section 2.3.2.
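The essence of such a deployment configuration can be sketched as a mapping from hosts to the agents they run (hypothetical host and agent names; MaSE uses a graphical notation, not code):

```python
# A sketch of a MaSE-style deployment configuration: hosts mapped to the
# agent instances they run, so alternative configurations can be compared.
deployment_a = {
    "server-1": ["ItineraryPlanner", "HotelFinder"],
    "server-2": ["FlightFinder"],
    "kiosk-1":  ["UserInterface"],
}

def agents_per_host(config):
    """A crude load indicator for comparing candidate configurations."""
    return {host: len(agents) for host, agents in config.items()}

print(agents_per_host(deployment_a))
# {'server-1': 2, 'server-2': 1, 'kiosk-1': 1}
```

Comparing several such configurations against processing power and network bandwidth is precisely the fine-tuning opportunity described above.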

4.4.4 Data Coupling Diagram in Prometheus

As mentioned earlier in Section 4.3.3, most of the five agent-oriented methodologies share a

similar technique for identifying the agent types in the system: an iterative mechanism

of mapping goals to roles or functionalities, which are in turn assigned to agents.

In our view, the second part of this process is not easy and the decision may affect later design

activities. The choice of grouping roles or functionalities into one agent depends on several

factors and more importantly needs to be supported by a method that guides designers in

identifying issues. Our argument is also supported by various responses from the students who

used each of the five methodologies to design the Personal Itinerary Planner System (PIPS,

Section 2.1.2). They found it difficult to determine the agent types on the basis of existing roles

or functionalities or capabilities.


However, most of the methodologies do not address this critical issue well. Indeed, the docu-

mentation of MaSE [32] and Gaia [117] has only a paragraph discussing this issue. Tropos is also

similar, i.e. it provides an example but no techniques or heuristics. These limitations

pose several difficulties, as reported by the three students who used these methodologies (see

Section 4.2 for more detail).

Prometheus is the only methodology that provides clear techniques to deal with agent identifica-

tion. One of them is the use of agent acquaintance diagrams which we discussed in Section 4.3.6.

The other is the development of data coupling diagrams. These diagrams consist of the system

functionalities and the external resources in terms of identified data. More importantly, they

also capture the “use and being used” relationship between those functionalities and the data

in terms of directed links. An example of this diagram can be found in Figure 2.18 of Sec-

tion 2.3.4. On the basis of visually assessing the data coupling diagram plus provided guidelines

and techniques, the designers are able to group functionalities into agents. This is based on con-

siderations of both cohesion and coupling: one wants to reduce coupling and increase cohesion.

Grouping functionalities that write or read the same data into one agent tends to reduce coupling between

agents. In other words, functionalities that use and/or produce the same data tend to belong to

the same agent. See section 2.3.4 for a more detailed description of the use of the data coupling

diagram in Prometheus.
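The grouping heuristic just described, that functionalities using and/or producing the same data tend to belong to the same agent, can be sketched as a simple single-pass clustering over the data coupling relation (hypothetical functionalities and data stores; Prometheus itself relies on visual assessment and guidelines rather than an algorithm):

```python
# Functionalities mapped to the data they read or write.
coupling = {
    "Find flights":    {"FlightDB", "Itinerary"},
    "Find hotels":     {"HotelDB", "Itinerary"},
    "Build itinerary": {"Itinerary"},
    "Log activity":    {"AuditLog"},
}

def group_by_shared_data(coupling):
    """Union functionalities that touch at least one common data store.

    Single pass for brevity; a robust version would use union-find to
    handle functionalities that bridge two earlier groups.
    """
    groups = []
    for func, data in coupling.items():
        for group in groups:
            if group["data"] & data:          # shared data -> same agent
                group["funcs"].append(func)
                group["data"] |= data
                break
        else:
            groups.append({"funcs": [func], "data": set(data)})
    return [sorted(g["funcs"]) for g in groups]

print(group_by_shared_data(coupling))
```

Here the three itinerary-related functionalities end up in one candidate agent while the logging functionality stays separate, mirroring the low-coupling, high-cohesion outcome the diagram is meant to support.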

4.5 Towards a Mature and Complete AOSE Methodology

In the previous sections, we have looked at our evaluation analysis of the five selected agent-

oriented methodologies. They are currently, in our view, among the most prominent AOSE

methodologies. We have assessed their strengths and weaknesses based on a feature-based

evaluation including a survey (Section 4.1) and a case study (Section 4.2). In addition, their

similarities and distinguishing differences with respect to their techniques and models were also

examined (Section 4.3 and 4.4).

On that basis, in this section we provide several proposals towards their unification by combin-

ing their strong points as well as avoiding their limitations. We believe such effort is similar in


spirit to the one that gave birth to the Unified Modelling Language5. There has been an effort (appearing

in [57]) to assemble agent-oriented methodologies from features. However, that approach is

to build a core methodology and to integrate additional features into it, chosen from different

methodologies. The integration is performed on an application-by-application basis. Our app-

roach is different in that we endeavour to make some preliminary suggestions to form a unified

methodology based on the five selected methodologies. We believe these suggestions may

contribute a step towards developing the "next generation" of agent-oriented methodologies.

Such a methodology should support in sufficient depth all the features that we have identified

in Section 3.1. They are summarised in Table 4.5.

Concepts
  - Internal properties: autonomy, mental attitudes, pro-activeness, reactivity, concurrency, and situatedness.
  - Social properties: methods of cooperation, teamwork, communication modes, protocols, and communication language.

Modelling language
  - Usability criteria: clarity and understandability, adequacy and expressiveness, and ease of use.
  - Technical criteria: unambiguity, consistency, traceability, refinement, and reusability.

Process
  - Full life-cycle coverage; iterative development which allows both top-down and bottom-up design approaches.
  - Sufficiently detailed process steps with definitions, examples, heuristics, and management decision making; estimating and quality assurance guidelines are provided.
  - Support for various development contexts such as Greenfield, Reuse, Prototype and Reengineering.

Pragmatics
  - Management criteria: cost effectiveness and maturity.
  - Technical criteria: a wide range of domain applicability; support for the design of scalable and distributed applications.

Table 4.5: The basic features of a "next-generation" agent-oriented methodology

As can be seen in the “Process” feature, a relatively complete and mature AOSE methodology

5 The UML got its initial start from the combined efforts of Grady Booch, with his Booch method; James

Rumbaugh, with his Object Modeling Technique (OMT); and Ivar Jacobson, with his Object-Oriented Software

Engineering (OOSE) method.


should at least cover the requirements analysis, design, implementation and testing/debugging

phases6. The models and techniques used in these stages can be formed as a unification of those

used in the five existing agent-oriented methodologies that were examined in this research.

4.5.1 Requirements Analysis

It seems to us that there are four steps that are important in this stage:

1. Capturing Goals: As discussed in Section 4.3.1, identifying and structuring goals is an

important step in eliciting requirements in most of the five methodologies. Similar to the

early requirements phase in Tropos, we examine stakeholders' intentions and their depen-

dencies via the tasks and resources they use to achieve goals. Goals can be structured and

represented as a goal hierarchy diagram similar to the one used in MaSE.

2. Defining use case scenarios: The structure of use cases can be the one proposed in

Prometheus. Sequence diagrams as in MaSE can be used as a realisation of use cases

to show the communication path between roles in the system.

3. Building the environment model : The environment can be modelled from the perspectives

of both the system and the developers (refer to Section 4.4.2 for detailed discussion). For

the former, an Environment Interface Model describing possible percepts and actions in

the environment of the agents, based on the one in Prometheus, can be used. Re-

garding the latter, we can employ the Domain View in MESSAGE which captures specific

organisational concepts and relations. In addition, characterisation of the environment

and the target run-time environment needs to be examined in this step.7

4. Defining roles: The above steps seem to provide enough information for the system an-

alysts to determine a number of key roles or "chunks" of functionalities (see Section 4.3.3)

existing in the system. It seems to us that the Role Schemata that are mainly used in

Gaia and MESSAGE can be applied to formally define a role. Another alternative is the

functionality descriptor used in Prometheus.

6 Since deployment and maintenance phases are still new to agent-oriented approaches, we do not include them

here.

7 None of the five methodologies provides techniques or models to capture these issues.


The four steps above and their respective artifacts, in our view, provide a strong support for

eliciting requirements, identifying system goals, capturing the environment and defining key

roles in the system. They promote the developers’ understanding of the requirements and form

the inputs to the design stage.
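To illustrate the artifact produced in the first step, a goal hierarchy diagram such as the one used in MaSE could be represented as a simple tree of goals and subgoals. The goal names below are invented for illustration, loosely themed on the Personal Itinerary Planner System case study, and do not come from the actual case study designs:

```python
from dataclasses import dataclass, field

@dataclass
class Goal:
    """A node in a MaSE-style goal hierarchy diagram."""
    name: str
    subgoals: list["Goal"] = field(default_factory=list)

    def leaves(self) -> list["Goal"]:
        """Leaf goals are natural candidates for assignment to roles."""
        if not self.subgoals:
            return [self]
        return [leaf for g in self.subgoals for leaf in g.leaves()]

# Hypothetical goal hierarchy for an itinerary-planning system
root = Goal("Plan itinerary", [
    Goal("Gather requirements", [Goal("Capture user preferences")]),
    Goal("Produce schedule", [Goal("Find events"), Goal("Resolve clashes")]),
])
print([g.name for g in root.leaves()])
```

Walking the tree from the root down to the leaves mirrors how the goal hierarchy is refined during analysis and then used as input to role identification in step 4.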

4.5.2 Architecture Design

The unified methodology may describe the architectural design as the following two steps:

1. Building social system static structure: An important aspect of this step is to identify the

agent types existing in the system. Two techniques that are commonly used are data cou-

pling diagrams (Prometheus) and agent acquaintance models (Gaia, MaSE, Prometheus).

In addition, dependency relations describing dependencies between agent types for using

resources, performing tasks or achieving goals need to be modelled. Regarding this, the

System Overview Diagram in Prometheus can be applied.

2. Building social system dynamics: The social behaviours and interactions of the system

can be captured at a high level using the AUML interaction protocol diagrams8 (similar

to the ones employed in Prometheus, Tropos, and MESSAGE). At a lower level, in our

opinion, the Concurrent Task Diagrams of MaSE can be employed to model the concurrent

interaction protocols.
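At its core, an interaction protocol of the kind these diagrams describe is a definition of the allowable conversations in terms of valid message sequences (as the questionnaire in Appendix A also puts it). As a small sketch (the states and message names are invented, loosely modelled on a contract-net-style exchange, and are not taken from AUML or MaSE), a design tool could represent a protocol as allowed state transitions and check message sequences against it:

```python
# A protocol as allowed (state, message) -> next-state transitions.
# States and messages here are hypothetical, loosely based on a
# contract-net-style exchange between two agents.
PROTOCOL = {
    ("start", "cfp"): "proposing",
    ("proposing", "propose"): "deciding",
    ("proposing", "refuse"): "end",
    ("deciding", "accept"): "end",
    ("deciding", "reject"): "end",
}

def valid(messages):
    """Check whether a message sequence is a legal, complete conversation."""
    state = "start"
    for m in messages:
        key = (state, m)
        if key not in PROTOCOL:
            return False
        state = PROTOCOL[key]
    return state == "end"

print(valid(["cfp", "propose", "accept"]))  # legal run: True
print(valid(["cfp", "accept"]))             # skips the proposal: False
```

A concurrent-task-style model, as in MaSE, would refine each such transition into the internal processing an agent performs while participating in the conversation.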

4.5.3 Detailed Design

The detailed design should focus on constructing individual agents’ architecture in terms of

defining the components and the connections between them.

1. Internals of individual agents: In describing the top-level view of the agent internals, the

agent overview diagram used in Prometheus can be employed. For further detail, the

individual plan, event, and data descriptors (also in Prometheus) can be used. The service

model used in Gaia is also a good representation of the inputs, outputs, pre-conditions,

and post-conditions of each service (i.e. capability) provided by an agent.

8 A new version, AUML 2.0, is currently being developed based on UML 2.


2. Dynamics of individual agents: The dynamic behaviour of individual agents can be mod-

elled using capability diagrams and plan diagrams as employed in Tropos.

3. Deployment model : The deployment of agents within or across the network can be repre-

sented using a deployment model as in MaSE.
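To illustrate the kind of information a Gaia-style service model records for step 1 above, each service (i.e. capability) of an agent can be captured as a descriptor listing its inputs, outputs, pre-conditions, and post-conditions. The service and condition texts below are invented for illustration:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ServiceDescriptor:
    """One entry in a Gaia-style service model."""
    name: str
    inputs: tuple
    outputs: tuple
    pre_condition: str
    post_condition: str

# Hypothetical service of an itinerary-planning agent
book_event = ServiceDescriptor(
    name="book event",
    inputs=("event id", "user preferences"),
    outputs=("booking confirmation",),
    pre_condition="event has free places",
    post_condition="event appears in the user's itinerary",
)
print(book_event.name, "->", book_event.outputs)
```

Keeping descriptors in this structured form is what makes automated consistency checking between detailed-design models feasible.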

4.5.4 Implementation and Testing/Debugging

So far, none of the five methodologies has offered a sufficiently detailed process and techniques

to allow the developers to perform the implementation phase. In our view, there is

generally a close relationship between the detailed design and the implementation phase. As

such, products from the design and analysis phases can be employed and applied to implement

the system. That close relationship also allows implementation to be derived from a detailed

design either by automated code generation or by hand. In short, it promotes a

smooth transition between development phases.

Additionally, testing and debugging methods are essential during implementation, and they should

relate to the concepts used in analysis and design. MaSE and Prometheus have made some

recent efforts towards supporting the testing/debugging phase, although this support is not yet

fully integrated into either methodology. Nonetheless, behaviour verification (MaSE) [73] and interaction protocol

debugging (Prometheus) [93] can be employed under the unified methodology.

Chapter 5

Conclusion

As agent-oriented approaches represent an emerging paradigm in software engineering, there has

been a strong demand to apply the agent paradigm in large and complex industrial applications

and across different domains. In doing so, the availability of agent-oriented methodologies

that support the software engineers in developing agent-based systems is very important. In

recent years, there has been an increasing number of methodologies developed for agent-oriented

software engineering. However, none of them are mature and complete enough to fully support

the industrial needs for agent-based system development.

For all those reasons, it is useful to begin gathering together the work of various existing agent-

oriented methodologies with the aim of developing a future methodology that is mature and

complete. Thus, this research carries out an evaluation of five prominent agent-oriented method-

ologies to understand the relationship between them. In particular, our main purposes are: (1)

Assessing each methodology’s strengths, weaknesses, and domain of applicability, and (2) Iden-

tifying the similarities and differences among them in terms of techniques and models that are

necessary or useful in guiding the development of agent-based systems. Following this comparative

analysis, we proposed an initial unification scheme for five key methodologies.

With the aim of performing a systematic and comprehensive evaluation that fits our purposes,

we reviewed the literature sources that provide evaluation methods. In fact, such approaches

and techniques are numerous, ranging from simple and preliminary comparisons to formal and

scientific experiments. In addition, there are different evaluation types: qualitative approaches


CHAPTER 5. CONCLUSION 120

(otherwise known as Feature Analysis) assess a methodology in terms of the features provided

by the methodology, whereas quantitative techniques focus on the measurable effects in terms of

reducing production, rework, maintenance time or costs that are delivered by the methodology.

Both types of approaches, however, target the same goal: to evaluate methods of developing

software and examine how good they are.

Our resources did not allow us to produce such quantitative data to determine whether an

agent-oriented software engineering (AOSE) methodology is superior to others. Thus, the focus

was kept on the qualitative attributes of an AOSE methodology that support the process of

engineering. They are the concepts and characteristics that are specific to agents, the notation

supporting modelling activities, the development process, and the pragmatics of the methodol-

ogy itself. These attributes, features, and properties constitute the evaluation framework that

we have employed to examine the strengths and weaknesses of the five selected agent-oriented

methodologies. This evaluation framework includes a range of issues that have been identified as

important by a range of authors from software engineering, object-oriented methodologies evalu-

ation, and agent-oriented methodologies evaluation. By doing so, we aim to ensure that the framework

is unbiased and complete, i.e. covers all significant criteria.

In addition, to avoid the evaluation being affected by our biases, we also had others do the assessment

of each methodology. More specifically, for each methodology we collected inputs from its

representative authors in the form of a questionnaire assessing the methodology. Additionally,

a small case study was conducted in which we had five students each develop designs for the

same application (the Personal Itinerary Planner System) using different methodologies. We

collected comments from the students as they developed their application designs over summer

(Dec. 2002 - Feb. 2003) as well as asking them to fill in the questionnaire. The aim was not to

provide a statistically significant sample in order to carry out a scientific experiment, which was

not practical, considering our resources and time. Rather, the aim was to avoid any particular

bias by having a range of view points. In addition to the Feature Analysis, we addressed in depth

the second goal of this evaluation (i.e. identifying commonalities and distinguishing differences)

by performing a Structural Analysis.

Overall, all five methodologies provide a reasonable support for basic agent-oriented concepts

such as autonomy, mental attitudes, pro-activeness, and reactiveness. They are also all regarded


by their developers and the students as clearly agent-oriented. However, there are several char-

acteristics of agent-based systems that are not addressed, or not sufficiently addressed, in most of

the methodologies. For instance, none of the five methodologies provide explicit support for

designing teamwork in agents. In addition, the “situatedness” of agents is not addressed fully

enough to allow the environment in which the agents operate to be modelled in sufficient

detail.

The notation of the five methodologies is generally good. Most of them have a strong modelling

language in terms of satisfying various criteria such as clarity and understandability, adequacy

and expressiveness, ease of use, and unambiguity. However, there are several exceptions. Tropos

was not perceived as being easy to use whilst MESSAGE and Gaia were both ranked weakly

on adequacy and expressiveness. In addition, only Prometheus and MaSE provide techniques

and tools for maintaining the consistency and traceability between models. For the other three

methodologies, there is still more room for improvement with respect to these issues. It should also be

emphasised that none of the evaluated methodologies explicitly provide techniques, guidelines,

or models to encourage the design of reusable components or the reuse of existing components.

Regarding the process, only Prometheus and MaSE provide examples and heuristics to as-

sist developers from requirements gathering to detailed design. Neither MESSAGE nor Gaia

supports detailed design, and MESSAGE does not supply heuristics for architectural design.

Additionally, even though all phases from early requirements to implementation are mentioned

in Tropos with examples given, the methodology does not appear to provide heuristics for any

phase. Implementation, testing/debugging and maintenance are not well supported by

any methodology.

The five methodologies also share some common features. For instance, all of them, except Gaia,

provide mechanisms to identify and structure the system’s goals as well as agents’ goals. There

are also similarities in the models provided by the methodologies to represent the static structure

and dynamic behaviour of the social system as well as individual agents. Distinguishing features

also exist. For example, Tropos is unique among the five methodologies in having an early require-

ments phase. MaSE offers a deployment diagram to capture the agent types and their location

within a distributed network. Prometheus provides distinctive support for issues relating to agent

identification. Its data coupling diagram, which we regard as a very useful technique in terms


of mapping functionalities to agents, is unique within the five methodologies.

The successful use of the five methodologies in designing the Personal Itinerary Planner System

indicates that they are practical and usable. However, the maturity of the methodologies is still

a major concern. Regarding the availability of resources supporting the methodologies, most

of them are still in the form of conference papers, journal papers or tutorial notes. None of

the methodologies are published as text books. Only Prometheus and MaSE appear to provide

strong tool support, which is tightly integrated with the methodology’s development process.

Additionally, some important software engineering issues, which are much needed in the industry,

such as quality assurance, estimating guidelines, and support for management decisions, are not

supported by any of the methodologies.

5.1 Future Research

As agent-oriented methodologies continue to be developed, research will continue to aim at

determining which agent-oriented methodologies are best suited to support the

development of a particular project or system. Hence, there are various avenues for future work

in this area.

5.1.1 Evaluated Methodologies

Even though we selected five prominent methodologies to evaluate, they are not fully repre-

sentative of the existing agent-oriented methodologies. As summarised in Section 2.3, there has

been a significant number of AOSE methodologies, each of which has different specialised

features to support different aspects of its intended application domain. Thus, future work

may involve expanding the number of selected methodologies and using the current evaluation

framework to assess them. By doing so, we may improve the proposed unified scheme by includ-

ing supplementary models and techniques. For instance, there is a lack of strong support for

modelling the internals of agents with respect to the unified approach we proposed in Section 4.5.

Examining further agent-oriented methodologies will help in finding such support in terms of

tools, techniques and models.


5.1.2 Evaluation Criteria

The features, properties or attributes employed to form the evaluation framework were derived

from a wide range of literature sources, including software engineering, object-oriented methodol-

ogy comparison, and agent-oriented software engineering. They seem to be comprehensive with

respect to the current goals of this thesis. However, if we want to conduct a more application-

oriented evaluation, then further work is needed. A strong argument is that methodologies

may suit certain application areas better than others and any comparison or evaluation should

take this into account. Otherwise, it seems nearly impossible to claim one is better than the

other. There are several possible solutions to this obstacle. For instance, we may introduce more

criteria that reflect the application characteristics and the organisational contexts. In addition,

in assessing a methodology, each evaluation criterion could be associated with a priority or weight with

respect to the characteristics of the application that the methodology aims at. An ag-

gregation mechanism may then be applied to provide a relatively accurate answer regarding the

appropriateness of a methodology for the development of a particular project.
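Such a weighted aggregation could be as simple as a weighted sum over criterion scores. In the sketch below the criteria, scores, and weights are entirely hypothetical and serve only to illustrate the mechanism:

```python
# Hypothetical criterion scores (0-4 scale) for one methodology, and
# application-specific weights expressing how much each criterion matters.
scores = {
    "notation clarity": 3,
    "process guidance": 4,
    "tool support": 2,
}
weights = {
    "notation clarity": 0.2,
    "process guidance": 0.5,
    "tool support": 0.3,
}

def weighted_score(scores, weights):
    """Aggregate criterion scores into a single suitability figure."""
    return sum(weights[c] * s for c, s in scores.items())

print(weighted_score(scores, weights))  # 0.2*3 + 0.5*4 + 0.3*2 = 3.2
```

Computing this figure for each candidate methodology, with weights derived from the application profile, would yield the relative ranking discussed above.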

5.1.3 Quantitative Evaluation

The current evaluation, including the two possible future works previously proposed, still car-

ries the inherent subjectivity of feature-based analysis [62]. Potential risks relating to this type

of evaluation include the assessors’ biases or their mistaken assumptions or understanding of the

methodology. Thus, another possible area for future research resides in conducting formal quan-

titative experiments. More specifically, an experiment could be conducted by having different

agent-oriented methodologies used on a “real” software project. Hypotheses would involve mea-

surable effects (e.g. reducing production, rework, or maintenance time or costs) of using each

participating methodology. The assessment is then based on collecting data to determine whether

the expected results (i.e. hypotheses) are in fact delivered.

Appendix A

Questionnaire

Survey on Agent Oriented Methodologies1

Khanh Hoa Dam and Michael Winikoff

Introduction

The aim of this questionnaire is to assess an agent-oriented software engineering methodology

against a range of criteria. The criteria fall into a number of areas.

• Concepts/properties: The ideas that the methodology deals with, basically the ontology.

For example, for OO the concepts are objects, classes, inheritance, etc.

• Modelling: The models that are constructed and the notations used to express the models.

• Process: The phases and steps that are followed as part of the methodology.

• Pragmatics: A range of practical issues that are concerns when adopting a methodology.

These include the availability of training materials and courses, the existence and cost of

tools, etc.

Note: This questionnaire should take around 15-20 minutes to complete in full. Although we’d

prefer fully answered questionnaires, we have identified a number of questions that are slightly

1 An online version is made available at http://yallara.cs.rmit.edu.au/~kdam/Questionnaire/Questionnaire.html


APPENDIX A. QUESTIONNAIRE 125

less important and could be left unanswered. These are marked with (opt) after the question

number.

About Yourself

1. What methodology are you assessing in this form? Who are the creators of this methodology?

. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

2. What is your experience with the methodology you are assessing?

❒ I created it ❒ I’ve used it ❒ Know its details ❒ Somewhat familiar ❒ Other . . . . .

. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

3. If you’ve used the methodology, please tell us a bit about how you’ve used it: what was the

application domain, the scope of application (e.g. design only, design through to implementation)

and the size of the system.

. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

4. What is your background? ❒ Student ❒ Academic ❒ Industry ❒ Other . . . . . . . . . . .

5. Please outline briefly your experience in agents, software engineering etc. . . . . . . . . . . . . . . . . .

. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

6. Would you be willing for us to contact you via email if we have further questions?

❒ No ❒ Yes, my email address is: (please write clearly!) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .


Concepts & Properties

For each of the following properties/concepts indicate to what extent does the methodology

support the design of agents that have the property or that use the concept.

7. Autonomy: the ability of an agent to operate without supervision.

Level of support: ❒ None ❒ Low ❒ Medium ❒ High ❒ Don’t know ❒ Not Applicable

8. Mental attitudes: the use of mental attitudes in modelling agents’ internals (e.g. beliefs,

desires, intentions)

Level of support: ❒ None ❒ Low ❒ Medium ❒ High ❒ Don’t know ❒ Not Applicable

9. Proactiveness: an agent’s ability to pursue goals over time

Level of support: ❒ None ❒ Low ❒ Medium ❒ High ❒ Don’t know ❒ Not Applicable

10. Reactiveness: an agent’s ability to respond in a timely manner to changes in the environ-

ment

Level of support: ❒ None ❒ Low ❒ Medium ❒ High ❒ Don’t know ❒ Not Applicable

11. Concurrency: dealing with multiple goals and/or events at the one time

Level of support: ❒ None ❒ Low ❒ Medium ❒ High ❒ Don’t know ❒ Not Applicable

12. Teamwork and roles: a team is a group of agents working towards a common goal.

Level of support: ❒ None ❒ Low ❒ Medium ❒ High ❒ Don’t know ❒ Not Applicable

13. Which of the following cooperation models are used in the methodology’s interaction mod-

els? (tick all that apply)

❒ Negotiation (i.e. to reach an acceptable agreement for all the agents concerned)

❒ Task delegation (i.e. facilitators)

❒ Multiagent planning (i.e. development and execution of possible plans)

❒ Teamwork

❒ Other, please specify: . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

14. Protocols: a protocol is a definition of the allowable conversations in terms of the valid

sequences of messages.


Level of support: ❒ None ❒ Low ❒ Medium ❒ High ❒ Don’t know ❒ Not Applicable

15. Which of the following communication modes are supported by the methodology?

❒ Direct, e.g. sending message

❒ Indirect, e.g. via a third-party

❒ Synchronous, e.g. chatting

❒ Asynchronous, e.g. sending emails

❒ Other, please specify: . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

16. The communication language used by the agents is based on:

❒ Signals (i.e. low-level languages) ❒ Speech acts ❒ Other, please specify . . . . . . . . . . . . . . . .

. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

17. Situatedness: agents are situated in an environment. How well does the methodology

address modelling the environment through (e.g.) percepts and actions?

Level of support: ❒ None ❒ Low ❒ Medium ❒ High ❒ Don’t know ❒ Not Applicable

18. The agents that will be designed using the methodology will be situated in an environment.

What types of environment does the methodology support? (tick all that apply)

❒ Inaccessible (i.e. percept does not contain all relevant information about the world)2

❒ Nondeterministic (i.e. current state of the world does not uniquely determine the next)

❒ Nonepisodic (i.e. not only the current (or recent) percept is relevant)

❒ Dynamic (i.e. environment changes while the agent is deliberating)

❒ Continuous (i.e. infinite number of possible percepts/actions)

19 (opt). What agent features are supported by the methodology? (e.g. mobile agents, security,

open systems) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

2 S. Russell and P. Norvig. Artificial Intelligence: A Modern Approach. Prentice Hall, 1995 (page 46, Chapter 2).


20 (opt). What agent features are not supported by the methodology? . . . . . . . . . . . . . . . . . . . . . .

. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

21. The concepts used in the methodology are clearly explained and understandable

❒ Strongly Disagree ❒ Disagree ❒ Neutral ❒ Agree ❒ Strongly Agree

22. The concepts used in the methodology are overloaded with respect to standard practice (i.e.

the same term is used to denote different concepts)

❒ Strongly Disagree ❒ Disagree ❒ Neutral ❒ Agree ❒ Strongly Agree

23. The methodology is clearly agent-oriented, rather than, say, object-oriented

❒ Strongly Disagree ❒ Disagree ❒ Neutral ❒ Agree ❒ Strongly Agree

24 (opt). Please give a brief description of the kinds of agents that the methodology supports.

. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

Modelling & Notation

25. The notation is capable of expressing models of both static aspects of the system (e.g.

structure) and dynamic aspects (e.g. processing)

❒ Strongly Disagree ❒ Disagree ❒ Neutral ❒ Agree ❒ Strongly Agree

26. The symbols and syntax are well defined; it is clear which arrangements of symbols are valid

and which are invalid.

❒ Strongly Disagree ❒ Disagree ❒ Neutral ❒ Agree ❒ Strongly Agree

27. The semantics of the notation is clearly defined.

❒ Strongly Disagree ❒ Disagree ❒ Neutral ❒ Agree ❒ Strongly Agree

28. The notation provided by the modelling language is clear (e.g. unambiguous mapping of

concepts to symbols, uniform mapping rules, no overloading of notation elements, etc.)


❒ Strongly Disagree ❒ Disagree ❒ Neutral ❒ Agree ❒ Strongly Agree

29. The notation is easy to use (e.g. easy to write/draw and print, easy to learn and memorize,

comprehensible to both experts and novices, etc.)

❒ Strongly Disagree ❒ Disagree ❒ Neutral ❒ Agree ❒ Strongly Agree

30. Did you find the notation easy to learn?

❒ Very easy ❒ Fairly easy ❒ Not easy but not hard ❒ Hard ❒ Very hard ❒ Not Applicable

31. The modelling language supports capturing different views or contexts of the target system

❒ Strongly Disagree ❒ Disagree ❒ Neutral ❒ Agree ❒ Strongly Agree

32. The modelling language is adequate and expressive (e.g. no missing or redundant models,

necessary aspects of a system such as the structure of the system, the data flow and control flow

within the system, the interaction of the system with external systems can be represented in a

clear and natural manner, etc.)

❒ Strongly Disagree ❒ Disagree ❒ Neutral ❒ Agree ❒ Strongly Agree

33. The modelling language supports traceability, i.e. the ability to track dependencies

between different models and between models and code.

❒ Strongly Disagree ❒ Disagree ❒ Neutral ❒ Agree ❒ Strongly Agree

34. The modelling language provides different guidelines and techniques for consistency checking

both within and between models

❒ Strongly Disagree ❒ Disagree ❒ Neutral ❒ Agree ❒ Strongly Agree

35. The modelling language supports a refinement-based design approach

❒ Strongly Disagree ❒ Disagree ❒ Neutral ❒ Agree ❒ Strongly Agree

36. The notation supports modularity of design components

❒ Strongly Disagree ❒ Disagree ❒ Neutral ❒ Agree ❒ Strongly Agree

37. The methodology has a mechanism for supporting the reuse of design components

❒ Strongly Disagree ❒ Disagree ❒ Neutral ❒ Agree ❒ Strongly Agree

38. The modelling language is extensible

❒ Strongly Disagree ❒ Disagree ❒ Neutral ❒ Agree ❒ Strongly Agree


39. The modelling language supports hierarchical modelling and abstraction

❒ Strongly Disagree ❒ Disagree ❒ Neutral ❒ Agree ❒ Strongly Agree

40 (opt). Are there any other issues or limitations with the notation and models? . . . . . . . . . . .

. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

Process

41. This question addresses lifecycle coverage with regard to development stages and their corre-

sponding deliverables described within the methodology. In the following table each row relates

to a given activity, or phase, in the process. Please indicate by ticking whether the methodology

provides a clear definition of the activities of the phase (column 1), whether examples are given

to illustrate the activities and deliverables (column 2), and whether heuristics and guidelines for

carrying out the activities are given (column 3).

Stage                 | Process mentioned | Examples given | Heuristics given
Requirements          |                   |                |
Analysis              |                   |                |
Architectural Design  |                   |                |
Detailed Design       |                   |                |
Implementation        |                   |                |
Testing / Debugging   |                   |                |
Deployment            |                   |                |
Maintenance           |                   |                |
Other:                |                   |                |


42 (opt). The methodology addresses quality assurance

❒ Strongly Disagree ❒ Disagree ❒ Neutral ❒ Agree ❒ Strongly Agree

43 (opt). Estimating guidelines (e.g. cost, schedule, number of agents required, etc.) are well

presented

❒ Strongly Disagree ❒ Disagree ❒ Neutral ❒ Agree ❒ Strongly Agree

44 (opt). The methodology provides support for decision making by management (e.g. when

to move between phases)

❒ Strongly Disagree ❒ Disagree ❒ Neutral ❒ Agree ❒ Strongly Agree

45 (opt). Which development perspectives are supported by the methodology?

❒ Top-down approach ❒ Bottom-up approach ❒ Both ❒ Indeterminate

46 (opt). The degree of user involvement within the methodology, i.e. the extent to which the

methodology provides means to support and facilitate communication between designers and users, is:

❒ Weak (i.e. the user intervenes only at the beginning and at the end of the project)

❒ Medium (i.e. the user also intervenes in the middle but not in all the system development

phases)

❒ Strong (i.e. the presence of the user is spread throughout system development)

Pragmatics

47. Who is the intended audience for the methodology (tick all that apply):

❒ Junior undergraduates ❒ Senior undergraduates ❒ Graduate students ❒ Junior in-

dustrial programmers ❒ Experienced industrial programmers ❒ Experts in agents ❒

Researchers ❒ Other: . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

48. How complex is the methodology, compared to UML+RUP?

❒ A lot simpler ❒ Simpler ❒ About the same ❒ More complex ❒ A lot more complex

49. What resources are available to support the methodology? (tick all that apply)

❒ Conference papers ❒ Journal papers ❒ Text book(s) ❒ Tutorial Notes

❒ Consulting services ❒ Training services ❒ Other: . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .


50. What software tools are available to support the methodology? (tick all that apply)

❒ Diagram editor ❒ Code generator ❒ Design consistency checker ❒ Project management

❒ Rapid prototyping ❒ Reverse Engineering ❒ Automatic testing

❒ Others (please specify) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

51. How many applications do you know of that have been built using the methodology?

❒ none ❒ 1-5 applications ❒ 6-20 applications ❒ 21+ applications ❒ Don’t know

52. Were any of these applications real (as opposed to student projects, demonstrators etc.)?

❒ Yes ❒ No

53. Were any of these applications developed by people not associated with the creators of the

methodology?

❒ Yes ❒ No

54. Does the methodology target any specific type of software domain? ❒ No ❒ Yes, please

specify: . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

55 (opt). The methodology supports developing systems that allow the incorporation of additional resources and software components with minimal user disruption (i.e. scalability)

❒ Strongly Disagree ❒ Disagree ❒ Neutral ❒ Agree ❒ Strongly Agree

56 (opt). The methodology supports developing systems that can be deployed over a number

of machines distributed over the network

❒ Strongly Disagree ❒ Disagree ❒ Neutral ❒ Agree ❒ Strongly Agree

Other

57. Are there any criteria that we are missing? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .


. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

58 (opt). Any other comments you would like to add? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

Thank you very much for taking the time to complete this questionnaire.

Please hand the completed questionnaire back to Michael Winikoff.

Alternatively email your answers to [email protected] or [email protected]; or post

the questionnaire to

Michael Winikoff

School of Computer Science and Information Technology

RMIT University

GPO Box 2476V

Melbourne, 3001

AUSTRALIA

Appendix B

Students’ Reviews

This appendix contains the original reviews written by four students who each used one of the four methodologies (Gaia, MaSE, Prometheus, and Tropos). The comments of the student who used MESSAGE were collected during an interview and consequently are not included here.

1. Gaia

The analysis stage is quite well described in the journal papers. The user of the methodology is to follow these steps in an iterative manner:

• make the prototypical roles model

• make the protocol diagrams

• make the elaborated roles model

The prototypical roles model is fairly easy to make. However, there is no actual help in identifying the roles of the system. The analyst is simply supposed to view the system as an organisation and then try to identify the roles, probably based on the aims of the system.

The protocol diagrams for the interaction model are quite descriptive and easy to become familiar with. However, it seems quite hard to make protocol diagrams before having specified the liveness responsibilities of each role.



For the elaborated roles model, the permissions and safety responsibilities are quite ambiguous. Specifying the permissions at this stage seems a bit early; it is actually easier to do that correctly at the design stage. As for the safety responsibilities, I was not quite convinced about what they were referring to. The notations used for the liveness responsibilities are reasonably well described, but the notations used for the safety responsibilities are not actually mentioned.

The design stage comprises:

• the agent model

• the services model

• the acquaintance model

The agent model simply identifies the agents of the system from the roles. The Gaia methodology does not really mention how the agents are to be identified.

The services model uses the interaction diagram and the liveness responsibilities to define the services offered by each agent. The inputs, outputs, preconditions, and post-conditions of each agent are identified. It is at this stage that I was able to define the permissions and the safety responsibilities of each role more clearly and appropriately. However, one minor problem was that the notations used for the preconditions and post-conditions were not really defined.

Finally, the acquaintance model simply identifies lines of communication between the agents, and seems to be the easiest model.

Gaia generally defines the models in a clear way. The only confusion seems to be with some notations which are not clearly defined. The analysis process is also very iterative, and very often I came back to the same point and changed things over and over. It is also quite confusing and difficult to link the models and to follow what is happening, as most models are quite textual. Besides, there is no tool support. Another observation is that the final design is supposed to be detailed enough to be easily implemented. However, I am not very convinced about that, but it could be because I have never implemented a multi-agent system before.

I had a great experience working with the Gaia methodology though. It was something new, and I really enjoyed it!


2. MaSE

The Analysis Phase in MaSE is well defined, and the use of agentTool is really helpful, especially with the extra information found in the research papers. However, I personally think there is a small gap to fill between the System Requirements and the Analysis Phase. System requirements gathering concentrates on required functionality, while the Analysis Phase concentrates on gathering the goals that will lead to roles. As a result, when I reached Step 2 (Applying Use Cases), I felt the need to involve some use cases which are not directly linked to goals but instead come from the required functionality. For example, the functionality to "View History" (which I included in the use cases even though it is not directly linked to any goal).

Another thing is, sometimes, the amount of text on arcs in concurrent task diagrams can make

them hard to read.

In the MaSE Design Phase, apart from Step 2 (Constructing Conversations), the other three steps are clearly defined and easily applied. For Constructing Conversations, different research papers seem to suggest different things to do. The methodology suggested in "Automated Derivation of Complex Agent Architectures from Analysis Specifications" involves a lot of repetition. The main idea in that paper is to move every single message labelled S or E from the Task Diagram and make it a separate conversation. The fact that the "send" and "receive" notation no longer exists in the Conversation Diagram makes those messages appear very simple and repetitive. As a result, the Constructing Conversations step for PIPS which you have seen above adheres to the methodology suggested in the MultiAgent Systems Engineering research papers. This is the link showing the repetitive style caused by Constructing Conversations for the AccountWatcher agent.

The agentTool user guide for this particular step is insufficient. There is not much explanation either about the different naming of the conversations or about what significance it holds. Some examples of the naming variety are "Conversation38 1" and "Conv81".

Another comment on agentTool concerns some bugs found in the Transformation menu as well as in the Deployment tab. The conversation messages in the Deployment Diagram are always missing every time the page is revisited. The same thing happens with the attribute types in the Agent Architecture tab. As you can see from the snapshot of Figure 2 above, the conversation messages are written clearly (for example, Log or SupplySolution), but if agentTool is used to view that diagram, all the messages we could find are only the empty curves.

3. Prometheus

System Specification

Iteration #1

Identifying the system goals was quite straightforward, but complications arise when it comes to determining the external interfaces, particularly actions. I tend to get confused about the relationship between functionalities and actions, and between actions and messages sent. I suppose actions involve something external, while messages are sent within the system, between functionalities? Honestly, I have difficulty identifying actions.

Knowing that actions and percepts will eventually become events in JACK constructs, I tend to think in implementation terms about what event(s) a functionality (or capability) will post. It is very unnatural to think this way. Maybe it takes time to think in an agent-oriented way.

Another issue I have encountered is a terminology problem, which is a minor one. Please provide feedback on my understanding of the terminology by checking the Glossary. Most of the terms are taken from the articles.

In the functionality descriptors, some of the actions are not found in the External Interfaces section, e.g. Query Handling. Perhaps those actions should be delegated to messages sent? I will review the descriptors again.

Iteration #2

Things become much clearer once I get an overview of the system, thanks to the System Overview Diagram generated by PDT. It helps me in refining the various descriptors.

However, there are still problems concerning the terminology. For instance, I am still confused

about the difference between percept, event and trigger. I hope there will be standards on the

definition and usage of Prometheus terminology in the very near future.

Another thing I look forward to is a standard format for the various descriptors. The example templates shown in the articles and the ones shown in the lecture slides differ. I wonder which are the latest ones? The ones you would suggest for use? For example, in the functionality descriptor, [PAW02] uses Percepts/Events/Messages, while the slides use Percepts/Triggers. Does this mean Events/Messages can be interpreted as Triggers?

Architectural Design

Iteration #1

This phase introduces a systematic approach to the system architecture: the steps are clearly

defined and straightforward. The activities in this phase require careful analysis of the possible

system structures. As such, much time and effort has been devoted to this phase.

The tool support for this phase is very useful, especially the automatic report generation, complete with the relevant design diagrams. The Prometheus Design Tool offers significant savings in time and effort by checking the consistency of the various design elements. This documentation is based on the design that was developed using PDT.

Perhaps a more efficient way of documenting this phase is to use the auto-generated report, converted to HTML. Unfortunately, there is a bug in converting the diagrams in the PDT-generated LaTeX report to images in the latex2html-generated HTML documents. This could be a bug worth fixing.

Complications arise when developing the interaction protocols, mainly because of lack of tool

support. It is unfortunate that PDT doesn’t support the generation of interaction protocols

and diagrams. It would be great to see PDT incorporate the dynamic model (ie. interaction

diagrams and protocols) in the near future.

Do not miss out on the PDT-generated report [ pdf — ps ].

Iteration #2

This iteration consumed more time than expected. Applying a change in agent grouping requires significant updates to the various descriptors, a process which is tedious and error-prone. Fortunately, the system overview diagram helps ensure that I do not lose sight of the big picture, so that I can refine the design more effectively.

Also note that the number of interaction diagrams was cut down significantly once a different agent grouping was selected. This is mainly due to interactions happening within a single agent, between its constituent functionalities. There is hardly any point in including diagrams showing an agent sending messages to itself. This reduction is also observed in the number of corresponding protocols.

Refer to Architectural Design Update for a description on the most recent changes made since

the second iteration.

Detailed Design

This phase called for more attention to the internal design elements of the system. The main issue of concern was getting used to the concepts of the JACK Agent Language. Since the design elements were to be mapped directly to JACK constructs, it was useful to gain some familiarity with JACK. The laboratory exercises proved most useful in learning the JACK basics.

4. Tropos

Requirements Analysis

The Early Requirements Analysis and Late Requirements Analysis phases require me to put myself in the actors' place and imagine what I would do to achieve my goals. By doing these two phases I started to understand how they can lead to agent-oriented programming. This methodology changes the way I used to think when doing object-oriented software engineering. In object-oriented software engineering, I think about what the application can do (for the requirements), such as: the user should be able to log in, input his/her interests, and so on. However, in doing the requirements analysis using the Tropos methodology, I started to think in a broader scope about what the goal of the application is and how to achieve that goal by decomposing and analysing the main goal. The difficulty I found during these two phases, however, was determining whether something is a means-end analysis or a goal decomposition. Furthermore, I was sometimes confused about how much detail I should put into the requirements analysis.

Architectural Design

The Architectural Design phase requires me to think about the architecture of the system. It is almost the same as the design phase in object-oriented software engineering, in which one also thinks about which module handles which task. However, step 3 of this Architectural Design differentiates it from the design phase in OOSE. In step 3 (agent assignment), I think in terms of agents: which agent type has what capabilities. I think this step is the most important in this phase; unfortunately, I found that there is a lack of resources giving a complete explanation or description of this step, of how to translate the actor capabilities table into the agent types table. Furthermore, I am not sure whether I have to include the human agent's (in this case, the traveller's) capabilities in step 2. I feel that I have to include them, since this human agent also interacts with the system.

For the extended actor diagram in step 1, I was confused at first about which actor I should describe, so I assumed that it is perhaps best to describe the most complex one.

The capabilities table (Table 1) was extracted from Figure 8. From the interaction between the actors in Figure 8, I just thought about what each actor should be capable of. The other capabilities, from actors other than those in Figure 8, are what I think should be there in a complete system, since Figure 8 does not describe the full system (i.e. it does not have the Account Manager). So, I took a look at Figure 6 to determine which actors are not in Figure 8 and what capabilities they might have. I also included the capabilities of the Traveller: I looked at two different papers, and one of them included the user's capabilities while the other did not. I think that since the capabilities table is for a complete system, it is perhaps better to also include the traveller's capabilities there.

I faced much uncertainty in this phase, especially in steps 2 and 3. For steps 2 and 3 I just used common sense, since the paper I read does not explain much about them.

Detailed Design

I found that there is not enough description of, and not enough examples for, this phase. As a result, I was sometimes confused about what a diagram should look like. For example, I was initially confused about the Agent Interaction Diagram: I did not know whether it should describe the whole system in one diagram or not. I read two papers to help me apply this methodology. One paper [3] describes the whole system in one diagram, although the description is not very detailed, whereas the other paper [1] describes the Agent Interaction Diagram based on each agent's capability. However, paper [3] did say that its diagram could be analysed in detail using the template packages proposed in [5]. So, I decided to draw the Agent Interaction Diagram based on each agent's capability.

I also found it difficult to draw the Capability Diagram. Papers [3] and [1] present it slightly differently. Paper [3] has a notation that cannot be found in the other paper (i.e. EE), but there is no explanation of what the notation means. Furthermore, the Plan Diagrams in those two papers are different.

I found that most Tropos papers do not provide enough detail about this phase. Each paper has its own way of drawing the diagrams, and there is no standard for the correct format.

Bibliography

[1] O. Arazy and C. Woo. Analysis and design of agent-oriented information systems. The

Knowledge Engineering Review, 17(2), 2002.

[2] Mark A. Ardis, John A. Chaves, Lalita Jategaonkar Jagadeesan, Peter Mataga, Carlos

Puchol, Mark G. Staskauskas, and James Von Olnhausen. A framework for evaluating

specification methods for reactive systems: Experience report. In International Conference

on Software Engineering, pages 159–168, 1995.

[3] D. Avison and G. Fitzgerald. Information Systems Development: Methodologies, Techniques and Tools. McGraw-Hill, New York, 2nd edition, 1995.

[4] G. Booch, J. Rumbaugh, and I. Jacobson. The Unified Modeling Language User Guide.

Addison Wesley, 1998.

[5] Jeffrey M. Bradshaw. An introduction to software agents. In Jeffrey M. Bradshaw, editor,

Software Agents, pages 3–46. AAAI Press / The MIT Press, 1997.

[6] F. M. T. Brazier, B. M. Dunin-Keplicz, N. R. Jennings, and J. Treur. DESIRE: Modelling

multi-agent systems in a compositional formal framework. Int Journal of Cooperative

Information Systems, 6(1):67–94, 1997.

[7] Paolo Bresciani, Paolo Giorgini, Fausto Giunchiglia, John Mylopoulos, and Anna Perini.

Tropos: An agent-oriented software development methodology. Technical Report DIT-02-

0015, University of Trento, Department of Information and Communication Technology,

2002.

[8] Lionel C. Briand, Erik Arisholm, Steve J. Counsell, Frank Houdek, and Pascale Thévenod-Fosse. Empirical studies of object-oriented artifacts, methods, and processes: State of the art and future directions. Empirical Software Engineering, 4(4):387–404, 1999.

[9] Allan W. Brown and Kurt C. Wallnau. A framework for evaluating software technology.

IEEE Software, 13(5):39–49, September 1996.

[10] P. Burrafato and M. Cossentino. Designing a multi-agent solution for a bookstore with the

PASSI methodology. In Fourth International Bi-Conference Workshop on Agent-Oriented

Information Systems (AOIS-2002), Toronto (Ontario, Canada) at CAiSE’02, 27-28 May

2002.

[11] Geoff Bush, Stephen Cranefield, and Martin Purvis. The Styx agent methodology. The Information Science Discussion Paper Series 2001/02, Department of Information Science, University of Otago, New Zealand, January 2001. Available from http://divcom.otago.ac.nz/infosci.

[12] G. Caire and F. Leal. Project p907, deliverable 4: Recommendations on supporting tools.

Technical Information Final version, European Institute for Research and Strategic Studies

in Telecommunications (EURESCOM), July 2001.

[13] G. Caire, F. Leal, P. Chainho, R. Evans, F.G. Jorge, G. Juan Pavon, P. Kearney, J. Stark, and P. Massonet. Project p907, deliverable 3: Methodology for agent-oriented software engineering. Technical Information Final version, European Institute for Research and Strategic Studies in Telecommunications (EURESCOM), September 2001. Available from http://www.eurescom.de/public-webspace/P900-series/P907/D3finalReviewed.zip.

[14] G. Caire, F. Leal, P. Chainho, R. Evans, F.G. Jorge, G. Juan Pavon, P. Kearney, J. Stark, and P. Massonet. Project p907, deliverable 1: Initial methodology. Technical Information Final version, European Institute for Research and Strategic Studies in Telecommunications (EURESCOM), July 2000. Available from http://www.eurescom.de/public-webspace/P900-series/P907/D1/toc.htm.

[15] Giovanni Caire, Francisco Leal, Paulo Chainho, Richard Evans, Francisco Garijo, Jorge

Gomez, Juan Pavon, Paul Kearney, Jamie Stark, and Philippe Massonet. Agent oriented

analysis using MESSAGE/UML. In Michael Wooldridge, Paolo Ciancarini, and Gerhard


Weiss, editors, Second International Workshop on Agent-Oriented Software Engineering

(AOSE-2001), pages 101–108, 2001.

[16] Gerardo Canfora and Luigi Troiano. The importance of dealing with uncertainty in the

evaluation of software engineering methods and tools. In Proceedings of the 14th inter-

national conference on Software engineering and knowledge engineering, pages 691–698.

ACM Press, 2002.

[17] J. Castro, M. Kolp, and J. Mylopoulos. A requirements-driven development methodology.

In Proceedings of the 13th International Conference on Advanced Information Systems

Engineering (CAiSE’01), pages 108–123, Interlaken, Switzerland, June 4–8 2001.

[18] L. Cernuzzi and G. Rossi. On the evaluation of agent oriented modeling methods. In

Proceedings of Agent Oriented Methodology Workshop, Seattle, November 2002.

[19] P. Chainho, R. Evans, P. Kearney, P. Massonet, E. Milgrom, and Y. Deville. Project p907, deliverable 3: Final guidelines for the identification of relevant problem areas where agent technology is appropriate. Technical Information Final version, European Institute for Research and Strategic Studies in Telecommunications (EURESCOM), June 2001. Available from http://www.eurescom.de/public-webspace/P900-series/P907/D3finalReviewed.zip.

[20] P. R. Cohen and H. J. Levesque. Teamwork. Nous, 25(4):487–512, 1991.

[21] A. Collinot, A. Drogoul and P. Benhamou. Agent oriented design of a soccer robot team.

In Proceedings of the Second International Conference on Multi-Agent Systems, Kyoto,

Japan, December 1996.

[22] M. Cossentino and C. Potts. A case tool supported methodology for the design of multi-

agent systems. In The 2002 International Conference on Software Engineering Research

and Practice (SERP’02), Las Vegas (NV), USA, June 24–27 2002.

[23] Massimo Cossentino, Piermarco Burrafato, Saverio Lombardo, and Luca Sabatucci. Intro-

ducing pattern reuse in the design of multi-agent systems. In Workshop : Agent Infras-

tructure, Tools and Applications at NODe 2002, Erfurt, Germany, October 2002.

[24] John Cribbs, Colleen Roe, and Suzanne Moon. An Evaluation of Object-Oriented Analysis

and Design Methodologies. SIGS Books, New York, 1992.


[25] Khanh Hoa Dam and Michael Winikoff. Comparing agent-oriented methodologies. In To

appear at the Fifth International Bi-Conference Workshop on Agent-Oriented Information

Systems (AOIS-2003), Melbourne, Australia, July 2003.

[26] A. Dardenne, A. van Lamsweerde, and S. Fickas. Goal-directed requirements acquisition.

Science of Computer Programming, 20:3–50, 1993.

[27] J. Debenham and B. Henderson-Sellers. Full lifecycle methodologies for agent-oriented

systems – the extended OPEN process framework. In Proceedings of Agent-Oriented In-

formation Systems (AOIS-2002) at CAiSE’02, Toronto, May 2002.

[28] Scott A. DeLoach. Multiagent systems engineering: A methodology and language for

designing agent systems. In Agent-Oriented Information Systems ’99 (AOIS’99), Seattle, WA, May 1999.

[29] Scott A. DeLoach. Analysis and design using MaSE and agentTool. In Proceedings of

the 12th Midwest Artificial Intelligence and Cognitive Science Conference (MAICS 2001),

2001.

[30] Scott A. DeLoach. Specifying agent behavior as concurrent tasks: Defining the behavior

of social agents. In Proceedings of the Fifth Annual Conference on Autonomous Agents,

Montreal Canada, May 28 – June 1 2001.

[31] Scott A. DeLoach, Eric T. Matson, and Yonghua Li. Applying agent oriented software

engineering to cooperative robotics. In Proceedings of the The 15th International FLAIRS

Conference (FLAIRS 2002), pages 391–396, Pensacola, Florida, May 16–18 2002.

[32] Scott A. DeLoach, Mark F. Wood, and Clint H. Sparkman. Multiagent systems engineer-

ing. International Journal of Software Engineering and Knowledge Engineering, 11(3):231–

258, 2001.

[33] R. Dumke and E. Foltin. Metrics-based evaluation of object-oriented software development

methods. In Proceedings of the 2nd Euromicro Conference on Software Maintenance and

Reengineering (CSMR’98), pages 193–196, Florence, Italy, March 8–11 1998.

[34] G. Eckert and P. Golder. Improving object-oriented analysis. Information and Software

Technology, 36(2):67–86, 1994.


[35] E. V. Berard. A comparison of object-oriented methodologies. Technical report, Object Agency Inc., 1995. Available from http://www.toa.com/pub/html/mcr.html.

[36] Jacques Ferber. Multi-agent Systems: An Introduction to Distributed Artificial Intelli-

gence. Addison-Wesley, 1999.

[37] Leonard N. Foner. What’s an agent anyway? - a sociological case

study. FTP Report - MIT Media Lab, May 1993. Available from

http://kuba.korea.ac.kr/ ixix/Article/agent/whatagent.pdf.

[38] U. Frank. A comparison of two outstanding methodologies for object-oriented design.

Technical Report No. 779, Arbeitspapiere der GMD, Sankt Augustin, 1993.

[39] U. Frank. Evaluating modelling languages: relevant issues, epistemological challenges and

a preliminary research framework. Technical Report 15, Arbeitsberichte des Instituts für Wirtschaftsinformatik (Universität Koblenz-Landau), 1998.

[40] S. Franklin and A. Graesser. Is it an Agent, or just a Program?: A Taxonomy for Au-

tonomous Agents. In Intelligent Agents III. Agent Theories, Architectures and Languages

(ATAL’96), volume 1193, Berlin, Germany, 1996. Springer-Verlag.

[41] Francisco Garijo, Jorge J. Gomez-Sanz, Juan Pavon, and Phillipe Massonet. Multi-agent

system organization: An engineering perspective. In Proceedings of the 10th European Workshop on Modelling Autonomous Agents in a Multi-Agent World (MAAMAW'2001), pages 101–108, May 2001.

[42] Fausto Giunchiglia, John Mylopoulos, and Anna Perini. The Tropos software development

methodology: Processes, Models and Diagrams. In Third International Workshop on

Agent-Oriented Software Engineering, July 2002.

[43] N. Glaser. Contribution to knowledge modelling in a multi-agent framework (the CoMo-

MAS approach). PhD thesis, Université Henri Poincaré, 1996.

[44] Mike Holcombe. An integrated methodology for the specification, verification and testing

of systems. Software Testing, Verification & Reliability, 3(4):149–163, 1993.


[45] S. Hong, G. Van den Goor, and S. Brinkkemper. A formal approach to the comparison of

object-oriented analysis and design methodologies. In The Twenty-Sixth Annual Hawaii

International Conference on System Sciences, pages 689–699, Hawaii, 1993.

[46] IEEE Std 610.12. IEEE Standard Glossary of Software Engineering Terminology, 1990.

[47] Carlos Iglesias, Mercedes Garrijo, and Jose Gonzalez. A survey of agent-oriented method-

ologies. In Jörg Müller, Munindar P. Singh, and Anand S. Rao, editors, Proceedings of

the 5th International Workshop on Intelligent Agents V : Agent Theories, Architectures,

and Languages (ATAL-98), volume 1555, pages 317–330. Springer-Verlag: Heidelberg,

Germany, 1999.

[48] Carlos A. Iglesias, Mercedes Garijo, Jos C. Gonzlez, and Juan R. Velasco. A methodological

proposal for multiagent systems development extending CommonKADS. In Proceedings

of the Tenth Knowledge Acquisition for Knowledge-Based Systems Workshop, 1996.

[49] M. Jackson. Problems, methods and specialisation. IEEE Software, 11(6):57–62, November

1994.

[50] Nimal Jayaratna. Understanding and Evaluating Methodologies: NIMSAD a Systematic

Framework. McGraw-Hill, New York, 2nd edition, 1994.

[51] N. R. Jennings, P. Faratin, T. J. Norman, P. O’Brien, B. Odgers, and J. L. Alty. Imple-

menting a business process management system using ADEPT: A real-world case study.

International Journal of Applied Artificial Intelligence, 2000. To appear. Also available

from http://www.elec.qmw.ac.uk/dai/pubs/.

[52] N. R. Jennings, K. Sycara, and M. Wooldridge. A roadmap of agent research and devel-

opment. Journal of Autonomous Agents and Multi-Agent Systems, 1(1):7–38, 1998.

[53] Nicholas R. Jennings. An agent-based approach for building complex software systems.

Communications of the ACM, 44(4):35–41, 2001.

[54] Nicholas R. Jennings and Michael J. Wooldridge. Applications of intelligent agents. In

Nicholas R. Jennings and Michael J. Wooldridge, editors, Agent Technology: Foundations,

Applications, and Markets, pages 3–28. Springer-Verlag: Heidelberg, Germany, 1998.


[55] Nicholas R. Jennings and Michael Wooldridge. Agent-Oriented Software Engineering. In

Francisco J. Garijo and Magnus Boman, editors, Proceedings of the 9th European Work-

shop on Modelling Autonomous Agents in a Multi-Agent World : Multi-Agent System

Engineering (MAAMAW-99), volume 1647, pages 1–7. Springer-Verlag: Heidelberg, Ger-

many, 1999.

[56] T. Juan, A. Pearce, and L. Sterling. Roadmap: Extending the gaia methodology for

complex open systems. In Proceedings of the First International Joint Conference on

Autonomous Agents and Multi-Agent Systems (AAMAS 2002), Bologna, Italy, July 2002.

[57] Thomas Juan, Leon Sterling, and Michael Winikoff. Assembling agent oriented software

engineering methodologies from features. In Proceedings of the Third International

Workshop on Agent-Oriented Software Engineering, at AAMAS’02, 2002.

[58] Gerald M. Karam and Ronald S. Casselman. A cataloging framework for software development methods. IEEE Computer, 26(2):34–46, 1993.

[59] E. A. Kendall, M. T. Malkoun, and C. H. Jiang. A methodology for developing agent based systems. In Chengqi Zhang and Dickson Lukose, editors, First Australian Workshop on Distributed Artificial Intelligence, 1995.

[60] David Kinny and Michael Georgeff. Modelling and design of multi-agent systems. In Intelligent Agents III: Proceedings of the Third International Workshop on Agent Theories, Architectures, and Languages (ATAL-96). LNAI 1193. Springer-Verlag, 1996.

[61] David Kinny, Michael Georgeff, and Anand Rao. A methodology and modelling technique for systems of BDI agents. In Rudy van Hoe, editor, Seventh European Workshop on Modelling Autonomous Agents in a Multi-Agent World, 1996.

[62] Barbara Kitchenham. DESMET: a method for evaluating software engineering methods and tools. Technical Report TR96-09, University of Keele, U.K., August 1996.

[63] Barbara Ann Kitchenham. Evaluating software engineering methods and tools part 1: the evaluation context and evaluation methods. ACM SIGSOFT Software Engineering Notes, 21(1):11–14, 1996.


[64] Barbara Ann Kitchenham. Evaluating software engineering methods and tools part 2: selecting an appropriate evaluation method – technical criteria. ACM SIGSOFT Software Engineering Notes, 21(2):11–15, 1996.

[65] Barbara Ann Kitchenham. Evaluating software engineering methods and tools part 3: selecting an appropriate evaluation method – practical issues. ACM SIGSOFT Software Engineering Notes, 21(4):9–12, 1996.

[66] Barbara Ann Kitchenham. Evaluating software engineering methods and tools, part 7: planning feature analysis evaluation. ACM SIGSOFT Software Engineering Notes, 22(4):21–24, 1997.

[67] Barbara Ann Kitchenham. Evaluating software engineering methods and tools part 12: evaluating DESMET. ACM SIGSOFT Software Engineering Notes, 23(5):21–24, 1998.

[68] Barbara Ann Kitchenham and Lindsay Jones. Evaluating software engineering methods and tools part 5: the influence of human factors. ACM SIGSOFT Software Engineering Notes, 22(1):13–15, 1997.

[69] Barbara Ann Kitchenham and Lindsay Jones. Evaluating software engineering methods and tools part 6: identifying and scoring features. ACM SIGSOFT Software Engineering Notes, 22(2):16–18, 1997.

[70] Barbara Ann Kitchenham and Lesley M. Pickard. Evaluating software engineering methods and tools, part 9: quantitative case study methodology. ACM SIGSOFT Software Engineering Notes, 23(1):24–26, 1998.

[71] Barbara Ann Kitchenham and Lesley M. Pickard. Evaluating software engineering methods and tools, part 11: analysing quantitative case studies. ACM SIGSOFT Software Engineering Notes, 23(4):18–20, 1998.

[72] Philippe Kruchten. The Rational Unified Process: An Introduction. Addison-Wesley, 2nd edition, 2000.

[73] T. Lacey and S. DeLoach. Verification of agent behavioral models. In Proceedings of the International Conference on Artificial Intelligence (IC-AI'2000), volume 2, pages 557–563, Las Vegas, Nevada, June 26–29 2000. CSREA Press.


[74] D. Law and T. Naem. DESMET: determining an evaluation methodology for software methods and tools. In Proceedings of the Conference on CASE – Current Practice, Future Prospects, Cambridge, England, March 1992.

[75] David Law. Methods for Comparing Methods: Techniques in Software Development. NCC Publications, 1988.

[76] J. Lind. MASSIVE: Software Engineering for Multiagent Systems. PhD thesis, University of Saarbrücken, Germany, 1999.

[77] Michael Luck, Peter McBurney, and Chris Preist. Agent technology: Enabling next generation computing: A roadmap for agent-based computing. AgentLink report, available from www.agentlink.org/roadmap, 2003.

[78] Michael Mattsson. A comparative study of three new object-oriented methods. Technical Report 2, University of Karlskrona/Ronneby, Department of Computer Science and Business Administration, 1995.

[79] David E. Monarchi and Gretchen I. Puhr. A research typology for object-oriented analysis and design. Communications of the ACM, 35(9):35–47, September 1992.

[80] H. Mouratidis, P. Giorgini, G. Manson, and I. Philp. Using Tropos methodology to model an integrated health assessment system. In Proceedings of the 4th International Bi-Conference Workshop on Agent-Oriented Information Systems (AOIS-2002), Toronto, Ontario, May 2002.

[81] J. Mylopoulos, J. Castro, and M. Kolp. Tropos: Toward agent-oriented information systems engineering. In Second International Bi-Conference Workshop on Agent-Oriented Information Systems (AOIS-2000), June 2000.

[82] H. S. Nwana. Software agents: An overview. Knowledge Engineering Review, 11(2):205–244, 1995.

[83] James Odell. Objects and agents compared. Journal of Object Technology, 1(1):41–53, 2002.


[84] J. Odell, H. V. D. Parunak, and B. Bauer. Representing agent interaction protocols in UML. In The First International Workshop on Agent-Oriented Software Engineering (AOSE-2000), 2000.

[85] G. M. P. O'Hare and N. R. Jennings. Foundations of Distributed Artificial Intelligence. John Wiley & Sons, 1996.

[86] S. A. O'Malley and S. A. DeLoach. Determining when to use an agent-oriented software engineering methodology. In Proceedings of the Second International Workshop On Agent-Oriented Software Engineering (AOSE-2001), pages 188–205, Montreal, May 2001.

[87] P. Arnold, S. Bodoff, D. Coleman, H. Gilchrist, and F. Hayes. An evaluation of five object-oriented development methods. Technical Report HPL-91-52, Software Engineering Department, HP Laboratories, Bristol, 1991. Available from http://www.lgu.ac.uk/cism/ugsylls/qc/qc309.htm.

[88] Lin Padgham and Michael Winikoff. Prometheus: A methodology for developing intelligent agents. In Third International Workshop on Agent-Oriented Software Engineering, July 2002.

[89] Lin Padgham and Michael Winikoff. Prometheus: A pragmatic methodology for engineering intelligent agents. In Proceedings of the OOPSLA 2002 Workshop on Agent-Oriented Methodologies, pages 97–108, Seattle, November 2002.

[90] Lin Padgham and Michael Winikoff. Prometheus: Engineering intelligent agents. Tutorial notes, available from the authors, October 2002.

[91] Lin Padgham and Michael Winikoff. Prometheus: A brief summary. Technical note, available from the authors, January 2003.

[92] J. Parsons. A cognitive foundation for comparing object-oriented analysis methods. In J. F. Nunamaker and R. H. Sprague, editors, 26th Hawaii International Conference on System Sciences, pages 699–708. IEEE Computer Society Press, 1993.

[93] David Poutakidis, Lin Padgham, and Michael Winikoff. Debugging multi-agent systems using design artifacts: The case of interaction protocols. In Proceedings of the First International Joint Conference on Autonomous Agents and Multi Agent Systems (AAMAS'02), 2002.

[94] Michael Prasse. Evaluation of object-oriented modelling languages: A comparison between OML and UML. In Martin Schader and Axel Korthaus, editors, The Unified Modeling Language – Technical Aspects and Applications, pages 58–75. Physica-Verlag, Heidelberg, 1998.

[95] A. S. Rao and M. P. Georgeff. BDI-agents: from theory to practice. In Proceedings of the First International Conference on Multiagent Systems, San Francisco, 1995.

[96] J. Rumbaugh. Notation notes: Principles for choosing notation. Journal of Object-Oriented Programming (JOOP), 8(10):11–14, May 1996.

[97] Stuart Russell and Peter Norvig. Artificial Intelligence: A Modern Approach. Prentice Hall, 1995.

[98] Chris Sadler and Barbara Ann Kitchenham. Evaluating software engineering methods and tools part 4: the influence of human factors. ACM SIGSOFT Software Engineering Notes, 21(5):11–13, 1996.

[99] R. C. Sharble and S. S. Cohen. The object-oriented brewery: A comparison of two object oriented development methods. ACM SIGSOFT Software Engineering Notes, 18(2):60–73, 1993.

[100] Onn Shehory and Arnon Sturm. Evaluation of modeling techniques for agent-based systems. In Jörg P. Müller, Elisabeth André, Sandip Sen, and Claude Frasson, editors, Proceedings of the Fifth International Conference on Autonomous Agents, pages 624–631. ACM Press, May 2001.

[101] Munindar P. Singh. Agent communication languages: Re-thinking the principles. IEEE Computer, 31(12):40–47, December 1998.

[102] Xiping Song and Leon J. Osterweil. Toward objective, systematic design-method comparisons. IEEE Software, 9(3):43–53, May 1992.

[103] Clint H. Sparkman, Scott A. DeLoach, and Athie L. Self. Automated derivation of complex agent architectures from analysis specifications. Lecture Notes in Computer Science, 2222:278–290, 2002.


[104] A. Sturm and O. Shehory. Towards an industrially applicable modeling technique for agent-based systems (poster). In Proceedings of the International Conference on Autonomous Agents and Multi-Agent Systems, Bologna, July 2002.

[105] Surya B. Yadav, Ralph R. Bravoco, Akemi T. Chatfield, and T. M. Rajkumar. Comparison of analysis techniques for information requirement determination. Communications of the ACM, 31(9):1090–1097, 1988.

[106] Hans van Vliet. Software Engineering: Principles and Practice. John Wiley & Sons, second edition, 2000.

[107] Federico Vazquez. Selecting a software development process. In Proceedings of the conference on TRI-Ada '94, pages 209–218. ACM Press, 1994.

[108] Gerd Wagner. A UML profile for external AOR models. In Proceedings of the Third International Workshop on Agent-Oriented Software Engineering (AOSE-2002), held at Autonomous Agents and Multi-Agent Systems (AAMAS 2002), Bologna, Italy, July 15 2002. Springer-Verlag LNAI.

[109] Gerd Wagner. Agent-object-relationship modeling. In Proceedings of the Second International Symposium – From Agent Theory to Agent Implementation, together with EMCSR 2000, April 2000.

[110] Gerd Wagner. Agent-oriented analysis and design of organizational information systems. In Proceedings of the Fourth IEEE International Baltic Workshop on Databases and Information Systems, Vilnius, Lithuania, May 2000.

[111] I. J. Walker. Requirements of an object-oriented design method. Software Engineering Journal, pages 102–113, March 1992.

[112] Roel Wieringa. A survey of structured and object-oriented software specification methods and techniques. ACM Computing Surveys (CSUR), 30(4):459–527, 1998.

[113] B. Wood, R. Pethia, L. R. Gold, and R. Firth. A guide to the assessment of software development methods. Technical Report 88-TR-8, Software Engineering Institute, Carnegie-Mellon University, Pittsburgh, PA, 1988.


[114] Mark Wood and Scott A. DeLoach. An overview of the multiagent systems engineering methodology. In The First International Workshop on Agent-Oriented Software Engineering (AOSE-2000), Limerick, Ireland, June 10 2000.

[115] M. Wooldridge and N. R. Jennings. Intelligent agents: Theory and practice. The Knowledge Engineering Review, 10(2):115–152, 1995.

[116] M. Wooldridge, N. R. Jennings, and D. Kinny. A methodology for agent-oriented analysis and design. In Proceedings of the Third International Conference on Autonomous Agents (Agents-99), 1999.

[117] M. Wooldridge, N. R. Jennings, and D. Kinny. The Gaia methodology for agent-oriented analysis and design. Autonomous Agents and Multi-Agent Systems, 3(3), 2000.

[118] M. J. Wooldridge. An Introduction to Multi-Agent Systems. John Wiley & Sons, 2002.

[119] E. Yu. Modelling Strategic Relationships for Process Reengineering. PhD thesis, University of Toronto, Department of Computer Science, 1995.