
The European Online Magazine for the IT Professional
http://www.upgrade-cepis.org
Vol. IV, No. 1, February 2003

Human-Computer Interaction: Overcoming Barriers


UPGRADE

is the European Online Magazine for Information Technology Professionals, published bimonthly at <http://www.upgrade-cepis.org/>

Publisher

UPGRADE is published on behalf of CEPIS (Council of European Professional Informatics Societies, <http://www.cepis.org/>) by NOVÁTICA <http://www.ati.es/novatica/>, journal of the Spanish CEPIS society ATI (Asociación de Técnicos de Informática, <http://www.ati.es/>). UPGRADE is also published in Spanish (full issue printed, some articles online) by NOVÁTICA, and in Italian (abstracts and some articles online) by the Italian CEPIS society ALSI <http://www.alsi.it> and the Italian IT portal Tecnoteca <http://www.tecnoteca.it/>. UPGRADE was created in October 2000 by CEPIS and was first published by NOVÁTICA and INFORMATIK/INFORMATIQUE, bimonthly journal of SVI/FSI (Swiss Federation of Professional Informatics Societies, <http://www.svifsi.ch/>).

Chief Editors

François Louis Nicolet, Zürich <[email protected]>
Rafael Fernández Calvo, Madrid <[email protected]>

Editorial Board

Prof. Wolffried Stucky, CEPIS President
Fernando Piera Gómez and Rafael Fernández Calvo, ATI (Spain)
François Louis Nicolet, SI (Switzerland)
Roberto Carniel, ALSI – Tecnoteca (Italy)

English Editors:

Mike Andersson, Richard Butchart, David Cash, Arthur Cook, Tracey Darch, Laura Davies, Nick Dunn, Rodney Fennemore, Hilary Green, Roger Harris, Michael Hird, Jim Holder, Alasdair MacLeod, Pat Moody, Adam David Moss, Phil Parkin, Brian Robson.

Cover page

designed by Antonio Crespo Foix, © ATI 2003

Layout:

Pascale Schürmann

E-mail addresses for editorial correspondence: <[email protected]> and <[email protected]>

E-mail address for advertising correspondence: <[email protected]>

Copyright

© NOVÁTICA 2003. All rights reserved. Abstracting is permitted with credit to the source. For copying, reprint, or republication permission, write to the editors.

The opinions expressed by the authors are their exclusive responsibility.

ISSN 1684-5285


Joint issue with NOVÁTICA

2 Presentation. The Human Side of IT

– Paloma Díaz-Pérez and Gustavo Rossi

The guest editors present the issue and include a list of useful references for those interested in knowing more about Human-Computer Interaction.

5 USERfit Tool: A Design Tool Oriented towards Accessibility and Usability

– Julio Abascal-González, Myriam Arrue-Recondo, Nestor Garay-Vitoria, Jorge Tomás-Guerra, and Carlos A. Velasco-Nuñez

This paper tackles the subject of accessibility and usability via a tool which implements the USERfit methodology to generate usability specifications.

12 An Annotation Ontology on Usability Evaluation Resources: Design and Retrieval Mechanisms

– Elena García-Barriocanal, Miguel-Ángel Sicilia-Urbán, and Ignacio Aedo-Cuevas

The authors analyse how to improve the information retrieval process (a key aspect nowadays) by the use of ontologies, in line with what we know as the Semantic Web.

18 Virtual Reality: Do Not Augment Realism, Augment Relevance

– Johan F. Hoorn, Elly A. Konijn, and Gerrit C. van der Veer

The authors see virtual reality as another kind of fiction and put forward the idea that if we want to improve the effectiveness of these kinds of systems we need to centre not on technology but on those human aspects which make the user experience a virtual environment as if it were real.

27 GADEA: a Framework for the Development of User Interfaces Adapted to Human Cognition Diversity

– Martín González-Rodríguez, Esther Del Moral-Pérez, María del Puerto Paule-Ruiz, and Juan-Ramón Pérez-Pérez

This paper analyses the interesting topic of user interface adaptation, proposing a tool to manage the adaptation process by means of an intelligent system.

31 User Interface Patterns for Object-Oriented Navigation

– Pedro-Juan Molina-Moreno, Ismael Torres-Boigues, and Oscar Pastor-López

This paper shows an example of cooperation between the corporate and the academic world. In it the authors identify and specify a series of user interface conceptual patterns in business management applications.

38 e-CLUB: A Ubiquitous e-Learning System for Teaching Domotics

– Manuel Ortega-Cantero, José Bravo-Rodríguez, Miguel-Ángel Redondo-Duque, and Crescencio Bravo-Santos

The topic of ubiquitous computing is dealt with in this paper, where the authors present a system oriented to the learning of domotics which aims to improve the training/learning process.

46 Designing Complex Systems in Industrial Reality: A Study of the DUTCH Approach

– Cristina Chisalita, Mari-Carmen Puerta-Melguizo, and Gerrit C. van der Veer

The authors describe how they have succeeded in transferring their experience in the design of complex interactive systems to the real world of business, explaining how they made use of DUTCH, a conceptual framework for the design of interactive systems.

53 Towards Universal Access in the Disappearing Computer Environment

– Constantine Stephanidis

The author analyses the requirements which emerge during the design, development and evaluation of user interfaces in the context of universal access in a society in which the traditional computer is gradually disappearing, as every day we find more smart devices embedded in everyday objects. This article also presents a practical experience, the “Nomadic Music Box”.

60 Customer Interaction Personalization: iSOCO Alize

– Jesús Cerquides-Bueno, Enrique Hernández-Jiménez, Oscar Frías-Barranco, and Noyda Matos-Fuentes

The authors present us with a multi-agent architecture, Alize, which aims to adapt the user interface by making use of a series of behavioural patterns. Alize has also been applied to the world of e-Commerce, in a virtual bookstore.

65 A Web Voice Solution: ConPalabras

– Carlos Rebate-Sánchez, Yolanda Hernández-González, Carlos García-Moreno, and Alicia Fernández del Viso-Torre

The authors describe ConPalabras (Spanish for “with words”), a plug-in developed with the aim of making web pages “speak” by reading either their own content or the content of an attached file.

Human-Computer Interaction: Overcoming Barriers

Guest Editors: Paloma Díaz-Pérez and Gustavo Rossi

Coming issue: “e-Government”


Presentation

The Human Side of IT

Paloma Díaz-Pérez and Gustavo Rossi

The phenomenon of interaction is present in all activities of our lives, whether it involves objects or human beings. From the moment we get up till it is time to go to bed again we are constantly interacting with the objects around us in order to achieve some specific purpose: our breakfast cup of coffee, the car we drive to go somewhere, the fork we eat with or the bed we sleep in. Naturally, we all want this interaction to take place in the most effective and efficient way possible, so that it takes a minimum (even imperceptible) amount of our time for us to find out what an object is for and how it works. This is precisely what Donald Norman hopes to reveal in his book “The Psychology of Everyday Things”, which aims to show us how things should be designed in order to make it easy for us to understand and learn their function and their use, and thereby improve their interaction with users.

This phenomenon which happens in our day-to-day life also occurs when a person interacts with a computer. This particular interaction is known as HCI (Human-Computer Interaction, though in some cases it can also be seen referred to as CHI, Computer-Human Interaction), which is “a discipline concerned with the design, evaluation and implementation of interactive computing systems for human use and with the study of major phenomena surrounding them” (as defined by the ACM SIGCHI in 1992). For this reason it can be said that the fundamental goal of HCI is to help create usable and safe systems which are also functionally suitable for the users’ needs. These systems comprise not only hardware and software, but also the environment they are used in or which will be affected by their use (e.g. company organization, the personal work environment, etc.). It was precisely when the environment was brought into the equation, in the early eighties, that the term HCI started to replace the term “man-machine interface”.

There are therefore a great many different disciplines involved in the study of HCI, ranging from computer science to psychology, not to mention ergonomics (in the USA the term ‘human factors’ is also used) and graphic design. All have a part to play in the analysis of how to improve a system’s usability. This is normally measured in terms of five criteria: the ease of learning how it works; the ease of remembering how the system works after not having used it for some time; the efficiency of use; the number of errors the user makes when using it; and finally, the user’s satisfaction when using the system.

HCI is not therefore just a discipline concerned with the development of an interface of windows which the user interacts with by using a mouse. Work is currently underway on many fronts in this field, such as accessibility or internationalisation, with the aim of making systems usable by all kinds of users, with different physical, intellectual and cultural characteristics, and on any type of platform, as well as on other fronts which present-day and future technology are opening up, such as ubiquitous computing (the use of computers anywhere), pervasive computing (or the use of computers integrated in a worldwide infrastructure), wearable computing (computers integrated in everyday objects), computational toys (computers in interactive toys) or interactive television (communication media which the audience can interact with).

In this monograph on HCI we have included some of the articles that were presented at the Interacción 2002 Congress held at the Universidad Carlos III de Madrid (Spain) last May, in which the guest editors of this monograph of UPGRADE participated as members of the programme committee. The selection of these articles was made with the idea of covering various aspects of interaction from both an academic and an industrial viewpoint.

In “USERfit Tool: A Design Tool Oriented towards Accessibility and Usability”, Julio Abascal-González, Myriam Arrue-Recondo, Nestor Garay-Vitoria and Jorge Tomás-Guerra, of the Universidad del País Vasco (Spain), and Carlos A. Velasco-Nuñez, from the Fraunhofer Institute for Applied Information Technology, in Germany, tackle the subject of accessibility and usability via a tool which implements the USERfit methodology to generate usability specifications.

Paloma Díaz-Pérez graduated and received her doctorate in Computer Science at the Universidad Politécnica de Madrid (Spain), and is now a Lecturer/Associate Professor at the Universidad Carlos III de Madrid. The main lines of research which she is pursuing in the DEI laboratory <http://www.dei.inf.uc3m.es> of this University include: hypermedia and electronic documentation systems; software development methodologies, CASE tools and formal methods for representing web systems and hypermedia; and user interface design and evaluation processes for interactive systems. She was President of the IPO2002 Programme Committee. <[email protected]>

Gustavo Rossi is a Full Professor at the Universidad Nacional de La Plata (Argentina), and Director of LIFIA (Laboratory for Education and Research in Advanced Informatics) of that same University. He was awarded a doctorate in Computer Sciences by the PUC-Rio, Brazil, in 1996. He is one of the developers of OOHDM, a leading methodology for Web application design. His current areas of interest are design reuse in Web applications and business process modelling on the Web. He is editor of the Object Technology section of Novática, journal of the Spanish CEPIS society ATI. <[email protected]>


This methodology was developed as part of the European project USER (TIDE-1062), undertaken by the HUSAT Research Institute, Sintef Unimed Rehab and COO.S.S. Marche scrl. The tool presented here, known as USERfit Tool, has also been designed to maximize usability and is currently the subject of several evaluations.

In “An Annotation Ontology on Usability Evaluation Resources: Design and Retrieval Mechanisms”, Elena García-Barriocanal, of the Universidad de Alcalá (Spain), and Miguel-Ángel Sicilia-Urbán and Ignacio Aedo-Cuevas, of the Universidad Carlos III de Madrid (Spain), analyse how to improve the information retrieval process (a key aspect nowadays) by the use of ontologies, in line with what we know as the Semantic Web. As a specific example of an application, the authors propose an ontology on the usability of interfaces and present a tool which, by means of the markup of resources using the terms of the ontology, enables a search to be performed which is more suitable for users’ needs and with better semantics.

Johan F. Hoorn, Elly A. Konijn, and Gerrit C. van der Veer, from the Free University of Amsterdam (The Netherlands), tackle the subject of virtual reality in their article “Virtual Reality: Do Not Augment Realism, Augment Relevance”, in which the authors see virtual reality as another kind of fiction and put forward the idea that if we want to improve the effectiveness of these kinds of systems, rather than concentrate on technology, which is only a means of transmission, we need to centre on those human aspects which make the user experience a virtual environment as if it were real. To do this they propose augmenting the relevance of virtual environments, taking into consideration features which are of interest to the users and the activities which they perform in that environment in real life.

In “GADEA: a Framework for the Development of User Interfaces Adapted to Human Cognition Diversity”, Martín González-Rodríguez, Esther Del Moral-Pérez, María del Puerto Paule-Ruiz and Juan-Ramón Pérez-Pérez, of the Universidad de Oviedo (Spain), tackle the interesting topic of user interface adaptation, proposing a tool to manage the adaptation process by means of an intelligent system. In this way the task of the developer or designer of the system is greatly simplified, since he or she need not devote any time to modelling the adaptation and can concentrate his or her efforts on improving interaction or on purely technical aspects of development.

“User Interface Patterns for Object-Oriented Navigation” shows us an example of cooperation between the corporate and the academic world. In this work, Pedro-Juan Molina-Moreno and Ismael Torres-Boigues, of CARE Technologies S.A., and Oscar Pastor-López, of the Universidad Politécnica de Valencia (Spain), have identified and specified a series of user interface conceptual patterns in business management applications. The use of these patterns not only provides a common language during the development process but also offers the possibility of validating requirements with the end user.

The topic of ubiquitous computing is dealt with in “e-CLUB: A Ubiquitous e-Learning System for Teaching Domotics” by Manuel Ortega-Cantero, José Bravo-Rodríguez, Miguel-Ángel Redondo-Duque, and Crescencio Bravo-Santos, of the Universidad de Castilla-La Mancha (Spain). In this article, the authors present a system oriented to the learning of domotics which aims to improve the training/learning process by applying two principles: the use of intermediate solutions which oblige the student to abstract and plan; and the application of a collaborative learning process. The tool is also integrated in a ubiquitous classroom with the aim of encouraging communication among students by making it possible to use a number of different devices.

In “Designing Complex Systems in Industrial Reality: A Study of the DUTCH Approach” we can read about an interesting case of cooperation between the academic and industrial worlds. The authors, Cristina Chisalita, Mari-Carmen Puerta-Melguizo and Gerrit C. van der Veer, describe how they have succeeded in transferring their experience in the design of complex interactive systems to the real world of business. Having been invited to join the development team of a high-tech company, they explain how they made use of DUTCH, a conceptual framework for the design of interactive systems, and Euterpe, the tool provided by this task-based design approach, and pass on some of the lessons they have been learning as a result of this cooperation.

Universal access is a fundamental requirement of our Information Society, as we aim to make information accessible to every citizen. This is the subject matter of the article “Towards Universal Access in the Disappearing Computer Environment” by Constantine Stephanidis, in which he analyses and discusses the requirements which emerged during the design, development and evaluation of user interfaces in the context of universal access in a society in which the computer as we know it in its most traditional sense is gradually disappearing, as every day we find more smart devices embedded in everyday objects. This article also presents a practical experience, the “Nomadic Music Box”, in which an interaction environment made up of various mobile devices provides each user with access in the most convenient way possible.

To close this monograph we have two examples of how HCI principles can be applied to commercial developments.

Firstly, Jesús Cerquides-Bueno, Enrique Hernández-Jiménez, Oscar Frías-Barranco, and Noyda Matos-Fuentes, from Intelligent Software Components (iSOCO), deal once again with the subject of meeting the needs of the user. In “Customer Interaction Personalization: iSOCO Alize” the authors present us with a multi-agent architecture which aims to adapt the user interface by making use of a series of behavioural patterns. Alize has also been applied to the world of e-Commerce, in a virtual bookstore to be more precise, under the premise that the possibility of generating personalized offers could make the difference between survival and failure for such businesses.

Finally, from Soluziona, Carlos Rebate-Sánchez, Yolanda Hernández-González, Carlos García-Moreno, and Alicia Fernández del Viso-Torre present us with “A Web Voice Solution: ConPalabras” (Spanish for “with words”), a plug-in developed with the aim of making web pages “speak”, to use the authors’ own words, by reading either their own content or the content of an attached file. In this way the auditory channel, normally underutilized, is made use of, thereby improving accessibility among other things. The authors also present several examples of where the application ConPalabras may be of interest.

We sincerely thank all the authors for their valuable contribution.

Translated by Steve Turpin

Note from the Editors: This monograph will also be published in Spanish (full issue printed, abstracts and some articles online) by Novática, journal of the Spanish CEPIS society ATI, Asociación de Técnicos de Informática, at <http://www.ati.es/novatica/>, and in Italian (online edition only, containing abstracts and some articles) by the Italian CEPIS society ALSI and the Italian IT portal Tecnoteca at <http://www.tecnoteca.it>.

Useful References on Human-Computer Interaction

Below is a non-exhaustive list of resources on the subject of HCI (Human-Computer Interaction) which, together with the articles included in this monograph, will afford the reader a broader understanding of this field.

Associations

• ACM SIGCHI <http://www.acm.org/sigchi/>
• Association for Information Systems Special Interest Group on Human-Computer Interaction <http://melody.syr.edu/hci/sig_homepage.cgi>
• The Ergonomics Society – an international organisation for professionals using knowledge of human abilities and limitations to design and build for comfort, efficiency, productivity and safety <http://www.ergonomics.org.uk/>
• Human Factors and Ergonomics Society (HFES) <http://hfes.org/>
• British HCI Group <http://www.bcs-hci.org.uk/>
• Association Francophone d’Interaction Homme-Machine <http://www.afihm.org/>
• AIPO <http://www.aipo.es/>
• Usability Professionals’ Association <http://www.upassoc.org/>
• CADIUS <http://www.cadius.org/>

Electronic Resources

• useit.com: Jakob Nielsen’s Website <http://www.useit.com/>
• Usable Web <http://usableweb.com/>
• HCI Bibliography <http://www.hcibib.org/>
• HCI Index <http://degraaff.org/hci/>
• Human-Computer Interaction Resource Network <http://www.hcirn.com/>
• Wearable Computing <http://home.earthlink.net/~wearable/>
• Usability Resources <http://www.usabilityfirst.com/>
• Bad Designs <http://www.baddesigns.com/>
• Usability News <http://www.usabilitynews.com/>

Books

Jenny Preece et al. Interaction Design: Beyond Human-Computer Interaction. John Wiley & Sons, 2002. <http://www.id-book.com/>

Mary B. Rosson and John M. Carroll. Usability Engineering. Morgan Kaufmann, 2002. <http://www.mkp.com/books_catalog/catalog.asp?ISBN=1-55860-712-9>

Jakob Nielsen. Designing Web Usability. New Riders, 1999. Spanish translation: Usabilidad: diseño de sitios web. Alhambra-Longman, 2000.

Deborah J. Mayhew. The Usability Engineering Lifecycle: A Practitioner’s Handbook for User Interface Design. Morgan Kaufmann, 1999.

Alan Dix et al. Human-Computer Interaction. Prentice Hall, 1998.

Ben Shneiderman. Designing the User Interface. Pearson Educación, 1997 (2nd edition).

Jakob Nielsen. Usability Engineering. AP Professional, 1993.

Specialized Publications

Interacting with Computers <http://www.elsevier.com/locate/intcom>

International Journal of Human-Computer Studies <http://www.elsevier.com/inca/publications/store/6/2/2/8/4/6/>

ACM Transactions on Computer-Human Interaction <http://www.acm.org/tochi/>

User Modeling and User-Adapted Interaction <http://umuai.informatik.uni-essen.de/>

Interaction D-Zine <http://www.interaction-design.nl/>

The Interaction Designer’s Coffee Break <http://www.guuui.com/>

Interactions <http://www.acm.org/interactions/>

SIGCHI Bulletin <http://sigchi.org/bulletin/>

Conferences and Congresses

• HCI International <http://hcii2003.ics.forth.gr/>
• ACM CHI <http://chi2003.org/>
• INTERACT <http://www.interact2003.org/>
• Interacción <http://suido.lsi.uvigo.es/i2003/>


USERfit Tool: A Design Tool Oriented towards Accessibility and Usability

Julio Abascal-González, Myriam Arrue-Recondo, Nestor Garay-Vitoria, Jorge Tomás-Guerra, and Carlos A. Velasco-Nuñez

This paper presents an application called USERfit Tool, developed to facilitate the use of the USERfit design methodology, which was developed in order to generate usability and accessibility specifications. Though the USERfit methodology was developed mainly for the Assistive Technology environment, it has been adapted to the design of applications for any group of users. The principal difficulty experienced in the application of USERfit has been that, because the information generated was handled using paper forms, the inclusion or elimination of users, changes of user contexts, and the need to propagate data from one form to another tended to make the application of USERfit tedious. In order to decrease the effort required to manage the information produced in the design process – and so facilitate the use of the USERfit methodology – USERfit Tool allows the reuse of previously generated designs, and the sharing of a design among remote designers, therefore maintaining coherence and compatibility.

Keywords: Accessibility, Design Tools, Methodologies, Usability.

1. Tools for Usability-Oriented Design

There exist many tools developed to help a design team collect and process all the information required to produce user-oriented specifications. Most such tools have documents to record aspects relevant to the interaction (user, task and environment characteristics, work methods, attitudes, etc.). This information is then used to produce design specifications using a specific methodology¹.

USERfit, one of these methodologies, is mainly oriented to usability and accessibility. This methodology can be defined as a helping environment in the Assistive Technology² area which produces a design philosophy that can be described as user-centred, system-oriented, and iterative-design promoting. The main purpose of USERfit is the capture, and specification, of user requirements [4].

Though this methodology was originally created to facilitate user-centred design for users with disabilities³, its focus is not limited to a group of users with some specific physical or cognitive characteristics. Therefore it is valid for any group of people in all kinds of situations.

USERfit places emphasis on detecting and applying particular characteristics of the user group to whom a product is oriented, avoiding generalizations and presuppositions.

1. Examples of tools for such methodologies are: UCD [1], EZSort [2], and EuCase99 <http://foruse.com/resources/tools/index.htm>. Many tools, such as WebSAT, Lift Online and Lift Onsite, Max, and NetRaker Suite, are especially devoted to usability-oriented web design [3].

2. Assistive Technology covers technological applications (mainly the so-called new technologies) aimed at enhancing the life conditions of elderly and disabled people.

3. It was developed within the USER (TIDE-1062) European project, carried out by the HUSAT Research Institute (UK), Sintef Unimed Rehab (N) and COO.S.S. Marche scrl (I).


Julio Abascal-González has a MS degree in Physics (Universidad de Navarra, Spain, 1978) and a PhD in Computer Science (Universidad del País Vasco-Euskal Herriko Unibertsitatea, Spain, 1987). He is an Associate Professor in the Computer Architecture and Technology Department of this University. Since 1988 he has worked in the Laboratory of Human-Computer Interaction for Special Needs, where he has led several research projects. He is a member of technical committee TC13 “Human-Computer Interaction” of the International Federation for Information Processing, and he is the founder and the first chairman of the IFIP working group WG 13.3 “HCI & Disability”. He is also editor of the HCI section of Novática, journal of ATI (Asociación de Técnicos de Informática, Spain). <[email protected]>

Myriam Arrue-Recondo has a ME degree in Computing (Universidad del País Vasco-Euskal Herriko Unibertsitatea, 1999). In January 2001 she joined the Laboratory of Human-Computer Interaction for Special Needs as a researcher on the IRIS European project. At the moment, she is an Assistant Professor in the Computer Architecture and Technology Department of the above-mentioned University. <[email protected]>

Nestor Garay-Vitoria has a MS degree in Informatics (1990) and a PhD in Computer Science (2000) from the Universidad del País Vasco-Euskal Herriko Unibertsitatea, Spain. Since 1991 he has worked in the Laboratory of Human-Computer Interaction for Special Needs, where he has collaborated on several research projects. At present he is an Associate Professor in the Computer Architecture and Technology Department of the same University. His PhD thesis gained the Extraordinary Award given by the Universidad del País Vasco in October 2002. <[email protected]>



For this reason it is a clear and easy-to-use methodology. It has the advantage of directing the designer’s mind to the diversity of the users, avoiding the common failing of considering the user group as uniform and standard. Therefore USERfit can also be useful in the teaching of usability- and accessibility-oriented design, since it allows the analysis of user features that are hidden in other contexts [5]. Some authors, such as Newell and Gregor [6] or Nielsen [7], affirm that experience in design for people with disabilities can generate a special ability to face up to general usability problems.

This paper presents the USERfit Tool, which has been designed to facilitate the design of products by means of the USERfit methodology.

2. Structure of the USERfit Methodology

USERfit includes a set of nine summarising protocols, each composed of diverse forms covering the following aspects: user analysis, activity analysis, product analysis, environmental context, product environment, product attribute matrix, requirements summary, design summary and usability evaluation. Table 1 shows a summary of their main characteristics. The purpose of these protocols is to allow the design team to collect, evaluate and develop information in order to construct a specification of the product. The attraction of this methodology is that it makes designers deal with the issues that need to be considered in order to obtain usable and accessible products. Therefore this methodology could be considered as a framework that allows design material to be combined and compared to produce specifications of usability and accessibility, rather than as a design methodology.

If we consider the application life cycle as composed of four different phases – definition of the problem, functional specification, development and evaluation – then USERfit can help in the definition and functional specification phases, and helps in the selection of the most adequate method for evaluation of the product or service that is being developed. One of its advantages is that the design team can select its preferred method – according to availability, experience and preference – from among the documented ones. Consequently it is possible to adopt one of the methods proposed by Rubin [8] or Nielsen and Mack [9].

A more detailed description of the methodology can be found in the USERfit handbook [4].

3. From USERfit to USERfit Tool

The nine protocols of the USERfit methodology allow analysis of all the aspects related to the user, the task and the environment. In order to perform this task the following process should be followed. The design team analyses the different types of potential users, their role in relation to the product or service, the design implications, etc., recording details on the User Analysis 1 (UA1) form. A User Analysis 2 (UA2) form, where the functional implications and the product characteristics are specified based on the user attributes, is created for each type of user. The User Analysis 3 (UA3) form is used to join the user requirements with desired characteristics, and to identify possible conflicts between diverse requirements and their development priorities.

Jorge Tomás-Guerra has a MS degree in Computer Science Engineering (Universidad del País Vasco-Euskal Herriko Unibertsitatea, Spain, 2000). Since 1999 he has collaborated on several research projects in the Laboratory of Human-Computer Interaction for Special Needs. At present he is a researcher working on the IRIS European project. <[email protected]>

Carlos A. Velasco-Nuñez has a MS degree in Aerospace Engineering (Universidad Politécnica de Madrid, Spain, 1990) and a PhD in Applied Mathematics and Computer Science (Universidad Carlos III de Madrid, 1999). He is currently a senior researcher at the Fraunhofer Institute for Applied Information Technology (FIT), Germany, as leader of the Competence Centre Barrier-free Information and Communication Technologies (BIKA) project. Since 1990 Dr. Velasco has held positions in research centres and private companies in the United States, The Netherlands and Spain. Before joining FIT he was an independent consultant in the field of software development and web technologies. He has led and managed several national and international R&D projects, some related to people with special needs in Information and Communication Technologies. <[email protected]>

USERfit tool – Objectives

• Environmental context – Provides a high-level summary of the product, covering such issues as the initial justification for it, who its users are likely to be, and who will purchase it.
• Product environment – Summarises what is known about the support environment for the product (including likely training, documentation, installation, maintenance and user support).
• User analysis – Identifies the range of people who should be considered in the development of the product, and describes their attributes in detail.
• Activity analysis – Identifies and describes the range of activities that people will engage in when using the product, and the implications that these will have for product design.
• Product analysis – Summarises the functional aspects of the product as they are understood, and lists these as operational features.
• Product attribute matrix – Summarises the match between emerging functional specifications and product attributes inferred from user and activity analysis.
• Requirements summary – Summarises the design features identified through user and activity analysis and their degree of match to user requirements.
• Design summary – Summarises in more detail the functional specification for the product and its operational details.
• Usability evaluation – Summarises plans for evaluation along with objectives, methods to be used and evaluation criteria; documents the degree of match between evaluation criteria and the results of evaluation activities.

Table 1: Elements of the USERfit Methodology¹

1. More information about the USERfit methodology can be obtained in “USERfit, a practical handbook on user centred design for Assistive Technology” [4].



In the same way, the scenarios of possible uses of the product, together with an analysis of its possible characteristics, are recorded on Activity Analysis 1 (AA1) forms. Each one of these scenarios is individually analysed on an Activity Analysis 2 (AA2) form, where functional implications of the activities and the product characteristics are included. The product characteristics, possible conflicts and their priorities are combined on Activity Analysis 3 (AA3) forms.

The other protocols are documented in a similar way. Therefore, the number of documents that must be handled expands rapidly as user categories, contexts of use, etc. increase, as can be seen in Figure 1. Additionally, due to the iterative nature of the design process, it could be necessary to create or delete categories, contexts, functionalities, etc. throughout the design process. So it is clear that the main problem of this methodology is the management of the information recorded on the paper forms for which it was initially conceived.

USERfit Tool was created in order to take advantage of the USERfit methodology but avoid the need to handle large numbers of paper forms. Its structure accurately reproduces the protocols defined by USERfit. It has also been possible to add some extra features, which will be detailed later.

4. USERfit Tool Development

The USERfit Tool has been developed by the University of the Basque Country and the Fraunhofer Institute for Applied Information Technology, within the frame of the IRIS⁴ European project, with the intention of facilitating the application of the USERfit methodology. It has been developed in Java, which makes it almost completely independent of the platform (Java 2 Platform, Standard Edition – also known as JDK 1.2 – or later is required). The data handled by the application are in Extensible Markup Language (XML) format. Part of the structure of the XML files can be seen in Figures 2 and 3. This structure combines, on the one hand, all the data (characteristics, attributes, scenes, etc.) related to each implied actor (user, family, maintenance staff, etc.) and, on the other hand, all the data referring to the service or product that is being designed.
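As a purely illustrative sketch — the element and attribute names below are invented for this example, not taken from the tool's actual XML schema (for which see Figures 2 and 3) — a project file combining actor data and product data might look as follows:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!-- Hypothetical USERfit Tool project file; element names are
     illustrative, not the tool's real schema. -->
<userfit-project name="accessible-authoring-tool">
  <!-- One block per implied actor (user, family, maintenance staff...). -->
  <actor type="user" id="low-vision-user">
    <attribute name="vision" value="low"/>
    <attribute name="computer-experience" value="medium"/>
  </actor>
  <actor type="maintenance-staff" id="webmaster"/>
  <!-- Data referring to the product or service being designed. -->
  <product name="Accessible Web Site authoring tool">
    <feature id="screen-reader-support" priority="high"/>
  </product>
</userfit-project>
```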

When a new project is started, the tool allows the creation of the basic forms, such as UA1. New attached forms, UA2 in this case, are automatically created when new users are added to the original UA1 form.

4. IRIS: Incorporating Requirements of People with Special Needs or Impairments to Internet-based Systems and Services. IST-2000-26211. Partners: European Dynamics (GR), University of the Aegean (GR), Fraunhofer Institute for Applied Information Technology (D), Information Society disAbilities Challenge (B), Universidad del País Vasco (E). <http://www.iris-design4all.org/>

[Figure 1: Propagation of the Information through the Structure of the Forms of USERfit]

[Figure 2: First Level of the Data Structure of USERfit]


This means that when the user adds or deletes a row to introduce or eliminate data within a form, the consequences of this action are propagated to all the forms related to the modified one. As a result, the appropriate fields are created or removed within the forms. For example, when a row containing a specific type of user is removed from the UA1 form, the UA2 and UA3 forms related to this type of user will disappear. Analogously, when a new user is introduced, new UA2 and UA3 forms are created for this user.

The creation of new forms, and the propagation of the generated information between forms, are based on internal events that are triggered when the user activates certain elements of the application. This mechanism starts with the creation (or destruction) of the necessary forms; the data compiled at one level of the USERfit environment are then transmitted to the following levels, where they will be reused.
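As an illustration only — the class and method names below are hypothetical, since the paper does not show the tool's source code — such event-driven propagation could be structured along these lines:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of the propagation mechanism described above;
// none of these names come from the actual USERfit Tool code base.
interface UserFormsListener {
    void userAdded(String userType);   // create the attached UA2/UA3 forms
    void userRemoved(String userType); // dispose of the related UA2/UA3 forms
}

class UA1Form {
    private final List<UserFormsListener> listeners = new ArrayList<>();

    void addListener(UserFormsListener listener) {
        listeners.add(listener);
    }

    // Adding a row for a new user type fires an event so that the
    // dependent forms are created automatically.
    void addUserRow(String userType) {
        for (UserFormsListener l : listeners) {
            l.userAdded(userType);
        }
    }

    // Removing the row propagates the deletion to the related forms.
    void removeUserRow(String userType) {
        for (UserFormsListener l : listeners) {
            l.userRemoved(userType);
        }
    }
}
```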

In order to facilitate the reusability of the available information, the USERfit Tool allows several files to be open simultaneously, e.g. adding implied users from other files, printing the generated forms, loading an example and consulting basic on-line help.

5. Development and Architecture of the USERfit Tool Application

The USERfit Tool consists of Java code that uses an XML style sheet processor and an XML analyser to implement the functionality of the USERfit environment. This follows the typical structure of any Java application developed around an XML analyser. It has two ways to capture data: one from the user and another from XML input documents in a specific structure.

Turbo XML 2.1 for the XML elements – which permits graphical visualization of the XML structure/schema – and JBuilder for the Java application were used in the development of the USERfit Tool application.

[Figure 3: Data Structure Associated to each Implied Actor]

[Figure 4: Development of the USERfit Tool Application]


Figure 4 shows the modules that compose the application.

Data introduced by the user through the graphical interface of the tool are saved in an XML-format file. In order to carry out the conversion from the internal structure to XML format, the system uses the Xerces package. Sun's JHelp package is used by the system when the user chooses the print option. This package is responsible for showing the relevant Hyper Text Markup Language (HTML) pages of the forms before printing them. Before printing the data, the system uses the Xerces⁵ package to transform them to XML format and then performs the transformation from the XML structure to the HTML page using the Xalan⁶ package. This package uses XSLT files to transform the XML structure.
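A minimal sketch of this XML-to-HTML printing step, written against the standard JAXP transformation API that Xalan implements; the file names are illustrative assumptions, not the tool's actual resources:

```java
import javax.xml.transform.Transformer;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.stream.StreamResult;
import javax.xml.transform.stream.StreamSource;

public class FormPrinter {
    public static void main(String[] args) throws Exception {
        // Compile the XSLT style sheet that lays a USERfit form out as HTML.
        TransformerFactory factory = TransformerFactory.newInstance();
        Transformer toHtml = factory.newTransformer(new StreamSource("ua1-form.xsl"));
        // Apply it to the saved project data to produce the printable page.
        toHtml.transform(new StreamSource("project.xml"),
                         new StreamResult("ua1-form.html"));
    }
}
```

The same transformation can also be driven from Xalan's command-line interface or embedded as above, matching the usage modes listed in footnote 6 below.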

The JHelp package is also used to view the Help and examples features within the application. The structure of the application is shown in Figure 5. This structure is composed of the user interface and the body of the application, with the following libraries: Xalan and Xerces (Apache) and Help (Sun), together with the XSLT files and the XML data storage.

6. Features of the USERfit Tool Application

The use of the tool is very simple for people familiar with the USERfit environment, as it maintains the same structure as the original paper forms. For example, the UA1 form, as displayed on the screen, is shown in Figure 6.

Due to the experience obtained in the IRIS project, some improvements have been introduced to the original USERfit environment, e.g. a new “comments” column has been added to some forms. This column allows the designer to introduce any type of relevant information in order to remember decisions taken in previous phases. A basic on-line help, which can be seen in Figure 7, has also been added to the tool. The inclusion of this feature avoids having to handle the voluminous USERfit manual.

The application allows creation of a new file, opening an existing one or consulting an example. Once a file has been opened, the user can navigate through the different forms using the tree structure on the left part of the screen.

In addition to the previously mentioned features, such as the automatic propagation of the information to associated forms, the generation and elimination of associated documents and the creation or deletion of users, contexts of use, etc., this tool allows the reuse of information. Therefore, the information created in previous designs (analysis of users or contexts, product characteristics and environments, etc.) can be reused. This feature allows time and effort to be saved in later designs.

Furthermore, USERfit Tool can be helpful with a distributed design team, as it allows the generation of homogeneous and compatible specifications. USERfit Tool also allows sharing of the information generated by each group. Therefore, the material related to a design is stored in files that can be distributed electronically between the design team members. Operationally, USERfit Tool has been used to generate the functional specifications of the Domosilla⁷ and Heterorred⁸ projects. Each member of the consortium has generated specifications for diverse contexts and then all the specifications have been easily incorporated in a unique document. The successive versions of this document have then been circulated among the members of the consortium until consensus has been achieved.

7. Usability of the USERfit Tool

USERfit Tool itself has been developed following a usability-orientated design process, based on the analysis of the possible users’ characteristics, the tasks and the context. The usability of the generated prototypes has been analysed in the following contexts.

1. Evaluation by experts. The tool has been used by two usability experts who have simulated the process of the specification design of an accessible Web site authoring tool. Procedures for form storing, recovering and printing have been improved as a result of these analyses.

2. Heuristic evaluation. The tool has been used by a group of ten PhD students who were asked to generate the usability specifications of two examples of real interaction systems. In addition, the comments they made during the work sessions, the forms used, and the time required to fill in the forms were recorded.

5. Xerces allows XML analysis and generation. There are analysers for Java and C++ that fulfil the W3C XML and DOM (level 1 and 2) standards, as well as the SAX (version 2) standard. The analysers are highly modular and configurable.

6. Xalan is an XSLT transformer for turning XML documents into HTML, text or other types of XML documents. It implements the W3C recommendations for XSL transformations and the XML Path Language (XPath). It can be used from the command line, in an applet or servlet, or as a module in another program.


7. Domosilla: “Study, evaluation and design of an interconnection system between a local network for wheelchair control (DXBus) and a domotic network (EHS)”. TIC2000-0087-P4. Partners: University of Seville, Bioingeniería Aragonesa and the University of the Basque Country.

8. Heterorred: “Study and development of a heterogeneous personal area network for interoperability and access to wireless services and communications”. TIC2001-1868-C03. Partners: University of Seville and the Universities of Saragossa and the Basque Country.


[Figure 5: Architecture of the Application – Userfit interface (Java/Swing), Userfit Java application, Xalan (Apache), Xerces (Apache), Help (Sun), XSLTs, Userfit XML document]


3. Tests of use. Finally, the USERfit Tool has been used in the User Requirements Analysis module of the Domosilla project, where the utility of the new “comments” column in facilitating the understanding of other work groups was verified.

From these tests, it can be concluded that the tool adjusts to the necessities and ways in which various types of users work when producing usability specifications in a design context.

8. Conclusion

Usability-oriented design of interfaces that guarantee accessibility to any kind of people, independently of their physical characteristics, requires the use of adequate methods and tools. USERfit proved to be a very satisfactory design methodology to cover these goals, but for large designs it can be quite tiresome because it was operated using pen and paper. The overload due to the need to handle multiple forms complicates the operation. Bearing that in mind, the design of a tool to automate the most tedious parts of its use was addressed. In addition, to facilitate the application of the methodology, USERfit Tool enables the reuse of previous specifications and allows the design to be shared among remote groups with homogeneous results. Usability analysis confirms that this tool fits the needs and working styles of its users: the designers.

The Laboratory of Human-Computer Interaction for Special Needs is currently studying two enhancements for USERfit Tool. One is the creation of a Computer-Supported Cooperative Work (CSCW) environment to support interactive remote development of specifications. The other is the development of a particular tool to generate the user interface code for specific application domains (e.g. user interfaces for mobile devices).

In addition, the CE-DG XIII, owner of the USERfit author rights, has granted permission to the IRIS consortium to make the IRIS Tool accessible to anyone wanting to use it. USERfit Tool can be obtained at <http://www.sc.ehu.es/userfit_tool>.

Acknowledgements

The work presented in this paper has been partially financed by the European Commission within the IST-2000-26211 IRIS project (Incorporating the Requirements of People with Special Needs or Impairments to Internet-based Systems and Services), developed by European Dynamics (GR), the Fraunhofer Institute for Applied Information Technology (D), the University of the Aegean (GR), the Universidad del País Vasco – Euskal Herriko Unibertsitatea (E) and the ISdAC International Association (Information Society disAbilities Challenge) (B).


[Figure 6: The UA1 Form]

[Figure 7: On-line Help]


References

[1] J. Scanlon, L. Percival. UCD for Different Project Types. <http://www-106.ibm.com/developerworks/library/us-ucd/index.html?n-us-372>

[2] J. Dong, S. Martin, P. Waldo. “A User Input and Analysis Tool for Information Architecture”. <http://www-3.ibm.com/ibm/easy/eou_ext.nsf/Publish/410/$File/EZSortPaper.pdf>

[3] A. Chak. “Usability Tools: A Useful Start”. New Architect (Web Techniques), August 2000. <http://www.newarchitectmag.com/documents/s=4906/new1013637228/index.html>

[4] D. Poulson, M. Ashby, S. Richardson (Eds.). USERfit, a practical handbook on user centred design for Assistive Technology. Brussels-Luxembourg: ECSC-EC-EAEC, 1996. Also at <http://www.stakes.fi/include/1-0.htm>

[5] J. Abascal, C. Nicolle. “The Application of USERfit Methodology to Teach Usability Guidelines”. In J. Vanderdonckt & C. Farenc (Eds.), Tools for Working with Guidelines. Springer, 2001. ISBN: 1852333553.

[6] A. F. Newell, P. Gregor. “Human Computer Interfaces for People with Disabilities”. In M. Helander et al. (Eds.), Handbook of Human-Computer Interaction, 813–824. North Holland/Elsevier Science, 1997. ISBN: 0444818766.

[7] J. Nielsen. Usability Engineering. Morgan Kaufmann, 1993. ISBN: 0125184069.

[8] J. Rubin. Handbook of Usability Testing. Wiley, 1994. ISBN: 0471594032.

[9] J. Nielsen, R. Mack (Eds.). Usability Inspection Methods. Wiley, 1994. <http://www.acm.org/sigchi/chi95/proceedings/tutors/jn_bdy.htm>


An Annotation Ontology on Usability Evaluation Resources: Design and Retrieval Mechanisms

Elena García-Barriocanal, Miguel-Ángel Sicilia-Urbán, and Ignacio Aedo-Cuevas

Current Web resource retrieval mechanisms – namely, search engines, link catalogues or link databases – have well-known limitations on their usefulness, especially when searching for highly specific resources. In this work, we describe an alternative interface based on annotating resources with terms inside an ontology, and its application to the usability evaluation domain. The structural principles of the ontology and a prototype based on the RDF meta-data description language are described. In this prototype, the specification of the query is carried out through browsing the ontology’s inheritance hierarchy, and resources are also retrieved according to the semantic relationships defined in the conceptualization.

Keywords: Ontology, Search Interfaces, Semantic Retrieval, Usability Evaluation.

1. Motivation

In the human-computer interaction (HCI) field, the usability of user interfaces is a key aspect of the quality of software systems, especially since software applications are heavily used by non-technical users. Interactive interfaces should be designed to ease the use of these systems, focusing on users’ effectiveness, efficiency and satisfaction as defined in [14].

A number of techniques aimed at facilitating usability can be integrated into the application development life cycle [17], one of the most important being usability evaluation in its different forms. A thorough preparation and execution of evaluation procedures guarantees that important usability factors are properly taken into account. Researchers and practitioners in this area must search for information about these evaluation techniques and also for reports on the experience of others regarding their application, covering diverse domains or practical scenarios. This practical need is only partially covered by current Internet search and exploration mechanisms. For example, a typical query issued to a conventional Internet search engine about concrete experiences in which questionnaire-based evaluation techniques have been carried out would probably be formulated as the query string “questionnaire usability evaluation report”. Unfortunately, such a query will return a huge list of hits without filtering the resources that actually provide the empirical data that the user is looking for, resulting in reduced efficiency and effectiveness of the search interaction.

We have proposed the development of a resource access tool for the usability evaluation domain that allows for more precise search interactions that retrieve more accurate result sets. In this work, we describe our first prototype of such a tool, which uses knowledge-based – and more specifically, ontology-based – semantic descriptions of Web resources.

The rest of this paper is structured as follows. Section 2 summarizes the limitations of current Internet resource search paradigms and the key role of ontologies in the elaboration of new paradigms. Section 3 describes the internal design of our ontology about usability evaluation, and Section 4 briefly describes the prototype built on it. Finally, conclusions and possible future extensions are provided in Section 5.

Elena García-Barriocanal is an IT engineer. She has taught at the Universidad Pontificia de Salamanca in Madrid (Spain), in the University School Cardenal Cisneros and later in the Dept. of Computer Science of the Universidad de Alcalá (Spain), where she is currently lecturing at the University School. Her research work is centred on human-computer interaction, in particular on accessibility and usability evaluation, and on ontology engineering. <[email protected]>

Miguel-Ángel Sicilia-Urbán is an IT engineer. He has taught at the Faculty of Computer Science of the Universidad Pontificia de Salamanca (Spain) and later at the Universidad Carlos III de Madrid (Spain). During the early years of his teaching career he was also working in various IT companies as an analyst and software architect. He carries out his research work in the DEI Laboratory of the Universidad Carlos III de Madrid, and his main lines of research are adaptive hypermedia and fuzzy logic, which are also the subject of his thesis. <[email protected]>

Ignacio Aedo-Cuevas graduated and received his doctorate in Computer Science at the Universidad Politécnica de Madrid (Spain). Since 1991 he has been teaching at the Universidad Carlos III de Madrid, where he is currently a lecturer/associate professor in the Dept. of Computer Science. His main fields of research are everything to do with interactive systems and especially their use in education. He has co-authored several books and research articles in international journals within his fields of research and he sits on the editorial committee of the journal “Educational Technology and Society”. <[email protected]>


2. Web Information Retrieval Paradigms

2.1. Current Web Resource Search Tools

Finding accurate information on the Internet or on corporate Intranets is not an easy task. Essentially, three kinds of techniques are currently used to search for content: general-purpose search engines (like Google¹), link catalogues (like Usable Web²) and query systems that access conventional static-schema databases (like Computer Database³). Each kind of tool has certain limitations that hamper its usefulness in concrete situations.

Roughly speaking, general-purpose search engines are ultimately founded on sophisticated term matching [2]. Due to the strong influence of user interaction in information retrieval, and in spite of the user-centred designs [1] applied to these systems [20] [15], one of their main shortcomings is the presence of large amounts of irrelevant or noisy search results.

In addition, link catalogues follow a static classifying criterion, and provide navigational access to information. In other words, retrieval follows a predetermined navigational pattern on pre-established levels, with a single path to any of the resources (if we discount repetition).

Finally, the main problem of systems that use predefined schemas or codes for retrieval is their lack of flexibility, since they provide lists of unrelated codes – which have to be memorised or consulted – or unrelated information elements, so that the semantic relationships of the information are not exploited.

In this work, we propose a three-level model – realized in several related ontologies – for the semantic annotation of Web resources about usability evaluation, which allows the construction of access interfaces capable of using these annotations. In consequence, this work can be labelled as having a “Semantic Web”⁴ approach. Concretely, the annotation technique is similar to that already used in Topic Maps [13] applications.

2.2. Ontologies and the Semantic WebFollowing Tim Berners-Lee’s [4] definition, the Semantic

Web is based on providing meta-data to existing resources,using the current technological infrastructure of the Web. Then,meta-information can be used to enable Internet machine-to-machine “semantic” communication, thus improving informa-tion readiness and reliability for humans. In consequence, theultimate aim of this “new” Web is that of enabling effective andefficient information interchange [18].

To achieve these goals, software systems require access to structured collections of information and to appropriate formalisms (currently based on mathematical logic) that enable reasoning processes to a certain extent. Ontologies provide this support and can be used to annotate Web resources, i.e. to add semantic information to them. The term ontology, used in philosophy to refer to 'theories of existence', has been adopted by the artificial-intelligence research community to refer to categorizations with rich semantic relationships between their terms [5]. OIL [9][10] and DAML [12] are two of the most widely used ontology definition languages, along with their combination, DAML+OIL. These languages are equipped with well-defined semantics and allow the manipulation of complex taxonomies and logical relationships between entities on the Web. All of them can be translated to RDF [16] and its companion RDFS [6], which adds a minimal frame-based semantics. Currently, a number of graphical-user-interface editors can be used to generate ontology descriptions, for example OILEd (<http://www.cs.man.ac.uk/oil/request.asp>) and Protégé (<http://www.smi-web.stanford.edu/projects/protege/download.html>).

3. Towards a Human-Computer Interaction Ontology

In order to annotate Web resources about usability evaluation, an ontology must be engineered. Its development process should be carried out, in a consensual way, by experts in the area forming an ontogroup. Our approach has been to engineer a preliminary version of the ontology, without a formal consensual process, which can be used as a first ontology version for subsequent collaborative conceptualization processes.

Ontology development has been driven by bibliographical sources, adding to the ontology only terms and relationships that are documented in the literature of the area. This approach guarantees that the knowledge has been extracted from commonly agreed sources (such as the ACM bibliographic information collection, <http://www.hcibib.org>, which has been used).

3.1. Ontology Structure: Levels and Meta-classes

The need to represent bibliographical sources, together with the consideration of Web resources as instances of ontology elements, has resulted in a three-level and three-layer structure (depicted in Figure 1).

The central level is the main axis of the ontology (labelled as Usability Evaluation Domain Level in Figure 1) and it describes the terms of the domain of "Usability Evaluation". The "rightmost" level contains the documentary sources in which the terms and relationships of the domain are described (Documental Sources Level in Figure 1). Sources may be, for example, books, book chapters, papers in scholarly journals, or proceedings of human-computer-interaction-related conferences or workshops. Kinds of documents are represented by classes, which enables the definition of the necessary attributes to properly characterize them. The "leftmost" level describes the taxonomy of Web resources (Annotable Resource Level in Figure 1). This taxonomy is partially borrowed from the existing KA2 ontology [3], and represents the different kinds of scientific and technical publications that can be found on the Web.

The ontology is structured "horizontally" in three layers that represent different levels of abstraction.


[Figure 1: Ontology Design Illustrated by Sample Elements. The figure spans the three levels (Annotable Resources, Usability Evaluation Domain, Documental Sources) and the three layers (M2 meta-class layer, M1 class layer, M0 instance layer), with sample elements such as UsabilityEvaluationMethods, HeuristicEvaluation, CognitiveWalkthrough, WAMMI, ISO 9241/11, and the reification links between M1 and M0.]

The top layer (denoted by M2 in Figure 1) provides meta-information, represented as a set of meta-classes and meta-relationships, and affects both the domain term and the documentary sources levels. In this layer, two relationships, both called defined_in, between domain meta-classes and sources, and between meta-relationships and sources, are used to model the sources in which the domain ontology elements can be found. Domain terms and relationships, along with concrete bibliographic references, make up the abstraction layer denoted as M1, in which the terms describing Web resources and their relationships to domain terms are also defined. For example, we know that usability evaluation can be carried out using various methods and techniques, such as those described in Jakob Nielsen's book "Usability Engineering". Among these methods, we have inspection evaluation, and according to Nielsen & Mack's book "Usability Inspection Methods", inspection can be undertaken using any one of several methods, including "cognitive walkthrough", defined in sources like the paper in the ACM CHI'90 conference proceedings titled "Testing a walkthrough methodology for theory-based design of walk-up-and-use interfaces" by Clayton Lewis. So, following our example, inside the M1 layer we may find instances of the meta-class domain_term like "Usability Evaluation Method", "Inspection Method" or "Cognitive Walkthrough", and also instances of the meta-classes paper and book, representing the mentioned sources.

Finally, the third layer (labelled as M0 in Figure 1) is used for the annotation of resources as instances of terms in the M1 layer of the ontology. In some early Semantic Web approaches, resources were annotated during their creation, including the annotations inside the markup of the Web page [8]. We have taken a different approach, because external Web resources cannot be modified. We have represented existing Web resources as ontology instances of classes in the M1 layer that reference the actual resource by storing its URI, so that annotations are physically moved away from the resource.
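To make this external-annotation scheme concrete, the following sketch builds such an M0 annotation with the current Apache Jena API (the authors used Jena 1.1, whose package layout differed). It is only an illustration under assumptions: the namespace, the class OnLineConfPaper, the property evaluates, and the instance URIs are hypothetical stand-ins for the real ontology vocabulary, which the paper does not publish.

import org.apache.jena.rdf.model.Model;
import org.apache.jena.rdf.model.ModelFactory;
import org.apache.jena.rdf.model.Property;
import org.apache.jena.rdf.model.Resource;
import org.apache.jena.vocabulary.RDF;

public class ExternalAnnotationSketch {
    // Hypothetical namespace; the paper does not publish the real ontology URIs.
    static final String NS = "http://example.org/ue-ontology#";

    public static void main(String[] args) {
        Model model = ModelFactory.createDefaultModel();
        Resource onLineConfPaper = model.createResource(NS + "OnLineConfPaper"); // an M1 class
        Property evaluates = model.createProperty(NS, "evaluates");

        // The annotation node is named by the annotated paper's URI; the paper
        // itself (an external Web page) is never modified.
        Resource annotation = model.createResource(
                "http://example.org/papers/heuristic-eval-www-prototype");
        annotation.addProperty(RDF.type, onLineConfPaper);

        // Link the resource to a reified domain term (cf. IND_Heuristic_Evaluation below).
        annotation.addProperty(evaluates,
                model.createResource(NS + "IND_Heuristic_Evaluation"));

        model.write(System.out, "N-TRIPLE");
    }
}

Naming the instance by the resource's URI is one way to "store" the URI; a dedicated URI-valued property on the instance would serve the same purpose.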

At the intersection of the M0 layer and the domain level, instances of domain classes are found. For example, WAMMI (a concrete Web usability evaluation questionnaire) is an instance of the Questionnaire class. In addition, some classes of the M1 level are reified in the M0 level in order to be able to directly associate resources with them. For example, "Heuristic Evaluation" is reified in the instance labelled "IND_Heuristic_Evaluation". This reification is required for abstract terms that were considered to stay at the M1 level.

3.2. Ontology Definition with Protégé 2000

Protégé 2000 was selected to build the ontology because of the need for more than one meta-level: its knowledge model [19] is powerful enough to represent them, taking into account that they are "strict" levels, i.e. an entity at layer Mi can only be an instance of a concept at layer Mi+1.

A meta-class and a meta-relation have been used to represent the elements at the intersection of M2 and the domain level. In addition, a :BIBLIOGRAPHY_SOURCE meta-class and its subclasses were used to represent documentary sources. In order to specify the bibliographic source(s) in which a domain term is defined, the relation between domain terms and their sources is represented by a mandatory, multi-valued template_slot labelled defined_in, from the :DOMAIN_TERM meta-class to the :BIBLIOGRAPHY_SOURCE meta-class. This slot is "propagated" to the instances of the meta-class (and its subclasses) in the second layer. The tool's knowledge model prescribes that these kinds of slots are inherited as own_slots in instantiated classes, and that they are not further inherited in subsequent subclasses or instances. We have used this to model the bibliographic sources of :DOMAIN_TERM instances, but not to enable the definition of this relationship at the M0 layer, since the only relationship that :DOMAIN_TERM instances maintain at the M0 layer is the own_slot available_at with OnLine_Resource instances. In the :DOMAIN_RELATION meta-relationship, a template_slot is defined to represent the source in which relationships between domain terms are described.
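Once exported to RDF(S), this pattern reduces to a class that is itself typed by a meta-class and that carries defined_in as an own slot. The fragment below is a minimal Jena sketch of that shape only; the namespace and all element names are hypothetical renderings of the Protégé constructs, not the actual export.

import org.apache.jena.rdf.model.Model;
import org.apache.jena.rdf.model.ModelFactory;
import org.apache.jena.rdf.model.Property;
import org.apache.jena.rdf.model.Resource;
import org.apache.jena.vocabulary.RDF;

public class MetaClassPatternSketch {
    static final String NS = "http://example.org/ue-ontology#"; // hypothetical

    public static void main(String[] args) {
        Model m = ModelFactory.createDefaultModel();
        Resource domainTermMeta = m.createResource(NS + "DOMAIN_TERM"); // M2 meta-class
        Resource bookClass = m.createResource(NS + "Book");
        Property definedIn = m.createProperty(NS, "defined_in");

        // M1: a domain class that is itself an *instance* of the M2 meta-class.
        Resource cognitiveWalkthrough = m.createResource(NS + "CognitiveWalkthrough");
        cognitiveWalkthrough.addProperty(RDF.type, domainTermMeta);

        // The own slot defined_in links the M1 class to its bibliographic source.
        Resource sourceBook = m.createResource(NS + "UsabilityInspectionMethods");
        sourceBook.addProperty(RDF.type, bookClass);
        cognitiveWalkthrough.addProperty(definedIn, sourceBook);

        m.write(System.out, "N-TRIPLE");
    }
}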

4. An Ontology-Annotated Resource Search Prototype

As we have said, searching for annotated resources requires novel access techniques based on Semantic Web techniques [7]. As a proof of concept, we have developed a Java library called meta-dataKB that can be used in combination with any Java Servlet-enabled Web server. This library reads the RDF(S) markup generated by Protégé, using the JENA 1.1 RDF processing libraries.
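The loading step such a library has to perform can be sketched in a few lines. The code below uses the current Apache Jena API rather than the Jena 1.1 API the paper mentions, and the file names are hypothetical; it only illustrates reading the Protégé-exported schema and instance files into one queryable model.

import java.io.FileInputStream;
import java.io.InputStream;

import org.apache.jena.rdf.model.Model;
import org.apache.jena.rdf.model.ModelFactory;

public class OntologyLoaderSketch {
    public static Model load(String schemaFile, String instanceFile) throws Exception {
        Model model = ModelFactory.createDefaultModel();
        try (InputStream schema = new FileInputStream(schemaFile);
             InputStream instances = new FileInputStream(instanceFile)) {
            model.read(schema, null);     // RDFS: classes, subclass links, slots
            model.read(instances, null);  // RDF: domain instances and annotations
        }
        return model;
    }

    public static void main(String[] args) throws Exception {
        Model kb = load("ue-ontology.rdfs", "ue-annotations.rdf");
        System.out.println("Loaded " + kb.size() + " RDF statements.");
    }
}

Because the library only sees RDF files, swapping ontologies amounts to swapping the files passed to load(), which is exactly the ontology-independence claimed in the conclusions.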

The main characteristic of the search prototype is that users do not specify queries by typing words. Search criteria are somewhat predefined by the structure of the ontology, since they are the classes defined in it. This entails that query criteria are determined by the contents (including future changes) of the ontology. The retrieval process is made up of two internal processes:

(1) In one, the terms selected by the user are used to directly pick the resources that are their instances. It should be noted that if more than one class is specified (more than one term is picked), only instances that belong to all the selected classes will be shown (conjunctive multi-criterion search).

(2) In the other, the semantic relationships between the selected terms are traversed, so that related instances will also be retrieved.

Figure 2: Overall View of the Main Page of the Search Prototype (it shows a separate dependent window in which the documentary sources for the clicked term are shown).

Resource retrieval begins with the specification of the query, starting from the so-called entry points of the ontology. Entry points are defined by the ontology designer, and are typically the most significant terms that allow an initial specification of a query. Once one or more entry points are selected, the user may execute the query directly or refine the query criterion. Refinement proceeds by showing the direct subclasses of the classes that were previously selected. This process can be repeated iteratively until the subclasses in the ontology for the selected terms have been exhausted.
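Criterion refinement therefore maps onto a one-level query over rdfs:subClassOf. A minimal sketch (Apache Jena API, names hypothetical, not the prototype's actual code):

import java.util.ArrayList;
import java.util.List;

import org.apache.jena.rdf.model.Model;
import org.apache.jena.rdf.model.Resource;
import org.apache.jena.vocabulary.RDFS;

public class RefinementSketch {
    // Direct subclasses of a previously selected class, offered to the user
    // as finer-grained query criteria; repeating this one level at a time
    // reproduces the refinement loop until no subclasses are left.
    static List<Resource> directSubclasses(Model m, Resource selected) {
        List<Resource> refinements = new ArrayList<>();
        m.listSubjectsWithProperty(RDFS.subClassOf, selected)
         .forEachRemaining(refinements::add);
        return refinements;
    }
}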

Search provides two types of results: resources annotated as instances of the selected terms, and the relationships that exist between instances of the selected terms (multi-criteria search). The first set of results is internally computed by obtaining the intersection of the extents of the selected classes. An extent is defined as the set of instances of a given class, including instances of its subclasses. The second set of results is computed in two steps: first, the union U of the extents of the selected classes is obtained, and second, the set of ontology relationships that connect instances in U is selected.
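Both result computations can be sketched in a few methods. The code below is an illustration, not the meta-dataKB implementation: it assumes an acyclic rdfs:subClassOf hierarchy and treats every non-rdf:type statement between two retrieved instances as a reportable relationship.

import java.util.HashSet;
import java.util.Set;

import org.apache.jena.rdf.model.Model;
import org.apache.jena.rdf.model.RDFNode;
import org.apache.jena.rdf.model.Resource;
import org.apache.jena.rdf.model.Statement;
import org.apache.jena.rdf.model.StmtIterator;
import org.apache.jena.vocabulary.RDF;
import org.apache.jena.vocabulary.RDFS;

public class SearchSketch {

    // Extent of a class: its direct instances plus the instances of all its
    // subclasses (assumes the subclass hierarchy is acyclic).
    static Set<Resource> extent(Model m, Resource cls) {
        Set<Resource> result = new HashSet<>();
        m.listSubjectsWithProperty(RDF.type, cls).forEachRemaining(result::add);
        m.listSubjectsWithProperty(RDFS.subClassOf, cls)
         .forEachRemaining(sub -> result.addAll(extent(m, sub)));
        return result;
    }

    // Result set 1: conjunctive multi-criterion search (intersection of extents).
    static Set<Resource> conjunctiveSearch(Model m, Set<Resource> selectedClasses) {
        Set<Resource> common = null;
        for (Resource cls : selectedClasses) {
            Set<Resource> ext = extent(m, cls);
            if (common == null) { common = ext; } else { common.retainAll(ext); }
        }
        return common == null ? new HashSet<>() : common;
    }

    // Result set 2: relationships connecting instances in the union U of the extents.
    static Set<Statement> connectingRelations(Model m, Set<Resource> selectedClasses) {
        Set<Resource> union = new HashSet<>();
        for (Resource cls : selectedClasses) {
            union.addAll(extent(m, cls));
        }
        Set<Statement> relations = new HashSet<>();
        StmtIterator it = m.listStatements();
        while (it.hasNext()) {
            Statement st = it.next();
            RDFNode obj = st.getObject();
            if (!RDF.type.equals(st.getPredicate())       // skip typing statements
                    && obj.isResource()
                    && union.contains(st.getSubject())
                    && union.contains(obj.asResource())) {
                relations.add(st);
            }
        }
        return relations;
    }
}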

In what follows, the search process will be illustrated by a concrete case in which the goal is to obtain conference papers related to questionnaire-based usability evaluation. First, the entry points Inquiry and Publication (see Figure 2) must be selected. The documentary sources of any term in the ontology can be consulted by clicking on its name, as shown in Figure 2.

Since the query is still not specific enough, the next step is to refine the criteria by selecting the terms Questionnaire and Article, and subsequently Questionnaire and Conference Paper (see Figure 3).

The computation of the results is triggered when the "buscar" (search) button is clicked. The results are shown in Figure 4. It should be noted that no instances belonging to the selected classes are retrieved, but a number of relations are selected. Resources are shown along with their specific semantic relationships. This functionality facilitates the selection of the most suitable resource(s), in our example, papers that use questionnaires in evaluation.

Since only annotated resources are shown, the noise associated with conventional search engines is absent (e.g. pages of usability consulting companies or general information about questionnaires are not retrieved).

5. Conclusions and Future Work

The work presented here is a proof of concept regarding the usefulness of employing ontologies for the annotation of, and access to, resources in the domain of usability evaluation.

From the viewpoint of the users of the tool, criteria specification is restricted only by the ontology itself: more detailed and richer ontologies yield more precise results and avoid noise. The ontology design uses the novel feature of providing the bibliographic annotation of the domain entities in a separate level. The search mechanism differs from other approaches [11] because it provides an iterative approach to query construction, which avoids the writing of complex search sentences. From the technical viewpoint, the implementation approach makes the prototype independent of the ontology used, so that different ontologies can be used by simply replacing the RDF files that are loaded by the application.

One of the objectives of this work is to foster the foundation of initiatives for the collaborative definition of a human-computer-interaction ontology, which will enable new applications of a diverse kind. Also, the use of an ontology enables the sharing of a well-defined corpus of concepts and terminology to facilitate research and knowledge interchange.

Figure 3: Refining Criteria in the Search Prototype.

Figure 4: Search Results for Two Criteria.

Future work could include obtaining experimental data about the usability of the search prototype, and about other variants of ontology-based resource access that can be developed in a similar fashion.

References

[1] B.L. Allen. Information Tasks: Toward a User-Centred Approach to Information Systems. Academic Press, 1996.
[2] R. Baeza-Yates, B. Ribeiro-Neto. Modern Information Retrieval. Addison-Wesley, 1999.
[3] R.V. Benjamins, D. Fensel. "Community is Knowledge! in (KA)2". In Proc. of the Workshop on Knowledge Acquisition, Modelling and Management (KAW'98), Banff, Canada, 1998.
[4] T. Berners-Lee, J. Hendler. "Scientific publishing on the 'semantic web'". Nature, April 2001.
[5] T. Berners-Lee, J. Hendler, O. Lassila. "The semantic web: a new form of web content that is meaningful to computers will unleash a revolution of new possibilities". Scientific American, May 2001.
[6] D. Brickley, R.V. Guha. Resource Description Framework (RDF) Schema Specification 1.0. W3C Candidate Recommendation, 27 March 2000.
[7] A. Eberhart. "Applications of the Semantic Web for document retrieval". In Proc. of the Semantic Web Working Symposium 2001 (position papers), California, 2001.
[8] D. Fensel, S. Decker, M. Erdmann, R. Studer. "Ontobroker: transforming the WWW into a knowledge base". In Proc. of the Workshop on Knowledge Acquisition, Modelling and Management (KAW'98), Banff, Canada, 1998.
[9] D. Fensel et al. "OIL in a nutshell". In Proc. of the 12th European Workshop on Knowledge Acquisition, Modelling and Management. Springer-Verlag, New York, 2000.
[10] D. Fensel et al. "OIL: An ontology infrastructure for the semantic web". IEEE Intelligent Systems, vol. 16, no. 2, 2001.
[11] J. Heflin, J. Hendler. "Searching the Web with SHOE". In Proc. of the AAAI Workshop on Artificial Intelligence for Web Search, 2000.
[12] J. Hendler, D. McGuinness. "The DARPA Agent Markup Language". IEEE Intelligent Systems, vol. 15, no. 6, 2000.
[13] JTC1/SC34 Technical Committee. ISO 13250, Topic Maps: Information Technology. Document Description and Markup Languages. International Organization for Standardization, 1999.
[14] TC 159/SC4 Technical Committee. ISO 9241, Ergonomic requirements for office work with visual display terminals (VDTs), Part 11: Guidance on Usability. International Organization for Standardization, 1998.
[15] K.S. Kim. "Individual Differences and Information Retrieval: Implications on Web Design". In Proc. of the 6th Conference on Content-Based Multimedia Information Access, April 2000.
[16] O. Lassila, R. Swick. "Resource Description Framework (RDF) Model and Syntax Specification". W3C Recommendation, 22 February 1999.
[17] D. Mayhew. The Usability Engineering Lifecycle. Morgan Kaufmann, 1999.
[18] E. Miller. "The W3C semantic web activity". In Proc. of the International Semantic Web Workshop, July 2001.
[19] N.F. Noy, R.W. Fergerson, M.A. Musen. "The knowledge model of Protégé-2000: Combining interoperability and flexibility". In Proc. of the 2nd International Conference on Knowledge Engineering and Knowledge Management, October 2000.
[20] W. Sugar. "User-centred perspective of information retrieval research and analysis methods". Journal of the American Society for Information Science, 44(7), 1995.

Virtual Reality: Do Not Augment Realism, Augment Relevance

Johan F. Hoorn, Elly A. Konijn, and Gerrit C. van der Veer

Virtual Reality (VR) is not technology and VR is not new. VR is fiction, and fiction is as old as humanity. Users of computer systems deal with virtual reality all the time. Typically, they do not distinguish functionality from machinery but create their own User's Virtual Machine. Because users do not clearly discriminate between (their own created) fiction and (misunderstood) reality, delusions can be insidiously destructive to satisfaction with, and efficient use of, the system. How to design the experience of fiction, and how to develop technologies for implementing this experience, such that users are satisfied while the system remains unobtrusive? We describe a new model for the perception and experience of fictional characters within situations, while VR is discussed for its truth-value, its degree of being realistic, and its place in fiction and reality. We argue that a VR-experience gains more from increased emotional relevance than from higher realistic resolutions.

Keywords: Experience, Fiction, Mixed Realities, Relevance, Virtual Reality.

1. Introduction

Virtual Reality (VR), virtual environments, and augmented reality refer to elaborated technologies in interactive systems that render illusory effects of realism. Sensory feedback mechanisms would bring the user a physically present, three-dimensional, and natural experience [28, pp. 332–333]. We propose that technology is but a means to render a VR-experience, and that the true discussion should address the human factors that allow users to have an experience of a virtual environment as if it were real. We adhere to some of the slogans prompted by [5] and provide an elaboration on the 'fun' and 'beauty' aspects of VR. VR-experiences are not as awkward a situation as it seems from people walking around with wired goggles, gloves, and headphones with 3-D surround sound. Users of everyday computer systems create their own User's Virtual Machine (UVM), a pragmatic rather than accurate conception of what the software and hardware they work with are about [36][39][40]. Therefore, they conceptually navigate through a fictitious environment but take it for real.

In this paper, we argue that VR is an instance of the broader concept of fiction, that it has its predecessors in the history of art and theatre, and that VR and fiction are usual attributes of the workplace. We will offer an analysis of the experience of fiction and the place VR occupies in it, which is then extended into a model on the perception and experience of fictional characters and situations. At the heart of the proposed model is an involvement-distance conflict that the user experiences while engaging in a fictional situation. We argue that the intensity of possible involvement is lower in fiction (e.g., VR) than in reality, due to the lack of personal relevance to the user. We therefore recommend that VR designers focus on developing features that sustain relevance to the goals and concerns of the user, for instance by performing GroupWare Task Analysis rather than mechanically enhancing the realistic features of an application.

Johan F. Hoorn started with the empirical study of literature, but soon widened his scope to psychology and finished a Ph.D. on the psycho(physio)logical processing of metaphors. Together with Elly A. Konijn, he worked as a postdoctoral fellow on the study of fictional characters. Again as a postdoctoral fellow, he is currently working on a second Ph.D., in computer science this time, about the human dynamics in designing interactive systems. This project is granted by the Dutch Ministry of Economic Affairs (Centre Agency). On the occasion of the Annual Meeting of the Academia Europaea held in June 2001, Johan was welcomed as a Burgen Scholar "in recognition of excellent academic achievement." <[email protected]>

Elly A. Konijn graduated in social and emotion psychology, and wrote her Ph.D. in Theatre Studies on the psycho(physio)logical responses of professional stage actors while performing. She developed the so-called 'task-emotion theory,' which explains how actors use emotions aroused by doing their jobs to form lifelike character emotions. Elly replicated the effects in the USA with (sometimes quite famous) "Method Actors". Her first postdoc study was concerned with task-emotions of spectators of theatre. During her second postdoc project, Elly worked with Johan F. Hoorn on the processes involved in observing fictional characters. She then received a so-called ASPASIA grant, with which she works as an Associate Professor in the Department of Communication Studies on the effects of different reality-levels in perceiving (multi)media. <[email protected]>

Gerrit C. van der Veer was one of the first in The Netherlands who worked with computers. He switched from psychology to computer science, where he works as an Associate Professor on making interactive systems user-friendly. He is actively working on the development of a new design method called "DUTCH" (Designing for Users and Tasks from Concepts to Handles) and a related task analysis method called "GTA" (Groupware Task Analysis). He is Cooperating Societies Liaison for the European Association of Cognitive Ergonomics (EACE), member of the Dutch Local Special Interest Group of ACM SIGCHI, representative of NGI, the Dutch Computer Society, in IFIP's Technical Committee on Human-Computer Interaction (TC 13), and reader in Interactive Systems. <[email protected]>

2. VR as an Everyday Experience

The idea users have about their interactive system is a virtual conception of reality. A UVM encompasses the knowledge the user should have about operating a computer system. This knowledge may be a blend of the outer appearance of the workstation, of the plugged-in gadgets, the interface style (using animation agents or a basic prompt), and the form and contents of the software. As long as it works, it is unproblematic that the concepts of the user do not meet the reality that is 'behind' the machine. However, the user does employ a network, remote data collections, and application semantics. If reality and fiction clash, then, the designers are to blame. They should have made the machine transparent enough to lower virtuality and to augment reality in the heads of the users. However, there are only so many technical details an ordinary user can handle without being caught in a trap of incomprehensible instructions (Figure 1). Merely providing more facts, then, is not the way to represent functionality in a user-friendly design; tuning in to the user's needs and interests is.

Designing, in a comprehensible way, the total of user-relevant knowledge of the technology encompasses both semantics (what task delegation the system provides) and syntax (how task delegation is articulated by the user). UVM-specification or 'detailed design' concerns stipulating the functionality, arranging the dialogue between users and system, and specifying the presentation displays. Designing the functionality does not entail planning the application architecture with data models; it involves the user-relevant functionality as communicated by the user interface. Dialogues concern the structure and dynamic behaviour of the interface irrespective of the precise presentation. For dialogues, the modality of an interface component, its contents, and the organization of its contents may be more important than its appearance, but appearance is not insignificant for the user experience. Presentation design devises how the user interface is 'staged' (e.g., monoscopic or stereoscopic displays). In VR, presentations can be cross-modal, combining visual, auditory, and tactile feedback.

How to do presentation design on the UVM that is accurate yet intelligible to the user? Our statement is that virtual reality is misunderstood as a concept. Users misunderstand it because they naively misconceive the machine. System designers misunderstand it because they hold it synonymous with 3-D graphics and electronically equipped helmets and gloves for experiencing 'total immersion.' We claim that to design VR, experience instead of technology is the key word. VR is not a technique; it is one possible outcome of the biological capacity to imagine, to think in advance, and to be prepared for situations to come. In humans, the output of this capacity may be literary fiction or metaphor, but animals too can imagine short-term future situations. Imagine a fox calculating how to pounce on a mouse. The fox creates a mental model of the future situation ('Where will the mouse go?') by understanding the present situation ('Where is the mouse now? What are the (im)possibilities in the terrain to help or hinder catching my mouse?'). The fox, then, differentiates between the here-and-now and the there-and-after, much the same as the system designer who first describes the present work situation (Task Model 1) and then the system's contribution to a future situation (Task Model 2). Both have in mind a state of affairs that they consider (a representation of) reality, and both have a (hopefully well-informed) fantasy about what the future situation will be, which may be considered (hopefully realistic) fiction. However, system requirements can be quite unstable, so that at the day of release the system no longer completely fulfils the client's needs. In hindsight, this makes the fantasy of the designer about the future situation a fiction composed of mixes of realistic and unrealistic features. This brings everyday system design close to engaging a virtual-reality situation. Indeed, voices have been raised to employ VR for exploring requirements satisfaction of the system in different scenarios [7][24][38]. If one can play future scenarios in the mind as well as on stage or in VR, it means that the quintessence of VR is not the way the fantasy is raised but the human capacity to generate fiction, and that the experience of this fiction may be judged 'realistic.'

Figure 1: UVM dialogues: what the designer thinks the user needs and what the user needs (authors' translation). Source: <http://www.clubmetro.nl/actueel/strips>

We maintain, then, that VR is not a modern invention but a different word for 'realistic fiction,' while fiction is the substantiation of the (human) capacity to imagine different situations while contemplating different possibilities. This ability is not necessarily confined to the arts and theatre but frequently happens in everyday life as people design future systems, make up stories, have incomplete memories, or plainly lie. We consider fiction, therefore, to be everything that is make-believe, imaginary, invented, and not (yet) empirically true [26][29].

Unfortunately, daily use of the word 'fiction' has a connotation of falsehood and, therefore, of triviality. We would not want to call VR worthless, however. On the contrary, imagining fictional worlds and engaging mediated persons (fictional characters such as agents) may be functional for primary concerns in real life. It may be informative for future encounters and situations, may extend one's (virtual) coping behaviour, and may support subjective well-being. At least six functions of fiction can be specified. Because we consider dealing with VR an extension of imagining fiction, we suppose that the same functions apply. First, mediated persons in fiction fulfil a modelling function in that one learns how to behave in specific circumstances [1]. In VR, for example, the avatar of a famous micro-surgeon in the virtual operating room may be a role model for medical students. Second, fiction can be helpful to explore dangerous, impossible, or expensive events [8][27]. In VR, a flight simulator offers a safe place to train for moon landings, star wars, or terrorist attacks. Third, in fiction people can explore personal truths to experience their own emotions and comprehend unclear aspects of them relative to the contexts in which the emotions occur [26]. Fourth, fiction can satisfy the need for emotional experiences, which recompense tedium and listlessness and motivate our behaviours (cf. [1][9, p. 475, p. 371]). Fifth, fiction helps to re-experience or re-live the past, as with family photographs and home videos (see below). Sixth, fiction may be entertaining and relaxing. In VR, users love seeing stunning events that are pretty impossible to encounter in real life, and they generally experience the sophisticated technology as overwhelming. Many of these functions cover a broader range than the common vision on VR makes us believe, which usually focuses on points two and six.

VR as an experience is not new (cf. [12]). Three-D graphics may allow one to navigate through a virtual labyrinth; however, Greek taletellers already let the Minotaur rage through the minds of the audience. Renaissance painters discovered how to render 3-D images of people and spaces. In the 19th century, two-way mirrors staged ghostly images: via a big, slanted glass pane on a dark stage, a mirror downstage reflected the light on an actor hidden there to the audience, who in the dark did not see the mirror but did perceive the sudden apparition of Hamlet's father. In using videos, modern theatre makers have people interact with physically present and absent actors in physically present and absent scenery. Cinematic development 'passed the audience through the window' into the centre of action via IMAX, thereby enhancing the experience of virtual presence in the scene [33]. All this does not mean that there are no differences in the possibilities that the various techniques offer. Yet, one thing connects these examples: reality is twisted by fabrication and artistic rendering, and thus, by fiction.

3. The Reality-Fiction Friction

There are many perceptual cues in paintings, drawings, and other cultural products that try to simulate reality. Perspective drawing in 2-D suggests depth in space but is not an exact copy of 3-D space. The same applies to photo, film, and video: flat pictures merely bring to mind reminiscences of the experience of spacious environments. Moreover, by choosing the camera angle, selecting who is shot, cutting what is undesired, and disregarding smell, the wedding, birthday, and funeral pictures merely reflect an interpretation of reality, or an imperfect rendering to say the least. Interventions with reality also occur in news items, and therefore news coverage may be realistic, perhaps, but it is certainly not reality, let alone true. A digital image does not do the trick of copying reality either. The image consists of pixels, whereas nothing else does in real life: zoom in and you do not end up with realistic details but with an abstract painting of colourful blocks. Even a Xerox copy or computer scan of a text is the distorted picture of a research article, not the article itself. What is conveyed is the message, which depends on human interpretation. The same applies to VR. Despite technical perfection, and despite the claim that VR pretends not to be fiction any more, the displays, suits, and gloves may render an experience of a real environment, but they do not render the environment itself. Hence, the translation of the term virtual reality should be something like reality represented in fiction, with as many features as possible perceived as realistic and the smallest possible number of features perceived as unrealistic.

Reality is the wider concept of which fiction is a part. The object that contains or expresses the virtual or fiction, the screen, the book, or the actor as a person, is real and does not belong to the fiction. The props, the paint, the pixels, and even the fiction itself all belong to reality, but what they try to convey to us, the contents, is in the fiction: everything that it wants us to believe is fiction, no matter how reality-based. This calls for a differentiation of what is called reality on at least two levels. On the one hand, reality refers to a 'standard set of beliefs' (authority claims, belief systems, science) about what is real and unreal, true and false, on a contextual level. On the other hand, reality refers to appraisals of individual observers on an epistemic level within a given context. If someone in real life goes to a bank to collect some money but suddenly the bank is being robbed and a big shooting takes place, the whole scenery may appear as unreal to the individual ('Just like the movies'). Reversely, an event in fiction (e.g., VR) may present certain phenomena, situations, or features that can be appraised as being more realistic than others. Such epistemic appraisals often correspond to the intended genre. In Sci-Fi, for example, assuming that a body has a molecular structure is fairly realistic, but that it can be formed into a paradoxical kinetic bulb (Figure 1) is nonsense. In Figure 1, the words are realistic according to the scientific genre, but the suggested relations among them are unrealistic and make the genre science fiction.

The different reality-levels are graphically represented in Figure 2: within reality, situations, phenomena, or features can be more unrealistic (closer to fiction) or more realistic (closer to truth). Likewise, in fiction, situations, phenomena, or features can be more realistic (more closely resembling reality) or more unrealistic (more remote from reality). For instance, when a designer makes a task model, it is a metaphor for the reality of the workplace. Yet, it describes real life at work and can make predictions on workflow that are empirically confirmed. So much for the realistic part. The task model, however, also maintains unrealistic assumptions, for instance, when tasks are defined but not performed by anyone, or when people perform tasks they are not allowed to do.

Figure 2 exposes the field of reality and fiction in which the friction in the UVM takes place. Fiction is part of reality, and the events, objects, and other entities in reality are attributed a truth-value, ranging from 1 (true) to 0 (false) and depending on the (culturally defined) beliefs and mindset of the individual. The multivalued logic-continuum between those extremes (dotted line) allows for doubt, for the possibility that events may occur and objects may exist with a chance-value between 100% (true) and 0% (false). Doubt, then, and possibilities move between reality and fiction, causing the friction in the UVM when entities are labelled wrongly: 'Why doesn't the system tell me what to do?' (computers can think: fiction taken for reality); 'It can't be that a computer program identifies a poet' (computers cannot analyse poetry: reality taken for fiction, see [13]). Beliefs about what is there in the world and what is not have a perceptual side ('I see the picture on my mouse pad') and an experiential side ('I love that picture on my mouse pad'). Accordingly, reality assessment (or epistemic appraisal) is strongly influenced by (emotional) experiences, like most perceptions are. In a depression, life is more negative than in a positive mood, colleagues are less trustworthy than they seem, or there is more violence on the screen than usual. Whether an error message refers to genuine problems or merely covers up sloppy programming arouses verdicts like 'Exactly!' or 'Fake!' respectively. What is considered real and true is called realistic, and what is considered fictional and false is called unrealistic. The tough part is in the friction-area of the UVM: the realm of what might, and then again might not, be possible (cf. "mixed realities" [3][22]). Reality that is appraised as unrealistic refers to those events and entities that are very unlikely to happen, for instance, that a computer virus will infect UNIX-based operating systems. What is currently meant by VR is realistic fiction, much the same as reality-TV series like Rescue 911: based on facts, yet mediated through human intervention. Parachuting from a plane may be bound to happen in real life, but with the headset on and imperfect graphics it is still a made-up situation. Particularly in VR, then, the computer interface is what in theatre is called the 'liminal space,' the area between the actor as a real person and the actor as a fictional character (cf. [10]). Much like a movie director, the system designer is a producer of fiction as far as the UVM is concerned. Quite like the actor should not be mixed up with the character s/he plays, the system should not be mixed up with the UVM. A user of VR who takes on a role and is not into professional acting may forget about the liminal space or interface, mixing up the character with the self, caught up in the illusion of things being real (cf. [37]).

One may counter that although Figure 2 reserves truth for reality, certain unrealistic environments may yet be true within the fiction. For example, a role-playing game about dragon-fighters supposes magic caves and supernatural volcanoes to hide in, and that the breathtaking lizard spits fire is true to the fantasy as well. However, this is an informal use of truth, meaning that environmental features and the fortunes of the characters nicely follow the logic that is defined at the outset of the narrative (if dragon, then it can spit fire). Even when things are true to life, that is not the same as being true in life. Put differently, something is true or real according to contextual logic.

Today's tendency in VR is to boost the number and quality of the realistic features at the cost of the unrealistic features (Figure 2, gray area). However, no matter how many realistic features are brought in, the represented reality remains virtual. In view of the state of the art, there is always a contextual cue (e.g., the head-mounted display unit) that makes the user aware things are not real (cf. [30]). There may not even be a need to capitalize on realistic rendering to obtain the desired effects. Certain observers of theatre (or other fiction) state that they were swept off their feet by emotion, apparently totally immersed, although they knew that everything on stage was faked. From cave-paintings to VR, sensory-rich environments have been produced to heighten involvement and entertaining experiences. To evoke overwhelming effects, technical sophistication may be helpful, and VR seems to fulfil this need. However, involvement with the contents (e.g., the message or task) demands personal involvement by means of emotional relevance.

In sum, VR offers an experience of reality, which is multi-layered. We may know that reality represented via media such as VR is a represented reality and not reality itself. However, as long as it looks enough like reality in a technical sense, we get accustomed to the suggested reality status (e.g., the newsreader looks like a real person so the message is real). For the sake of this convention, we take the artificiality for granted (e.g., the newsreader is framed by a television set).

[Figure 2: Continuum of truth-values on beliefs about reality and fiction and how they are experienced. A horizontal axis runs from 1 (true) through ? (possible) to 0 (false); Reality and Fiction (with VR inside Fiction) each range from realistic to unrealistic under epistemic appraisals. Note: truth test by whatever criterion (e.g., scientific, religious, or cultural).]

Therefore, the sophisticated technology of VR may be powerful, but it is not enough to initiate a reality-experience that is true-to-life. Basic to reality-experiences that are true-to-life is that the experience is emotionally loaded, that it is accompanied by the feeling that 'something is really going on', touching upon basic concerns, motives, or goals of the user. The basis of emotion psychology is personal meaning: without relevance, no emotion occurs. Thus, VR needs personal relevance for the user to arrive at the intended (total) involvement, as manifested in the experiences of immersion and presence.

4. Total Immersion Parallels Identification

In business and public administration, in the entertainment industry and the arts, it has become widely accepted that VR is the apex of interactive systems [15], but nobody really grasps what it is, due to a "lack of human factors understanding" (Kalawsky in [28, p. 343]). The experience that would separate VR from all other techniques is that the fiction is so realistic that it would let people undergo total immersion (e.g., [14]), submitting themselves as if the fictional environment were real, forgetting that the situation is not real: the dream of every illusionist (cf. [37]). The idea that fiction should be as realistic as possible to arouse the deepest experience has a long tradition, vide Aristotle's [11] 'theatre mimics life.'

A nice parallel can be drawn with how people supposedly engage fictional characters. They would strive for a state of identification, sharing the features and fortunes of the character. In taking over its perspective, the emotions of the character would be paralleled by emotions of the self (cf. [20][21][25]). In the same vein, Hollywood filmmakers want the audience to be 'absorbed by' and 'drawn into' the film. Likewise, so-called 'Method Actors' think that they should literally become the character they want to play, but it turns out that even these professionals hardly ever manage [16]. Professional actors, used to fictional worlds, need at least four runs to accommodate to the virtual environment before feeling mildly involved (see [32]). Much like Hollywood directors, VR-technologists try to mimic life as much as possible in their virtual theatres and, in doing so, advocate one of the oldest mix-ups in cultural history: readers, spectators, and performers are supposed to completely identify with a character, to become one with the film or the fictional or virtual environment, in order to have the maximal experience, which would be the optimal experience.

Theoretically and empirically, the counter-evidence is clear ([16][17][19][34][35][42]), and it should be concluded that the identification hypothesis (i.e., total immersion) does not hold. As patients in an operating room, we would not want the heart surgeon to feel really inside our veins, a little person coping with strong blood pulses, fighting the attacking white blood cells (doctor mistaken for infection), and trying to keep from drowning [31, pp. 222–223]. Like professional actors and directors, surgeons and pilots using VR should keep their distance, keep their heads cool while being involved in performing a tough job, and be strongly aware that the representations in their UVMs are not the same as reality! Moreover, people do not necessarily like the highest involvement possible (identification or total immersion) because it may be too threatening (car crash simulation), too desirable (cyber sex), or plainly inappropriate (the virtual operating theatre). Trying to mimic real life 'to the max,' then, by only augmenting the realistic features ignores that involvement is subjective and multifaceted, and that people keep realizing VR is fiction. The surgeon knows s/he is operating virtually as long as the real-life consequences are not apparent (e.g., deadly blood loss). A pilot may fly a plane directly into a building as long as s/he stays alive in the flight simulator. Exactly because it is fiction, VR has the advantage of exploring possibilities without heavy emotional interference. Thus, we argue for an optimal involvement, regulated by co-occurring distance, which predicates the highest appreciation of VR.

5. Perceiving and Experiencing Fiction

Instead of total immersion in the situation or identification with the character, we would rather envisage VR as a more moderate "subjective feeling of involvement" with the fictional situation [28, p. 336], achievable only by an intentional "suspension of disbelief" [31, p. 223]. Founded on psychological theories of emotion, art, and interpersonal attraction, [18] and [19] formulated and found empirical evidence for a new theory on how spectators of theatre, art, film, TV, and other media establish affective relationships with fictional characters. The PEFiC theory (Perceiving and Experiencing Fictional Characters) states that identification is merely one example of a diversity of involvement-distance conflicts that someone may experience with fictional characters. The theory holds that appreciation for fictional characters is a trade-off between the parallel processes of involvement (a psychological tendency to approach) and distance (a psychological tendency to avoid).

Because the perception of characters strongly depends on the situation in which they act, the theory on Perceiving and Experiencing Fictional Characters (PEFiC) can easily be applied to VR-situations. The users of VR resemble characters in fiction in that they play themselves while navigating through a world of make-believe (a bit like avatars). In other words, they are observers of their own performance, and it should be clear that they, as a fictional character, along with other characters, help compose the virtual situation. The PEFiC theory [18][19] may be helpful to make up a little for the "lack of human factors understanding" in VR. PEFiC explains how involvement and distance towards fictional characters (FCs) and situations transpire, and how their interrelationship determines the final appreciation for the FC within its situations. During the encoding phase, the subjective appraisal of the ethic, aesthetic, and epistemic features of the FC and its situational context takes place. The ethics of an FC refers to its moral goodness and badness, aesthetics to its beauty and ugliness, and epistemics to its realistic and unrealistic qualities. Such appraisals pertain to the social norms someone maintains, which are usually influenced by, and may occasionally deviate from, significant others.

The comparison phase in PEFiC includes the subjective evaluations of the FC with respect to oneself, as expressed by personal relevance, mutual similarity, and valence towards the FC.

Given the situation in which the FC acts, the FC's features and fortunes are evaluated on their personal meaning to the observer's goals and concerns. Similarity refers to the degree to which the FC's features resemble those of the observer (same/different). Valence refers to the intrinsically positive or negative outcome expectancy observers have about the FC's features to support (positive valence) or harm (negative valence) the goals and concerns at stake (cf. [9]). A goal of a VR-user may be to weld pipelines under water for an offshore oil platform. The attacking virtual shark is an FC potentially harming this goal, thus evoking negative valence.

Finally, in the response phase, the outcomes of the intertwined encoding and comparison phases are reflected in the felt tendencies to (psychologically) approach or avoid the FC, called involvement and distance, respectively. In [19], Konijn and Hoorn found that involvement and distance occurred in parallel, which together provided a basis to predict an evaluative end-state, expressed by the final appreciation for an FC. [19] found that involvement with the FC and its situation generally was supported by positive appraisals, that is, by good, beautiful, and realistic features, which could be augmented by relevance, similarity, and positive valence. However, such tendencies were compensated (and sometimes reversed) by co-occurring negative appraisals: bad, ugly, and unrealistic features evoked distance, which was generally intensified by dissimilarity and negative valence. This means, for example, that an ugly and bad character could yet be liked because s/he was judged realistic and compassionate. In fact, [19] found that a bad character being beautiful evoked significantly more distance than bad characters not judged as beautiful. Ethics, being morally good or bad, appeared the strongest factor in propelling involvement and appreciation relative to distance. Interestingly, the degree of distance for good FCs determined the final appreciation more strongly than involvement did, whereas for bad FCs, the degree of involvement predicted appreciation best. Appreciation for the eight different FCs that were used in [19] did not show as much variation as the co-occurring measures of involvement and distance did. These empirical findings for motion pictures, then, show the intricacy of involvement judgments concerning fiction and call for a more sophisticated theory of VR than the simple 'total immersion' assumption.

The supposed involvement-distance conflict in the PEFiC theory, therefore, relies on the dynamics of the conflict to approach or avoid a desirable goal that can yet be harmful [6, chap. 22][23]. Evidence reveals that, initially, approaching tendencies generally are stronger than avoidance, which in recent work on impression formation is called a 'positivity offset' (e.g., [4][2]). Further evidence reveals that the modulating effect of avoidance on approach is stronger than vice versa, which is called a 'negativity bias' (ibid.). The gradient of avoidance is steeper than that of approach, and the tendency to approach or avoid a goal would be stronger the nearer the subject is to it. Thus, the initial level of approach to a desired goal is higher than that of avoidance, but across time avoidance grows faster than approach (the lines crossing each other). Figure 3 shows the predictions of the PEFiC model on the involvement-distance trade-off. Applied to VR, the user enters a situation with a goal in mind ('Can I land the plane? Will I win the game?'). Due to the positivity offset, the initial involvement in the situation is higher than distance, because most users are open to new experiences (curiousness is an inborn human concern [9]). While the situation develops, distance compensates more strongly for involvement than the other way round, due to the negativity bias. However, reaching the goal becomes more urgent over time, so that relevance increases and amplifies the magnitudes of involvement and distance. Because the base level of involvement is higher than that of distance, involvement reaches the optimum sooner than distance. At the junction of involvement and distance (the equilibrium), users may experience doubt, indolence, or ambivalence. When distance outweighs involvement too much, the final appreciation will be negatively influenced, lowering appreciation (Figure 3, right of equilibrium). Optimal involvement, therefore, actually means finding an optimal involvement-distance balance (quite some involvement counterbalanced by less distance), which excites the highest appreciation (left of equilibrium).
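The PEFiC papers state these predictions only qualitatively; no equations are published. Purely as an illustration of the described shape (higher involvement baseline, steeper distance gradient, relevance amplifying both over time), one can plug in hypothetical numbers; every constant below is invented solely to make the curves cross at an equilibrium, as in Figure 3.

public class InvolvementDistanceSketch {
    public static void main(String[] args) {
        double i0 = 0.6, d0 = 0.2;   // positivity offset: involvement starts higher
        double ai = 0.03, ad = 0.08; // negativity bias: distance grows faster
        for (int t = 0; t <= 10; t++) {
            double relevance = 1.0 + 0.1 * t; // urgency of the goal amplifies both
            double involvement = relevance * (i0 + ai * t);
            double distance = relevance * (d0 + ad * t);
            String phase = involvement > distance
                    ? "I > D (high appreciation)"
                    : "D >= I (equilibrium reached/passed)";
            System.out.printf("t=%2d  I=%.2f  D=%.2f  %s%n", t, involvement, distance, phase);
        }
    }
}

With these invented constants the curves cross at t = 8, reproducing the qualitative picture of Figure 3; the real trade-off is, of course, an empirical matter, not an arithmetic one.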

Note that in fiction, the features that feed involvement and distance may be different from those in reality. In fiction, killerdroids and mechwarriors offer us completely different feature sets than our bosses and colleagues at work (or do they?). Entering a combat zone in VR may be exciting and challenging, whereas in reality it may be just scary. Moreover, the overall relevance of a situation in which one acts is higher in reality. That leads to the position depicted in Figure 4, where the fiction-reality discrimination is integrated with the experience of involvement-distance trade-offs with situations.

We suggest that there is a difference in the experience of situations in reality and in represented reality (VR or other fiction) that probably has to do with the different ratios between the magnitudes of involvement and distance that are raised. As said, other features are relevant in reality than in fiction. Raising your energy level (eating) and not getting run over by a truck have stronger and more long-lasting consequences in reality than in fiction. You do not partake in reality of your own free will, but you do in fiction.

[Figure 3: Predictions on the ratios between involvement (I) and distance (D) while experiencing a (virtual) situation. Axes: I/D magnitude against situation over time; appreciation runs from high, left of the equilibrium where the I and D curves cross, to low, right of it.]

If not, unwillingly being involved in fiction is a form of deceit, a game unrecognized by the deceived, and therefore misconceived as reality by the victim (cf. [37]). Because the events in VR (and other fiction) are less consequential for daily life (cf. [16]), we can explore more hazardous situations than in reality, also emotionally. Therefore, our level of tolerance acquires more elasticity in VR and can be extended to include not only the positive but also the negative side of things. In other words, it can be fun to explore badness, ugliness, magic, situations dissimilar to those in our daily life, and to live through life-dangers.

As long as fictional situations stay inconsequential to real life, features relevant to survival become less urgent, so that a shift of attention may occur to features that otherwise are out of control or out of reach. In using VR for training the inspection of a nuclear fall-out area, for instance, more attention can be given to finding casualties than to avoiding becoming one. Simulation of various business processes may help to focus on requirements, design trade-offs, and malfunctions in the production line without having to pay attention to real bankruptcy. However, not only are other features relevant; the intensity of relevance altogether is probably higher in reality than in fiction. In reality, there is more at stake ('don't let my hard disk crash') than in fiction ('please, regenerate my systems'). In reality, there is only a drastic way out; in fiction, the exit is a push button or the end of a computer program. Because fiction can be controlled much better than reality, fiction can have an emotionally relaxing effect: one regulates the dosage of the emotional impact, as it were. In sum, relevance is distributed over the features differently in reality as compared to fiction, and the baseline of relevance in reality is higher overall (Figure 4).

6. Augmenting Relevance in VR

It seems, then, that to enhance the relevance of a VR-experience, the VR environment should provide features that are personally relevant to the user, and the acts performed in that environment should have real consequences for daily life. If a brain surgeon is aware that the incision via a VR-system has no consequences for a real patient (e.g., no cure, no damage), the relevance of his/her activities will stay relatively low (as in training). Personal relevance is increased, however, when the VR-system is connected to a real patient and the long-distance operation perhaps has irreversible effects. A mistake can be deadly, and for sure this will augment the relevance of the VR-experience, because now the technique should offer everything that is needed for safe conduct and leave out everything that is irrelevant (and thus distracting), no matter how realistically conveyed. Conversely, distorted images, white noise, flawed grip, or a picture of the operation bed are irrelevant as long as the task is performed optimally and the surgeon does not complain. Frankly, it may even be helpful that certain features are not rendered realistically at all. As in brain imaging techniques, it may be beneficial that, for instance, the tissues of the primary motor and sensory cortices are coloured green and purple, respectively. Because in VR usually no real-life consequences are at stake, setting clear goals and touching upon relevant concerns, goals, or motives of the users should increase relevance.

A first step towards augmenting the relevance of features in VR is to perform GroupWare Task Analysis [41]. Designing modern VR-systems means first of all specifying the system for (multiple) users in actual situations. The main challenge in designing VR-systems is the specification of the knowledge and needs that future users and other stakeholders will have when using or participating in the system in practical situations. Hence, the major mission in early VR design is knowledge and needs analysis, and knowledge and needs specification. The viewpoint should be that of distributed cognition in a multidisciplinary design team collaborating with non-design-expert stakeholders. The design of VR-systems requires frequent communication with the client of the design and with various different types of users and stakeholders. Multiple representations are needed, and providing techniques, models, and tools for understanding these is a major quest for the support of distributed cognition. For instance, at a certain level in early VR design, the engineer will need a formal conceptual model of the envisioned system, while the ergonomist needs to assess the feasibility of this specification by confronting the prospective users with a scenario or sketch. This process can only succeed if the various representations (scenario, sketch, formal conceptual model) are views on the same underlying design knowledge.


Figure 4: Reality-fiction judgments are based on category knowledge, epistemic appraisals, and beliefs on truth. The involvement-distance conflict in reality develops from a higher baseline than in fiction. [Figure omitted: the magnitudes of I and D plotted over time for Fiction, VR and Reality (each in realistic and unrealistic variants), against levels of high appreciation, equilibrium and low appreciation; legend: 1 = true, ? = possible, 0 = false.]


The present paper was intended to help develop the theory needed to design VR-experiences, from which a set of representations with their connection to an underlying ontology can evolve, and design tools can be built to support this process.

7 Conclusions

We made a case that VR is one of many fictional genres and that VR should be conceived of as an experience rather than a technique. As an experience, it coincides with certain moments in the history of art and theatre on the one hand, but on the other hand it is also a natural attribute of the everyday work experience (i.e. regarding the UVM). We provided an analysis of the experience of fiction and of where VR fits into this typology, which was then integrated with a model of perceiving and experiencing fictional characters and situations. The tenet of the PEFiC model is an involvement-distance conflict that the user experiences while entering a fictional situation. It is assumed that the intensity of this conflict is lower in fiction than in reality as a function of less personal relevance to the user.

We recommend that in designing a VR-experience, the striving for ‘augmented reality’ had better become a striving for ‘augmented relevance.’ Augmenting relevance should be done not so much by increasing the number and quality of realistic features but by increasing the personal meaning of a simulated (work) situation, providing features that tune in to the goals and concerns of the user, which can be uncovered by GroupWare Task Analysis. In games, relevance can be increased when the user does not know all the rules (similar to real life), but beware: it may also scare people off. For the creation of an interface at the workplace, the rules of conduct must be clear. We suggest studying and making use of the elasticity of the tolerance level in fiction by challenging users with features that are a little bad, ugly, and unrealistic and that arouse some negative valence and dissimilarity with their daily practice. However, strike a balance in which the involvement-raising features persist more strongly. To do so, requirements engineering for VR-systems should begin with relevance assessment.

Acknowledgements

This paper is part of the Innovation Oriented research Program (IOP) for Human-Machine Interaction entitled Integrating Design of Business Processes and Task Analysis, granted by the Centre Agency of the Dutch Ministry of Economic Affairs in The Hague, grant Mmi99009. The contribution of Elly A. Konijn is supported by an ASPASIA grant of the Faculty of Social and Cultural Sciences at the Free University of Amsterdam. The collaboration between Johan F. Hoorn and Elly A. Konijn was instigated by the Netherlands Organization of Scientific Research (NWO), grant 301-80-79b.

References

[1] R. Buck. The Biological Affects: A Typology. Psychological Review 106, 2 (1999) 301–336.
[2] R. F. Baumeister, E. Bratslavsky, C. Finkenauer, K. D. Vohs. Bad is Stronger than Good. Review of General Psychology 5 (2001) 323–370.
[3] S. Benford, C. Greenhalgh, G. Reynard, C. Brown, B. Koleva. Understanding and Constructing Shared Spaces with Mixed-reality Boundaries. TOCHI 5, 3 (1998) 185–223.
[4] J. T. Cacioppo, W. L. Gardner, G. G. Berntson. The Affect System Has Parallel and Integrative Processing Components: Form Follows Function. J. of Personality and Social Psychology 76, 5 (1999) 839–855.
[5] J. P. Djajadiningrat, C. J. Overbeeke, S. A. G. Wensveen. Augmenting Fun and Beauty: A Pamphlet. Proc. of DARE 2000 on Designing Augmented Reality Environments. ACM Press, New York NY (2000) 131–134.
[6] J. Dollard, N. E. Miller. Personality and Psychotherapy: An Analysis in Terms of Learning, Thinking, and Culture. McGraw-Hill, New York Toronto London (1950).
[7] D. L. Donald, N. Andreou, J. Abell, R. J. Schreiber. The New Design: The Changing Role of Industrial Engineers in the Design Process through the Use of Simulation. Proc. of the 1999 Simulation Conf. on Winter Simulation (1999) 829–833.
[8] R. A. Emmons, P. M. Colby. Emotional Conflict and Well-being: Relation to Perceived Availability, Daily Utilization, and Observer Reports of Social Support. J. of Personality and Social Psychology 68 (1995) 947–959.
[9] N. H. Frijda. The Emotions. Cambridge University Press, New York (1986).
[10] B. Laurel. Computers as Theatre. Addison-Wesley Longman, Boston MA (1991).
[11] L. Golden. Poetics. Englewood Cliffs NJ (1968).
[12] C. Greuel, P. Caire, J. Cirincione, P. Hoberman, M. Scroggins. Aesthetics & Tools in the Virtual Environment. Proc. of the 22nd Annual ACM Conf. on Computer Graphics. ACM Press, New York NY (1995) 490–491.
[13] J. F. Hoorn, S. L. Frank, W. Kowalczyk, F. van der Ham. Neural Network Identification of Poets Using Letter Sequences. Literary and Linguistic Computing 14, 3 (1999) 311–338.
[14] D. F. Keefe, D. Acevedo Feliz, T. Moscovich, D. H. Laidlaw, J. J. LaViola. CavePainting: A Fully Immersive 3D Artistic Medium and Interactive Experience. Proc. of the 2001 Symp. on Interactive 3D Graphics. ACM Press, New York NY (2001) 85–93.
[15] D. de Kerckhove. Connected Intelligence. Somerville House Publishing, Toronto (1997).
[16] E. A. Konijn. Acting Emotions. Amsterdam University Press, Amsterdam (2000).
[17] E. A. Konijn. Spotlight on Spectators: Emotions in the Theatre. Discourse Processes 28, 2 (1999) 169–194.
[18] E. A. Konijn, J. F. Hoorn. Perceiving and Experiencing Fictional Characters. In: Locher, P., Smith, L. (eds.): Proc. of the XVI Cong. of the Int. Association of Empirical Aesthetics, New York, August 9–12, 2000. Montclair State University, Upper Montclair NJ (2000) 75.



[19] E. A. Konijn, J. F. Hoorn. Some Like it Bad: Testing a Model on Perceiving and Experiencing Fictional Characters. Submitted. Vrije Universiteit, Amsterdam (2002).
[20] H. Kreitler, S. Kreitler. The Psychology of the Arts. Duke University, Durham NC (1972).
[21] E. Maccoby, W. C. Wilson. Identification and Observational Learning from Films. J. of Abnormal and Social Psychology 55 (1957) 76–87.
[22] P. Milgram, F. Kishino. A Taxonomy of Mixed Reality Visual Displays. IEICE Trans. Inf. Syst. E77-D, 12 (1994).
[23] N. E. Miller. Some Recent Studies on Conflict Behaviour and Drugs. American Psychologist 16 (1961) 12–24.
[24] A. Mowshowitz. Virtual Organization. Comm. of the ACM 40, 9 (1997) 30–37.
[25] K. Oatley. A Taxonomy of the Emotions of Literary Response and a Theory of Identification in Fictional Narrative. Poetics 23 (1995) 53–74.
[26] K. Oatley. Why Fiction May Be Twice as True as Fact: Fiction as Cognitive and Emotional Simulation. Review of General Psychology 3, 2 (1999) 101–117.
[27] M. Peckham. Man’s Rage for Chaos: Biology, Behaviour and the Arts. Chilton, Philadelphia (1965).
[28] J. Preece, Y. Rogers, D. Benyon, S. Holland, T. Carey. Human-Computer Interaction. Addison-Wesley, New York Amsterdam Tokyo (1994).
[29] D. A. Prentice, R. J. Gerrig. Exploring the Boundary between Fiction and Reality. In: Chaiken, S., Trope, Y. (eds.): Dual-process Theories in Social Psychology. The Guilford Press, New York NY USA (1999) 529–546.
[30] G. Robertson, M. Czerwinski, M. van Dantzich. Immersion in Desktop Virtual Reality. Proc. of the 10th Annual ACM Symp. on User Interface Software and Technology. ACM Press, New York NY (1997) 11–19.
[31] B. Shneiderman. Designing the User Interface: Strategies for Effective Human-Computer Interaction. Addison-Wesley, Berkeley CA Amsterdam Tokyo (1998).
[32] M. Slater, J. Howell, A. Steed, D.-P. Pertaub, M. Garau. Acting in Virtual Reality. Collaborative Virtual Environments – Proc. of the Third Int. Conf. on Collaborative Virtual Environments. ACM Press, New York NY USA (2000) 103–110.
[33] S. Spiegel. Interactivity and Involvement in Large Format Digital Cinema. Keynote address at the VIIth Cong. of the Int. Society for the Empirical Study of Literature, Toronto Ca (2000).
[34] E. S.-H. Tan. Emotion and the Structure of Narrative Film: Film as an Emotion Machine. Erlbaum, Mahwah NJ (1996).
[35] P. H. Tannenbaum, E. P. Gaer. Mood Change as a Function of Stress of Protagonist and Degree of Identification in a Film-viewing Situation. J. of Personality and Social Psychology 2, 4 (1965) 612–616.
[36] M. Tauber. On Mental Models and the User Interface. In: van der Veer, G.C., Green, T.R.G., Hoc, J.-M., Murray, D. (eds.): Working with Computers: Theory versus Outcome. Academic Press, London (1988) 89–119.
[37] B. Tognazzini. Principles, Techniques, and Ethics of Stage Magic and their Application to Human Interface Design. Conf. Proc. on Human Factors and Computing Syst. ACM Press, New York NY USA (1993) 355–362.
[38] T. Tuikka. Searching Requirements for a System to Support Cooperative Concept Design in Product Development. Proc. of the Conf. on Designing Interactive Syst.: Processes, Practices, Methods, and Techniques (1997) 395–403.
[39] G. C. van der Veer, M. J. Tauber, Y. Waern, B. van Muylwijk. On the Interaction between System and User Characteristics. Behaviour and Information Technology 4 (1985) 289–308.
[40] G. C. van der Veer, H. van Vliet. The Human-Computer Interface is the System: A Plea for a Poor Man’s HCI Component in Software Engineering Curricula. In: Ramsey, D., Bourque, P., Dupuis, R. (eds.): Proc. 14th Conf. on Software Engineering Education & Training. IEEE, Piscataway NJ USA (2001) 276–286.
[41] G. C. van der Veer, M. van Welie. Task Based Groupware Design: Putting Theory into Practice. In: Conf. Proc. on Designing Interactive Syst.: Processes, Practices, Methods, and Techniques. ACM Press, New York NY USA (2000) 326–337.
[42] D. Zillmann. Mechanisms of Emotional Involvement with Drama. Poetics 23 (1994) 33–51.


GADEA: a Framework for the Development of User Interfaces Adapted to Human Cognition Diversity

Martín González-Rodríguez, Esther Del Moral-Pérez, María del Puerto Paule-Ruiz, and Juan-Ramón Pérez-Pérez

The adaptation of user interfaces to the specific cognitive, perceptive and motor requirements of certain kinds of user is usually much too expensive and unprofitable to be attractive to the software industry. GADEA is a user interface management system for the development of adaptive user interfaces which are both easy for developers to use and able to adapt the user interface of an application in an automated way. This framework uses new ways of isolating user interfaces from the application’s functionality, based on structural code reflection, fuzzy logic, artificial intelligence techniques and small “armies” of intelligent software agents that keep the user model updated.

Keywords: Agents for Information Capture, Intelligent Agents, Multi-mode Communication, UIMS.

1 Introduction

The process of developing the user interaction mechanisms of any computing tool implies identifying the particular characteristics of the cultural environment to which its users belong. The individuals of that environment can obtain the maximum yield and satisfaction from the tool by means of a design customised and suited to the users’ characteristics.

However, the nature of certain types of applications addressed to a wide variety of different users – for instance multimedia booths, educational applications or web sites – implies the necessity of a design which recognises the expectations of different cultural environments.


Martín González-Rodríguez has been an Associate Professor at the Universidad de Oviedo (Spain) in the area of Computer Languages and Systems of the Department of Computer Science since 1996. He is a member of the Laboratory of Object Oriented Technologies of that same department, where he has collaborated as a researcher on various projects including Integral Object Oriented System: Oviedo3 (FICYT-FC-97-PBP-TIC-97-01), Project JEDI: Java Enabled DataBase Access over Internet (ESPRIT: EP-24231) and Interoperability of Objects and Components via Integral Object Oriented System: Oviedo3 (NP-99-534-12). He was awarded a Doctorate in Computer Engineering by the Universidad de Oviedo in July 2001. His areas of research include cognitive technology, adaptive hypermedia and the application of fuzzy logic to the development of intelligent interactive agents. He has authored more than seventy publications nationally and internationally, mostly on subjects related to Artificial Intelligence in the design of user interfaces. <[email protected]>

Esther del Moral-Pérez graduated in Pedagogy at the Universidad Complutense de Madrid (Spain) in 1990 and held a research scholarship at the Educational Documentary Research Centre (Madrid, 1990–94). She received her Doctorate from the UNED (Spanish Distance Learning University) in 1994 with the thesis “The psychological, social and educational influence of cartoons on children”, and holds a Masters degree in Computers in Education (UNED, 2000). She is an Associate Professor in new technologies applied to education in the Area of Education and Educational Organization of the Universidad de Oviedo (Spain). Her areas of research include mass media in education, computing and learning, design and evaluation of educational hypermedia material, and distance learning. <[email protected]>

Juan-Ramón Pérez-Pérez is an Associate Professor at the Universidad de Oviedo (Spain) in the area of Computer Languages and Systems of the Computer Science Department. After graduating as a computer engineer in 1996 at the same university, he began his career in the private sector in the R&D department of an IT company based in Spain, while at the same time studying for his doctorate, completing his research project in 1998. He is currently a member of the Research Group into Object Oriented Technologies of the Computer Science Department of the same university, where he also has experience of University Extension courses, and has authored publications nationally and internationally on the subject of human-computer communications. <[email protected]>

María del Puerto Paule-Ruiz has been a full-time Associate Professor in the area of Computer Languages and Systems of the Department of Computer Science, Universidad de Oviedo (Spain), since 1999. She is a member of the Research Group into Object Oriented Technologies of the Computer Science Department of the same university. She was awarded a FICYT Research Scholarship in 1997 for the project “Management of a Documentary Management Tool”, held a training grant programming Web servers in the company HUNOSA in Asturias in 1997/1998, and worked as an analyst-programmer carrying out analysis, design and development of Web pages for the company HidroCantábrico (HC) from 1998 until 1999. She has national and international publications on the topic of human-computer communications. <[email protected]>


This is why, at the end of the design phase, it is commonplace to consider the use of more than one interaction mechanism, depending on the different approaches used to organize the knowledge base of the application.

Therefore, we are talking about multi-mode interfaces, where an action can be executed through various methods: moving a pointer or a cursor, spoken commands, written commands, etc.

The design of stable and effective interfaces is difficult mainly because of users’ diverse cognitive, perceptive and motor capabilities and their cultural diversity. It is practically impossible for an interactive mechanism to be used with the same efficiency by two users endowed with completely different cognitive, perceptive or motor capability models.

One possible solution to this problem is to use automatic mechanisms to detect the type of user of the application and then adapt its interaction mechanisms, taking into account the cognitive, perceptive and motor capability characteristics of the active user.

2 GADEA

GADEA is an intelligent user interface management system. It starts from a knowledge base which is aware of a particular problem and is capable of establishing dialogue mechanisms adapted to the user’s cognitive, perceptive and motor capabilities. The whole process is automatic and transparent to the final user of the application and also to its programmers and designers.

In order to achieve this aim, GADEA is based on the classic paradigm of expert systems, which are capable of emulating human behaviour in certain knowledge areas. In this case we try to simulate the behaviour of an expert in the human-computer interaction process who is charged with designing a unique communication mechanism for a particular user, the active user of the application.

The software program which emulates the behaviour of this expert applies general knowledge rules about the human-computer interaction process in the context of a particular user. The objective is to achieve a complete adaptation of the lexical and syntactic levels of the interface to the cognitive, perceptive and motor capabilities of the user, dynamically and in real time.

In order to achieve the described adaptation level, the internal architecture of GADEA is made up of three independent modules: CodeX, DEVA and ANTS. They are described as follows:

CodeX (Code eXplorer): the communication interface between GADEA and the application code. It converts the user’s interaction requests into calls to particular methods in the application. This module is in charge of separating the functionality of the user interfaces from the client application.

DEVA (Dialogue Expert Valuator for Adaptation): it selects the appropriate communication channels to establish an interactive dialogue with the user, adapting both the communication channel and the dialogue to the user’s cognitive, perceptive and motor capability characteristics. This software agent emulates the behaviour of the human expert.

ANTS (Automatic Navigability Testing System): it enriches the information contained in the user model by means of software agents which are capable of travelling from a remote server to the user’s computer with the objective of examining and analysing the user’s behaviour.

2.1 CodeX (Code eXplorer)

GADEA is designed to work with platforms which have mechanisms of structural reflection, i.e., which can inspect and change their own code at run time. This feature is incorporated in the Java platform [4], for which GADEA was developed.

By means of a simple convention in the names of the methods of an object, the programmer identifies the user processes (those assigned to high-level interaction with the user) to GADEA, so that GADEA can recognize them automatically at run time. GADEA sends the data to the DEVA module, which then notifies the user using the appropriate communication mechanism (a menu, a hypertext option, etc.). In the same way, it is possible to define pre-conditions and post-conditions for every user process of the application; CodeX is responsible for invoking them automatically at the correct time.
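The naming convention itself is not reproduced in the paper, so the following is only a minimal sketch of how such reflection-based discovery might look, assuming a hypothetical “up” prefix for user-process methods; the class and method names are illustrative, not GADEA’s actual API:

import java.lang.reflect.Method;
import java.util.ArrayList;
import java.util.List;

// Hypothetical application class: by the assumed convention, methods whose
// names start with "up" are user processes that GADEA should expose.
class InvoiceManager {
    public void upCreateInvoice() { /* high-level interaction with the user */ }
    public void upSendReminder() { /* ... */ }
    private void recalculateTotals() { /* internal helper, not exposed */ }
}

public class CodeXSketch {
    // Scan an object for user processes at run time via structural reflection.
    static List<Method> findUserProcesses(Object app) {
        List<Method> processes = new ArrayList<>();
        for (Method m : app.getClass().getDeclaredMethods()) {
            if (m.getName().startsWith("up")) { // assumed naming convention
                processes.add(m);
            }
        }
        return processes;
    }

    public static void main(String[] args) {
        for (Method m : findUserProcesses(new InvoiceManager())) {
            System.out.println("User process found: " + m.getName());
        }
    }
}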

In every user process it is possible to establish one or more interactive dialogues. For GADEA, an interactive dialogue is the set of data that a user process must request from the user in order to run, together with the information it has to transmit back to the user as a result; an interactive dialogue is thus formed by the data necessary to successfully run a user process.

The current prototype version of GADEA supports four basic data types: integers (class Integer), strings (class String), dates (class Date) and user processes (class UP). All are JavaBeans components [5] that can be configured through a set of public properties.

The development paradigm which we propose with GADEA removes the use of controls (buttons, labels, menus, etc.) by the programmer. GADEA program code therefore does not mention any controls; it is the responsibility of the system to choose the controls and allocate them to the communication channel chosen by the programmer. DEVA will choose and configure the controls according to the type of information to be obtained from and/or shown to the user, adapting their visual or auditory appearance to the user’s perception.
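As an illustration of this paradigm (the classes below are invented stand-ins for the prototype’s JavaBeans components, not GADEA’s published API), a dialogue might be declared purely as a list of data requirements:

import java.util.ArrayList;
import java.util.List;

// Hypothetical stand-ins for the prototype's JavaBeans data components.
abstract class DataItem {
    final String name;
    DataItem(String name) { this.name = name; }
}
class IntegerData extends DataItem { IntegerData(String n) { super(n); } }
class StringData extends DataItem { StringData(String n) { super(n); } }
class DateData extends DataItem { DateData(String n) { super(n); } }

// An interactive dialogue is just the set of data a user process needs;
// no button, menu or text field is ever mentioned by the programmer.
class InteractiveDialogue {
    final String title;
    final List<DataItem> requests = new ArrayList<>();
    InteractiveDialogue(String title) { this.title = title; }
    void request(DataItem item) { requests.add(item); }
}

public class DialogueSketch {
    public static void main(String[] args) {
        InteractiveDialogue checkIn = new InteractiveDialogue("Check-in");
        checkIn.request(new StringData("customerName"));    // the system decides
        checkIn.request(new DateData("checkOutDate"));      // whether each item is a
        checkIn.request(new IntegerData("numberOfGuests")); // field, list, speech...
        System.out.println(checkIn.title + " requires "
                + checkIn.requests.size() + " data items");
    }
}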

An important aspect of GADEA, which gives it flexibility and ease of use, is its ability to configure an interactive dialogue as a function of the data sub-model or the interaction model, instead of as a function of controls.

GADEA is flexible because, from only a list of information requirements, it is able to automatically generate several versions of an interface, combining different controls allocated to different locations and channels (visual and auditory) with different configurations, without the limitations of static interfaces.

GADEA provides ease of use because the responsibility for choosing an adequate control and configuring it correctly does not rest with the programmer, who in most cases is not an expert in interaction and human communication, but with the user interface management system. Thus, GADEA liberates the development team from a substantial amount of design effort and a great load of work, thereby substantially reducing the development time.

The selection of the data types to use depends upon the domain of the problem and the data sub-model of the interaction model. The means of selection can be incorporated into the design of the software so that this capability is present in the software product. Using the GADEA paradigm to select controls appropriate to the requirements of the data, and to configure them accordingly, results in a simplification of the task.

2.2 DEVA (Dialogue Expert Valuator for Adaptation)

When the programmer attempts to establish a channel of communication with the user, DEVA uses the knowledge stored in the user model (maintained and updated by ANTS) to configure the channel and the interactive dialogue, choosing and adapting the components to be used according to the data and the cognitive features of the user.

DEVA is based upon the ‘Keystroke Model’ described by Newell et al. [6]. This model proposes a set of possible actions which the user may undertake in an interface. They are reproduced in the following list of basic primitives:

• K (Keystroking): to press a key on the keyboard or click the mouse.
• P (Pointing): to move the cursor onto an object.
• H (Homing): to move the hands from one device to another (for example, from the mouse to the keyboard).
• D (Dragging): to drag the cursor in order to move an object.
• M (Mental): the mental process of deciding what to do next.

Each type of control (buttons, menus, etc.) involves a combination of these basic primitives. It is thus possible to assign efficiency values to each control type and then select the type which most closely matches a concrete user. For example, to select a user process, one can use a CLI (‘Command Line Interface’) menu, where a list of numbered options is shown and the user presses the number corresponding to the selection, or a GUI (‘Graphical User Interface’) menu, in which the menu is shown and the user moves the cursor to the chosen option with the mouse and then clicks on it. If we suppose that the user has to choose between 15 possible options, the mathematical expressions that predict the efficiency of each type are the following:

TCLI = TM + TH + 3TK
TGUI = TM + TH + TP + TD + TK

When a mental process (TM) is necessary to decide which option to select, a period of time also elapses while changing context (TH). From the user’s previous activity it is possible to know which device was used last, so DEVA can sometimes assign the value TH = 0 to one of the two selection methods. For example, if the user has just finished an action with the mouse, TH for the GUI option will be zero.

The remaining operators vary according to the selection method. In CLI mode, the user has to press 2 or 3 keys (one or two digits to select the option, plus the confirmation key). In GUI mode it is necessary to use the mouse to direct the cursor to the menu (P), to drag the cursor to reach the option (D) and then to press (or click) the mouse button (K).

The difference between the original ‘Keystroke Model’, which proposed general values valid for any person, and GADEA is that the latter maintains a user model updated with real values of TM, TK, TH, TP and TD for each individual user, so that the evaluation of the previous expressions is done in real time and logged in the ‘history of use’ of the application by its active user. The more GADEA is used, the greater the precision of the application’s adaptation to the user.
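A minimal sketch of this selection logic, assuming a hypothetical per-user model; the field names and timing values (in seconds) are illustrative and not taken from GADEA:

// Hypothetical per-user model of measured operator times, in seconds.
class UserModel {
    double tM = 1.35; // mental preparation
    double tK = 0.28; // one key stroke or click
    double tH = 0.40; // homing between devices
    double tP = 1.10; // pointing at a target
    double tD = 1.30; // dragging
    boolean lastDeviceWasMouse = true;
}

public class MenuSelectionSketch {
    public static void main(String[] args) {
        UserModel u = new UserModel();
        // CLI needs the keyboard and GUI needs the mouse, so the homing cost
        // falls on whichever mode forces a change of device.
        double tCli = u.tM + (u.lastDeviceWasMouse ? u.tH : 0) + 3 * u.tK;
        double tGui = u.tM + (u.lastDeviceWasMouse ? 0 : u.tH)
                + u.tP + u.tD + u.tK;
        System.out.println(tCli < tGui ? "offer the CLI menu" : "offer the GUI menu");
    }
}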

There are cases in which the selection process lacks an input. For example, for a blind user it is impossible to apply a GUI selection schema, but the CLI mechanism is adequate (transmitting the content via a speech output system). In these cases the TD and TP parameters cannot be measured, and TH may be meaningless, as it is set at its maximum value; it may thus be impossible to obtain a successful evaluation of the expression.

There are other aspects of the user interface for which the user’s parameters need to be obtained before interaction begins. The size of objects is an example: it depends upon the user’s degree of visual perception and his/her spatial and motor capabilities, especially for interactive objects which are pointed at directly with a mechanical device, as is the case with mouse buttons or pull-down menus.

In order to determine the size of objects, GADEA has an adaptable fuzzy logic module [7] in which the discrete values (from 0 to 100%) of each user’s visual perception and motor capabilities are fuzzified into values of type ‘nil’ [0%–10%], ‘low’ [10%–40%], ‘normal’ [40%–75%], ‘high’ [75%–90%] and ‘excellent’ [90%–100%], thus defining fuzzy boundaries for five sets. In this way it is possible to apply generic knowledge rules in the context of specific users in a simple and effective way. The rule used by GADEA to calculate the size of an interactive object is:

IF Motor Accuracy of User is Low AND Visual Accuracy of User is High
THEN Width of Interactive Object is Big

The value ‘Width’ depends upon whether the variable ‘Motor Accuracy’ is a member of the fuzzy set ‘Low’ and whether the variable ‘Visual Accuracy’ is a member of the fuzzy set ‘High’. When the inference module obtains a fuzzy value for the variable ‘Big’, this is transformed into a discrete value indicating the correction factor [0%–100%] by which the base width of the interactive object (in pixels) is to be increased or decreased.
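The paper gives the set boundaries but not the membership functions or the inference operators, so the sketch below fills those in with common choices (linear memberships, minimum for AND); everything beyond the rule itself is an assumption:

public class WidgetSizeSketch {
    // Membership in the fuzzy set 'low' for a 0-100 accuracy value
    // (full below 10, none above 40, linear in between).
    static double low(double x) {
        if (x <= 10) return 1.0;
        if (x >= 40) return 0.0;
        return (40 - x) / 30.0;
    }
    // Membership in the fuzzy set 'high' (none below 75, full above 90).
    static double high(double x) {
        if (x <= 75) return 0.0;
        if (x >= 90) return 1.0;
        return (x - 75) / 15.0;
    }

    public static void main(String[] args) {
        double motorAccuracy = 25;  // per-user values, as kept by ANTS
        double visualAccuracy = 85;
        // IF Motor Accuracy is Low AND Visual Accuracy is High
        // THEN Width is Big; AND is taken as the minimum of the memberships.
        double big = Math.min(low(motorAccuracy), high(visualAccuracy));
        // Defuzzify into a 0-100% correction factor over the base width.
        int baseWidth = 80;         // pixels, illustrative
        int width = (int) Math.round(baseWidth * (1.0 + big));
        System.out.println("widget width: " + width + " px");
    }
}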

The advantages of using fuzzy logic stem from two factors. The first is that the majority of the knowledge available to the human expert takes the form of verbal statements, and such statements are vague and imprecise, as natural language is. Fuzzy logic captures these statements in the form of linguistic variables [8].

The second factor derives from the fact that the information obtained by our senses, and the information handled by the human expert, is received in absolute and relative values.


Our cognitive processes are able to detect these values and assess them relative to other parameters [9].

2.3 ANTS (Automatic Navigability Testing System)

ANTS is the module responsible for keeping the user model of GADEA updated. The information is obtained by a small army of data-gathering agents which keep track of user interactions in every available interactive dialogue at execution time.

The architecture of ANTS follows a client-server pattern, employing a design metaphor based on the life of a community of ants. The ants (agents) depart from their anthill (a server) looking for food (the information) in every available picnic (interactive dialogue). Once an ant has got the food, it comes back to the anthill and stores it carefully in one of the anthill’s data warehouses (the specific user model).

Whenever a user process is executed, an ant is downloaded from the anthill (the ANTS server) and included in the process’s interactive dialogue as a simple widget (the ant, however, is completely invisible to the user). Once the ant has arrived at the user’s machine, it establishes a communication channel with the anthill and uses this channel to send the data it has collected.

When the ant detects an interactive dialogue, it sends a complete identification of the target dialogue to the anthill. When the ant disengages from the dialogue, it sends information about the time spent by the user working with the dialogue, as well as how and when it was invoked. This information is used to create an abstract navigation model to determine the landmarks, routes and mental models employed by users when they make use of the application, revealing the user processes of the application which remain unused. This information might also be employed to determine the extent of the user’s expertise in the use of the application [10].

The information retrieved by the agents might be obtained explicitly, by means of certain questions posed during the GADEA registration process (such as ‘are you left-handed?’ or ‘how old are you?’), or implicitly, by simply observing users while they execute their everyday tasks.

Every agent class in ANTS is specialised in gathering a specific kind of data. For each parameter needed to adapt the user interface there is a specific agent class, so ANTS has specific agents to measure reaction times (M), key-pressing speed (K), dragging speed (D), pointing speed (P), and so on. However, these agents are not restricted to measuring times alone, as they are also able to analyse low-level flows of data in order to determine the accuracy of the user’s perceptive and motor systems. For example, the agents are able to determine the user’s typing skills, visual form recognition skills, and many others [31].
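A minimal sketch of what such a specialised measuring agent could look like; the interface and names are hypothetical, and the network channel back to the ANTS server is elided:

import java.util.HashMap;
import java.util.Map;

// Hypothetical interface for a specialised measuring agent.
interface MeasuringAgent {
    String parameter();        // e.g. "K" for key-pressing speed
    void observe(long millis); // one raw observation
    double estimate();         // current estimate for the user model
}

class KeySpeedAgent implements MeasuringAgent {
    private long total;
    private int count;
    public String parameter() { return "K"; }
    public void observe(long millis) { total += millis; count++; }
    public double estimate() { return count == 0 ? 0 : (double) total / count; }
}

public class AntsSketch {
    public static void main(String[] args) {
        Map<String, MeasuringAgent> userModel = new HashMap<>();
        KeySpeedAgent k = new KeySpeedAgent();
        k.observe(240); // inter-keystroke times in milliseconds
        k.observe(310);
        userModel.put(k.parameter(), k); // stored per parameter in the model
        System.out.println("mean K time: " + userModel.get("K").estimate() + " ms");
    }
}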

3 Conclusions

The automatic code exploring system provided by GADEA allows easy adaptation of the low-level aspects of a user interface with very little programming effort.

The approach adopted by CodeX, designing interactive dialogues in terms of primitive data components instead of static widgets, allows designers to focus their effort on the design of the high-level interaction mechanisms (metaphors) of their applications, thus saving time and resources.

The use of an expert system (DEVA) to take every decision concerning the interaction style and techniques to be applied guarantees strong user interface homogeneity across applications. As there is no ambiguous natural-language-based HCI guideline to be interpreted by a human designer, this approach eradicates the consistency problems present in many computer platforms.

The information captured by ANTS in every interactive dialogue allows DEVA to adapt the user interface to the current state of the user’s cognitive, perceptive and motor systems, which tend to fluctuate over time.

References

[1] C. Roast. HCI and Requirements Engineering: Specifying Cognitive Interface Requirements. ACM SIGCHI Bulletin Vol. 29, No. 1, January 1997. <http://www.acm.org/sigchi/bulletin/1997.1/roast.htm>
[2] A. Yeo. World-Wide CHI: Cultural User Interfaces, A Silver Lining in Cultural Diversity. ACM SIGCHI Bulletin Vol. 28, No. 3, July 1996. <http://www.acm.org/sigchi/bulletin/1996/international.html>
[3] B. Myers. User Interface Software. Human Computer Interaction Institute, 1998. <http://www.cs.cmu.edu/~bam/uicourse/1998spring>
[4] J. Weber. Special Edition Using Java 1.1, Third Edition. 1997. ISBN 0-7897-1094-3.
[5] Sun Microsystems. The JavaBeans 1.0 API Specification. 1996. <http://www.sun.com/beans>
[6] A. Newell. Unified Theories of Cognition. Harvard University Press, 1990. ISBN 0-674-92101-1.
[7] G. Y. Chung Wong, A. H. Wai Chun. A Software Framework for the Implementation of Fuzzy Logic Systems. Proc. of PA Java 2000: The Second International Conference on the Practical Application of Java, Manchester, April 12–14, 2000. ISBN 1-902426-09-6.
[8] J. Velarde. Filosofía del Conocimiento y Sistemas Expertos. El Basilisco, 2ª Época, No. 16, Oviedo, 1994.
[9] J. Velarde. Perspectivas en Ingeniería del Conocimiento. El Basilisco, 2ª Época, No. 8, pp. 11–18, Oviedo, 1991.
[10] K. S. Karn, T. J. Perry, M. J. Krolczyk. Testing for Power Usability: a CHI 97 Workshop. ACM SIGCHI Bulletin Vol. 29, No. 4, October 1997. <http://www.acm.org/sigchi/bulletin/1997.4/karn.htm>


User Interface Patterns for Object-Oriented Navigation

Pedro-Juan Molina-Moreno, Ismael Torres-Boigues, and Oscar Pastor-López

Conceptual patterns applied to user interface requirements elicitation can provide a common language for user interface development teams. This paper proposes a set of patterns and shows their use in practice with a simple case study, from specification to final implementation.

Keywords: Conceptual Modelling, Conceptual Patterns, Human-Computer Interaction, Object Oriented, User Interfaces.

1 Introduction

Patterns have been revealed as a fruitful field in Software Engineering. Software design [5], application architecture [4] and Web environments, as examples, have benefited from patterns. More specifically, in the user interface area there are also several papers, such as [14], where user interface design and usability patterns have been addressed. However, from the point of view of Conceptual Modelling, the application of patterns to user interface specification is a topic that has not been sufficiently explored and that we intend to consider.

This paper presents a set of conceptual patterns identified during the analysis of user interfaces for business applications. Their use at the conceptual level as primitives (building blocks or bricks) will make it easier to build such systems.

Patterns provide a common reference language for the people involved in development. The usage of patterns as ideas is well documented, and it reduces the ambiguities and misunderstandings that are frequent in natural language. Patterns improve communication and comprehension amongst the members of a development team. At the same time, conceptual patterns can be used for validating the requirements with the end user, as patterns are expressed in terms of the problem domain.

This paper starts by presenting a brief case study which is useful to exemplify the application of patterns. After that, the concepts and patterns used will be described. The specification and implementation of the user interface using the patterns for the case study will then be shown. Finally, related work, conclusions and references end the paper.

The next section introduces a brief case study to illustrate the patterns in context. A small requirements specification and the conceptual model follow in order to describe the functionality of the system. The conceptual model used is a classical object-oriented model where specific user interface features are not considered. This model will later be completed with a user interface description.

1.1 Requirements

The management of a camping site is going to be computerised. The camping site’s main business is renting bungalows and plots. There are twenty bungalows, with prices depending on their characteristics; one hundred plots are also available, with prices depending on the size and location of each.


Pedro-Juan Molina-Moreno graduated as an IT Engineer at the Universidad Politécnica de Valencia (Spain) in 1998. During the period 1998–1999 he was attached to the research group OO-Method in the DSIC/UPV (Department of Information Systems and Computing / Universidad Politécnica de Valencia), working on user interface specification models based on user interface patterns on an FPI (Research Personnel Training) scholarship from the Spanish Ministry of Science and Technology. In December 1999 he joined CARE Technologies S.A. (an R&D company working in the field of software engineering) where he works in the same line of research but from an industrial and applied angle. He is currently in the closing stages of his doctoral thesis entitled “User interface specification: from requirements to automatic generation” under the supervision of Professor Óscar Pastor-López. He has authored more than a dozen publications in congresses and international workshops on user interface patterns. <[email protected]>

Ismael Torres-Boigues graduated as an IT engineer at the Universidad Politécnica de Valencia (Spain) in 2000. During the period 1998–1999 he was attached to the research group OO-Method in the DSIC/UPV (Department of Information Systems and Computing / Universidad Politécnica de Valencia), working on the data repository for a CASE tool. Since the year 2000 he has been at CARE Technologies S.A. in their R&D department, working on code generation for web interfaces and modelling quality. He is currently studying towards his doctorate. He has authored several contributions to international congresses on the subject of user interface patterns and metrics for conceptual models. <[email protected]>

Oscar Pastor-López is Professor at the Department of Information Systems and Computing of the Universidad Politécnica de Valencia (Spain), where he is Head of Department and Head of the Research Group into Object Oriented Methods of Software Production (OO-METHOD). He has authored more than a hundred R&D publications both nationally and internationally, and his subjects of interest include Conceptual Modelling, Automatic Generation of Software from Conceptual Schemas, Software Technology for Web Environments, and Software Process and Product Quality. <[email protected]>


The camping facilities available are: parking, television, electric power, bungalow cleaning services, washing machines, etc. Customers can also make use of sports facilities, for which there is no charge or only a small fee. Sporting activities include fitness and aerobics classes as well as soccer, tennis and golf. Service prices vary depending on the season and the category of the customer. At the end of the customer’s stay, an invoice is issued. Customers can pay with cash or by credit card.

The administrative office of the camping site needs a tool capable of managing the camping site. This involves controlling the occupation of plots and bungalows, customer tracking, invoicing, and maintaining information on the availability of the camping site’s resources.

2 Conceptual Model

From the initial informal requirements specification above, an analyst can build a classical object-oriented conceptual model to gather the functional requirements of the system. Figure 1-A shows the analysis classes and relationships identified in the problem space.

The central class of the diagram (Stay) stores information about the customer’s stay at the camping site. Related to Stay, Customer and Invoice are typical classes in business applications. The Season class represents the seasons of the year and is useful for setting charges for services. The classes Plot and Bungalow maintain the resources assigned to a Stay. Finally, the ExtraServices class gathers together the additional facilities of the camping site, whereas UsedServices contains the list of extra services used during a customer’s stay.

Providing more detail, Figure 1-B shows the attributes and services (methods) identified in each domain class.

The model presented implements the functional requirements initially stated. Introducing the model description at this point will make it easier to comprehend the following concepts, which are required in order to complete the specification with user interface information. The concepts will be referred to in the case study.

3 Concepts and Conceptual Patterns

Before starting with the user interface specification for the case study, the basic set of concepts used will be described. Primitives and conceptual patterns will be explained.

3.1 Primitives

We have identified five recurrent primitives in object-oriented user interface development:
• Filter or Selection Criterion (how to search)
• Sort Criterion (how to sort)
• Display Set (what properties must be shown)
• Navigation (what additional information can be queried)
• Actions (what actions can be invoked over objects)

3.1.1 Filter Criterion

A filter or selection criterion expresses a search condition for users to find information from an object population belonging to a given class.

A filter can be defined as an open logic expression, i.e., one that can contain free variables. At execution time, the user must assign values to these variables before invoking the filter.

Assigning values to the variables yields a closed formula that can be evaluated to a logical value. The closed formula is then evaluated for each object belonging to the class population; the objects satisfying the filter condition form the subset to be presented to the user.

Example: A filter can be defined to implement the following requirement: “I need to know the number of free bungalows of a given type.” The example, expressed in the formal language OASIS [10], has the following filter formula defined in the class Bungalow:

InUse = False AND Type = vType

where it is assumed that:
• InUse is an attribute of the Bungalow class indicating whether a given bungalow is currently in use.
• Type is an attribute of the Bungalow class containing the type of the bungalow.
• vType is a free variable in the filter. The user must provide a value at run time during the interaction with the filter.


Figure 1: Analysis Class Schema for the Case Study.


A specific filter is preferred to Query By Example (QBE) techniques [15] because, in general, users do not need generic searching tools. On the contrary, when developing a scenario with a set of well-defined tasks, analysts can identify the most frequent filter criteria and, in consequence, offer only those criteria. In this way we obtain improved usability. A query mechanism like QBE requires user learning, and only power users will take advantage of such a feature; QBE facilities can therefore be added as advanced features for expert users.

Another advantage of early identification of filters as part of the requirements is the fact that, from the beginning of the design phase, optimizations can be implemented to improve the efficiency of the searching process.
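The paper expresses filters in OASIS; purely as an illustration, the same idea can be rendered in Java, where binding the free variable closes the formula into a predicate evaluated over the class population (the class and attribute names follow the case study, but the API is invented here):

import java.util.List;
import java.util.stream.Collectors;

// Class and attribute names follow the case study.
class Bungalow {
    boolean inUse;
    String type;
    Bungalow(boolean inUse, String type) { this.inUse = inUse; this.type = type; }
}

public class FilterSketch {
    // Binding the free variable vType closes the formula
    // InUse = False AND Type = vType into a predicate that is
    // evaluated against every object of the class population.
    static List<Bungalow> freeBungalowsOfType(List<Bungalow> population, String vType) {
        return population.stream()
                .filter(b -> !b.inUse && b.type.equals(vType))
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<Bungalow> all = List.of(
                new Bungalow(false, "family"),
                new Bungalow(true, "family"));
        System.out.println(freeBungalowsOfType(all, "family").size()); // prints 1
    }
}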

3.1.2 Order Criterion

An order criterion is a mechanism for sorting objects based on their properties. An order criterion consists of an ordered list of tuples <expr, way>, where expr is a visible attribute (defined in a given class or accessible through aggregation or inheritance relationships) and way is ASC (ascending) or DES (descending). The order of the items in the list indicates the sort priority: the first item takes priority over the second one.

Example: Sorting customers by surname and name can be expressed as follows:

<Surname, ASC>, <Name, ASC>.

where it is assumed that:
• Surname and Name are attributes defined in the class Customer.
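Again purely as an illustration, this order criterion maps naturally onto a chained comparator in Java (a DES item would simply reverse the corresponding key); the Customer class is a stand-in for the case study:

import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

// Stand-in for the case study's Customer class.
class Customer {
    private final String name, surname;
    Customer(String name, String surname) { this.name = name; this.surname = surname; }
    String getName() { return name; }
    String getSurname() { return surname; }
}

public class OrderCriterionSketch {
    public static void main(String[] args) {
        // <Surname, ASC>, <Name, ASC>: the first item takes priority.
        Comparator<Customer> criterion =
                Comparator.comparing(Customer::getSurname)
                          .thenComparing(Customer::getName);
        List<Customer> customers = new ArrayList<>(List.of(
                new Customer("Ana", "Pérez"),
                new Customer("Luis", "García")));
        customers.sort(criterion);
        customers.forEach(c -> System.out.println(c.getSurname() + ", " + c.getName()));
    }
}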

3.1.3 Display Set

A display set gathers the properties of a class of objects that users need to observe. A display set is specified as an ordered list of property expressions (attributes defined in a given class or accessible through inheritance or aggregation relationships).

Example: In order to display information about invoices, we need the invoice ID, the total amount, and the name and surname of the customer related to the invoiced stay. The set of properties to be shown is expressed in the following way (1):

ID, Total, Stay.Customer.Name, Stay.Customer.Surname
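As a sketch of what these path expressions mean operationally, each one crosses aggregation relationships by role until it reaches an attribute; the classes below mirror the case study object graph (each sketch in this article is self-contained) and all values are illustrative:

// Classes mirroring the case study object graph.
class Customer {
    String name = "Ana";
    String surname = "Pérez";
}
class Stay {
    Customer customer = new Customer();
}
class Invoice {
    int id = 42;
    double total = 180.0;
    Stay stay = new Stay();
}

public class DisplaySetSketch {
    public static void main(String[] args) {
        Invoice invoice = new Invoice();
        // Display set: ID, Total, Stay.Customer.Name, Stay.Customer.Surname.
        // Each path expression crosses an aggregation relationship by role.
        System.out.println(invoice.id + " " + invoice.total + " "
                + invoice.stay.customer.name + " " + invoice.stay.customer.surname);
    }
}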

3.1.4 Navigation

Usually, information is not isolated. On the contrary, information is interrelated by means of semantic relationships. Navigation is defined as a subset of the semantic relationships established between classes (for example, aggregation and inheritance relationships). Accordingly, navigation allows crossing from one object to other related objects using a semantic relationship.

This type of navigation has hypermedia characteristics: it can reach new scenarios by jumping across object relationships. This is useful because users perceive an effective mechanism for exploring data and locating objects using the relationships in the domain.

Example: For a given stay, we need to query the bungalows (UII_Bungalow), plots (UII_Plot), the season (UII_Season) and the customer (UII_Customer) related to the stay.

3.1.5 Actions

A second type of navigation, with different semantics, is also present. Once the user has reached an object and selected it, it is possible to operate on the object to modify its state. In an object-oriented world, the designated mechanism for such a task is the services (or methods) defined in the signature of the class.

Therefore, actions are defined as an ordered list of services belonging to a class. This subset of services will be visible to the user in a given scenario in order to change object states.

Example: For a stay, users can create it (UIS_Create) and alter it, add a bungalow (UIS_AddBungalow), add a plot (UIS_AddPlot), add an extra service (UIS_AddExtraService) and, finally, create an invoice (UIS_CreateInvoice).

3.2 Patterns for Navigation

Once the working primitives have been defined for objects, they will be used as bricks or basic blocks to build recurrent scenarios in information systems.

The user interacts with the system through the Interaction Unit entity in the user interface. In an analogous way, Bodart [2] defined the term PU (Presentation Unit) as the highest-level abstract interaction object (AIO) that manages the interaction with the user.

From the study of business applications and developments carried out in an industrial context, we have identified four subtypes of Interaction Units with characteristics that recur in different domains [9]. Such patterns, or subtypes of interaction units, capture frequent scenarios in user interfaces that can be specified in a rigorous form with very little effort from analysts.

The patterns identified are the following (Figure 2): the Instance Pattern (shows an object), the Population Pattern (shows a group of objects belonging to the same class), the Master/Detail Pattern (shows related information in a single unit) and the Service Pattern (allows users to launch services). The next sections describe each of these patterns.

3.2.1 Instance Pattern

An Instance Pattern is an interaction unit where information relative to one object (an instance of a given class) is shown. The goal of this type of interaction unit is to allow querying of the object’s state.

(1) The notation a.b is similar to the notation used in OCL formulas, where a expresses the crossing of an association or aggregation relationship using the a role.


Optionally, it is also a good place to offer services to change the object’s state and to offer links to other semantically related information.

An Instance Pattern is defined as the composition of (see Figure 3):
• a display set (what to see),
• zero or one actions (how to modify objects), and
• zero or one navigation (what links to other information must be shown).

Example: In the case study it is useful to define an Instance IU for the Stay class. This interaction unit allows us to review the information for a given stay.

Name: UII_Stay
Display Set: Customer.Name, Customer.Surname, CheckInDate, CheckOutDate
Actions: AddService, AddPlot, AddBungalow
Navigation: Customer, Bungalows, Plots, Services

3.2.2 Population Pattern

A Population Pattern is an interaction unit for showing a set of objects from the population of a given class. In these scenarios, users can require search facilities, sorting, selection and, as in the previous case, actions and navigation.

The Population Pattern is defined as the composition of (see Figure 4):
• one or more display sets (what object properties must be shown),
• zero or more filters (how to search for objects),
• zero or more order criteria (how to sort the objects),
• zero or one actions (how to modify the objects), and
• zero or one navigation (what links to other information must be provided).

Example: Carrying on with the case study, it is interesting to have a population interaction unit for the Bungalow class for querying the population of bungalows.

Name: UIP_Bungalow
Filters: InUse = vInUse AND Type = vType
Order Criterion: <ID, ASC>
Display Set: ID, InUse, Price, Type, Location
Actions: Create, Edit, Delete
Navigation: Stays, Bungalow

3.2.3 Master/Detail Pattern

The Master/Detail Pattern is composed of other interaction units. Its semantics are to show related information: the details shown are related to the object acting as master. The Master/Detail Pattern is defined as follows (see Figure 5):
• A master: an interaction unit that plays the role of master. Its goal is to show an object that determines the contents of the detail units. Any IU that fixes or selects an object, i.e., an Instance or Population IU, can be used as a master unit.
• One or more details: IUs that show information related to the master object. Each detail is connected to the master using a path expression showing the semantic relationship between the master and detail components. E.g., if the master component belongs to the Invoice class and the detail component belongs to the Line class, the path expression could be InvoiceLines, where the expression represents the role path that is crossed in an aggregation relationship between the quoted classes. The detail role can be taken by the following IUs: Instance, Population and, recursively, Master/Detail.

Example: In the case study, a natural Master/Detail IU can be modelled to show a given Stay and its related UsedServices.

Name: UIMD_Stay
Master: UII_Stay
Detail: UIP_UsedServices, Role: UsedServices

3.2.4 Service Pattern

Finally, the Service Pattern models the interaction units designed for invoking services. Its goal is to require the user to enter a data set in order to assign values to the service’s arguments. Once the data entry has been completed, the user can launch the service (commit) or cancel it (rollback).

Figure 2: Specialized Interaction Units. [Figure omitted: Interaction Unit specialized into Instance IU, Population IU, Master/Detail IU and Service IU.]

Figure 3: Meta-model for the Instance Interaction Unit. [Figure omitted: an Instance IU linked to Class, Display Set, Actions and Navigation, with cardinalities 1:1, 1:1, 0:1 and 0:1 respectively.]

Figure 4: Meta-model of the Population Interaction Unit. [Figure omitted: a Population IU linked to Class (1:1), Filters and Order Criteria (0:M), Display Sets (1:M), Actions (0:1) and Navigation (0:1).]


A service defined in a class can be reified or mapped to one or more service interaction units (see Figure 6). In this way, analysts can use this interaction unit as an interface to the given service. The Service IU can have other patterns applied to it that are out of the scope of this work and will not be described here; for a detailed review of auxiliary patterns see [11] and [8].

4 User Interface Specification

Now that the working concepts have been introduced, we can use them to build a specification for the case study. A graphical notation is introduced (as shown in Figure 7) to represent the interaction units as boxes and the navigation between units (actions or navigation) as directed arrows. Accordingly, it is possible to build diagrams isomorphic to directed graphs, where interaction units represent the nodes and navigation represents the corresponding directed arcs.

This notation will be used to show the specification of the case study. The graphical notation provides a convenient way of working, makes the specification easy for the members of the development team to comprehend, and facilitates the user’s validation of the requirements.

Coming back to the original problem, from the initial use cases it is possible to perform a task analysis [3]. Table 1 shows an example of a use case identified in the problem domain.

Using such a task analysis, we sort the main user tasks in the system (by functional criterion, etc., at the analyst’s discretion). This information can be gathered in a Hierarchical Action Tree (HAT) [6] (see Figure 8) in order to build an application access mechanism.

Using the use case shown (see Table 1) and composing the user interface patterns presented, analysts can specify the navigational map shown in Figure 9.

The example shows how to obtain a navigational diagram from a given task. In order to obtain a full application, we must integrate the different scenarios detected. Following the ideas of Lausen [7], some reuse techniques can be applied to the user interface so that it is more compact and usable for the final user.

5 Implementation

The concepts defined and used in the analysis of the user interface are also useful in the implementation phase. By means of refinements of the original specification, implementation components satisfying the initial requirements can be obtained in a given language. This AIO (Abstract Interface Object [2]) to CIO (Concrete Interface Object) mapping can also be done using automatic production techniques. In particular, this strategy is pursued in the products developed by CARE Technologies.


Figure 5: Meta-model of the Master/Detail Interaction Unit. [Figure omitted: a Master/Detail IU composed of a master (an Instance or Population IU, 1:1) and one or more details (Instance, Population or, recursively, Master/Detail IUs, 1:M), linked to a Class.]

Figure 6: Meta-model of the Service Interaction Unit. [Figure omitted: a Service IU linked to a Service of a Class (1:1), with zero or more auxiliary patterns applied.]

Figure 7: Graphical Notation for Interaction Units. [Figure omitted: box symbols for Instance IU, Population IU, Master/Detail IU and Service IU, and a directed arrow for a navigational link.]

Use Case: Create_Stay

Actors: Customer (init), Clerk

Type: Primary

Description: A customer asks the clerk for accommodation at the camping site. The clerk asks the customer for the data necessary to check in (check-out date, name, bungalow or plot). At the end of the check-in process, the customer can start using the services of the camping site for the period of time determined.

Table 1: Example of Use Case for Task: Create Stay. Figure 8: Hierarchical Action Tree for the case study.


The next few paragraphs describe a possible implementation for the case study that has been automatically derived through conceptual model translation techniques. The implementation has been developed for Windows and, in particular, for the Visual Basic 6.0 environment. The generated component contains all the necessary pieces to communicate with a business logic component (which can also be obtained by automatic production techniques). Such business logic or server components provide data access and offer the means to launch services that change the data in the system.

Figure 10-A shows a form obtained for an instance interaction unit (representing UII_Stay). The form is delimited by three well-differentiated zones. The central zone is used for displaying the properties of a Stay. The right zone contains a toolbar that provides the action functionality: creation, deletion, edition, and addition of services. Finally, the bottom zone shows a second toolbar that implements navigation: Customer, Bungalow, Plots, Invoices and Used Services can be reached from a given Stay.

Figure 10-B shows a form derived for a population interaction unit (in particular representing UIP_Bungalow). Like the previous one, this form is delimited by three zones. The central zone shows, in a tabular way, the properties of Bungalows. The upper zone is used to perform searches using filters, whereas the right zone is again a toolbar giving access to the functionality of creation, deletion and modification of Bungalows. Similar to the previous form, this one can also contain a fourth zone at the bottom used to implement navigation.

Finally, Figure 11 shows a derived form for a master/detail interaction unit (representing UIMD_Stay). This form is composed of two well-differentiated components:
• A master component (instance IU) in the upper part showing the data for a given Stay.
• A detail component (population IU) in the lower part showing the Services related to the Stay selected in the master component. In this detail component, the right toolbar allows changing the information of the used services.

6 Related Work

TRIDENT [1], [13] derives presentation units from an Activity Chain Graph (ACG). Starting from the user task and with the help of the ACG, the designer can identify presentation units. However, TRIDENT is based on an entity-relationship model that only describes the static part of the system.

OVID [12] is a research work developed at IBM for the specification of user interfaces. It is the first attempt to combine object-oriented models for the specification of systems using UML with user interface specifications. In OVID, the notion of view is presented as a class stereotype. This concept of view can be compared with the interaction unit. However, the content of a view is totally at the designer's discretion (a view contains no semantics).


Figure 10: Implementation Example of Instance IU (A) & Population IU (B).

Figure 9: Scenario for Stay (interaction units shown: UIP_Stay, UII_Customer, UII_Bungalow, UII_Plot, UII_Invoice, UIP_UsedServices, UII_ExtraServices).


The approach presented in this paper starts enriching the conceptual description of interaction units exactly at the point where TRIDENT and OVID hand control over to the designer. As shown, in this phase it is possible to capture more requirements about the user interface to be built. This collection of requirements can be achieved through the specialization of interaction units into well-defined tasks and the use of patterns to build them.

7 Conclusions

A set of concepts and patterns has been presented to establish a common language for object-oriented user interfaces. Such a language can be used throughout the development cycle and also for user validation of the requirements.

The patterns presented refine the concept of interaction units into subtypes with clear semantics.

The pattern language is independent of the implementation technology. Accordingly, an analysis can be refined at the design level for different platforms: for example, implementing interaction units as forms in Windows or as HTML pages in a Web environment.

The use of a precise and unambiguous specification opens the door to the use of automatic code generators able to produce prototypes of user interfaces on different platforms.

At CARE Technologies S.A. we are developing such a technology, supported by specification tools and automatic software production techniques from conceptual models, in order to obtain user interfaces in an agile way. In this manner, the prototypes obtained can be shown to the user in the early stages of development and, once the specification is validated, the prototype constitutes the final user interface for the application. As a result, development time for applications is considerably reduced.

As for future work, the following areas could be addressed:

• Continue with the study and detection of new patterns in the scope of business applications.

• Provide refinements to the model at the design level in order to gather design properties dependent on the target platform.

• Increase the number of platforms supported as targets for automatic application production, for example pervasive devices, mobile devices, WAP, UMTS, etc.

Acknowledgements

Work on this paper has been partially supported by the Comisión Interministerial de Ciencia y Tecnología (CICYT) of Spain by means of the Project with Reference TIC2001-3530-C02-01, and was also supported by CARE Technologies S.A.

References

[1] F. Bodart, J. Vanderdonckt. Towards a Systematic Building of Software Architectures: The Trident Methodological Guide. Design, Specification and Verification of Interactive Systems, pages 262–278, June 1995.
[2] F. Bodart, J. Vanderdonckt. Widget Standardisation Through Abstract Interaction Objects. Technical Report, Institut d'Informatique, Facultés Universitaires Notre-Dame de la Paix, Namur, Belgium, 1996.
[3] L. Constantine, L. Lockwood. Software for Use: A Practical Guide to the Models and Methods of Usage-Centred Design. Addison Wesley, 1999.
[4] J. Coplein. Software Patterns. <http://www.enteract.com/~bradapp/docs/patterns-intro.html>, 1999.
[5] E. Gamma, R. Helm, R. Johnson, J. Vlissides. Design Patterns: Elements of Reusable Object-Oriented Software. Addison Wesley, 1994.
[6] E. Insfrán, P. J. Molina, S. Martí, V. Pelechano. Ingeniería de Requisitos aplicada al modelado conceptual de interfaz de usuario (in Spanish). 4as Jornadas Iberoamericanas de Ingeniería de Requisitos y Ambientes Software, IDEAS 2001, Santo Domingo, Heredia, Costa Rica, Ed. CIT, pages 181–192, April 2001.
[7] S. Lausen, M. Borup Harning. Virtual Windows: Linking User Tasks, Data Models & Interface Design. IEEE Software, pages 67–75, July–August 2001.
[8] P. J. Molina, O. Pastor, S. Martí, J. J. Fons, E. Insfrán. Specifying Conceptual Interface Patterns in an Object-Oriented Method with Code Generation. Proceedings of User Interfaces for Data Intensive Systems, UIDIS 2001, Zurich, Switzerland, pages 72–79, IEEE Computer Society, May 2001.
[9] P. J. Molina, S. Meliá, O. Pastor. JUST-UI: A User Interface Specification Model. Proceedings of Computer Aided Design of User Interfaces, CADUI 2002, Valenciennes, France, pages 323–334, Kluwer Academic Publishers, May 2002.
[10] O. Pastor, I. Ramos. OASIS 2.1.1: A Class-Definition Language to Model Information Systems Using an Object-Oriented Approach. February 1994 (1st ed.), March 1995 (2nd ed.), October 1995 (3rd ed.).
[11] O. Pastor, P. J. Molina, A. Aparicio. Specifying Interface Properties in Object Oriented Conceptual Models. In Proceedings of the Working Conference on Advanced Visual Interfaces, AVI 2000, Palermo, Italy, ACM, pages 302–304, May 2000. ISBN 1-58113-252-2.
[12] D. Roberts, D. Berry, S. Isensee, J. Mullaly. Designing for the User with OVID: Bridging User Interface Design and Software Engineering. Macmillan, 1998.
[13] J. Vanderdonckt. Assisting Designers in Developing Interactive Business Oriented Applications. In Proceedings of HCI'99, 1999.
[14] M. van Welie. The Amsterdam Collection of Patterns. <http://www.welie.com/patterns/>, 2001.
[15] M. Zloof. Query-By-Example: A Data Base Language. IBM Systems Journal, Vol. 16, No. 4, pages 324–343, 1977.


Figure 11: Implementation Example of Master/Detail IU.


e-CLUB: A Ubiquitous e-Learning System for Teaching Domotics

Manuel Ortega-Cantero, José Bravo-Rodríguez, Miguel-Ángel Redondo-Duque, and Crescencio Bravo-Santos

Computer-assisted educational environments are an excellent complement to the learning process. However, when domains are complex, the expected learning support objectives may not be achieved. In this paper, we present an e-Learning system for the teaching of Domotics featuring some characteristics which improve the teaching-learning process. Among these improvements, we would highlight two factors: planning in order to reach an intermediate solution to complex design problems, and collaboration for the in-group building of these solutions. The last step, an excellent complement to these improvements, is the implementation of the ubiquitous classroom, with which we intend to reinforce the previous advantages by complementing e-Learning with ubiquity.

Keywords: Collaboration, e-Learning, Domotics, Planning, Ubiquitous Computing.

1 Introduction

For several years now, software tools have been making good use of the advantages that graphic interfaces offer. These interfaces offer intuitive interaction mechanisms, ones which are consequently closer to the user than traditional script languages (of commands). Direct Manipulation has appeared on the scene as an interaction style in which the user can manipulate objects (represented by means of icons) with the help of a pointing device [10], [11]. The advantages are clear to see, as objects are handled by means of fast, reversible, incremental actions, with no need for syntactic memorization.

However, some authors hold that easily handled interfaces are not effective in educational environments. Svedensen [12] argues that interfaces based on instructions are more effective in education than those based on direct manipulation. Holst [6] maintains that the way students manipulate objects on the screen distracts their attention from the concepts being learned. In a great variety of educational environments there are so many actions that the student can execute and so much freedom of use on offer that they may lose sight of the concepts they are supposed to be learning. On the other hand, the effort required to achieve objectives by means of manipulation is beneficial for the student because it leads to a refinement of the mechanisms of knowledge acquisition, the analysis of the behaviour of objects, and the strategies needed to solve problems.

To eliminate the problems related to direct manipulation in educational environments, new systems have been proposed. Sedighian [14], [15] proposes a strategy change and presents a new concept called "Direct Concept Manipulation" (DCM), whereby the student manipulates concepts of the domain by means of the appropriate interaction mechanisms. He proposes an environment for learning Geometry in which the students must correctly place the pieces of a puzzle making use of the operations of translation, rotation and reflection.

Manuel Ortega-Cantero received his M.Sc. degree in Science and his doctorate in Science at the Universitat Autònoma de Barcelona (Spain). He is professor of Computer Science at the Higher School of Computer Science of Ciudad Real of the Universidad de Castilla-La Mancha (Spain). His research focuses on Computers in Education, Collaborative Systems and Ubiquitous Computing. He is the secretary of the ADIE (Association for the Development of Computers in Education) in Spain, editor of the Education and Technology Journal and coordinator in Spain of the RIBIE (Ibero-American Network of Computers in Education). <[email protected]>

José Bravo-Rodríguez received his M.Sc. degree in Physics at the Universidad Complutense de Madrid (Spain) and his doctorate in Industrial Engineering at the Spanish UNED (National University for Distance Learning). He teaches Computer Science at the Higher School of Computer Science of Ciudad Real of the Universidad de Castilla-La Mancha (Spain). His research focuses on Computers in Education and Ubiquitous Computing. <[email protected]>

Miguel-Ángel Redondo-Duque received his M.Sc. degree in Computer Science at the Universidad de Granada (Spain) and his doctorate in Computer Science at the Universidad de Castilla-La Mancha (Spain). He teaches Computer Science at the Higher School of Computer Science of Ciudad Real of the same University. His research focuses on Computers in Education, Collaborative Systems and Ubiquitous Computing. <[email protected]>

Crescencio Bravo-Santos received his M.Sc. degree in Computer Science at the University of Sevilla (Spain) and his doctorate in Computer Science at the Universidad de Castilla-La Mancha (Spain). He teaches Computer Science at the Higher School of Computer Science of Ciudad Real of the same University. His research is centred on Computers in Education, Collaborative Systems and Ubiquitous Computing. <[email protected]>

Grupo CHICO - Universidad de Castilla-La Mancha <http://chico.inf-cr.uclm.es/>


In this paper we propose an e-Learning environment which minimizes the problems caused by the use of Direct Manipulation in educational environments. We propose strategies such as planning as an intermediate solution to complex problems of domotic design, collaboration in order to achieve agreed solutions for the students' groups and, finally, the advantages offered by setting up the ubiquitous classroom that will complement the aforementioned improvements in the learning of Domotics.

2 Domotics

The aim of Domotics is the control of all the services in a house, in a completely automated way. These services will release people from routines such as the control over thermal and luminous comfort, security or energy consumption.

The implementation of these services also means a step forward in terms of the optimization of consumption according to such aspects as the climatic area (temperature and luminosity), the habits of the inhabitants of each building (desired temperature and luminosity), etc. All this is achieved by means of automated control over the domotic elements in the house.

2.1 The Elements

Basically, there are three types of domotic elements: sensors, actuators, and systems or controllers.

Sensors, also called receivers, are elements that receive information from the environment, for example atmospheric or luminosity variables. They can also obtain information about the actions humans carry out in their daily interaction at home, such as pressing a switch or going into a room. These include sensors of temperature, luminosity, gas, smoke, intrusion, etc.

Actuators are elements that receive the order to be activated or deactivated. They carry out actions such as switching a light fitting on/off or opening/closing a blind. As in the case of sensors, there is a great variety of actuators. Among them we can include, for example, the heating system, air conditioning, light fittings, the opening/closing of blinds, the alarm, etc.

Finally, the systems or controllers are in charge of processing the information coming from the sensors and, by means of the appropriate programming, they activate or deactivate the actuators. These elements possess (Figure 1) a certain processing capacity which can be modified by the user when (s)he so desires. In this figure we can see that by means of a program the user can manipulate the variables (parameterization), and autonomously (s)he will have access to historical and other data, such as local climate.
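As a toy illustration of this sensor/controller/actuator split, consider one control step of a thermal controller; the device names and thresholds below are invented:

```python
# Toy domotic controller: read a sensor and, according to its programming
# (here a simple threshold with a margin), activate or deactivate an actuator.
class Heating:
    def activate(self):
        print("heating ON")

    def deactivate(self):
        print("heating OFF")

def thermal_control_step(read_temperature, heating, desired=21.0, margin=0.5):
    """One control step: switch the heating actuator based on a sensor reading."""
    temperature = read_temperature()
    if temperature < desired - margin:
        heating.activate()
    elif temperature > desired + margin:
        heating.deactivate()

thermal_control_step(lambda: 19.2, Heating())  # prints: heating ON
```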

The domotic elements are grouped by means of links into different administration areas. Four such areas could be Thermal Comfort, Control over Luminosity, Security and Energy Control. Basically these four areas include most of the domotic elements, although their number and functionality are constantly increasing. There are also new controls being embedded in household appliances [1], [2]: the refrigerator, the washing machine, the microwave, etc.

2.2 Communications

The communications or links between the different sensors, actuators and systems are an essential requirement for this technology to work in any building. The networks installed in conventional houses are represented in Figure 2 (left). Here we can see how the different elements are conventionally connected together. Domotics uses a type of network in which all the elements are linked together differently from the traditional method (Figure 2, right). This type of network involves the centralized connection of all the elements to a host computer. This control can also be located in the television so that, by means of the remote control, the user can modify the programming of any of the elements installed in the building (Figure 3) [3].


Figure 1: The System in Operation (Thermal Control).

Figure 2: Traditional and Domotic Networks


This first type of centralized connection shows how virtual multifunctional elements can be installed in the host computer. They are equivalent to/represent the systems and produce an important saving in the integral automation of the building. These virtual elements can be adapted to different circumstances such as the individual control of each room, extensions to the system, control of historical data, etc. (Figure 4).

In Figure 4 we can see the thermal control of the various areas of the building. It can be extended according to the characteristics of each area. This control is distributed at different times and days of the week according to the wishes of the users. Similarly, the control can be adaptive and give advice on the different day/time/temperature phases, depending on the user's stored data (historical file) and the data provided by the manufacturer on such aspects as local climate. The buttons at the bottom permit the user to fully parameterize the thermal control.

As an alternative to this type of centralized network we could consider systems with connections to their sensors and actuators as autonomous computational elements.

Figure 3: Centralized Domotic Network (Control via PC)

Figure 4: Virtual Thermostat and Thermal Control of the Whole Building (Parameterization and Links).

Figure 5: Decentralized Domotic Network (wireless communications, autonomous systems and parameterization with PDAs).


That is to say, we can distribute the computer control of the house in smaller elements, which leads us to think in terms of Ubiquitous Computing [7], [8], [9]. Now we should be thinking in terms of wireless communication and, consequently, radio wave systems, etc. Thus our virtual system in Figure 4 can become the one in Figure 5. In the two figures we can see that by parameterizing with a universal control, such as a program in a PDA (Personal Digital Assistant), and by communicating via infrared, the programming of each system located in the building can be modified without any type of wired connection.

2.3 Interaction

The interaction within the house, that is to say, the ways in which the user can interact with the different devices installed in the house, can be of three types, as listed below:

Daily programming: This is the usual, most common type of interaction, and basically the one for which all domotic elements are designed. The user carries out such actions as walking into a room or switching the lights on (the light will come on or, depending on the programming of the luminosity system, the blinds will go up if there is enough external luminosity), etc. In this type of interaction the user need not worry about routine actions and the environment is at his or her service.

Parameterization: This consists of the modification of the programming of each system to change its performance according to each user's wishes. This includes the different schedules and temperature phases in each of the building's rooms, everything to do with luminosity, control of cheap-rate energy or control of water consumption, to give just a few examples.

Presence Simulation: This is very effective when householders are away on holiday. The mere fact of switching the light, the television, the hi-fi, etc. on and off at previously programmed intervals can be very useful as a way to discourage possible intruders.
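A sketch of the presence-simulation idea: devices are toggled at previously programmed times. The schedule below is invented for illustration:

```python
# Sketch of presence simulation: switch devices on and off at programmed times.
from datetime import time

SCHEDULE = [
    (time(20, 30), "living-room light", "on"),
    (time(22, 0),  "television",        "on"),
    (time(23, 15), "television",        "off"),
    (time(23, 45), "living-room light", "off"),
]

def commands_due(now):
    """Return the switching commands programmed for the given time."""
    return [(device, action) for (when, device, action) in SCHEDULE if when == now]

print(commands_due(time(22, 0)))  # [('television', 'on')]
```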

3 The Teaching of Domotics

To install a domotics system in a building, it is necessary to train technicians so that they know what to install in all kinds of buildings, and these techniques are already being studied in Secondary Education. That is why we thought it would be a good idea to create a simulation tool which would be much more effective than the complex and expensive boards that a Domotics classroom would require.

How the CHICO research group developed this tool is summarized below, step by step:


Figure 6: Home Automation Installation in an Apartment


Step 1: The Simulation Environment (the Design Problem)

To create a visual system for the installation of domotic elements in buildings, we developed a domotic simulation environment in 1996. This version was first developed in Visual Basic. Then we decided to use Java, and the appearance of the first environment (1997) can be seen in Figure 6.

In this Figure, we can see the various elements to be installed in a conventional house, their links and parameterizations. In this environment, by means of Direct Manipulation [10], [11], the student can have a group of domotic elements on the tool bar and, by means of actions such as "drag and drop" events, (s)he will be able to locate, connect and parameterize them.

The great variety of elements and the actions on them make this type of installation a complex problem in which the student can easily get lost and be unable to complete the installation satisfactorily. In fact, some authors argue that these easy-to-handle interfaces are not effective in educational environments [12]. Others try to solve the problem by means of Direct Concept Manipulation (DCM) [13], [14], [15].

We should bear in mind that, with just the four administration areas mentioned before, in a conventional house (4 bedrooms, living room, kitchen and two bathrooms) we can install about 95 elements, taking as an average one sensor, one actuator and one system for each room and administration area (8 rooms × 4 areas × 3 elements = 96). This installation, including its connections and parameterizations, is quite difficult to carry out properly.

Step 2: Planning (an Intermediate Abstract Solution)

To tackle this complex problem we follow the ideas of Soloway [16], and Bonar and Cunningham [17], who proposed the development of intermediate solutions in their programming environments. We have developed a tool called the Plan Editor. With this editor, the student carries out a first approach to solving the problem by making a plan with abstract design actions (Figure 7). The Plan Editor is equipped with a collection of problems with different expert solutions in order to monitor students as they make their plans [4]. Thus, by working abstractly, they should be able to manage the first approach to the solution without any great difficulty [5].

Step 3: Matching

There is a close relationship between the planning tools and the simulation tools, so that, while the students are designing the scenario to simulate, they can refer back to the schema of their plan and what they put in it, in order to be able to match their design with the plan they made previously (Figure 8).

Once the design is finished, the Simulation Environment provides the student, in the form of conclusions, with a list of those aspects where there is a certain degree of divergence between what was planned and what was designed (abstraction and detail).
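The divergence report can be pictured as a set comparison between the abstract plan and the realized design. This sketch is ours, not the authors' Simulation Environment, and the action labels are invented:

```python
# Sketch of plan-design matching: report where the realized design diverges
# from the abstract plan (in both directions).
def divergences(planned_actions, designed_actions):
    planned, designed = set(planned_actions), set(designed_actions)
    return {
        "planned but not designed": sorted(planned - designed),
        "designed but not planned": sorted(designed - planned),
    }

report = divergences(
    ["install smoke sensor", "install alarm", "link alarm to controller"],
    ["install smoke sensor", "install siren"],
)
print(report)
```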

Step 4: Working in Groups: Collaborative Learning

In pursuit of the traditional model of the classroom, where students work collaboratively to perform practical tasks, we decided to develop our system into one which would support group work.

Figure 7: The Plan Editor



Consequently, we divided our work into two phases (Figure 9), each with different objectives:
- To foster reflection and discussion when building a schema of the model to be designed in order to meet a specification.
- To promote spontaneity in the realization of the schema obtained in the previous phase, discussing aspects and hypotheses which will be proven via the simulation of the model built.

In the first phase, tools based on asynchronous communication are used in order to specify, discuss and organize the general approaches to the solution of the design problem [18]. An editor is available for the individual drawing up of design strategies by means of a specification language with a high level of abstraction. The elements of this language have an associated visual representation which facilitates their use by means of the typical procedures of direct manipulation. Once built, the models are presented to the other members of the group. The group, using a method of argumentative discussion, comments on and requests explanations about the decisions taken. The author is forced to react to these comments by arguing, justifying and refining his/her decisions. All this is done in a constructive and collaborative process that leads to learning. The results generated from this process are organized in a table of contents which can be accessed and visualized following certain search and classification criteria.

These results are a good starting point for the second phase. But it is necessary to list and organize the attributes associated with the elements comprising the model. This is carried out by means of a collaborative tool based on the direct manipulation of the domain objects. With this tool the user can select, insert, eliminate, move, etc. [19]. The tasks associated with this design process can be distributed among the participants following various criteria. As the objective of the design is to obtain a model fulfilling certain requirements, this is verified by matching the hypotheses formulated against the model's simulation. This simulation, in which all the members can interact in real time by direct manipulation and synchronous communication, will contribute to error detection, the result of reflection and the discovery of inconsistencies in the approach. These processes should lead the group to self-directed learning.

Figure 8: Plan-Design Matching

Figure 9: Modelling for Group Learning


Step 5: Proposal of a Ubiquitous Classroom

In the evaluation process of both tools we realized that the students needed to communicate with each other to make good use of them and to be able to complete their exercises satisfactorily. We therefore thought that it would be a good idea to carry out an individual experiment in the plan edition phase that in turn would be visualized and evaluated by all the students. In this first approach the student could make his or her design plan (with or without monitoring), and later on this could be visualized and discussed. Thus it would be necessary to coordinate actions such as the contribution of solutions, the control of visualization, corrections, etc. This led us to the idea of the ubiquitous classroom [20].

Our ubiquitous classroom features an architecture based on the use of various wireless network technologies, in some cases making use of vendor-dependent devices. This architecture can be seen in Figure 10.

In order to be able to carry out this idea of collective evaluation of individual plans in the ubiquitous classroom, we have adapted the Plan Editor to the characteristics of a PDA. To do this it was necessary to restructure the user interface to adapt it to the constraints of size, utility and potential ubiquitous use of the PDA.

The idea of the discussion is based on sending the proposed problem (or the formulation of the problem projected on the screen) from the coordinating PC to each PDA. Then each student will solve his/her plan and send it, together with his/her identity, to the coordinating PC; in a way hidden from him/her, (s)he will also send the trace of the actions carried out to solve the proposed problem.
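The exchange might be serialized along these lines; the message fields are hypothetical, since the paper does not specify a format:

```python
# Hypothetical sketch of the PDA -> coordinating PC submission: the solved plan
# travels with the student's identity and, transparently to the student, the
# trace of the actions performed while solving the problem.
import json

def build_submission(student_id, plan, trace):
    """Package a solved plan for the coordinating PC."""
    return json.dumps({
        "student": student_id,
        "plan": plan,    # abstract design actions making up the plan
        "trace": trace,  # hidden log of the editing actions
    })

message = build_submission(
    "student-07",
    ["place thermal sensor", "place heating actuator", "link both to controller"],
    ["open editor", "add action", "undo", "add action", "add action", "submit"],
)
print(message)
```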

The coordinating PC will have a user interface suited to classroom management and it will coordinate such functions as:
- Visualization of the different proposals of each student's individual plan on its screen.
- Selection and projection of individual or group selections of these proposals.
- Control and projection of the system's solutions and of the students' errors (asynchronous plan monitoring).
- Control of the teacher's manual corrections on the interaction screen and their transmission to the student who submitted the proposal.
- Control of submissions and receptions.
- Determination of the level of complexity.
- Etc.

We opted for two styles of work with PDAs depending on the level of complexity required:
- In the first case, the student has support when drawing up his/her plan. The memory of the PDA is not large enough to store the contents of the database needed to assist the student, so we opted for sending only the strictly necessary information, that is to say, the formulation of the proposed problem and its support.
- Conversely, in the second case we do not offer support to the student, so the coordinating PC will provide support a posteriori (support in the drawing up of the plan after processing the complete plan). The corrections can appear on the projection whiteboard, but before that it is possible for each partner to receive this plan and propose modifications that will be visualized on the interaction whiteboard in a similar way.

We are therefore proposing a ubiquitous model via the metaphor of the traditional classroom, based in particular on group correction of the proposed problems. In this classroom the student writes his/her solution on the whiteboard and this is visualized, discussed and corrected.

4 Conclusions and Future Work

The advantages of our first two proposals, planning and collaboration in the teaching of Domotics, have been evaluated with positive results, which is a step forward in the use of Direct Manipulation applied to educational environments. We have also demonstrated the need to use Collaborative Teaching Systems when the problems to be solved are of a certain complexity. Finally, we consider that this project, which follows the new interaction paradigm of Ubiquitous Computing, has made an important contribution by endowing the traditional classroom with new methods of distributed computing which reinforce the proposals that have gone before.

In the near future we will see the evaluation of the ubiquitous classroom and the results of fresh research into new ways of interaction to improve learning in the domotics classroom and those of other domains.


Figure 10: Architecture of the Ubiquitous Classroom


References

[1] The Home Automation Association. <http://www.homeautomation.org>; Smarthome Systems. <http://smarthomeusa.com>
[2] Merloni Electrodomestici Main Site. <http://www.merloni.it/eng/default.html>
[3] U. Hansmann, L. Merk, M. S. Nicklous, T. Stober. "Pervasive Computing Handbook". Springer, 2001.
[4] J. Bravo Rodríguez. "Planificación del diseño en entornos de simulación para el aprendizaje a distancia". Ph.D. Dissertation, Madrid, 1999.
[5] J. Bravo, M. Ortega, M. F. Verdejo. "Planning in problem solving: A case study in Domotics". In Proceedings of Frontiers in Education, Kansas City, USA, 2000.
[6] S. J. Holst. "Directing Learner Attention With Manipulation Styles". In CHI'96 Conference Companion.
[7] M. Weiser. The computer for the twenty-first century. Scientific American, September 1991, pages 94–104.
[8] M. Weiser. Ubiquitous computing. <http://www.ubiq.com/hypertext/weiser/UbiHome.html>
[9] M. Weiser. The future of Ubiquitous Computing on Campus. Comm. ACM, 41-1, January 1998, pages 41–42.
[10] B. Shneiderman. "Designing the User Interface". Addison Wesley, 1997.
[11] B. Shneiderman. "Direct Manipulation". In B. Shneiderman (Ed.), Sparks of Innovation in Human-Computer Interaction. Ablex Publ., NJ, 1993.
[12] G. B. Svedensen. "Influences of Interface Style on Problem Solving". International Journal of Man-Machine Studies, 35:379–397, 1991.
[13] K. Sedighian, M. Klawe. "An Interface Strategy for Promoting Reflective Cognition in Children". In HCI'97, Bristol, UK, 1997.
[14] K. Sedighian, M. Westrom. "Direct Object Manipulation vs. Direct Concept Manipulation: Effect of Interface Style on Reflection and Domain Learning". In HCI'97, Bristol, UK, 1997.
[15] K. Sedighian, M. Klawe, M. Westrom. "Role of Interface Manipulation Style in Cognition in Learnware". In Research Alerts of ACM-Interactions, September–October 2000.
[16] E. Soloway. "Learning to Program = Learning to Construct Mechanisms and Explanations". Communications of the ACM, 1986.
[17] J. Bonar, R. Cunningham. "Intelligent Tutoring with Intermediate Representations". ITS-88, Montreal, 1988.
[18] M. A. Redondo, C. Bravo, J. Bravo, M. Ortega. Colaboración en entornos de aprendizaje basados en casos reales. Aplicación en ambientes de diseño y simulación. In J. Cañas, M. Gea (Eds.), Proceedings of INTERACCIÓN 2000, I Jornadas de Interacción Persona-Ordenador, pages 143–153, Granada (Spain), 2000.
[19] C. Bravo, M. A. Redondo, J. Bravo, M. Ortega. DOMOSIM-COL: A Simulation Collaborative Environment for the Learning of Domotic Design. SIGCSE Bulletin (ACM) – Inroads, vol. 32, no. 2, pages 65–67, June 2000.
[20] M. Ortega, M. Paredes, M. A. Redondo, P. P. Sánchez, C. Bravo, J. Bravo. "AULA: un sistema abierto de enseñanza de idiomas". Novática no. 153, September–October 2001.

Figure 11: User's Interface in the PDA.

Figure 12: Model of Interaction in the Ubiquitous Classroom.


Designing Complex Systems in Industrial Reality: A Study of the DUTCH Approach

Cristina Chisalita, Mari-Carmen Puerta-Melguizo, and Gerrit C. Van der Veer

The main interest of our research group is to study the design process of "complex interactive systems". The conceptual design framework we are using is DUTCH [5]. Coming as we do from an academic environment, our main interest was to put this framework into practice and to test our ideas in the real world. The opportunity arose several times in both academic and industrial settings [7]. In these cases our group acted as consultants, but now, for the first time, we are learning from the inside. We are currently working as ethnographers and are therefore members of a real interdisciplinary design team in a leading IT industry. In this paper we will set out the initial results of our experience and the lessons we are learning from it. More specifically, we will focus on the problems we encountered when performing Task Analysis, and on the implications for the modelling tool developed from DUTCH.

Keywords: Design, DUTCH, Ethnography, EUTERPE, GTA, Industrial Practice, Interactive Systems Design, Task Analysis.

1 Introduction

The HCI research group at the Vrije Universiteit (Amsterdam, The Netherlands) is mainly interested in the design process of "complex interactive systems". That is to say, the design of information technology that features in situations where people work in groups and the systems are therefore used by many colleagues for a variety of tasks. Work activities in these cases include communication and coordination among people, and the actions of several persons on shared objects and in shared workspaces. Designing systems for such situations is a complex activity and normally needs the collaboration of a multidisciplinary team in the design process.

An important step in this research was the development of a conceptual framework in which the design process and the different sources of knowledge and modelling are described.


Cristina Chisalita graduated in Work and Organizational Psychology. Currently she is a Ph.D. student in Software Engineering at Vrije Universiteit Amsterdam, The Netherlands. The main topic of her research is the study of organizational subcultures and communities of practice for matters of technological support design. Her projects include research into collaborative practices (in design teams) and the design process (the role of mental models and scenarios). She has been involved in several field studies with The Netherlands Police force, software companies, the Dutch Tax Office and museums from several countries in Europe. Cristina has developed several papers (in collaboration) and has organized workshops on HCI topics. She is currently co-authoring the HCI Handbook: Designing Interactive Systems in Context – A Multidisciplinary Approach, to be published by McGraw-Hill. She is involved in teaching Human-Computer Interaction and User Interface Design classes to undergraduate and graduate students. <[email protected]>

Mari-Carmen Puerta-Melguizo has an educational and professional background in Cognitive Psychology. Her expertise includes research on users' mental models of complex systems, visual and verbal representations, and psycholinguistics, using both the experimental framework and ethnography as main research tools. In 2000, she obtained a Ph.D. Cum Laude in Cognitive Psychology at the Universidad de Granada (Spain). Her thesis explored the processes involved in picture naming and the differences in representing verbal and visual information. She currently holds a postdoc post in Computer Science at the Vrije Universiteit in Amsterdam (The Netherlands). The project is funded by the Dutch Ministry of Economic Affairs (SENTER). The research topic is the role of mental models of prospective users in envisioning design. Related to this research, and in collaboration with Cristina Chisalita, she is developing a methodology to build scenarios in which design ideas are presented to the users. Furthermore, she visits industry design teams as an ethnographer to explore the role of users' mental models in a real design process. Mari-Carmen is a lecturer in Human-Computer Interaction and Human Information Processing at the Vrije Universiteit. <[email protected]>

Gerrit C. van der Veer was one of the first people in The Netherlands to work with computers. He is currently working on the development of a new design method called "DUTCH" (Designing for Users and Tasks from Concepts to Handles) and a related task analysis method called "GTA" (Groupware Task Analysis). He is Cooperating Societies Liaison for the European Association of Cognitive Ergonomics (EACE), a member of the Dutch Local Special Interest Group of ACM SIGCHI, a representative of NGI, the Dutch Computer Society, in the IFIP Technical Committee on Human-Computer Interaction (TC 13), and a reader in Interactive Systems. <[email protected]>


The best way to validate and improve the conceptual design framework is to apply it in practice. This has been done on several occasions in the past in academic and industrial settings. For example, it was used in a design company in Austria for the design of safety-critical systems [4]. On these occasions, members of our group collaborated to some extent in the design process. A few months ago we were allowed to actively participate in an interdisciplinary design team where we acted as ethnographers. Currently we are in the preliminary phase of our investigation concerning the complete design process.

In the following section we will briefly describe our conceptual framework. The explanation will focus on the first phase of the design process: Task Analysis. In the rest of the paper we will describe the problems we found while performing Task Analysis and the way these findings help us to understand and improve the modelling of several aspects of Task Analysis.

2 Our Conceptual Framework: DUTCH

Because designing complex systems is in itself a complex activity, several factors have to be taken into account. For this reason, it is extremely important to define and use a design framework in which the process to follow is clear and complete. From a User-Centred approach, several models of the design process have emerged (e.g. [1], [2], [3]). Our research group has developed and works with DUTCH (Designing for Users and Tasks from Concepts to Handles), a model specifically created for multidisciplinary teams to design complex systems [5]. The method is driven by an extensive task analysis followed by structured design and is characterized by iterative evaluations using usability criteria. To cover the wide range of aspects of design, DUTCH uses a combination of multiple complementary representations and techniques. Figure 1 gives an overview of the whole design process with all the activities and sources of information.

Following the DUTCH process, the main activities are:
1) Analysing the "current" task situation (task model 1 or TM1).
2) Envisioning a future task situation for which information technology is to be designed (task model 2 or TM2).
3) Specifying the information technology to be designed (the User's Virtual Machine or UVM). Specifying the UVM means specifying the functionality of the system, the dialogue between the users and the system, and the representation or way the system is presented to the user.
4) Evaluating activities to allow an iterative process of improving the analysis and the detailed specifications.

3 Focusing on Task Analysis: GTA

In the case of complex task situations especially, several disciplines need to contribute in order to analyse, describe, and model the task world.


Figure 1: The DUTCH Design Process.


As a result, Van der Veer and Van Welie [5] developed GroupWare Task Analysis (GTA): a broad task-analysis conceptual framework based on the integration of different approaches. In the following paragraphs we briefly discuss GTA and the elicitation techniques that can be used to extract information about the task world.

The conceptual framework. In order to model the task knowledge, GTA proposes a conceptual framework based on the integration of views from Human-Computer Interaction (HCI) and Computer Supported Cooperative Work (CSCW). The important thing, especially in a complex situation in which the work involves collaboration and communication, is to consider all the aspects related to the performance of the tasks. Therefore, GTA describes the task world focusing on three different viewpoints:
1. The agents and the roles. Specifying the active entities in the task world: users and other stakeholders, systems or organizations and their roles, as well as the organization of work (structure of actors, roles, and allocation rules).
2. The work. Specifying the decomposition of tasks and strategies (different informal, partly situated ways of performing the tasks), goals and events (which trigger the tasks).
3. The situation. Specifying the objects used in the task world as well as their structure, the history of past relevant events and the whole social and physical work environment.

The methods. GTA aims to describe the task world in all its complexity and to generate a task model which is as complete as possible. The conceptual framework integrates analytical methods from HCI (individual oriented) with ethnographical methods from CSCW (group oriented). Consequently, GTA considers four sources of knowledge about the task that have to be considered in task analysis: implicit and explicit individual (expert) knowledge, as well as implicit and explicit group knowledge. For each type of knowledge several specific elicitation techniques are used: e.g. for individual-explicit knowledge, interviews and psychological observation; for individual-implicit knowledge, hermeneutics; for group-explicit knowledge, document analysis; for group-implicit knowledge, participant observation and interaction analysis.

3.1 EUTERPE: A Tool to Support Task-Based Design

Modelling the task knowledge is not an easy activity, especially when it is necessary to describe and analyse a complex situation and to work in a multidisciplinary design team. There are several problems that may arise in such a situation: e.g. the amount of data from task analysis (which can be overwhelming) or communication issues in the design team (e.g. specialists from different domains with different backgrounds who do not speak the same language). In order to overcome these problems, a set of representations and a supporting tool were developed for modelling the knowledge gathered during task analysis [6]. EUTERPE is based on the task world ontology that describes the way we look at the task world during task analysis (see Figure 2). This ontology is derived from the conceptual framework of GTA and specifies six concepts on which the task world is modelled (task, goals, objects, roles, agents, events) as well as the relationships between them (uses, has, triggers, plays, performed by, sub-role, responsible, used by).
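The six concepts and their typed relationships lend themselves to a small data model. The sketch below merely mirrors the ontology of Figure 2; it is not EUTERPE's actual implementation, and the sample entities are invented:

```python
# Sketch of the GTA task world ontology behind EUTERPE: six concepts
# (task, goal, object, role, agent, event) linked by typed relationships.
from dataclasses import dataclass, field

@dataclass
class Entity:
    name: str
    kind: str  # "task", "goal", "object", "role", "agent" or "event"

@dataclass
class TaskWorld:
    entities: list = field(default_factory=list)
    relations: list = field(default_factory=list)  # (relation, source, target)

    def add(self, name, kind):
        entity = Entity(name, kind)
        self.entities.append(entity)
        return entity

    def relate(self, relation, source, target):
        # e.g. "uses", "has", "triggers", "plays", "performed_by", "responsible"
        self.relations.append((relation, source.name, target.name))

world = TaskWorld()
check_in = world.add("Check in customer", "task")
clerk = world.add("Clerk", "role")
form = world.add("Registration form", "object")
world.relate("responsible", clerk, check_in)  # a role is responsible for a task
world.relate("uses", check_in, form)          # a task uses an object
```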

In order to represent the task knowledge and to use it in multidisciplinary teams, EUTERPE uses multiple representations:
• Relation templates are used to represent the conceptual entities in relation to other entities and other relevant information. For example, a task can be described by using several specifications: task name, goal, constructor, unit/basic task, etc. Furthermore, each template has a section with comments (see Figure 4).
• Structure graphs like hierarchical trees and flow diagrams are used to represent the hierarchical decomposition of the work (see Figure 3), workflow and data flow.
• Pictures, drawings and video are used to represent objects and the work situation (see Figure 5), etc.

An extra functionality of EUTERPE is the ability to perform analysis queries; e.g. it is possible to find out how many tasks do not have a role related to them.
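Using the sketch above, the query mentioned (tasks with no related role) reduces to a simple filter; again, this is only an illustration:

```python
# EUTERPE-style analysis query on the sketch above: tasks without a role.
def tasks_without_role(world):
    tasks_with_role = {target for (rel, _, target) in world.relations
                       if rel == "responsible"}
    return [e.name for e in world.entities
            if e.kind == "task" and e.name not in tasks_with_role]

world.add("Print invoice", "task")  # a task nobody is responsible for yet
print(tasks_without_role(world))    # ['Print invoice']
```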

3.2 Modelling Two Task Worlds

GTA and its tool EUTERPE are used to build TM1 and TM2. TM1 involves performing a systematic analysis of the current situation and is built by modelling the task knowledge from the three viewpoints described in the conceptual framework. This analysis may help to formulate the design requirements and, at the same time, may allow evaluation of the design later on.

TM1 reveals what problems, conflicts and inconsistencies are currently found in performing the task.

Figure 2: The Task World Ontology.


Based on the results of this analysis, on the requests formulated by the client of the design, and on ideas and requests from users and stakeholders, the new task world is envisioned. In general, TM2 will be formulated and structured in the same way as TM1, but in this case it is not considered a descriptive model of users' knowledge but a prescriptive model of the knowledge a user of the new technology should possess.

4 DUTCH in Practice

Although quite a complete model, DUTCH is still only a preliminary conceptual framework. An important goal of our current research is to adjust and improve the design framework. For this purpose we decided to study real design teams in industrial settings. Our current participation in a multidisciplinary design team is helping us a great deal. In the following paragraphs we present the first results of our experience and show how DUTCH and GTA are being applied in the design of a (very) complex interactive system.

4.1 Starting as Ethnographers and Continuing as Experts

A few months ago we contacted a leading enterprise engaged in the development, production, marketing, and servicing of high-tech equipment. The company has approximately 8000 employees, located in 16 countries of Europe, America and Asia. We were invited to collaborate with the User Interface Design team and started as ethnographers inside the team. We needed to understand the situation, the workflow in the design team and the characteristics of the system they were trying to design.

In our role as ethnographers we were originally meant to be participant observers and apprentices, but about two weeks after we started we became real members of the design team. It was clear the team had problems understanding and applying its own defined design process (a derivation of ISO 13407: "Human Centred Design for Interactive Systems") and our contribution proved to be quite useful in solving these problems. From that moment we were no longer considered apprentices but experts contributing our own knowledge to the team. In this situation we influenced and shaped their design process with our approach and methodology. The fact that we learned something from them, but not everything, and that they learned from us, made our experience a special case of ethnography.

4.2 The Design Activity in the Multidisciplinary Team

During the design activity, the main difficulties we as a team found were related to understanding and communication. The problems arose due to different factors. First of all, as a team we had different backgrounds; the multidisciplinary team included experts in usability, engineering, Cognitive Psychology and HCI. Secondly, we had different levels of knowledge about the task domain.


Figure 3: Example of a Task Hierarchy Representation


The tasks and the technology to consider and learn about are highly specialized, and we, as ethnographers, needed some time to understand them. And third, the team members had different knowledge about the design process itself. In this case, our expertise proved very helpful to the company in improving their own ideas about the design process and how to perform task analysis, among other activities.

After two weeks of work, these observations led us to introduce DUTCH and EUTERPE. The result of this presentation was that the design team considered DUTCH a suitable approach for their particular design project and decided to work with it. From that moment communication became easier. As a team we all started to speak the same language: we started to talk in terms of tasks, objects, actors, roles and events. We can say that the improvement in communication was the result of learning from both sides: the "natives" learned about DUTCH and we learned a lot about the specific task domain.

4.3 Understanding the Task World

As ethnographers as well as members of the design team, we first had to try to understand the task domain. This proved to be very complex in itself because the tasks and the technology to consider and learn about are highly specialized. The initial description of the design project was to re-design the user interface controlling a complex system. Based on our growing shared understanding of the task domain and on the requirements and constraints from management (concerning time), we specified the goals of the design project in greater detail. In general, we needed to integrate different versions of the same interface (the variation is related to the "age" of the systems) and to support all types of users who are using the interface for different purposes, in different locations and countries.

The Process. Following GTA we defined the agents, the work and the situation. We started the task analysis by finding out who the users are and what their tasks are in the current situation. We found that there are users from inside the company (from different departments) as well as users outside the company (the company's clients who buy the systems). After we had identified the types of users, we started to investigate what tasks they have to do and to analyse them using the work viewpoint (task decomposition and strategies). And finally, focusing on the situation, we specified the objects used by each type of user in the different tasks. Socio-cultural aspects were also considered (e.g. some of the company's clients are from different continents).

In performing the task analysis we encountered two main problems:
• The company is very large, with different departments that do not communicate very much. This situation made the task of identifying and defining the users much harder.
• The company's clients may be located on different continents. These cultural differences make the analysis more complex.

The Methodology. To identify the users and to analyse the tasks we used several techniques: interviews, focus groups (to which people from different departments were invited in order to find out "who" the users of the system are), documents from the company (including a previous document on task analysis), etc.

4.4 Modelling the TM1: EUTERPE in Practice

We used EUTERPE to develop task model 1.

Figure 4: Example of a Task Template


EUTERPE proved to be a really useful tool in modelling the task knowledge of such a complex task domain, though it gave some problems as well.

We modelled the tasks, objects, roles, agents and events. In Figure 3 we present some of the tasks we modelled.

The templates helped us to specify several aspects related to each of the six concepts and the relationships between them. An example of a task template can be seen in Figure 4. However, we encountered some problems using them (see the section on aspects to be improved).

From our experience we found that, while it is very useful, EUTERPE needed some improvements.

Positive aspects of using EUTERPE. The tool supported communication and understanding in the design team very well: firstly, because it provided us with a common language, and secondly because it allows easy-to-use visual representations. Having the representations in front of us, we could discuss and structure our ideas. In this way, the representations built with EUTERPE supported our understanding of the main issues about the task domain. Another advantage is that we could store the information in a small space and could also retrieve it easily. Previously the team had represented the task model on paper and had difficulties organizing and retrieving all of the relevant information.

Aspects to be improved and suggestions. Although at first sight this may appear to be a problem, another advantage of using our tool in a real complex project is that we discovered that some features of the tool needed to be added or changed. As users of the tool, we came up with several ideas on how to improve EUTERPE. For example, we discovered that opening the templates every time we wanted to find out something about the task (or the other concepts) was tedious and time consuming. Also, the program should allow the analysts to specify the preconditions for a certain task, and it would be nice if you could see the comments about the objects from the task template. Another idea was that the program should offer the possibility of tracing the changes performed at different moments or locations by different members of the group. Other problems were that the current version only runs on Windows, the comments screen does not have a scroll bar, and it would be nice to have a spell-checker. In this type of company the design team is typically international and includes people with different language backgrounds (in our case Dutch, Italian, Romanian and Spanish).

5 Conclusions and Further Steps
One of the main characteristics of the design of interactive systems nowadays is their increasing complexity. Interactive systems are used by groups of people in a variety of roles. The design of these systems requires a multidisciplinary team with knowledge in different areas (e.g. HCI, Computer Science, Organizational Psychology, etc.). The co-ordination of such a complex team is sometimes difficult, and tools to facilitate communication and understanding among the different members of the team are needed. DUTCH and its task-analysis conceptual framework, GTA, are design frameworks developed to work in these situations. The main goal of our research was to participate in a real design team in industry. As a result of this activity we intend to change and adapt our approach to fit the requirements of real-life design. This paper shows our results in task analysis, applying GTA and its related tool EUTERPE in a real design project. We are performing our study as ethnographers as well as members of the design team in a leading IT company. In this setting we had the opportunity to apply and test our conceptual framework for task analysis. At the present moment we are engaged as a team in building TM1.

From the results we have obtained so far we feel entitled to claim that GTA is an appropriate conceptual framework for task analysis in such a complex task domain. The problems we encountered were at no moment related to the task world ontology. As a matter of fact, working with the ontology helped the design team to organize the knowledge obtained about the current situation. Also, EUTERPE proved to be a helpful tool in modelling knowledge about TM1. The problems we encountered in using EUTERPE in our modelling activity are currently being studied in our research group with a view to improving the tool. Our goal is for EUTERPE to support real design teams better.

Figure 5: Picture of Distributed Documents during a Design Phase


We intend to continue working with the design team throughout the whole project. In this way we will validate the complete DUTCH design process. The next step is to investigate how well GTA and EUTERPE support the modelling of TM2. Further steps include using the New User Action Notation (NUAN) editor, the tool developed by our group to specify the User's Virtual Machine (see Figure 1).

To sum up, our research in the field supports DUTCH, and the tools related to it, as an appropriate framework for the design of complex interactive systems.



Towards Universal Access in the Disappearing Computer Environment

Constantine Stephanidis

The emerging technological paradigm of the Disappearing Computer will bring about new challenges for the discipline of Human-Computer Interaction, resulting in multiple new requirements for the development of user interfaces. These challenges will inevitably need to be addressed in the broader context of developing an Information Society acceptable to all citizens. In this respect, Universal Access is expected to play a critical role in providing appropriate user interface development methods and tools. This paper describes the design, implementation and evaluation of an application experiment addressing some of the issues raised by Universal Access in the Disappearing Computer environment, in particular context-awareness, interface migration and continuous interaction, and briefly discusses the requirements that arise concerning the design, development and evaluation of user interfaces in such a context.

Keywords: Disappearing Computer, Nomadic Application, Universal Access.

1 Introduction
Computing technology evolves rapidly, and each generation of technology offers new opportunities to improve the quality of human life. The target user population addressed is broadening, the availability, type, functionality and content of new products and services are expanding, and access technologies are being diversified. At the same time, however, each generation of technology has the potential to introduce new difficulties and barriers in the use of products and services – and eventually in everyday life – new risks for the health and safety of people, and new forms of social exclusion and discrimination. Human-Computer Interaction (HCI) is one of the key factors in ensuring the social acceptability of computer-based products and services, as users experience new technologies through contact with their user interfaces. In this respect, HCI plays a critical role and is continuously called upon to face new challenges.

In the years ahead, as a result of the increasing demand for ubiquitous and continuous access to information and services, Information Society Technologies are anticipated to evolve towards a new paradigm referred to as the Disappearing Computer (DC), i.e., an environment characterised by invisible (embedded) computational power in everyday appliances and other surrounding physical objects, and populated by intelligent mobile and wearable devices [1].

Constantine Stephanidis is Deputy Director of the Institute of Computer Science (ICS), Foundation for Research and Technology – Hellas, and Head of its Human-Computer Interaction and Assistive Technology Laboratory. He is also a member of the Faculty of the Department of Computer Science and a member of the Senate of the University of Crete, Greece, <http://www.uch.gr/>. For many years he has been engaged, as Prime Investigator, in pioneering research work, partly funded by the European Commission, and in 1995 he introduced the concept of "User Interfaces for All" as a socio-technical goal in the context of the Information Society. He also introduced a new technical framework for achieving this goal – a method and an accompanying set of tools that support the development of "Unified User Interfaces" facilitating universal access and usability of interactive products, applications and services. Prof. Stephanidis has published more than 200 technical papers in scientific archival journals and proceedings of international conferences related to his fields of expertise. He serves on the Editorial Board of six scientific archival journals and on the Programme Committee of many international conferences, and has organised several international scientific conferences, workshops, seminars and panels. Prof. Stephanidis is the Editor-in-Chief of the Springer international journal "Universal Access in the Information Society", and the Editor of the LEA book "User Interfaces for All – concepts, methods, and tools" (book flyer available in PDF <http://www.ics.forth.gr/proj/at-hci/files/LEA_Book.pdf> and TXT format <http://www.ics.forth.gr/proj/at-hci/files/LEA_Book.txt>). He is the Founding Chair of the International Conference "Universal Access in Human-Computer Interaction", Founder of the ERCIM Working Group "User Interfaces for All" and General Chair of its Workshop, and the Founding Chair of the International Scientific Forum "Towards an Information Society for All". Prof. Stephanidis is a member of the Executive Committee and of the Board of Editors of the European Research Consortium for Informatics and Mathematics (ERCIM), a member of the Advisory Committee of the World Wide Web Consortium (W3C), Head of the W3C Office in Greece, a participant in the W3C Web Accessibility Initiative, and a member of various international professional associations related to his fields of expertise (IEEE, IEE, ACM, HFES, AAATE, RESNA, etc.). <[email protected]>


As a result, the DC is expected to have clear and profound consequences on the type, content and functionality of the emerging products and services, as well as on the way people will interact with them, bringing about new research issues, as the dimensions of diversity currently identified in users, technologies and contexts of use will change radically, resulting in multiple new requirements for the development of Information Society Technologies [2]. This technological evolution will inevitably need to be addressed in the broader context of developing an Information Society acceptable to all citizens. In this context, the notion of Universal Access is critically important. Universal Access implies the accessibility and usability of Information Society Technologies by anyone, anywhere, anytime [3][4][5], and aims to enable equitable access and active participation of potentially all citizens in existing and emerging computer-mediated human activities, by developing usable products and services capable of accommodating individual user requirements in different contexts of use, independently of location, the user's primary task, the target machine, or the run-time environment.

2 The Disappearing Computer: Interaction Challenges
The DC environment will be populated by a multitude of hand-held and wearable "micro-devices" (e.g., wrist-watches, bracelets, personal mobile displays and notification systems), and computational power will be distributed in the environment (e.g., embedded screens and speakers, ambient pixel or non-pixel displays, smart clothing) [6][7]. Devices will range from "personal", carrying individual and possibly private information, to "public" in the surrounding environment. Devices will also vary in the type and specialisation of the functionality they offer, ranging from "personal gadgets" (e.g., monitors embedded in clothing) to "general-purpose appliances" (e.g., wall-mounted displays).

A variety of new products and services is made possible by the emerging technological environment, including home networking and automation (e.g., [8]), mobile health management (e.g., [9]), interpersonal communication (e.g., [8]) and personalised information services (e.g., [10]). These applications are characterised by increasing ubiquity, nomadicity and personalisation, and are likely to pervade all daily human activities. They have the potential to enhance security in the physical environment, save human time, augment human memory, and support people in complex tasks as well as in simple activities.

The aforementioned issues shape the forms of interaction likely to emerge in the new computing environment, and raise a series of challenges for the field of HCI, related to the accessibility, usability and, ultimately, acceptability of the emerging technologies. Some of the particularly important challenges are:

Implicit and continuous interaction. Technology 'disappears' for humans both physically and mentally, as devices are not perceived as computers, but rather as augmented elements of the physical environment [13]. In such an environment, humans can no longer be focused on computing tasks. Therefore, interaction shifts from an explicit paradigm, in which the users' attention is on computing, towards an implicit paradigm, in which interfaces themselves drive human attention when required [14]. Interaction in the emerging environment is no longer based on series of discrete steps, but on a continuous input/output exchange of information [15]. Continuous interaction differs from discrete interaction in that it takes place over a relatively longer period of time, in which the exchange of information between the user and the system occurs at a relatively high rate in real time. A first implication is that the system must be capable of dealing in real time with the distribution of input and output in the environment [11][16]. This implies an understanding of the factors which influence the distribution and allocation of input and output resources in different situations for different individuals. The main challenge, in this respect, lies in dynamic context-awareness [12]. Adequate models, as well as monitoring and sensor technologies for detecting and representing relevant contextual factors, are necessary [17], and appropriate techniques need to be identified for supporting input/output distribution. It is likely that techniques such as adaptation [18], interface migration [19], plasticity [20], mobile code [21], etc., will play an important role in this process.

Perceptual and cognitive issues. Due to the intrinsic characteristics of the new technological environment, it is likely that interaction will pose different perceptual and cognitive demands on humans compared to currently available technology. It is therefore important to investigate how human perceptual and cognitive functions will be engaged in the emerging forms of interaction, and how this will affect an individual's perceptual and cognitive space (e.g., emotion, vigilance, information processing, memory). The main challenge in this respect is to identify and avoid forms of interaction which may lead to negative consequences such as confusion, cognitive overload, frustration, etc.

User control. The invisibility and complexity of distributed and dynamically defined systems prevent humans from operating devices step by step towards the completion of a task. Rather, humans will manage tasks, leaving their execution to computing units in the technological environment. Therefore, the distribution of tasks among humans and computing devices undergoes radical changes and needs to be redefined, and appropriate forms of context- and task-dependent delegation, shared control and responsibility, and initiative-taking need to be elaborated. The provision of effective and efficient human control over the dynamic and distributed system therefore becomes critical.

3 An Example of an Interactive Application in the Disappearing Computer Environment
The concept of the DC reflects an infrastructure in which users are engaged in mobile interaction sessions within environments constituted of dynamically varying computational resources. Therefore, applications are required to continuously follow end-users and provide high-quality interaction while migrating among different computing devices and dynamically utilizing the available I/O resources of each device (see Figure 1). In the context of the 2WEAR (IST-2000-25286) project [29], an application experiment has been developed exhibiting elements



of mobile, wearable and wireless I/O resources; multiple I/O resources employed concurrently; interface migration with state-persistent re-activation; the ability to dynamically engage or disengage I/O resources as these become available or unavailable in mobile situations; and, finally, high-quality interface design to ensure interaction continuity under dynamic I/O resource control.

The experimental application developed is a nomadic "music box" for MP3 files, providing downloading from a server through the network, local storage, and decoding and playback functionality through a software/hardware player with typical audio control functionality.

To support the development of such a distributed, migrating application, characterized by dynamic I/O configuration, a software architecture has been designed and implemented facilitating the development of interfaces with the following key properties (a minimal sketch of the discovery/loss handling follows the list):
• Remote utilisation of multiple I/O resources (e.g., audio devices, different types of displays, input devices) over a wireless LAN, supporting mobility (i.e., the application follows the user) and state persistence (i.e., the application remembers its state between interaction sessions).
• Management of dynamic loss of connection with some I/O resources, or discovery of new resources (the running interface dynamically employs new I/O resources, or possibly re-allocates existing ones, to ensure interaction continuity).
• The interface continuously informs the user of the detection of newly available computing units and of the loss of connection with units within the interactive space. The user is enabled to engage or disengage computing units on a task-oriented basis.
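As a rough illustration of the second and third properties, the following sketch shows how a running interface might react to resource discovery and loss events. The interface and class names are assumptions made for illustration, not the actual 2WEAR architecture:

import java.util.*;

// Hypothetical sketch: reacting to dynamic discovery and loss of I/O resources.
interface ResourceListener {
    void resourceDiscovered(String resourceId);
    void resourceLost(String resourceId);
}

class RunningInterface implements ResourceListener {
    private final Set<String> engaged = new HashSet<>();

    public void resourceDiscovered(String id) {
        engaged.add(id);                          // employ the new resource...
        System.out.println("Engaged " + id);      // ...and inform the user
    }

    public void resourceLost(String id) {
        engaged.remove(id);
        System.out.println("Lost " + id);         // inform the user, then
        // re-allocate remaining resources here to ensure interaction continuity
    }
}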

Set-up
A specific set-up of the interaction environment has been selected as a collection of independent mobile devices with wireless connections, comprising (see Figure 2):
(i) a portable music player unit (e.g., walkman), providing a hardware MP3 player, a local operating system with temporary file storage and primitive program execution control, networking for downloading, and hardware buttons for playback control;
(ii) headphones, providing audio output capability with wireless connection, accepting streamed audio data and audio control commands from a server;
(iii) a hi-fi system, providing the functionality of the portable music player, automatically supporting audio play, and allowing exclusive external use of the loudspeakers; it offers audio playback with mixing capability, enabling audio redirection to an audio service, and supports speech synthesis;
(iv) a wrist watch, providing a software-controlled LCD display supporting 2 icons at fixed positions from a collection of 8 icons; it supports the generation of 16 tones that can be activated through software control, and provides two programmable buttons as well as an alert service.
These devices do not currently offer the functionality required to employ them directly as open computing platforms. In the context of the DC environment, however, it is expected that these types of devices will exhibit embedded operating systems and networking support.

Design
Due to the dynamic nature of I/O, generic I/O modelling has been employed. Multi-modal logical I/O categories have been introduced in the dialogue design, and physical I/O resources have been explicitly linked to the respective logical categories. During interaction, for each I/O task, multiple available resources may be detected at a particular point in time. In order to ensure that the most suitable resource is selected at run-time, a relative ranking of these resources has been provided during design. The concurrent employment of a set of I/O resources has been explicitly documented by assigning equivalent "selection scores" to the resources involved. Finally, during interaction, there are situations in which, for an active I/O task, the associated resource may become unavailable due to loss of connection. In this case, an available alternative resource has to be detected. However, if all the candidate resources are used by currently active I/O tasks, the interface may have to make dynamic re-assignments based on task criticality and alternative resource availability.

Figure 1: Mobile interaction in dynamically changing environments


The related design logic has been appropriately documented and subsequently embedded, in implementation form, in the run-time interaction control component.

When interacting with applications in a distributed computing environment, it is critically important that the user be able to manage alternative applications and switch among them, choose the I/O re-configuration to be applied in case of dynamic device engagement or loss of device connection, and decide on the migration and re-activation of the system to another device. Accordingly, three categories of functionality need to be provided in the user interface: application-specific dialogue, application management, and migration/dynamic I/O resource control.

A combination of design methods has been used in the dialogue design of the nomadic music box. Interaction tasks have been used to enable input abstraction. Interactors have been used to provide common, re-usable physical I/O behaviours. State-based specification of I/O tasks has been used to enable a common logical implementation. Finally, polymorphic instantiation has been used to enable the provision of alternative physical designs (for the dynamically available I/O resources) of the abstract I/O tasks.

In the dialogue instantiation phase, the binding of the input/output action set to various alternative configurations of I/O resources has been defined. Alternative configurations are ranked according to their selection priority. Following this approach, an I/O task can be sufficiently supported during interaction if there exists at least one configuration, within the I/O task's dialogue instantiation logic, in which all the necessary I/O resources are available.
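The run-time selection rule just described – pick the highest-priority configuration whose resources are all available – can be sketched as follows. This is an illustrative reconstruction in Java with invented names, not the project's actual code:

import java.util.*;

// Illustrative sketch: run-time selection of an I/O configuration for a task.
class IOResource {
    final String id;
    boolean available;
    IOResource(String id, boolean available) { this.id = id; this.available = available; }
}

class IOConfiguration {
    final int priority;                        // relative ranking assigned at design time
    final List<IOResource> required;           // all resources this configuration needs
    IOConfiguration(int priority, List<IOResource> required) {
        this.priority = priority;
        this.required = required;
    }
    boolean supported() {                      // usable only if every required resource is available
        return required.stream().allMatch(r -> r.available);
    }
}

class IOTask {
    final List<IOConfiguration> alternatives;  // defined in the dialogue instantiation phase
    IOTask(List<IOConfiguration> alternatives) { this.alternatives = alternatives; }

    // The task is sufficiently supported during interaction if this yields a configuration.
    Optional<IOConfiguration> select() {
        return alternatives.stream()
                .filter(IOConfiguration::supported)
                .max(Comparator.comparingInt(c -> c.priority));
    }
}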

Four I/O tasks have been selected in the 'Music Box' scenario, namely Selector (single choice), Command (performing a direct action, such as a button press), Numeric (defining a numeric value), and Text input. On the basis of these four I/O tasks, the three dialogue categories (i.e., application management, migration and I/O control, and music-box control) have been appropriately decomposed. Each I/O task has been instantiated with appropriate alternative I/O configurations. Each of the two design layers for I/O tasks (i.e., logical dialogue control – automaton; device-level control – instantiation) has been mapped to a corresponding implementation layer. A sample dialogue specification of the I/O Selector task, in the form of a dialogue state automaton, is presented in Figure 3.
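The Selector automaton of Figure 3 cycles the focus with Next/Previous events and confirms it with Select. A device-independent sketch of that logical dialogue control layer might look like the following; the names are illustrative, and the physical instantiation on buttons and displays is omitted:

import java.util.List;

// Illustrative sketch of the abstract Selector I/O task (cf. Figure 3).
class Selector {
    private final List<String> options;
    private int focus = 0;

    Selector(List<String> options) { this.options = options; }

    void next()     { focus = (focus + 1) % options.size(); displayFocus(); }
    void previous() { focus = (focus - 1 + options.size()) % options.size(); displayFocus(); }

    String select() {                          // confirm the current focus
        System.out.println("Selected: " + options.get(focus));  // the "notify Selected" feedback
        return options.get(focus);
    }

    private void displayFocus() {              // the "display focus" output action
        System.out.println("Focus: " + options.get(focus));
    }
}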

Implementation – Simulation
For the purposes of the nomadic application experiment, the selected devices have been simulated with four PDAs connected over a wireless LAN. Each device has been assigned its respective functional role(s) (i.e., computation device, I/O device, host device). All service components, except I/O resources, have been implemented in C++, building communication on top of TCP/IP sockets. The application, which needed to support migration and re-activation, has been developed in Java, still communicating via TCP/IP sockets with the various service components. Finally, the I/O resources have been implemented in Java, with simple Swing™ interfaces emulating the primitive I/O facilities offered by the real I/O devices.
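For illustration only, the following fragment shows the kind of socket-level exchange such an architecture implies on the Java side. The host, port and one-line text protocol are invented for the example, as the article does not specify the actual wire format:

import java.io.*;
import java.net.*;

// Hypothetical sketch: the Java application sending one command to a
// service component over TCP/IP and reading the reply.
class ServiceClient {
    public static void main(String[] args) throws IOException {
        try (Socket socket = new Socket("localhost", 9000);   // assumed service endpoint
             PrintWriter out = new PrintWriter(socket.getOutputStream(), true);
             BufferedReader in = new BufferedReader(
                     new InputStreamReader(socket.getInputStream()))) {
            out.println("PLAY track-01.mp3");                 // assumed text command
            System.out.println("Service replied: " + in.readLine());
        }
    }
}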

Evaluation
A small-scale iterative evaluation engaging three users has been performed, starting from the early design phases. Three compact evaluation sessions in total have been conducted, each resulting in considerable interface modifications and, in some cases, leading to global updates of the distributed dialogue policy. In the evaluation experiments the wireless network has been configured to detect hosts at a distance of less than five meters.

Figure 2: Device configuration of the music box

Figure 3: Abstract I/O Selector task


Users were given a scenario in which they were asked to move within the experimental set-up, so that dynamic device detection or loss of device connection could be triggered, in order to test dynamic I/O re-configuration. The results from the overall evaluation stage have been positive. The following usability issues emerged:

Focus task re-configuration. In the initial design, I/O re-configuration was applied to all active I/O tasks, including the focus I/O task. All users reported that this caused problems when I/O devices were detected or disconnected dynamically. The interface has been modified to update everything but the focus task, and to provide explicit feedback regarding the cause and the completion status of re-configurations. This proved to be a major improvement. However, users still reported that it would be more helpful if the system could optionally give them control to decide configurations manually.

Dialogue stalls causing confusion. A stall occurs when, for the focus I/O task, no dialogue instantiation can be selected because the required I/O resources are not available. This case was simulated only in the third evaluation session. During the loss of connection, users were confused as to what had happened and in what state the application remained internally. When they approached I/O resources again, they were informed that the dialogue had re-started. The conclusion was that running applications on host devices that do not have at least one output resource should be avoided, so that, upon a dialogue stall, explicit output feedback can be given to the user.

Use of concurrent feedback. In the initial design it was chosen to implement confirmation dialogues by concurrently utilizing different I/O resources, through I/O tasks called Messages. Although this was considered to be a good design approach, users were confused, since, in some configurations, the message was simply displayed at a distant device (two or three meters away), rather than at the device through which the menu dialogue was managed. This led to a re-design implementing such dialogues with a single Selector I/O task.

Loss of host device connection. This occurs when the user moves out of the wireless network range of the host device on which the application is currently running. If the user does not carry a host device, it is not possible for the application to "follow" the user. Moreover, even if the user does carry a host device, the application will never migrate automatically unless an explicit migration request is made on the hand-held device (of the remote application) before moving. Users reported that they would like applications to be smart enough to "follow them" in such situations, without an explicit request.

Textual input support. At the beginning of the design phase, the possibility of textual input was considered. However, since the device specifications posed for the experiment had to be minimal, the absence of a physical keyboard was assumed. Consequently, the only alternative solution was the design and implementation of a keyboard emulator, based on similar emulators in mobile phones. In the design produced, two main problems were encountered: the number of hardware buttons available is considerably smaller than in mobile phones (seven for the walkman device and the hi-fi system, and two for the wrist-watch device), and there was no output support. Several prototype designs with a limited number of buttons have been made, but early testing revealed rather poor interaction quality. This resulted in the decision to drop textual input from the music box application experiment and leave it for future work.

4 Conclusions
The nomadic Music Box supports context awareness and adaptation to context, as well as adaptation to the (dynamically changing) available devices, thereby addressing the diversity of contexts of use and technological platforms in the DC environment. Additionally, the adopted interface architecture and dialogue design easily allow additional target user groups to be catered for (e.g., disabled users such as blind or motor-impaired users). The nomadic Music Box therefore constitutes an example of a prototype application addressing some of the issues raised by Universal Access in the context of the DC.

From this experience, some considerations emerge concerning the new requirements that arise in the DC environment for the design, implementation and evaluation of user interfaces. A first prerequisite for the successful development of the DC environment is that user requirements are appropriately captured and analysed. These requirements are anticipated to be more subjective, complex and interrelated than in previous generations of technology. In particular, it is necessary to anticipate future computing needs in everyday life [22], and to acquire an in-depth understanding of the factors which will determine the usefulness of interactive artefacts in context. Therefore, HCI needs to develop appropriate approaches, and related instruments, for capturing human requirements in the new reality. The design of interactive artefacts is also likely to become a more complex and articulated process, requiring new methods, techniques and tools. This is due to the multifunctional nature of artefacts, and to the necessity of embedding interactive behaviours in everyday objects. It will therefore be necessary to investigate how interactive functions combine with other functions of artefacts, and how the design processes of the various functions interrelate [7]. In this context, new design principles, catering for the newly introduced design parameters, will be required. Furthermore, design approaches will need to take into account ways of dynamically combining the interaction capabilities of interconnected objects and devices [23].

The development of distributed interactive capabilities in a multitude of different devices, as well as in the surrounding technological infrastructure, will necessitate the elaboration of appropriate architectural frameworks and tools [24]. In particular, given the variety and differentiation of the co-existing and co-operating computing devices and the numerous dimensions of diversity intrinsic to the DC environment, one of the biggest challenges is the provision of support for the development of inherently self-adapting and scalable user interfaces. Novel evaluation and testing methods will be required, combining a user-centred approach with the capability of dealing with



context-awareness [25] and measuring how dynamic systems behave and adapt in context. As new technologies support a variety of everyday tasks, new metrics will probably be needed, such as helpfulness, engagement, fun, level of transparency, etc. In addition, as technology acquires human-like characteristics, new characteristics, such as politeness [26] and affectiveness [27], will be attributed to and expected from "user interfaces". Furthermore, tools that can support, to a certain degree, automatic evaluation against specific guidelines and principles (e.g., tools for working with guidelines [28]) may be particularly useful, since it will be difficult to "manually" control and keep track of all the relevant parameters of interaction, as well as to assess their impact on humans as users of the designed artefacts. It is expected that future testing and evaluation methods and activities will have to rely to a large extent upon the simulation of the interactive behaviour of single devices and of the overall technological environment.

References
[1] European Commission. The Disappearing Computer. Information Document, IST Call for proposals, February 2000. <ftp://ftp.cordis.lu/pub/ist/docs/fetdc-2.pdf>
[2] C. Stephanidis. Human-Computer Interaction in the age of the Disappearing Computer. In N. Avouris and N. Fakotakis (Eds.), "Advances in Human-Computer Interaction I", Proceedings of the Panhellenic Conference with International Participation on Human-Computer Interaction (PC-HCI 2001), Patras, Greece, 7–9 December 2001, pp. 15–22. Patras, Greece: Typorama Publications.
[3] C. Stephanidis, G. Salvendy, D. Akoumianakis, N. Bevan, J. Brewer, P. L. Emiliani, A. Galetsas, S. Haataja, I. Iakovidis, J. Jacko, P. Jenkins, A. Karshmer, P. Korn, A. Marcus, H. Murphy, C. Stary, G. Vanderheiden, G. Weber, J. Ziegler. Toward an Information Society for All: An International R&D Agenda. International Journal of Human-Computer Interaction, 10 (2), 107–134, 1998.
[4] C. Stephanidis, G. Salvendy, D. Akoumianakis, A. Arnold, N. Bevan, D. Dardailler, P. L. Emiliani, I. Iakovidis, P. Jenkins, A. Karshmer, P. Korn, A. Marcus, H. Murphy, C. Oppermann, C. Stary, H. Tamura, M. Tscheligi, H. Ueda, G. Weber, J. Ziegler. Toward an Information Society for All: HCI challenges and R&D recommendations. International Journal of Human-Computer Interaction, 11 (1), 1–28, 1999.
[5] C. Stephanidis. User Interfaces for All: New perspectives into HCI. In C. Stephanidis (Ed.), User Interfaces for All – Concepts, Methods and Tools, pp. 3–17. Mahwah, NJ: Lawrence Erlbaum Associates, 2001. ISBN 0-8058-2967-9.
[6] F. Mattern. The Vision and Technical Foundations of Ubiquitous Computing. UPGRADE, Vol. II, No. 5, 2001. <http://www.inf.ethz.ch/vs/res/proj/smartits.html>
[7] S. I. Hjelm. Designing the invisible computer from radio-clock to screenfridge. NIC 2001 – Nordic Interactive Conference, Copenhagen, 31 October – 3 November 2001.
[8] N. A. Streitz, P. Tandler, C. Müller-Tomfelde, S. Konomi. Roomware: Towards the Next Generation of Human-Computer Interaction based on an Integrated Design of Real and Virtual Worlds. In J. A. Carroll (Ed.), Human-Computer Interaction in the New Millennium, pp. 553–578. Addison-Wesley, 2001.
[9] P. Wyeth, D. Austin, H. Szeto. Designing Ambient Computing for use in the Mobile Health Care Domain. Proceedings of the CHI 2001 Workshop on Distributed and Disappearing UIs in Ubiquitous Computing. <http://www.teco.edu/chi2001ws/17_wyeth.pdf>
[10] R. Oppermann, M. Specht. Contextualised Information Systems for an Information Society for All. In "Universal Access in HCI: Towards an Information Society for All", Volume 3 of the Proceedings of HCI International 2001, New Orleans, Louisiana, USA, pp. 850–853. Mahwah, NJ: Lawrence Erlbaum Associates.
[11] A. K. Dey, P. Ljungstrand, A. Schmidt. Distributed and Disappearing User Interfaces in Ubiquitous Computing. Proceedings of the CHI 2001 Workshop on Distributed and Disappearing UIs in Ubiquitous Computing. <http://www.teco.edu/chi2001ws/disui.pdf>
[12] D. Salber, A. K. Dey, G. D. Abowd. The context toolkit: Aiding the development of context-enabled applications. In Proceedings of CHI '99, Pittsburgh, May 15–20, 1999, pp. 434–441. ACM Press.
[13] N. A. Streitz. Mental vs. Physical Disappearance: The Challenge of Interacting with Disappearing Computers. Proceedings of the CHI 2001 Workshop on Distributed and Disappearing UIs in Ubiquitous Computing. <http://www.teco.edu/chi2001ws/20_streitz.pdf>
[14] A. Schmidt. Implicit Human Computer Interaction Through Context. Personal Technologies, 4 (2&3), 191–199, June 2000. Springer-Verlag.
[15] M. Massink, G. Faconti. A reference framework for continuous interaction. UAIS, 1 (2002) 4, 237–251.
[16] T. Prante. Designing for Usable Disappearance – Mediating Coherence, Scope, and Orientation. Proceedings of the CHI 2001 Workshop on Distributed and Disappearing UIs in Ubiquitous Computing. <http://www.teco.edu/chi2001ws/23_prante.pdf>
[17] A. Schmidt, M. Beigl, H.-W. Gellersen. Sensor-based adaptive mobile user interfaces. Proceedings of the 8th International Conference on Human-Computer Interaction, München, Germany, August 1999. Volume 2, pp. 251–255.
[18] C. Stephanidis. Adaptive techniques for Universal Access. User Modelling and User-Adapted Interaction International Journal, 11 (1–2), 159–179, 2001.
[19] M. Moore, S. Rugaber, P. Seaver. Knowledge Based User Interface Migration. In Proceedings of the 1994 International Conference on Software Maintenance, Victoria, British Columbia, September 1994.
[20] J. Coutaz, G. Calvary. The Plasticity of User Interfaces, the Disappearing Computer and Situated Computing. Proceedings of the CHI 2001 Workshop on Research Directions in Situated Computing. <http://www.daimi.au.dk/~mbl/chi2000-sitcomp/pospapers.pdf>
[21] A. Zeidler. User Interface Design for Ubiquitous Computing: W@PNotes, an example. Proceedings of the CHI 2001 Workshop on Distributed and Disappearing UIs in Ubiquitous Computing. <http://www.teco.edu/chi2001ws/18_zeitler.pdf>
[22] G. D. Abowd, E. D. Mynatt. Charting Past, Present and Future Research in Ubiquitous Computing. ACM Transactions on Computer-Human Interaction, Special issue on HCI in the new Millennium, 7 (1), 29–58, March 2000.
[23] L. E. Holmquist, F. Mattern, B. Schiele, P. Alahuhta, M. Beigl, H. W. Gellersen. Smart-Its Friends: A Technique for Users to Easily Establish Connections between Smart Artefacts. Proceedings of UBICOMP 2001, Atlanta, GA, USA. <http://www.smart-its.org/publication/smart-its-friends.ubicomp2001.pdf>
[24] G. D. Abowd. Software Engineering Issues for Ubiquitous Computing. In Proceedings of the 21st International Conference on Software Engineering (ICSE '99), Los Angeles, CA, May 16–22, 1999.
[25] J. Scholtz. Evaluation Methodologies for Context-Aware Computing. Proceedings of the CHI 2001 Workshop on Distributed and Disappearing UIs in Ubiquitous Computing. <http://www.teco.edu/chi2001ws/19_scholz.pdf>
[26] A. Cooper. The Inmates are Running the Asylum: Why high-tech products drive us crazy and how to restore the sanity. SAMS, 1999.
[27] R. W. Picard. Affective Computing. MIT Press, Cambridge, 1997.
[28] J. Vanderdonckt. Development Milestones towards a Tool for Working with Guidelines. Interacting with Computers, 12 (2), 81–118, 1999.
[29] A. Savidis, N. Maou, I. Pachoulakis, C. Stephanidis. Continuity of interaction in nomadic interfaces through migration and dynamic utilization of I/O resources. UAIS, 1 (2002) 4, 274–287.


Customer Interaction Personalization: iSOCO Alize

Jesús Cerquides-Bueno, Enrique Hernández-Jiménez, Oscar Frías-Barranco, and Noyda Matos-Fuentes

User interaction personalization is a key factor in the design of electronic businesses. In this article we present a brief introduction to the concept of personalization, highlighting the difference between explicit and implicit personalization, as well as aspects such as confidentiality and security. After introducing the subject, we go on to present the Alize architecture, iSOCO's approach to personalization, and conclude with an explanation of its use within the framework of a virtual bookstore.

Keywords: Agents, Alize, e-Commerce, Framework, Personalization, Recommendation Systems.

1 Introduction
Recently we have been witnessing a profound change in the way we conduct business. The main driving force behind this change is the growing impetus coming from new technologies, which will make customer interaction personalization one of the key issues in the design of future electronic businesses. In Section 2 of this paper we present a short introduction to the concept of personalization, emphasizing the difference between explicit and implicit personalization together with confidentiality and security issues. In Section 3 we briefly describe Alize's agent-based architecture, the iSOCO personalization framework. Finally, in Section 4 we briefly describe the use of Alize in a virtual bookstore.

2 Personalization: Types and Related Aspects
Let us analyse a common act such as buying bread at the baker's or at our regular grocery store. After a personal greeting, the baker offers us a lightly toasted roll, according to our preference, because he knows our buying habits. Furthermore, since we are regular chocolate consumers, he lets us know that there is a promotion on chocolates, and that he will let us take them away and pay next week if we don't have the money on us. He even offers to deliver them himself if we don't want to carry them ourselves. An interaction such as the one described makes customers feel that they are receiving personal attention and creates the impression of a customized service.

The increased use of the Internet has enabled the growth of new business models that would have been unthinkable some years ago, based on the "virtuality" concept inherent to the network. We have witnessed the growth of electronic equivalents of the physical stores that we are used to dealing with. In the beginning, electronic stores were simply static catalogues showing the different products offered by the store. Technological advances, together with a conceptual evolution, have allowed this equivalence to grow, providing electronic businesses with the same key characteristics that make the purchase process easy and attractive in physical stores. Terms such as personalization, "one-to-one" marketing and Customer Relationship Management (CRM) are vital to this effort. The concept of personalization has been defined in several ways, but most definitions have in common the idea of providing the right content to the right person at the right moment and through the most suitable channel. In order to do this, the contribution of CRM tools, which allow the interaction with customers to be tracked through different channels (web, e-mail, telephone, …) in a coherent way, is of vital importance. Quoting Richard Dalzell1: "The key is to construct a single store which continuously changes to adapt to each individual customer". This suggests a transition from the conventional paradigm, as Don Peppers2 points out: "Forward-thinking companies are changing from the old world of finding customers for their products to the new one of finding products for their customers".


1. <http://amazon.com>
2. Peppers and Rogers Group Consulting.

Jesús Cerquides-Bueno graduated in Computer Science from the Universitat Politècnica de Catalunya (Barcelona, Spain) and was awarded the National End of Studies Prize in 1995. He has worked as an Associate Director in UBS's IT research laboratory in Zürich. He is currently Director of Technology at iSOCO, and is finishing his doctoral thesis on Artificial Intelligence. <[email protected]>

Enrique Hernández-Jiménez is an IT graduate from the Universitat Politècnica de Catalunya (Barcelona, Spain). He is currently completing his doctoral thesis in the field of Artificial Intelligence in the department of Computer Languages and Systems of the same university. His main area of interest is the study of automatic learning methods in uncertain domains. <[email protected]>

Oscar Frías-Barranco graduated in 1998 as a Higher Engineer in Telecommunications from the Universitat Politècnica de Catalunya (Barcelona, Spain) and has a Masters in Security of Information Technologies (2002) from La Salle (Barcelona). He has worked as a Software Engineer on several projects at Indra Espacio. He is currently a software engineer at iSOCO, where he has been working on the design and development of Alize.

Noyda Matos-Fuentes graduated in Computer Science from the University of La Habana (Cuba) in 1993. She is currently working as a Software Engineer at iSOCO. <[email protected]> <http://www.isoco.com>



2.1 Types of Personalization
This vision of personalization, seen as the adaptation of the contents on offer to the user, is broad enough to allow different levels of complexity. The interaction of a user with a particular business enables us to set up a wide range of information gathering processes which are of great value in finding out users' preferences, ranging from the compilation of data explicitly supplied by the users to complex inference mechanisms based on the study of their behaviour.

The first kind of personalization, which we could call explicit personalization, is the most primitive level. It consists of the extraction of personal data and relevant information by means of form filling and similar methods. Thus, starting with data such as the user's name and town of residence, we can implement personalized greeting protocols and offer information specific to that user, such as the weather forecast for the area where they live. The advantage of this kind of personalization is that the information taken as a basis for the adaptation is very reliable, because it has been explicitly given by the user. However, it is not very appealing, as it places the burden on the customer side. Furthermore, various studies show that users display a certain reluctance when faced with information extraction forms [1]. In this regard, it is important to stress that the results obtained through any personalization technique will be, at best, only as good as the available information. This assertion, while it may seem obvious, poses a dilemma: the more we pester users for relevant information, the more suitable our recommendations will be, but more forms will mean fewer users involved in the process. This paradox, known as the problem of accuracy against friction, prompts the need to build friendly mechanisms for information extraction, able to get round the aversion caused by too many forms. Some current trends call for gradual information capture methods, whereby information extraction mechanisms are set at particular points and spread out over time.

Another possibility, which we could call implicit personalization, involves the use of indirect information inferred from users' behaviour as they navigate through a website, or the use of information from demographic databases. This method, contrary to the classic model based on information given explicitly by the users, proposes using logs or navigational patterns as its main source of information. For example, we can infer that a user prefers sports news from his or her most visited links. Furthermore, if we have access to demographic information showing that such a user resides in a neighbourhood with a high average income, we can offer him or her products or services which would be more difficult to sell in other areas. In this case, the information is not directly introduced by the user; rather, it is inferred from the data we already have at our disposal. This change of perspective gives rise to the need for intelligent systems able to infer knowledge about a user from observed behaviour patterns.
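As a toy illustration of this kind of inference (and nothing more than that – real systems use far richer models), the following sketch derives a coarse content preference from the categories of a user's visited links. All names are invented:

import java.util.*;

// Toy sketch: infer the user's preferred content category from a click log.
class PreferenceInference {
    static String preferredCategory(List<String> visitedCategories) {
        Map<String, Integer> counts = new HashMap<>();
        for (String category : visitedCategories)
            counts.merge(category, 1, Integer::sum);        // count visits per category
        return counts.entrySet().stream()
                .max(Map.Entry.comparingByValue())          // the most visited category wins
                .map(Map.Entry::getKey)
                .orElse("unknown");
    }

    public static void main(String[] args) {
        List<String> log = List.of("sports", "news", "sports", "weather", "sports");
        System.out.println(preferredCategory(log));         // prints "sports"
    }
}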

At iSOCO we believe that the ability to learn behaviour patterns efficiently and effectively will determine the efficacy, and hence the success, of a personalization product. The ability to infer preferences from indirect sources (such as the preferences of users with a similar profile) by means of techniques such as collaborative filtering or the more general methods of case-based reasoning, the correct management of specific business knowledge through business rules, and data mining tools able to systematically analyse data and provide the user with new and useful information for a particular business are all aspects which we believe to be vital for the new generation of personalization products.

2.2 Related Aspects
One aspect to be taken into account in the design of any personalization strategy is that of data privacy and confidentiality. The adaptation of contents to user preferences involves the handling of users' personal data. But phenomena such as mail spamming or account capture make users reluctant to provide such information, which is why we need to create an environment of trust guaranteeing data security and confidentiality. Various studies back this up, such as the one carried out by CyberDialogue, which shows that 84% of users are reluctant to provide personal information if they do not know what it is going to be used for. It is therefore necessary to make users aware of the advantages of providing information by guaranteeing that it will be used appropriately and in a controlled manner. The same study states that 56% of users are more likely to buy from sites making use of personalization techniques, and 63% are more willing to register on those sites.

Security is by no means a minor issue, as is apparent from the emergence of various agencies (apart from the usual legislation) which are working on setting up codes of ethics to ensure confidentiality and the correct use of information.

There are also some recommendations and misconceptions concerning personalization. Personalization techniques are fallible (we should debunk the myth of the modern oracle), and the user may experience a considerable degree of frustration when given an unsuitable response, to the point where their trust may be undermined and customer loyalty may be jeopardized. Carol Parenzan Smalley in [2] tells of her surprise upon receiving personalized mail with offers to buy aluminium siding at a large discount and plant bulbs suitable for the climate where she lives. The "only" problem was that she lives in a cabin made completely out of wood in the Adirondack Mountains, an area to the north of New York usually covered in snow. Sometimes it is a good idea to adopt a conservative approach to the generation of contents in order to avoid making such huge errors, to have escape routes to non-personalized content, and to make it possible for users to modify their profile whenever their needs or interests change.

Some other recommendations should also be borne in mind, such as keeping the response time within acceptable limits, remembering that an excessive delay can sometimes be worse than an impersonal answer. And we should also remember that any information we can obtain about the user may be valuable,


and that knowing what they don't like can be just as important as knowing what they do like.

3 Alize Architecture
In this Section we describe the Alize agent-oriented architecture. As can be seen in Figure 1, the basic architecture comprises four different agents (a schematic sketch of their interfaces is given after the list):
• Customer Admitter Agent: manages user authentication.
• Customer Profiling Agent: manages the user profiles created from the information that the system acquires from users.
• Learning Agent: learns users' behaviour models from the data acquired by the Customer Profiling Agent.
• Output Selection Agent: selects the information to be presented to the user according to the user model and the user's profile.
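Purely as an orientation aid, the responsibilities above can be summarized as interfaces. The method signatures below are assumptions made for illustration, not the actual Alize API:

import java.util.*;

// Illustrative sketch of the four-agent pipeline (invented signatures).
class Profile { /* user attributes, defined via CPML */ }
class Model   { /* behaviour model learned off-line */ }

interface CustomerAdmitterAgent  { String authenticate(String credentials); }
interface CustomerProfilingAgent { Profile profileFor(String userId); }
interface LearningAgent          { Model learn(Collection<Profile> profiles); }
interface OutputSelectionAgent   { List<String> selectContent(Profile profile, Model model); }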

3.1 Customer Admitter Agent
The Customer Admitter Agent is responsible for recognizing and guaranteeing the identity of the users. More specifically, it carries out the following functions:
• Authenticates and authorizes users and customers. The Customer Admitter Agent communicates with the Customer Profiling Agent to load profiles from the database and to merge the profiles of anonymous and registered users.
• Assigns the user identifier. The Customer Admitter Agent is responsible for assigning identifiers, enabling information from new customers to be reused.
• Maintains information about user status. This enables differentiation between anonymous and identified users.

3.2 Customer Profiling Agent
The Customer Profiling Agent builds a static model of visitor behaviour in the form of a customer profile. To create customer profiles accurately, the Customer Profiling Agent takes into account:
• Current usage history. The click-stream (the sequence of clicks a user makes while browsing), together with the transaction history (purchases), has a clear influence on the user profile.
• Static customer models. These are created by means of a data mining process from the customer data available in the data warehouse and from the business knowledge provided by the site's head of marketing (which can be captured by means of a set of rules, for example). These models should subsequently be revised when the Customer Profiling Agent has captured enough information about the customers. This is the task of the Learning Agent.
• Demographic information. This includes preferences established by the user and also information allowing users to be classified into demographic or other kinds of groups.

Alize adapts the Customer Profiling Agent to the needs of a particular business by building two XML models:
• CPML. The Customer Profile Markup Language allows the description, by means of XML, of the user attributes that are needed for the personalization of the site.
• WPML. The Web Page Markup Language uses XML to describe the contents, and the rules used to update the user profile as he or she browses.

3.3 Learning Agent
The Learning Agent, by means of click-stream processing, recognizes, generalizes and classifies recurring behaviour patterns, and makes use of the transaction records to infer models enabling user profiles to be matched with the articles that might interest them, based on what the user has been accessing on the web site [3].

The Learning Agent uses information about web activity and performs the learning off line. It takes information about web browsing together with transactional information, performs the data mining operations, and generates a user behaviour model which is revised and completed by the site's head of marketing. The Learning Agent can suggest new rules based on the revised data. Once revised and approved, the new model is put into operation on the site. The various steps can be organized in a cycle, allowing data quality to improve continuously. The Learning Agent could be thought of as the website's "subconscious", since there is a parallel between the way it communicates with the site and the way the subconscious communicates with the brain. Thus, the implementation of the new model on which the Learning Agent and the marketing team have been working off line would be like the "light" we see when the solution to a problem suddenly springs to mind while we are not even thinking about it.


Figure 1: Interaction between Alize agents


3.4 Output Selection Agent
The site's products and services are stored in databases, each annotated with information describing their characteristics, so that it is possible to know what kind of customers might be interested in them. The Output Selection Agent selects products and services according to product information, customer profiles and the user model learned by the Learning Agent.

The Output Selection Agent allows different product selection strategies, such as:
• Best n. Deterministically selects the n products that best match the user profile.
• n out of the best m. Randomly selects n out of the m most suitable products for the user.
• Best n for my group. Uses collaborative filtering to select the n products or services most suitable to users whose profiles are similar to the current user's.
• n out of the best m for my group. Equivalent to "n out of the best m" but using collaborative filtering.

We can add a set of constraints on the products to any of these selections. Thus a particular page in a music store might be interested in showing only the best CDs of a certain genre (e.g. jazz) for a customer. This can be accomplished by means of a constrained best n query.
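To make the first two strategies concrete, here is a minimal sketch in Java. The scoring map stands in for whatever profile-matching function the real system uses, and all names are illustrative:

import java.util.*;
import java.util.stream.Collectors;

// Illustrative sketch of the "best n" and "n out of the best m" strategies.
class OutputSelector {

    // Deterministically pick the n products that best match the user profile.
    static List<String> bestN(Map<String, Double> scores, int n) {
        return scores.entrySet().stream()
                .sorted(Map.Entry.<String, Double>comparingByValue().reversed())
                .limit(n)
                .map(Map.Entry::getKey)
                .collect(Collectors.toList());
    }

    // Randomly pick n out of the m most suitable products.
    static List<String> nOutOfBestM(Map<String, Double> scores, int n, int m) {
        List<String> bestM = new ArrayList<>(bestN(scores, m));
        Collections.shuffle(bestM);
        return bestM.subList(0, Math.min(n, bestM.size()));
    }
}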

Usage Example: A Virtual BookstoreWe have recently seen the startup and subsequent demise

of many virtual bookstores. In such a competitive environmentas this, the ability to make personalized offers to customers canbe a differential factor affecting the very survival of these busi-nesses. Furthermore, due to the quantity of products and prod-uct information available, they provide an excellent testbed forany personalization architecture [4].

The clear categorization by subject, together with the taxonomical organization of the products, facilitates the task of reaching an intuitive definition of customer profiles. These profiles contribute a series of personalization-based improvements, such as showing customers the most promising new products given their particular profile, or suggesting a list of books that they might be interested in, taking into account those they have already bought. iSOCO has used a virtual bookstore as a testbed for the Alize agent-based personalization framework, which offers the desired functionality.


Figure 2: Alize demo bookstore


5 Conclusions

The Alize personalization framework constitutes iSOCO's bet in this field. Alize is a product for real-time implicit personalization whose design aims to bring together the ideas set out in this article. Since we are convinced that intelligent components are a key aspect, able to provide a competitive advantage over other products, the design has been specially geared towards the ease of incorporation of such components. As a closing remark, we would like to stress the role of personalization as a key factor if we are to realize the full potential of electronic commerce and make any inroads into the statistic that still only 2% of all visits result in a purchase.

References

[1] Personalization Consortium: Personalization and privacy survey. <http://www.personalization.org>, 2000.

[2] Carol Parenzan Smalley: Personalization: empowering customers? Maybe. <http://www.destinationcrm.com>, May 2001.

[3] Data Mining your Website. Digital Press, 1999.

[4] L. Ardissono, A. Goy, R. Meo, G. Petrone, L. Console, L. Lesmo, C. Simone, and P. Torasso: A configurable system for the construction of adaptive virtual stores. World Wide Web Journal (WWW), 1999.

Bibliography

Gad Barnea: Agents that communicate, learn, and predict: Intelligent agent communities. Web Techniques, 1999.

Robert Farrell: Capturing interaction histories on the web. In Proceedings of the 2nd Workshop on Adaptive Systems and User Modelling on the WWW.

Dan R. Greening: What marketers really want to know: Tracking users. Web Techniques, 1999.

Mauro Marinilli, Alessandro Micarelli, and Filippo Sciarrone: A case-based approach to adaptive information filtering for the WWW. In Proc. 7th Conference on User Modelling.

Mike Perkowitz and Oren Etzioni: Towards adaptive web sites: Conceptual framework and case study. In World Wide Web Journal (WWW).



A Web Voice Solution: ConPalabras

Carlos Rebate-Sánchez, Yolanda Hernández-González, Carlos García-Moreno, and Alicia Fernández del Viso-Torre

ConPalabras (Spanish for "with words") is a voice solution that enables your web pages to speak, either by synthesising messages embedded in the web page or by synthesising text documents stored at a remote location. ConPalabras is a voice plug-in which, when installed on the client side (via the Internet or some other way, such as by connection kits), makes it possible to integrate voice into a web site.

Keywords: Abrelapuerta, Accessibility, ConPalabras, Design for All, SALT, Soluziona, Speech, VISUAL, Voice, VoiceXML

1 Introduction

ConPalabras (Spanish for "with words") is a voice solution that makes web pages speak, either by synthesising messages embedded in the page or by synthesising text documents stored at a remote location.

ConPalabras is a voice plug-in which, when installed on the client side (via the Internet or some other way, such as by connection kits), makes it possible to integrate voice into a web site.

The novelty of ConPalabras compared with other methods of voice integration on the web lies in its flexibility and fluidity:

• ConPalabras' fluidity stems from the fact that speech synthesis is performed on the client side by circulating text documents (either HTML documents or VoiceXML documents – VoiceXML is an extension of the well-known XML language that includes some specific tags to handle speech functions).

• ConPalabras' flexibility is a result of the structured nature of these HTML and VoiceXML files, which enables operations such as dynamic construction, database extraction, etc.


Carlos Rebate-Sánchez was born in Navalmoral de la Mata (Spain) in 1974. He graduated as an IT Engineer at the Universidad de Extremadura (Spain), and received his doctorate in Advanced Artificial Intelligence from UNED (Spanish Distance Learning University). He is also a 3rd year Philosophy student. He was recently awarded a Diploma of Advanced Studies (research aptitude) in the doctorate programme "Advanced Artificial Intelligence: Symbolic and Connectionist Approaches". He has been working at Soluziona since 1998. He was joint winner of the 2002 Innowatio Prize for Innovation awarded by Unión Fenosa. He is currently the head of Accessibility Projects, with a broad knowledge of Speech Processing and Accessibility, and he has presented many of these projects at congresses, events and conferences. <[email protected]>

Yolanda Hernández-González was born in Orense (Spain) in 1971. She graduated as an IT Engineer at the Universidad de Deusto (Spain) and received her doctorate in Advanced Artificial Intelligence from UNED (Spanish Distance Learning University). She was recently awarded a Diploma of Advanced Studies (research aptitude) in the doctorate programme "Advanced Artificial Intelligence: Symbolic and Connectionist Approaches". She has been working at Soluziona since 1998. She was joint winner of the 2002 Innowatio Prize for Innovation awarded by Unión Fenosa. She is currently responsible for R&D in Accessibility, managing and coordinating proposals for national and international research programmes. <[email protected]>

Carlos García-Moreno was born in Madrid (Spain) in 1977 and graduated as a Technical Engineer in Computer Systems at the Universidad Politécnica de Madrid. He is currently studying the second cycle of IT Engineering with UNED (Spanish Distance Learning University). He has been working at Soluziona since 2000. He is an expert in web development and also in accessible website building and the use of the various tools used for the validation and repair of web page accessibility. He has wide experience in the development of distance learning web tools. He was invited to join AENOR's Technical Committee on Standardization, AEN/CTN 139/SN 8 "Systems and devices for elderly and disabled people". As a guest consultant he attended the Technical Conference on RD&I (11 and 12 June, 2002) organized by the Spanish Ministry of Work and Social Affairs to draft the White Paper on RD&I to benefit the disabled and the elderly. <[email protected]>

Alicia Fernández del Viso-Torre was born in Avilés (Spain) in 1974 and graduated as an IT Engineer at the University of Oviedo (Spain). She has obtained English and German language certificates from The Spanish Official School of Languages, plus the Cambridge First Certificate and Proficiency certificates in English. She has worked at Soluziona since 1999. She has a broad knowledge of speech processing and accessibility, and of client/server architectures, and is currently working on the VISUAL (Voice for Information Society Universal Access and Learning) project, leading a multidisciplinary and multilingual team in the development of an accessible authoring tool (VISUAL Tool) to generate accessible contents. <[email protected]> <http://www.soluziona.com>


2 ConPalabras

2.1 Technology

Human-computer interaction through voice commands is becoming ever more important. Voice is used to interact with our mobile phones and our operating systems, to surf the net, etc. Voice navigation is fast and natural and does not require the development of sophisticated visual interfaces.

With voice portals, a simple phone call gives us access to a large number of on-line services, such as e-mail, stock market information, electronic banking, news, etc.

The emergence of voice portals in recent years (2000, 2001 and 2002), particularly on the American market, has brought about a new form of interaction that will lead to – and in some cases has already led to – the replacement of traditional call centres by automatic speech interaction systems.

The publication of new standards such as VoiceXML, the heavy investment made in the market by companies such as Tellme Networks and Ydilo, and the development of new products such as the Motorola Mobile Application Developer's Kit or the Nuance Voice Site Staging Centre led us to believe that this technological model could be used on the web.

In theory it was an easy change: the aim was to "distribute" the model among the users of a particular web site. This "distributed web model" should work like a phone portal architecture. This new concept lies at the heart of ConPalabras.

ConPalabras' architecture is based on a speech plug-in (speech synthesis in version 1.0, and speech synthesis and recognition in version 2.0) that reads messages stored in the HTML page or on a remote server. Figure 2 describes ConPalabras' architecture.

This client/server architecture guarantees the flexibility and the fluidity of the interaction, since it enables dynamic generation of the messages to be synthesised by ConPalabras.


Figure 1: ConPalabras.com Web Site

Figure 2: Schema of ConPalabras Working Model


Both the synthesis and recognition engines are stored on the end user's PC, which means that no additional information travels through the channel, thereby guaranteeing fluid communication.

If we want an HTML page to speak, the ConPalabras object should be embedded in the page. The page can communicate with the object and react to any event. It is also possible to process remote files, which should be in VoiceXML format (only a small subset of VoiceXML tags is needed); indeed, any file having a structured format can be processed, such as files complying with the SALT specification for multimodal speech integration headed by Microsoft and Intel.

VoiceXML sparked the change in philosophy we mentioned before (from a centralized to a distributed model). It is a standard proposed by IBM, AT&T, Lucent and Motorola, approved by W3C on May 22, 2000. This initiative aims to create a new specification based on XML tags used for speech interaction. This new language is focused on voice portals and has a good chance of succeeding in the market, since it is a consolidated technology in a world where the tendency is to become ever more distributed, and where mobility and communications are becoming more and more important.

ConPalabras' architecture is designed to be flexible in two senses:

1) It can use synthesis and recognition engines from different vendors (the current version uses IBM's ViaVoice synthesis engine).

2) It can use different XML files and different grammar files.

This speech integration architecture for web sites opens up a wide range of possibilities in the world of human-computer interaction.

To have an HTML page "speak" using ConPalabras is very simple; you need only make a call to the plug-in installed on your computer indicating the message to be synthesised (or the location of the VoiceXML file), the voice properties to be used, and the event that will trigger the synthesis (on page load, on page download, on page refresh, on field focus, on image roll-over, etc.).

For instance, if we want ConPalabras to synthesise the message "We say it with words" when the user rolls over an image, we need to write the following code:

<img src="img/logo.jpg" onmouseover="ConPalabras.leerMensaje('We say it with words',1,2)">

If we want to use an XML file to store the messages to be synthesised, the code will appear as follows:

<?xml version="1.0" encoding="ISO-8859-1"?>
<vxml>
  <prompt>
    This paragraph is synthesised with default voice properties.
  </prompt>
  <prompt>
    <voice gender="1" category="5">
      This paragraph is synthesised with a male elderly voice.
    </voice>
  </prompt>
  <prompt>
    <voice gender="1" category="1">
      <prosody pitch="300" rate="500">
        This paragraph is synthesised with a male boyish voice, with a pitch of 300 and a rate of 500.
      </prosody>
    </voice>
  </prompt>
</vxml>

Figure 3: Sample of Voice Assistance in Forms

In this example, three messages are synthesised, each one with its own properties (gender, age, pitch, speed).

These examples demonstrate that speech integration is easy and flexible. Let us look at some practical uses of ConPalabras.

2.2 Application

The fact that a web page speaks to us should not be so surprising. We are used to living in a multimedia environment, and the web is nothing less than another "world". So, what are the advantages of having web pages that speak?

In our opinion, speech is a more natural, direct and effective way of interaction. By using speech capabilities we are adding value to an otherwise undervalued web communication channel that has not yet been fully exploited.

For instance, imagine that you arrive at one of the millions of existing horizontal portals; it would be useful to have ConPalabras telling you what the portal has to offer, the layout of the different sections, the latest news, advertisements, information, etc.

ConPalabras can be used in any web site wanting to offer a new way to access its contents: general-purpose horizontal portals, electronic banking, on-line education, portals for children, news portals, accessibility, etc.

ConPalabras has the added value of a "third dimension" element that complements text and images. Figure 3 shows the form-filling sample, in which, each time the user reaches a field, speech output tells the user what information must be entered in that field. Once the data is submitted, ConPalabras reads a summary of the input data and asks the user to confirm it. This sample highlights the possibilities opened up by speech integration in the world of electronic banking (help making transfers, deposits, etc.), on-line education, electronic administration, etc.

2.2.1 E-learning Portal with Speech

As we mentioned before, ConPalabras can be used in an eLearning environment. In this context, ConPalabras was the technology used for a project developed under the Technical Investigation Programme, partially funded by the Science and Technology Ministry: an eLearning portal with speech.

The goal of the project was to design a web environment in which didactic units, stories and games could be created with speech incorporated, making them more appealing to the student. The teacher is given an easy-to-use tool that helps him/her create learning units on four cross-curricular subjects (education for peace, consumer education, traffic education and environmental education), with the possibility of choosing the scenery, the characters, the dialogues and so on.

Figure 4: ConPalabras Stories

Figure 5: Sample of how to Create a Didactic Unit

2.2.2 VISUAL (Voice for Information Society Universal Access and Learning)

Voice is also an essential element for people with disabilities. In this field, Soluziona, along with seven European organizations, is leading a Research and Development project called VISUAL, which is partially funded by the European Commission as part of the Fifth Framework Programme within the special action "Information Society Technologies". ConPalabras is the technology used in VISUAL to achieve a new way of accessing information and eLearning contents in the Information Society.

Soluziona leads this project from Spain, in collaboration with the following partners: the Royal National Institute of the Blind, the leading charity offering information, support and advice to the over 2 million people in the UK with a serious sight problem; Unione Italiana Ciechi, an Italian charity for the blind; the Fédération des Aveugles et Handicapés Visuels de France, created to integrate visually impaired people into society; and the Deutscher Blinden- und Sehbehindertenverband e.V., which works on behalf of visually impaired people in Germany.

Others involved are City University (United Kingdom), a leader in the study of interactive systems; Katholieke Universiteit Leuven Research and Development (Belgium), renowned in the field of accessibility and in the production of accessible electronic documentation; and the European Blind Union, which is the voice of blind and partially sighted people in Europe.

The VISUAL project will create a Web authoring tool for both visually impaired and non-visually-impaired web designers who want to create accessible web content.

Another output of the VISUAL project is an eLearning portal, accessible in five European languages (Spanish, French, English, German and Italian), managed by the European organisations of visually impaired persons. This portal will centralize services and information devoted to the visually impaired community.

The main challenge facing Soluziona and its European partners in this project is that of making the social integration of visually impaired users in the world of the Internet a reality.

A self-accessible web authoring tool is currently being developed to create accessible web pages. The development team is making a great effort to follow accessibility and usability guidelines, enabling users to change the look and feel of the application (colours, font, size, etc.), voice interaction, internationalisation, etc.

Web pages created with VISUAL will not only be "designed for all"; they will also integrate voice as an alternative or as a complement to provide enhanced interaction.

Figure 6: VISUAL Web Site

Figure 7: Abrelapuerta Web Site


2.2.3 Abrelapuerta.com

Our knowledge of and interest in the field of accessibility solutions is reflected in the web site Abrelapuerta (<http://www.abrelapuerta.com>). This site groups together all the Accessibility and Usability projects developed by the Accessibility and Voice Solutions Department of Soluziona.

With regard to ConPalabras, a special effort has been made to analyse the advantages that this product can bring to people with disabilities: the visually impaired, people with learning or motor disabilities, or those with age-related impairments.

With this in mind, a new version of ConPalabras has been developed to endow our product with screen-reading technologies for use by visually impaired people.

3 Future

ConPalabras 1.0 (synthesis version) is available in English and Spanish for Microsoft Internet Explorer and Windows 95/98/2000/NT/ME/XP. There is also a beta speech recognition version, which allows voice navigation of web sites and the integration of speech into multimedia applications.

For many years speech experts had been saying that the days of keyboard devices were numbered, and that voice input would be their natural successor within a short period of time.

These prophecies have not come true, though it is a fact that every day new achievements are being made in this field: speech recognition engines are ever more reliable and offer a higher level of independence from the speaker, while speech synthesis engines provide a friendlier voice, closer to a human voice.

Little by little, voice interaction is becoming a reality. Perhaps it is still too soon to talk about doing away with our keyboards, but we could start by managing browsers or executing operating system commands by voice.

With ConPalabras we can use speech to gain access to information and navigate through a web site, a very attractive proposition that may change the world of human-computer interaction on the Internet in the coming years.
